WEBVTT

00:00.000 --> 00:02.565
>> Understanding risk.

00:02.565 --> 00:04.470
The learning objectives for

00:04.470 --> 00:06.825
this lesson are to
explore risk management,

00:06.825 --> 00:09.210
to define how to measure risk,

00:09.210 --> 00:12.150
and to explain the ways
to respond to risk.

00:12.150 --> 00:15.945
Let's get started.
Risk management.

00:15.945 --> 00:18.210
At the most basic level,

00:18.210 --> 00:20.370
risk management is identifying,

00:20.370 --> 00:23.760
assessing, and mitigating
vulnerabilities and threats.

00:23.760 --> 00:26.295
An easier way to
look at this is,

00:26.295 --> 00:28.020
every organization has

00:28.020 --> 00:29.415
something that's
important to them,

00:29.415 --> 00:31.620
they need to protect
it and they want to

00:31.620 --> 00:34.305
find out what they have
to spend to protect it.

00:34.305 --> 00:35.550
That's basically what
we're going to be

00:35.550 --> 00:37.275
talking about
throughout this lesson.

00:37.275 --> 00:39.010
What is important to us?

00:39.010 --> 00:41.810
How much is it going
to cost to protect it?

00:41.810 --> 00:44.990
What is the damage if something
were to happen to it?

00:44.990 --> 00:47.450
On top of that, every
organization has

00:47.450 --> 00:48.845
different types of risks

00:48.845 --> 00:50.705
and they will manage
that differently.

00:50.705 --> 00:52.820
There are different things
that are important to

00:52.820 --> 00:55.475
one organization that are
not important to another.

00:55.475 --> 00:57.575
It's important that we make

00:57.575 --> 00:59.720
sure that we've properly
identified what's

00:59.720 --> 01:02.450
important so that we can
later on come up with

01:02.450 --> 01:05.900
ways to manage the risk
to those valuable assets.

01:05.900 --> 01:07.550
But there are some
common frameworks

01:07.550 --> 01:09.310
we can use to help us with this.

01:09.310 --> 01:12.220
The first is the NIST risk
management framework.

01:12.220 --> 01:14.885
We can also use ISO 31000.

01:14.885 --> 01:17.970
We're going to discuss both
of these later in the lesson.

01:18.430 --> 01:20.870
There are five phases to

01:20.870 --> 01:23.180
the overall risk
management process.

01:23.180 --> 01:25.610
The first is where we start
with the identification

01:25.610 --> 01:28.120
of mission-critical
assets or functions.

01:28.120 --> 01:30.020
We go around the company
and make sure we've

01:30.020 --> 01:31.910
identified everything
that's important.

01:31.910 --> 01:33.470
Now it's critical
to make sure we're

01:33.470 --> 01:35.780
asking all the right people.

01:35.780 --> 01:37.850
I've gone into a
situation with a company,

01:37.850 --> 01:40.610
where the upper management

01:40.610 --> 01:43.290
let us know that these
things were important,

01:43.290 --> 01:44.570
but as we dug in and

01:44.570 --> 01:45.995
talked to
other departments,

01:45.995 --> 01:48.050
we had another department
tell us that this

01:48.050 --> 01:50.330
was important to them
only to find out

01:50.330 --> 01:52.370
that the upper management
wasn't aware of

01:52.370 --> 01:55.010
just how critical that was
to the overall business.

01:55.010 --> 01:56.720
You've got to make sure
that you're talking to

01:56.720 --> 01:59.690
everyone to capture all of
the important

01:59.690 --> 02:02.730
assets or functions

02:02.730 --> 02:04.490
of your company, so
that you know

02:04.490 --> 02:06.710
what you even need to
worry about for risk.

02:06.710 --> 02:08.465
From then, we move down to

02:08.465 --> 02:10.685
the identification of
known vulnerabilities.

02:10.685 --> 02:12.380
We have our assets or

02:12.380 --> 02:14.575
our functions, where
are they vulnerable?

02:14.575 --> 02:16.940
If we have a building that's in

02:16.940 --> 02:19.160
a typhoon area or
a hurricane area,

02:19.160 --> 02:22.040
and we don't have
any other way of

02:22.040 --> 02:23.750
restoring operations
if something

02:23.750 --> 02:25.700
were to happen, that's
a vulnerability.

02:25.700 --> 02:28.280
We need to make sure
that we have a way

02:28.280 --> 02:31.099
of planning for these
types of vulnerabilities.

02:31.099 --> 02:32.210
But it's critical that we

02:32.210 --> 02:33.410
know where all the
vulnerabilities

02:33.410 --> 02:36.580
are before we can even begin
addressing a plan for them.

02:36.580 --> 02:39.095
Then we move down to
the potential threats.

02:39.095 --> 02:41.075
What are the threats
to these assets?

02:41.075 --> 02:42.770
If we're a company
that's engaged in

02:42.770 --> 02:45.395
proprietary research that
will be valuable to someone,

02:45.395 --> 02:47.375
then you can bet that
corporate espionage

02:47.375 --> 02:50.150
or hacker attacks are
going to be a problem.

02:50.150 --> 02:52.550
We need to make sure we
make plans for those.

02:52.550 --> 02:55.265
Then we move to the analysis
of business impacts.

02:55.265 --> 02:56.945
If this thing were to happen,

02:56.945 --> 02:58.130
this thing that we're dreading,

02:58.130 --> 03:02.435
this vulnerability has
now been exploited,

03:02.435 --> 03:04.880
how is that going to
impact our business?

03:04.880 --> 03:07.250
Once we know all
of these things,

03:07.250 --> 03:09.200
we can begin to
figure out how to

03:09.200 --> 03:11.180
identify our risk responses.

03:11.180 --> 03:12.620
This is where we decide how

03:12.620 --> 03:15.090
we're going to handle that risk.

03:16.700 --> 03:19.445
How do you go about
measuring risk?

03:19.445 --> 03:21.770
Well, first we need
to define some terms.

03:21.770 --> 03:23.270
Risk is a measurement of

03:23.270 --> 03:25.760
the impact or the
consequence and

03:25.760 --> 03:27.725
the likelihood that a threat

03:27.725 --> 03:30.485
will exploit a vulnerability.

03:30.485 --> 03:32.524
We need to define likelihood.

03:32.524 --> 03:35.330
This is how likely
the threat is to occur.

03:35.330 --> 03:37.910
Are we worried that
a meteoroid is going

03:37.910 --> 03:40.250
to crash into our building?
That's not very likely.

03:40.250 --> 03:42.695
But again, if we're doing
proprietary research,

03:42.695 --> 03:45.410
that is very valuable,

03:45.410 --> 03:47.240
is it likely that a competitor

03:47.240 --> 03:48.665
is going to try to
steal that data?

03:48.665 --> 03:50.395
Yes, that's pretty likely.

03:50.395 --> 03:52.075
Then finally, the impact.

03:52.075 --> 03:55.530
If the risk happened, how
bad would it be for us?
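One common way to combine those two factors, likelihood and impact, into a single comparable number is to multiply simple ratings. The 1-to-3 scale below is an assumed illustration, not a formula from this lesson:

```python
# Hypothetical risk scoring: multiply likelihood and impact
# ratings (1 = low, 2 = medium, 3 = high). The scale is an
# assumption for illustration only.

def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact into one comparable number."""
    return likelihood * impact

# Meteoroid strike: unlikely (1) but severe (3).
print(risk_score(1, 3))  # prints 3

# Competitor stealing valuable research: likely (3) and severe (3).
print(risk_score(3, 3))  # prints 9
```

The higher score for the espionage scenario mirrors the lesson's point: a likely, high-impact threat deserves attention before an unlikely one.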

03:57.710 --> 04:01.590
There are different ways
of doing risk analysis.

04:01.590 --> 04:03.200
The first one we're
going to discuss is

04:03.200 --> 04:05.240
the quantitative risk analysis.

04:05.240 --> 04:06.545
When you hear quantitative,

04:06.545 --> 04:07.955
you need to think numbers.

04:07.955 --> 04:10.900
For risk, this usually
involves money.

04:10.900 --> 04:14.255
We start off with a
single loss expectancy.

04:14.255 --> 04:17.540
This is the cost of a single
event happening one time.

04:17.540 --> 04:19.375
For example, a server crash.

04:19.375 --> 04:23.300
If we look at the history of

04:23.300 --> 04:27.320
our servers and consider an
average server crash,

04:27.320 --> 04:29.765
we want to look at the
cost of that crash.

04:29.765 --> 04:32.485
What does it cost us
when that crash happens?

04:32.485 --> 04:35.465
Then we move to the
annual loss expectancy.

04:35.465 --> 04:36.980
This is adding all of

04:36.980 --> 04:38.735
those single-loss
events together

04:38.735 --> 04:40.370
over the course of a year.

04:40.370 --> 04:42.830
We hope that our server is not

04:42.830 --> 04:45.770
crashing more than once a
year or even once a year,

04:45.770 --> 04:48.215
but if it were to
crash multiple times,

04:48.215 --> 04:50.240
then what are those
costs together?

04:50.240 --> 04:52.685
Then we have our annual
rate of occurrence.

04:52.685 --> 04:54.080
This is how many times in

04:54.080 --> 04:57.180
a year does that
single event occur?

04:57.590 --> 05:00.320
There's a formula here
for you to be able

05:00.320 --> 05:02.690
to calculate this on
a test, and you might

05:02.690 --> 05:05.480
see some questions
where they give

05:05.480 --> 05:10.805
you numbers to calculate
based on this formula.

05:10.805 --> 05:16.420
The ALE equals the
SLE times the ARO.

05:16.420 --> 05:18.740
We're trying to
calculate what is

05:18.740 --> 05:20.329
the annual loss expectancy,

05:20.329 --> 05:22.130
which is expressed as a cost.

05:22.130 --> 05:26.420
All of those costs added
together are equal to

05:26.420 --> 05:29.015
the single loss expectancy times

05:29.015 --> 05:31.440
the annual rate of occurrence.
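That formula can be sketched in code; the dollar figures below are hypothetical examples, not numbers from the lesson:

```python
# ALE = SLE x ARO: the annual loss expectancy is the cost of one
# event (single loss expectancy) times the number of times that
# event occurs per year (annual rate of occurrence).

def annual_loss_expectancy(sle: float, aro: float) -> float:
    """Expected yearly loss from a recurring event, in dollars."""
    return sle * aro

# Hypothetical example: each server crash costs $4,000 (SLE) and
# we average 3 crashes per year (ARO).
print(annual_loss_expectancy(4_000, 3))  # prints 12000
```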

05:33.440 --> 05:35.900
But the SLE can be broken

05:35.900 --> 05:37.700
down further into
different parts.

05:37.700 --> 05:40.160
We can define the
asset value or the AV.

05:40.160 --> 05:42.395
How much is that asset worth?

05:42.395 --> 05:44.795
The exposure factor, or EF, is

05:44.795 --> 05:49.345
the portion of that asset,
as a percentage, that would be lost.

05:49.345 --> 05:51.080
An example would
be if a hurricane

05:51.080 --> 05:52.940
damaged half of our
corporate building,

05:52.940 --> 05:55.700
that would be an exposure
factor of 50 percent.

05:55.700 --> 06:02.845
Our SLE can be calculated
as SLE equals AV times EF.
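That breakdown can be sketched the same way, using the hurricane example; the asset value is a hypothetical figure:

```python
# SLE = AV x EF: the single loss expectancy is the asset value
# times the exposure factor (the fraction of the asset lost).

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """Cost of one occurrence of the loss event, in dollars."""
    return asset_value * exposure_factor

# Hypothetical example: a $1,000,000 building where a hurricane
# damages half of it (EF = 50 percent).
print(single_loss_expectancy(1_000_000, 0.50))  # prints 500000.0
```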

06:02.845 --> 06:06.619
Then we also have total
cost of ownership or TCO.

06:06.619 --> 06:09.260
This is all costs
associated with an asset,

06:09.260 --> 06:11.135
including the cost to operate it

06:11.135 --> 06:14.520
and maintain it over
its entire lifetime.

06:17.000 --> 06:21.260
We also have our return
on investment or ROI.

06:21.260 --> 06:23.060
This compares the cost of

06:23.060 --> 06:26.335
the item to the
benefits it provides.

06:26.335 --> 06:29.150
These next terms are very
important and you're likely to

06:29.150 --> 06:31.625
see questions on the
test about these.

06:31.625 --> 06:34.210
Mean time to recovery, MTTR.

06:34.210 --> 06:35.660
This measures the amount of

06:35.660 --> 06:38.450
time a device or a
service is down,

06:38.450 --> 06:40.400
how long from when it

06:40.400 --> 06:43.435
goes down to when it
is back up again?

06:43.435 --> 06:46.250
Then the mean time
between failure or

06:46.250 --> 06:49.025
MTBF is the lifespan
of a device,

06:49.025 --> 06:53.250
but also the amount of time
until a service goes down.
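Both metrics can be computed from an outage history. The outage log below is entirely hypothetical, just to show the arithmetic:

```python
# Hypothetical outage log: (hour_down, hour_restored) pairs,
# measured in hours from some starting point.
outages = [(10, 12), (50, 51), (90, 94)]

# MTTR: mean time to recovery, the average length of an outage.
mttr = sum(up - down for down, up in outages) / len(outages)

# MTBF: mean time between failures, the average uptime from one
# restoration to the next failure.
gaps = [outages[i + 1][0] - outages[i][1] for i in range(len(outages) - 1)]
mtbf = sum(gaps) / len(gaps)

print(round(mttr, 2))  # prints 2.33
print(mtbf)            # prints 38.5
```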

06:54.410 --> 06:58.070
Then there's gap analysis,
which measures the difference

06:58.070 --> 07:01.400
between the current state
and the desired state.

07:01.400 --> 07:06.290
By creating metrics such
as ALE, MTTR, MTBF,

07:06.290 --> 07:08.570
and TCO, an organization can

07:08.570 --> 07:11.330
evaluate where they stand
and make improvements.

07:11.330 --> 07:15.770
We look at our historical
MTTR and our MTBF,

07:15.770 --> 07:17.750
and we decide this isn't

07:17.750 --> 07:19.610
good enough and we
need to improve.

07:19.610 --> 07:22.160
By calculating those metrics,

07:22.160 --> 07:25.430
we can get numbers that we
can use to help move us

07:25.430 --> 07:30.390
towards that goal line
by using a gap analysis.
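A gap analysis over those metrics can be sketched as a simple comparison; the current and desired values here are hypothetical:

```python
# Hypothetical gap analysis: compare current metrics against
# desired targets; the gap shows how far we have to improve.
current = {"MTTR_hours": 6.0, "MTBF_hours": 300.0}
desired = {"MTTR_hours": 2.0, "MTBF_hours": 500.0}

for metric in current:
    gap = desired[metric] - current[metric]
    print(f"{metric}: gap of {gap}")
# prints:
# MTTR_hours: gap of -4.0   (downtime must shrink by 4 hours)
# MTBF_hours: gap of 200.0  (uptime must grow by 200 hours)
```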

07:33.149 --> 07:36.955
>> There are some issues with
quantitative risk analysis.

07:36.955 --> 07:39.610
It's difficult to perform
when the value of

07:39.610 --> 07:42.595
an asset or the components
cannot be easily determined.

07:42.595 --> 07:43.840
Sometimes it's hard for us to

07:43.840 --> 07:46.270
do, especially with intangibles.

07:46.270 --> 07:48.640
But it does offer
an effective way of

07:48.640 --> 07:51.025
describing the assets
in an organization,

07:51.025 --> 07:53.410
what the organization
actually has,

07:53.410 --> 07:56.545
and then the risks that are
associated with those assets.

07:56.545 --> 07:59.740
It can be used to help
decision-makers by

07:59.740 --> 08:01.030
providing good
information so they

08:01.030 --> 08:02.800
can plan where to
place the money,

08:02.800 --> 08:07.040
where they need to spend
to lower the risk.

08:07.890 --> 08:11.185
A qualitative risk analysis

08:11.185 --> 08:14.725
evaluates risk through
words, not numbers.

08:14.725 --> 08:17.545
Keep in mind: quantitative means
numbers, qualitative means words.

08:17.545 --> 08:19.675
It's very subjective and

08:19.675 --> 08:22.645
this is especially so when
compared to quantitative.

08:22.645 --> 08:24.550
It works well for
the assets that are

08:24.550 --> 08:27.055
intangible such as
brand or reputation.

08:27.055 --> 08:28.960
But it requires a
lot of input from

08:28.960 --> 08:30.580
other departments such
as your marketing,

08:30.580 --> 08:34.130
your sales, and your corporate
communications teams.

08:35.070 --> 08:38.035
How do we respond to risk?

08:38.035 --> 08:41.035
The first thing we can
do is to avoid it.

08:41.035 --> 08:43.855
This means stop doing
whatever is causing the risk.

08:43.855 --> 08:47.590
It doesn't mean ignoring
the risk. We can accept it.

08:47.590 --> 08:49.615
This means that if
the risk happens,

08:49.615 --> 08:51.730
it's not worth the
cost to prevent it.

08:51.730 --> 08:53.590
If it happens, we contain it,

08:53.590 --> 08:57.115
because it's cheaper to contain
it than it is to prevent it.

08:57.115 --> 08:59.275
Next, we can mitigate the risk.

08:59.275 --> 09:01.150
This is the process of lowering

09:01.150 --> 09:03.130
the possibility that
the risk will occur.

09:03.130 --> 09:05.050
Usually mitigating controls help

09:05.050 --> 09:07.195
to lower the chance
of a risk occurrence.

09:07.195 --> 09:09.610
Then finally, we can
transfer the risk.

09:09.610 --> 09:11.935
This means giving the
risk to a third party.

09:11.935 --> 09:14.170
This is usually done by
purchasing insurance.

09:14.170 --> 09:17.980
I've got some good examples

09:17.980 --> 09:19.540
here to help you
understand the differences

09:19.540 --> 09:22.465
between the different
types of risk responses.

09:22.465 --> 09:24.760
The company has a
software application and

09:24.760 --> 09:26.455
the manufacturer has
gone out of business.

09:26.455 --> 09:27.790
A lot of vulnerabilities have

09:27.790 --> 09:29.425
been discovered in the software.

09:29.425 --> 09:31.660
To avoid the risk
would be to stop using

09:31.660 --> 09:34.030
the software altogether
and find a replacement.

09:34.030 --> 09:35.890
To accept the risk means deciding that

09:35.890 --> 09:37.810
if the vulnerabilities
are exploited,

09:37.810 --> 09:41.275
the damage won't exceed the
cost to replace the software.

09:41.275 --> 09:43.870
If we find another software to

09:43.870 --> 09:46.375
replace it and the cost is $50,000,

09:46.375 --> 09:48.490
but through our
calculations we discover

09:48.490 --> 09:50.965
that if the current software

09:50.965 --> 09:52.440
that we're using is

09:52.440 --> 09:55.755
exploited, the cost
to us is only $10,000,

09:55.755 --> 09:57.060
then it doesn't
really make sense to

09:57.060 --> 09:58.590
spend $50,000 to move to

09:58.590 --> 10:00.420
a different software platform

10:00.420 --> 10:03.365
when we can
just accept the risk.
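That accept-versus-replace comparison can be sketched directly, using the $50,000 and $10,000 figures from the example:

```python
# Accept-vs-replace decision from the example above: accept the
# risk when avoiding it costs more than the expected loss.
replacement_cost = 50_000            # cost to move to new software
expected_loss_if_exploited = 10_000  # estimated damage if exploited

if expected_loss_if_exploited < replacement_cost:
    print("accept the risk")  # prints "accept the risk"
else:
    print("avoid: replace the software")
```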

10:03.365 --> 10:05.980
We can mitigate
the risk by using

10:05.980 --> 10:08.755
various security products to
help harden the application.

10:08.755 --> 10:11.450
We can isolate it on its
own air-gapped network.

10:11.450 --> 10:13.230
Then we can transfer the risk by

10:13.230 --> 10:14.850
purchasing insurance
that would cover

10:14.850 --> 10:16.680
the company in the event that we

10:16.680 --> 10:19.570
were breached because
of this software.

10:20.450 --> 10:24.110
Let's talk about inherent
and residual risk.

10:24.110 --> 10:26.980
Inherent risk means that
everything in life

10:26.980 --> 10:30.265
carries some level of
risk; it is built in.

10:30.265 --> 10:33.100
Having any publicly
accessible servers

10:33.100 --> 10:35.110
creates the potential
for an attack.

10:35.110 --> 10:38.755
This is a risk included
with offering any service.

10:38.755 --> 10:42.220
Mitigating controls address
that by lowering the risk.

10:42.220 --> 10:44.320
Residual risk is once we've

10:44.320 --> 10:46.285
done all our
mitigating controls,

10:46.285 --> 10:47.545
everything has been applied,

10:47.545 --> 10:51.415
whatever is left over after
that is our residual risk.

10:51.415 --> 10:53.890
Risk appetite is the level of

10:53.890 --> 10:55.630
residual risk that is

10:55.630 --> 10:57.760
acceptable for a
given organization.

10:57.760 --> 10:59.530
This is basically you

10:59.530 --> 11:02.890
deciding how much you're
willing to put up with.

11:02.890 --> 11:05.860
After you've done
enough controls and

11:05.860 --> 11:06.985
you really don't feel like it's

11:06.985 --> 11:08.800
cost-effective to
spend any more,

11:08.800 --> 11:10.330
then you're accepting it.
At that point,

11:10.330 --> 11:11.560
you've mitigated it as far

11:11.560 --> 11:12.970
as you can go, and now you

11:12.970 --> 11:14.410
have to accept what's left over.

11:14.410 --> 11:15.895
This is your risk appetite.

11:15.895 --> 11:18.520
Different organizations will
have different risk appetites.

11:18.520 --> 11:20.890
You'll see some that
fly by the seat of

11:20.890 --> 11:24.415
their pants and don't have
quality backups in place.

11:24.415 --> 11:26.365
Obviously, they have a
very high risk appetite,

11:26.365 --> 11:30.080
but that has to be decided
for each organization.

11:31.470 --> 11:34.240
Risk exceptions.

11:34.240 --> 11:36.490
If a risk cannot be mitigated or

11:36.490 --> 11:38.830
another risk response cannot
be applied, for example,

11:38.830 --> 11:40.690
it can't be transferred or

11:40.690 --> 11:43.300
avoided, then a risk
exception can be used.

11:43.300 --> 11:45.730
However, this should
not be done lightly.

11:45.730 --> 11:48.025
When you're doing this
you're basically saying,

11:48.025 --> 11:49.210
we can't do anything about it.

11:49.210 --> 11:51.580
We're going to keep performing
the risky activity.

11:51.580 --> 11:53.620
But we think we have

11:53.620 --> 11:56.470
a legitimate reason as
to why we're doing this.

11:56.470 --> 11:58.090
When you do that,
you need to have

11:58.090 --> 12:00.670
a complete description of
the risk and then document

12:00.670 --> 12:02.950
the rationale for
the decision you

12:02.950 --> 12:05.575
made for the risk exception.

12:05.575 --> 12:07.930
You need the signatures
of all those making

12:07.930 --> 12:10.555
the decision, documented
together with all of this.

12:10.555 --> 12:12.310
This is especially
important when it comes

12:12.310 --> 12:14.335
to compliance
frameworks like HIPAA.

12:14.335 --> 12:16.360
In the next slide I'll
go into an example

12:16.360 --> 12:18.460
where that happened for me.

12:18.460 --> 12:21.280
But when you're basically

12:21.280 --> 12:23.920
saying we're not going to do
anything about this risk,

12:23.920 --> 12:25.780
you want to make sure
that all the people

12:25.780 --> 12:27.700
that are making that
decision have documented

12:27.700 --> 12:29.965
and signed off
on it, because

12:29.965 --> 12:31.060
that's the kind of
thing that could

12:31.060 --> 12:33.650
potentially come back
and bite you one day.

12:34.260 --> 12:37.210
Instructor side note.
I mentioned HIPAA,

12:37.210 --> 12:40.075
but risk is a key part
of HIPAA regulations.

12:40.075 --> 12:42.085
From my experience,
many practices

12:42.085 --> 12:44.065
either don't have the
financial capacity

12:44.065 --> 12:46.000
or the desire to do the things

12:46.000 --> 12:48.865
that are necessary to
protect patient data.

12:48.865 --> 12:50.920
They will take the response

12:50.920 --> 12:52.210
of sticking their
head in the sand,

12:52.210 --> 12:54.160
which basically
assumes that

12:54.160 --> 12:56.950
if you pretend the risk isn't
there, it just goes away.

12:56.950 --> 12:59.800
It's surprising how many people

12:59.800 --> 13:01.690
take that attitude
about cybersecurity,

13:01.690 --> 13:02.860
because cybersecurity is

13:02.860 --> 13:04.495
generally not
something you can see.

13:04.495 --> 13:06.070
It's not like someone walking up

13:06.070 --> 13:07.660
to you and pointing
a gun at you.

13:07.660 --> 13:11.095
Things are happening where
attackers are coming in

13:11.095 --> 13:12.730
and stealing data and it may be

13:12.730 --> 13:14.650
months or years before
that's ever found.

13:14.650 --> 13:15.970
But because it's not seen,

13:15.970 --> 13:17.860
we don't put importance on it.

13:17.860 --> 13:21.280
I've seen many providers
or physicians that

13:21.280 --> 13:22.750
will create wild exceptions

13:22.750 --> 13:24.565
for why they don't
want to do something.

13:24.565 --> 13:27.280
They're trying to create
documentation, or at least

13:27.280 --> 13:29.620
they're trying to go that
far, but I can tell you this:

13:29.620 --> 13:32.125
often the government

13:32.125 --> 13:35.650
agencies that are responsible
for investigating this,

13:35.650 --> 13:37.375
like the Health and Human
Services department,

13:37.375 --> 13:38.890
don't take
kindly to this,

13:38.890 --> 13:40.600
and fines can be very expensive.

13:40.600 --> 13:42.550
But the key to remember
is risk doesn't

13:42.550 --> 13:44.350
go away just because
we don't like it or

13:44.350 --> 13:48.565
we pretend it's not
there. Let's summarize.

13:48.565 --> 13:51.655
We went over risk
management and we discussed

13:51.655 --> 13:53.710
the ways we can
measure risk with

13:53.710 --> 13:56.170
quantitative or
qualitative analysis.

13:56.170 --> 13:59.260
We went over the different
risk responses and we

13:59.260 --> 14:01.270
discussed inherent
and residual risk

14:01.270 --> 14:03.025
along with risk exceptions.

14:03.025 --> 14:05.320
Let's do some example questions.

14:05.320 --> 14:09.205
Question 1. This is the
amount that would be lost

14:09.205 --> 14:13.580
over a year based on the
sum total of all SLEs.

14:13.740 --> 14:16.915
Annual loss expectancy or ALE.

14:16.915 --> 14:19.000
Keep in mind some of the
questions on the test are

14:19.000 --> 14:21.130
going to be worded
exactly like this,

14:21.130 --> 14:22.750
where they're going
to use those acronyms

14:22.750 --> 14:23.980
instead of spelling it out.

14:23.980 --> 14:25.750
You need to make sure
you know these acronyms

14:25.750 --> 14:27.160
because that's how
they could try to

14:27.160 --> 14:31.135
trip you up on some of these
questions. Question 2.

14:31.135 --> 14:33.805
When using this type
of risk analysis,

14:33.805 --> 14:37.700
words are used to describe
the risks and their impacts.

14:37.980 --> 14:40.795
Qualitative risk analysis.

14:40.795 --> 14:42.940
Remember, qualitative uses

14:42.940 --> 14:45.830
words, quantitative
uses numbers.

14:46.230 --> 14:49.360
Question 3, how
long from when

14:49.360 --> 14:52.120
an asset goes down to
when it is restored?

14:52.120 --> 14:55.460
What is the definition for this?

14:55.680 --> 14:59.860
Mean time to recovery, or MTTR.

14:59.860 --> 15:01.960
Finally, Question 4.

15:01.960 --> 15:05.170
When the cost of mitigating
a risk is more than the cost

15:05.170 --> 15:09.145
of the risk occurring, this type
of risk response is used.

15:09.145 --> 15:13.010
Acceptance. I hope that
gave you a good overview

15:13.010 --> 15:14.630
of risk because we're going to

15:14.630 --> 15:17.240
use these a lot in
the next lessons.

15:17.240 --> 15:19.685
If you need to go back
and look at it again,

15:19.685 --> 15:21.470
make sure you understand
those formulas,

15:21.470 --> 15:23.090
make sure you understand
those terms and

15:23.090 --> 15:26.390
those risk responses. I'll
see you in the next lesson.

