WEBVTT

00:00:01.050 --> 00:00:04.059
Of course, we want to try to prevent incidents,

00:00:04.180 --> 00:00:06.350
but we have to be ready for when they happen.

00:00:06.720 --> 00:00:08.950
We need to detect the incidents,

00:00:08.960 --> 00:00:13.020
and this can come through the use of various tools and technologies that,

00:00:13.030 --> 00:00:18.310
for example, detect a change in behavior on a system, network, or user,

00:00:18.900 --> 00:00:21.960
looking for signatures of known types of attacks,

00:00:21.960 --> 00:00:25.200
we often see this with, for example, malicious code,

00:00:25.360 --> 00:00:26.730
heuristics,

00:00:26.740 --> 00:00:31.740
which is a type of artificial intelligence that tries to learn when

00:00:31.740 --> 00:00:34.640
there is something that maybe is undesirable.

00:00:35.290 --> 00:00:39.440
We use alarms because they can notify us if something has gone

00:00:39.440 --> 00:00:43.970
wrong, and an alert can come in that allows us, our employees,

00:00:43.980 --> 00:00:48.840
customers, and suppliers to be aware of a problem and,

00:00:48.850 --> 00:00:53.040
of course, communicate that with these outside parties as well.

00:00:53.920 --> 00:00:57.830
One of the things that can be important is to do audits and reviews

00:00:57.990 --> 00:01:01.330
of how well we've handled incidents in the past.

00:01:01.620 --> 00:01:03.370
What are the things we could learn?

00:01:04.580 --> 00:01:06.530
When it comes to incident detection,

00:01:06.530 --> 00:01:09.800
the first line of defense is often the help desk.

00:01:10.080 --> 00:01:12.880
They are the first people who become aware of people

00:01:12.880 --> 00:01:14.980
calling in and saying they're having a problem.

00:01:15.570 --> 00:01:16.850
They have trouble tickets,

00:01:16.850 --> 00:01:20.100
and we should look for trends and patterns in the types

00:01:20.100 --> 00:01:21.880
of problems that people are having.

00:01:22.440 --> 00:01:25.770
Alerts come in from our various monitoring systems,

00:01:25.770 --> 00:01:28.670
maybe a security information and event management (SIEM) system,

00:01:28.670 --> 00:01:29.360
for example.

00:01:30.520 --> 00:01:35.980
But when something goes wrong, our first priority must always be life safety,

00:01:36.410 --> 00:01:39.970
looking after our employees, customers, and certainly,

00:01:39.970 --> 00:01:42.090
the community around us as well.

00:01:43.270 --> 00:01:47.630
But then we have to do some analysis of the incident.

00:01:47.960 --> 00:01:51.260
The analysis of the incident should lead to a classification.

00:01:51.650 --> 00:01:54.960
Is this really an incident or is it just noise?

00:01:55.330 --> 00:01:59.570
It's not really serious, and it's what we could call a false positive,

00:01:59.870 --> 00:02:01.500
or is it a true positive?

00:02:01.510 --> 00:02:06.090
This is an incident and something we need to then immediately take action on.

00:02:07.300 --> 00:02:11.730
The identification of it as a real incident should lead to the

00:02:11.730 --> 00:02:15.500
classification of whether or not this is just a minor problem.

00:02:15.760 --> 00:02:20.990
Is it serious, or even catastrophic, something that could affect the whole organization,

00:02:21.160 --> 00:02:22.040
for example.

00:02:23.530 --> 00:02:25.570
Depending on the classification,

00:02:25.770 --> 00:02:28.600
we can determine whether or not it's just an internal

00:02:28.600 --> 00:02:31.650
problem or something has come from outside.

00:02:32.340 --> 00:02:37.210
Was it something that was done intentionally or just accidentally?

00:02:38.090 --> 00:02:42.670
And then, of course, we activate the appropriate response teams.

00:02:43.210 --> 00:02:47.290
If it's a minor incident, maybe just a few people are involved,

00:02:47.430 --> 00:02:49.080
but if it's catastrophic,

00:02:49.090 --> 00:02:53.190
it could be that we activate teams right up to the senior management

00:02:53.190 --> 00:02:56.670
level and even our public relations group as well.

00:02:58.350 --> 00:03:03.200
We want to contain incidents so we can limit the bad

00:03:03.200 --> 00:03:05.950
effects or adverse impact of the incident,

00:03:06.280 --> 00:03:09.050
and often, we'll do this through things like isolation.

00:03:09.340 --> 00:03:13.390
If we have a system that's infected, we disconnect it from the network.

00:03:13.820 --> 00:03:16.740
In the case of a fire, we close fire doors,

00:03:16.960 --> 00:03:19.680
we disable network connections,

00:03:19.830 --> 00:03:24.840
we put a system into quarantine so we can examine and see what's going on.

00:03:25.520 --> 00:03:28.230
And then quite often, this is where we'll use a sandbox.

00:03:28.230 --> 00:03:31.730
A sandbox means we put, for example,

00:03:31.910 --> 00:03:36.230
malware or an infected machine into a secure environment

00:03:36.240 --> 00:03:39.290
where we can watch its execution and see

00:03:39.290 --> 00:03:43.280
what it's trying to do, but it is confined to that area,

00:03:43.290 --> 00:03:48.300
often a virtual machine, so it can't infect or spread to other systems.

00:03:48.840 --> 00:03:50.370
One of the things we often will do is

00:03:50.430 --> 00:03:53.740
power the system down, which gives us a chance to

00:03:53.740 --> 00:03:57.670
stop it from continuing to generate whatever type of

00:03:57.680 --> 00:03:59.840
malicious activity it's doing.

00:04:01.000 --> 00:04:04.570
Then sometimes, in a minor incident, we might just monitor.

00:04:04.950 --> 00:04:05.560
For example,

00:04:05.560 --> 00:04:10.570
we have things like honeypots where we can try to watch the type of activity,

00:04:10.830 --> 00:04:13.880
or we see something that's going wrong and it's not

00:04:13.880 --> 00:04:15.610
something which is spreading quickly,

00:04:15.660 --> 00:04:19.050
but we can monitor so we can see whether or not there is

00:04:19.050 --> 00:04:22.170
something going on and how that is developing.

00:04:22.180 --> 00:04:24.400
We can learn maybe some of the behavior,

00:04:24.400 --> 00:04:27.090
the tools and techniques of the attacker.

00:04:28.720 --> 00:04:32.560
Some of the considerations when we want to contain or stop something from

00:04:32.560 --> 00:04:35.980
spreading depends on whether or not this is a critical system.

00:04:36.340 --> 00:04:39.590
If this is a system that is critical to business operations,

00:04:39.590 --> 00:04:42.040
maybe I can't power it down or isolate it.

00:04:42.880 --> 00:04:45.080
We also have to look at whether this is something that's going to

00:04:45.080 --> 00:04:47.570
spread or is it something which is just,

00:04:47.570 --> 00:04:48.170
for example,

00:04:48.170 --> 00:04:52.340
in one area and not going to start infecting other systems or networks.

00:04:53.420 --> 00:04:55.250
As we also said, in some cases,

00:04:55.250 --> 00:05:00.070
we'll allow an attack to continue because we're trying to gather evidence,

00:05:00.170 --> 00:05:03.890
we're trying to learn what's actually been going on so hopefully

00:05:03.890 --> 00:05:07.190
we can improve our response and protection.

00:05:08.510 --> 00:05:09.930
Now, a review of the key points.

00:05:10.840 --> 00:05:13.700
Incident management starts with preparation,

00:05:14.150 --> 00:05:18.390
but then follows up with the ability, the watchfulness,

00:05:18.550 --> 00:05:21.540
to detect any type of incident.

00:05:22.090 --> 00:05:26.980
Then we need to classify the incident so we can respond appropriately.
