Hello. In this video we will create a very simple single-perceptron model to classify flower species based on their petal length and petal width.

We will be using sklearn to create this single-perceptron model. In the later part of this course we will use Keras and TensorFlow to create multi-layer perceptron models. Sklearn is a very popular machine learning library for Python; it is the go-to library for creating regression, classification, decision tree or SVM models. We have separate lectures on all of these machine learning models, so if you are interested in learning any one of them, you can go ahead and check out the courses on regression, classification, decision trees and SVM.

So let's start. First we will import numpy and pandas. Now, if you have installed Anaconda there is no need to install sklearn separately; you just have to import it. But in case you are facing any error importing sklearn, you can install it using pip or conda install. So just run the command pip install scikit-learn in your command prompt. You can open the command prompt by pressing Windows + R, typing cmd in the Run box and hitting enter; there, in the command prompt, you can write this command and execute it to install scikit-learn. Or else you can run the same command directly inside the Jupyter notebook, and that will also install scikit-learn for you.

Once you have sklearn installed, we will first load the Iris data. There are various CSV files out there, and you could also import one of those CSV files to load this data, but since sklearn comes with some predefined datasets, we will load our Iris data from there. So just write from sklearn.datasets import load_iris, and then we save the loaded Iris data into the variable iris. Just run this.

Now let's take a look at our data. You can see there are four columns: sepal length, sepal width, petal length and petal width. As I said earlier, this is data on different types of flowers. There are three different species: setosa, versicolor and virginica, and for each of these flowers we have their sepal length, sepal width, petal length and petal width. So here we have these four variables, and the species category is stored in iris.target.
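As a rough sketch of the steps described so far (the notebook cells shown on screen may differ slightly; numpy and pandas are imported as in the lecture even though they are not strictly needed for loading the data):

# If scikit-learn is missing, install it first (not needed on a full Anaconda install):
#   pip install scikit-learn        <- in a command prompt
#   !pip install scikit-learn       <- directly in a Jupyter notebook cell

import numpy as np
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()                  # bundled Iris dataset, no CSV file needed

print(iris.feature_names)           # sepal length, sepal width, petal length, petal width (cm)
print(iris.data[:5])                # first five rows of the four measurement columns
print(iris.target_names)            # ['setosa' 'versicolor' 'virginica']
print(iris.target[:5])              # species codes: 0 = setosa, 1 = versicolor, 2 = virginica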
In our example we want to create a perceptron model which identifies whether the flower is setosa or not, using petal length and petal width as the independent variables. If you want, you can take all four variables, but for our example we are just taking these two variables to predict whether the flower is setosa or not.

So we want our independent variables to be petal length and petal width, and we will store this information in another variable which we are calling X. We define X as iris.data, and here we just want the third and fourth columns; that is why we have written two comma three. If you remember, indexing starts at 0, so the last two columns are columns two and three. Just run this, and our X variable is now ready.

Now let's look at the target variable. You can see we have three different categories: 0, 1 and 2. 0 stands for setosa, 1 stands for versicolor and 2 stands for virginica. Now, if you have some machine learning knowledge, you may know that to create a classification model our y variable should be in the form of 0 and 1. So ideally we want 1 in all the records where the flower is setosa and 0 in all the records where the flower is versicolor or virginica. So let's convert this target variable using some basic operations.

First, let's convert the target variable into the form of true and false: we want True where the flower is setosa, that is where the value of the target is 0, and we want False where the flower is versicolor or virginica, that is where the numerical value is 1 or 2. So let's run this command; we are just checking whether the target is equal to zero or not. If the target is zero we get True, and if the target is not equal to zero we get False. Let's run this.

Now let's look at our y variable. You can see we have converted the zeros to True and the ones and twos to False. In the next step we want to convert these True and False values to 1 and 0: we want 1 in place of True and 0 in place of False. So we will just use the astype method and convert it to int. Just run it and look at our y variable.
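A minimal sketch of the X and y preparation just described (variable names follow the lecture; the load_iris call is repeated here only so the cell runs on its own):

from sklearn.datasets import load_iris

iris = load_iris()

# Independent variables: keep only the last two columns (petal length, petal width).
# Indexing starts at 0, so these are columns 2 and 3.
X = iris.data[:, (2, 3)]

# Target, step 1: True where the species code is 0 (setosa), False otherwise.
y = (iris.target == 0)

# Target, step 2: cast the booleans to integers, so True becomes 1 and False becomes 0.
y = y.astype(int)

print(X[:5])   # two-dimensional array of petal length and petal width
print(y)       # 1 = setosa, 0 = not setosa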
As you can see, our y variable is now an array of 1s and 0s: 1 means setosa and 0 means not setosa. Now let's also look at our X variable before creating our perceptron model. You can see our X variable is a two-dimensional array with petal length and petal width.

Now our X and y variables are ready; we just have to create our perceptron model and train it using these X and y variables. We can create a single-perceptron model using sklearn, but to create an MLP, a multi-layer perceptron model, we have to use Keras and TensorFlow. We will be looking at MLPs in the later part of this course, but for now we will create this perceptron model using sklearn only.

First we need to import Perceptron from sklearn.linear_model. Let's just import it. You can also look at the documentation using this link; the Jupyter notebook is also shared in the resources section of this video, so you can download it if you want to practice. This is the official documentation of Perceptron in sklearn, and you can look at all the parameters we can pass. The first one is the penalty: if you remember, in linear regression there are the regularization terms L1 and L2, also known as lasso and ridge. You can also give alpha, which is the constant we use for regularization. Then there are various other hyperparameters; we will just stick to the basic defaults, but you can look at them if you want. These are some very basic hyperparameters that we get with most machine learning algorithms.

Now, as with any other sklearn machine learning model, we first have to create an object of the algorithm, then we have to fit our X and y variables into that object, and then we can use that object to predict future values of y. So first let's create an object of the Perceptron classifier. We are naming the variable per_clf, we are using the Perceptron class that we have just imported, and we are giving only one hyperparameter, random_state equal to 42. This is basically to reproduce the same result whenever we run this model: if you set random_state you will always get the same result. Giving this hyperparameter is not mandatory.
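Here is a short sketch of the import and model creation just described (the class and parameter names come from scikit-learn's linear_model module; the variable name per_clf follows the lecture):

from sklearn.linear_model import Perceptron

# Create the classifier object. random_state=42 only fixes the random seed so that
# rerunning the notebook reproduces the same result; the parameter is optional.
per_clf = Perceptron(random_state=42)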
You can skip it if you want. In the next line we are fitting our X and y variables into this object. So let's run this. Our model is now trained, and you can see the values of the different hyperparameters here if you want.

Now let's predict the values. Using our classifier object we can just use the predict method: this is our object, we call .predict on it, and we pass our independent variable X as the input. Let's do that. You can see these are the predicted values. We could compare these predicted values with the actual values, but there is no need to do that manually; there is another function, accuracy_score, available in sklearn which will give us the accuracy of our prediction. The accuracy score varies between 0 and 1: 0 means 0 percent accuracy, which means all the predictions are wrong, and 1 means 100 percent accuracy, which means all the predictions are right.

So let's first import accuracy_score from sklearn.metrics, and then we use accuracy_score. It takes two arguments: first we have to provide the actual values, and in the next argument we have to provide the predicted values. Our actual values are stored in the y variable and the predicted values are stored in y_pred. So let's get the accuracy score of our predictions. The accuracy here is 1, that is 100 percent accuracy: our perceptron was able to identify the species of the flower with 100 percent accuracy.

Now, this is a very simple model. Usually we don't use a perceptron for regression or classification tasks: we go for classical machine learning techniques where the data does not follow a very complex pattern, and we go for neural networks where the data follows a very complex pattern. You will rarely find yourself using a perceptron in business settings, but this is just an introduction, and we wanted to give you a basic idea of running perceptron models using sklearn.

Now, after training your model you get these two attributes, the coefficients and the intercept. Basically, our perceptron is dividing this space using a straight line as the decision boundary.
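A sketch of the training, prediction and scoring steps described above (continuing with the X, y and per_clf objects defined in the earlier cells):

from sklearn.metrics import accuracy_score

# Train the perceptron on the petal measurements (X) and the 0/1 target (y).
per_clf.fit(X, y)

# Predict a label for every row of X with the trained classifier.
y_pred = per_clf.predict(X)

# Fraction of correct predictions: 0 means all wrong, 1 means all correct.
print(accuracy_score(y, y_pred))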
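And the two fitted attributes mentioned at the end, as exposed by scikit-learn's Perceptron:

# Weights learned for petal length and petal width, and the bias of the separating line.
print(per_clf.coef_)        # one weight per input feature
print(per_clf.intercept_)   # single bias term

The separating line is the set of points where coef_[0][0] times petal length plus coef_[0][1] times petal width plus intercept_[0] equals zero, which is the equation discussed next.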
So these are the coefficients and this is the intercept of that line, and the equation of that line works out to roughly 4 minus 1.4 times petal length minus 2.2 times petal width, set equal to zero.

So if you want to plot that line, you can use the coefficient and intercept values to do so, and you can also see the impact of the different variables on the y variable: the coefficients give you the impact of each variable, this one the impact of petal length and this one the impact of petal width.

That's all for this lecture. Next we will look at TensorFlow and Keras to create our MLP model. Thank you.