Awesome. So now, if you print out this little line here, it should print out something like this: the TF version. I'm using 2.1.0, but if yours is 2-point-something above that, that should be fine, unless TensorFlow has broken all the code that we're working on here, which is unlikely. I should put up here, in "getting our workspace ready", what we're actually going to do.

So: we need to import TensorFlow 2.x. We've done that, so we can put a little checkmark beside that. I wonder if we can... I'm going to put an emoji next to that. There we go, green tick. Wonderful. We need to import TensorFlow Hub. And we also need to make sure we're using a GPU. Remember, in the introductory lecture we said we're going to need a GPU because, when we're working with unstructured data, oftentimes the patterns are a lot more vast, and running a neural network (a special kind of machine learning model) takes a lot more compute power than a traditional machine learning model, like the random forest we ran in the past. Or at least it does in our case, because we're dealing with a lot more data this time: we've got ten thousand plus images, nearly eight gigabytes worth of data, whereas our previous CSVs were only a couple of megabytes or kilobytes, even less than that.

So we've imported TensorFlow 2.x. Now we need to figure out how to import TensorFlow Hub, okay, and make sure we're using a GPU. And if we come back, the reason why we need TensorFlow Hub is because this is where we're going to pick our model from. And the reason why we need TensorFlow is, of course, to build this whole pipeline and to get our data into tensors.

So, we're still getting our workspace ready. Let's import TensorFlow Hub. We'll put a little comment here: import necessary tools. We could import a few more that we'll use, but we might import them as we go; I just want to make these two known right at the top because these are the main ones for this entire project. So, import tensorflow_hub as hub — it's already installed in Colab, which is beautiful; another great reason for using Colab. That's the traditional TensorFlow Hub way of importing it. Then we can also print the TF Hub version the same way, with hub.__version__ (there's a sketch of this cell below). And away it goes. Wonderful.
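Put together, the imports and version prints being described might look something like this (a minimal sketch, not the notebook's literal cell; in older Colab runtimes you may also have needed the %tensorflow_version 2.x magic at the top of the cell):

# Import the two main tools for this project
import tensorflow as tf
import tensorflow_hub as hub

# Print the versions so we can confirm we're on TF 2.x and Hub is available
print("TF version:", tf.__version__)       # e.g. 2.1.0 or above
print("TF Hub version:", hub.__version__)  # e.g. 0.7.0 or above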
And then we're going to write a little line here, now that we've imported TensorFlow, which will help us check if there's a GPU available. Because what TensorFlow does is allow us to write Python code which gets executed on a GPU: underneath all the TensorFlow code that we write, TensorFlow calls another library which allows our code to be run on a GPU, which in turn makes for really fast numerical calculations. So let's go here: check for GPU availability.

Now, another reason why we're using Colab is because Google kindly provides free GPU access to us. If I was to try to run this project on my laptop, it would probably burn out before it could even figure it out; it could run for days on end. What we'll see, as we eventually get to it, is that because we're going to be using a GPU it runs a lot faster, and even then it's going to take half an hour or so. But we're getting ahead of ourselves. Let's just check if the GPU is available. I'll show you a little bit of code you can use — it's TensorFlow-based, sketched below — to see if a GPU is available. And I'm going to print out a little success message, because I love seeing that we have access to a GPU, and you should too, to be honest, because access to GPUs is what caused the deep learning revolution.

Now, what is this little function doing here? Well, I want you to try and imagine what it's doing: it's accessing TensorFlow's configuration, and it's going to list the physical devices. That literally means what you might think it means: what physical devices are powering this Python 3 Google Compute Engine backend? So we can see RAM in gigabytes, and we've got disk — what else is there? We are most concerned with the GPU, so we're just going to pass the "GPU" string. The people who have written TensorFlow have had this problem before, and they've gone, "let's just make this easy": check if we've got a GPU, else print that it's not available. So we'll run it — let's see what happens.

Oh, well, we do have TensorFlow Hub, that's beautiful, and we got "GPU not available". And you might be asking, what is TensorFlow Hub? Well, this is where I'd encourage you to do your own research. We are going to go through it in a moment, but you could search up "what is TensorFlow Hub" and have a quick read if you want to check it out — don't worry about it for now, though. What we're focused on now is getting a GPU available.
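For reference, the GPU check being described might look something like this (a minimal sketch; the exact wording of the success message is just an example):

import tensorflow as tf

# Ask TensorFlow which physical GPU devices it can see on this runtime
gpus = tf.config.list_physical_devices("GPU")

# Print a little success (or not-so-success) message
print("GPU", "available (yes!)" if gpus else "not available")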
Mm hmm. All right. How do we do that? Well, let's figure it out. If we go to Runtime, we can go to "Change runtime type". Now, again, this is our runtime here; there are a lot of things that look a bit complicated, like "connect to a hosted runtime". Okay: Runtime, Change runtime type — this is how we get access to a GPU. We've got runtime type Python 3. I'm recording this post-2020, so Python 2 is no longer supported — always Python 3. That means Python 2 just won't be getting any more updates; it'll probably still run, though. Then our hardware accelerator. Let's click this. What does this mean? Let's zoom in: what types of GPUs are available in Colab? Beautiful. The GPUs available in Colab often include NVIDIA K80s, T4s, P4s and P100s. Again, if you're curious what these are, Google "NVIDIA K80", "NVIDIA T4", "NVIDIA P100". Long story short, these are different types of NVIDIA GPUs: NVIDIA is a GPU maker, like Intel is a CPU maker, and these are just different models of chips.

So let's go in here. We're going to choose GPU. You can also choose TPU, but we're focused on GPU for now; a TPU requires a little bit more setup with the code, but that could again be something you look up yourself — how to use a TPU in Colab. That's a bit of a rabbit hole you can go down. So let's go GPU and click Save. Again, to do that: Runtime, Change runtime type, GPU. And you'll see — oh, we've remounted Google Drive, and this is changing. Look at that: connected to a Python 3 Google Compute Engine backend with a GPU. Wonderful.

So, what we have to do now, because we've just restarted our runtime: if you already get TensorFlow 2.0 as the output of this cell, you can just rerun it. But because I don't have 2.x by default, I'm going to have to rerun this cell here. So we run this: TensorFlow 2.x selected. Now, fingers crossed, if we have a GPU available — meaning super-fast numerical computing — this little output message should change, because this little piece of code is going to check to see if a GPU is available, which it looks like it is. Let's run this... Oh yes. So now we have a GPU available.
We have TensorFlow available and we have TensorFlow Hub. Let's go have a look at our keynote and where we're up to. We've done this — this is what we've done, except we've built it all up in Colab. So now we're ready to actually go through our workflow. I should put a step zero in here: get our workspace ready. But this is really exciting: we can now get our data ready by turning it into tensors. So, come back to our notebook. Make sure, when you run this cell, that you've got access to a GPU, because the code we run later on will utilize the GPU, and make sure your TensorFlow version is 2.x or above. Your hub version doesn't really matter as much, but if it's 0.7.0 or above, that's a good thing. So I'll see you in the next video; let's start figuring out how to get our data ready.
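If you want a quick sanity check that the workspace is ready before moving on, something like this would do it (just a sketch using the same imports as before, not part of the original notebook):

import tensorflow as tf
import tensorflow_hub as hub

# Workspace check: TF 2.x, a Hub version, and (ideally) a visible GPU
assert tf.__version__.startswith("2."), "Expected TensorFlow 2.x"
print("TF:", tf.__version__, "| Hub:", hub.__version__)
print("GPU available:", bool(tf.config.list_physical_devices("GPU")))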