If you watched the syllabus part of this course, you will find that we mentioned Keras and TensorFlow. In this video, we will try to understand what Keras and TensorFlow are. So let's see.

Keras is a deep learning framework that provides a convenient way to define and train almost any kind of deep learning model.

Basically, Keras works at the model level. It will help you define the model: how many layers, how many neurons in each layer, what the activation function is, what the optimizer is, and so on.

But it does not handle any low-level operations. If you remember from the previous two lectures, we learned that while training a neural network we need a lot of differentiation, matrix manipulation, and so on. All of this is not done by Keras. Instead, this low-level manipulation and differentiation of data is done by certain specialized and well-optimized libraries.

The good thing about Keras is that it can work seamlessly with several such lower-level libraries. Currently, there are three main backend libraries: TensorFlow, which is developed by Google; CNTK, which stands for Cognitive Toolkit and is developed by Microsoft; and Theano, which is developed by the MILA lab at the University of Montreal.
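The model-level definition described above can be sketched in a few lines of Keras code. This is only an illustrative example, assuming the TensorFlow backend; the layer sizes, activation functions, and optimizer shown here are arbitrary choices, not something from the lecture.

```python
# A minimal sketch of defining a model in Keras (layer sizes are illustrative).
# Keras handles the model-level definition -- layers, activations, optimizer --
# while the heavy numerical work is delegated to the backend (here, TensorFlow).
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(10,)),                 # 10 input features (assumed)
    keras.layers.Dense(32, activation="relu"),       # hidden layer: 32 neurons
    keras.layers.Dense(1, activation="sigmoid"),     # output layer: 1 neuron
])

# Compiling picks the optimizer and loss function; the backend library
# performs the actual differentiation and matrix manipulation during training.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Notice that nothing in this code refers to gradients or matrix operations directly; that is exactly the division of labor the lecture describes.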
Any piece of code written in Keras can be run with any of these backends without having to change anything in the code. But as of now, TensorFlow is the most widely adopted, most scalable, and most production-ready. So we will be using TensorFlow in this course.

Now, TensorFlow, or any other such low-level library, needs processing power from our system to do all this data manipulation. This processing power can be provided by either a CPU or a GPU, which stand for Central Processing Unit and Graphics Processing Unit. By default, we do a CPU-based installation of Keras and TensorFlow. But if you are running on a system with an NVIDIA GPU and properly configured NVIDIA libraries, such as CUDA and cuDNN, which I will be explaining, then you can install the GPU-based version of the TensorFlow backend engine as well.

So that's all we need to know about Keras and TensorFlow. No need to be overwhelmed by these terms now. You will see how, using Keras, we will define a neural network model, and then we will ask Keras to use the TensorFlow backend to train the model. In the next video, we will learn how to install Keras and TensorFlow on our system.
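As a quick illustration of the CPU-versus-GPU point above, TensorFlow lets you ask which devices it can actually use. This is a small sketch, not part of the lecture: on a default CPU-only installation it reports no GPUs, while a system with CUDA and cuDNN correctly configured will list its NVIDIA GPUs.

```python
# Check which physical devices TensorFlow can use for computation.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"TensorFlow can use {len(gpus)} GPU(s):", gpus)
else:
    print("No GPU found; TensorFlow will run on the CPU.")
```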