1
00:00:00,310 --> 00:00:04,180
In this lecture, we will start diving into agents.

2
00:00:04,930 --> 00:00:10,020
Large language models are incredibly powerful, but they lack the ability to

3
00:00:10,030 --> 00:00:15,020
perform certain tasks that even the simplest application can do easily.

4
00:00:15,530 --> 00:00:22,260
For example, LLMs struggle with logic, calculation, or communicating with other

5
00:00:22,270 --> 00:00:23,580
external components.

6
00:00:24,770 --> 00:00:25,600
Let's see an example.

7
00:00:27,640 --> 00:00:32,230
I am asking GPT-4 for the result of an exponentiation.

8
00:00:33,120 --> 00:00:39,350
What is the answer to 5.1 to the power

9
00:00:39,360 --> 00:00:41,730
of 7.3?

10
00:00:52,380 --> 00:00:55,930
And GPT-4, probably the most complex

11
00:00:55,940 --> 00:01:02,390
computer application that has ever been built, fails.

12
00:01:02,400 --> 00:01:08,270
Calculating powers where the exponent is not a whole number is a complex operation

13
00:01:08,280 --> 00:01:13,260
and can't be done reliably by an AI model.

14
00:01:13,270 --> 00:01:16,220
This is not a good result.

15
00:01:16,230 --> 00:01:20,160
This operation can be done easily by any calculator.

16
00:01:20,170 --> 00:01:23,500
For example, in Python, you would use the

17
00:01:23,510 --> 00:01:25,080
double-star operator.

18
00:01:26,430 --> 00:01:34,780
So, 5.1 star star 7.3.

19
00:01:34,790 --> 00:01:39,930
And I've got a completely different answer.

20
00:01:39,940 --> 00:01:42,970
Let's try another one:

21
00:01:42,980 --> 00:01:57,230
5.1 to the power of 1.7.

22
00:01:57,240 --> 00:02:02,010
And once again, GPT-4 fails.

23
00:02:02,020 --> 00:02:10,680
Let's do this again using Python.

24
00:02:10,690 --> 00:02:13,560
As you can see, the answer is different.

25
00:02:13,570 --> 00:02:19,800
This is because GPT-4 is using an approximation, while Python is using the

26
00:02:19,810 --> 00:02:22,440
exact mathematical formula.
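The two power calculations demonstrated in the lecture can be reproduced in a Python shell with the double-star operator. A minimal sketch (the printed values are approximate; exact digits depend on float formatting):

```python
import math

# Non-integer exponents are straightforward in Python
a = 5.1 ** 7.3
b = 5.1 ** 1.7

# math.pow computes the same floating-point result
assert math.isclose(a, math.pow(5.1, 7.3))
assert math.isclose(b, math.pow(5.1, 1.7))

print(a)  # roughly 1.46e5
print(b)  # roughly 16
```

These are the exact results the lecture compares against GPT-4's approximated answers.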
27
00:02:22,450 --> 00:02:26,300
In conclusion, LLMs are powerful tools,

28
00:02:26,310 --> 00:02:28,100
but they are not perfect.

29
00:02:28,110 --> 00:02:30,620
They can be used to generate text,

30
00:02:30,630 --> 00:02:35,760
translate languages, and answer questions, but they cannot replace

31
00:02:35,770 --> 00:02:42,010
calculators or other tools that are designed for specific tasks.

32
00:02:42,020 --> 00:02:43,310
Let's try another one.

33
00:02:43,320 --> 00:02:45,890
I am giving ChatGPT the task of writing

34
00:02:45,900 --> 00:02:48,690
an article about LangChain Agents.

35
00:02:48,700 --> 00:03:02,120
So, write an article about LangChain Agents.

36
00:03:02,130 --> 00:03:04,220
We see how it hallucinates.

37
00:03:04,230 --> 00:03:06,420
Nothing from what you see here is true.

38
00:03:06,430 --> 00:03:09,800
It completely made up the answer.

39
00:03:09,810 --> 00:03:12,580
That's because it was trained on data

40
00:03:12,590 --> 00:03:17,520
that cuts off in late 2021.

41
00:03:17,530 --> 00:03:20,060
You could say that there are plugins that

42
00:03:20,070 --> 00:03:26,140
help GPT-4 browse the internet and access updated information, but that's

43
00:03:26,150 --> 00:03:27,760
something different.

44
00:03:27,770 --> 00:03:30,320
Plugins are external tools that are

45
00:03:30,330 --> 00:03:32,460
similar to agents.

46
00:03:32,470 --> 00:03:35,460
One solution to these problems is to use

47
00:03:35,470 --> 00:03:37,200
LangChain Agents.

48
00:03:37,210 --> 00:03:40,920
Agents are enabling tools for LLMs.

49
00:03:40,930 --> 00:03:48,180
Using agents, LLMs can run code, do calculations, search the web, or run SQL queries.

50
00:03:48,190 --> 00:03:53,580
Now that you've learned why we need agents and what they are, we'll take a

51
00:03:53,590 --> 00:03:58,060
break, and in the next video, we'll see LangChain Agents in action.