Welcome back. The core element of any LLM application is the AI model. LangChain doesn't have its own LLMs, but it provides a standard interface for interacting with various LLMs, including OpenAI's GPT models, Google's Gemini, Meta's Llama, and more. In LangChain terminology, these are called LLM providers.

Now, let's dive into how to invoke OpenAI's GPT models, like GPT-3.5 Turbo, GPT-4, or GPT-4 Turbo, within LangChain. These models expose an interface where chat messages, or conversations, serve as inputs and outputs.

From langchain_openai, I am importing ChatOpenAI. Now, let's create an LLM object: llm = ChatOpenAI().

Next, I'll use llm.invoke to send a request to the OpenAI GPT model. This method takes the prompt as an argument. For example, "Explain quantum mechanics in one sentence."

The invoke method returns an output object, and the content attribute of the output object contains the text response from the OpenAI GPT model. I am running the code. Very well.

Notice that I created the ChatOpenAI object without any arguments, and you might be wondering which model and parameters it used. To view the constructor's arguments along with their defaults, let's check the built-in help of the class.
help(ChatOpenAI). As you can see, there are several available arguments; of particular interest to us are model and temperature. Note that the default model is gpt-3.5-turbo, and the default value for the temperature is 0.7. I delved into OpenAI's chat completions parameters in the OpenAI API with Python course.

If you prefer to use GPT-4 or GPT-4 Turbo instead of GPT-3.5 Turbo, simply change the model name. I am adding a new argument: model="gpt-4-turbo-preview". This is the name for GPT-4 Turbo, and I'm running the code.

Note that, at this moment, GPT-4 Turbo is available only to paid accounts. For this course, we'll stick with GPT-3.5 Turbo because it's both cost-effective and powerful.

Let's take a look at the OpenAI chat completions API. The OpenAI API for GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, and so on uses a list of dictionaries called messages. These messages define three roles: system, user, and assistant. The system role helps set the behavior of the assistant, the user message serves as the prompt or the question we ask the assistant, and the assistant messages store prior responses.
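As a sketch of what that request body looks like at the API level — plain dictionaries, no LangChain involved; the role and field names follow the public chat completions API:

```python
import json

# A chat completions request body: the model name, the sampling
# temperature, and the conversation as a list of role/content dicts.
request_body = {
    "model": "gpt-3.5-turbo",
    "temperature": 0.7,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum mechanics in one sentence."},
        # prior replies from the model would be appended like this:
        # {"role": "assistant", "content": "..."},
    ],
}

print(json.dumps(request_body, indent=2))
```

On a multi-turn conversation, the client appends each assistant reply and each new user question to the same list, so the model always sees the full history.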
To use these messages when calling the LLM from LangChain, we need to import a schema for the messages. From langchain.schema, I will import three classes for the three message types: SystemMessage, AIMessage, and HumanMessage.

In the OpenAI chat completions API, an AI message is equivalent to the assistant message, and a human message represents the user's message or prompt. Now, let's create a list called messages. The list will contain a system message and a human message, which is the prompt: a SystemMessage with content equal to "You are a physicist and respond only in German", and a HumanMessage whose content is the same task, "Explain quantum mechanics in one sentence."

Finally, let's make the API call to OpenAI: output = llm.invoke(messages), and print output.content. I am running the code. Very well, the LLM followed my instructions and responded in one sentence in German.