Overview of LangChain
LangChain is a framework for building applications that connect LLMs with external libraries, data sources, prompts, and user interfaces. It is available as Python and JavaScript (TypeScript) packages. LangChain's user-friendly architecture lets users take its various components, such as agents and chains, and assemble them strategically to accomplish specific tasks like summarization, Q&A, and more.
Key Components of LangChain
- llms: The LLM class provides a standard interface for interacting with completion-style language models such as those offered by OpenAI: it takes a text string as input and returns a text string. For example, asking an OpenAI LLM which shoe brands are most popular in the US might return:
AI Response: "As of 2021, Nike is the most popular shoe brand in the US. Other popular brands include Adidas, Vans, Converse, and New Balance."
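Such a query can be issued with the LLM class directly. The snippet below is a minimal sketch, assuming the langchain and openai packages are installed and an OPENAI_API_KEY environment variable is set; the model settings are illustrative:

```python
from langchain.llms import OpenAI

# Completion-style model; temperature is an illustrative setting.
llm = OpenAI(temperature=0.7)

# The LLM interface takes a string as input and returns a string.
print(llm.invoke("What are the most popular shoe brands in the US?"))
```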
- chat_models: The ChatModel class supports interaction with the different chat models offered by providers such as OpenAI. It takes a list of chat messages as input and returns a chat message. Three message types are used when implementing chatbot applications:
- HumanMessage: A HumanMessage carries input received from the user.
- AIMessage: An AIMessage is a response generated by the language model.
- SystemMessage: A SystemMessage instructs the chat model, typically by describing its role and behavior.
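The snippet below is a minimal sketch of querying a chat model with these message types; it assumes an OpenAI API key is available, and the question and model settings are illustrative. The call returns an AIMessage like the one shown after the code:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(temperature=0.3)  # illustrative settings

messages = [
    SystemMessage(content="You are an astronomy expert who explains concepts clearly."),
    HumanMessage(content="What do black holes look like?"),
]

# The chat model takes a list of messages and returns an AIMessage.
response = chat.invoke(messages)
print(response.content)
```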
AIMessage(content="Black holes are fascinating yet mysterious objects in space. From a visual standpoint, black holes do not emit or reflect any light, making them essentially invisible to the naked eye. This is because their gravitational pull is so strong that nothing, including light, can escape from their vicinity.\n\nHowever, black holes can be indirectly observed through their effects on nearby matter and light. When matter gets too close to a black hole, it spirals around it in a disk-like structure called an accretion disk. Due to the intense gravity, this matter gets heated up and emits high-energy radiation, including X-rays and gamma-rays...")
- Prompt Templates: Prompt templates are reusable, parameterized recipes for building string prompts. They are used with the LLM models described above, as in the sketch below.
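A minimal sketch of a prompt template; the template text and variable names are illustrative:

```python
from langchain.prompts import PromptTemplate

# A reusable template with two input variables.
prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {topic}.")

# Fill in the variables to produce the final string prompt.
print(prompt.format(adjective="funny", topic="data scientists"))
# -> Tell me a funny joke about data scientists.
```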
- ChatPrompt Templates: ChatPromptTemplates are predefined templates that personalize the prompts sent to ChatModels. As previously mentioned, chat messages fall into three categories: System, AI, and Human, and each category has a distinct message prompt template, as demonstrated in the example below. As with Prompt Templates, you also have the option to create your own ChatPrompt Templates.
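A minimal sketch that combines a System and a Human message template into a chat prompt; the translation scenario is illustrative:

```python
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

# One template per message category: System sets the model's role, Human carries the user input.
system_template = SystemMessagePromptTemplate.from_template(
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
human_template = HumanMessagePromptTemplate.from_template("{text}")

chat_prompt = ChatPromptTemplate.from_messages([system_template, human_template])

# Format the templates into a list of messages ready to send to a ChatModel.
messages = chat_prompt.format_messages(
    input_language="English",
    output_language="French",
    text="I love programming.",
)
print(messages)
```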
- OutputParsers: LangChain also supports output parsing, which involves taking the model's raw output and transforming it into whatever structured format we want. LangChain provides several types of output parsers, and when building complex applications with LLMs, an output parser can be used to coerce their output into a specific format. The following example has an LLM generate JSON output and uses LangChain to parse that output.
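One way to do this is with a StructuredOutputParser, whose format instructions are injected into the prompt so the model replies in JSON; this is a minimal sketch, and the field names and prompt are illustrative:

```python
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
from langchain.prompts import ChatPromptTemplate

# Describe the JSON fields the model should return.
schemas = [
    ResponseSchema(name="title", description="Title of the recommended movie"),
    ResponseSchema(name="reason", description="One-sentence reason for the recommendation"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)

prompt = ChatPromptTemplate.from_template(
    "Recommend a {genre} movie.\n{format_instructions}"
)

chat = ChatOpenAI(temperature=0)
messages = prompt.format_messages(
    genre="thriller",
    format_instructions=parser.get_format_instructions(),
)

# Parse the model's JSON reply into a Python dict.
response = chat.invoke(messages)
result = parser.parse(response.content)
print(result["title"])
```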
Chains: Chains are used to link multiple LLM calls together or to integrate them with other elements, such as third-party tools, libraries, and even OpenAI functions. Chains are responsible for connecting prompts, LLMs, and output parsers. LangChain offers LCEL chain constructors as well as legacy chains, both of which are readily customizable. The code below demonstrates chaining several LLM calls to perform a sequence of tasks: writing a movie review, generating a follow-up comment, summarizing the review, and determining the emotional tone of the review.
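A minimal sketch of such a pipeline using the legacy SequentialChain constructor; the prompts, model settings, and output keys are illustrative:

```python
from langchain.chains import LLMChain, SequentialChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

llm = ChatOpenAI(temperature=0.7)  # illustrative settings

# Step 1: write a short movie review.
review_chain = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template("Write a short review of the movie {movie}."),
    output_key="review",
)

# Step 2: generate a follow-up comment on the review.
comment_chain = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template("Write a follow-up comment on this review:\n{review}"),
    output_key="comment",
)

# Step 3: summarize the review.
summary_chain = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template("Summarize this review in one sentence:\n{review}"),
    output_key="summary",
)

# Step 4: determine the emotional tone of the review.
tone_chain = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template("What is the emotional tone of this review?\n{review}"),
    output_key="tone",
)

# Run the four chains in order, passing intermediate outputs along.
overall_chain = SequentialChain(
    chains=[review_chain, comment_chain, summary_chain, tone_chain],
    input_variables=["movie"],
    output_variables=["review", "comment", "summary", "tone"],
    verbose=True,
)

result = overall_chain({"movie": "Inception"})
print(result["tone"])
```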
- Generic: A single LLM is the simplest chain; it can be used to build an input prompt for text generation. Consider the following example of building a basic chain: the prompt template tells the AI to suggest a good movie for a specified genre, and an LLMChain instance combines that prompt with OpenAI's model to generate the recommended movie.
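A minimal sketch of this generic chain; the template wording and model settings are illustrative:

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Prompt template for a movie recommendation system.
prompt = PromptTemplate(
    input_variables=["genre"],
    template="Suggest a good {genre} movie to watch tonight.",
)

# A single LLM plus a prompt template is the simplest possible chain.
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
print(chain.run(genre="science fiction"))
```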
- Utility: Utility chains are customized chains, combining LLMs with other components, designed to address particular tasks. For example, LLMMathChain is a simple utility chain that gives LLMs the ability to perform mathematical operations.
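A minimal sketch of LLMMathChain; note that this chain relies on the numexpr package for evaluating the expression, and the question is illustrative:

```python
from langchain.chains import LLMMathChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
math_chain = LLMMathChain.from_llm(llm, verbose=True)

# The LLM translates the question into a math expression, which the chain then evaluates.
print(math_chain.run("What is 13 raised to the power of 3?"))
```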
Agents: The Agent class uses an LLM and prompts to determine the appropriate course of action, which makes it flexible; Chains, in contrast, follow a predetermined sequence of fixed steps. Agents are more versatile and can be integrated with various tools: with an agent, the LLM decides which tool to use to execute a query based on the query's nature, whereas with chains the entire code path and sequence of steps are predetermined. For a more adaptable and dynamic application, Agents are therefore the superior choice. Below is a code snippet for creating an agent:
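This is a minimal sketch using the classic initialize_agent API with the built-in llm-math tool; the tool selection, model settings, and query are illustrative, and an OpenAI API key (plus the numexpr package for the math tool) is assumed:

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)

# Give the agent a calculator tool; the agent decides when to call it.
tools = load_tools(["llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

agent.run("If a movie ticket costs $12.50, how much do 7 tickets cost in total?")
```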