 | link | text |
---|---|---|
0 | https://python.langchain.com/docs/get_started | Get started with LangChain📄️ IntroductionLangChain is a framework for developing applications powered by language models. It enables applications that:📄️ Installation📄️ QuickstartInstallationNextIntroduction |
1 | https://python.langchain.com/docs/get_started/introduction | Get startedIntroductionOn this pageIntroductionLangChain is a framework for developing applications powered by language models. It enables applications that:Are context-aware: connect a language model to sources of context (prompt instructions, few-shot examples, content to ground its response in, etc.)Reason: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)The main value props of LangChain are:Components: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy to use, whether you are using the rest of the LangChain framework or notOff-the-shelf chains: a structured assembly of components for accomplishing specific higher-level tasksOff-the-shelf chains make it easy to get started. For complex applications, components make it easy to customize existing chains and build new ones.Get startedHere’s how to install LangChain, set up your environment, and start building.We recommend following our Quickstart guide to familiarize yourself with the framework by building your first LangChain application.Note: These docs are for the LangChain Python package. 
For documentation on LangChain.js, the JS/TS version, head here.ModulesLangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex:Model I/OInterface with language modelsRetrievalInterface with application-specific dataChainsConstruct sequences of callsAgentsLet chains choose which tools to use given high-level directivesMemoryPersist application state between runs of a chainCallbacksLog and stream intermediate steps of any chainExamples, ecosystem, and resourcesUse casesWalkthroughs and best practices for common end-to-end use cases, like:Document question answeringChatbotsAnalyzing structured dataand much more...GuidesLearn best practices for developing with LangChain.EcosystemLangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of integrations and dependent repos.Additional resourcesOur community is full of prolific developers, creative builders, and fantastic teachers. Check out YouTube tutorials for great video walkthroughs from folks in the community, and Gallery for a list of awesome LangChain projects, compiled by the folks at KyroLabs.CommunityHead to the Community navigator to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.API referenceHead to the reference section for full documentation of all classes and methods in the LangChain Python package.PreviousGet startedNextInstallationGet startedModulesExamples, ecosystem, and resourcesUse casesGuidesEcosystemAdditional resourcesCommunityAPI reference |
2 | https://python.langchain.com/docs/get_started/installation | Get startedInstallationInstallationOfficial releaseTo install LangChain run:PipCondapip install langchainconda install langchain -c conda-forgeThis will install the bare minimum requirements of LangChain.
A lot of the value of LangChain comes when integrating it with various model providers, datastores, etc.
By default, the dependencies needed to do that are NOT installed.
However, there are two other ways to install LangChain that do bring in those dependencies.To install modules needed for the common LLM providers, run:pip install langchain[llms]To install all modules needed for all integrations, run:pip install langchain[all]Note that if you are using zsh, you'll need to quote square brackets when passing them as an argument to a command, for example:pip install 'langchain[all]'From sourceIf you want to install from source, clone the repo, make sure your working directory is PATH/TO/REPO/langchain/libs/langchain, and run:pip install -e .PreviousIntroductionNextQuickstart |
3 | https://python.langchain.com/docs/get_started/quickstart | Get startedQuickstartOn this pageQuickstartInstallationTo install LangChain run:PipCondapip install langchainconda install langchain -c conda-forgeFor more details, see our Installation guide.Environment setupUsing LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.First we'll need to install their Python package:pip install openaiAccessing the API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running:export OPENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the openai_api_key named parameter when initiating the OpenAI LLM class:from langchain.llms import OpenAIllm = OpenAI(openai_api_key="...")Building an applicationNow we can start building our language model application. LangChain provides many modules that can be used to build language model applications.
Modules can be used as stand-alones in simple applications and they can be combined for more complex use cases.The most common and most important chain that LangChain helps create contains three things:LLM: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them.Prompt Templates: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial.Output Parsers: These translate the raw response from the LLM to a more workable format, making it easy to use the output downstream.In this getting started guide we will cover those three components by themselves, and then go over how to combine all of them.
Understanding these concepts will set you up well for being able to use and customize LangChain applications.
Most LangChain applications allow you to configure the LLM and/or the prompt used, so knowing how to take advantage of this will be a big enabler.LLMsThere are two types of language models, which in LangChain are called:LLMs: this is a language model which takes a string as input and returns a stringChatModels: this is a language model which takes a list of messages as input and returns a messageThe input/output for LLMs is simple and easy to understand - a string.
But what about ChatModels? The input there is a list of ChatMessages, and the output is a single ChatMessage.
A ChatMessage has two required components:content: This is the content of the message.role: This is the role of the entity from which the ChatMessage is coming.LangChain provides several objects to easily distinguish between different roles:HumanMessage: A ChatMessage coming from a human/user.AIMessage: A ChatMessage coming from an AI/assistant.SystemMessage: A ChatMessage coming from the system.FunctionMessage: A ChatMessage coming from a function call.If none of those roles sound right, there is also a ChatMessage class where you can specify the role manually.
For more information on how to use these different messages most effectively, see our prompting guide.LangChain provides a standard interface for both, but it's useful to understand this difference in order to construct prompts for a given language model.
The standard interface that LangChain provides has two methods:predict: Takes in a string, returns a stringpredict_messages: Takes in a list of messages, returns a message.Let's see how to work with these different types of models and these different types of inputs.
First, let's import an LLM and a ChatModel.from langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIllm = OpenAI()chat_model = ChatOpenAI()llm.predict("hi!")>>> "Hi"chat_model.predict("hi!")>>> "Hi"The OpenAI and ChatOpenAI objects are basically just configuration objects.
You can initialize them with parameters like temperature and others, and pass them around.Next, let's use the predict method to run over a string input.text = "What would be a good company name for a company that makes colorful socks?"llm.predict(text)# >> Feetful of Funchat_model.predict(text)# >> Socks O'ColorFinally, let's use the predict_messages method to run over a list of messages.from langchain.schema import HumanMessagetext = "What would be a good company name for a company that makes colorful socks?"messages = [HumanMessage(content=text)]llm.predict_messages(messages)# >> Feetful of Funchat_model.predict_messages(messages)# >> Socks O'ColorFor both these methods, you can also pass in parameters as keyword arguments.
For example, you could pass in temperature=0 to override the temperature the object was configured with.
Whatever values are passed in during run time will always override what the object was configured with.Prompt templatesMost LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.PromptTemplates help with exactly this!
They bundle up all the logic for going from user input into a fully formatted prompt.
This can start off very simple - for example, a prompt to produce the above string would just be:from langchain.prompts import PromptTemplateprompt = PromptTemplate.from_template("What is a good name for a company that makes {product}?")prompt.format(product="colorful socks")What is a good name for a company that makes colorful socks?However, the advantages of using these over raw string formatting are several.
You can "partial" out variables - e.g. you can format only some of the variables at a time.
You can compose them together, easily combining different templates into a single prompt.
For explanations of these functionalities, see the section on prompts for more detail.PromptTemplates can also be used to produce a list of messages.
In this case, the prompt not only contains information about the content, but also about each message (its role, its position in the list, etc.).
Most often, a ChatPromptTemplate is a list of ChatMessageTemplates.
Each ChatMessageTemplate contains instructions for how to format that ChatMessage - its role, and then also its content.
Let's take a look at this below:from langchain.prompts.chat import ChatPromptTemplatetemplate = "You are a helpful assistant that translates {input_language} to {output_language}."human_template = "{text}"chat_prompt = ChatPromptTemplate.from_messages([ ("system", template), ("human", human_template),])chat_prompt.format_messages(input_language="English", output_language="French", text="I love programming.")[ SystemMessage(content="You are a helpful assistant that translates English to French.", additional_kwargs={}), HumanMessage(content="I love programming.")]ChatPromptTemplates can also be constructed in other ways - see the section on prompts for more detail.Output parsersOutputParsers convert the raw output of an LLM into a format that can be used downstream.
There are a few main types of OutputParsers, including:Convert text from LLM -> structured information (e.g. JSON)Convert a ChatMessage into just a stringConvert the extra information returned from a call besides the message (like OpenAI function invocation) into a string.For full information on this, see the section on output parsersIn this getting started guide, we will write our own output parser - one that converts a comma-separated list into a Python list.from langchain.schema import BaseOutputParserclass CommaSeparatedListOutputParser(BaseOutputParser): """Parse the output of an LLM call to a comma-separated list.""" def parse(self, text: str): """Parse the output of an LLM call.""" return text.strip().split(", ")CommaSeparatedListOutputParser().parse("hi, bye")# >> ['hi', 'bye']PromptTemplate + LLM + OutputParserWe can now combine all these into one chain.
This chain will take input variables, pass those to a prompt template to create a prompt, pass the prompt to a language model, and then pass the output through an (optional) output parser.
This is a convenient way to bundle up a modular piece of logic.
Let's see it in action!from langchain.chat_models import ChatOpenAIfrom langchain.prompts.chat import ChatPromptTemplatefrom langchain.schema import BaseOutputParserclass CommaSeparatedListOutputParser(BaseOutputParser): """Parse the output of an LLM call to a comma-separated list.""" def parse(self, text: str): """Parse the output of an LLM call.""" return text.strip().split(", ")template = """You are a helpful assistant who generates comma separated lists.A user will pass in a category, and you should generate 5 objects in that category in a comma separated list.ONLY return a comma separated list, and nothing more."""human_template = "{text}"chat_prompt = ChatPromptTemplate.from_messages([ ("system", template), ("human", human_template),])chain = chat_prompt | ChatOpenAI() | CommaSeparatedListOutputParser()chain.invoke({"text": "colors"})# >> ['red', 'blue', 'green', 'yellow', 'orange']Note that we are using the | syntax to join these components together.
This | syntax is called the LangChain Expression Language.
To learn more about this syntax, read the documentation here.Next stepsThis is it!
We've now gone over how to create the core building block of LangChain applications.
There is a lot more nuance in all these components (LLMs, prompts, output parsers) and a lot more different components to learn about as well.
To continue on your journey:Dive deeper into LLMs, prompts, and output parsersLearn the other key componentsRead up on LangChain Expression Language to learn how to chain these components togetherCheck out our helpful guides for detailed walkthroughs on particular topicsExplore end-to-end use casesPreviousInstallationNextLangChain Expression Language (LCEL)InstallationEnvironment setupBuilding an applicationLLMsPrompt templatesOutput parsersPromptTemplate + LLM + OutputParserNext steps |
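The custom output parser from the quickstart above can be run on its own. A minimal sketch, assuming only the standard library: the real class subclasses langchain.schema.BaseOutputParser, which is omitted here so the snippet runs without LangChain (or an API key) installed.

```python
# Stand-alone sketch of the quickstart's CommaSeparatedListOutputParser.
# The real version subclasses langchain.schema.BaseOutputParser; the base
# class is left out here so this runs without LangChain installed.
class CommaSeparatedListOutputParser:
    """Parse the raw text output of an LLM call into a list of strings."""

    def parse(self, text: str) -> list:
        # Trim surrounding whitespace, then split on comma-space.
        return text.strip().split(", ")


parser = CommaSeparatedListOutputParser()
print(parser.parse("red, blue, green, yellow, orange"))
# → ['red', 'blue', 'green', 'yellow', 'orange']
```

Because the parsing logic is a plain method on the class, it can be unit-tested like this without ever calling a model.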
4 | https://python.langchain.com/docs/expression_language/ | LangChain Expression LanguageOn this pageLangChain Expression Language (LCEL)LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
There are several benefits to writing chains in this manner (as opposed to writing normal code):Async, Batch, and Streaming Support
Any chain constructed this way will automatically have full sync, async, batch, and streaming support.
This makes it easy to prototype a chain in a Jupyter notebook using the sync interface, and then expose it as an async streaming interface.Fallbacks
The non-determinism of LLMs makes it important to be able to handle errors gracefully.
With LCEL you can easily attach fallbacks to any chain.Parallelism
Since LLM applications involve (sometimes long) API calls, it often becomes important to run things in parallel.
With LCEL syntax, any components that can be run in parallel automatically are.Seamless LangSmith Tracing Integration
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, all steps are automatically logged to LangSmith for maximal observability and debuggability.InterfaceThe base interface shared by all LCEL objectsHow toHow to use core features of LCELCookbookExamples of common LCEL usage patternsPreviousQuickstartNextInterface |
5 | https://python.langchain.com/docs/expression_language/interface | LangChain Expression LanguageInterfaceOn this pageInterfaceIn an effort to make it as easy as possible to create custom chains, we've implemented a "Runnable" protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains as well as making it possible to invoke them in a standard way. The standard interface exposed includes:stream: stream back chunks of the responseinvoke: call the chain on an inputbatch: call the chain on a list of inputsThese also have corresponding async methods:astream: stream back chunks of the response asyncainvoke: call the chain on an input asyncabatch: call the chain on a list of inputs asyncastream_log: stream back intermediate steps as they happen, in addition to the final responseThe type of the input varies by component:ComponentInput TypePromptDictionaryRetrieverSingle stringLLM, ChatModelSingle string, list of chat messages or a PromptValueToolSingle string, or dictionary, depending on the toolOutputParserThe output of an LLM or ChatModelThe output type also varies by component:ComponentOutput TypeLLMStringChatModelChatMessagePromptPromptValueRetrieverList of documentsToolDepends on the toolOutputParserDepends on the parserAll runnables expose properties to inspect the input and output types:input_schema: an input Pydantic model auto-generated from the structure of the Runnableoutput_schema: an output Pydantic model auto-generated from the structure of the RunnableLet's take a look at these methods! To do so, we'll create a super simple PromptTemplate + ChatModel chain.from langchain.prompts import ChatPromptTemplatefrom langchain.chat_models import ChatOpenAImodel = ChatOpenAI()prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")chain = prompt | modelInput SchemaA description of the inputs accepted by a Runnable.
This is a Pydantic model dynamically generated from the structure of any Runnable.
You can call .schema() on it to obtain a JSONSchema representation.# The input schema of the chain is the input schema of its first part, the prompt.chain.input_schema.schema() {'title': 'PromptInput', 'type': 'object', 'properties': {'topic': {'title': 'Topic', 'type': 'string'}}}Output SchemaA description of the outputs produced by a Runnable.
This is a Pydantic model dynamically generated from the structure of any Runnable.
You can call .schema() on it to obtain a JSONSchema representation.# The output schema of the chain is the output schema of its last part, in this case a ChatModel, which outputs a ChatMessagechain.output_schema.schema() {'title': 'ChatOpenAIOutput', 'anyOf': [{'$ref': '#/definitions/HumanMessageChunk'}, {'$ref': '#/definitions/AIMessageChunk'}, {'$ref': '#/definitions/ChatMessageChunk'}, {'$ref': '#/definitions/FunctionMessageChunk'}, {'$ref': '#/definitions/SystemMessageChunk'}], 'definitions': {'HumanMessageChunk': {'title': 'HumanMessageChunk', 'description': 'A Human Message chunk.', 'type': 'object', 'properties': {'content': {'title': 'Content', 'type': 'string'}, 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'}, 'type': {'title': 'Type', 'default': 'human', 'enum': ['human'], 'type': 'string'}, 'example': {'title': 'Example', 'default': False, 'type': 'boolean'}, 'is_chunk': {'title': 'Is Chunk', 'default': True, 'enum': [True], 'type': 'boolean'}}, 'required': ['content']}, 'AIMessageChunk': {'title': 'AIMessageChunk', 'description': 'A Message chunk from an AI.', 'type': 'object', 'properties': {'content': {'title': 'Content', 'type': 'string'}, 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'}, 'type': {'title': 'Type', 'default': 'ai', 'enum': ['ai'], 'type': 'string'}, 'example': {'title': 'Example', 'default': False, 'type': 'boolean'}, 'is_chunk': {'title': 'Is Chunk', 'default': True, 'enum': [True], 'type': 'boolean'}}, 'required': ['content']}, 'ChatMessageChunk': {'title': 'ChatMessageChunk', 'description': 'A Chat Message chunk.', 'type': 'object', 'properties': {'content': {'title': 'Content', 'type': 'string'}, 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'}, 'type': {'title': 'Type', 'default': 'chat', 'enum': ['chat'], 'type': 'string'}, 'role': {'title': 'Role', 'type': 'string'}, 'is_chunk': {'title': 'Is Chunk', 'default': True, 'enum': [True], 'type': 'boolean'}}, 'required': 
['content', 'role']}, 'FunctionMessageChunk': {'title': 'FunctionMessageChunk', 'description': 'A Function Message chunk.', 'type': 'object', 'properties': {'content': {'title': 'Content', 'type': 'string'}, 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'}, 'type': {'title': 'Type', 'default': 'function', 'enum': ['function'], 'type': 'string'}, 'name': {'title': 'Name', 'type': 'string'}, 'is_chunk': {'title': 'Is Chunk', 'default': True, 'enum': [True], 'type': 'boolean'}}, 'required': ['content', 'name']}, 'SystemMessageChunk': {'title': 'SystemMessageChunk', 'description': 'A System Message chunk.', 'type': 'object', 'properties': {'content': {'title': 'Content', 'type': 'string'}, 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'}, 'type': {'title': 'Type', 'default': 'system', 'enum': ['system'], 'type': 'string'}, 'is_chunk': {'title': 'Is Chunk', 'default': True, 'enum': [True], 'type': 'boolean'}}, 'required': ['content']}}}Streamfor s in chain.stream({"topic": "bears"}): print(s.content, end="", flush=True) Why don't bears wear shoes? 
Because they have bear feet!Invokechain.invoke({"topic": "bears"}) AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!")Batchchain.batch([{"topic": "bears"}, {"topic": "cats"}]) [AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!"), AIMessage(content="Why don't cats play poker in the wild?\n\nToo many cheetahs!")]You can set the number of concurrent requests by using the max_concurrency parameterchain.batch([{"topic": "bears"}, {"topic": "cats"}], config={"max_concurrency": 5}) [AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!"), AIMessage(content="Sure, here's a cat joke for you:\n\nWhy don't cats play poker in the wild?\n\nToo many cheetahs!")]Async Streamasync for s in chain.astream({"topic": "bears"}): print(s.content, end="", flush=True) Sure, here's a bear joke for you: Why don't bears wear shoes? Because they have bear feet!Async Invokeawait chain.ainvoke({"topic": "bears"}) AIMessage(content="Why don't bears wear shoes? \n\nBecause they have bear feet!")Async Batchawait chain.abatch([{"topic": "bears"}]) [AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!")]Async Stream Intermediate StepsAll runnables also have a method .astream_log() which can be used to stream (as they happen) all or part of the intermediate steps of your chain/sequence. This is useful eg. to show progress to the user, to use intermediate results, or even just to debug your chain.You can choose to stream all steps (default), or include/exclude steps by name, tags or metadata.This method yields JSONPatch ops that when applied in the same order as received build up the RunState.class LogEntry(TypedDict): id: str """ID of the sub-run.""" name: str """Name of the object being run.""" type: str """Type of the object being run, eg. 
prompt, chain, llm, etc.""" tags: List[str] """List of tags for the run.""" metadata: Dict[str, Any] """Key-value pairs of metadata for the run.""" start_time: str """ISO-8601 timestamp of when the run started.""" streamed_output_str: List[str] """List of LLM tokens streamed by this run, if applicable.""" final_output: Optional[Any] """Final output of this run. Only available after the run has finished successfully.""" end_time: Optional[str] """ISO-8601 timestamp of when the run ended. Only available after the run has finished."""class RunState(TypedDict): id: str """ID of the run.""" streamed_output: List[Any] """List of output chunks streamed by Runnable.stream()""" final_output: Optional[Any] """Final output of the run, usually the result of aggregating (`+`) streamed_output. Only available after the run has finished successfully.""" logs: Dict[str, LogEntry] """Map of run names to sub-runs. If filters were supplied, this list will contain only the runs that matched the filters."""Streaming JSONPatch chunksThis is useful eg. to stream the JSONPatch in an HTTP server, and then apply the ops on the client to rebuild the run state there. 
See LangServe for tooling to make it easier to build a webserver from any Runnable.from langchain.embeddings import OpenAIEmbeddingsfrom langchain.schema.output_parser import StrOutputParserfrom langchain.schema.runnable import RunnablePassthroughfrom langchain.vectorstores import FAISStemplate = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)vectorstore = FAISS.from_texts(["harrison worked at kensho"], embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()retrieval_chain = ( {"context": retriever.with_config(run_name='Docs'), "question": RunnablePassthrough()} | prompt | model | StrOutputParser())async for chunk in retrieval_chain.astream_log("where did harrison work?", include_names=['Docs']): print(chunk) RunLogPatch({'op': 'replace', 'path': '', 'value': {'final_output': None, 'id': 'fd6fcf62-c92c-4edf-8713-0fc5df000f62', 'logs': {}, 'streamed_output': []}}) RunLogPatch({'op': 'add', 'path': '/logs/Docs', 'value': {'end_time': None, 'final_output': None, 'id': '8c998257-1ec8-4546-b744-c3fdb9728c41', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:35.668', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}) RunLogPatch({'op': 'add', 'path': '/logs/Docs/final_output', 'value': {'documents': [Document(page_content='harrison worked at kensho')]}}, {'op': 'add', 'path': '/logs/Docs/end_time', 'value': '2023-10-05T12:52:36.033'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ''}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'H'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'arrison'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' worked'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' at'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' Kens'}) RunLogPatch({'op': 'add', 'path': 
'/streamed_output/-', 'value': 'ho'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': '.'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ''}) RunLogPatch({'op': 'replace', 'path': '/final_output', 'value': {'output': 'Harrison worked at Kensho.'}})Streaming the incremental RunStateYou can simply pass diff=False to get incremental values of RunState.async for chunk in retrieval_chain.astream_log("where did harrison work?", include_names=['Docs'], diff=False): print(chunk) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {}, 'streamed_output': []}) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': None, 'final_output': None, 'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': []}) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217', 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]}, 'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': []}) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217', 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]}, 'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': ['']}) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': 
'2023-10-05T12:52:37.217', 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]}, 'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': ['', 'H']}) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217', 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]}, 'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': ['', 'H', 'arrison']}) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217', 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]}, 'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': ['', 'H', 'arrison', ' worked']}) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217', 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]}, 'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': ['', 'H', 'arrison', ' worked', ' at']}) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217', 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]}, 
'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': ['', 'H', 'arrison', ' worked', ' at', ' Kens']}) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217', 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]}, 'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': ['', 'H', 'arrison', ' worked', ' at', ' Kens', 'ho']}) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217', 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]}, 'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': ['', 'H', 'arrison', ' worked', ' at', ' Kens', 'ho', '.']}) RunLog({'final_output': None, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217', 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]}, 'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': ['', 'H', 'arrison', ' worked', ' at', ' Kens', 'ho', '.', '']}) RunLog({'final_output': {'output': 'Harrison worked at Kensho.'}, 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185', 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217', 'final_output': {'documents': 
[Document(page_content='harrison worked at kensho')]}, 'id': '621597dd-d716-4532-938d-debc21a453d1', 'metadata': {}, 'name': 'Docs', 'start_time': '2023-10-05T12:52:36.935', 'streamed_output_str': [], 'tags': ['map:key:context', 'FAISS'], 'type': 'retriever'}}, 'streamed_output': ['', 'H', 'arrison', ' worked', ' at', ' Kens', 'ho', '.', '']})ParallelismLet's take a look at how the LangChain Expression Language supports parallel requests as much as possible. For example, when using a RunnableParallel (often written as a dictionary) it executes each element in parallel.from langchain.schema.runnable import RunnableParallelchain1 = ChatPromptTemplate.from_template("tell me a joke about {topic}") | modelchain2 = ChatPromptTemplate.from_template("write a short (2 line) poem about {topic}") | modelcombined = RunnableParallel(joke=chain1, poem=chain2)chain1.invoke({"topic": "bears"}) CPU times: user 31.7 ms, sys: 8.59 ms, total: 40.3 ms Wall time: 1.05 s AIMessage(content="Why don't bears like fast food?\n\nBecause they can't catch it!", additional_kwargs={}, example=False)chain2.invoke({"topic": "bears"}) CPU times: user 42.9 ms, sys: 10.2 ms, total: 53 ms Wall time: 1.93 s AIMessage(content="In forest's embrace, bears roam free,\nSilent strength, nature's majesty.", additional_kwargs={}, example=False)combined.invoke({"topic": "bears"}) CPU times: user 96.3 ms, sys: 20.4 ms, total: 117 ms Wall time: 1.1 s {'joke': AIMessage(content="Why don't bears wear socks?\n\nBecause they have bear feet!", additional_kwargs={}, example=False), 'poem': AIMessage(content="In forest's embrace,\nMajestic bears leave their trace.", additional_kwargs={}, example=False)} |
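The timings above show that combined finishes in roughly the time of its slower branch, not the sum of both. As a rough plain-Python sketch of that behavior (the thread pool and the stand-in functions are illustrative only; none of these names are LangChain APIs):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for LLM chains: each "call" just sleeps, so the effect of running
# the branches concurrently is easy to see.
def slow_joke(topic):
    time.sleep(0.3)
    return f"a joke about {topic}"

def slow_poem(topic):
    time.sleep(0.3)
    return f"a poem about {topic}"

def run_parallel(branches, value):
    """Mimic RunnableParallel.invoke: run every branch concurrently and
    collect the results into a dict keyed by branch name."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, value) for name, fn in branches.items()}
        return {name: f.result() for name, f in futures.items()}

start = time.perf_counter()
combined = run_parallel({"joke": slow_joke, "poem": slow_poem}, "bears")
elapsed = time.perf_counter() - start  # roughly one sleep, not two
```

Running the two branches sequentially would take at least 0.6 s here; the concurrent version takes about 0.3 s, which mirrors the wall-time numbers in the notebook output above.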
6 | https://python.langchain.com/docs/expression_language/how_to/ | LangChain Expression LanguageHow toHow to📄️ Bind runtime argsSometimes we want to invoke a Runnable within a Runnable sequence with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use Runnable.bind() to easily pass these arguments in.📄️ Add fallbacksThere are many possible points of failure in an LLM application, whether that be issues with LLM API's, poor model outputs, issues with other integrations, etc. Fallbacks help you gracefully handle and isolate these issues.📄️ Run arbitrary functionsYou can use arbitrary functions in the pipeline📄️ Use RunnableParallel/RunnableMapRunnableParallel (aka. RunnableMap) makes it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map.📄️ Route between multiple RunnablesThis notebook covers how to do routing in the LangChain Expression Language.PreviousInterfaceNextBind runtime args |
7 | https://python.langchain.com/docs/expression_language/how_to/binding | LangChain Expression LanguageHow toBind runtime argsOn this pageBind runtime argsSometimes we want to invoke a Runnable within a Runnable sequence with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use Runnable.bind() to easily pass these arguments in.Suppose we have a simple prompt + model sequence:from langchain.chat_models import ChatOpenAIfrom langchain.prompts import ChatPromptTemplatefrom langchain.schema import StrOutputParserfrom langchain.schema.runnable import RunnablePassthroughprompt = ChatPromptTemplate.from_messages( [ ("system", "Write out the following equation using algebraic symbols then solve it. Use the format\n\nEQUATION:...\nSOLUTION:...\n\n"), ("human", "{equation_statement}") ])model = ChatOpenAI(temperature=0)runnable = {"equation_statement": RunnablePassthrough()} | prompt | model | StrOutputParser()print(runnable.invoke("x raised to the third plus seven equals 12")) EQUATION: x^3 + 7 = 12 SOLUTION: Subtracting 7 from both sides of the equation, we get: x^3 = 12 - 7 x^3 = 5 Taking the cube root of both sides, we get: x = ∛5 Therefore, the solution to the equation x^3 + 7 = 12 is x = ∛5.and want to call the model with certain stop words:runnable = ( {"equation_statement": RunnablePassthrough()} | prompt | model.bind(stop="SOLUTION") | StrOutputParser())print(runnable.invoke("x raised to the third plus seven equals 12")) EQUATION: x^3 + 7 = 12 Attaching OpenAI functionsOne particularly useful application of binding is to attach OpenAI functions to a compatible OpenAI model:functions = [ { "name": "solver", "description": "Formulates and solves an equation", "parameters": { "type": "object", "properties": { "equation": { "type": "string", "description": "The algebraic expression of the equation" }, "solution": { "type": "string", "description": "The solution to the 
equation" } }, "required": ["equation", "solution"] } } ]# Need gpt-4 to solve this one correctlyprompt = ChatPromptTemplate.from_messages( [ ("system", "Write out the following equation using algebraic symbols then solve it."), ("human", "{equation_statement}") ])model = ChatOpenAI(model="gpt-4", temperature=0).bind(function_call={"name": "solver"}, functions=functions)runnable = ( {"equation_statement": RunnablePassthrough()} | prompt | model)runnable.invoke("x raised to the third plus seven equals 12") AIMessage(content='', additional_kwargs={'function_call': {'name': 'solver', 'arguments': '{\n"equation": "x^3 + 7 = 12",\n"solution": "x = ∛5"\n}'}}, example=False)PreviousHow toNextAdd fallbacksAttaching OpenAI functions |
8 | https://python.langchain.com/docs/expression_language/how_to/fallbacks | LangChain Expression LanguageHow toAdd fallbacksOn this pageAdd fallbacksThere are many possible points of failure in an LLM application, whether that be issues with LLM APIs, poor model outputs, issues with other integrations, etc. Fallbacks help you gracefully handle and isolate these issues.Crucially, fallbacks can be applied not only on the LLM level but on the whole runnable level.Handling LLM API ErrorsThis is maybe the most common use case for fallbacks. A request to an LLM API can fail for a variety of reasons - the API could be down, you could have hit rate limits, any number of things. Therefore, using fallbacks can help protect against these types of things.IMPORTANT: By default, a lot of the LLM wrappers catch errors and retry. You will most likely want to turn those off when working with fallbacks. Otherwise the first wrapper will keep on retrying rather than failing.from langchain.chat_models import ChatOpenAI, ChatAnthropicFirst, let's mock out what happens if we hit a RateLimitError from OpenAIfrom unittest.mock import patchfrom openai.error import RateLimitError# Note that we set max_retries = 0 to avoid retrying on RateLimits, etcopenai_llm = ChatOpenAI(max_retries=0)anthropic_llm = ChatAnthropic()llm = openai_llm.with_fallbacks([anthropic_llm])# Let's use just the OpenAI LLM first, to show that we run into an errorwith patch('openai.ChatCompletion.create', side_effect=RateLimitError()): try: print(openai_llm.invoke("Why did the chicken cross the road?")) except: print("Hit error") Hit error# Now let's try with fallbacks to Anthropicwith patch('openai.ChatCompletion.create', side_effect=RateLimitError()): try: print(llm.invoke("Why did the chicken cross the road?")) except: print("Hit error") content=' I don\'t actually know why the chicken crossed the road, but here are some possible humorous answers:\n\n- To get to the other side!\n\n- It was too chicken to just stand 
there. \n\n- It wanted a change of scenery.\n\n- It wanted to show the possum it could be done.\n\n- It was on its way to a poultry farmers\' convention.\n\nThe joke plays on the double meaning of "the other side" - literally crossing the road to the other side, or the "other side" meaning the afterlife. So it\'s an anti-joke, with a silly or unexpected pun as the answer.' additional_kwargs={} example=FalseWe can use our "LLM with Fallbacks" as we would a normal LLM.from langchain.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_messages( [ ("system", "You're a nice assistant who always includes a compliment in your response"), ("human", "Why did the {animal} cross the road"), ])chain = prompt | llmwith patch('openai.ChatCompletion.create', side_effect=RateLimitError()): try: print(chain.invoke({"animal": "kangaroo"})) except: print("Hit error") content=" I don't actually know why the kangaroo crossed the road, but I'm happy to take a guess! Maybe the kangaroo was trying to get to the other side to find some tasty grass to eat. Or maybe it was trying to get away from a predator or other danger. Kangaroos do need to cross roads and other open areas sometimes as part of their normal activities. Whatever the reason, I'm sure the kangaroo looked both ways before hopping across!" additional_kwargs={} example=FalseSpecifying errors to handleWe can also specify the errors to handle if we want to be more specific about when the fallback is invoked:llm = openai_llm.with_fallbacks([anthropic_llm], exceptions_to_handle=(KeyboardInterrupt,))chain = prompt | llmwith patch('openai.ChatCompletion.create', side_effect=RateLimitError()): try: print(chain.invoke({"animal": "kangaroo"})) except: print("Hit error") Hit errorFallbacks for SequencesWe can also create fallbacks for sequences, that are sequences themselves. Here we do that with two different models: ChatOpenAI and then normal OpenAI (which does not use a chat model). 
Because OpenAI is NOT a chat model, you likely want a different prompt.# First let's create a chain with a ChatModel# We add in a string output parser here so the outputs between the two are the same typefrom langchain.schema.output_parser import StrOutputParserchat_prompt = ChatPromptTemplate.from_messages( [ ("system", "You're a nice assistant who always includes a compliment in your response"), ("human", "Why did the {animal} cross the road"), ])# Here we're going to use a bad model name to easily create a chain that will errorchat_model = ChatOpenAI(model_name="gpt-fake")bad_chain = chat_prompt | chat_model | StrOutputParser()# Now let's create a chain with the normal OpenAI modelfrom langchain.llms import OpenAIfrom langchain.prompts import PromptTemplateprompt_template = """Instructions: You should always include a compliment in your response.Question: Why did the {animal} cross the road?"""prompt = PromptTemplate.from_template(prompt_template)llm = OpenAI()good_chain = prompt | llm# We can now create a final chain which combines the twochain = bad_chain.with_fallbacks([good_chain])chain.invoke({"animal": "turtle"}) '\n\nAnswer: The turtle crossed the road to get to the other side, and I have to say he had some impressive determination.' |
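Stripped of the LLM calls, the fallback mechanism is simple: try each runnable in order, and move on only when one raises an exception from a handled set. Here is a plain-Python sketch of those semantics (this toy class is illustrative only, not LangChain's with_fallbacks implementation):

```python
class RunnableWithFallbacks:
    """Toy sketch: invoke runnables in order; on a handled exception, fall
    through to the next one. Re-raise if every option fails."""
    def __init__(self, runnables, exceptions_to_handle=(Exception,)):
        self.runnables = runnables
        self.exceptions_to_handle = exceptions_to_handle

    def invoke(self, value):
        last_error = None
        for r in self.runnables:
            try:
                return r(value)
            except self.exceptions_to_handle as e:
                last_error = e  # handled: try the next runnable
        raise last_error  # every runnable failed

# Stand-ins for a rate-limited primary model and a working backup model.
class RateLimitError(Exception):
    pass

def flaky_llm(prompt):
    raise RateLimitError("429")

def backup_llm(prompt):
    return f"fallback answer to: {prompt}"

llm = RunnableWithFallbacks([flaky_llm, backup_llm])
```

Note how exceptions_to_handle reproduces the behavior shown above: if the raised error is not in the handled set, it propagates immediately instead of triggering the fallback.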
9 | https://python.langchain.com/docs/expression_language/how_to/functions | LangChain Expression LanguageHow toRun arbitrary functionsOn this pageRun arbitrary functionsYou can use arbitrary functions in the pipeline.Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single input and unpacks it into multiple arguments.from langchain.schema.runnable import RunnableLambdafrom langchain.prompts import ChatPromptTemplatefrom langchain.chat_models import ChatOpenAIfrom operator import itemgetterdef length_function(text): return len(text)def _multiple_length_function(text1, text2): return len(text1) * len(text2)def multiple_length_function(_dict): return _multiple_length_function(_dict["text1"], _dict["text2"])prompt = ChatPromptTemplate.from_template("what is {a} + {b}")model = ChatOpenAI()chain1 = prompt | modelchain = { "a": itemgetter("foo") | RunnableLambda(length_function), "b": {"text1": itemgetter("foo"), "text2": itemgetter("bar")} | RunnableLambda(multiple_length_function)} | prompt | modelchain.invoke({"foo": "bar", "bar": "gah"}) AIMessage(content='3 + 9 equals 12.', additional_kwargs={}, example=False)Accepting a Runnable ConfigRunnable lambdas can optionally accept a RunnableConfig, which they can use to pass callbacks, tags, and other configuration information to nested runs.from langchain.schema.runnable import RunnableConfigfrom langchain.schema.output_parser import StrOutputParserimport jsondef parse_or_fix(text: str, config: RunnableConfig): fixing_chain = ( ChatPromptTemplate.from_template( "Fix the following text:\n\n```text\n{input}\n```\nError: {error}" " Don't narrate, just respond with the fixed data." 
) | ChatOpenAI() | StrOutputParser() ) for _ in range(3): try: return json.loads(text) except Exception as e: text = fixing_chain.invoke({"input": text, "error": e}, config) return "Failed to parse"from langchain.callbacks import get_openai_callbackwith get_openai_callback() as cb: RunnableLambda(parse_or_fix).invoke("{foo: bar}", {"tags": ["my-tag"], "callbacks": [cb]}) print(cb) Tokens Used: 65 Prompt Tokens: 56 Completion Tokens: 9 Successful Requests: 1 Total Cost (USD): $0.00010200000000000001PreviousAdd fallbacksNextUse RunnableParallel/RunnableMapAccepting a Runnable Config |
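The single-argument convention above can be generalized: any multi-argument function can be adapted to accept one dict with a small helper. (unpack is a hypothetical name for illustration, not part of LangChain.)

```python
def unpack(fn):
    """Adapt a multi-argument function to the single-dict calling convention
    expected by pipeline steps such as RunnableLambda."""
    def wrapped(d):
        return fn(**d)  # keys of the input dict become keyword arguments
    return wrapped

def multiple_length(text1, text2):
    return len(text1) * len(text2)

step = unpack(multiple_length)
result = step({"text1": "bar", "text2": "gah"})  # len("bar") * len("gah") = 9
```

This does the same job as the hand-written multiple_length_function wrapper in the example above, but works for any function whose parameter names match the dict's keys.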
10 | https://python.langchain.com/docs/expression_language/how_to/map | LangChain Expression LanguageHow toUse RunnableParallel/RunnableMapOn this pageUse RunnableParallel/RunnableMapRunnableParallel (aka. RunnableMap) makes it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map.from langchain.chat_models import ChatOpenAIfrom langchain.prompts import ChatPromptTemplatefrom langchain.schema.runnable import RunnableParallelmodel = ChatOpenAI()joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | modelpoem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | modelmap_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)map_chain.invoke({"topic": "bear"}) {'joke': AIMessage(content="Why don't bears wear shoes? \n\nBecause they have bear feet!", additional_kwargs={}, example=False), 'poem': AIMessage(content="In woodland depths, bear prowls with might,\nSilent strength, nature's sovereign, day and night.", additional_kwargs={}, example=False)}Manipulating outputs/inputsMaps can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence.from langchain.embeddings import OpenAIEmbeddingsfrom langchain.schema.output_parser import StrOutputParserfrom langchain.schema.runnable import RunnablePassthroughfrom langchain.vectorstores import FAISSvectorstore = FAISS.from_texts(["harrison worked at kensho"], embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()template = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)retrieval_chain = ( {"context": retriever, "question": RunnablePassthrough()} | prompt | model | StrOutputParser())retrieval_chain.invoke("where did harrison work?") 'Harrison worked at Kensho.'Here the input to prompt is expected to be a map with keys "context" and "question". 
The user input is just the question. So we need to get the context using our retriever and pass through the user input under the "question" key.Note that when composing a RunnableMap with another Runnable, we don't even need to wrap our dictionary in the RunnableMap class; the type conversion is handled for us.ParallelismRunnableMaps are also useful for running independent processes in parallel, since each Runnable in the map is executed in parallel. For example, we can see our earlier joke_chain, poem_chain and map_chain all have about the same runtime, even though map_chain executes both of the other two.joke_chain.invoke({"topic": "bear"}) 958 ms ± 402 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)poem_chain.invoke({"topic": "bear"}) 1.22 s ± 508 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)map_chain.invoke({"topic": "bear"}) 1.15 s ± 119 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) |
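In plain Python, the reshaping performed by the {"context": retriever, "question": RunnablePassthrough()} map amounts to fanning a single input out into the dict the next step expects. A sketch with stand-in functions (fake_retriever and make_prompt are invented for illustration, not LangChain APIs):

```python
# A pretend vector-store lookup: always "finds" the one document in the store.
def fake_retriever(question):
    return ["harrison worked at kensho"]

def make_prompt(inputs):
    # Fills the same template as the ChatPromptTemplate in the example above.
    return (
        f"Answer the question based only on the following context:\n"
        f"{inputs['context']}\n"
        f"Question: {inputs['question']}"
    )

def map_step(question):
    # Equivalent of {"context": retriever, "question": RunnablePassthrough()}:
    # the single string input becomes a two-key dict for the prompt step.
    return {"context": fake_retriever(question), "question": question}

prompt_text = make_prompt(map_step("where did harrison work?"))
```

The map step is the glue: the retriever sees only the question, the prompt sees both the retrieved context and the untouched question.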
11 | https://python.langchain.com/docs/expression_language/how_to/routing | LangChain Expression LanguageHow toRoute between multiple RunnablesOn this pageRoute between multiple RunnablesThis notebook covers how to do routing in the LangChain Expression Language.Routing allows you to create non-deterministic chains where the output of a previous step defines the next step. Routing helps provide structure and consistency around interactions with LLMs.There are two ways to perform routing:Using a RunnableBranch.Writing a custom factory function that takes the input of a previous step and returns a runnable. Importantly, this should return a runnable and NOT actually execute it.We'll illustrate both methods using a two-step sequence where the first step classifies an input question as being about LangChain, Anthropic, or Other, then routes to a corresponding prompt chain.Using a RunnableBranchA RunnableBranch is initialized with a list of (condition, runnable) pairs and a default runnable. It selects a branch by passing the input to each condition in turn: the first condition that evaluates to True wins, and the runnable corresponding to that condition is run with the input. If no provided conditions match, it runs the default runnable.Here's an example of what it looks like in action:from langchain.prompts import PromptTemplatefrom langchain.chat_models import ChatAnthropicfrom langchain.schema.output_parser import StrOutputParserFirst, let's create a chain that will identify incoming questions as being about LangChain, Anthropic, or Other:chain = PromptTemplate.from_template("""Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`. 
Do not respond with more than one word.<question>{question}</question>Classification:""") | ChatAnthropic() | StrOutputParser()chain.invoke({"question": "how do I call Anthropic?"}) ' Anthropic'Now, let's create three sub chains:langchain_chain = PromptTemplate.from_template("""You are an expert in langchain. \Always answer questions starting with "As Harrison Chase told me". \Respond to the following question:Question: {question}Answer:""") | ChatAnthropic()anthropic_chain = PromptTemplate.from_template("""You are an expert in anthropic. \Always answer questions starting with "As Dario Amodei told me". \Respond to the following question:Question: {question}Answer:""") | ChatAnthropic()general_chain = PromptTemplate.from_template("""Respond to the following question:Question: {question}Answer:""") | ChatAnthropic()from langchain.schema.runnable import RunnableBranchbranch = RunnableBranch( (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain), (lambda x: "langchain" in x["topic"].lower(), langchain_chain), general_chain)full_chain = { "topic": chain, "question": lambda x: x["question"]} | branchfull_chain.invoke({"question": "how do I use Anthropic?"}) AIMessage(content=" As Dario Amodei told me, here are some ways to use Anthropic:\n\n- Sign up for an account on Anthropic's website to access tools like Claude, Constitutional AI, and Writer. \n\n- Use Claude for tasks like email generation, customer service chat, and QA. Claude can understand natural language prompts and provide helpful responses.\n\n- Use Constitutional AI if you need an AI assistant that is harmless, honest, and helpful. It is designed to be safe and aligned with human values.\n\n- Use Writer to generate natural language content for things like marketing copy, stories, reports, and more. 
Give it a topic and prompt and it will create high-quality written content.\n\n- Check out Anthropic's documentation and blog for tips, tutorials, examples, and announcements about new capabilities as they continue to develop their AI technology.\n\n- Follow Anthropic on social media or subscribe to their newsletter to stay up to date on new features and releases.\n\n- For most people, the easiest way to leverage Anthropic's technology is through their website - just create an account to get started!", additional_kwargs={}, example=False)full_chain.invoke({"question": "how do I use LangChain?"}) AIMessage(content=' As Harrison Chase told me, here is how you use LangChain:\n\nLangChain is an AI assistant that can have conversations, answer questions, and generate text. To use LangChain, you simply type or speak your input and LangChain will respond. \n\nYou can ask LangChain questions, have discussions, get summaries or explanations about topics, and request it to generate text on a subject. Some examples of interactions:\n\n- Ask general knowledge questions and LangChain will try to answer factually. For example "What is the capital of France?"\n\n- Have conversations on topics by taking turns speaking. You can prompt the start of a conversation by saying something like "Let\'s discuss machine learning"\n\n- Ask for summaries or high-level explanations on subjects. For example "Can you summarize the main themes in Shakespeare\'s Hamlet?" \n\n- Give creative writing prompts or requests to have LangChain generate text in different styles. For example "Write a short children\'s story about a mouse" or "Generate a poem in the style of Robert Frost about nature"\n\n- Correct LangChain if it makes an inaccurate statement and provide the right information. 
This helps train it.\n\nThe key is interacting naturally and giving it clear prompts and requests', additional_kwargs={}, example=False)full_chain.invoke({"question": "whats 2 + 2"}) AIMessage(content=' 2 + 2 = 4', additional_kwargs={}, example=False)Using a custom functionYou can also use a custom function to route between different outputs. Here's an example:def route(info): if "anthropic" in info["topic"].lower(): return anthropic_chain elif "langchain" in info["topic"].lower(): return langchain_chain else: return general_chainfrom langchain.schema.runnable import RunnableLambdafull_chain = { "topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)full_chain.invoke({"question": "how do I use Anthroipc?"}) AIMessage(content=' As Dario Amodei told me, to use Anthropic IPC you first need to import it:\n\n```python\nfrom anthroipc import ic\n```\n\nThen you can create a client and connect to the server:\n\n```python \nclient = ic.connect()\n```\n\nAfter that, you can call methods on the client and get responses:\n\n```python\nresponse = client.ask("What is the meaning of life?")\nprint(response)\n```\n\nYou can also register callbacks to handle events: \n\n```python\ndef on_poke(event):\n print("Got poked!")\n\nclient.on(\'poke\', on_poke)\n```\n\nAnd that\'s the basics of using the Anthropic IPC client library for Python! Let me know if you have any other questions!', additional_kwargs={}, example=False)full_chain.invoke({"question": "how do I use LangChain?"}) AIMessage(content=' As Harrison Chase told me, to use LangChain you first need to sign up for an API key at platform.langchain.com. Once you have your API key, you can install the Python library and write a simple Python script to call the LangChain API. 
Here is some sample code to get started:\n\n```python\nimport langchain\n\napi_key = "YOUR_API_KEY"\n\nlangchain.set_key(api_key)\n\nresponse = langchain.ask("What is the capital of France?")\n\nprint(response.response)\n```\n\nThis will send the question "What is the capital of France?" to the LangChain API and print the response. You can customize the request by providing parameters like max_tokens, temperature, etc. The LangChain Python library documentation has more details on the available options. The key things are getting an API key and calling langchain.ask() with your question text. Let me know if you have any other questions!', additional_kwargs={}, example=False)full_chain.invoke({"question": "whats 2 + 2"}) AIMessage(content=' 4', additional_kwargs={}, example=False)PreviousUse RunnableParallel/RunnableMapNextCookbookUsing a RunnableBranchUsing a custom function |
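Stripped of the LLM calls, the custom-function routing pattern reduces to three moves: classify the input, pick (but do not execute) the matching runnable, then invoke it. A self-contained sketch with a stub classifier and stub chains (every name here is hypothetical):

```python
# Stub classifier standing in for the Anthropic classification chain above.
def classify(question):
    q = question.lower()
    if "anthropic" in q:
        return "anthropic"
    if "langchain" in q:
        return "langchain"
    return "other"

# Stub "chains", one per topic.
handlers = {
    "anthropic": lambda q: f"As Dario Amodei told me: {q}",
    "langchain": lambda q: f"As Harrison Chase told me: {q}",
    "other": lambda q: f"Answer: {q}",
}

def route(info):
    # Like the route() factory above: returns the runnable, does NOT run it.
    return handlers[info["topic"]]

def full_chain(question):
    info = {"topic": classify(question), "question": question}
    return route(info)(info["question"])
```

Keeping route() side-effect free (it only selects) is what lets the framework wrap it in a RunnableLambda and defer execution to the pipeline.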
12 | https://python.langchain.com/docs/expression_language/cookbook/ | LangChain Expression LanguageCookbookCookbookExample code for accomplishing common tasks with the LangChain Expression Language (LCEL). These examples show how to compose different Runnable (the core LCEL interface) components to achieve various tasks. If you're just getting acquainted with LCEL, the Prompt + LLM page is a good place to start.📄️ Prompt + LLMThe most common and valuable composition is taking:📄️ RAGLet's look at adding in a retrieval step to a prompt and LLM, which adds up to a "retrieval-augmented generation" chain📄️ Multiple chainsRunnables can easily be used to string together multiple Chains📄️ Querying a SQL DBWe can replicate our SQLDatabaseChain with Runnables.📄️ AgentsYou can pass a Runnable into an agent.📄️ Code writingExample of how to use LCEL to write Python code.📄️ Adding memoryThis shows how to add memory to an arbitrary chain. Right now, you can use the memory classes but need to hook it up manually📄️ Adding moderationThis shows how to add in moderation (or other safeguards) around your LLM application.📄️ Using toolsYou can use any Tools with Runnables easily.PreviousRoute between multiple RunnablesNextPrompt + LLM |
13 | https://python.langchain.com/docs/expression_language/cookbook/prompt_llm_parser | LangChain Expression LanguageCookbookPrompt + LLMOn this pagePrompt + LLMThe most common and valuable composition is taking:PromptTemplate / ChatPromptTemplate -> LLM / ChatModel -> OutputParserAlmost any other chain you build will use this building block.PromptTemplate + LLMThe simplest composition is just combining a prompt and model to create a chain that takes user input, adds it to a prompt, passes it to a model, and returns the raw model output.Note, you can mix and match PromptTemplate/ChatPromptTemplates and LLMs/ChatModels as you like here.from langchain.prompts import ChatPromptTemplatefrom langchain.chat_models import ChatOpenAIprompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")model = ChatOpenAI()chain = prompt | modelchain.invoke({"foo": "bears"}) AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!", additional_kwargs={}, example=False)Often we want to attach kwargs that'll be passed to each model call. 
Here are a few examples of that:Attaching Stop Sequenceschain = prompt | model.bind(stop=["\n"])chain.invoke({"foo": "bears"}) AIMessage(content='Why did the bear never wear shoes?', additional_kwargs={}, example=False)Attaching Function Call informationfunctions = [ { "name": "joke", "description": "A joke", "parameters": { "type": "object", "properties": { "setup": { "type": "string", "description": "The setup for the joke" }, "punchline": { "type": "string", "description": "The punchline for the joke" } }, "required": ["setup", "punchline"] } } ]chain = prompt | model.bind(function_call= {"name": "joke"}, functions= functions)chain.invoke({"foo": "bears"}, config={}) AIMessage(content='', additional_kwargs={'function_call': {'name': 'joke', 'arguments': '{\n  "setup": "Why don\'t bears wear shoes?",\n  "punchline": "Because they have bear feet!"\n}'}}, example=False)PromptTemplate + LLM + OutputParserWe can also add in an output parser to easily transform the raw LLM/ChatModel output into a more workable format.from langchain.schema.output_parser import StrOutputParserchain = prompt | model | StrOutputParser()Notice that this now returns a string - a much more workable format for downstream taskschain.invoke({"foo": "bears"}) "Why don't bears wear shoes?\n\nBecause they have bear feet!"Functions Output ParserWhen you specify the function to return, you may just want to parse that directlyfrom langchain.output_parsers.openai_functions import JsonOutputFunctionsParserchain = ( prompt | model.bind(function_call= {"name": "joke"}, functions= functions) | JsonOutputFunctionsParser())chain.invoke({"foo": "bears"}) {'setup': "Why don't bears like fast food?", 'punchline': "Because they can't catch it!"}from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParserchain = ( prompt | model.bind(function_call= {"name": "joke"}, functions= functions) | JsonKeyOutputFunctionsParser(key_name="setup"))chain.invoke({"foo": "bears"}) "Why don't bears wear 
shoes?"Simplifying inputTo make invocation even simpler, we can add a RunnableMap to take care of creating the prompt input dict for us:from langchain.schema.runnable import RunnableMap, RunnablePassthroughmap_ = RunnableMap(foo=RunnablePassthrough())chain = ( map_ | prompt | model.bind(function_call= {"name": "joke"}, functions= functions) | JsonKeyOutputFunctionsParser(key_name="setup"))chain.invoke("bears") "Why don't bears wear shoes?"Since we're composing our map with another Runnable, we can even use some syntactic sugar and just use a dict:chain = ( {"foo": RunnablePassthrough()} | prompt | model.bind(function_call= {"name": "joke"}, functions= functions) | JsonKeyOutputFunctionsParser(key_name="setup"))chain.invoke("bears") "Why don't bears like fast food?"PreviousCookbookNextRAGPromptTemplate + LLMAttaching Stop SequencesAttaching Function Call informationPromptTemplate + LLM + OutputParserFunctions Output ParserSimplifying input |
14 | https://python.langchain.com/docs/expression_language/cookbook/retrieval | LangChain Expression LanguageCookbookRAGOn this pageRAGLet's look at adding in a retrieval step to a prompt and LLM, which adds up to a "retrieval-augmented generation" chainpip install langchain openai faiss-cpu tiktokenfrom operator import itemgetterfrom langchain.prompts import ChatPromptTemplatefrom langchain.chat_models import ChatOpenAIfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.schema.output_parser import StrOutputParserfrom langchain.schema.runnable import RunnablePassthroughfrom langchain.vectorstores import FAISSvectorstore = FAISS.from_texts(["harrison worked at kensho"], embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()template = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)model = ChatOpenAI()chain = ( {"context": retriever, "question": RunnablePassthrough()} | prompt | model | StrOutputParser())chain.invoke("where did harrison work?") 'Harrison worked at Kensho.'template = """Answer the question based only on the following context:{context}Question: {question}Answer in the following language: {language}"""prompt = ChatPromptTemplate.from_template(template)chain = { "context": itemgetter("question") | retriever, "question": itemgetter("question"), "language": itemgetter("language")} | prompt | model | StrOutputParser()chain.invoke({"question": "where did harrison work", "language": "italian"}) 'Harrison ha lavorato a Kensho.'Conversational Retrieval ChainWe can easily add in conversation history. 
This primarily means adding in chat_message_historyfrom langchain.schema.runnable import RunnableMapfrom langchain.schema import format_documentfrom langchain.prompts.prompt import PromptTemplate_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.Chat History:{chat_history}Follow Up Input: {question}Standalone question:"""CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)template = """Answer the question based only on the following context:{context}Question: {question}"""ANSWER_PROMPT = ChatPromptTemplate.from_template(template)DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")def _combine_documents(docs, document_prompt = DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"): doc_strings = [format_document(doc, document_prompt) for doc in docs] return document_separator.join(doc_strings)from typing import Tuple, Listdef _format_chat_history(chat_history: List[Tuple]) -> str: buffer = "" for dialogue_turn in chat_history: human = "Human: " + dialogue_turn[0] ai = "Assistant: " + dialogue_turn[1] buffer += "\n" + "\n".join([human, ai]) return buffer_inputs = RunnableMap( standalone_question=RunnablePassthrough.assign( chat_history=lambda x: _format_chat_history(x['chat_history']) ) | CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),)_context = { "context": itemgetter("standalone_question") | retriever | _combine_documents, "question": lambda x: x["standalone_question"]}conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()conversational_qa_chain.invoke({ "question": "where did harrison work?", "chat_history": [],}) AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False)conversational_qa_chain.invoke({ "question": "where did he work?", "chat_history": [("Who wrote this notebook?", "Harrison")],}) AIMessage(content='Harrison worked at 
Kensho.', additional_kwargs={}, example=False)With Memory and returning source documentsThis shows how to use memory with the above. For memory, we need to manage that outside of the chain. For returning the retrieved documents, we just need to pass them through all the way.from operator import itemgetterfrom langchain.memory import ConversationBufferMemorymemory = ConversationBufferMemory(return_messages=True, output_key="answer", input_key="question")# First we add a step to load memory# This adds a "memory" key to the input objectloaded_memory = RunnablePassthrough.assign( chat_history=memory.load_memory_variables | itemgetter("history"),)# Now we calculate the standalone questionstandalone_question = { "standalone_question": { "question": lambda x: x["question"], "chat_history": lambda x: _format_chat_history(x['chat_history']) } | CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),}# Now we retrieve the documentsretrieved_documents = { "docs": itemgetter("standalone_question") | retriever, "question": lambda x: x["standalone_question"]}# Now we construct the inputs for the final promptfinal_inputs = { "context": lambda x: _combine_documents(x["docs"]), "question": itemgetter("question")}# And finally, we do the part that returns the answersanswer = { "answer": final_inputs | ANSWER_PROMPT | ChatOpenAI(), "docs": itemgetter("docs"),}# And now we put it all together!final_chain = loaded_memory | standalone_question | retrieved_documents | answerinputs = {"question": "where did harrison work?"}result = final_chain.invoke(inputs)result {'answer': AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False), 'docs': [Document(page_content='harrison worked at kensho', metadata={})]}# Note that the memory does not save automatically# This will be improved in the future# For now you need to save it yourselfmemory.save_context(inputs, {"answer": 
result["answer"].content})memory.load_memory_variables({}) {'history': [HumanMessage(content='where did harrison work?', additional_kwargs={}, example=False), AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False)]}PreviousPrompt + LLMNextMultiple chainsConversational Retrieval ChainWith Memory and returning source documents |
15 | https://python.langchain.com/docs/expression_language/cookbook/multiple_chains | LangChain Expression LanguageCookbookMultiple chainsOn this pageMultiple chainsRunnables can easily be used to string together multiple Chainsfrom operator import itemgetterfrom langchain.chat_models import ChatOpenAIfrom langchain.prompts import ChatPromptTemplatefrom langchain.schema import StrOutputParserprompt1 = ChatPromptTemplate.from_template("what is the city {person} is from?")prompt2 = ChatPromptTemplate.from_template("what country is the city {city} in? respond in {language}")model = ChatOpenAI()chain1 = prompt1 | model | StrOutputParser()chain2 = {"city": chain1, "language": itemgetter("language")} | prompt2 | model | StrOutputParser()chain2.invoke({"person": "obama", "language": "spanish"}) 'El país donde se encuentra la ciudad de Honolulu, donde nació Barack Obama, el 44º Presidente de los Estados Unidos, es Estados Unidos. Honolulu se encuentra en la isla de Oahu, en el estado de Hawái.'from langchain.schema.runnable import RunnableMap, RunnablePassthroughprompt1 = ChatPromptTemplate.from_template("generate a {attribute} color. Return the name of the color and nothing else:")prompt2 = ChatPromptTemplate.from_template("what is a fruit of color: {color}. Return the name of the fruit and nothing else:")prompt3 = ChatPromptTemplate.from_template("what is a country with a flag that has the color: {color}. 
Return the name of the country and nothing else:")prompt4 = ChatPromptTemplate.from_template("What is the color of {fruit} and the flag of {country}?")model_parser = model | StrOutputParser()color_generator = {"attribute": RunnablePassthrough()} | prompt1 | {"color": model_parser}color_to_fruit = prompt2 | model_parsercolor_to_country = prompt3 | model_parserquestion_generator = color_generator | {"fruit": color_to_fruit, "country": color_to_country} | prompt4question_generator.invoke("warm") ChatPromptValue(messages=[HumanMessage(content='What is the color of strawberry and the flag of China?', additional_kwargs={}, example=False)])prompt = question_generator.invoke("warm")model.invoke(prompt) AIMessage(content='The color of an apple is typically red or green. The flag of China is predominantly red with a large yellow star in the upper left corner and four smaller yellow stars surrounding it.', additional_kwargs={}, example=False)Branching and MergingYou may want the output of one component to be processed by 2 or more other components. RunnableMaps let you split or fork the chain so multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. 
This type of chain creates a computation graph that looks like the following: Input / \ / \ Branch1 Branch2 \ / \ / Combineplanner = ( ChatPromptTemplate.from_template( "Generate an argument about: {input}" ) | ChatOpenAI() | StrOutputParser() | {"base_response": RunnablePassthrough()})arguments_for = ( ChatPromptTemplate.from_template( "List the pros or positive aspects of {base_response}" ) | ChatOpenAI() | StrOutputParser())arguments_against = ( ChatPromptTemplate.from_template( "List the cons or negative aspects of {base_response}" ) | ChatOpenAI() | StrOutputParser())final_responder = ( ChatPromptTemplate.from_messages( [ ("ai", "{original_response}"), ("human", "Pros:\n{results_1}\n\nCons:\n{results_2}"), ("system", "Generate a final response given the critique"), ] ) | ChatOpenAI() | StrOutputParser())chain = ( planner | { "results_1": arguments_for, "results_2": arguments_against, "original_response": itemgetter("base_response"), } | final_responder)chain.invoke({"input": "scrum"}) 'While Scrum has its potential cons and challenges, many organizations have successfully embraced and implemented this project management framework to great effect. The cons mentioned above can be mitigated or overcome with proper training, support, and a commitment to continuous improvement. It is also important to note that not all cons may be applicable to every organization or project.\n\nFor example, while Scrum may be complex initially, with proper training and guidance, teams can quickly grasp the concepts and practices. The lack of predictability can be mitigated by implementing techniques such as velocity tracking and release planning. The limited documentation can be addressed by maintaining a balance between lightweight documentation and clear communication among team members. 
The dependency on team collaboration can be improved through effective communication channels and regular team-building activities.\n\nScrum can be scaled and adapted to larger projects by using frameworks like Scrum of Scrums or LeSS (Large Scale Scrum). Concerns about speed versus quality can be addressed by incorporating quality assurance practices, such as continuous integration and automated testing, into the Scrum process. Scope creep can be managed by having a well-defined and prioritized product backlog, and a strong product owner can be developed through training and mentorship.\n\nResistance to change can be overcome by providing proper education and communication to stakeholders and involving them in the decision-making process. Ultimately, the cons of Scrum can be seen as opportunities for growth and improvement, and with the right mindset and support, they can be effectively managed.\n\nIn conclusion, while Scrum may have its challenges and potential cons, the benefits and advantages it offers in terms of collaboration, flexibility, adaptability, transparency, and customer satisfaction make it a widely adopted and successful project management framework. With proper implementation and continuous improvement, organizations can leverage Scrum to drive innovation, efficiency, and project success.'PreviousRAGNextQuerying a SQL DBBranching and Merging |
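The branch-and-merge wiring above can be traced with ordinary functions standing in for the prompts and models. This toy sketch only illustrates the dataflow; in LCEL the dict-of-runnables step runs the branches for you, in parallel:

```python
def planner(topic):
    # Stand-in for the planner chain: produce a base response
    return {"base_response": f"argument about {topic}"}

def arguments_for(state):
    return "pros of " + state["base_response"]

def arguments_against(state):
    return "cons of " + state["base_response"]

def final_responder(state):
    # Merge the two branches plus the original response
    return f'{state["original_response"]} | {state["results_1"]} | {state["results_2"]}'

def chain(topic):
    state = planner(topic)
    merged = {
        "results_1": arguments_for(state),      # branch 1
        "results_2": arguments_against(state),  # branch 2
        "original_response": state["base_response"],
    }
    return final_responder(merged)

print(chain("scrum"))
# argument about scrum | pros of argument about scrum | cons of argument about scrum
```

The dict literal is where the fork happens: both branches read the planner's output, and final_responder is the join.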
16 | https://python.langchain.com/docs/expression_language/cookbook/sql_db | LangChain Expression LanguageCookbookQuerying a SQL DBQuerying a SQL DBWe can replicate our SQLDatabaseChain with Runnables.from langchain.prompts import ChatPromptTemplatetemplate = """Based on the table schema below, write a SQL query that would answer the user's question:{schema}Question: {question}SQL Query:"""prompt = ChatPromptTemplate.from_template(template)from langchain.utilities import SQLDatabaseWe'll need the Chinook sample DB for this example. There's many places to download it from, e.g. https://database.guide/2-sample-databases-sqlite/db = SQLDatabase.from_uri("sqlite:///./Chinook.db")def get_schema(_): return db.get_table_info()def run_query(query): return db.run(query)from langchain.chat_models import ChatOpenAIfrom langchain.schema.output_parser import StrOutputParserfrom langchain.schema.runnable import RunnablePassthroughmodel = ChatOpenAI()sql_response = ( RunnablePassthrough.assign(schema=get_schema) | prompt | model.bind(stop=["\nSQLResult:"]) | StrOutputParser() )sql_response.invoke({"question": "How many employees are there?"}) 'SELECT COUNT(*) FROM Employee'template = """Based on the table schema below, question, sql query, and sql response, write a natural language response:{schema}Question: {question}SQL Query: {query}SQL Response: {response}"""prompt_response = ChatPromptTemplate.from_template(template)full_chain = ( RunnablePassthrough.assign(query=sql_response) | RunnablePassthrough.assign( schema=get_schema, response=lambda x: db.run(x["query"]), ) | prompt_response | model)full_chain.invoke({"question": "How many employees are there?"}) AIMessage(content='There are 8 employees.', additional_kwargs={}, example=False)PreviousMultiple chainsNextAgents |
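The get_schema and run_query helpers above are thin wrappers over the database, so the pattern can be exercised against an in-memory SQLite database from the standard library (a small stand-in table instead of Chinook, and no LLM in the loop):

```python
import sqlite3

# Tiny stand-in database instead of Chinook.db
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (EmployeeId INTEGER PRIMARY KEY, LastName TEXT)")
conn.execute("INSERT INTO Employee (LastName) VALUES ('Adams'), ('Edwards')")

def get_schema(_):
    # Mirrors db.get_table_info(): return the CREATE statements
    rows = conn.execute("SELECT sql FROM sqlite_master WHERE type='table'").fetchall()
    return "\n".join(r[0] for r in rows)

def run_query(query):
    return conn.execute(query).fetchall()

print(get_schema(None))
print(run_query("SELECT COUNT(*) FROM Employee"))  # [(2,)]
```

In the full chain, get_schema fills {schema} in the first prompt and run_query executes whatever SQL the model writes, so its result can fill {response} in the second prompt.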
17 | https://python.langchain.com/docs/expression_language/cookbook/agent | LangChain Expression LanguageCookbookAgentsAgentsYou can pass a Runnable into an agent.from langchain.agents import XMLAgent, tool, AgentExecutorfrom langchain.chat_models import ChatAnthropicmodel = ChatAnthropic(model="claude-2")@tooldef search(query: str) -> str: """Search things about current events.""" return "32 degrees"tool_list = [search]# Get prompt to useprompt = XMLAgent.get_default_prompt()# Logic for going from intermediate steps to a string to pass into model# This is pretty tied to the promptdef convert_intermediate_steps(intermediate_steps): log = "" for action, observation in intermediate_steps: log += ( f"<tool>{action.tool}</tool><tool_input>{action.tool_input}" f"</tool_input><observation>{observation}</observation>" ) return log# Logic for converting tools to string to go in promptdef convert_tools(tools): return "\n".join([f"{tool.name}: {tool.description}" for tool in tools])Building an agent from a runnable usually involves a few things:Data processing for the intermediate steps. These need to be represented in a way that the language model can recognize them. This should be pretty tightly coupled to the instructions in the promptThe prompt itselfThe model, complete with stop tokens if neededThe output parser - should be in sync with how the prompt specifies things to be formatted.agent = ( { "question": lambda x: x["question"], "intermediate_steps": lambda x: convert_intermediate_steps(x["intermediate_steps"]) } | prompt.partial(tools=convert_tools(tool_list)) | model.bind(stop=["</tool_input>", "</final_answer>"]) | XMLAgent.get_default_output_parser())agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)agent_executor.invoke({"question": "whats the weather in New york?"}) > Entering new AgentExecutor chain... <tool>search</tool> <tool_input>weather in new york32 degrees <final_answer>The weather in New York is 32 degrees > Finished chain. 
{'question': 'whats the weather in New york?', 'output': 'The weather in New York is 32 degrees'}PreviousQuerying a SQL DBNextCode writing |
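The two conversion helpers are pure string formatting, so they can be sanity-checked with stand-in data. Here Action and Tool are simple namedtuples used for illustration in place of LangChain's own classes:

```python
from collections import namedtuple

# Illustration-only stand-ins for langchain's AgentAction and Tool
Action = namedtuple("Action", ["tool", "tool_input"])
Tool = namedtuple("Tool", ["name", "description"])

def convert_intermediate_steps(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log

def convert_tools(tools):
    return "\n".join([f"{t.name}: {t.description}" for t in tools])

steps = [(Action("search", "weather in new york"), "32 degrees")]
print(convert_intermediate_steps(steps))
# <tool>search</tool><tool_input>weather in new york</tool_input><observation>32 degrees</observation>
print(convert_tools([Tool("search", "Search things about current events.")]))
# search: Search things about current events.
```

Keeping these helpers deterministic and testable matters because the XML tags must line up exactly with the stop tokens and the output parser.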
18 | https://python.langchain.com/docs/expression_language/cookbook/code_writing | LangChain Expression LanguageCookbookCode writingCode writingExample of how to use LCEL to write Python code.from langchain.chat_models import ChatOpenAIfrom langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplatefrom langchain.schema.output_parser import StrOutputParserfrom langchain.utilities import PythonREPLtemplate = """Write some python code to solve the user's problem. Return only python code in Markdown format, e.g.:```python....```"""prompt = ChatPromptTemplate.from_messages( [("system", template), ("human", "{input}")])model = ChatOpenAI()def _sanitize_output(text: str): _, after = text.split("```python") return after.split("```")[0]chain = prompt | model | StrOutputParser() | _sanitize_output | PythonREPL().runchain.invoke({"input": "whats 2 plus 2"}) Python REPL can execute arbitrary code. Use with caution. '4\n'PreviousAgentsNextAdding memory |
19 | https://python.langchain.com/docs/expression_language/cookbook/memory | LangChain Expression LanguageCookbookAdding memoryAdding memoryThis shows how to add memory to an arbitrary chain. Right now, you can use the memory classes but need to hook it up manuallyfrom operator import itemgetterfrom langchain.chat_models import ChatOpenAIfrom langchain.memory import ConversationBufferMemoryfrom langchain.schema.runnable import RunnablePassthroughfrom langchain.prompts import ChatPromptTemplate, MessagesPlaceholdermodel = ChatOpenAI()prompt = ChatPromptTemplate.from_messages([ ("system", "You are a helpful chatbot"), MessagesPlaceholder(variable_name="history"), ("human", "{input}")])memory = ConversationBufferMemory(return_messages=True)memory.load_memory_variables({}) {'history': []}chain = RunnablePassthrough.assign( memory=memory.load_memory_variables | itemgetter("history")) | prompt | modelinputs = {"input": "hi im bob"}response = chain.invoke(inputs)response AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, example=False)memory.save_context(inputs, {"output": response.content})memory.load_memory_variables({}) {'history': [HumanMessage(content='hi im bob', additional_kwargs={}, example=False), AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, example=False)]}inputs = {"input": "whats my name"}response = chain.invoke(inputs)response AIMessage(content='Your name is Bob.', additional_kwargs={}, example=False)PreviousCode writingNextAdding moderation |
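The load-then-save contract the chain relies on can be sketched with a toy buffer. This is only an illustration of the pattern, not ConversationBufferMemory itself:

```python
class ToyBufferMemory:
    """Minimal stand-in showing the load/save contract used above."""
    def __init__(self):
        self.history = []

    def load_memory_variables(self, _):
        # Called before the model runs, to fill the "history" placeholder
        return {"history": list(self.history)}

    def save_context(self, inputs, outputs):
        # Called manually after the model runs, to persist the turn
        self.history.append(("human", inputs["input"]))
        self.history.append(("ai", outputs["output"]))

memory = ToyBufferMemory()
assert memory.load_memory_variables({}) == {"history": []}

inputs = {"input": "hi im bob"}
reply = "Hello Bob! How can I assist you today?"  # stand-in for the model's response
memory.save_context(inputs, {"output": reply})
print(memory.load_memory_variables({}))
```

The key point carried over from the example is that loading happens inside the chain (via RunnablePassthrough.assign) while saving is a separate, explicit step after invoke.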
20 | https://python.langchain.com/docs/expression_language/cookbook/moderation | LangChain Expression LanguageCookbookAdding moderationAdding moderationThis shows how to add in moderation (or other safeguards) around your LLM application.from langchain.chains import OpenAIModerationChainfrom langchain.llms import OpenAIfrom langchain.prompts import ChatPromptTemplatemoderate = OpenAIModerationChain()model = OpenAI()prompt = ChatPromptTemplate.from_messages([ ("system", "repeat after me: {input}")])chain = prompt | modelchain.invoke({"input": "you are stupid"}) '\n\nYou are stupid.'moderated_chain = chain | moderatemoderated_chain.invoke({"input": "you are stupid"}) {'input': '\n\nYou are stupid', 'output': "Text was found that violates OpenAI's content policy."}PreviousAdding memoryNextUsing tools |
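The idea generalizes to any safeguard that post-processes the model's output. Below is a toy version with a keyword filter; OpenAIModerationChain calls OpenAI's moderation endpoint, so the blocklist here is purely a stand-in:

```python
BLOCKLIST = {"stupid"}

def model(inp):
    # Stand-in for the LLM: just echoes the input
    return inp["input"]

def moderate(text):
    # Stand-in safeguard appended after the model
    if any(word in text.lower() for word in BLOCKLIST):
        return "Text was found that violates the content policy."
    return text

def moderated_chain(inp):
    return moderate(model(inp))

print(moderated_chain({"input": "you are stupid"}))
print(moderated_chain({"input": "have a nice day"}))
```

Because the safeguard is just another step piped after the model, it sees the final text regardless of how the earlier steps produced it.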
21 | https://python.langchain.com/docs/expression_language/cookbook/tools | LangChain Expression LanguageCookbookUsing toolsUsing toolsYou can use any Tools with Runnables easily.pip install duckduckgo-searchfrom langchain.chat_models import ChatOpenAIfrom langchain.prompts import ChatPromptTemplatefrom langchain.schema.output_parser import StrOutputParserfrom langchain.tools import DuckDuckGoSearchRunsearch = DuckDuckGoSearchRun()template = """turn the following user input into a search query for a search engine:{input}"""prompt = ChatPromptTemplate.from_template(template)model = ChatOpenAI()chain = prompt | model | StrOutputParser() | searchchain.invoke({"input": "I'd like to figure out what games are tonight"}) 'What sports games are on TV today & tonight? Watch and stream live sports on TV today, tonight, tomorrow. Today\'s 2023 sports TV schedule includes football, basketball, baseball, hockey, motorsports, soccer and more. Watch on TV or stream online on ESPN, FOX, FS1, CBS, NBC, ABC, Peacock, Paramount+, fuboTV, local channels and many other networks. MLB Games Tonight: How to Watch on TV, Streaming & Odds - Thursday, September 7. Seattle Mariners\' Julio Rodriguez greets teammates in the dugout after scoring against the Oakland Athletics in a ... Circle - Country Music and Lifestyle. Live coverage of all the MLB action today is available to you, with the information provided below. The Brewers will look to pick up a road win at PNC Park against the Pirates on Wednesday at 12:35 PM ET. Check out the latest odds and with BetMGM Sportsbook. Use bonus code "GNPLAY" for special offers! MLB Games Tonight: How to Watch on TV, Streaming & Odds - Tuesday, September 5. Houston Astros\' Kyle Tucker runs after hitting a double during the fourth inning of a baseball game against the Los Angeles Angels, Sunday, Aug. 13, 2023, in Houston. (AP Photo/Eric Christian Smith) (APMedia) The Houston Astros versus the Texas Rangers is one of ... 
The second half of tonight\'s college football schedule still has some good games remaining to watch on your television.. We\'ve already seen an exciting one when Colorado upset TCU. And we saw some ...'PreviousAdding moderationNextLangChain Expression Language (LCEL) |
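The shape of the chain (rewrite the user's input into a search query, then pipe the query into the tool) can be seen with both steps stubbed out; the rewrite rule and search function below are stand-ins for illustration only:

```python
def rewrite_to_query(inp):
    # Stand-in for prompt | model | StrOutputParser():
    # turn conversational input into a terse search query
    return inp["input"].lower().replace("i'd like to figure out ", "")

def search(query):
    # Stand-in for DuckDuckGoSearchRun
    return f"results for: {query}"

def chain(inp):
    return search(rewrite_to_query(inp))

print(chain({"input": "I'd like to figure out what games are tonight"}))
# results for: what games are tonight
```

The real chain works the same way: the tool receives whatever string the parser step emits, so the rewrite prompt is what keeps the tool input well-formed.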
22 | https://python.langchain.com/docs/expression_language/ | LangChain Expression LanguageOn this pageLangChain Expression Language (LCEL)LangChain Expression Language or LCEL is a declarative way to easily compose chains together.
There are several benefits to writing chains in this manner (as opposed to writing normal code):Async, Batch, and Streaming Support
Any chain constructed this way will automatically have full sync, async, batch, and streaming support.
This makes it easy to prototype a chain in a Jupyter notebook using the sync interface, and then expose it as an async streaming interface.Fallbacks
The non-determinism of LLMs makes it important to be able to handle errors gracefully.
With LCEL you can easily attach fallbacks to any chain.Parallelism
Since LLM applications involve (sometimes long) API calls, it often becomes important to run things in parallel.
With LCEL syntax, any components that can be run in parallel automatically are.Seamless LangSmith Tracing Integration
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, all steps are automatically logged to LangSmith for maximal observability and debuggability.InterfaceThe base interface shared by all LCEL objectsHow toHow to use core features of LCELCookbookExamples of common LCEL usage patternsPreviousQuickstartNextInterface |
23 | https://python.langchain.com/docs/modules/ | ModulesOn this pageModulesLangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex:Model I/OInterface with language modelsRetrievalInterface with application-specific dataChainsConstruct sequences of callsAgentsLet chains choose which tools to use given high-level directivesMemoryPersist application state between runs of a chainCallbacksLog and stream intermediate steps of any chainPreviousLangChain Expression Language (LCEL)NextModel I/O |
24 | https://python.langchain.com/docs/modules/model_io/ | ModulesModel I/OModel I/OThe core element of any language model application is...the model. LangChain gives you the building blocks to interface with any language model.Prompts: Templatize, dynamically select, and manage model inputsLanguage models: Make calls to language models through common interfacesOutput parsers: Extract information from model outputsPreviousModulesNextPrompts |
25 | https://python.langchain.com/docs/modules/model_io/prompts/ | ModulesModel I/OPromptsPromptsA prompt for a language model is a set of instructions or input provided by a user to
guide the model's response, helping it understand the context and generate relevant
and coherent language-based output, such as answering questions, completing sentences,
or engaging in a conversation.LangChain provides several classes and functions to help construct and work with prompts.Prompt templates: Parametrized model inputsExample selectors: Dynamically select examples to include in promptsPreviousModel I/ONextPrompt templates |
26 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/ | ModulesModel I/OPromptsPrompt templatesPrompt templatesPrompt templates are pre-defined recipes for generating prompts for language models.A template may include instructions, few-shot examples, and specific context and
questions appropriate for a given task.LangChain provides tooling to create and work with prompt templates.LangChain strives to create model agnostic templates to make it easy to reuse
existing templates across different language models.Typically, language models expect the prompt to either be a string or else a list of chat messages.Prompt templateUse PromptTemplate to create a template for a string prompt.By default, PromptTemplate uses Python's str.format
syntax for templating; however, other templating syntaxes are available (e.g., jinja2).from langchain.prompts import PromptTemplateprompt_template = PromptTemplate.from_template( "Tell me a {adjective} joke about {content}.")prompt_template.format(adjective="funny", content="chickens")"Tell me a funny joke about chickens."The template supports any number of variables, including no variables:from langchain.prompts import PromptTemplateprompt_template = PromptTemplate.from_template("Tell me a joke")prompt_template.format()For additional validation, specify input_variables explicitly. These variables
will be compared against the variables present in the template string during instantiation, raising an exception if
there is a mismatch; for example,from langchain.prompts import PromptTemplateinvalid_prompt = PromptTemplate( input_variables=["adjective"], template="Tell me a {adjective} joke about {content}.")You can create custom prompt templates that format the prompt in any way you want.
For more information, see Custom Prompt Templates.Chat prompt templateThe prompt to chat models is a list of chat messages.Each chat message is associated with content, and an additional parameter called role.
For example, in the OpenAI Chat Completions API, a chat message can be associated with an AI assistant, a human or a system role.Create a chat prompt template like this:from langchain.prompts import ChatPromptTemplatetemplate = ChatPromptTemplate.from_messages([ ("system", "You are a helpful AI bot. Your name is {name}."), ("human", "Hello, how are you doing?"), ("ai", "I'm doing well, thanks!"), ("human", "{user_input}"),])messages = template.format_messages( name="Bob", user_input="What is your name?")ChatPromptTemplate.from_messages accepts a variety of message representations.For example, in addition to using the 2-tuple representation of (type, content) used
above, you could pass in an instance of MessagePromptTemplate or BaseMessage.from langchain.prompts import ChatPromptTemplatefrom langchain.prompts.chat import SystemMessage, HumanMessagePromptTemplatetemplate = ChatPromptTemplate.from_messages( [ SystemMessage( content=( "You are a helpful assistant that re-writes the user's text to " "sound more upbeat." ) ), HumanMessagePromptTemplate.from_template("{text}"), ])from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()llm(template.format_messages(text='i dont like eating tasty things.'))AIMessage(content='I absolutely adore indulging in delicious treats!', additional_kwargs={}, example=False)This provides you with a lot of flexibility in how you construct your chat prompts.PreviousPromptsNextConnecting to a Feature Store |
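The input_variables validation described above boils down to comparing the declared names against the placeholders found in the template string. A sketch of that check using only the standard library (this is not PromptTemplate's actual implementation):

```python
from string import Formatter

def validate_template(template: str, input_variables: list) -> None:
    # Collect placeholder names that appear in the format string
    found = {name for _, name, _, _ in Formatter().parse(template) if name}
    declared = set(input_variables)
    if found != declared:
        raise ValueError(f"Mismatch: template has {found}, declared {declared}")

# Matching declaration passes silently
validate_template("Tell me a {adjective} joke about {content}.", ["adjective", "content"])

# A missing variable raises, as in the invalid_prompt example above
try:
    validate_template("Tell me a {adjective} joke about {content}.", ["adjective"])
except ValueError as e:
    print(e)
```

Catching the mismatch at construction time, rather than at format() time, is the whole point of declaring input_variables explicitly.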
27 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store | ModulesModel I/OPromptsPrompt templatesConnecting to a Feature StoreOn this pageConnecting to a Feature StoreFeature stores are a concept from traditional machine learning that make sure data fed into models is up-to-date and relevant. For more on this, see here.This concept is extremely relevant when considering putting LLM applications in production. In order to personalize LLM applications, you may want to combine LLMs with up-to-date information about particular users. Feature stores can be a great way to keep that data fresh, and LangChain provides an easy way to combine that data with LLMs.In this notebook we will show how to connect prompt templates to feature stores. The basic idea is to call a feature store from inside a prompt template to retrieve values that are then formatted into the prompt.FeastTo start, we will use the popular open source feature store framework Feast.This assumes you have already run the steps in the README around getting started. We will build off of that example in getting started, and create an LLMChain to write a note to a specific driver regarding their up-to-date statistics.Load Feast StoreAgain, this should be set up according to the instructions in the Feast README.from feast import FeatureStore# You may need to update the path depending on where you stored itfeast_repo_path = "../../../../../my_feature_repo/feature_repo/"store = FeatureStore(repo_path=feast_repo_path)PromptsHere we will set up a custom FeastPromptTemplate. 
This prompt template will take in a driver id, look up their stats, and format those stats into a prompt.Note that the input to this prompt template is just driver_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).from langchain.prompts import PromptTemplate, StringPromptTemplatetemplate = """Given the driver's up to date stats, write them a note relaying those stats to them.If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel betterHere are the drivers stats:Conversation rate: {conv_rate}Acceptance rate: {acc_rate}Average Daily Trips: {avg_daily_trips}Your response:"""prompt = PromptTemplate.from_template(template)class FeastPromptTemplate(StringPromptTemplate): def format(self, **kwargs) -> str: driver_id = kwargs.pop("driver_id") feature_vector = store.get_online_features( features=[ "driver_hourly_stats:conv_rate", "driver_hourly_stats:acc_rate", "driver_hourly_stats:avg_daily_trips", ], entity_rows=[{"driver_id": driver_id}], ).to_dict() kwargs["conv_rate"] = feature_vector["conv_rate"][0] kwargs["acc_rate"] = feature_vector["acc_rate"][0] kwargs["avg_daily_trips"] = feature_vector["avg_daily_trips"][0] return prompt.format(**kwargs)prompt_template = FeastPromptTemplate(input_variables=["driver_id"])print(prompt_template.format(driver_id=1001)) Given the driver's up to date stats, write them a note relaying those stats to them. If they have a conversation rate above .5, give them a compliment. 
Otherwise, make a silly joke about chickens at the end to make them feel better Here are the drivers stats: Conversation rate: 0.4745151400566101 Acceptance rate: 0.055561766028404236 Average Daily Trips: 936 Your response:Use in a chainWe can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store.from langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainchain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)chain.run(1001) "Hi there! I wanted to update you on your current stats. Your acceptance rate is 0.055561766028404236 and your average daily trips are 936. While your conversation rate is currently 0.4745151400566101, I have no doubt that with a little extra effort, you'll be able to exceed that .5 mark! Keep up the great work! And remember, even chickens can't always cross the road, but they still give it their best shot."TectonAbove, we showed how you could use Feast, a popular open source and self-managed feature store, with LangChain. Our examples below will show a similar integration using Tecton. Tecton is a fully managed feature platform built to orchestrate the complete ML feature lifecycle, from transformation to online serving, with enterprise-grade SLAs.PrerequisitesTecton Deployment (sign up at https://tecton.ai)TECTON_API_KEY environment variable set to a valid Service Account keyDefine and load featuresWe will use the user_transaction_counts Feature View from the Tecton tutorial as part of a Feature Service. For simplicity, we are only using a single Feature View; however, more sophisticated applications may require more feature views to retrieve the features needed for its prompt.user_transaction_metrics = FeatureService( name = "user_transaction_metrics", features = [user_transaction_counts])The above Feature Service is expected to be applied to a live workspace. 
For this example, we will be using the "prod" workspace.import tectonworkspace = tecton.get_workspace("prod")feature_service = workspace.get_feature_service("user_transaction_metrics")PromptsHere we will set up a custom TectonPromptTemplate. This prompt template will take in a user_id , look up their stats, and format those stats into a prompt.Note that the input to this prompt template is just user_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).from langchain.prompts import PromptTemplate, StringPromptTemplatetemplate = """Given the vendor's up to date transaction stats, write them a note based on the following rules:1. If they had a transaction in the last day, write a short congratulations message on their recent sales2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more.3. Always add a silly joke about chickens at the endHere are the vendor's stats:Number of Transactions Last Day: {transaction_count_1d}Number of Transactions Last 30 Days: {transaction_count_30d}Your response:"""prompt = PromptTemplate.from_template(template)class TectonPromptTemplate(StringPromptTemplate): def format(self, **kwargs) -> str: user_id = kwargs.pop("user_id") feature_vector = feature_service.get_online_features( join_keys={"user_id": user_id} ).to_dict() kwargs["transaction_count_1d"] = feature_vector[ "user_transaction_counts.transaction_count_1d_1d" ] kwargs["transaction_count_30d"] = feature_vector[ "user_transaction_counts.transaction_count_30d_1d" ] return prompt.format(**kwargs)prompt_template = TectonPromptTemplate(input_variables=["user_id"])print(prompt_template.format(user_id="user_469998441571")) Given the vendor's up to date transaction stats, write them a note based on the following rules: 1. If they had a transaction in the last day, write a short congratulations message on their recent sales 2. 
If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more. 3. Always add a silly joke about chickens at the end Here are the vendor's stats: Number of Transactions Last Day: 657 Number of Transactions Last 30 Days: 20326 Your response:Use in a chainWe can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platform.from langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainchain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)chain.run("user_469998441571") 'Wow, congratulations on your recent sales! Your business is really soaring like a chicken on a hot air balloon! Keep up the great work!'FeatureformFinally, we will use Featureform, an open-source and enterprise-grade feature store, to run the same example. Featureform allows you to work with your infrastructure like Spark or locally to define your feature transformations.Initialize FeatureformYou can follow the instructions in the README to initialize your transformations and features in Featureform.import featureform as ffclient = ff.Client(host="demo.featureform.com")PromptsHere we will set up a custom FeatureformPromptTemplate. This prompt template will take in the average amount a user pays per transaction.Note that the input to this prompt template is just avg_transaction, since that is the only user defined piece (all other variables are looked up inside the prompt template).from langchain.prompts import PromptTemplate, StringPromptTemplatetemplate = """Given the amount a user spends on average per transaction, let them know if they are a high roller. 
Otherwise, make a silly joke about chickens at the end to make them feel betterHere are the user's stats:Average Amount per Transaction: ${avg_transaction}Your response:"""prompt = PromptTemplate.from_template(template)class FeatureformPromptTemplate(StringPromptTemplate): def format(self, **kwargs) -> str: user_id = kwargs.pop("user_id") fpf = client.features([("avg_transactions", "quickstart")], {"user": user_id}) kwargs["avg_transaction"] = fpf[0] return prompt.format(**kwargs)prompt_template = FeatureformPromptTemplate(input_variables=["user_id"])print(prompt_template.format(user_id="C1410926"))Use in a chainWe can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platform.from langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainchain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)chain.run("C1410926")AzureML Managed Feature StoreWe will use AzureML Managed Feature Store to run the example below. PrerequisitesCreate feature store with online materialization using instructions here Enable online materialization and run online inference.A successfully created feature store by following the instructions should have an account featureset with version as 1. It will have accountID as index column with features accountAge, accountCountry, numPaymentRejects1dPerUser.PromptsHere we will set up a custom AzureMLFeatureStorePromptTemplate. This prompt template will take in an account_id and optional query. It then fetches feature values from the feature store and formats those features into the output prompt. Note that the required input to this prompt template is just account_id, since that is the only user-defined piece (all other variables are looked up inside the prompt template).Also note that this is a bootstrap example to showcase how LLM applications can leverage AzureML managed feature store.
Developers are welcome to improve the prompt template further to suit their needs.import osos.environ['AZURE_ML_CLI_PRIVATE_FEATURES_ENABLED'] = 'True'import pandasfrom pydantic import Extrafrom langchain.prompts import PromptTemplate, StringPromptTemplatefrom azure.identity import AzureCliCredentialfrom azureml.featurestore import FeatureStoreClient, init_online_lookup, get_online_featuresclass AzureMLFeatureStorePromptTemplate(StringPromptTemplate, extra=Extra.allow): def __init__(self, subscription_id: str, resource_group: str, feature_store_name: str, **kwargs): # this is an example template for proof of concept and can be changed to suit the developer needs template = """ {query} ### account id = {account_id} account age = {account_age} account country = {account_country} payment rejects 1d per user = {payment_rejects_1d_per_user} ### """ prompt_template=PromptTemplate.from_template(template) super().__init__(prompt=prompt_template, input_variables=["account_id", "query"]) # use AzureMLOnBehalfOfCredential() in spark context credential = AzureCliCredential() self._fs_client = FeatureStoreClient( credential=credential, subscription_id=subscription_id, resource_group_name=resource_group, name=feature_store_name) self._feature_set = self._fs_client.feature_sets.get(name="accounts", version=1) init_online_lookup(self._feature_set.features, credential, force=True) def format(self, **kwargs) -> str: if "account_id" not in kwargs: raise ValueError("account_id needed to fetch details from feature store") account_id = kwargs.pop("account_id") query="" if "query" in kwargs: query = kwargs.pop("query") # feature set is registered with accountID as entity index column. obs = pandas.DataFrame({'accountID': [account_id]}) # get the feature details for the input entity from feature store. df = get_online_features(self._feature_set.features, obs) # populate prompt template output using the fetched feature values.
kwargs["query"] = query kwargs["account_id"] = account_id kwargs["account_age"] = df["accountAge"][0] kwargs["account_country"] = df["accountCountry"][0] kwargs["payment_rejects_1d_per_user"] = df["numPaymentRejects1dPerUser"][0] return self.prompt.format(**kwargs)Test# Replace the place holders below with actual details of feature store that was created in previous stepsprompt_template = AzureMLFeatureStorePromptTemplate( subscription_id="", resource_group="", feature_store_name="")print(prompt_template.format(account_id="A1829581630230790")) ### account id = A1829581630230790 account age = 563.0 account country = GB payment rejects 1d per user = 15.0 ### Use in a chainWe can now use this in a chain, successfully creating a chain that achieves personalization backed by the AzureML Managed Feature Store.os.environ["OPENAI_API_KEY"]="" # Fill the open ai key herefrom langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainchain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)# NOTE: developer's can further fine tune AzureMLFeatureStorePromptTemplate# for getting even more accurate results for the input querychain.predict(account_id="A1829581630230790", query ="write a small thank you note within 20 words if account age > 10 using the account stats") 'Thank you for being a valued member for over 10 years! We appreciate your continued support.'PreviousPrompt templatesNextCustom prompt templateFeastLoad Feast StorePromptsUse in a chainTectonPrerequisitesDefine and load featuresPromptsUse in a chainFeatureformInitialize FeatureformPromptsUse in a chainAzureML Managed Feature StorePrerequisitesPromptsTestUse in a chain |
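Across all three integrations the shape is identical: a custom prompt template whose format method accepts only the entity key, looks up feature values, and fills the template with them. A minimal, provider-agnostic sketch of that pattern (the in-memory FEATURES dict is an assumption standing in for a real online feature store client):

```python
# Stand-in for an online feature store lookup (assumption for this sketch).
FEATURES = {
    "user_469998441571": {"transaction_count_1d": 657, "transaction_count_30d": 20326},
}

TEMPLATE = (
    "Number of Transactions Last Day: {transaction_count_1d}\n"
    "Number of Transactions Last 30 Days: {transaction_count_30d}"
)

class FeatureStorePromptTemplate:
    """Takes only user_id; every other variable is looked up at format time."""

    input_variables = ["user_id"]

    def format(self, **kwargs) -> str:
        user_id = kwargs.pop("user_id")
        # In real code this line is the provider's online-feature call.
        feature_vector = FEATURES[user_id]
        return TEMPLATE.format(**feature_vector)

prompt = FeatureStorePromptTemplate().format(user_id="user_469998441571")
print(prompt)
```

Swapping the dict lookup for a Tecton, Featureform, or AzureML client call is the only provider-specific part.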
28 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/custom_prompt_template | ModulesModel I/OPromptsPrompt templatesCustom prompt templateOn this pageCustom prompt templateLet's suppose we want the LLM to generate English language explanations of a function given its name. To achieve this task, we will create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.Why are custom prompt templates needed?LangChain provides a set of default prompt templates that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template.Creating a custom prompt templateThere are essentially two distinct prompt templates available - string prompt templates and chat prompt templates. String prompt templates provide a simple prompt in string format, while chat prompt templates produce a more structured prompt to be used with a chat API.In this guide, we will create a custom prompt using a string prompt template. To create a custom string prompt template, there are two requirements:It has an input_variables attribute that exposes what input variables the prompt template expects.It defines a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt.We will create a custom prompt template that takes in the function name as input and formats the prompt to provide the source code of the function.
To achieve this, let's first create a function that will return the source code of a function given its name.import inspectdef get_source_code(function_name): # Get the source code of the function return inspect.getsource(function_name)Next, we'll create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.from langchain.prompts import StringPromptTemplatefrom pydantic import BaseModel, validatorPROMPT = """\Given the function name and source code, generate an English language explanation of the function.Function Name: {function_name}Source Code:{source_code}Explanation:"""class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel): """A custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.""" @validator("input_variables") def validate_input_variables(cls, v): """Validate that the input variables are correct.""" if len(v) != 1 or "function_name" not in v: raise ValueError("function_name must be the only input_variable.") return v def format(self, **kwargs) -> str: # Get the source code of the function source_code = get_source_code(kwargs["function_name"]) # Generate the prompt to be sent to the language model prompt = PROMPT.format( function_name=kwargs["function_name"].__name__, source_code=source_code ) return prompt def _prompt_type(self): return "function-explainer"Use the custom prompt templateNow that we have created a custom prompt template, we can use it to generate prompts for our task.fn_explainer = FunctionExplainerPromptTemplate(input_variables=["function_name"])# Generate a prompt for the function "get_source_code"prompt = fn_explainer.format(function_name=get_source_code)print(prompt) Given the function name and source code, generate an English language explanation of the function. 
Function Name: get_source_code Source Code: def get_source_code(function_name): # Get the source code of the function return inspect.getsource(function_name) Explanation: PreviousConnecting to a Feature StoreNextFew-shot prompt templatesWhy are custom prompt templates needed?Creating a custom prompt templateUse the custom prompt template |
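The two requirements above can be satisfied without any framework at all. A stripped-down sketch of the same contract in plain Python (using a stdlib function, textwrap.dedent, as the subject so the source lookup works anywhere):

```python
import inspect
import textwrap

PROMPT = """Given the function name and source code, generate an English language explanation of the function.
Function Name: {function_name}
Source Code:
{source_code}
Explanation:"""

class FunctionExplainer:
    # The two requirements for a custom string prompt template:
    # 1. an input_variables attribute listing the expected inputs,
    # 2. a format() method over those inputs returning the final string.
    input_variables = ["function_name"]

    def format(self, **kwargs) -> str:
        fn = kwargs["function_name"]
        return PROMPT.format(function_name=fn.__name__,
                             source_code=inspect.getsource(fn))

prompt = FunctionExplainer().format(function_name=textwrap.dedent)
print(prompt)
```

The LangChain version adds pydantic validation of input_variables on top of exactly this shape.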
29 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples | ModulesModel I/OPromptsPrompt templatesFew-shot prompt templatesFew-shot prompt templatesIn this tutorial, we'll learn how to create a prompt template that uses few-shot examples. A few-shot prompt template can be constructed from either a set of examples, or from an Example Selector object.Use CaseIn this tutorial, we'll configure few-shot examples for self-ask with search.Using an example setCreate the example setTo get started, create a list of few-shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables.from langchain.prompts.few_shot import FewShotPromptTemplatefrom langchain.prompts.prompt import PromptTemplateexamples = [ { "question": "Who lived longer, Muhammad Ali or Alan Turing?", "answer": """Are follow up questions needed here: Yes.Follow up: How old was Muhammad Ali when he died?Intermediate answer: Muhammad Ali was 74 years old when he died.Follow up: How old was Alan Turing when he died?Intermediate answer: Alan Turing was 41 years old when he died.So the final answer is: Muhammad Ali""" }, { "question": "When was the founder of craigslist born?", "answer": """Are follow up questions needed here: Yes.Follow up: Who was the founder of craigslist?Intermediate answer: Craigslist was founded by Craig Newmark.Follow up: When was Craig Newmark born?Intermediate answer: Craig Newmark was born on December 6, 1952.So the final answer is: December 6, 1952""" }, { "question": "Who was the maternal grandfather of George Washington?", "answer":"""Are follow up questions needed here: Yes.Follow up: Who was the mother of George Washington?Intermediate answer: The mother of George Washington was Mary Ball Washington.Follow up: Who was the father of Mary Ball Washington?Intermediate answer: The father of Mary Ball Washington was Joseph Ball.So the final answer is: Joseph 
Ball""" }, { "question": "Are both the directors of Jaws and Casino Royale from the same country?", "answer":"""Are follow up questions needed here: Yes.Follow up: Who is the director of Jaws?Intermediate Answer: The director of Jaws is Steven Spielberg.Follow up: Where is Steven Spielberg from?Intermediate Answer: The United States.Follow up: Who is the director of Casino Royale?Intermediate Answer: The director of Casino Royale is Martin Campbell.Follow up: Where is Martin Campbell from?Intermediate Answer: New Zealand.So the final answer is: No""" }]Create a formatter for the few-shot examplesConfigure a formatter that will format the few-shot examples into a string. This formatter should be a PromptTemplate object.example_prompt = PromptTemplate(input_variables=["question", "answer"], template="Question: {question}\n{answer}")print(example_prompt.format(**examples[0])) Question: Who lived longer, Muhammad Ali or Alan Turing? Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate answer: Alan Turing was 41 years old when he died. So the final answer is: Muhammad Ali Feed examples and formatter to FewShotPromptTemplateFinally, create a FewShotPromptTemplate object. This object takes in the few-shot examples and the formatter for the few-shot examples.prompt = FewShotPromptTemplate( examples=examples, example_prompt=example_prompt, suffix="Question: {input}", input_variables=["input"])print(prompt.format(input="Who was the father of Mary Ball Washington?")) Question: Who lived longer, Muhammad Ali or Alan Turing? Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate answer: Alan Turing was 41 years old when he died. 
So the final answer is: Muhammad Ali Question: When was the founder of craigslist born? Are follow up questions needed here: Yes. Follow up: Who was the founder of craigslist? Intermediate answer: Craigslist was founded by Craig Newmark. Follow up: When was Craig Newmark born? Intermediate answer: Craig Newmark was born on December 6, 1952. So the final answer is: December 6, 1952 Question: Who was the maternal grandfather of George Washington? Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball Question: Are both the directors of Jaws and Casino Royale from the same country? Are follow up questions needed here: Yes. Follow up: Who is the director of Jaws? Intermediate Answer: The director of Jaws is Steven Spielberg. Follow up: Where is Steven Spielberg from? Intermediate Answer: The United States. Follow up: Who is the director of Casino Royale? Intermediate Answer: The director of Casino Royale is Martin Campbell. Follow up: Where is Martin Campbell from? Intermediate Answer: New Zealand. So the final answer is: No Question: Who was the father of Mary Ball Washington?Using an example selectorFeed examples into ExampleSelectorWe will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the FewShotPromptTemplate object, we will feed them into an ExampleSelector object.In this tutorial, we will use the SemanticSimilarityExampleSelector class. This class selects few-shot examples based on their similarity to the input. 
It uses an embedding model to compute the similarity between the input and the few-shot examples, as well as a vector store to perform the nearest neighbor search.from langchain.prompts.example_selector import SemanticSimilarityExampleSelectorfrom langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsexample_selector = SemanticSimilarityExampleSelector.from_examples( # This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store the embeddings and do a similarity search over. Chroma, # This is the number of examples to produce. k=1)# Select the most similar example to the input.question = "Who was the father of Mary Ball Washington?"selected_examples = example_selector.select_examples({"question": question})print(f"Examples most similar to the input: {question}")for example in selected_examples: print("\n") for k, v in example.items(): print(f"{k}: {v}") Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. Examples most similar to the input: Who was the father of Mary Ball Washington? question: Who was the maternal grandfather of George Washington? answer: Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball Feed example selector into FewShotPromptTemplateFinally, create a FewShotPromptTemplate object. 
This object takes in the example selector and the formatter for the few-shot examples.prompt = FewShotPromptTemplate( example_selector=example_selector, example_prompt=example_prompt, suffix="Question: {input}", input_variables=["input"])print(prompt.format(input="Who was the father of Mary Ball Washington?")) Question: Who was the maternal grandfather of George Washington? Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball Question: Who was the father of Mary Ball Washington?PreviousCustom prompt templateNextFew-shot examples for chat models |
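Mechanically, what FewShotPromptTemplate does with a fixed example set is simple: render each example through the example template, join the results, and append the formatted suffix. A plain-Python sketch of that assembly:

```python
# Two of the self-ask-with-search examples from above, abbreviated.
examples = [
    {"question": "Who lived longer, Muhammad Ali or Alan Turing?",
     "answer": "So the final answer is: Muhammad Ali"},
    {"question": "When was the founder of craigslist born?",
     "answer": "So the final answer is: December 6, 1952"},
]

example_template = "Question: {question}\n{answer}"
suffix = "Question: {input}"

def format_few_shot(input: str) -> str:
    # Format every example, then append the suffix holding the real question.
    blocks = [example_template.format(**ex) for ex in examples]
    blocks.append(suffix.format(input=input))
    return "\n\n".join(blocks)

prompt = format_few_shot("Who was the father of Mary Ball Washington?")
print(prompt)
```

An example selector only changes which entries of examples are formatted; the assembly step is the same.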
30 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples_chat | ModulesModel I/OPromptsPrompt templatesFew-shot examples for chat modelsOn this pageFew-shot examples for chat modelsThis notebook covers how to use few-shot examples in chat models. There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the FewShotChatMessagePromptTemplate as a flexible starting point, and you can modify or replace them as you see fit.The goal of few-shot prompt templates is to dynamically select examples based on an input, and then format the examples in a final prompt to provide for the model.Note: The following code examples are for chat models. For similar few-shot prompt examples for completion models (LLMs), see the few-shot prompt templates guide.Fixed ExamplesThe most basic (and common) few-shot prompting technique is to use a fixed prompt example. This way you can select a chain, evaluate it, and avoid worrying about additional moving parts in production.The basic components of the template are:examples: A list of dictionary examples to include in the final prompt.example_prompt: converts each example into 1 or more messages through its format_messages method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.Below is a simple demonstration.
First, import the modules for this example:from langchain.prompts import ( FewShotChatMessagePromptTemplate, ChatPromptTemplate,)Then, define the examples you'd like to include.examples = [ {"input": "2+2", "output": "4"}, {"input": "2+3", "output": "5"},]Next, assemble them into the few-shot prompt template.# This is a prompt template used to format each individual example.example_prompt = ChatPromptTemplate.from_messages( [ ("human", "{input}"), ("ai", "{output}"), ])few_shot_prompt = FewShotChatMessagePromptTemplate( example_prompt=example_prompt, examples=examples,)print(few_shot_prompt.format()) Human: 2+2 AI: 4 Human: 2+3 AI: 5Finally, assemble your final prompt and use it with a model.final_prompt = ChatPromptTemplate.from_messages( [ ("system", "You are a wondrous wizard of math."), few_shot_prompt, ("human", "{input}"), ])from langchain.chat_models import ChatAnthropicchain = final_prompt | ChatAnthropic(temperature=0.0)chain.invoke({"input": "What's the square of a triangle?"}) AIMessage(content=' Triangles do not have a "square". A square refers to a shape with 4 equal sides and 4 right angles. Triangles have 3 sides and 3 angles.\n\nThe area of a triangle can be calculated using the formula:\n\nA = 1/2 * b * h\n\nWhere:\n\nA is the area \nb is the base (the length of one of the sides)\nh is the height (the length from the base to the opposite vertex)\n\nSo the area depends on the specific dimensions of the triangle. There is no single "square of a triangle". The area can vary greatly depending on the base and height measurements.', additional_kwargs={}, example=False)Dynamic few-shot promptingSometimes you may want to condition which examples are shown based on the input. For this, you can replace the examples with an example_selector. The other components remain the same as above! 
To review, the dynamic few-shot prompt template would look like:example_selector: responsible for selecting few-shot examples (and the order in which they are returned) for a given input. These implement the BaseExampleSelector interface. A common example is the vectorstore-backed SemanticSimilarityExampleSelectorexample_prompt: convert each example into 1 or more messages through its format_messages method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.These once again can be composed with other messages and chat templates to assemble your final prompt.from langchain.prompts import SemanticSimilarityExampleSelectorfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.vectorstores import ChromaSince we are using a vectorstore to select examples based on semantic similarity, we will want to first populate the store.examples = [ {"input": "2+2", "output": "4"}, {"input": "2+3", "output": "5"}, {"input": "2+4", "output": "6"}, {"input": "What did the cow say to the moon?", "output": "nothing at all"}, { "input": "Write me a poem about the moon", "output": "One for the moon, and one for me, who are we to talk about the moon?", },]to_vectorize = [" ".join(example.values()) for example in examples]embeddings = OpenAIEmbeddings()vectorstore = Chroma.from_texts(to_vectorize, embeddings, metadatas=examples)Create the example_selectorWith a vectorstore created, you can create the example_selector. 
Here we will instruct it to only fetch the top 2 examples.example_selector = SemanticSimilarityExampleSelector( vectorstore=vectorstore, k=2,)# The prompt template will load examples by passing the input to the `select_examples` methodexample_selector.select_examples({"input": "horse"}) [{'input': 'What did the cow say to the moon?', 'output': 'nothing at all'}, {'input': '2+4', 'output': '6'}]Create prompt templateAssemble the prompt template, using the example_selector created above.from langchain.prompts import ( FewShotChatMessagePromptTemplate, ChatPromptTemplate,)# Define the few-shot prompt.few_shot_prompt = FewShotChatMessagePromptTemplate( # The input variables select the values to pass to the example_selector input_variables=["input"], example_selector=example_selector, # Define how each example will be formatted. # In this case, each example will become 2 messages: # 1 human, and 1 AI example_prompt=ChatPromptTemplate.from_messages( [("human", "{input}"), ("ai", "{output}")] ),)Below is an example of how this would be assembled.print(few_shot_prompt.format(input="What's 3+3?")) Human: 2+3 AI: 5 Human: 2+2 AI: 4Assemble the final prompt template:final_prompt = ChatPromptTemplate.from_messages( [ ("system", "You are a wondrous wizard of math."), few_shot_prompt, ("human", "{input}"), ])print(few_shot_prompt.format(input="What's 3+3?")) Human: 2+3 AI: 5 Human: 2+2 AI: 4Use with an LLMNow, you can connect your model to the few-shot prompt.from langchain.chat_models import ChatAnthropicchain = final_prompt | ChatAnthropic(temperature=0.0)chain.invoke({"input": "What's 3+3?"}) AIMessage(content=' 3 + 3 = 6', additional_kwargs={}, example=False)PreviousFew-shot prompt templatesNextFormat template outputFixed ExamplesDynamic few-shot prompting
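The selector's job reduces to "score every example against the input, return the top k". A toy sketch of that interface, using word overlap as a deliberately crude stand-in for embedding similarity (an assumption for illustration only; the real SemanticSimilarityExampleSelector uses vector distance):

```python
examples = [
    {"input": "2+2", "output": "4"},
    {"input": "2+3", "output": "5"},
    {"input": "What did the cow say to the moon?", "output": "nothing at all"},
]

def select_examples(query: str, k: int = 2):
    # Score each example by shared words with the query; higher is more similar.
    # A real selector would compare embedding vectors instead.
    query_words = set(query.lower().split())
    def overlap(ex):
        return len(set(ex["input"].lower().split()) & query_words)
    return sorted(examples, key=overlap, reverse=True)[:k]

best = select_examples("What did the horse say?", k=1)
print(best)
```

Only the scoring function changes between this sketch and a vectorstore-backed selector; the select-top-k contract is the same.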
31 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/format_output | ModulesModel I/OPromptsPrompt templatesFormat template outputFormat template outputThe output of the format method is available as a string, list of messages and ChatPromptValueAs string:output = chat_prompt.format(input_language="English", output_language="French", text="I love programming.")output 'System: You are a helpful assistant that translates English to French.\nHuman: I love programming.'# or alternativelyoutput_2 = chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_string()assert output == output_2As list of Message objects:chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages() [SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})]As ChatPromptValue:chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.") ChatPromptValue(messages=[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})])PreviousFew-shot examples for chat modelsNextTemplate formats |
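The three views come from one object: format returns the string directly, while format_prompt returns a prompt-value object that can be converted either way. A minimal sketch of that object, with messages as simple (role, content) pairs standing in for LangChain's message classes:

```python
class ChatPromptValue:
    """Sketch of a prompt value exposing both string and message views."""

    def __init__(self, messages):
        # messages is a list of (role, content) pairs in this sketch.
        self.messages = messages

    def to_string(self) -> str:
        return "\n".join(f"{role}: {content}" for role, content in self.messages)

    def to_messages(self):
        return list(self.messages)

value = ChatPromptValue([
    ("System", "You are a helpful assistant that translates English to French."),
    ("Human", "I love programming."),
])
print(value.to_string())
```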
32 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/formats | ModulesModel I/OPromptsPrompt templatesTemplate formatsTemplate formatsPromptTemplate by default uses Python f-string as its template format. However, it can also use other formats like jinja2, specified through the template_format argument.To use the jinja2 template:from langchain.prompts import PromptTemplatejinja2_template = "Tell me a {{ adjective }} joke about {{ content }}"prompt = PromptTemplate.from_template(jinja2_template, template_format="jinja2")prompt.format(adjective="funny", content="chickens")# Output: Tell me a funny joke about chickens.To use the Python f-string template:from langchain.prompts import PromptTemplatefstring_template = """Tell me a {adjective} joke about {content}"""prompt = PromptTemplate.from_template(fstring_template)prompt.format(adjective="funny", content="chickens")# Output: Tell me a funny joke about chickens.Currently, only jinja2 and f-string are supported. For other formats, kindly raise an issue on the Github page.PreviousFormat template outputNextTypes of MessagePromptTemplate |
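The two formats differ only in placeholder syntax: {name} for f-string style versus {{ name }} for jinja2. A sketch rendering both by hand, with a small regex substitution standing in for a real jinja2 renderer (an assumption to keep the example dependency-free; real jinja2 also supports loops, filters, and conditionals):

```python
import re

fstring_template = "Tell me a {adjective} joke about {content}"
jinja2_template = "Tell me a {{ adjective }} joke about {{ content }}"

def render_fstring(template, **variables):
    # f-string style placeholders are exactly what str.format consumes.
    return template.format(**variables)

def render_jinja_like(template, **variables):
    # Minimal stand-in for jinja2: substitute {{ name }} placeholders only.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(variables[m.group(1)]), template)

a = render_fstring(fstring_template, adjective="funny", content="chickens")
b = render_jinja_like(jinja2_template, adjective="funny", content="chickens")
print(a)
```

Both renderings produce the same string; pick jinja2 when you need logic inside the template itself.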
33 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/msg_prompt_templates | ModulesModel I/OPromptsPrompt templatesTypes of MessagePromptTemplateTypes of MessagePromptTemplateLangChain provides different types of MessagePromptTemplate. The most commonly used are AIMessagePromptTemplate, SystemMessagePromptTemplate and HumanMessagePromptTemplate, which create an AI message, system message and human message respectively.However, in cases where the chat model supports taking chat messages with an arbitrary role, you can use ChatMessagePromptTemplate, which allows the user to specify the role name.from langchain.prompts import ChatMessagePromptTemplateprompt = "May the {subject} be with you"chat_message_prompt = ChatMessagePromptTemplate.from_template(role="Jedi", template=prompt)chat_message_prompt.format(subject="force") ChatMessage(content='May the force be with you', additional_kwargs={}, role='Jedi')LangChain also provides MessagesPlaceholder, which gives you full control over which messages are rendered during formatting. This can be useful when you are uncertain of what role you should be using for your message prompt templates or when you wish to insert a list of messages during formatting.from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholderfrom langchain.schema import AIMessage, HumanMessagehuman_prompt = "Summarize our conversation so far in {word_count} words."human_message_template = HumanMessagePromptTemplate.from_template(human_prompt)chat_prompt = ChatPromptTemplate.from_messages([MessagesPlaceholder(variable_name="conversation"), human_message_template])human_message = HumanMessage(content="What is the best way to learn programming?")ai_message = AIMessage(content="""\1. Choose a programming language: Decide on a programming language that you want to learn.2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.3.
Practice, practice, practice: The best way to learn programming is through hands-on experience""")chat_prompt.format_prompt(conversation=[human_message, ai_message], word_count="10").to_messages() [HumanMessage(content='What is the best way to learn programming?', additional_kwargs={}), AIMessage(content='1. Choose a programming language: Decide on a programming language that you want to learn. \n\n2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.\n\n3. Practice, practice, practice: The best way to learn programming is through hands-on experience', additional_kwargs={}), HumanMessage(content='Summarize our conversation so far in 10 words.', additional_kwargs={})]PreviousTemplate formatsNextPartial prompt templates
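Under the hood, MessagesPlaceholder is a splice point: at format time, the caller-supplied list of messages is inserted at the placeholder's position in the final message list. A sketch of that behavior with messages as plain (role, content) pairs (an assumption standing in for LangChain's message classes):

```python
def format_with_placeholder(conversation, word_count):
    """Splice the caller's conversation in where the placeholder sits,
    then append the formatted human message template."""
    human_template = "Summarize our conversation so far in {word_count} words."
    return list(conversation) + [("human", human_template.format(word_count=word_count))]

history = [
    ("human", "What is the best way to learn programming?"),
    ("ai", "Practice, practice, practice: learn through hands-on experience."),
]
messages = format_with_placeholder(history, word_count=10)
print(messages)
```

The real ChatPromptTemplate generalizes this: placeholders can sit anywhere in the message sequence, not just at the front.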
34 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/partial | ModulesModel I/OPromptsPrompt templatesPartial prompt templatesPartial prompt templatesLike other methods, it can make sense to "partial" a prompt template - e.g. pass in a subset of the required values, as to create a new prompt template which expects only the remaining subset of values.LangChain supports this in two ways:Partial formatting with string values.Partial formatting with functions that return string values.These two different ways support different use cases. In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain.Partial with stringsOne common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, foo and baz. If you get the foo value early on in the chain, but the baz value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the foo value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:from langchain.prompts import PromptTemplateprompt = PromptTemplate(template="{foo}{bar}", input_variables=["foo", "bar"])partial_prompt = prompt.partial(foo="foo");print(partial_prompt.format(bar="baz")) foobazYou can also just initialize the prompt with the partialed variables.prompt = PromptTemplate(template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"})print(prompt.format(bar="baz")) foobazPartial with functionsThe other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. 
You can't hard code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date.from datetime import datetimedef _get_datetime(): now = datetime.now() return now.strftime("%m/%d/%Y, %H:%M:%S")prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective", "date"]);partial_prompt = prompt.partial(date=_get_datetime)print(partial_prompt.format(adjective="funny")) Tell me a funny joke about the day 02/27/2023, 22:15:16You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective"], partial_variables={"date": _get_datetime});print(prompt.format(adjective="funny")) Tell me a funny joke about the day 02/27/2023, 22:15:16PreviousTypes of MessagePromptTemplateNextComposition |
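Both kinds of partialing come down to one rule: stored partial values may be plain strings or zero-argument callables, and callables are evaluated fresh at each format call. A sketch of that mechanism in plain Python:

```python
from datetime import datetime

def _get_datetime():
    return datetime.now().strftime("%m/%d/%Y, %H:%M:%S")

class PartialTemplate:
    """Sketch of partial formatting: stored values may be strings or
    zero-argument callables resolved at format time."""

    def __init__(self, template, **partials):
        self.template = template
        self.partials = partials

    def format(self, **kwargs):
        for name, value in self.partials.items():
            # Callables are re-evaluated on every format call, so the
            # date below is always current.
            kwargs[name] = value() if callable(value) else value
        return self.template.format(**kwargs)

prompt = PartialTemplate("Tell me a {adjective} joke about the day {date}",
                         date=_get_datetime)
text = prompt.format(adjective="funny")
print(text)
```

This is why a function partial beats baking the date in: the value is fetched when the prompt is rendered, not when it is defined.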
35 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_composition | ModulesModel I/OPromptsPrompt templatesCompositionCompositionThis notebook goes over how to compose multiple prompts together. This can be useful when you want to reuse parts of prompts. This can be done with a PipelinePrompt. A PipelinePrompt consists of two main parts:Final prompt: The final prompt that is returnedPipeline prompts: A list of tuples, consisting of a string name and a prompt template. Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name.from langchain.prompts.pipeline import PipelinePromptTemplatefrom langchain.prompts.prompt import PromptTemplatefull_template = """{introduction}{example}{start}"""full_prompt = PromptTemplate.from_template(full_template)introduction_template = """You are impersonating {person}."""introduction_prompt = PromptTemplate.from_template(introduction_template)example_template = """Here's an example of an interaction: Q: {example_q}A: {example_a}"""example_prompt = PromptTemplate.from_template(example_template)start_template = """Now, do this for real!Q: {input}A:"""start_prompt = PromptTemplate.from_template(start_template)input_prompts = [ ("introduction", introduction_prompt), ("example", example_prompt), ("start", start_prompt)]pipeline_prompt = PipelinePromptTemplate(final_prompt=full_prompt, pipeline_prompts=input_prompts)pipeline_prompt.input_variables ['example_a', 'person', 'example_q', 'input']print(pipeline_prompt.format( person="Elon Musk", example_q="What's your favorite car?", example_a="Tesla", input="What's your favorite social media site?")) You are impersonating Elon Musk. Here's an example of an interaction: Q: What's your favorite car? A: Tesla Now, do this for real! Q: What's your favorite social media site? A: PreviousPartial prompt templatesNextSerialization |
36 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_serialization | ModulesModel I/OPromptsPrompt templatesSerializationOn this pageSerializationIt is often preferable to store prompts not as Python code but as files. This can make it easy to share, store, and version prompts. This notebook covers how to do that in LangChain, walking through all the different types of prompts and the different serialization options.At a high level, the following design principles are applied to serialization:Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like examples, different serialization methods may be supported.We support specifying everything in one file, or storing different components (templates, examples, etc.) in different files and referencing them. For some cases, storing everything in one file makes the most sense, but for others it is preferable to split up some of the assets (long templates, large examples, reusable components). 
LangChain supports both.There is also a single entry point to load prompts from disk, making it easy to load any type of prompt.# All prompts are loaded through the `load_prompt` function.from langchain.prompts import load_promptPromptTemplateThis section covers examples for loading a PromptTemplate.Loading from YAMLThis shows an example of loading a PromptTemplate from YAML.cat simple_prompt.yaml _type: prompt input_variables: ["adjective", "content"] template: Tell me a {adjective} joke about {content}.prompt = load_prompt("simple_prompt.yaml")print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens.Loading from JSONThis shows an example of loading a PromptTemplate from JSON.cat simple_prompt.json { "_type": "prompt", "input_variables": ["adjective", "content"], "template": "Tell me a {adjective} joke about {content}." }prompt = load_prompt("simple_prompt.json")print(prompt.format(adjective="funny", content="chickens"))Tell me a funny joke about chickens.Loading template from a fileThis shows an example of storing the template in a separate file and then referencing it in the config. 
Notice that the key changes from template to template_path.cat simple_template.txt Tell me a {adjective} joke about {content}.cat simple_prompt_with_template_file.json { "_type": "prompt", "input_variables": ["adjective", "content"], "template_path": "simple_template.txt" }prompt = load_prompt("simple_prompt_with_template_file.json")print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens.FewShotPromptTemplateThis section covers examples for loading few-shot prompt templates.ExamplesThis shows an example of what examples stored as json might look like.cat examples.json [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"} ]And here is what the same examples stored as yaml might look like.cat examples.yaml - input: happy output: sad - input: tall output: shortLoading from YAMLThis shows an example of loading a few-shot example from YAML.cat few_shot_prompt.yaml _type: few_shot input_variables: ["adjective"] prefix: Write antonyms for the following words. example_prompt: _type: prompt input_variables: ["input", "output"] template: "Input: {input}\nOutput: {output}" examples: examples.json suffix: "Input: {adjective}\nOutput:"prompt = load_prompt("few_shot_prompt.yaml")print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output:The same would work if you loaded examples from the yaml file.cat few_shot_prompt_yaml_examples.yaml _type: few_shot input_variables: ["adjective"] prefix: Write antonyms for the following words. example_prompt: _type: prompt input_variables: ["input", "output"] template: "Input: {input}\nOutput: {output}" examples: examples.yaml suffix: "Input: {adjective}\nOutput:"prompt = load_prompt("few_shot_prompt_yaml_examples.yaml")print(prompt.format(adjective="funny")) Write antonyms for the following words. 
Input: happy Output: sad Input: tall Output: short Input: funny Output:Loading from JSONThis shows an example of loading a few-shot example from JSON.cat few_shot_prompt.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt": { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }, "examples": "examples.json", "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt.json")print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output:Examples in the configThis shows an example of referencing the examples directly in the config.cat few_shot_prompt_examples_in.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt": { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }, "examples": [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"} ], "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt_examples_in.json")print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output:Example prompt from a fileThis shows an example of loading the PromptTemplate that is used to format the examples from a separate file. 
Note that the key changes from example_prompt to example_prompt_path.cat example_prompt.json { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }cat few_shot_prompt_example_prompt.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt_path": "example_prompt.json", "examples": "examples.json", "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt_example_prompt.json")print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output:PromptTemplate with OutputParserThis shows an example of loading a prompt along with an OutputParser from a file.cat prompt_with_output_parser.json { "input_variables": [ "question", "student_answer" ], "output_parser": { "regex": "(.*?)\\nScore: (.*)", "output_keys": [ "answer", "score" ], "default_output_key": null, "_type": "regex_parser" }, "partial_variables": {}, "template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:", "template_format": "f-string", "validate_template": true, "_type": "prompt" }prompt = load_prompt("prompt_with_output_parser.json")prompt.output_parser.parse( "George Washington was born in 1732 and died in 1799.\nScore: 1/2") {'answer': 'George Washington was born in 1732 and died in 1799.', 'score': '1/2'}PreviousCompositionNextPrompt pipeliningPromptTemplateLoading from YAMLLoading from JSONLoading template from a fileFewShotPromptTemplateExamplesLoading from YAMLLoading from JSONExamples in the configExample prompt from a filePromptTemplate with OutputParser |
37 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining | ModulesModel I/OPromptsPrompt templatesPrompt pipeliningOn this pagePrompt pipeliningThe idea behind prompt pipelining is to provide a user-friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components.String prompt pipeliningWhen working with string prompts, each template is joined together. You can work with either prompts directly or strings (the first element in the list needs to be a prompt).from langchain.prompts import PromptTemplateprompt = ( PromptTemplate.from_template("Tell me a joke about {topic}") + ", make it funny" + "\n\nand in {language}")prompt PromptTemplate(input_variables=['language', 'topic'], output_parser=None, partial_variables={}, template='Tell me a joke about {topic}, make it funny\n\nand in {language}', template_format='f-string', validate_template=True)prompt.format(topic="sports", language="spanish") 'Tell me a joke about sports, make it funny\n\nand in spanish'You can also use it in an LLMChain, just like before.from langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainmodel = ChatOpenAI()chain = LLMChain(llm=model, prompt=prompt)chain.run(topic="sports", language="spanish") '¿Por qué el futbolista llevaba un paraguas al partido?\n\nPorque pronosticaban lluvia de goles.'Chat prompt pipeliningA chat prompt is made up of a list of messages. Purely for developer experience, we've added a convenient way to create these prompts. In this pipeline, each new element is a new message in the final prompt.from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplatefrom langchain.schema import HumanMessage, AIMessage, SystemMessageFirst, let's initialize the base ChatPromptTemplate with a system message. 
It doesn't have to start with a system message, but it's often good practice.prompt = SystemMessage(content="You are a nice pirate")You can then easily create a pipeline combining it with other messages or message templates.
Use a Message when there are no variables to be formatted; use a MessageTemplate when there are variables to be formatted. You can also use just a string (note: this will automatically get inferred as a HumanMessagePromptTemplate.)new_prompt = ( prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}")Under the hood, this creates an instance of the ChatPromptTemplate class, so you can use it just as you did before!new_prompt.format_messages(input="i said hi") [SystemMessage(content='You are a nice pirate', additional_kwargs={}), HumanMessage(content='hi', additional_kwargs={}, example=False), AIMessage(content='what?', additional_kwargs={}, example=False), HumanMessage(content='i said hi', additional_kwargs={}, example=False)]You can also use it in an LLMChain, just like before.from langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainmodel = ChatOpenAI()chain = LLMChain(llm=model, prompt=new_prompt)chain.run("i said hi") 'Oh, hello! How can I assist you today?'PreviousSerializationNextValidate templateString prompt pipeliningChat prompt pipelining |
38 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/validate | ModulesModel I/OPromptsPrompt templatesValidate templateValidate templateBy default, PromptTemplate will validate the template string by checking whether the input_variables match the variables defined in template. You can disable this behavior by setting validate_template to False.from langchain.prompts import PromptTemplatetemplate = "I am learning langchain because {reason}."prompt_template = PromptTemplate(template=template, input_variables=["reason", "foo"]) # ValueError due to extra variablesprompt_template = PromptTemplate(template=template, input_variables=["reason", "foo"], validate_template=False) # No errorPreviousPrompt pipeliningNextExample selectors |
39 | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/ | ModulesModel I/OPromptsExample selectorsExample selectorsIf you have a large number of examples, you may need to select which ones to include in the prompt. The Example Selector is the class responsible for doing so.The base interface is defined as below:class BaseExampleSelector(ABC): """Interface for selecting examples to include in prompts.""" @abstractmethod def select_examples(self, input_variables: Dict[str, str]) -> List[dict]: """Select which examples to use based on the inputs."""The only method it needs to define is a select_examples method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected.PreviousValidate templateNextCustom example selector |
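As a sketch of what an implementation might look like, a trivial selector could simply return the first n stored examples. The class below is illustrative only and deliberately skips subclassing BaseExampleSelector so it runs standalone:

```python
from typing import Dict, List


class FirstNExampleSelector:
    """Illustrative selector: returns the first n stored examples."""

    def __init__(self, examples: List[dict], n: int = 2):
        self.examples = examples
        self.n = n

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        # A real selector would typically use input_variables (e.g. for
        # length limits or similarity); this sketch just truncates the list.
        return self.examples[: self.n]


selector = FirstNExampleSelector(
    [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
        {"input": "fast", "output": "slow"},
    ]
)
print(selector.select_examples({"adjective": "funny"}))
# → [{'input': 'happy', 'output': 'sad'}, {'input': 'tall', 'output': 'short'}]
```

In practice you would use one of LangChain's built-in selectors (length-based, similarity-based, etc.), which implement the same select_examples contract.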
40 | https://python.langchain.com/docs/modules/model_io/models/ | ModulesModel I/OLanguage modelsOn this pageLanguage modelsLangChain provides interfaces and integrations for two types of models:LLMs: Models that take a text string as input and return a text stringChat models: Models that are backed by a language model but take a list of Chat Messages as input and return a Chat MessageLLMs vs chat modelsLLMs and chat models are subtly but importantly different. LLMs in LangChain refer to pure text completion models.
The APIs they wrap take a string prompt as input and output a string completion. OpenAI's GPT-3 is implemented as an LLM.
Chat models are often backed by LLMs but tuned specifically for having conversations.
And, crucially, their provider APIs use a different interface than pure text completion models. Instead of a single string,
they take a list of chat messages as input. Usually these messages are labeled with the speaker (usually one of "System",
"AI", and "Human"). And they return an AI chat message as output. GPT-4 and Anthropic's Claude are both implemented as chat models.To make it possible to swap LLMs and chat models, both implement the Base Language Model interface. This includes the common
methods "predict", which takes a string and returns a string, and "predict messages", which takes messages and returns a message.
If you are using a specific model it's recommended you use the methods specific to that model class (i.e., "predict" for LLMs and "predict messages" for chat models),
but if you're creating an application that should work with different types of models the shared interface can be helpful.PreviousSelect by similarityNextLLMsLLMs vs chat models |
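The shared-interface idea can be illustrated without LangChain at all. The toy classes below are purely hypothetical stand-ins (real LangChain models subclass the base language model class); they just show why either model type can be driven through either method once each adapts the other's input shape:

```python
from typing import List, Tuple

# A chat message as a (speaker, content) pair, e.g. ("Human", "hi").
Message = Tuple[str, str]


class EchoLLM:
    """Toy text-completion model: string in, string out."""

    def predict(self, text: str) -> str:
        return f"completion of: {text}"

    def predict_messages(self, messages: List[Message]) -> Message:
        # Adapt the message list to the string interface.
        prompt = "\n".join(f"{role}: {content}" for role, content in messages)
        return ("AI", self.predict(prompt))


class EchoChatModel:
    """Toy chat model: message list in, AI message out."""

    def predict_messages(self, messages: List[Message]) -> Message:
        return ("AI", f"reply to {len(messages)} message(s)")

    def predict(self, text: str) -> str:
        # Adapt a bare string to the chat interface.
        return self.predict_messages([("Human", text)])[1]


# Calling code doesn't care which model type it gets:
for model in (EchoLLM(), EchoChatModel()):
    assert isinstance(model.predict("hi"), str)
    assert model.predict_messages([("Human", "hi")])[0] == "AI"
```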
41 | https://python.langchain.com/docs/modules/model_io/output_parsers/ | ModulesModel I/OOutput parsersOn this pageOutput parsersLanguage models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:"Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted."Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.And then one optional one:"Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.Get startedBelow we go over the main type of output parser, the PydanticOutputParser.from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplatefrom langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIfrom langchain.output_parsers import PydanticOutputParserfrom pydantic import BaseModel, Field, validatorfrom typing import Listmodel_name = 'text-davinci-003'temperature = 0.0model = OpenAI(model_name=model_name, temperature=temperature)# Define your desired data structure.class Joke(BaseModel): setup: str = Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. 
@validator('setup') def question_ends_with_question_mark(cls, field): if field[-1] != '?': raise ValueError("Badly formed question!") return field# Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=Joke)prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()})# And a query intended to prompt a language model to populate the data structure.joke_query = "Tell me a joke."_input = prompt.format_prompt(query=joke_query)output = model(_input.to_string())parser.parse(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')PreviousStreamingNextList parserGet started |
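To make the two required methods concrete, here is a minimal standalone parser sketch (illustrative only - LangChain's real output parsers subclass BaseOutputParser, and it ships a comma-separated-list parser of its own):

```python
from typing import List


class CommaListParser:
    """Minimal sketch of the output parser contract:
    one method to describe the expected format, one to parse it."""

    def get_format_instructions(self) -> str:
        # Injected into the prompt so the model knows how to answer.
        return "Answer with a comma-separated list, e.g. `a, b, c`."

    def parse(self, text: str) -> List[str]:
        # Turn the model's raw string response into structured data.
        return [item.strip() for item in text.split(",")]


parser = CommaListParser()
print(parser.parse("red, green, blue"))
# → ['red', 'green', 'blue']
```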
42 | https://python.langchain.com/docs/modules/data_connection/ | ModulesRetrievalRetrievalMany LLM applications require user-specific data that is not part of the model's training set.
The primary way of accomplishing this is through Retrieval Augmented Generation (RAG).
In this process, external data is retrieved and then passed to the LLM when doing the generation step.LangChain provides all the building blocks for RAG applications - from simple to complex.
This section of the documentation covers everything related to the retrieval step - i.e. the fetching of the data.
Although this sounds simple, it can be subtly complex.
This encompasses several key modules.Document loadersLoad documents from many different sources.
LangChain provides over 100 different document loaders as well as integrations with other major providers in the space,
like AirByte and Unstructured.
We provide integrations to load all types of documents (HTML, PDF, code) from all types of locations (private S3 buckets, public websites).Document transformersA key part of retrieval is fetching only the relevant parts of documents.
This involves several transformation steps in order to best prepare the documents for retrieval.
One of the primary ones here is splitting (or chunking) a large document into smaller chunks.
LangChain provides several different algorithms for doing this, as well as logic optimized for specific document types (code, markdown, etc).Text embedding modelsAnother key part of retrieval has become creating embeddings for documents.
Embeddings capture the semantic meaning of the text, allowing you to quickly and
efficiently find other pieces of text that are similar.
LangChain provides integrations with over 25 different embedding providers and methods,
from open-source models to proprietary APIs,
allowing you to choose the one best suited for your needs.
LangChain provides a standard interface, allowing you to easily swap between models.Vector storesWith the rise of embeddings, there has emerged a need for databases to support efficient storage and searching of these embeddings.
LangChain provides integrations with over 50 different vectorstores, from open-source local ones to cloud-hosted proprietary ones,
allowing you to choose the one best suited for your needs.
LangChain exposes a standard interface, allowing you to easily swap between vector stores.RetrieversOnce the data is in the database, you still need to retrieve it.
LangChain supports many different retrieval algorithms and is one of the places where we add the most value.
We support basic methods that are easy to get started with - namely simple semantic search.
However, we have also added a collection of algorithms on top of this to increase performance.
These include:Parent Document Retriever: This allows you to create multiple embeddings per parent document, allowing you to look up smaller chunks but return larger context.Self Query Retriever: User questions often contain a reference to something that isn't just semantic but rather expresses some logic that can best be represented as a metadata filter. Self-query allows you to parse out the semantic part of a query from other metadata filters present in the query.Ensemble Retriever: Sometimes you may want to retrieve documents from multiple different sources, or using multiple different algorithms. The ensemble retriever allows you to easily do this.And more!PreviousXML parserNextDocument loaders |
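The retrieve-then-generate flow described above can be sketched end-to-end in plain Python. The word-overlap "retriever" and prompt format below are toy stand-ins, not LangChain APIs; they only show the shape of RAG: fetch relevant external data, then hand it to the LLM as context:

```python
from typing import List


def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]


def build_prompt(query: str, context: List[str]) -> str:
    """Pass the retrieved data to the LLM as grounding context."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"


docs = [
    "LangChain provides document loaders and retrievers.",
    "Embeddings capture the semantic meaning of text.",
    "Bananas are yellow.",
]
query = "What do embeddings capture"
# In a real RAG app this prompt would now be sent to a language model.
print(build_prompt(query, retrieve(query, docs)))
```

A production version would swap the word-overlap ranking for an embedding-based vector store lookup, which is exactly what the modules above provide.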
43 | https://python.langchain.com/docs/modules/data_connection/document_loaders/ | ModulesRetrievalDocument loadersOn this pageDocument loadersinfoHead to Integrations for documentation on built-in document loader integrations with 3rd-party tools.Use document loaders to load data from a source as Documents. A Document is a piece of text
and associated metadata. For example, there are document loaders for loading a simple .txt file, for loading the text
contents of any web page, or even for loading a transcript of a YouTube video.Document loaders provide a "load" method for loading data as documents from a configured source. They optionally
implement a "lazy load" as well for lazily loading data into memory.Get startedThe simplest loader reads in a file as text and places it all into one document.from langchain.document_loaders import TextLoaderloader = TextLoader("./index.md")loader.load()[ Document(page_content='---\nsidebar_position: 0\n---\n# Document loaders\n\nUse document loaders to load data from a source as `Document`\'s. A `Document` is a piece of text\nand associated metadata. For example, there are document loaders for loading a simple `.txt` file, for loading the text\ncontents of any web page, or even for loading a transcript of a YouTube video.\n\nEvery document loader exposes two methods:\n1. "Load": load documents from the configured source\n2. "Load and split": load documents from the configured source and split them using the passed in text splitter\n\nThey optionally implement:\n\n3. "Lazy load": load documents into memory lazily\n', metadata={'source': '../docs/docs/modules/data_connection/document_loaders/index.md'})]PreviousRetrievalNextCSVGet started |
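The load/lazy-load distinction can be sketched with a toy loader (plain dicts stand in for LangChain's Document objects here; the class itself is hypothetical):

```python
from typing import Iterator, List


class LineLoader:
    """Toy loader: each line of a configured string becomes one 'document'."""

    def __init__(self, text: str, source: str = "in-memory"):
        self.text = text
        self.source = source

    def lazy_load(self) -> Iterator[dict]:
        # Yield one document at a time instead of materializing all of
        # them - useful when the source is large.
        for i, line in enumerate(self.text.splitlines()):
            yield {
                "page_content": line,
                "metadata": {"source": self.source, "line": i},
            }

    def load(self) -> List[dict]:
        # "load" is simply the eager version of "lazy_load".
        return list(self.lazy_load())


docs = LineLoader("first line\nsecond line").load()
print(docs[1]["page_content"])
# → second line
```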
44 | https://python.langchain.com/docs/modules/data_connection/document_loaders/csv | ModulesRetrievalDocument loadersCSVCSVA comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.Load CSV data with a single row per document.from langchain.document_loaders.csv_loader import CSVLoaderloader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv')data = loader.load()print(data) [Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='Team: 
Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='Team: 
Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0)]Customizing the CSV parsing and loadingSee 
the csv module documentation for more information of what csv args are supported.loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv', csv_args={ 'delimiter': ',', 'quotechar': '"', 'fieldnames': ['MLB Team', 'Payroll in millions', 'Wins']})data = loader.load()print(data) [Document(page_content='MLB Team: Team\nPayroll in millions: "Payroll (millions)"\nWins: "Wins"', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='MLB Team: Nationals\nPayroll in millions: 81.34\nWins: 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='MLB Team: Reds\nPayroll in millions: 82.20\nWins: 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='MLB Team: Yankees\nPayroll in millions: 197.96\nWins: 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='MLB Team: Giants\nPayroll in millions: 117.62\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='MLB Team: Braves\nPayroll in millions: 83.31\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='MLB Team: Athletics\nPayroll in millions: 55.37\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='MLB Team: Rangers\nPayroll in millions: 120.51\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='MLB Team: Orioles\nPayroll in millions: 81.43\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='MLB Team: Rays\nPayroll in millions: 
64.17\nWins: 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='MLB Team: Angels\nPayroll in millions: 154.49\nWins: 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='MLB Team: Tigers\nPayroll in millions: 132.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='MLB Team: Cardinals\nPayroll in millions: 110.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='MLB Team: Dodgers\nPayroll in millions: 95.14\nWins: 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='MLB Team: White Sox\nPayroll in millions: 96.92\nWins: 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='MLB Team: Brewers\nPayroll in millions: 97.65\nWins: 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='MLB Team: Phillies\nPayroll in millions: 174.54\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='MLB Team: Diamondbacks\nPayroll in millions: 74.28\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='MLB Team: Pirates\nPayroll in millions: 63.43\nWins: 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='MLB Team: Padres\nPayroll in millions: 55.24\nWins: 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='MLB Team: Mariners\nPayroll 
in millions: 81.97\nWins: 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='MLB Team: Mets\nPayroll in millions: 93.35\nWins: 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='MLB Team: Blue Jays\nPayroll in millions: 75.48\nWins: 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='MLB Team: Royals\nPayroll in millions: 60.91\nWins: 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='MLB Team: Marlins\nPayroll in millions: 118.07\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='MLB Team: Red Sox\nPayroll in millions: 173.18\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='MLB Team: Indians\nPayroll in millions: 78.43\nWins: 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='MLB Team: Twins\nPayroll in millions: 94.08\nWins: 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='MLB Team: Rockies\nPayroll in millions: 78.06\nWins: 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='MLB Team: Cubs\nPayroll in millions: 88.19\nWins: 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0), Document(page_content='MLB Team: Astros\nPayroll in millions: 60.65\nWins: 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 30}, lookup_index=0)]

Specify a column to identify the document source

Use the source_column argument to specify a source for the document created from each row. Otherwise, file_path will be used as the source for all documents created from the CSV file. This is useful when using documents loaded from CSV files for chains that answer questions using sources.

loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv', source_column="Team")
data = loader.load()
print(data)

    [Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': 'Nationals', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': 'Reds', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': 'Yankees', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': 'Giants', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': 'Braves', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': 'Athletics', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': 'Rangers', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': 'Orioles', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': 'Rays', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': 'Angels', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 
132.30\n"Wins": 88', lookup_str='', metadata={'source': 'Tigers', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': 'Cardinals', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': 'Dodgers', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': 'White Sox', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': 'Brewers', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': 'Phillies', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': 'Diamondbacks', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': 'Pirates', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': 'Padres', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': 'Mariners', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': 'Mets', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': 'Blue Jays', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': 'Royals', 'row': 22}, lookup_index=0), 
Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': 'Marlins', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': 'Red Sox', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': 'Indians', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': 'Twins', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': 'Rockies', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': 'Cubs', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': 'Astros', 'row': 29}, lookup_index=0)]
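The row-to-document conversion shown in the outputs above can be sketched with the standard library alone, which makes the source_column behavior easy to see without running the loader. This is an illustrative sketch, not the loader's actual implementation: load_csv_rows is a hypothetical helper, and plain dicts stand in for Document objects. Each row becomes one document whose metadata source is taken from the named column when source_column is given, and from the file path otherwise.

```python
import csv
import io


def load_csv_rows(text, file_path, source_column=None):
    """Mimic the per-row conversion illustrated above (sketch only)."""
    docs = []
    for i, row in enumerate(csv.DictReader(io.StringIO(text))):
        # One "column: value" line per field, matching the page_content shown above
        content = "\n".join(f"{k}: {v}" for k, v in row.items())
        # source_column picks a per-row source; otherwise fall back to the file path
        source = row[source_column] if source_column else file_path
        docs.append({"page_content": content,
                     "metadata": {"source": source, "row": i}})
    return docs


sample = 'Team,Payroll (millions),Wins\nNationals,81.34,98\nReds,82.20,97\n'
docs = load_csv_rows(sample, "./example_data/mlb_teams_2012.csv",
                     source_column="Team")
print(docs[0]["metadata"])  # -> {'source': 'Nationals', 'row': 0}
```

With source_column omitted, every document's source would instead be './example_data/mlb_teams_2012.csv', which is why per-row sources are the better fit for question-answering chains that cite sources.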