# cassandra-entomology-rag
This template performs RAG using Apache Cassandra® or Astra DB through CQL (the `Cassandra` vector store class).
## Environment Setup
For the setup, you will require:
- an [Astra](https://astra.datastax.com) Vector Database. You must have a [Database Administrator token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure), specifically the string starting with `AstraCS:...`.
- your [Database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier).
- an **OpenAI API Key**. (More info [here](https://cassio.org/start_here/#llm-access))
You may also use a regular Cassandra cluster. In this case, provide the `USE_CASSANDRA_CLUSTER` entry as shown in `.env.template` and the subsequent environment variables to specify how to connect to it.
The connection parameters and secrets must be provided through environment variables. Refer to `.env.template` for the required variables.
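As a quick sanity check, you can verify from Python that the variables are set before serving; the names below are illustrative, and `.env.template` remains the authoritative list:
```python
import os

# Illustrative names -- consult .env.template for the exact variables required.
required = ["ASTRA_DB_APPLICATION_TOKEN", "ASTRA_DB_ID", "OPENAI_API_KEY"]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```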
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package cassandra-entomology-rag
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add cassandra-entomology-rag
```
And add the following code to your `server.py` file:
```python
from cassandra_entomology_rag import chain as cassandra_entomology_rag_chain
add_routes(app, cassandra_entomology_rag_chain, path="/cassandra-entomology-rag")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/cassandra-entomology-rag/playground](http://127.0.0.1:8000/cassandra-entomology-rag/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/cassandra-entomology-rag")
```
## Reference
Stand-alone repo with LangServe chain: [here](https://github.com/hemidactylus/langserve_cassandra_entomology_rag).
# source: https://www.thoughtco.com/a-guide-to-the-twenty-nine-insect-orders-1968419
Order Thysanura: The silverfish and firebrats are found in the order Thysanura. They are wingless insects often found in people's attics, and have a lifespan of several years. There are about 600 species worldwide.
Order Diplura: Diplurans are the most primitive insect species, with no eyes or wings. They have the unusual ability among insects to regenerate body parts. There are over 400 members of the order Diplura in the world.
Order Protura: Another very primitive group, the proturans have no eyes, no antennae, and no wings. They are uncommon, with perhaps less than 100 species known.
Order Collembola: The order Collembola includes the springtails, primitive insects without wings. There are approximately 2,000 species of Collembola worldwide.
Order Ephemeroptera: The mayflies of order Ephemeroptera are short-lived, and undergo incomplete metamorphosis. The larvae are aquatic, feeding on algae and other plant life. Entomologists have described about 2,100 species worldwide.
Order Odonata: The order Odonata includes dragonflies and damselflies, which undergo incomplete metamorphosis. They are predators of other insects, even in their immature stage. There are about 5,000 species in the order Odonata.
Order Plecoptera: The stoneflies of order Plecoptera are aquatic and undergo incomplete metamorphosis. The nymphs live under rocks in well flowing streams. Adults are usually seen on the ground along stream and river banks. There are roughly 3,000 species in this group.
Order Grylloblatodea: Sometimes referred to as "living fossils," the insects of the order Grylloblatodea have changed little from their ancient ancestors. This order is the smallest of all the insect orders, with perhaps only 25 known species living today. Grylloblatodea live at elevations above 1500 ft., and are commonly named ice bugs or rock crawlers.
Order Orthoptera: These are familiar insects (grasshoppers, locusts, katydids, and crickets) and one of the largest orders of herbivorous insects. Many species in the order Orthoptera can produce and detect sounds. Approximately 20,000 species exist in this group.
Order Phasmida: The order Phasmida are masters of camouflage, the stick and leaf insects. They undergo incomplete metamorphosis and feed on leaves. There are some 3,000 insects in this group, but only a small fraction of this number is leaf insects. Stick insects are the longest insects in the world.
Order Dermaptera: This order contains the earwigs, an easily recognized insect that often has pincers at the end of the abdomen. Many earwigs are scavengers, eating both plant and animal matter. The order Dermaptera includes less than 2,000 species.
Order Embiidina: The order Embiidina (also known as Embioptera) is another ancient order with few species, perhaps only 200 worldwide. The web spinners have silk glands in their front legs and weave nests under leaf litter and in tunnels where they live. Webspinners live in tropical or subtropical climates.
Order Dictyoptera: The order Dictyoptera includes roaches and mantids. Both groups have long, segmented antennae and leathery forewings held tightly against their backs. They undergo incomplete metamorphosis. Worldwide, there are approximately 6,000 species in this order, most living in tropical regions.
Order Isoptera: Termites feed on wood and are important decomposers in forest ecosystems. They also feed on wood products and are thought of as pests for the destruction they cause to man-made structures. There are between 2,000 and 3,000 species in this order.
Order Zoraptera: Little is known about the angel insects, which belong to the order Zoraptera. Though they are grouped with winged insects, many are actually wingless. Members of this group are blind, small, and often found in decaying wood. There are only about 30 described species worldwide.
Order Psocoptera: Bark lice forage on algae, lichen, and fungus in moist, dark places. Booklice frequent human dwellings, where they feed on book paste and grains. They undergo incomplete metamorphosis. Entomologists have named about 3,200 species in the order Psocoptera.
Order Mallophaga: Biting lice are ectoparasites that feed on birds and some mammals. There are an estimated 3,000 species in the order Mallophaga, all of which undergo incomplete metamorphosis.
Order Siphunculata: The order Siphunculata are the sucking lice, which feed on the fresh blood of mammals. Their mouthparts are adapted for sucking or siphoning blood. There are only about 500 species of sucking lice.
Order Hemiptera: Most people use the term "bugs" to mean insects; an entomologist uses the term to refer to the order Hemiptera. The Hemiptera are the true bugs, and include cicadas, aphids, and spittlebugs, and others. This is a large group of over 70,000 species worldwide.
Order Thysanoptera: The thrips of order Thysanoptera are small insects that feed on plant tissue. Many are considered agricultural pests for this reason. Some thrips prey on other small insects as well. This order contains about 5,000 species.
Order Neuroptera: Commonly called the order of lacewings, this group actually includes a variety of other insects, too: dobsonflies, owlflies, mantidflies, antlions, snakeflies, and alderflies. Insects in the order Neuroptera undergo complete metamorphosis. Worldwide, there are over 5,500 species in this group.
Order Mecoptera: This order includes the scorpionflies, which live in moist, wooded habitats. Scorpionflies are omnivorous in both their larval and adult forms. The larvae are caterpillar-like. There are fewer than 500 described species in the order Mecoptera.
Order Siphonaptera: Pet lovers fear insects in the order Siphonaptera - the fleas. Fleas are blood-sucking ectoparasites that feed on mammals, and rarely, birds. There are well over 2,000 species of fleas in the world.
Order Coleoptera: This group, the beetles and weevils, is the largest order in the insect world, with over 300,000 distinct species known. The order Coleoptera includes well-known families: june beetles, lady beetles, click beetles, and fireflies. All have hardened forewings that fold over the abdomen to protect the delicate hindwings used for flight.
Order Strepsiptera: Insects in this group are parasites of other insects, particularly bees, grasshoppers, and the true bugs. The immature Strepsiptera lies in wait on a flower and quickly burrows into any host insect that comes along. Strepsiptera undergo complete metamorphosis and pupate within the host insect's body.
Order Diptera: Diptera is one of the largest orders, with nearly 100,000 insects named to the order. These are the true flies, mosquitoes, and gnats. Insects in this group have modified hindwings which are used for balance during flight. The forewings function as the propellers for flying.
Order Lepidoptera: The butterflies and moths of the order Lepidoptera comprise the second largest group in the class Insecta. These well-known insects have scaly wings with interesting colors and patterns. You can often identify an insect in this order just by the wing shape and color.
Order Trichoptera: Caddisflies are nocturnal as adults and aquatic when immature. The caddisfly adults have silky hairs on their wings and body, which is key to identifying a Trichoptera member. The larvae spin traps for prey with silk. They also make cases from the silk and other materials that they carry and use for protection.
Order Hymenoptera: The order Hymenoptera includes many of the most common insects - ants, bees, and wasps. The larvae of some wasps cause trees to form galls, which then provides food for the immature wasps. Other wasps are parasitic, living in caterpillars, beetles, or even aphids. This is the third-largest insect order with just over 100,000 species.
# cassandra-synonym-caching
This template provides a simple chain template showcasing the usage of LLM Caching backed by Apache Cassandra® or Astra DB through CQL.
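For context, the caching this template wires up behaves roughly like the sketch below, assuming a locally reachable Cassandra cluster and an existing keyspace (the template itself reads Astra DB / Cassandra connection details from the variables in `.env.template`):
```python
import langchain
from cassandra.cluster import Cluster
from langchain.cache import CassandraCache

# Assumes a local Cassandra node and a pre-created keyspace; both are
# illustrative stand-ins for the settings defined in .env.template.
session = Cluster(["127.0.0.1"]).connect()
langchain.llm_cache = CassandraCache(session=session, keyspace="demo_keyspace")
# Subsequent identical LLM calls are now served from Cassandra instead of the API.
```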
## Environment Setup
To set up your environment, you will need the following:
- an [Astra](https://astra.datastax.com) Vector Database (free tier is fine!). **You need a [Database Administrator token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure)**, in particular the string starting with `AstraCS:...`;
- likewise, get your [Database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier) ready, you will have to enter it below;
- an **OpenAI API Key**. (More info [here](https://cassio.org/start_here/#llm-access), note that out-of-the-box this demo supports OpenAI unless you tinker with the code.)
_Note:_ you can alternatively use a regular Cassandra cluster: to do so, make sure you provide the `USE_CASSANDRA_CLUSTER` entry as shown in `.env.template` and the subsequent environment variables to specify how to connect to it.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package cassandra-synonym-caching
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add cassandra-synonym-caching
```
And add the following code to your `server.py` file:
```python
from cassandra_synonym_caching import chain as cassandra_synonym_caching_chain
add_routes(app, cassandra_synonym_caching_chain, path="/cassandra-synonym-caching")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/cassandra-synonym-caching/playground](http://127.0.0.1:8000/cassandra-synonym-caching/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/cassandra-synonym-caching")
```
## Reference
Stand-alone LangServe template repo: [here](https://github.com/hemidactylus/langserve_cassandra_synonym_caching).
# Chain-of-Note (Wikipedia)
Implements Chain-of-Note, as described in https://arxiv.org/pdf/2311.09210.pdf by Yu et al., using Wikipedia for retrieval.
Check out the prompt being used here: https://smith.langchain.com/hub/bagatur/chain-of-note-wiki.
## Environment Setup
This template uses the Anthropic claude-2 chat model. Set your Anthropic API key:
```bash
export ANTHROPIC_API_KEY="..."
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package chain-of-note-wiki
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add chain-of-note-wiki
```
And add the following code to your `server.py` file:
```python
from chain_of_note_wiki import chain as chain_of_note_wiki_chain
add_routes(app, chain_of_note_wiki_chain, path="/chain-of-note-wiki")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/chain-of-note-wiki/playground](http://127.0.0.1:8000/chain-of-note-wiki/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/chain-of-note-wiki")
```
# Chat Bot Feedback Template
This template shows how to evaluate your chat bot without explicit user feedback. It defines a simple chat bot in [chain.py](https://github.com/langchain-ai/langchain/blob/master/templates/chat-bot-feedback/chat_bot_feedback/chain.py) and a custom evaluator that scores bot response effectiveness based on the subsequent user response. You can apply this run evaluator to your own chat bot by calling `with_config` on the chat bot before serving. You can also directly deploy your chat app using this template.
[Chat bots](https://python.langchain.com/docs/use_cases/chatbots) are one of the most common interfaces for deploying LLMs. The quality of chat bots varies, making continuous development important. But users are loath to leave explicit feedback through mechanisms like thumbs-up or thumbs-down buttons. Furthermore, traditional analytics such as "session length" or "conversation length" often lack clarity. However, multi-turn conversations with a chat bot can provide a wealth of information, which we can transform into metrics for fine-tuning, evaluation, and product analytics.
Taking [Chat Langchain](https://chat.langchain.com/) as a case study, only about 0.04% of all queries receive explicit feedback. Yet, approximately 70% of the queries are follow-ups to previous questions. A significant portion of these follow-up queries contain useful information we can use to infer the quality of the previous AI response.
This template helps solve this "feedback scarcity" problem. Below is an example invocation of this chat bot:
[![Screenshot of a chat bot interaction where the AI responds in a pirate accent to a user asking where their keys are.](./static/chat_interaction.png "Chat Bot Interaction Example")](https://smith.langchain.com/public/3378daea-133c-4fe8-b4da-0a3044c5dbe8/r?runtab=1)
When the user responds to this ([link](https://smith.langchain.com/public/a7e2df54-4194-455d-9978-cecd8be0df1e/r)), the response evaluator is invoked, resulting in the following evaluation run:
[![Screenshot of an evaluator run showing the AI's response effectiveness score based on the user's follow-up message expressing frustration.](./static/evaluator.png "Chat Bot Evaluator Run")](https://smith.langchain.com/public/534184ee-db8f-4831-a386-3f578145114c/r)
As shown, the evaluator sees that the user is increasingly frustrated, indicating that the prior response was not effective.
## LangSmith Feedback
[LangSmith](https://smith.langchain.com/) is a platform for building production-grade LLM applications. Beyond its debugging and offline evaluation features, LangSmith helps you capture both user and model-assisted feedback to refine your LLM application. This template uses an LLM to generate feedback for your application, which you can use to continuously improve your service. For more examples on collecting feedback using LangSmith, consult the [documentation](https://docs.smith.langchain.com/cookbook/feedback-examples).
## Evaluator Implementation
The user feedback is inferred by a custom `RunEvaluator`. This evaluator is called via the `EvaluatorCallbackHandler`, which runs it in a separate thread to avoid interfering with the chat bot's runtime. You can use this custom evaluator on any compatible chat bot by calling the following function on your LangChain object:
```python
my_chain.with_config(
    callbacks=[
        EvaluatorCallbackHandler(
            evaluators=[
                ResponseEffectivenessEvaluator(evaluate_response_effectiveness)
            ]
        )
    ],
)
```
The evaluator instructs an LLM, specifically `gpt-3.5-turbo`, to evaluate the AI's most recent chat message based on the user's follow-up response. It generates a score and accompanying reasoning that is converted to feedback in LangSmith and applied to the run provided as the `last_run_id`.
The prompt used within the LLM [is available on the hub](https://smith.langchain.com/hub/wfh/response-effectiveness). Feel free to customize it with things like additional app context (such as the goal of the app or the types of questions it should respond to) or "symptoms" you'd like the LLM to focus on. This evaluator also utilizes OpenAI's function-calling API to ensure a more consistent, structured output for the grade.
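For example, a sketch of pulling the prompt locally to inspect or tweak it (this assumes the `langchainhub` client package is installed):
```python
from langchain import hub

# Fetch the response-effectiveness prompt used by the evaluator.
prompt = hub.pull("wfh/response-effectiveness")
print(prompt)
```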
## Environment Variables
Ensure that `OPENAI_API_KEY` is set to use OpenAI models. Also, configure LangSmith by setting your `LANGSMITH_API_KEY`.
```bash
export OPENAI_API_KEY=sk-...
export LANGSMITH_API_KEY=...
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_PROJECT=my-project # Set to the project you want to save to
```
## Usage
If deploying via `LangServe`, we recommend configuring the server to return callback events as well. This will ensure the backend traces are included in whatever traces you generate using the `RemoteRunnable`.
```python
from chat_bot_feedback.chain import chain
add_routes(app, chain, path="/chat-bot-feedback", include_callback_events=True)
```
With the server running, you can use the following code snippet to stream the chat bot responses for a two-turn conversation.
```python
from functools import partial
from typing import Callable, List, Optional

from langchain.callbacks.manager import tracing_v2_enabled
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langserve import RemoteRunnable

# Update with the URL provided by your LangServe server
chain = RemoteRunnable("http://127.0.0.1:8031/chat-bot-feedback")


def stream_content(
    text: str,
    chat_history: Optional[List[BaseMessage]] = None,
    last_run_id: Optional[str] = None,
    on_chunk: Optional[Callable] = None,
):
    results = []
    with tracing_v2_enabled() as cb:
        for chunk in chain.stream(
            {"text": text, "chat_history": chat_history, "last_run_id": last_run_id},
        ):
            if on_chunk:
                on_chunk(chunk)
            results.append(chunk)
        last_run_id = cb.latest_run.id if cb.latest_run else None
    return last_run_id, "".join(results)


chat_history = []
text = "Where are my keys?"
last_run_id, response_message = stream_content(text, on_chunk=partial(print, end=""))
print()
chat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])
text = "I CAN'T FIND THEM ANYWHERE"  # The previous response will likely receive a low score,
# as the user's frustration appears to be escalating.
last_run_id, response_message = stream_content(
    text,
    chat_history=chat_history,
    last_run_id=str(last_run_id),
    on_chunk=partial(print, end=""),
)
print()
chat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])
```
This uses the `tracing_v2_enabled` callback manager to get the run ID of the call, which we provide in subsequent calls in the same chat thread, so the evaluator can assign feedback to the appropriate trace.
## Conclusion
This template provides a simple chat bot definition you can directly deploy using LangServe. It defines a custom evaluator to log evaluation feedback for the bot without any explicit user ratings. This is an effective way to augment your analytics and to better select data points for fine-tuning and evaluation.
# cohere-librarian
This template turns Cohere into a librarian.
It demonstrates the use of a router to switch between chains that can handle different things: a vector database with Cohere embeddings; a chat bot that has a prompt with some information about the library; and finally a RAG chatbot that has access to the internet.
For a fuller demo of the book recommendation, consider replacing `books_with_blurbs.csv` with a larger sample from the following dataset: https://www.kaggle.com/datasets/jdobrow/57000-books-with-metadata-and-blurbs/.
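The router described above behaves roughly like the sketch below; the stand-in chains and keyword checks are illustrative only, as the real chains live in the `cohere_librarian` package:
```python
from langchain_core.runnables import RunnableBranch, RunnableLambda

# Stand-ins for the real chains (vector DB recommendations, library info, internet RAG).
books_chain = RunnableLambda(lambda x: "book recommendation")
library_chain = RunnableLambda(lambda x: "library information")
internet_chain = RunnableLambda(lambda x: "internet RAG answer")

router = RunnableBranch(
    (lambda x: "book" in x["query"].lower(), books_chain),
    (lambda x: "library" in x["query"].lower(), library_chain),
    internet_chain,  # default branch
)
print(router.invoke({"query": "Recommend me a book about beetles"}))
```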
## Environment Setup
Set the `COHERE_API_KEY` environment variable to access the Cohere models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package cohere-librarian
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add cohere-librarian
```
And add the following code to your `server.py` file:
```python
from cohere_librarian.chain import chain as cohere_librarian_chain
add_routes(app, cohere_librarian_chain, path="/cohere-librarian")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://localhost:8000/docs](http://localhost:8000/docs)
We can access the playground at [http://localhost:8000/cohere-librarian/playground](http://localhost:8000/cohere-librarian/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/cohere-librarian")
```
# csv-agent
This template uses a [csv agent](https://python.langchain.com/docs/integrations/toolkits/csv) with tools (Python REPL) and memory (vectorstore) for interaction (question-answering) with text data.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To set up the environment, the `ingest.py` script should be run to handle the ingestion into a vectorstore.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package csv-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add csv-agent
```
And add the following code to your `server.py` file:
```python
from csv_agent.agent import agent_executor as csv_agent_chain
add_routes(app, csv_agent_chain, path="/csv-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/csv-agent/playground](http://127.0.0.1:8000/csv-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/csv-agent")
```
# Contributing
Thanks for taking the time to contribute a new template!
We've tried to make this process as simple and painless as possible.
If you need any help at all, please reach out!
To contribute a new template, first fork this repository.
Then clone that fork and pull it down locally.
Set up an appropriate dev environment, and make sure you are in this `templates` directory.
Make sure you have `langchain-cli` installed.
```shell
pip install -U langchain-cli
```
You can then run the following command to create a new skeleton of a package.
By convention, package names should use `-` delimiters (not `_`).
```shell
langchain template new $PROJECT_NAME
```
You can then edit the contents of the package as you desire.
Note that by default we expect the main chain to be exposed as `chain` in the `__init__.py` file of the package.
You can change this (either the name or the location), but if you do so it is important to update the `tool.langserve`
part of `pyproject.toml`.
For example, if you update the main chain exposed to be called `agent_executor`, then that section should look like:
```text
[tool.langserve]
export_module = "..."
export_attr = "agent_executor"
```
Make sure to add any requirements of the package to `pyproject.toml` (and to remove any that are not used).
Please update the `README.md` file to give some background on your package and how to set it up.
If you want to change the license of your template for whatever reason, you may! Note that by default it is MIT licensed.
If you want to test out your package at any point in time, you can spin up a LangServe instance directly from the package.
See instructions [here](LAUNCHING_PACKAGE.md) on how to best do that.
# Templates
Highlighting a few different categories of templates
## ⭐ Popular
These are some of the more popular templates to get started with.
- [Retrieval Augmented Generation Chatbot](../rag-conversation): Build a chatbot over your data. Defaults to OpenAI and PineconeVectorStore.
- [Extraction with OpenAI Functions](../extraction-openai-functions): Do extraction of structured data from unstructured data. Uses OpenAI function calling.
- [Local Retrieval Augmented Generation](../rag-chroma-private): Build a chatbot over your data. Uses only local tooling: Ollama, GPT4all, Chroma.
- [OpenAI Functions Agent](../openai-functions-agent): Build a chatbot that can take actions. Uses OpenAI function calling and Tavily.
- [XML Agent](../xml-agent): Build a chatbot that can take actions. Uses Anthropic and You.com.
## 📥 Advanced Retrieval
These templates cover advanced retrieval techniques, which can be used for chat and QA over databases or documents.
- [Reranking](../rag-pinecone-rerank): This retrieval technique uses Cohere's reranking endpoint to rerank documents from an initial retrieval step.
- [Anthropic Iterative Search](../anthropic-iterative-search): This retrieval technique uses iterative prompting to determine what to retrieve and whether the retrieved documents are good enough.
- **Parent Document Retrieval** using [Neo4j](../neo4j-parent) or [MongoDB](../mongo-parent-document-retrieval): This retrieval technique stores embeddings for smaller chunks, but then returns larger chunks to pass to the model for generation.
- [Semi-Structured RAG](../rag-semi-structured): The template shows how to do retrieval over semi-structured data (e.g. data that involves both text and tables).
- [Temporal RAG](../rag-timescale-hybrid-search-time): The template shows how to do hybrid search over data with a time-based component using [Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral).
## 🔍 Advanced Retrieval - Query Transformation
A selection of advanced retrieval methods that involve transforming the original user query, which can improve retrieval quality.
- [Hypothetical Document Embeddings](../hyde): A retrieval technique that generates a hypothetical document for a given query, and then uses the embedding of that document to do semantic search. [Paper](https://arxiv.org/abs/2212.10496).
- [Rewrite-Retrieve-Read](../rewrite-retrieve-read): A retrieval technique that rewrites a given query before passing it to a search engine. [Paper](https://arxiv.org/abs/2305.14283).
- [Step-back QA Prompting](../stepback-qa-prompting): A retrieval technique that generates a "step-back" question and then retrieves documents relevant to both that question and the original question. [Paper](https://arxiv.org/abs/2310.06117).
- [RAG-Fusion](../rag-fusion): A retrieval technique that generates multiple queries and then reranks the retrieved documents using reciprocal rank fusion. [Article](https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1).
- [Multi-Query Retriever](../rag-pinecone-multi-query): This retrieval technique uses an LLM to generate multiple queries and then fetches documents for all queries.
## 🧠 Advanced Retrieval - Query Construction
A selection of advanced retrieval methods that involve constructing a query in a separate DSL from natural language, which enable natural language chat over various structured databases.
- [Elastic Query Generator](../elastic-query-generator): Generate elastic search queries from natural language.
- [Neo4j Cypher Generation](../neo4j-cypher): Generate cypher statements from natural language. Available with a ["full text" option](../neo4j-cypher-ft) as well.
- [Supabase Self Query](../self-query-supabase): Parse a natural language query into a semantic query as well as a metadata filter for Supabase.
## 🦙 OSS Models
These templates use OSS models, which enable privacy for sensitive data.
- [Local Retrieval Augmented Generation](../rag-chroma-private): Build a chatbot over your data. Uses only local tooling: Ollama, GPT4all, Chroma.
- [SQL Question Answering (Replicate)](../sql-llama2): Question answering over a SQL database, using Llama2 hosted on [Replicate](https://replicate.com/).
- [SQL Question Answering (LlamaCpp)](../sql-llamacpp): Question answering over a SQL database, using Llama2 through [LlamaCpp](https://github.com/ggerganov/llama.cpp).
- [SQL Question Answering (Ollama)](../sql-ollama): Question answering over a SQL database, using Llama2 through [Ollama](https://github.com/jmorganca/ollama).
## ⛏️ Extraction
These templates extract data in a structured format based upon a user-specified schema.
- [Extraction Using OpenAI Functions](../extraction-openai-functions): Extract information from text using OpenAI Function Calling.
- [Extraction Using Anthropic Functions](../extraction-anthropic-functions): Extract information from text using a LangChain wrapper around the Anthropic endpoints intended to simulate function calling.
- [Extract BioTech Plate Data](../plate-chain): Extract microplate data from messy Excel spreadsheets into a more normalized format.
## ⛏️ Summarization and tagging
These templates summarize or categorize documents and text.
- [Summarization using Anthropic](../summarize-anthropic): Uses Anthropic's Claude2 to summarize long documents.
## 🤖 Agents
These templates build chatbots that can take actions, helping to automate tasks.
- [OpenAI Functions Agent](../openai-functions-agent): Build a chatbot that can take actions. Uses OpenAI function calling and Tavily.
- [XML Agent](../xml-agent): Build a chatbot that can take actions. Uses Anthropic and You.com.
## 🚨 Safety and evaluation
These templates enable moderation or evaluation of LLM outputs.
- [Guardrails Output Parser](../guardrails-output-parser): Use guardrails-ai to validate LLM output.
- [Chatbot Feedback](../chat-bot-feedback): Use LangSmith to evaluate chatbot responses.
# Launching LangServe from a Package
You can also launch LangServe directly from a package, without having to pull it into a project.
This can be useful when you are developing a package and want to test it quickly.
The downside of this is that it gives you a little less control over how the LangServe APIs are configured,
which is why for proper projects we recommend creating a full project.
In order to do this, first change your working directory to the package itself.
For example, if you are currently in this `templates` module, you can go into the `pirate-speak` package with:
```shell
cd pirate-speak
```
Inside this package there is a `pyproject.toml` file.
This file contains a `tool.langserve` section that contains information on how this package should be used.
For example, in `pirate-speak` we see:
```text
[tool.langserve]
export_module = "pirate_speak.chain"
export_attr = "chain"
```
This information can be used to launch a LangServe instance automatically.
In order to do this, first make sure the CLI is installed:
```shell
pip install -U langchain-cli
```
You can then run:
```shell
langchain template serve
```
This will spin up endpoints, documentation, and playground for this chain.
For example, you can access the playground at [http://127.0.0.1:8000/playground/](http://127.0.0.1:8000/playground/)
![Screenshot of the LangServe Playground web interface with input and output fields.](playground.png "LangServe Playground Interface")
# elastic-query-generator
This template allows interacting with Elasticsearch analytics databases in natural language using LLMs.
It builds search queries via the Elasticsearch DSL API (filters and aggregations).
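For illustration, a question like "How many customers are named John?" might be translated into DSL roughly like this (the field names here are hypothetical):
```python
# Hypothetical DSL output; real field names depend on your index mapping.
generated_query = {
    "query": {"match": {"firstname": "John"}},
    "aggs": {"count_by_name": {"value_count": {"field": "firstname.keyword"}}},
}
```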
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
### Installing Elasticsearch
There are a number of ways to run Elasticsearch. However, one recommended way is through Elastic Cloud.
Create a free trial account on [Elastic Cloud](https://cloud.elastic.co/registration?utm_source=langchain&utm_content=langserve).
Once you have a deployment, update the connection string.
The password and connection details (the Elasticsearch URL) can be found on the deployment console.
Note that the Elasticsearch client must have permissions for index listing, mapping description, and search queries.
### Populating with data
If you want to populate the DB with some example info, you can run `python ingest.py`.
This will create a `customers` index. This package specifies the indexes to generate queries against as `["customers"]`; adjust this to match your own Elastic indices.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package elastic-query-generator
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add elastic-query-generator
```
And add the following code to your `server.py` file:
```python
from elastic_query_generator.chain import chain as elastic_query_generator_chain
add_routes(app, elastic_query_generator_chain, path="/elastic-query-generator")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/elastic-query-generator/playground](http://127.0.0.1:8000/elastic-query-generator/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/elastic-query-generator")
```
# extraction-anthropic-functions
This template enables [Anthropic function calling](https://python.langchain.com/docs/integrations/chat/anthropic_functions).
This can be used for various tasks, such as extraction or tagging.
The function output schema can be set in `chain.py`.
## Environment Setup
Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package extraction-anthropic-functions
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add extraction-anthropic-functions
```
And add the following code to your `server.py` file:
```python
from extraction_anthropic_functions import chain as extraction_anthropic_functions_chain
add_routes(app, extraction_anthropic_functions_chain, path="/extraction-anthropic-functions")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/extraction-anthropic-functions/playground](http://127.0.0.1:8000/extraction-anthropic-functions/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/extraction-anthropic-functions")
```
By default, the package will extract the title and author of papers from the information you specify in `chain.py`. This template will use `Claude2` by default.
# extraction-openai-functions
This template uses [OpenAI function calling](https://python.langchain.com/docs/modules/chains/how_to/openai_functions) for extraction of structured output from unstructured input text.
The extraction output schema can be set in `chain.py`.
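For reference, an extraction schema in OpenAI's function-calling format looks roughly like the following sketch; the template's actual definition lives in `chain.py`:
```python
# Illustrative function-calling schema; see chain.py for the template's actual one.
schema = {
    "name": "extract_papers",
    "description": "Extract the title and author of each paper mentioned in the text.",
    "parameters": {
        "type": "object",
        "properties": {
            "papers": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "author": {"type": "string"},
                    },
                },
            },
        },
        "required": ["papers"],
    },
}
```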
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package extraction-openai-functions
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add extraction-openai-functions
```
And add the following code to your `server.py` file:
```python
from extraction_openai_functions import chain as extraction_openai_functions_chain
add_routes(app, extraction_openai_functions_chain, path="/extraction-openai-functions")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/extraction-openai-functions/playground](http://127.0.0.1:8000/extraction-openai-functions/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/extraction-openai-functions")
```
By default, this package is set to extract the title and author of papers, as specified in the `chain.py` file.
An OpenAI LLM with function calling is used by default.
# gemini-functions-agent
This template creates an agent that uses Google Gemini function calling to communicate its decisions on what actions to take.
This example creates an agent that can optionally look up information on the internet using Tavily's search engine.
[See an example LangSmith trace here](https://smith.langchain.com/public/0ebf1bd6-b048-4019-b4de-25efe8d3d18c/r)
## Environment Setup
The following environment variables need to be set:
Set the `TAVILY_API_KEY` environment variable to access Tavily.
Set the `GOOGLE_API_KEY` environment variable to access the Google Gemini APIs.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package gemini-functions-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add gemini-functions-agent
```
And add the following code to your `server.py` file:
```python
from gemini_functions_agent import agent_executor as gemini_functions_agent_chain
add_routes(app, gemini_functions_agent_chain, path="/gemini-functions-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/gemini-functions-agent/playground](http://127.0.0.1:8000/gemini-functions-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/gemini-functions-agent")
```
# guardrails-output-parser
This template uses [guardrails-ai](https://github.com/guardrails-ai/guardrails) to validate LLM output.
The `GuardrailsOutputParser` is set in `chain.py`.
The default example protects against profanity.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package guardrails-output-parser
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add guardrails-output-parser
```
And add the following code to your `server.py` file:
```python
from guardrails_output_parser.chain import chain as guardrails_output_parser_chain
add_routes(app, guardrails_output_parser_chain, path="/guardrails-output-parser")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/guardrails-output-parser/playground](http://127.0.0.1:8000/guardrails-output-parser/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/guardrails-output-parser")
```
If Guardrails does not find any profanity, then the translated output is returned as is. If Guardrails does find profanity, then an empty string is returned.
# Hybrid Search in Weaviate
This template shows you how to use the hybrid search feature in Weaviate. Hybrid search combines multiple search algorithms to improve the accuracy and relevance of search results.
Weaviate uses both sparse and dense vectors to represent the meaning and context of search queries and documents. The results use a combination of `bm25` and vector search ranking to return the top results.
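Outside this template, the same kind of hybrid retrieval can be sketched directly with LangChain's Weaviate retriever; the connection variables below are illustrative, not the ones this package reads:
```python
import os

import weaviate
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever

# Illustrative connection settings; the template reads its own variables in chain.py.
client = weaviate.Client(
    url=os.environ["WEAVIATE_URL"],
    auth_client_secret=weaviate.AuthApiKey(os.environ["WEAVIATE_API_KEY"]),
)
retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name="LangChain",
    text_key="text",
    alpha=0.5,  # 0 = pure bm25 (keyword), 1 = pure vector search
)
docs = retriever.get_relevant_documents("hybrid search in weaviate")
```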
## Configurations
Connect to your hosted Weaviate vector store by setting a few environment variables, which are used in `chain.py`:
* `WEAVIATE_ENVIRONMENT`
* `WEAVIATE_API_KEY`
You will also need to set your `OPENAI_API_KEY` to use the OpenAI models.
## Get Started
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package hybrid-search-weaviate
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add hybrid-search-weaviate
```
And add the following code to your `server.py` file:
```python
from hybrid_search_weaviate import chain as hybrid_search_weaviate_chain
add_routes(app, hybrid_search_weaviate_chain, path="/hybrid-search-weaviate")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/hybrid-search-weaviate/playground](http://127.0.0.1:8000/hybrid-search-weaviate/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/hybrid-search-weaviate")
```
# hyde
This template uses HyDE with RAG.
HyDE (Hypothetical Document Embeddings) is a retrieval method that enhances retrieval by generating a hypothetical document for an incoming query.
The document is then embedded, and that embedding is used to look up real documents that are similar to the hypothetical document.
The underlying concept is that the hypothetical document may be closer to the desired documents in the embedding space than the query itself.
For a more detailed description, see the paper [here](https://arxiv.org/abs/2212.10496).
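A minimal sketch of the idea using LangChain's built-in helper, with illustrative model choices:
```python
from langchain.chains import HypotheticalDocumentEmbedder
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI

# The LLM writes a hypothetical answer; only that answer's embedding is used.
embeddings = HypotheticalDocumentEmbedder.from_llm(
    OpenAI(), OpenAIEmbeddings(), prompt_key="web_search"
)
vector = embeddings.embed_query("What are the longest insects in the world?")
```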
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package hyde
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add hyde
```
And add the following code to your `server.py` file:
```python
from hyde.chain import chain as hyde_chain
add_routes(app, hyde_chain, path="/hyde")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/hyde/playground](http://127.0.0.1:8000/hyde/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/hyde")
```
# llama2-functions
This template performs extraction of structured data from unstructured data using a [LLaMA2 model that supports a specified JSON output schema](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md).
The extraction schema can be set in `chain.py`.
## Environment Setup
This will use a [LLaMA2-13b model hosted by Replicate](https://replicate.com/andreasjansson/llama-2-13b-chat-gguf/versions).
Ensure that `REPLICATE_API_TOKEN` is set in your environment.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package llama2-functions
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add llama2-functions
```
And add the following code to your `server.py` file:
```python
from llama2_functions import chain as llama2_functions_chain
add_routes(app, llama2_functions_chain, path="/llama2-functions")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/llama2-functions/playground](http://127.0.0.1:8000/llama2-functions/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/llama2-functions")
```
# mongo-parent-document-retrieval
This template performs RAG using MongoDB and OpenAI.
It does a more advanced form of RAG called Parent-Document Retrieval.
In this form of retrieval, a large document is first split into medium-sized chunks.
From there, those medium-sized chunks are split into small chunks.
Embeddings are created for the small chunks.
When a query comes in, an embedding is created for that query and compared to the small chunks.
But rather than passing the small chunks directly to the LLM for generation, the medium-sized chunks
from which the smaller chunks came are passed.
This enables finer-grained search while still passing larger context to the LLM (which can be useful during generation).
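Below is a minimal, self-contained sketch of the pattern using LangChain's generic `ParentDocumentRetriever` with an in-memory store. This is not this template's exact implementation (the template stores both chunk levels in MongoDB and distinguishes them with a `doc_level` field), but it shows the mechanics:
```python
# Sketch of parent-document retrieval; illustrative, not the template's code.
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# Medium-sized "parent" chunks and small "child" chunks.
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)

retriever = ParentDocumentRetriever(
    vectorstore=Chroma(embedding_function=OpenAIEmbeddings()),  # indexes child chunks
    docstore=InMemoryStore(),                                   # holds parent chunks
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)
# retriever.add_documents(docs) embeds the small chunks;
# retriever.get_relevant_documents(query) searches them but returns the parents.
```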
## Environment Setup
You should export two environment variables, one being your MongoDB URI, the other being your OpenAI API key.
If you do not have a MongoDB URI, see the `MongoDB Setup` section at the bottom for instructions on how to create one.
```shell
export MONGO_URI=...
export OPENAI_API_KEY=...
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package mongo-parent-document-retrieval
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add mongo-parent-document-retrieval
```
And add the following code to your `server.py` file:
```python
from mongo_parent_document_retrieval import chain as mongo_parent_document_retrieval_chain
add_routes(app, mongo_parent_document_retrieval_chain, path="/mongo-parent-document-retrieval")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you DO NOT already have a Mongo Search Index you want to connect to, see the `MongoDB Setup` section below before proceeding.
Note that because Parent Document Retrieval uses a different indexing strategy, you will likely want to run this new setup.
If you DO have a MongoDB Search index you want to connect to, edit the connection details in `mongo_parent_document_retrieval/chain.py`.
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/mongo-parent-document-retrieval/playground](http://127.0.0.1:8000/mongo-parent-document-retrieval/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/mongo-parent-document-retrieval")
```
For additional context, please refer to [this notebook](https://colab.research.google.com/drive/1cr2HBAHyBmwKUerJq2if0JaNhy-hIq7I#scrollTo=TZp7_CBfxTOB).
## MongoDB Setup
Use this step if you need to set up your MongoDB account and ingest data.
We will first follow the standard MongoDB Atlas setup instructions [here](https://www.mongodb.com/docs/atlas/getting-started/).
1. Create an account (if not already done)
2. Create a new project (if not already done)
3. Locate your MongoDB URI.
This can be done by going to the deployment overview page and connecting to your database
![Screenshot highlighting the 'Connect' button in MongoDB Atlas.](_images/connect.png "MongoDB Atlas Connect Button")
We then look at the drivers available
![Screenshot showing the MongoDB Atlas drivers section for connecting to the database.](_images/driver.png "MongoDB Atlas Drivers Section")
Among these, we will see our URI listed
![Screenshot displaying the MongoDB Atlas URI in the connection instructions.](_images/uri.png "MongoDB Atlas URI Display")
Let's then set that as an environment variable locally:
```shell
export MONGO_URI=...
```
4. Let's also set an environment variable for OpenAI (which we will use as an LLM)
```shell
export OPENAI_API_KEY=...
```
5. Let's now ingest some data! We can do that by moving into this directory and running the code in `ingest.py`, e.g.:
```shell
python ingest.py
```
Note that you can (and should!) change this to ingest data of your choice.
6. We now need to set up a vector index on our data.
We can first connect to the cluster where our database lives
![Screenshot of the MongoDB Atlas cluster overview.](_images/cluster.png)
We can then navigate to where all our collections are listed
![Screenshot of the collections list in MongoDB Atlas.](_images/collections.png)
We can then find the collection we want and look at the search indexes for that collection
![Screenshot of the search indexes tab for a collection.](_images/search-indexes.png)
That should likely be empty, and we want to create a new one:
![Screenshot of the button to create a new search index.](_images/create.png)
We will use the JSON editor to create it
![Screenshot of the JSON editor option for creating a search index.](_images/json_editor.png)
And we will paste the following JSON in:
```json
{
"mappings": {
"dynamic": true,
"fields": {
"doc_level": [
{
"type": "token"
}
],
"embedding": {
"dimensions": 1536,
"similarity": "cosine",
"type": "knnVector"
}
}
}
}
```
![Screenshot of the completed JSON search index definition.](_images/json.png)
From there, hit "Next" and then "Create Search Index". It will take a little bit but you should then have an index over your data!
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\mongo-parent-document-retrieval\README.md |
.txt | Dune is a 1965 epic science fiction novel by American author Frank Herbert, originally published as two separate serials in Analog magazine. It tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and it won the inaugural Nebula Award for Best Novel. It is the first installment of the Dune Chronicles. It is one of the world's best-selling science fiction novels.Dune is set in the distant future in a feudal interstellar society in which various noble houses control planetary fiefs. It tells the story of young Paul Atreides, whose family accepts the stewardship of the planet Arrakis. While the planet is an inhospitable and sparsely populated desert wasteland, it is the only source of melange, or "spice", a drug that extends life and enhances mental abilities. Melange is also necessary for space navigation, which requires a kind of multidimensional awareness and foresight that only the drug provides. As melange can only be produced on Arrakis, control of the planet is a coveted and dangerous undertaking. The story explores the multilayered interactions of politics, religion, ecology, technology, and human emotion, as the factions of the empire confront each other in a struggle for the control of Arrakis and its spice.
Herbert wrote five sequels: Dune Messiah, Children of Dune, God Emperor of Dune, Heretics of Dune, and Chapterhouse: Dune. Following Herbert's death in 1986, his son Brian Herbert and author Kevin J. Anderson continued the series in over a dozen additional novels since 1999.
Adaptations of the novel to cinema have been notoriously difficult and complicated. In the 1970s, cult filmmaker Alejandro Jodorowsky attempted to make a film based on the novel. After three years of development, the project was canceled due to a constantly growing budget. In 1984, a film adaptation directed by David Lynch was released to mostly negative responses from critics and failure at the box office, although it later developed a cult following. The book was also adapted into the 2000 Sci-Fi Channel miniseries Frank Herbert's Dune and its 2003 sequel Frank Herbert's Children of Dune (the latter of which combines the events of Dune Messiah and Children of Dune). A second film adaptation directed by Denis Villeneuve was released on October 21, 2021, to positive reviews. It grossed $401 million worldwide and went on to be nominated for ten Academy Awards, winning six. Villeneuve's film covers roughly the first half of the original novel; a sequel, which will cover the remaining story, will be released in March 2024.
The series has also been used as the basis for several board, role-playing, and video games.
Since 2009, the names of planets from the Dune novels have been adopted for the real-life nomenclature of plains and other features on Saturn's moon Titan.
== Origins ==
After his novel The Dragon in the Sea was published in 1957, Herbert traveled to Florence, Oregon, at the north end of the Oregon Dunes. Here, the United States Department of Agriculture was attempting to use poverty grasses to stabilize the sand dunes. Herbert claimed in a letter to his literary agent, Lurton Blassingame, that the moving dunes could "swallow whole cities, lakes, rivers, highways." Herbert's article on the dunes, "They Stopped the Moving Sands", was never completed (and only published decades later in The Road to Dune), but its research sparked Herbert's interest in ecology and deserts. Herbert further drew inspiration from Native American mentors like "Indian Henry" (as Herbert referred to the man to his son; likely a Henry Martin of the Hoh tribe) and Howard Hansen. Both Martin and Hansen grew up on the Quileute reservation near Herbert's hometown. According to historian Daniel Immerwahr, Hansen regularly shared his writing with Herbert. "White men are eating the earth," Hansen told Herbert in 1958, after sharing a piece on the effect of logging on the Quileute reservation. "They're gonna turn this whole planet into a wasteland, just like North Africa." The world could become a "big dune," Herbert responded in agreement. Herbert was also interested in the idea of the superhero mystique and messiahs. He believed that feudalism was a natural condition humans fell into, where some led and others gave up the responsibility of making decisions and just followed orders. He found that desert environments have historically given birth to several major religions with messianic impulses. He decided to join his interests together so he could play religious and ecological ideas against each other. In addition, he was influenced by the story of T. E. Lawrence and the "messianic overtones" in Lawrence's involvement in the Arab Revolt during World War I. In an early version of Dune, the hero was actually very similar to Lawrence of Arabia, but Herbert decided the plot was too straightforward and added more layers to his story. Herbert drew heavy inspiration also from Lesley Blanch's The Sabres of Paradise (1960), a narrative history recounting a mid-19th century conflict in the Caucasus between rugged Islamized Caucasian tribes and the expansive Russian Empire. Language used on both sides of that conflict became terms in Herbert's world—chakobsa, a Caucasian hunting language, becomes a battle language of humans spread across the galaxy; kanly, a word for blood feud in the 19th century Caucasus, represents a feud between Dune's noble Houses; sietch and tabir are both words for camp borrowed from Ukrainian Cossacks (of the Pontic–Caspian steppe). Herbert also borrowed some lines which Blanch stated were Caucasian proverbs. "To kill with the point lacked artistry", used by Blanch to describe the Caucasus peoples' love of swordsmanship, becomes in Dune "Killing with the tip lacks artistry", a piece of advice given to a young Paul during his training. "Polish comes from the city, wisdom from the hills", a Caucasian aphorism, turns into a desert expression: "Polish comes from the cities, wisdom from the desert".
Another significant source of inspiration for Dune was Herbert's experiences with psilocybin and his hobby of cultivating mushrooms, according to mycologist Paul Stamets's account of meeting Herbert in the 1980s: Frank went on to tell me that much of the premise of Dune—the magic spice (spores) that allowed the bending of space (tripping), the giant sand worms (maggots digesting mushrooms), the eyes of the Fremen (the cerulean blue of Psilocybe mushrooms), the mysticism of the female spiritual warriors, the Bene Gesserits (influenced by the tales of Maria Sabina and the sacred mushroom cults of Mexico)—came from his perception of the fungal life cycle, and his imagination was stimulated through his experiences with the use of magic mushrooms. Herbert spent the next five years researching, writing, and revising. He published a three-part serial Dune World in the monthly Analog, from December 1963 to February 1964. The serial was accompanied by several illustrations that were not published again. After an interval of a year, he published the much slower-paced five-part The Prophet of Dune in the January–May 1965 issues. The first serial became "Book 1: Dune" in the final published Dune novel, and the second serial was divided into "Book Two: Muad'dib" and "Book Three: The Prophet". The serialized version was expanded, reworked, and submitted to more than twenty publishers, each of whom rejected it. The novel, Dune, was finally accepted and published in August 1965 by Chilton Books, a printing house better known for publishing auto repair manuals. Sterling Lanier, an editor at Chilton, had seen Herbert's manuscript and had urged his company to take a risk in publishing the book. However, the first printing, priced at $5.95 (equivalent to $55.25 in 2022), did not sell well and was poorly received by critics as being atypical of science fiction at the time. Chilton considered the publication of Dune a write-off and Lanier was fired. Over the course of time, the book gained critical acclaim, and its popularity spread by word-of-mouth to allow Herbert to start working full time on developing the sequels to Dune, elements of which were already written alongside Dune. At first Herbert considered using Mars as a setting for his novel, but eventually decided to use a fictional planet instead. His son Brian said that "Readers would have too many preconceived ideas about that planet, due to the number of stories that had been written about it." Herbert dedicated his work "to the people whose labors go beyond ideas into the realm of 'real materials'—to the dry-land ecologists, wherever they may be, in whatever time they work, this effort at prediction is dedicated in humility and admiration."
== Plot ==
Duke Leto Atreides of House Atreides, ruler of the ocean planet Caladan, is assigned by the Padishah Emperor Shaddam IV to serve as fief ruler of the planet Arrakis. Although Arrakis is a harsh and inhospitable desert planet, it is of enormous importance because it is the only planetary source of melange, or the "spice", a unique and incredibly valuable substance that extends human youth, vitality and lifespan. It is also through the consumption of spice that Spacing Guild Navigators are able to effect safe interstellar travel. Shaddam, jealous of Duke Leto Atreides's rising popularity in the Landsraad, sees House Atreides as a potential future rival and threat, so conspires with House Harkonnen, the former stewards of Arrakis and the longstanding enemies of House Atreides, to destroy Leto and his family after their arrival. Leto is aware his assignment is a trap of some kind, but is compelled to obey the Emperor's orders anyway.
Leto's concubine Lady Jessica is an acolyte of the Bene Gesserit, an exclusively female group that pursues mysterious political aims and wields seemingly superhuman physical and mental abilities, such as the ability to control their bodies down to the cellular level, and also decide the sex of their children. Though Jessica was instructed by the Bene Gesserit to bear a daughter as part of their breeding program, out of love for Leto she bore a son, Paul. From a young age, Paul has been trained in warfare by Leto's aides, the elite soldiers Duncan Idaho and Gurney Halleck. Thufir Hawat, the Duke's Mentat (human computers, able to store vast amounts of data and perform advanced calculations on demand), has instructed Paul in the ways of political intrigue. Jessica has also trained her son in Bene Gesserit disciplines.
Paul's prophetic dreams interest Jessica's superior, the Reverend Mother Gaius Helen Mohiam, who subjects Paul to the deadly gom jabbar test. Holding a poisonous needle to his neck ready to strike should he be unable to resist the impulse to withdraw his hand from the nerve induction box, she tests Paul's self-control to overcome the extreme psychological pain he is being subjected to through the box.
Leto, Jessica, and Paul travel with their household to occupy Arrakeen, the capital on Arrakis formerly held by House Harkonnen. Leto learns of the dangers involved in harvesting the spice, which is protected by giant sandworms, and seeks to negotiate with the planet's native Fremen people, seeing them as a valuable ally rather than foes. Soon after the Atreides's arrival, Harkonnen forces attack, joined by the Emperor's ferocious Sardaukar troops in disguise. Leto is betrayed by his personal physician, the Suk doctor Wellington Yueh, who delivers a drugged Leto to the Baron Vladimir Harkonnen and his twisted Mentat, Piter De Vries. Yueh, however, arranges for Jessica and Paul to escape into the desert, where they are presumed dead by the Harkonnens. Yueh replaces one of Leto's teeth with a poison gas capsule, hoping Leto can kill the Baron during their encounter. The Baron narrowly avoids the gas due to his shield, which kills Leto, De Vries, and the others in the room. The Baron forces Hawat to take over De Vries's position by dosing him with a long-lasting, fatal poison and threatening to withhold the regular antidote doses unless he obeys. While he follows the Baron's orders, Hawat works secretly to undermine the Harkonnens.
Having fled into the desert, Paul is exposed to high concentrations of spice and has visions through which he realizes he has significant powers (as a result of the Bene Gesserit breeding scheme). He foresees potential futures in which he lives among the planet's native Fremen before leading them on a Holy Jihad across the known universe.
It is revealed Jessica is the daughter of Baron Harkonnen, a secret kept from her by the Bene Gesserit. After being captured by Fremen, Paul and Jessica are accepted into the Fremen community of Sietch Tabr, and teach the Fremen the Bene Gesserit fighting technique known as the "weirding way". Paul proves his manhood by killing a Fremen named Jamis in a ritualistic crysknife fight and chooses the Fremen name Muad'Dib, while Jessica opts to undergo a ritual to become a Reverend Mother by drinking the poisonous Water of Life. Pregnant with Leto's daughter, she inadvertently causes the unborn child, Alia, to become infused with the same powers in the womb. Paul takes a Fremen lover, Chani, and has a son with her, Leto II.
Two years pass and Paul's powerful prescience manifests, which confirms for the Fremen that he is their prophesied messiah, a legend planted by the Bene Gesserit's Missionaria Protectiva. Paul embraces his father's belief that the Fremen could be a powerful fighting force to take back Arrakis, but also sees that if he does not control them, their jihad could consume the entire universe. Word of the new Fremen leader reaches both Baron Harkonnen and the Emperor as spice production falls due to their increasingly destructive raids. The Baron encourages his brutish nephew Glossu Rabban to rule with an iron fist, hoping the contrast with his shrewder nephew Feyd-Rautha will make the latter popular among the people of Arrakis when he eventually replaces Rabban. The Emperor, suspecting the Baron of trying to create troops more powerful than the Sardaukar to seize power, sends spies to monitor activity on Arrakis. Hawat uses the opportunity to sow seeds of doubt in the Baron about the Emperor's true plans, putting further strain on their alliance.
Gurney, having survived the Harkonnen coup, becomes a smuggler, reuniting with Paul and Jessica after a Fremen raid on his harvester. Believing Jessica to be the traitor, Gurney threatens to kill her, but is stopped by Paul. Paul did not foresee Gurney's attack, and concludes he must increase his prescience by drinking the Water of Life, which is traditionally fatal to males. Paul falls into unconsciousness for three weeks after drinking the poison, but when he wakes, he has clairvoyance across time and space: he is the Kwisatz Haderach, the ultimate goal of the Bene Gesserit breeding program.
Paul senses the Emperor and Baron are amassing fleets around Arrakis to quell the Fremen rebellion, and prepares the Fremen for a major offensive against the Harkonnen troops. The Emperor arrives with the Baron on Arrakis. The Emperor's troops seize a Fremen outpost, killing many including young Leto II, while Alia is captured and taken to the Emperor. Under cover of an electric storm, which shorts out the Emperor's troops' defensive shields, Paul and the Fremen, riding giant sandworms, assault the capital while Alia assassinates the Baron and escapes. The Fremen quickly defeat both the Harkonnen and Sardaukar troops.
Paul faces the Emperor, threatening to destroy spice production forever unless Shaddam abdicates the throne. Feyd-Rautha attempts to stop Paul by challenging him to a ritualistic knife fight, during which he attempts to cheat and kill Paul with a poison spur in his belt. Paul gains the upper hand and kills him. The Emperor reluctantly cedes the throne to Paul and promises his daughter Princess Irulan's hand in marriage. As Paul takes control of the Empire, he realizes that while he has achieved his goal, he is no longer able to stop the Fremen jihad, as their belief in him is too powerful to restrain.
== Characters ==
House Atreides
Paul Atreides, the Duke's son, and main character of the novel
Duke Leto Atreides, head of House Atreides
Lady Jessica, Bene Gesserit and concubine of the Duke, mother of Paul and Alia
Alia Atreides, Paul's younger sister
Thufir Hawat, Mentat and Master of Assassins to House Atreides
Gurney Halleck, staunchly loyal troubadour warrior of the Atreides
Duncan Idaho, Swordmaster for House Atreides, graduate of the Ginaz School
Wellington Yueh, Suk doctor for the Atreides who is secretly working for House Harkonnen
House Harkonnen
Baron Vladimir Harkonnen, head of House Harkonnen
Piter De Vries, twisted Mentat
Feyd-Rautha, nephew and heir-presumptive of the Baron
Glossu "Beast" Rabban, also called Rabban Harkonnen, older nephew of the Baron
Iakin Nefud, Captain of the Guard
House Corrino
Shaddam IV, Padishah Emperor of the Known Universe (the Imperium)
Princess Irulan, Shaddam's eldest daughter and heir, also a historian
Count Fenring, the Emperor's closest friend, advisor, and "errand boy"
Bene Gesserit
Reverend Mother Gaius Helen Mohiam, Proctor Superior of the Bene Gesserit school and the Emperor's Truthsayer
Lady Margot Fenring, Bene Gesserit wife of Count Fenring
Fremen
The Fremen, native inhabitants of Arrakis
Stilgar, Fremen leader of Sietch Tabr
Chani, Paul's Fremen concubine and a Sayyadina (female acolyte) of Sietch Tabr
Dr. Liet-Kynes, the Imperial Planetologist on Arrakis and father of Chani, as well as a revered figure among the Fremen
The Shadout Mapes, head housekeeper of imperial residence on Arrakis
Jamis, Fremen killed by Paul in ritual duel
Harah, wife of Jamis and later servant to Paul who helps raise Alia among the Fremen
Reverend Mother Ramallo, religious leader of Sietch Tabr
Smugglers
Esmar Tuek, a powerful smuggler and the father of Staban Tuek
Staban Tuek, the son of Esmar Tuek and a powerful smuggler who befriends and takes in Gurney Halleck and his surviving men after the attack on the Atreides
== Themes and influences ==
The Dune series is a landmark of science fiction. Herbert deliberately suppressed technology in his Dune universe so he could address the politics of humanity, rather than the future of humanity's technology. For example, a key event in the pre-history of the novel's present is the "Butlerian Jihad", in which all robots and computers were destroyed, eliminating these elements, common in science fiction, from the novel so as to allow a focus on humanity. Dune considers the way humans and their institutions might change over time. Director John Harrison, who adapted Dune for Syfy's 2000 miniseries, called the novel a universal and timeless reflection of "the human condition and its moral dilemmas", and said:
A lot of people refer to Dune as science fiction. I never do. I consider it an epic adventure in the classic storytelling tradition, a story of myth and legend not unlike the Morte d'Arthur or any messiah story. It just happens to be set in the future ... The story is actually more relevant today than when Herbert wrote it. In the 1960s, there were just these two colossal superpowers duking it out. Today we're living in a more feudal, corporatized world more akin to Herbert's universe of separate families, power centers and business interests, all interrelated and kept together by the one commodity necessary to all.
But Dune has also been called a mix of soft and hard science fiction since "the attention to ecology is hard, the anthropology and the psychic abilities are soft." Hard elements include the ecology of Arrakis, suspensor technology, weapon systems, and ornithopters, while soft elements include issues relating to religion, physical and mental training, cultures, politics, and psychology.Herbert said Paul's messiah figure was inspired by the Arthurian legend, and that the scarcity of water on Arrakis was a metaphor for oil, as well as air and water itself, and for the shortages of resources caused by overpopulation. Novelist Brian Herbert, his son and biographer, wrote:
Dune is a modern-day conglomeration of familiar myths, a tale in which great sandworms guard a precious treasure of melange, the geriatric spice that represents, among other things, the finite resource of oil. The planet Arrakis features immense, ferocious worms that are like dragons of lore, with "great teeth" and a "bellows breath of cinnamon." This resembles the myth described by an unknown English poet in Beowulf, the compelling tale of a fearsome fire dragon who guarded a great treasure hoard in a lair under cliffs, at the edge of the sea. The desert of Frank Herbert's classic novel is a vast ocean of sand, with giant worms diving into the depths, the mysterious and unrevealed domain of Shai-hulud. Dune tops are like the crests of waves, and there are powerful sandstorms out there, creating extreme danger. On Arrakis, life is said to emanate from the Maker (Shai-hulud) in the desert-sea; similarly all life on Earth is believed to have evolved from our oceans. Frank Herbert drew parallels, used spectacular metaphors, and extrapolated present conditions into world systems that seem entirely alien at first blush. But close examination reveals they aren't so different from systems we know … and the book characters of his imagination are not so different from people familiar to us.
Each chapter of Dune begins with an epigraph excerpted from the fictional writings of the character Princess Irulan. In forms such as diary entries, historical commentary, biography, quotations and philosophy, these writings set tone and provide exposition, context and other details intended to enhance understanding of Herbert's complex fictional universe and themes. They act as foreshadowing and invite the reader to keep reading to close the gap between what the epigraph says and what is happening in the main narrative. The epigraphs also give the reader the feeling that the world they are reading about is epically distanced, since Irulan writes about an idealized image of Paul as if he had already passed into memory. Brian Herbert wrote: "Dad told me that you could follow any of the novel's layers as you read it, and then start the book all over again, focusing on an entirely different layer. At the end of the book, he intentionally left loose ends and said he did this to send the readers spinning out of the story with bits and pieces of it still clinging to them, so that they would want to go back and read it again."
=== Middle-Eastern and Islamic references ===
Due to the similarities between some of Herbert's terms and ideas and actual words and concepts in the Arabic language, as well as the series' "Islamic undertones" and themes, a Middle-Eastern influence on Herbert's works has been noted repeatedly. In his descriptions of the Fremen culture and language, Herbert uses both authentic Arabic words and Arabic-sounding words. For example, one of the names for the sandworm, Shai-hulud, is derived from Arabic: شيء خلود, romanized: šayʾ ḫulūd, lit. 'immortal thing' or Arabic: شيخ خلود, romanized: šayḫ ḫulūd, lit. 'old man of eternity'. The title of the Fremen housekeeper, the Shadout Mapes, is borrowed from the Arabic: شادوف, romanized: šādūf, the Egyptian term for a device used to raise water. In particular, words related to the messianic religion of the Fremen, first implanted by the Bene Gesserit, are taken from Arabic, including Muad'Dib (from Arabic: مؤدب, romanized: muʾaddib, lit. 'educator'), Usul (from Arabic: أصول, romanized: ʾuṣūl, lit. 'fundamental principles'), Shari-a (from Arabic: شريعة, romanized: šarīʿa, lit. 'sharia; path'), Shaitan (from Arabic: شيطان, romanized: šayṭān, lit. 'Shaitan; devil; fiend'), and jinn (from Arabic: جن, romanized: ǧinn, lit. 'jinn; spirit; demon; mythical being'). It is likely Herbert relied on second-hand resources such as phrasebooks and desert adventure stories to find these Arabic words and phrases for the Fremen. They are meaningful and carefully chosen, and help create an "imagined desert culture that resonates with exotic sounds, enigmas, and pseudo-Islamic references" and has a distinctly Bedouin aesthetic. As a foreigner who adopts the ways of a desert-dwelling people and then leads them in a military capacity, Paul Atreides bears many similarities to the historical T. E. Lawrence. His 1962 biopic Lawrence of Arabia has also been identified as a potential influence. The Sabres of Paradise (1960) has also been identified as a potential influence upon Dune, with its depiction of Imam Shamil and the Islamic culture of the Caucasus inspiring some of the themes, characters, events and terminology of Dune. The environment of the desert planet Arrakis was primarily inspired by the environments of the Middle East. Similarly, Arrakis as a bioregion is presented as a particular kind of political site. Herbert has made it resemble a desertified petrostate area. The Fremen people of Arrakis were influenced by the Bedouin tribes of Arabia, and the Mahdi prophecy originates from Islamic eschatology. Inspiration is also adopted from medieval historian Ibn Khaldun's cyclical history and his dynastic concept in North Africa, hinted at by Herbert's reference to Khaldun's book Kitāb al-ʿibar ("The Book of Lessons"). The fictionalized version of the "Kitab al-ibar" in Dune is a combination of a Fremen religious manual and a desert survival book.
==== Additional language and historic influences ====
In addition to Arabic, Dune derives words and names from a variety of other languages, including Hebrew, Navajo, Latin, Dutch ("Landsraad"), Chakobsa, the Nahuatl language of the Aztecs, Greek, Persian, Sanskrit ("prana bindu", "prajna"), Russian, Turkish, Finnish, and Old English. Bene Gesserit is simply the Latin for "It will have been well fought", also carrying the sense of "It will have been well managed", which stands as a statement of the order's goal and as a pledge of faithfulness to that goal. Critics tend to miss the literal meaning of the phrase, some positing that the term is derived from the Latin meaning "it will have been well borne", an interpretation that is not well supported by their doctrine in the story. Through the inspiration from The Sabres of Paradise, there are also allusions to the tsarist-era Russian nobility and Cossacks. Frank Herbert stated that bureaucracy that lasted long enough would become a hereditary nobility, and a significant theme behind the aristocratic families in Dune was "aristocratic bureaucracy" which he saw as analogous to the Soviet Union.
=== Environmentalism and ecology ===
Dune has been called the "first planetary ecology novel on a grand scale". Herbert hoped it would be seen as an "environmental awareness handbook" and said the title was meant to "echo the sound of 'doom'". It was reviewed in the best-selling countercultural Whole Earth Catalog in 1968 as a "rich re-readable fantasy with clear portrayal of the fierce environment it takes to cohere a community". After the publication of Silent Spring by Rachel Carson in 1962, science fiction writers began treating the subject of ecological change and its consequences. Dune responded in 1965 with its complex descriptions of Arrakis life, from giant sandworms (for whom water is deadly) to smaller, mouse-like life forms adapted to live with limited water. Dune was followed in its creation of complex and unique ecologies by other science fiction books such as A Door into Ocean (1986) and Red Mars (1992). Environmentalists have pointed out that Dune's popularity as a novel depicting a planet as a complex—almost living—thing, in combination with the first images of Earth from space being published in the same time period, strongly influenced environmental movements such as the establishment of the international Earth Day. While the genre of climate fiction was popularized in the 2010s in response to real global climate change, Dune as well as other early science fiction works from authors like J. G. Ballard (The Drowned World) and Kim Stanley Robinson (the Mars trilogy) have retroactively been considered pioneering examples of the genre.
=== Declining empires ===
The Imperium in Dune contains features of various empires in Europe and the Near East, including the Roman Empire, Holy Roman Empire, and Ottoman Empire. Lorenzo DiTommaso compared Dune's portrayal of the downfall of a galactic empire to Edward Gibbon's Decline and Fall of the Roman Empire, which argues that Christianity allied with the profligacy of the Roman elite led to the fall of Ancient Rome. In "The Articulation of Imperial Decadence and Decline in Epic Science Fiction" (2007), DiTommaso outlines similarities between the two works by highlighting the excesses of the Emperor on his home planet of Kaitain and of the Baron Harkonnen in his palace. The Emperor loses his effectiveness as a ruler through an excess of ceremony and pomp. The hairdressers and attendants he brings with him to Arrakis are even referred to as "parasites". The Baron Harkonnen is similarly corrupt and materially indulgent. Gibbon's Decline and Fall partly blames the fall of Rome on the rise of Christianity. Gibbon claimed that this exotic import from a conquered province weakened the soldiers of Rome and left it open to attack. The Emperor's Sardaukar fighters are little match for the Fremen of Dune not only because of the Sardaukar's overconfidence and the fact that Jessica and Paul have trained the Fremen in their battle tactics, but because of the Fremen's capacity for self-sacrifice. The Fremen put the community before themselves in every instance, while the world outside wallows in luxury at the expense of others. The decline and long peace of the Empire sets the stage for revolution and renewal by genetic mixing of successful and unsuccessful groups through war, a process culminating in the Jihad led by Paul Atreides, described by Frank Herbert as depicting "war as a collective orgasm" (drawing on Norman Walter's 1950 The Sexual Cycle of Human Warfare), themes that would reappear in God Emperor of Dune's Scattering and Leto II's all-female Fish Speaker army.
=== Gender dynamics ===
Gender dynamics are complex in Dune. Within the Fremen sietch communities, women have almost full equality. They carry weapons and travel in raiding parties with men, fighting when necessary alongside the men. They can take positions of leadership as a Sayyadina or as a Reverend Mother (if they can survive the ritual of ingesting the Water of Life). Both of these sietch religious leaders are routinely consulted by the all-male Council and can have a decisive voice in all matters of sietch life, security and internal politics. They are also protected by the entire community. Due to the high mortality rate among their men, women outnumber men in most sietches. Polygamy is common, and sexual relationships are voluntary and consensual; as Stilgar says to Jessica, "women among us are not taken against their will."
In contrast, the Imperial aristocracy leaves young women of noble birth very little agency. Frequently trained by the Bene Gesserit, they are raised to eventually marry other aristocrats. Marriages between Major and Minor Houses are political tools to forge alliances or heal old feuds; women are given very little say in the matter. Many such marriages are quietly maneuvered by the Bene Gesserit to produce offspring with some genetic characteristics needed by the sisterhood's human-breeding program. In addition, such highly-placed sisters were in a position to subtly influence their husbands' actions in ways that could move the politics of the Imperium toward Bene Gesserit goals.
The gom jabbar test of humanity is administered by the female Bene Gesserit order but rarely to males. The Bene Gesserit have seemingly mastered the unconscious and can play on the unconscious weaknesses of others using the Voice, yet their breeding program seeks after a male Kwisatz Haderach. Their plan is to produce a male who can "possess complete racial memory, both male and female," and look into the black hole in the collective unconscious that they fear. A central theme of the book is the connection, in Jessica's son, of this female aspect with his male aspect. This aligns with concepts in Jungian psychology, which features conscious/unconscious and taking/giving roles associated with males and females, as well as the idea of the collective unconscious. Paul's approach to power consistently requires his upbringing under the matriarchal Bene Gesserit, who operate as a long-dominating shadow government behind all of the great houses and their marriages or divisions. He is trained by Jessica in the Bene Gesserit Way, which includes prana-bindu training in nerve and muscle control and precise perception. Paul also receives Mentat training, thus helping prepare him to be a type of androgynous Kwisatz Haderach, a male Reverend Mother. In a Bene Gesserit test early in the book, it is implied that people are generally "inhuman" in that they irrationally place desire over self-interest and reason. This applies Herbert's philosophy that humans are not created equal, while equal justice and equal opportunity are higher ideals than mental, physical, or moral equality.
=== Heroism ===
I am showing you the superhero syndrome and your own participation in it.
Throughout Paul's rise to superhuman status, he follows a plotline common to many stories describing the birth of a hero. He has unfortunate circumstances forced onto him. After a long period of hardship and exile, he confronts and defeats the source of evil in his tale. As such, Dune is representative of a general trend beginning in 1960s American science fiction in that it features a character who attains godlike status through scientific means. Eventually, Paul Atreides gains a level of omniscience which allows him to take over the planet and the galaxy, and causes the Fremen of Arrakis to worship him like a god. Author Frank Herbert said in 1979, "The bottom line of the Dune trilogy is: beware of heroes. Much better [to] rely on your own judgment, and your own mistakes." He wrote in 1985, "Dune was aimed at this whole idea of the infallible leader because my view of history says that mistakes made by a leader (or made in a leader's name) are amplified by the numbers who follow without question." Juan A. Prieto-Pablos says Herbert achieves a new typology with Paul's superpowers, differentiating the heroes of Dune from earlier heroes such as Superman, van Vogt's Gilbert Gosseyn and Henry Kuttner's telepaths. Unlike previous superheroes who acquire their powers suddenly and accidentally, Paul's are the result of "painful and slow personal progress." And unlike other superheroes of the 1960s—who are the exception among ordinary people in their respective worlds—Herbert's characters grow their powers through "the application of mystical philosophies and techniques." For Herbert, the ordinary person can develop incredible fighting skills (Fremen, Ginaz swordsmen and Sardaukar) or mental abilities (Bene Gesserit, Mentats, Spacing Guild Navigators).
=== Zen and religion ===
Early in his newspaper career, Herbert was introduced to Zen by two Jungian psychologists, Ralph and Irene Slattery, who "gave a crucial boost to his thinking". Zen teachings ultimately had "a profound and continuing influence on [Herbert's] work". Throughout the Dune series and particularly in Dune, Herbert employs concepts and forms borrowed from Zen Buddhism. The Fremen are referred to as Zensunni adherents, and many of Herbert's epigraphs are Zen-spirited. In "Dune Genesis", Frank Herbert wrote:
What especially pleases me is to see the interwoven themes, the fugue like relationships of images that exactly replay the way Dune took shape. As in an Escher lithograph, I involved myself with recurrent themes that turn into paradox. The central paradox concerns the human vision of time. What about Paul's gift of prescience - the Presbyterian fixation? For the Delphic Oracle to perform, it must tangle itself in a web of predestination. Yet predestination negates surprises and, in fact, sets up a mathematically enclosed universe whose limits are always inconsistent, always encountering the unprovable. It's like a koan, a Zen mind breaker. It's like the Cretan Epimenides saying, "All Cretans are liars."
Brian Herbert called the Dune universe "a spiritual melting pot", noting that his father incorporated elements of a variety of religions, including Buddhism, Sufi mysticism and other Islamic belief systems, Catholicism, Protestantism, Judaism, and Hinduism. He added that Frank Herbert's fictional future in which "religious beliefs have combined into interesting forms" represents the author's solution to eliminating arguments between religions, each of which claimed to have "the one and only revelation." | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-advanced-rag\dune.txt |
.md | # neo4j-advanced-rag
This template allows you to balance precise embeddings and context retention by implementing advanced retrieval strategies.
## Strategies
1. **Typical RAG**:
- Traditional method where the exact data indexed is the data retrieved.
2. **Parent retriever**:
- Instead of indexing entire documents, data is divided into smaller chunks, referred to as Parent and Child documents.
   - Child documents are indexed for better representation of specific concepts, while parent documents are retrieved to ensure context retention.
3. **Hypothetical Questions**:
- Documents are processed to determine potential questions they might answer.
- These questions are then indexed for better representation of specific concepts, while parent documents are retrieved to ensure context retention.
4. **Summaries**:
- Instead of indexing the entire document, a summary of the document is created and indexed.
- Similarly, the parent document is retrieved in a RAG application.
## Environment Setup
You need to define the following environment variables
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data
If you want to populate the DB with some example data, you can run `python ingest.py`.
The script processes and stores sections of the text from the file `dune.txt` in a Neo4j graph database.
First, the text is divided into larger chunks ("parents") and then further subdivided into smaller chunks ("children"), where both parent and child chunks overlap slightly to maintain context.
After storing these chunks in the database, embeddings for the child nodes are computed using OpenAI's embeddings and stored back in the graph for future retrieval or analysis.
For every parent node, hypothetical questions and summaries are generated, embedded, and added to the database.
Additionally, a vector index for each retrieval strategy is created for efficient querying of these embeddings.
*Note that ingestion can take a minute or two, due to the time it takes LLMs to generate the hypothetical questions and summaries.*
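For orientation, the parent/child split described above looks roughly like this (a sketch; the chunk sizes are illustrative, so see `ingest.py` for the values the script actually uses):
```python
# Sketch of the two-level split; illustrative only, not the script's exact code.
from langchain.text_splitter import TokenTextSplitter

parent_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=24)
child_splitter = TokenTextSplitter(chunk_size=100, chunk_overlap=24)

with open("dune.txt") as f:
    text = f.read()

for parent in parent_splitter.split_text(text):
    children = child_splitter.split_text(parent)
    # ingest.py stores the parent and child chunks as linked nodes in Neo4j,
    # then computes OpenAI embeddings for the children (plus hypothetical
    # questions and a summary for each parent).
```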
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-advanced-rag
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-advanced-rag
```
And add the following code to your `server.py` file:
```python
from neo4j_advanced_rag import chain as neo4j_advanced_chain
add_routes(app, neo4j_advanced_chain, path="/neo4j-advanced-rag")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-advanced-rag/playground](http://127.0.0.1:8000/neo4j-advanced-rag/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-advanced-rag")
```
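To try one of the retrieval strategies listed above, you can pass a configurable field when invoking the chain. The field and option names below are assumptions; check `chain.py` for the exact identifiers this template registers:
```python
# Hypothetical invocation selecting the parent-retriever strategy; the
# "strategy" field name and option value are assumptions (see chain.py).
runnable.invoke(
    {"question": "What is the plot of Dune?"},
    config={"configurable": {"strategy": "parent_strategy"}},
)
```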
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-advanced-rag\README.md |
.md |
# neo4j_cypher
This template allows you to interact with a Neo4j graph database in natural language, using an OpenAI LLM.
It transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the query results.
[![Diagram showing the workflow of a user asking a question, which is processed by a Cypher generating chain, resulting in a Cypher query to the Neo4j Knowledge Graph, and then an answer generating chain that provides a generated answer based on the information from the graph.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher/static/workflow.png "Neo4j Cypher Workflow Diagram")](https://medium.com/neo4j/langchain-cypher-search-tips-tricks-f7c9e9abca4d)
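Under the hood this follows the general text-to-Cypher pattern. As a rough standalone sketch (not this template's exact code, which lives in `neo4j_cypher/chain.py`), the same idea can be expressed with LangChain's `GraphCypherQAChain`:
```python
# Illustrative text-to-Cypher sketch; the template's own chain may differ
# in prompts and model configuration.
import os

from langchain.chains import GraphCypherQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(
    url=os.environ["NEO4J_URI"],
    username=os.environ["NEO4J_USERNAME"],
    password=os.environ["NEO4J_PASSWORD"],
)
chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("Which actors played in the movie Casino?")
```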
## Environment Setup
Define the following environment variables:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Neo4j database setup
There are a number of ways to set up a Neo4j database.
### Neo4j Aura
Neo4j AuraDB is a fully managed cloud graph database service.
Create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).
When you initiate a free database instance, you'll receive credentials to access the database.
## Populating with data
If you want to populate the DB with some example data, you can run `python ingest.py`.
This script will populate the database with sample movie data.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-cypher
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-cypher
```
And add the following code to your `server.py` file:
```python
from neo4j_cypher import chain as neo4j_cypher_chain
add_routes(app, neo4j_cypher_chain, path="/neo4j-cypher")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-cypher/playground](http://127.0.0.1:8000/neo4j-cypher/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-cypher")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-cypher\README.md |
.md |
# neo4j-cypher-ft
This template allows you to interact with a Neo4j graph database using natural language, leveraging OpenAI's LLM.
Its main function is to convert natural language questions into Cypher queries (the language used to query Neo4j databases), execute these queries, and provide natural language responses based on the query's results.
The package utilizes a full-text index for efficient mapping of text values to database entries, thereby enhancing the generation of accurate Cypher statements.
In the provided example, the full-text index is used to map names of people and movies from the user's query to corresponding database entries.
![Workflow diagram showing the process from a user asking a question to generating an answer using the Neo4j knowledge graph and full-text index.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher-ft/static/workflow.png "Neo4j Cypher Workflow Diagram")
## Environment Setup
The following environment variables need to be set:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
Additionally, if you wish to populate the DB with some example data, you can run `python ingest.py`.
This script will populate the database with sample movie data and create a full-text index named `entity`, which is used to map person and movies from user input to database values for precise Cypher statement generation.
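For reference, an index like this can be created and queried with Cypher along the following lines. The labels and properties below are assumptions based on the sample movie data; `ingest.py` defines the index this template actually uses:
```python
# Illustrative only: create and query a full-text index named `entity`.
import os

from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(
    url=os.environ["NEO4J_URI"],
    username=os.environ["NEO4J_USERNAME"],
    password=os.environ["NEO4J_PASSWORD"],
)
graph.query(
    "CREATE FULLTEXT INDEX entity IF NOT EXISTS "
    "FOR (n:Person|Movie) ON EACH [n.name, n.title]"
)
print(graph.query(
    "CALL db.index.fulltext.queryNodes('entity', $q) "
    "YIELD node, score RETURN coalesce(node.name, node.title) AS match, score LIMIT 5",
    {"q": "Casino"},
))
```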
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-cypher-ft
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-cypher-ft
```
And add the following code to your `server.py` file:
```python
from neo4j_cypher_ft import chain as neo4j_cypher_ft_chain
add_routes(app, neo4j_cypher_ft_chain, path="/neo4j-cypher-ft")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-cypher-ft/playground](http://127.0.0.1:8000/neo4j-cypher-ft/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-cypher-ft")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-cypher-ft\README.md |
.md |
# neo4j-cypher-memory
This template allows you to have conversations with a Neo4j graph database in natural language, using an OpenAI LLM.
It transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the query results.
Additionally, it features a conversational memory module that stores the dialogue history in the Neo4j graph database.
The conversation memory is uniquely maintained for each user session, ensuring personalized interactions.
To facilitate this, please supply both the `user_id` and `session_id` when using the conversation chain.
![Workflow diagram illustrating the process of a user asking a question, generating a Cypher query, retrieving conversational history, executing the query on a Neo4j database, generating an answer, and storing conversational memory.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher-memory/static/workflow.png "Neo4j Cypher Memory Workflow Diagram")
## Environment Setup
Define the following environment variables:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Neo4j database setup
There are a number of ways to set up a Neo4j database.
### Neo4j Aura
Neo4j AuraDB is a fully managed cloud graph database service.
Create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).
When you initiate a free database instance, you'll receive credentials to access the database.
## Populating with data
If you want to populate the DB with some example data, you can run `python ingest.py`.
This script will populate the database with sample movie data.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-cypher-memory
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-cypher-memory
```
And add the following code to your `server.py` file:
```python
from neo4j_cypher_memory import chain as neo4j_cypher_memory_chain
add_routes(app, neo4j_cypher_memory_chain, path="/neo4j-cypher-memory")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-cypher-memory/playground](http://127.0.0.1:8000/neo4j-cypher-memory/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-cypher-memory")
```
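For example, a multi-turn conversation for a single user might look like this. The input keys are assumptions; check the chain's input schema in the playground or in `neo4j_cypher_memory/chain.py`:
```python
# Hypothetical multi-turn invocation; the key names are assumptions.
runnable.invoke({
    "question": "Who played in the movie Casino?",
    "user_id": "user_123",
    "session_id": "session_1",
})
runnable.invoke({
    "question": "How old are they?",  # answered using the stored dialogue history
    "user_id": "user_123",
    "session_id": "session_1",
})
```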
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-cypher-memory\README.md |
.md |
# neo4j-generation
This template pairs LLM-based knowledge graph extraction with Neo4j AuraDB, a fully managed cloud graph database.
You can create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).
When you initiate a free database instance, you'll receive credentials to access the database.
This template is flexible and allows users to guide the extraction process by specifying a list of node labels and relationship types.
For more details on the functionality and capabilities of this package, please refer to [this blog post](https://blog.langchain.dev/constructing-knowledge-graphs-from-text-using-openai-functions/).
## Environment Setup
You need to set the following environment variables:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-generation
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-generation
```
And add the following code to your `server.py` file:
```python
from neo4j_generation.chain import chain as neo4j_generation_chain
add_routes(app, neo4j_generation_chain, path="/neo4j-generation")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-generation/playground](http://127.0.0.1:8000/neo4j-generation/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-generation")
```
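Since the template lets you guide extraction with node labels and relationship types, an invocation might look like the following sketch (the `allowed_nodes` and `allowed_relationships` keys are assumptions; check the chain's input schema for the exact names):

```python
result = runnable.invoke(
    {
        "text": "Marie Curie, born in Warsaw, won two Nobel Prizes.",  # sample text
        "allowed_nodes": ["Person", "City", "Award"],  # assumed key: node labels to extract
        "allowed_relationships": ["BORN_IN", "WON"],   # assumed key: relationship types to extract
    }
)
print(result)
```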
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-generation\README.md |
.txt | Dune is a 1965 epic science fiction novel by American author Frank Herbert, originally published as two separate serials in Analog magazine. It tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and it won the inaugural Nebula Award for Best Novel. It is the first installment of the Dune Chronicles. It is one of the world's best-selling science fiction novels.

Dune is set in the distant future in a feudal interstellar society in which various noble houses control planetary fiefs. It tells the story of young Paul Atreides, whose family accepts the stewardship of the planet Arrakis. While the planet is an inhospitable and sparsely populated desert wasteland, it is the only source of melange, or "spice", a drug that extends life and enhances mental abilities. Melange is also necessary for space navigation, which requires a kind of multidimensional awareness and foresight that only the drug provides. As melange can only be produced on Arrakis, control of the planet is a coveted and dangerous undertaking. The story explores the multilayered interactions of politics, religion, ecology, technology, and human emotion, as the factions of the empire confront each other in a struggle for the control of Arrakis and its spice.
Herbert wrote five sequels: Dune Messiah, Children of Dune, God Emperor of Dune, Heretics of Dune, and Chapterhouse: Dune. Following Herbert's death in 1986, his son Brian Herbert and author Kevin J. Anderson continued the series in over a dozen additional novels since 1999.
Adaptations of the novel to cinema have been notoriously difficult and complicated. In the 1970s, cult filmmaker Alejandro Jodorowsky attempted to make a film based on the novel. After three years of development, the project was canceled due to a constantly growing budget. In 1984, a film adaptation directed by David Lynch was released to mostly negative responses from critics and failure at the box office, although it later developed a cult following. The book was also adapted into the 2000 Sci-Fi Channel miniseries Frank Herbert's Dune and its 2003 sequel Frank Herbert's Children of Dune (the latter of which combines the events of Dune Messiah and Children of Dune). A second film adaptation directed by Denis Villeneuve was released on October 21, 2021, to positive reviews. It grossed $401 million worldwide and went on to be nominated for ten Academy Awards, winning six. Villeneuve's film covers roughly the first half of the original novel; a sequel, which will cover the remaining story, will be released in March 2024.
The series has also been used as the basis for several board, role-playing, and video games.
Since 2009, the names of planets from the Dune novels have been adopted for the real-life nomenclature of plains and other features on Saturn's moon Titan.
== Origins ==
After his novel The Dragon in the Sea was published in 1957, Herbert traveled to Florence, Oregon, at the north end of the Oregon Dunes. Here, the United States Department of Agriculture was attempting to use poverty grasses to stabilize the sand dunes. Herbert claimed in a letter to his literary agent, Lurton Blassingame, that the moving dunes could "swallow whole cities, lakes, rivers, highways." Herbert's article on the dunes, "They Stopped the Moving Sands", was never completed (and only published decades later in The Road to Dune), but its research sparked Herbert's interest in ecology and deserts.

Herbert further drew inspiration from Native American mentors like "Indian Henry" (as Herbert referred to the man to his son; likely a Henry Martin of the Hoh tribe) and Howard Hansen. Both Martin and Hansen grew up on the Quileute reservation near Herbert's hometown. According to historian Daniel Immerwahr, Hansen regularly shared his writing with Herbert. "White men are eating the earth," Hansen told Herbert in 1958, after sharing a piece on the effect of logging on the Quileute reservation. "They're gonna turn this whole planet into a wasteland, just like North Africa." The world could become a "big dune," Herbert responded in agreement.

Herbert was also interested in the idea of the superhero mystique and messiahs. He believed that feudalism was a natural condition humans fell into, where some led and others gave up the responsibility of making decisions and just followed orders. He found that desert environments have historically given birth to several major religions with messianic impulses. He decided to join his interests together so he could play religious and ecological ideas against each other. In addition, he was influenced by the story of T. E. Lawrence and the "messianic overtones" in Lawrence's involvement in the Arab Revolt during World War I. In an early version of Dune, the hero was actually very similar to Lawrence of Arabia, but Herbert decided the plot was too straightforward and added more layers to his story.

Herbert also drew heavy inspiration from Lesley Blanch's The Sabres of Paradise (1960), a narrative history recounting a mid-19th century conflict in the Caucasus between rugged Islamized Caucasian tribes and the expansive Russian Empire. Language used on both sides of that conflict became terms in Herbert's world—chakobsa, a Caucasian hunting language, becomes a battle language of humans spread across the galaxy; kanly, a word for blood feud in the 19th century Caucasus, represents a feud between Dune's noble Houses; sietch and tabir are both words for camp borrowed from Ukrainian Cossacks (of the Pontic–Caspian steppe).

Herbert also borrowed some lines which Blanch stated were Caucasian proverbs. "To kill with the point lacked artistry", used by Blanch to describe the Caucasus peoples' love of swordsmanship, becomes in Dune "Killing with the tip lacks artistry", a piece of advice given to a young Paul during his training. "Polish comes from the city, wisdom from the hills", a Caucasian aphorism, turns into a desert expression: "Polish comes from the cities, wisdom from the desert".
Another significant source of inspiration for Dune was Herbert's experiences with psilocybin and his hobby of cultivating mushrooms, according to mycologist Paul Stamets's account of meeting Herbert in the 1980s:

Frank went on to tell me that much of the premise of Dune—the magic spice (spores) that allowed the bending of space (tripping), the giant sand worms (maggots digesting mushrooms), the eyes of the Fremen (the cerulean blue of Psilocybe mushrooms), the mysticism of the female spiritual warriors, the Bene Gesserits (influenced by the tales of Maria Sabina and the sacred mushroom cults of Mexico)—came from his perception of the fungal life cycle, and his imagination was stimulated through his experiences with the use of magic mushrooms.

Herbert spent the next five years researching, writing, and revising. He published a three-part serial Dune World in the monthly Analog, from December 1963 to February 1964. The serial was accompanied by several illustrations that were not published again. After an interval of a year, he published the much slower-paced five-part The Prophet of Dune in the January–May 1965 issues. The first serial became "Book 1: Dune" in the final published Dune novel, and the second serial was divided into "Book Two: Muad'dib" and "Book Three: The Prophet". The serialized version was expanded, reworked, and submitted to more than twenty publishers, each of whom rejected it. The novel, Dune, was finally accepted and published in August 1965 by Chilton Books, a printing house better known for publishing auto repair manuals. Sterling Lanier, an editor at Chilton, had seen Herbert's manuscript and had urged his company to take a risk in publishing the book. However, the first printing, priced at $5.95 (equivalent to $55.25 in 2022), did not sell well and was poorly received by critics as being atypical of science fiction at the time. Chilton considered the publication of Dune a write-off and Lanier was fired. Over the course of time, the book gained critical acclaim, and its popularity spread by word-of-mouth to allow Herbert to start working full time on developing the sequels to Dune, elements of which were already written alongside Dune.

At first Herbert considered using Mars as the setting for his novel, but eventually decided to use a fictional planet instead. His son Brian said that "Readers would have too many preconceived ideas about that planet, due to the number of stories that had been written about it."

Herbert dedicated his work "to the people whose labors go beyond ideas into the realm of 'real materials'—to the dry-land ecologists, wherever they may be, in whatever time they work, this effort at prediction is dedicated in humility and admiration."
== Plot ==
Duke Leto Atreides of House Atreides, ruler of the ocean planet Caladan, is assigned by the Padishah Emperor Shaddam IV to serve as fief ruler of the planet Arrakis. Although Arrakis is a harsh and inhospitable desert planet, it is of enormous importance because it is the only planetary source of melange, or the "spice", a unique and incredibly valuable substance that extends human youth, vitality and lifespan. It is also through the consumption of spice that Spacing Guild Navigators are able to effect safe interstellar travel. Shaddam, jealous of Duke Leto Atreides's rising popularity in the Landsraad, sees House Atreides as a potential future rival and threat, so conspires with House Harkonnen, the former stewards of Arrakis and the longstanding enemies of House Atreides, to destroy Leto and his family after their arrival. Leto is aware his assignment is a trap of some kind, but is compelled to obey the Emperor's orders anyway.
Leto's concubine Lady Jessica is an acolyte of the Bene Gesserit, an exclusively female group that pursues mysterious political aims and wields seemingly superhuman physical and mental abilities, such as the ability to control their bodies down to the cellular level, and also decide the sex of their children. Though Jessica was instructed by the Bene Gesserit to bear a daughter as part of their breeding program, out of love for Leto she bore a son, Paul. From a young age, Paul has been trained in warfare by Leto's aides, the elite soldiers Duncan Idaho and Gurney Halleck. Thufir Hawat, the Duke's Mentat (human computers, able to store vast amounts of data and perform advanced calculations on demand), has instructed Paul in the ways of political intrigue. Jessica has also trained her son in Bene Gesserit disciplines.
Paul's prophetic dreams interest Jessica's superior, the Reverend Mother Gaius Helen Mohiam, who subjects Paul to the deadly gom jabbar test. Holding a poisonous needle to his neck ready to strike should he be unable to resist the impulse to withdraw his hand from the nerve induction box, she tests Paul's self-control to overcome the extreme psychological pain he is being subjected to through the box.
Leto, Jessica, and Paul travel with their household to occupy Arrakeen, the capital on Arrakis formerly held by House Harkonnen. Leto learns of the dangers involved in harvesting the spice, which is protected by giant sandworms, and seeks to negotiate with the planet's native Fremen people, seeing them as a valuable ally rather than foes. Soon after the Atreides's arrival, Harkonnen forces attack, joined by the Emperor's ferocious Sardaukar troops in disguise. Leto is betrayed by his personal physician, the Suk doctor Wellington Yueh, who delivers a drugged Leto to the Baron Vladimir Harkonnen and his twisted Mentat, Piter De Vries. Yueh, however, arranges for Jessica and Paul to escape into the desert, where they are presumed dead by the Harkonnens. Yueh replaces one of Leto's teeth with a poison gas capsule, hoping Leto can kill the Baron during their encounter. The Baron narrowly avoids the gas due to his shield, which kills Leto, De Vries, and the others in the room. The Baron forces Hawat to take over De Vries's position by dosing him with a long-lasting, fatal poison and threatening to withhold the regular antidote doses unless he obeys. While he follows the Baron's orders, Hawat works secretly to undermine the Harkonnens.
Having fled into the desert, Paul is exposed to high concentrations of spice and has visions through which he realizes he has significant powers (as a result of the Bene Gesserit breeding scheme). He foresees potential futures in which he lives among the planet's native Fremen before leading them on a Holy Jihad across the known universe.
It is revealed Jessica is the daughter of Baron Harkonnen, a secret kept from her by the Bene Gesserit. After being captured by Fremen, Paul and Jessica are accepted into the Fremen community of Sietch Tabr, and teach the Fremen the Bene Gesserit fighting technique known as the "weirding way". Paul proves his manhood by killing a Fremen named Jamis in a ritualistic crysknife fight and chooses the Fremen name Muad'Dib, while Jessica opts to undergo a ritual to become a Reverend Mother by drinking the poisonous Water of Life. Pregnant with Leto's daughter, she inadvertently causes the unborn child, Alia, to become infused with the same powers in the womb. Paul takes a Fremen lover, Chani, and has a son with her, Leto II.
Two years pass and Paul's powerful prescience manifests, which confirms for the Fremen that he is their prophesied messiah, a legend planted by the Bene Gesserit's Missionaria Protectiva. Paul embraces his father's belief that the Fremen could be a powerful fighting force to take back Arrakis, but also sees that if he does not control them, their jihad could consume the entire universe. Word of the new Fremen leader reaches both Baron Harkonnen and the Emperor as spice production falls due to their increasingly destructive raids. The Baron encourages his brutish nephew Glossu Rabban to rule with an iron fist, hoping the contrast with his shrewder nephew Feyd-Rautha will make the latter popular among the people of Arrakis when he eventually replaces Rabban. The Emperor, suspecting the Baron of trying to create troops more powerful than the Sardaukar to seize power, sends spies to monitor activity on Arrakis. Hawat uses the opportunity to sow seeds of doubt in the Baron about the Emperor's true plans, putting further strain on their alliance.
Gurney, having survived the Harkonnen coup, becomes a smuggler, reuniting with Paul and Jessica after a Fremen raid on his harvester. Believing Jessica to be the traitor, Gurney threatens to kill her, but is stopped by Paul. Paul did not foresee Gurney's attack, and concludes he must increase his prescience by drinking the Water of Life, which is traditionally fatal to males. Paul falls into unconsciousness for three weeks after drinking the poison, but when he wakes, he has clairvoyance across time and space: he is the Kwisatz Haderach, the ultimate goal of the Bene Gesserit breeding program.
Paul senses the Emperor and Baron are amassing fleets around Arrakis to quell the Fremen rebellion, and prepares the Fremen for a major offensive against the Harkonnen troops. The Emperor arrives with the Baron on Arrakis. The Emperor's troops seize a Fremen outpost, killing many including young Leto II, while Alia is captured and taken to the Emperor. Under cover of an electric storm, which shorts out the Emperor's troops' defensive shields, Paul and the Fremen, riding giant sandworms, assault the capital while Alia assassinates the Baron and escapes. The Fremen quickly defeat both the Harkonnen and Sardaukar troops.
Paul faces the Emperor, threatening to destroy spice production forever unless Shaddam abdicates the throne. Feyd-Rautha attempts to stop Paul by challenging him to a ritualistic knife fight, during which he attempts to cheat and kill Paul with a poison spur in his belt. Paul gains the upper hand and kills him. The Emperor reluctantly cedes the throne to Paul and promises his daughter Princess Irulan's hand in marriage. As Paul takes control of the Empire, he realizes that while he has achieved his goal, he is no longer able to stop the Fremen jihad, as their belief in him is too powerful to restrain.
== Characters ==
House Atreides:
Paul Atreides, the Duke's son, and main character of the novel
Duke Leto Atreides, head of House Atreides
Lady Jessica, Bene Gesserit and concubine of the Duke, mother of Paul and Alia
Alia Atreides, Paul's younger sister
Thufir Hawat, Mentat and Master of Assassins to House Atreides
Gurney Halleck, staunchly loyal troubadour warrior of the Atreides
Duncan Idaho, Swordmaster for House Atreides, graduate of the Ginaz School
Wellington Yueh, Suk doctor for the Atreides who is secretly working for House Harkonnen

House Harkonnen:
Baron Vladimir Harkonnen, head of House Harkonnen
Piter De Vries, twisted Mentat
Feyd-Rautha, nephew and heir-presumptive of the Baron
Glossu "Beast" Rabban, also called Rabban Harkonnen, older nephew of the Baron
Iakin Nefud, Captain of the Guard

House Corrino:
Shaddam IV, Padishah Emperor of the Known Universe (the Imperium)
Princess Irulan, Shaddam's eldest daughter and heir, also a historian
Count Fenring, the Emperor's closest friend, advisor, and "errand boy"

Bene Gesserit:
Reverend Mother Gaius Helen Mohiam, Proctor Superior of the Bene Gesserit school and the Emperor's Truthsayer
Lady Margot Fenring, Bene Gesserit wife of Count Fenring

Fremen:
The Fremen, native inhabitants of Arrakis
Stilgar, Fremen leader of Sietch Tabr
Chani, Paul's Fremen concubine and a Sayyadina (female acolyte) of Sietch Tabr
Dr. Liet-Kynes, the Imperial Planetologist on Arrakis and father of Chani, as well as a revered figure among the Fremen
The Shadout Mapes, head housekeeper of imperial residence on Arrakis
Jamis, Fremen killed by Paul in ritual duel
Harah, wife of Jamis and later servant to Paul who helps raise Alia among the Fremen
Reverend Mother Ramallo, religious leader of Sietch Tabr

Smugglers:
Esmar Tuek, a powerful smuggler and the father of Staban Tuek
Staban Tuek, the son of Esmar Tuek and a powerful smuggler who befriends and takes in Gurney Halleck and his surviving men after the attack on the Atreides
== Themes and influences ==
The Dune series is a landmark of science fiction. Herbert deliberately suppressed technology in his Dune universe so he could address the politics of humanity, rather than the future of humanity's technology. For example, a key event in the novel's pre-history is the "Butlerian Jihad", in which all robots and computers were destroyed; eliminating these staples of science fiction from the novel keeps the focus on humanity. Dune considers the way humans and their institutions might change over time. Director John Harrison, who adapted Dune for Syfy's 2000 miniseries, called the novel a universal and timeless reflection of "the human condition and its moral dilemmas", and said:
A lot of people refer to Dune as science fiction. I never do. I consider it an epic adventure in the classic storytelling tradition, a story of myth and legend not unlike the Morte d'Arthur or any messiah story. It just happens to be set in the future ... The story is actually more relevant today than when Herbert wrote it. In the 1960s, there were just these two colossal superpowers duking it out. Today we're living in a more feudal, corporatized world more akin to Herbert's universe of separate families, power centers and business interests, all interrelated and kept together by the one commodity necessary to all.
But Dune has also been called a mix of soft and hard science fiction since "the attention to ecology is hard, the anthropology and the psychic abilities are soft." Hard elements include the ecology of Arrakis, suspensor technology, weapon systems, and ornithopters, while soft elements include issues relating to religion, physical and mental training, cultures, politics, and psychology.

Herbert said Paul's messiah figure was inspired by the Arthurian legend, and that the scarcity of water on Arrakis was a metaphor for oil, as well as air and water itself, and for the shortages of resources caused by overpopulation. Novelist Brian Herbert, his son and biographer, wrote:
Dune is a modern-day conglomeration of familiar myths, a tale in which great sandworms guard a precious treasure of melange, the geriatric spice that represents, among other things, the finite resource of oil. The planet Arrakis features immense, ferocious worms that are like dragons of lore, with "great teeth" and a "bellows breath of cinnamon." This resembles the myth described by an unknown English poet in Beowulf, the compelling tale of a fearsome fire dragon who guarded a great treasure hoard in a lair under cliffs, at the edge of the sea. The desert of Frank Herbert's classic novel is a vast ocean of sand, with giant worms diving into the depths, the mysterious and unrevealed domain of Shai-hulud. Dune tops are like the crests of waves, and there are powerful sandstorms out there, creating extreme danger. On Arrakis, life is said to emanate from the Maker (Shai-hulud) in the desert-sea; similarly all life on Earth is believed to have evolved from our oceans. Frank Herbert drew parallels, used spectacular metaphors, and extrapolated present conditions into world systems that seem entirely alien at first blush. But close examination reveals they aren't so different from systems we know … and the book characters of his imagination are not so different from people familiar to us.
Each chapter of Dune begins with an epigraph excerpted from the fictional writings of the character Princess Irulan. In forms such as diary entries, historical commentary, biography, quotations and philosophy, these writings set tone and provide exposition, context and other details intended to enhance understanding of Herbert's complex fictional universe and themes. They act as foreshadowing and invite the reader to keep reading to close the gap between what the epigraph says and what is happening in the main narrative. The epigraphs also give the reader the feeling that the world they are reading about is epically distanced, since Irulan writes about an idealized image of Paul as if he had already passed into memory. Brian Herbert wrote: "Dad told me that you could follow any of the novel's layers as you read it, and then start the book all over again, focusing on an entirely different layer. At the end of the book, he intentionally left loose ends and said he did this to send the readers spinning out of the story with bits and pieces of it still clinging to them, so that they would want to go back and read it again."
=== Middle-Eastern and Islamic references ===
Due to the similarities between some of Herbert's terms and ideas and actual words and concepts in the Arabic language, as well as the series' "Islamic undertones" and themes, a Middle-Eastern influence on Herbert's works has been noted repeatedly. In his descriptions of the Fremen culture and language, Herbert uses both authentic Arabic words and Arabic-sounding words. For example, one of the names for the sandworm, Shai-hulud, is derived from Arabic: شيء خلود, romanized: šayʾ ḫulūd, lit. 'immortal thing' or Arabic: شيخ خلود, romanized: šayḫ ḫulūd, lit. 'old man of eternity'. The title of the Fremen housekeeper, the Shadout Mapes, is borrowed from the Arabic: شادوف, romanized: šādūf, the Egyptian term for a device used to raise water. In particular, words related to the messianic religion of the Fremen, first implanted by the Bene Gesserit, are taken from Arabic, including Muad'Dib (from Arabic: مؤدب, romanized: muʾaddib, lit. 'educator'), Usul (from Arabic: أصول, romanized: ʾuṣūl, lit. 'fundamental principles'), Shari-a (from Arabic: شريعة, romanized: šarīʿa, lit. 'sharia; path'), Shaitan (from Arabic: شيطان, romanized: šayṭān, lit. 'Shaitan; devil; fiend'), and jinn (from Arabic: جن, romanized: ǧinn, lit. 'jinn; spirit; demon; mythical being'). It is likely Herbert relied on second-hand resources such as phrasebooks and desert adventure stories to find these Arabic words and phrases for the Fremen. They are meaningful and carefully chosen, and help create an "imagined desert culture that resonates with exotic sounds, enigmas, and pseudo-Islamic references" and has a distinctly Bedouin aesthetic.

As a foreigner who adopts the ways of a desert-dwelling people and then leads them in a military capacity, Paul Atreides bears many similarities to the historical T. E. Lawrence. The 1962 biopic Lawrence of Arabia has also been identified as a potential influence, as has The Sabres of Paradise (1960), whose depiction of Imam Shamil and the Islamic culture of the Caucasus inspired some of the themes, characters, events and terminology of Dune.

The environment of the desert planet Arrakis was primarily inspired by the environments of the Middle East. Similarly, Arrakis as a bioregion is presented as a particular kind of political site: Herbert has made it resemble a desertified petrostate area. The Fremen people of Arrakis were influenced by the Bedouin tribes of Arabia, and the Mahdi prophecy originates from Islamic eschatology. Inspiration is also adopted from medieval historian Ibn Khaldun's cyclical history and his dynastic concept in North Africa, hinted at by Herbert's reference to Khaldun's book Kitāb al-ʿibar ("The Book of Lessons"). The fictionalized version of the "Kitab al-ibar" in Dune is a combination of a Fremen religious manual and a desert survival book.
==== Additional language and historic influences ====
In addition to Arabic, Dune derives words and names from a variety of other languages, including Hebrew, Navajo, Latin, Dutch ("Landsraad"), Chakobsa, the Nahuatl language of the Aztecs, Greek, Persian, Sanskrit ("prana bindu", "prajna"), Russian, Turkish, Finnish, and Old English. Bene Gesserit is simply the Latin for "It will have been well fought", also carrying the sense of "It will have been well managed", which stands as a statement of the order's goal and as a pledge of faithfulness to that goal. Critics tend to miss the literal meaning of the phrase, some positing that the term is derived from the Latin meaning "it will have been well borne", an interpretation that is not well supported by the order's doctrine in the story.

Through the inspiration from The Sabres of Paradise, there are also allusions to the tsarist-era Russian nobility and Cossacks. Frank Herbert stated that a bureaucracy that lasted long enough would become a hereditary nobility, and a significant theme behind the aristocratic families in Dune was "aristocratic bureaucracy", which he saw as analogous to the Soviet Union.
=== Environmentalism and ecology ===
Dune has been called the "first planetary ecology novel on a grand scale". Herbert hoped it would be seen as an "environmental awareness handbook" and said the title was meant to "echo the sound of 'doom'". It was reviewed in the best-selling countercultural Whole Earth Catalog in 1968 as a "rich re-readable fantasy with clear portrayal of the fierce environment it takes to cohere a community".

After the publication of Silent Spring by Rachel Carson in 1962, science fiction writers began treating the subject of ecological change and its consequences. Dune responded in 1965 with its complex descriptions of Arrakis life, from giant sandworms (for whom water is deadly) to smaller, mouse-like life forms adapted to live with limited water. Dune was followed in its creation of complex and unique ecologies by other science fiction books such as A Door into Ocean (1986) and Red Mars (1992). Environmentalists have pointed out that Dune's popularity as a novel depicting a planet as a complex—almost living—thing, in combination with the first images of Earth from space being published in the same time period, strongly influenced environmental movements such as the establishment of the international Earth Day.

While the genre of climate fiction was popularized in the 2010s in response to real global climate change, Dune as well as other early science fiction works from authors like J. G. Ballard (The Drowned World) and Kim Stanley Robinson (the Mars trilogy) have retroactively been considered pioneering examples of the genre.
=== Declining empires ===
The Imperium in Dune contains features of various empires in Europe and the Near East, including the Roman Empire, Holy Roman Empire, and Ottoman Empire. Lorenzo DiTommaso compared Dune's portrayal of the downfall of a galactic empire to Edward Gibbon's Decline and Fall of the Roman Empire, which argues that Christianity allied with the profligacy of the Roman elite led to the fall of Ancient Rome. In "The Articulation of Imperial Decadence and Decline in Epic Science Fiction" (2007), DiTommaso outlines similarities between the two works by highlighting the excesses of the Emperor on his home planet of Kaitain and of the Baron Harkonnen in his palace. The Emperor loses his effectiveness as a ruler through an excess of ceremony and pomp. The hairdressers and attendants he brings with him to Arrakis are even referred to as "parasites". The Baron Harkonnen is similarly corrupt and materially indulgent. Gibbon's Decline and Fall partly blames the fall of Rome on the rise of Christianity. Gibbon claimed that this exotic import from a conquered province weakened the soldiers of Rome and left it open to attack. The Emperor's Sardaukar fighters are little match for the Fremen of Dune not only because of the Sardaukar's overconfidence and the fact that Jessica and Paul have trained the Fremen in their battle tactics, but because of the Fremen's capacity for self-sacrifice. The Fremen put the community before themselves in every instance, while the world outside wallows in luxury at the expense of others.

The decline and long peace of the Empire sets the stage for revolution and renewal by genetic mixing of successful and unsuccessful groups through war, a process culminating in the Jihad led by Paul Atreides, described by Frank Herbert as depicting "war as a collective orgasm" (drawing on Norman Walter's 1950 The Sexual Cycle of Human Warfare), themes that would reappear in God Emperor of Dune's Scattering and Leto II's all-female Fish Speaker army.
=== Gender dynamics ===
Gender dynamics are complex in Dune. Within the Fremen sietch communities, women have almost full equality. They carry weapons and travel in raiding parties with men, fighting alongside the men when necessary. They can take positions of leadership as a Sayyadina or as a Reverend Mother (if they can survive the ritual of ingesting the Water of Life). Both of these sietch religious leaders are routinely consulted by the all-male Council and can have a decisive voice in all matters of sietch life, security and internal politics. They are also protected by the entire community. Due to the high mortality rate among their men, women outnumber men in most sietches. Polygamy is common, and sexual relationships are voluntary and consensual; as Stilgar says to Jessica, "women among us are not taken against their will."
In contrast, the Imperial aristocracy leaves young women of noble birth very little agency. Frequently trained by the Bene Gesserit, they are raised to eventually marry other aristocrats. Marriages between Major and Minor Houses are political tools to forge alliances or heal old feuds; women are given very little say in the matter. Many such marriages are quietly maneuvered by the Bene Gesserit to produce offspring with some genetic characteristics needed by the sisterhood's human-breeding program. In addition, such highly placed sisters are in a position to subtly influence their husbands' actions in ways that could move the politics of the Imperium toward Bene Gesserit goals.
The gom jabbar test of humanity is administered by the female Bene Gesserit order but rarely to males. The Bene Gesserit have seemingly mastered the unconscious and can play on the unconscious weaknesses of others using the Voice, yet their breeding program seeks after a male Kwisatz Haderach. Their plan is to produce a male who can "possess complete racial memory, both male and female," and look into the black hole in the collective unconscious that they fear. A central theme of the book is the connection, in Jessica's son, of this female aspect with his male aspect. This aligns with concepts in Jungian psychology, which features conscious/unconscious and taking/giving roles associated with males and females, as well as the idea of the collective unconscious. Paul's approach to power consistently requires his upbringing under the matriarchal Bene Gesserit, who operate as a long-dominating shadow government behind all of the great houses and their marriages or divisions. He is trained by Jessica in the Bene Gesserit Way, which includes prana-bindu training in nerve and muscle control and precise perception. Paul also receives Mentat training, thus helping prepare him to be a type of androgynous Kwisatz Haderach, a male Reverend Mother.

In a Bene Gesserit test early in the book, it is implied that people are generally "inhuman" in that they irrationally place desire over self-interest and reason. This applies Herbert's philosophy that humans are not created equal, while equal justice and equal opportunity are higher ideals than mental, physical, or moral equality.
=== Heroism ===
I am showing you the superhero syndrome and your own participation in it.
Throughout Paul's rise to superhuman status, he follows a plotline common to many stories describing the birth of a hero. He has unfortunate circumstances forced onto him. After a long period of hardship and exile, he confronts and defeats the source of evil in his tale. As such, Dune is representative of a general trend beginning in 1960s American science fiction in that it features a character who attains godlike status through scientific means. Eventually, Paul Atreides gains a level of omniscience which allows him to take over the planet and the galaxy, and causes the Fremen of Arrakis to worship him like a god. Author Frank Herbert said in 1979, "The bottom line of the Dune trilogy is: beware of heroes. Much better [to] rely on your own judgment, and your own mistakes." He wrote in 1985, "Dune was aimed at this whole idea of the infallible leader because my view of history says that mistakes made by a leader (or made in a leader's name) are amplified by the numbers who follow without question."

Juan A. Prieto-Pablos says Herbert achieves a new typology with Paul's superpowers, differentiating the heroes of Dune from earlier heroes such as Superman, van Vogt's Gilbert Gosseyn and Henry Kuttner's telepaths. Unlike previous superheroes who acquire their powers suddenly and accidentally, Paul's are the result of "painful and slow personal progress." And unlike other superheroes of the 1960s—who are the exception among ordinary people in their respective worlds—Herbert's characters grow their powers through "the application of mystical philosophies and techniques." For Herbert, the ordinary person can develop incredible fighting skills (Fremen, Ginaz swordsmen and Sardaukar) or mental abilities (Bene Gesserit, Mentats, Spacing Guild Navigators).
=== Zen and religion ===
Early in his newspaper career, Herbert was introduced to Zen by two Jungian psychologists, Ralph and Irene Slattery, who "gave a crucial boost to his thinking". Zen teachings ultimately had "a profound and continuing influence on [Herbert's] work". Throughout the Dune series and particularly in Dune, Herbert employs concepts and forms borrowed from Zen Buddhism. The Fremen are referred to as Zensunni adherents, and many of Herbert's epigraphs are Zen-spirited. In "Dune Genesis", Frank Herbert wrote:
What especially pleases me is to see the interwoven themes, the fugue like relationships of images that exactly replay the way Dune took shape. As in an Escher lithograph, I involved myself with recurrent themes that turn into paradox. The central paradox concerns the human vision of time. What about Paul's gift of prescience - the Presbyterian fixation? For the Delphic Oracle to perform, it must tangle itself in a web of predestination. Yet predestination negates surprises and, in fact, sets up a mathematically enclosed universe whose limits are always inconsistent, always encountering the unprovable. It's like a koan, a Zen mind breaker. It's like the Cretan Epimenides saying, "All Cretans are liars."
Brian Herbert called the Dune universe "a spiritual melting pot", noting that his father incorporated elements of a variety of religions, including Buddhism, Sufi mysticism and other Islamic belief systems, Catholicism, Protestantism, Judaism, and Hinduism. He added that Frank Herbert's fictional future in which "religious beliefs have combined into interesting forms" represents the author's solution to eliminating arguments between religions, each of which claimed to have "the one and only revelation."
=== Asimov's Foundation ===
Tim O'Reilly suggests that Herbert also wrote Dune as a counterpoint to Isaac Asimov's Foundation series. In his monograph on Frank Herbert, O'Reilly wrote that "Dune is clearly a commentary on the Foundation trilogy. Herbert has taken a look at the same imaginative situation that provoked Asimov's classic—the decay of a galactic empire—and restated it in a way that draws on different assumptions and suggests radically different conclusions. The twist he has introduced into Dune is that the Mule, not the Foundation, is his hero." According to O'Reilly, Herbert bases the Bene Gesserit on the scientific shamans of the Foundation, though they use biological rather than statistical science. In contrast to the Foundation series and its praise of science and rationality, Dune proposes that the unconscious and unexpected are actually what are needed for humanity.

Both Herbert and Asimov explore the implications of prescience (i.e., visions of the future) both psychologically and socially. The Foundation series deploys a broadly determinist approach to prescient vision rooted in mathematical reasoning on a macroscopic social level. Dune, by contrast, invents a biologically rooted power of prescience that becomes determinist when the user actively relies on it to navigate past an undefined threshold of detail. Herbert's eugenically produced and spice-enhanced prescience is also personalized to individual actors whose roles in later books constrain each other's visions, rendering the future more or less mutable as time progresses. In what might be a comment on Foundation, Herbert's most powerfully prescient being in God Emperor of Dune laments the boredom engendered by prescience, and values surprises, especially regarding one's death, as a psychological necessity.

However, both works contain a similar theme of the restoration of civilization and seem to make the fundamental assumption that "political maneuvering, the need to control material resources, and friendship or mating bonds will be fundamentally the same in the future as they are now."
== Critical reception ==
Dune tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and won the inaugural Nebula Award for Best Novel. Reviews of the novel have been largely positive, and Dune is considered by some critics to be the best science fiction book ever written. The novel has been translated into dozens of languages, and has sold almost 20 million copies. Dune has been regularly cited as one of the world's best-selling science fiction novels.

Arthur C. Clarke described Dune as "unique" and wrote, "I know nothing comparable to it except The Lord of the Rings." Robert A. Heinlein described the novel as "powerful, convincing, and most ingenious." It was described as "one of the monuments of modern science fiction" by the Chicago Tribune, and P. Schuyler Miller called Dune "one of the landmarks of modern science fiction ... an amazing feat of creation." The Washington Post described it as "a portrayal of an alien society more complete and deeply detailed than any other author in the field has managed ... a story absorbing equally for its action and philosophical vistas ... An astonishing science fiction phenomenon." Algis Budrys praised Dune for the vividness of its imagined setting, saying "The time lives. It breathes, it speaks, and Herbert has smelt it in his nostrils". He found that the novel, however, "turns flat and tails off at the end. ... [T]ruly effective villains simply simper and melt; fierce men and cunning statesmen and seeresses all bend before this new Messiah". Budrys faulted in particular Herbert's decision to kill Paul's infant son offstage, with no apparent emotional impact, saying "you cannot be so busy saving a world that you cannot hear an infant shriek". After criticizing unrealistic science fiction, Carl Sagan in 1978 listed Dune as among stories "that are so tautly constructed, so rich in the accommodating details of an unfamiliar society that they sweep me along before I have even a chance to be critical".

The Louisville Times wrote, "Herbert's creation of this universe, with its intricate development and analysis of ecology, religion, politics, and philosophy, remains one of the supreme and seminal achievements in science fiction." Writing for The New Yorker, Jon Michaud praised Herbert's "clever authorial decision" to exclude robots and computers ("two staples of the genre") from his fictional universe, but suggested that this may be one explanation why Dune lacks "true fandom among science-fiction fans" to the extent that it "has not penetrated popular culture in the way that The Lord of the Rings and Star Wars have". Tamara I. Hladik wrote that the story "crafts a universe where lesser novels promulgate excuses for sequels. All its rich elements are in balance and plausible—not the patchwork confederacy of made-up languages, contrived customs, and meaningless histories that are the hallmark of so many other, lesser novels."

On November 5, 2019, BBC News listed Dune on its list of the 100 most influential novels. J. R. R. Tolkien refused to review Dune, on the grounds that he disliked it "with some intensity" and thus felt it would be unfair to Herbert, another working author, if he gave an honest review of the book.
== First edition prints and manuscripts ==
The first edition of Dune is one of the most valuable in science fiction book collecting. Copies have been sold for more than $10,000 at auction. The Chilton first edition of the novel is 9¼ inches (235 mm) tall, with bluish green boards and a price of $5.95 on the dust jacket, and notes Toronto as the Canadian publisher on the copyright page. Up to this point, Chilton had been publishing only automobile repair manuals.

California State University, Fullerton's Pollak Library has several of Herbert's draft manuscripts of Dune and other works, with the author's notes, in their Frank Herbert Archives.
== Sequels and prequels ==
After Dune proved to be a critical and financial success for Herbert, he was able to devote himself full time to writing additional novels in the series. He had already drafted parts of the second and third while writing Dune. The series included Dune Messiah (1969), Children of Dune (1976), God Emperor of Dune (1981), Heretics of Dune (1984), and Chapterhouse: Dune (1985), each sequentially continuing the narrative from Dune. Herbert died on February 11, 1986.

Herbert's son, Brian Herbert, had found several thousand pages of notes left by his father that outlined ideas for other narratives related to Dune. Brian Herbert enlisted author Kevin J. Anderson to help build out prequel novels to the events of Dune. Brian Herbert's and Anderson's Dune prequels first started publication in 1999, and have led to additional stories that take place between those of Frank Herbert's books. The notes for what would have been Dune 7 also enabled them to publish Hunters of Dune (2006) and Sandworms of Dune (2007), sequels to Frank Herbert's final novel Chapterhouse: Dune, which complete the chronological progression of his original series and wrap up storylines that began in Heretics of Dune.
== Adaptations ==
Dune has been considered an "unfilmable" and "uncontainable" work to adapt from novel to film or other visual media. As Wired described it, "It has four appendices and a glossary of its own gibberish, and its action takes place on two planets, one of which is a desert overrun by worms the size of airport runways. Lots of important people die or try to kill each other, and they're all tethered to about eight entangled subplots." There have been several attempts at this difficult conversion, with varying degrees of success.
=== Early stalled attempts ===
In 1971, the production company Apjac International (APJ) (headed by Arthur P. Jacobs) optioned the rights to film Dune. As Jacobs was busy with other projects, such as the sequel to Planet of the Apes, Dune was delayed for another year. Jacobs' first choice for director was David Lean, but he turned down the offer. Charles Jarrott was also considered to direct. Work was also under way on a script while the hunt for a director continued. Initially, the first treatment had been handled by Robert Greenhut, the producer who had lobbied Jacobs to make the movie in the first place, but subsequently Rospo Pallenberg was approached to write the script, with shooting scheduled to begin in 1974. However, Jacobs died in 1973.
In December 1974, a French consortium led by Jean-Paul Gibon purchased the film rights from APJ, with Alejandro Jodorowsky set to direct. In 1975, Jodorowsky planned to film the story as a 14-hour feature, set to star his own son Brontis Jodorowsky in the lead role of Paul Atreides, Salvador Dalí as Shaddam IV, Padishah Emperor, Amanda Lear as Princess Irulan, Orson Welles as Baron Vladimir Harkonnen, Gloria Swanson as Reverend Mother Gaius Helen Mohiam, David Carradine as Duke Leto Atreides, Geraldine Chaplin as Lady Jessica, Alain Delon as Duncan Idaho, Hervé Villechaize as Gurney Halleck, Udo Kier as Piter De Vries, and Mick Jagger as Feyd-Rautha. It was at first proposed to score the film with original music by Karlheinz Stockhausen, Henry Cow, and Magma; later on, the soundtrack was to be provided by Pink Floyd. Jodorowsky set up a pre-production unit in Paris consisting of Chris Foss, a British artist who designed covers for science fiction periodicals, Jean Giraud (Moebius), a French illustrator who created and also wrote and drew for Metal Hurlant magazine, and H. R. Giger. Moebius began designing creatures and characters for the film, while Foss was brought in to design the film's space ships and hardware. Giger began designing the Harkonnen Castle based on Moebius's storyboards. Dan O'Bannon was to head the special effects department.

Dalí was cast as the Emperor. Dalí later demanded to be paid $100,000 per hour; Jodorowsky agreed, but tailored Dalí's part to be filmed in one hour, drafting plans for other scenes of the emperor to use a mechanical mannequin as substitute for Dalí. According to Giger, Dalí was "later invited to leave the film because of his pro-Franco statements". Just as the storyboards, designs, and script were finished, the financial backing dried up. Frank Herbert traveled to Europe in 1976 to find that $2 million of the $9.5 million budget had already been spent in pre-production, and that Jodorowsky's script would result in a 14-hour movie ("It was the size of a phone book", Herbert later recalled). Jodorowsky took creative liberties with the source material, but Herbert said that he and Jodorowsky had an amicable relationship. Jodorowsky said in 1985 that he found the Dune story mythical and had intended to recreate it rather than adapt the novel; though he had an "enthusiastic admiration" for Herbert, Jodorowsky said he had done everything possible to distance the author and his input from the project. Although Jodorowsky was embittered by the experience, he said the Dune project changed his life, and some of the ideas were used in his and Moebius's The Incal. O'Bannon entered a psychiatric hospital after the production failed, then worked on 13 scripts, the last of which became Alien. A 2013 documentary, Jodorowsky's Dune, was made about Jodorowsky's failed attempt at an adaptation.
In 1976, Dino De Laurentiis acquired the rights from Gibon's consortium. De Laurentiis commissioned Herbert to write a new screenplay in 1978; the script Herbert turned in was 175 pages long, the equivalent of nearly three hours of screen time. De Laurentiis then hired director Ridley Scott in 1979, with Rudy Wurlitzer writing the screenplay and H. R. Giger retained from the Jodorowsky production; Scott and Giger had also just worked together on the film Alien, after O'Bannon recommended the artist. Scott intended to split the novel into two movies. He worked on three drafts of the script, using The Battle of Algiers as a point of reference, before moving on to direct another science fiction film, Blade Runner (1982). As he recalls, the pre-production process was slow, and finishing the project would have been even more time-intensive:
But after seven months I dropped out of Dune, by then Rudy Wurlitzer had come up with a first-draft script which I felt was a decent distillation of Frank Herbert's. But I also realised Dune was going to take a lot more work—at least two and a half years' worth. And I didn't have the heart to attack that because my older brother Frank unexpectedly died of cancer while I was prepping the De Laurentiis picture. Frankly, that freaked me out. So I went to Dino and told him the Dune script was his.
—From Ridley Scott: The Making of his Movies by Paul M. Sammon
=== 1984 film by David Lynch ===
In 1981, the nine-year film rights were set to expire. De Laurentiis re-negotiated the rights from the author, adding to them the rights to the Dune sequels (written and unwritten). After seeing The Elephant Man, De Laurentiis' daughter Raffaella decided that David Lynch should direct the movie. Around that time Lynch received several other directing offers, including Return of the Jedi. He agreed to direct Dune and write the screenplay even though he had not read the book, was not familiar with the story, and had never been interested in science fiction. Lynch worked on the script for six months with Eric Bergren and Christopher De Vore. The team yielded two drafts of the script before splitting over creative differences. Lynch would subsequently work on five more drafts. Production was troubled by problems at the Mexican studio, hampering the film's timeline. Lynch ended up producing a nearly three-hour long film, but at the demand of Universal Pictures, the film's distributor, he cut it back to about two hours, hastily filming additional scenes to make up for some of the cut footage.

This first film of Dune, directed by Lynch, was released in 1984, nearly 20 years after the book's publication. Though Herbert said the book's depth and symbolism seemed to intimidate many filmmakers, he was pleased with the film, saying that "They've got it. It begins as Dune does. And I hear my dialogue all the way through. There are some interpretations and liberties, but you're gonna come out knowing you've seen Dune." Reviews of the film were negative, saying that it was incomprehensible to those unfamiliar with the book, and that fans would be disappointed by the way it strayed from the book's plot. Upon release for television and other forms of home media, Universal opted to reintroduce much of the footage that Lynch had cut, creating a version over three hours long with extensive monologue exposition. Lynch was extremely displeased with this move, demanded that Universal replace his name on these cuts with the pseudonym "Alan Smithee", and has generally distanced himself from the film since.
=== 2000 miniseries by John Harrison ===
In 2000, John Harrison adapted the novel into Frank Herbert's Dune, a miniseries which premiered on the American Sci-Fi Channel. As of 2004, the miniseries was one of the three highest-rated programs broadcast on the Sci-Fi Channel.
=== Further film attempts ===
In 2008, Paramount Pictures announced that they would produce a new film based on the book, with Peter Berg attached to direct. Producer Kevin Misher, who spent a year securing the rights from the Herbert estate, was to be joined by Richard Rubinstein and John Harrison (of both Sci-Fi Channel miniseries) as well as Sarah Aubrey and Mike Messina. The producers stated that they were going for a "faithful adaptation" of the novel, and considered "its theme of finite ecological resources particularly timely." Science fiction author Kevin J. Anderson and Frank Herbert's son Brian Herbert, who had together written multiple Dune sequels and prequels since 1999, were attached to the project as technical advisors. In October 2009, Berg dropped out of the project, later saying that it "for a variety of reasons wasn't the right thing" for him. Subsequently, with a script draft by Joshua Zetumer, Paramount reportedly sought a new director who could do the film for under $175 million. In 2010, Pierre Morel was signed on to direct, with screenwriter Chase Palmer incorporating Morel's vision of the project into Zetumer's original draft. By November 2010, Morel left the project. Paramount finally dropped plans for a remake in March 2011.
=== Films by Denis Villeneuve ===
In November 2016, Legendary Entertainment acquired the film and TV rights for Dune. Variety reported in December 2016 that Denis Villeneuve was in negotiations to direct the project, which was confirmed in February 2017. In April 2017, Legendary announced that Eric Roth would write the screenplay. Villeneuve explained in March 2018 that his adaptation would be split into two films, with the first installment scheduled to begin production in 2019. Casting includes Timothée Chalamet as Paul Atreides, Dave Bautista as Rabban, Stellan Skarsgård as Baron Harkonnen, Rebecca Ferguson as Lady Jessica, Charlotte Rampling as Reverend Mother Mohiam, Oscar Isaac as Duke Leto Atreides, Zendaya as Chani, Javier Bardem as Stilgar, Josh Brolin as Gurney Halleck, Jason Momoa as Duncan Idaho, David Dastmalchian as Piter De Vries, Chang Chen as Dr. Yueh, and Stephen Henderson as Thufir Hawat. Warner Bros. Pictures distributed the film, which had its initial premiere on September 3, 2021, at the Venice Film Festival, and wide release in both theaters and streaming on HBO Max on October 21, 2021, as part of Warner Bros.'s approach to handling the impact of the COVID-19 pandemic on the film industry. The film received "generally favorable reviews" on Metacritic. It has gone on to win multiple awards and was named by the National Board of Review as one of the 10 best films of 2021, as well as by the American Film Institute in its annual top 10 list. The film went on to be nominated for ten Academy Awards, winning six, the most wins of the night for any film in contention.

A sequel, Dune: Part Two, was scheduled for release on November 3, 2023, but will now instead be released on March 15, 2024, amid the 2023 SAG-AFTRA strike.
=== Audiobooks ===
In 1993, Recorded Books Inc. released a 20-disc audiobook narrated by George Guidall. In 2007, Audio Renaissance released an audiobook narrated by Simon Vance with some parts performed by Scott Brick, Orlagh Cassidy, Euan Morton, and other performers.
== Cultural influence ==
Dune has been widely influential, inspiring numerous novels, music, films, television, games, and comic books. It is considered one of the greatest and most influential science fiction novels of all time, with numerous modern science fiction works such as Star Wars owing their existence to Dune. Dune has also been referenced in numerous other works of popular culture, including Star Trek, Chronicles of Riddick, The Kingkiller Chronicle and Futurama. Dune was cited as a source of inspiration for Hayao Miyazaki's anime film Nausicaä of the Valley of the Wind (1984) for its post-apocalyptic world.
Dune was parodied in 1984's National Lampoon's Doon by Ellis Weiner, which William F. Touponce called "something of a tribute to Herbert's success on college campuses", noting that "the only other book to have been so honored is Tolkien's The Lord of the Rings," which was parodied by The Harvard Lampoon in 1969.
=== Music ===
In 1978, French electronic musician Richard Pinhas released the nine-track Dune-inspired album Chronolyse, which includes the seven-part Variations sur le thème des Bene Gesserit.
In 1979, German electronic music pioneer Klaus Schulze released an LP titled Dune featuring motifs and lyrics inspired by the novel.
A similar musical project, Visions of Dune, was released also in 1979 by Zed (a pseudonym of French electronic musician Bernard Sjazner).
Heavy metal band Iron Maiden wrote the song "To Tame a Land" based on the Dune story. It appears as the closing track to their 1983 album Piece of Mind. The original working title of the song was "Dune"; however, the band was denied permission to use it, with Frank Herbert's agents stating "Frank Herbert doesn't like rock bands, particularly heavy rock bands, and especially bands like Iron Maiden".
Dune inspired the German happy hardcore band Dune, who have released several albums with space travel-themed songs.
The progressive hardcore band Shai Hulud took their name from Dune.
"Traveller in Time", from the 1991 Blind Guardian album Tales from the Twilight World, is based mostly on Paul Atreides' visions of future and past.
The title of the 1993 Fear Factory album Fear Is the Mindkiller is a quote from the "litany against fear".
The song "Near Fantastica", from the Matthew Good album Avalanche, makes reference to the "litany against fear", repeating "can't feel fear, fear's the mind killer" through a section of the song.
In the Fatboy Slim song "Weapon of Choice", the line "If you walk without rhythm/You won't attract the worm" is a near quotation from the sections of the novel in which Stilgar teaches Paul to ride sandworms.
Dune also inspired the 1999 album The 2nd Moon by the German death metal band Golem, which is a concept album about the series.
Dune influenced Thirty Seconds to Mars on their self-titled debut album.
The Youngblood Brass Band's song "Is an Elegy" on Center:Level:Roar references "Muad'Dib", "Arrakis" and other elements from the novel.
The debut album of Canadian musician Grimes, called Geidi Primes, is a concept album based on Dune.
Japanese singer Kenshi Yonezu released a song titled "Dune", also known as "Sand Planet". The song was released in 2017 and was created using the voice synthesizer Hatsune Miku for her 10th anniversary.
"Fear is the Mind Killer", a song released in 2018 by Zheani (an Australian rapper) uses a quote from Dune.
"Litany Against Fear" is a spoken track released in 2018 under the 'Eight' album by Zheani. She recites an extract from Dune.
Sleep's 2018 album The Sciences features a song, "Giza Butler", that references several aspects of Dune.
Tool's 2019 album Fear Inoculum has a song entitled "Litanie contre la peur (Litany against fear)".
"Rare to Wake", from Shannon Lay's album Geist (2019), is inspired by Dune.
Heavy metal band Diamond Head based the song "The Sleeper" and its prelude, both from the album The Coffin Train, on the series.
=== Games ===
There have been a number of games based on the book, starting with the strategy–adventure game Dune (1992). The most important game adaptation is Dune II (1992), which established the conventions of modern real-time strategy games and is considered to be among the most influential video games of all time.
The online game Lost Souls includes Dune-derived elements, including sandworms and melange—addiction to which can produce psychic talents. The 2016 game Enter the Gungeon features the spice melange as a random item which gives the player progressively stronger abilities and penalties with repeated uses, mirroring the long-term effects melange has on users.
Rick Priestley cites Dune as a major influence on his 1987 wargame, Warhammer 40,000.
In 2023, Funcom announced Dune: Awakening, an upcoming massively multiplayer online game set in the universe of Dune.
=== Space exploration ===
The Apollo 15 astronauts named a small crater on Earth's Moon after the novel during the 1971 mission, and the name was formally adopted by the International Astronomical Union in 1973. Since 2009, the names of planets from the Dune novels have been adopted for the real-world nomenclature of plains and other features on Saturn's moon Titan, like Arrakis Planitia.
== See also ==
Soft science fiction – Sub-genre of science fiction emphasizing "soft" sciences or human emotions
Hydraulic empire – Government by control of access to water
== Further reading ==
Clute, John; Nicholls, Peter (1995). The Encyclopedia of Science Fiction. New York: St. Martin's Press. p. 1386. ISBN 978-0-312-13486-0.
Clute, John; Nicholls, Peter (1995). The Multimedia Encyclopedia of Science Fiction (CD-ROM). Danbury, CT: Grolier. ISBN 978-0-7172-3999-3.
Huddleston, Tom (2023). The Worlds of Dune: The Places and Cultures That Inspired Frank Herbert. Minneapolis: Quarto Publishing Group UK.
Jakubowski, Maxim; Edwards, Malcolm (1983). The Complete Book of Science Fiction and Fantasy Lists. St Albans, Herts, UK: Granada Publishing Ltd. p. 350. ISBN 978-0-586-05678-3.
Kennedy, Kara (2022). Frank Herbert's Dune: A Critical Companion. Cham, Switzerland: Palgrave Macmillan.
Kennedy, Kara (2020). Women's Agency in the Dune Universe: Tracing Women's Liberation through Science Fiction. Cham, Switzerland: Palgrave Macmillan.
Nardi, Dominic J.; Brierly, N. Trevor, eds. (2022). Discovering Dune: Essays on Frank Herbert's Epic Saga. Jefferson, NC: McFarland & Co.
Nicholas, Jeffery, ed. (2011). Dune and Philosophy: Weirding Way of Mentat. Chicago: Open Court.
Nicholls, Peter (1979). The Encyclopedia of Science Fiction. St Albans, Herts, UK: Granada Publishing Ltd. p. 672. ISBN 978-0-586-05380-5.
O'Reilly, Timothy (1981). Frank Herbert. New York: Frederick Ungar.
Pringle, David (1990). The Ultimate Guide to Science Fiction. London: Grafton Books Ltd. p. 407. ISBN 978-0-246-13635-0.
Tuck, Donald H. (1974). The Encyclopedia of Science Fiction and Fantasy. Chicago: Advent. p. 136. ISBN 978-0-911682-20-5.
Williams, Kevin C. (2013). The Wisdom of the Sand: Philosophy and Frank Herbert's Dune. New York: Hampton Press.
== External links ==
Official website for Dune and its sequels
Dune title listing at the Internet Speculative Fiction Database
Turner, Paul (October 1973). "Vertex Interviews Frank Herbert" (Interview). Vol. 1, no. 4. Archived from the original on May 19, 2009.
Spark Notes: Dune, detailed study guide
DuneQuotes.com – Collection of quotes from the Dune series
Dune by Frank Herbert, reviewed by Ted Gioia (Conceptual Fiction)
"Frank Herbert Biography and Bibliography at LitWeb.net". www.litweb.net. Archived from the original on April 2, 2009. Retrieved January 2, 2009.
Works of Frank Herbert at Curlie
Timberg, Scott (April 18, 2010). "Frank Herbert's Dune holds timely – and timeless – appeal". Los Angeles Times. Archived from the original on December 3, 2013. Retrieved November 27, 2013.
Walton, Jo (January 12, 2011). "In league with the future: Frank Herbert's Dune (Review)". Tor.com. Retrieved November 27, 2013.
Leonard, Andrew (June 4, 2015). "To Save California, Read Dune". Nautilus. Archived from the original on November 4, 2017. Retrieved June 15, 2015.
Dune by Frank Herbert – Foreshadowing & Dedication at Fact Behind Fiction
Frank Herbert by Tim O'Reilly
DuneScholar.com – Collection of scholarly essays | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-parent\dune.txt |
.md |
# neo4j-parent
This template allows you to balance precise embeddings and context retention by splitting documents into smaller chunks and retrieving their original or larger text information.
Using a Neo4j vector index, the package queries child nodes using vector similarity search and retrieves the corresponding parent's text by defining an appropriate `retrieval_query` parameter.
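For illustration, here is a minimal sketch of how such a `retrieval_query` could be wired up with the `Neo4jVector` store from `langchain_community`. The relationship type `HAS_CHILD`, the stored properties, and the use of the `langchain_openai` package are assumptions for the sketch, not necessarily what this template's code does:

```python
# Minimal sketch: vector search matches child chunks; the retrieval_query
# hops from the matched child (`node`) to its parent and returns the
# parent's text instead of the child's.
from langchain_community.vectorstores import Neo4jVector
from langchain_openai import OpenAIEmbeddings

# `node` and `score` are bound by Neo4jVector for each hit; the query
# must return `text`, `score`, and `metadata`.
retrieval_query = """
MATCH (node)<-[:HAS_CHILD]-(parent)
RETURN parent.text AS text, score, {source: id(parent)} AS metadata
"""

vectorstore = Neo4jVector.from_existing_index(
    OpenAIEmbeddings(),
    index_name="retrieval",  # the index created by ingest.py
    retrieval_query=retrieval_query,
)
docs = vectorstore.similarity_search("What is the spice melange?")
```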
## Environment Setup
You need to define the following environment variables:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data
If you want to populate the DB with some example data, you can run `python ingest.py`.
The script processes and stores sections of the text from the file `dune.txt` in a Neo4j graph database.
First, the text is divided into larger chunks ("parents") and then further subdivided into smaller chunks ("children"), where both parent and child chunks overlap slightly to maintain context.
After storing these chunks in the database, embeddings for the child nodes are computed using OpenAI's embeddings and stored back in the graph for future retrieval or analysis.
Additionally, a vector index named `retrieval` is created for efficient querying of these embeddings.
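As a rough sketch of the parent/child idea (the chunk sizes, overlap values, and file handling below are illustrative assumptions, not the exact values used by `ingest.py`):

```python
# Illustrative parent/child chunking: large "parent" chunks preserve
# context, small "child" chunks give precise embeddings.
from langchain_text_splitters import RecursiveCharacterTextSplitter

parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=40)

with open("dune.txt") as f:
    text = f.read()

for parent in parent_splitter.split_text(text):
    for child in child_splitter.split_text(parent):
        # Store (parent)-[:HAS_CHILD]->(child) in Neo4j, then embed the
        # child text and save the vector on the child node.
        pass
```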
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-parent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-parent
```
And add the following code to your `server.py` file:
```python
from neo4j_parent import chain as neo4j_parent_chain
add_routes(app, neo4j_parent_chain, path="/neo4j-parent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-parent/playground](http://127.0.0.1:8000/neo4j-parent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-parent")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-parent\README.md |
.md | # neo4j-semantic-layer
This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using OpenAI function calling.
The semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's intent.
Learn more about the semantic layer template in the [corresponding blog post](https://medium.com/towards-data-science/enhancing-interaction-between-language-models-and-graph-databases-via-a-semantic-layer-0a78ad3eba49).
![Diagram illustrating the workflow of the Neo4j semantic layer with an agent interacting with tools like Information, Recommendation, and Memory, connected to a knowledge graph.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-semantic-layer/static/workflow.png "Neo4j Semantic Layer Workflow Diagram")
## Tools
The agent utilizes several tools to interact with the Neo4j graph database effectively (a sketch of one such tool follows the list):
1. **Information tool**:
- Retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.
2. **Recommendation Tool**:
- Provides movie recommendations based upon user preferences and input.
3. **Memory Tool**:
- Stores information about user preferences in the knowledge graph, allowing for a personalized experience over multiple interactions.
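As a hedged illustration, one such tool could be defined roughly as follows; the fulltext index name `entity`, the Cypher query, and the function body are assumptions for the sketch, not the template's actual implementation:

```python
# Sketch of an Information-style tool the agent can call via function calling.
from langchain.tools import tool
from langchain_community.graphs import Neo4jGraph

graph = Neo4jGraph()  # reads NEO4J_URI / NEO4J_USERNAME / NEO4J_PASSWORD

@tool
def information(entity: str) -> str:
    """Look up details about a movie or person by name."""
    # Map free-form user input to graph nodes via a fulltext index.
    records = graph.query(
        """
        CALL db.index.fulltext.queryNodes('entity', $name)
        YIELD node, score
        RETURN coalesce(node.title, node.name) AS name, labels(node) AS labels
        ORDER BY score DESC LIMIT 3
        """,
        {"name": entity},
    )
    return str(records)
```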
## Environment Setup
You need to define the following environment variables:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data
If you want to populate the DB with an example movie dataset, you can run `python ingest.py`.
The script imports information about movies and their ratings by users.
Additionally, the script creates two [fulltext indices](https://neo4j.com/docs/cypher-manual/current/indexes-for-full-text-search/), which are used to map information from user input to the database.
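For reference, a fulltext index of this kind can be created with a single Cypher statement; the index name, labels, and properties below are illustrative assumptions, not necessarily those created by the script:

```python
# Illustrative only: create a fulltext index over movie titles and person names.
from langchain_community.graphs import Neo4jGraph

graph = Neo4jGraph()  # reads NEO4J_URI / NEO4J_USERNAME / NEO4J_PASSWORD
graph.query(
    "CREATE FULLTEXT INDEX entity IF NOT EXISTS "
    "FOR (n:Movie|Person) ON EACH [n.title, n.name]"
)
```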
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-semantic-layer
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-semantic-layer
```
And add the following code to your `server.py` file:
```python
from neo4j_semantic_layer import agent_executor as neo4j_semantic_agent
add_routes(app, neo4j_semantic_agent, path="/neo4j-semantic-layer")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-semantic-layer/playground](http://127.0.0.1:8000/neo4j-semantic-layer/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-semantic-layer")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-semantic-layer\README.md |
.md | # neo4j-semantic-ollama
This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using Mixtral as a JSON-based agent.
The semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's intent.
Learn more about the semantic layer template in the [corresponding blog post](https://medium.com/towards-data-science/enhancing-interaction-between-language-models-and-graph-databases-via-a-semantic-layer-0a78ad3eba49).
![Diagram illustrating the workflow of the Neo4j semantic layer with an agent interacting with tools like Information, Recommendation, and Memory, connected to a knowledge graph.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-semantic-ollama/static/workflow.png "Neo4j Semantic Layer Workflow Diagram")
## Tools
The agent utilizes several tools to interact with the Neo4j graph database effectively:
1. **Information tool**:
- Retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.
2. **Recommendation Tool**:
- Provides movie recommendations based upon user preferences and input.
3. **Memory Tool**:
- Stores information about user preferences in the knowledge graph, allowing for a personalized experience over multiple interactions.
4. **Smalltalk Tool**:
- Allows an agent to deal with smalltalk.
## Environment Setup
Before using this template, you need to set up Ollama and a Neo4j database.
1. Follow instructions [here](https://python.langchain.com/docs/integrations/chat/ollama) to download Ollama.
2. Download your LLM of interest:
* This package uses `mixtral`: `ollama pull mixtral`
* You can choose from many LLMs [here](https://ollama.ai/library)
You need to define the following environment variables:
```
OLLAMA_BASE_URL=<YOUR_OLLAMA_URL>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
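Once these are set, a quick sanity check of the Ollama connection might look like the following minimal sketch (the prompt and the fallback URL are illustrative assumptions):

```python
# Verify that LangChain can reach the local Ollama server and the mixtral model.
import os

from langchain_community.chat_models import ChatOllama

llm = ChatOllama(
    model="mixtral",
    base_url=os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
    format="json",  # the JSON-based agent expects JSON-formatted output
)
print(llm.invoke("Return a JSON object with a single key 'status'.").content)
```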
## Populating with data
If you want to populate the DB with an example movie dataset, you can run `python ingest.py`.
The script imports information about movies and their ratings by users.
Additionally, the script creates two [fulltext indices](https://neo4j.com/docs/cypher-manual/current/indexes-for-full-text-search/), which are used to map information from user input to the database.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-semantic-ollama
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-semantic-ollama
```
And add the following code to your `server.py` file:
```python
from neo4j_semantic_ollama import agent_executor as neo4j_semantic_agent
add_routes(app, neo4j_semantic_agent, path="/neo4j-semantic-ollama")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-semantic-ollama/playground](http://127.0.0.1:8000/neo4j-semantic-ollama/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-semantic-ollama")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-semantic-ollama\README.md |
.txt | Dune is a 1965 epic science fiction novel by American author Frank Herbert, originally published as two separate serials in Analog magazine. It tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and it won the inaugural Nebula Award for Best Novel. It is the first installment of the Dune Chronicles. It is one of the world's best-selling science fiction novels.
Dune is set in the distant future in a feudal interstellar society in which various noble houses control planetary fiefs. It tells the story of young Paul Atreides, whose family accepts the stewardship of the planet Arrakis. While the planet is an inhospitable and sparsely populated desert wasteland, it is the only source of melange, or "spice", a drug that extends life and enhances mental abilities. Melange is also necessary for space navigation, which requires a kind of multidimensional awareness and foresight that only the drug provides. As melange can only be produced on Arrakis, control of the planet is a coveted and dangerous undertaking. The story explores the multilayered interactions of politics, religion, ecology, technology, and human emotion, as the factions of the empire confront each other in a struggle for the control of Arrakis and its spice.
Herbert wrote five sequels: Dune Messiah, Children of Dune, God Emperor of Dune, Heretics of Dune, and Chapterhouse: Dune. Following Herbert's death in 1986, his son Brian Herbert and author Kevin J. Anderson continued the series in over a dozen additional novels since 1999.
Adaptations of the novel to cinema have been notoriously difficult and complicated. In the 1970s, cult filmmaker Alejandro Jodorowsky attempted to make a film based on the novel. After three years of development, the project was canceled due to a constantly growing budget. In 1984, a film adaptation directed by David Lynch was released to mostly negative responses from critics and failure at the box office, although it later developed a cult following. The book was also adapted into the 2000 Sci-Fi Channel miniseries Frank Herbert's Dune and its 2003 sequel Frank Herbert's Children of Dune (the latter of which combines the events of Dune Messiah and Children of Dune). A second film adaptation directed by Denis Villeneuve was released on October 21, 2021, to positive reviews. It grossed $401 million worldwide and went on to be nominated for ten Academy Awards, winning six. Villeneuve's film covers roughly the first half of the original novel; a sequel, which will cover the remaining story, will be released in March 2024.
The series has also been used as the basis for several board, role-playing, and video games.
Since 2009, the names of planets from the Dune novels have been adopted for the real-life nomenclature of plains and other features on Saturn's moon Titan.
== Origins ==
After his novel The Dragon in the Sea was published in 1957, Herbert traveled to Florence, Oregon, at the north end of the Oregon Dunes. Here, the United States Department of Agriculture was attempting to use poverty grasses to stabilize the sand dunes. Herbert claimed in a letter to his literary agent, Lurton Blassingame, that the moving dunes could "swallow whole cities, lakes, rivers, highways." Herbert's article on the dunes, "They Stopped the Moving Sands", was never completed (and only published decades later in The Road to Dune), but its research sparked Herbert's interest in ecology and deserts.
Herbert further drew inspiration from Native American mentors like "Indian Henry" (as Herbert referred to the man to his son; likely a Henry Martin of the Hoh tribe) and Howard Hansen. Both Martin and Hansen grew up on the Quileute reservation near Herbert's hometown. According to historian Daniel Immerwahr, Hansen regularly shared his writing with Herbert. "White men are eating the earth," Hansen told Herbert in 1958, after sharing a piece on the effect of logging on the Quileute reservation. "They're gonna turn this whole planet into a wasteland, just like North Africa." The world could become a "big dune," Herbert responded in agreement.
Herbert was also interested in the idea of the superhero mystique and messiahs. He believed that feudalism was a natural condition humans fell into, where some led and others gave up the responsibility of making decisions and just followed orders. He found that desert environments have historically given birth to several major religions with messianic impulses. He decided to join his interests together so he could play religious and ecological ideas against each other. In addition, he was influenced by the story of T. E. Lawrence and the "messianic overtones" in Lawrence's involvement in the Arab Revolt during World War I. In an early version of Dune, the hero was actually very similar to Lawrence of Arabia, but Herbert decided the plot was too straightforward and added more layers to his story.
Herbert also drew heavy inspiration from Lesley Blanch's The Sabres of Paradise (1960), a narrative history recounting a mid-19th century conflict in the Caucasus between rugged Islamized Caucasian tribes and the expansive Russian Empire. Language used on both sides of that conflict became terms in Herbert's world—chakobsa, a Caucasian hunting language, becomes a battle language of humans spread across the galaxy; kanly, a word for blood feud in the 19th century Caucasus, represents a feud between Dune's noble Houses; sietch and tabir are both words for camp borrowed from Ukrainian Cossacks (of the Pontic–Caspian steppe).
Herbert also borrowed some lines which Blanch stated were Caucasian proverbs. "To kill with the point lacked artistry", used by Blanch to describe the Caucasus peoples' love of swordsmanship, becomes in Dune "Killing with the tip lacks artistry", a piece of advice given to a young Paul during his training. "Polish comes from the city, wisdom from the hills", a Caucasian aphorism, turns into a desert expression: "Polish comes from the cities, wisdom from the desert".
Another significant source of inspiration for Dune was Herbert's experiences with psilocybin and his hobby of cultivating mushrooms, according to mycologist Paul Stamets's account of meeting Herbert in the 1980s:
Frank went on to tell me that much of the premise of Dune—the magic spice (spores) that allowed the bending of space (tripping), the giant sand worms (maggots digesting mushrooms), the eyes of the Fremen (the cerulean blue of Psilocybe mushrooms), the mysticism of the female spiritual warriors, the Bene Gesserits (influenced by the tales of Maria Sabina and the sacred mushroom cults of Mexico)—came from his perception of the fungal life cycle, and his imagination was stimulated through his experiences with the use of magic mushrooms.
Herbert spent the next five years researching, writing, and revising. He published a three-part serial Dune World in the monthly Analog, from December 1963 to February 1964. The serial was accompanied by several illustrations that were not published again. After an interval of a year, he published the much slower-paced five-part The Prophet of Dune in the January–May 1965 issues. The first serial became "Book 1: Dune" in the final published Dune novel, and the second serial was divided into "Book Two: Muad'dib" and "Book Three: The Prophet". The serialized version was expanded, reworked, and submitted to more than twenty publishers, each of whom rejected it. The novel, Dune, was finally accepted and published in August 1965 by Chilton Books, a printing house better known for publishing auto repair manuals. Sterling Lanier, an editor at Chilton, had seen Herbert's manuscript and had urged his company to take a risk in publishing the book. However, the first printing, priced at $5.95 (equivalent to $55.25 in 2022), did not sell well and was poorly received by critics as being atypical of science fiction at the time. Chilton considered the publication of Dune a write-off and Lanier was fired. Over the course of time, the book gained critical acclaim, and its popularity spread by word-of-mouth to allow Herbert to start working full time on developing the sequels to Dune, elements of which were already written alongside Dune.
At first, Herbert considered using Mars as the setting for his novel, but eventually decided to use a fictional planet instead. His son Brian said that "Readers would have too many preconceived ideas about that planet, due to the number of stories that had been written about it."
Herbert dedicated his work "to the people whose labors go beyond ideas into the realm of 'real materials'—to the dry-land ecologists, wherever they may be, in whatever time they work, this effort at prediction is dedicated in humility and admiration."
== Plot ==
Duke Leto Atreides of House Atreides, ruler of the ocean planet Caladan, is assigned by the Padishah Emperor Shaddam IV to serve as fief ruler of the planet Arrakis. Although Arrakis is a harsh and inhospitable desert planet, it is of enormous importance because it is the only planetary source of melange, or the "spice", a unique and incredibly valuable substance that extends human youth, vitality and lifespan. It is also through the consumption of spice that Spacing Guild Navigators are able to effect safe interstellar travel. Shaddam, jealous of Duke Leto Atreides's rising popularity in the Landsraad, sees House Atreides as a potential future rival and threat, and conspires with House Harkonnen, the former stewards of Arrakis and the longstanding enemies of House Atreides, to destroy Leto and his family after their arrival. Leto is aware his assignment is a trap of some kind, but is compelled to obey the Emperor's orders anyway.
Leto's concubine Lady Jessica is an acolyte of the Bene Gesserit, an exclusively female group that pursues mysterious political aims and wields seemingly superhuman physical and mental abilities, such as the ability to control their bodies down to the cellular level, and also decide the sex of their children. Though Jessica was instructed by the Bene Gesserit to bear a daughter as part of their breeding program, out of love for Leto she bore a son, Paul. From a young age, Paul has been trained in warfare by Leto's aides, the elite soldiers Duncan Idaho and Gurney Halleck. Thufir Hawat, the Duke's Mentat (human computers, able to store vast amounts of data and perform advanced calculations on demand), has instructed Paul in the ways of political intrigue. Jessica has also trained her son in Bene Gesserit disciplines.
Paul's prophetic dreams interest Jessica's superior, the Reverend Mother Gaius Helen Mohiam, who subjects Paul to the deadly gom jabbar test. Holding a poisonous needle to his neck, ready to strike should he be unable to resist the impulse to withdraw his hand from the nerve induction box, she tests Paul's self-control against the extreme psychological pain the box inflicts on him.
Leto, Jessica, and Paul travel with their household to occupy Arrakeen, the capital on Arrakis formerly held by House Harkonnen. Leto learns of the dangers involved in harvesting the spice, which is protected by giant sandworms, and seeks to negotiate with the planet's native Fremen people, seeing them as a valuable ally rather than foes. Soon after the Atreides's arrival, Harkonnen forces attack, joined by the Emperor's ferocious Sardaukar troops in disguise. Leto is betrayed by his personal physician, the Suk doctor Wellington Yueh, who delivers a drugged Leto to the Baron Vladimir Harkonnen and his twisted Mentat, Piter De Vries. Yueh, however, arranges for Jessica and Paul to escape into the desert, where they are presumed dead by the Harkonnens. Yueh replaces one of Leto's teeth with a poison gas capsule, hoping Leto can kill the Baron during their encounter. The Baron narrowly avoids the gas due to his shield, which kills Leto, De Vries, and the others in the room. The Baron forces Hawat to take over De Vries's position by dosing him with a long-lasting, fatal poison and threatening to withhold the regular antidote doses unless he obeys. While he follows the Baron's orders, Hawat works secretly to undermine the Harkonnens.
Having fled into the desert, Paul is exposed to high concentrations of spice and has visions through which he realizes he has significant powers (as a result of the Bene Gesserit breeding scheme). He foresees potential futures in which he lives among the planet's native Fremen before leading them on a Holy Jihad across the known universe.
It is revealed Jessica is the daughter of Baron Harkonnen, a secret kept from her by the Bene Gesserit. After being captured by Fremen, Paul and Jessica are accepted into the Fremen community of Sietch Tabr, and teach the Fremen the Bene Gesserit fighting technique known as the "weirding way". Paul proves his manhood by killing a Fremen named Jamis in a ritualistic crysknife fight and chooses the Fremen name Muad'Dib, while Jessica opts to undergo a ritual to become a Reverend Mother by drinking the poisonous Water of Life. Pregnant with Leto's daughter, she inadvertently causes the unborn child, Alia, to become infused with the same powers in the womb. Paul takes a Fremen lover, Chani, and has a son with her, Leto II.
Two years pass and Paul's powerful prescience manifests, which confirms for the Fremen that he is their prophesied messiah, a legend planted by the Bene Gesserit's Missionaria Protectiva. Paul embraces his father's belief that the Fremen could be a powerful fighting force to take back Arrakis, but also sees that if he does not control them, their jihad could consume the entire universe. Word of the new Fremen leader reaches both Baron Harkonnen and the Emperor as spice production falls due to their increasingly destructive raids. The Baron encourages his brutish nephew Glossu Rabban to rule with an iron fist, hoping the contrast with his shrewder nephew Feyd-Rautha will make the latter popular among the people of Arrakis when he eventually replaces Rabban. The Emperor, suspecting the Baron of trying to create troops more powerful than the Sardaukar to seize power, sends spies to monitor activity on Arrakis. Hawat uses the opportunity to sow seeds of doubt in the Baron about the Emperor's true plans, putting further strain on their alliance.
Gurney, having survived the Harkonnen coup, becomes a smuggler, reuniting with Paul and Jessica after a Fremen raid on his harvester. Believing Jessica to be the traitor, Gurney threatens to kill her, but is stopped by Paul. Paul did not foresee Gurney's attack, and concludes he must increase his prescience by drinking the Water of Life, which is traditionally fatal to males. Paul falls into unconsciousness for three weeks after drinking the poison, but when he wakes, he has clairvoyance across time and space: he is the Kwisatz Haderach, the ultimate goal of the Bene Gesserit breeding program.
Paul senses the Emperor and Baron are amassing fleets around Arrakis to quell the Fremen rebellion, and prepares the Fremen for a major offensive against the Harkonnen troops. The Emperor arrives with the Baron on Arrakis. The Emperor's troops seize a Fremen outpost, killing many including young Leto II, while Alia is captured and taken to the Emperor. Under cover of an electric storm, which shorts out the Emperor's troops' defensive shields, Paul and the Fremen, riding giant sandworms, assault the capital while Alia assassinates the Baron and escapes. The Fremen quickly defeat both the Harkonnen and Sardaukar troops.
Paul faces the Emperor, threatening to destroy spice production forever unless Shaddam abdicates the throne. Feyd-Rautha attempts to stop Paul by challenging him to a ritualistic knife fight, during which he attempts to cheat and kill Paul with a poison spur in his belt. Paul gains the upper hand and kills him. The Emperor reluctantly cedes the throne to Paul and promises his daughter Princess Irulan's hand in marriage. As Paul takes control of the Empire, he realizes that while he has achieved his goal, he is no longer able to stop the Fremen jihad, as their belief in him is too powerful to restrain.
== Characters ==
House Atreides
Paul Atreides, the Duke's son, and main character of the novel
Duke Leto Atreides, head of House Atreides
Lady Jessica, Bene Gesserit and concubine of the Duke, mother of Paul and Alia
Alia Atreides, Paul's younger sister
Thufir Hawat, Mentat and Master of Assassins to House Atreides
Gurney Halleck, staunchly loyal troubadour warrior of the Atreides
Duncan Idaho, Swordmaster for House Atreides, graduate of the Ginaz School
Wellington Yueh, Suk doctor for the Atreides who is secretly working for House Harkonnen
House Harkonnen
Baron Vladimir Harkonnen, head of House Harkonnen
Piter De Vries, twisted Mentat
Feyd-Rautha, nephew and heir-presumptive of the Baron
Glossu "Beast" Rabban, also called Rabban Harkonnen, older nephew of the Baron
Iakin Nefud, Captain of the Guard
House Corrino
Shaddam IV, Padishah Emperor of the Known Universe (the Imperium)
Princess Irulan, Shaddam's eldest daughter and heir, also a historian
Count Fenring, the Emperor's closest friend, advisor, and "errand boy"
Bene Gesserit
Reverend Mother Gaius Helen Mohiam, Proctor Superior of the Bene Gesserit school and the Emperor's Truthsayer
Lady Margot Fenring, Bene Gesserit wife of Count Fenring
Fremen
The Fremen, native inhabitants of Arrakis
Stilgar, Fremen leader of Sietch Tabr
Chani, Paul's Fremen concubine and a Sayyadina (female acolyte) of Sietch Tabr
Dr. Liet-Kynes, the Imperial Planetologist on Arrakis and father of Chani, as well as a revered figure among the Fremen
The Shadout Mapes, head housekeeper of imperial residence on Arrakis
Jamis, Fremen killed by Paul in ritual duel
Harah, wife of Jamis and later servant to Paul who helps raise Alia among the Fremen
Reverend Mother Ramallo, religious leader of Sietch Tabr
Smugglers
Esmar Tuek, a powerful smuggler and the father of Staban Tuek
Staban Tuek, the son of Esmar Tuek and a powerful smuggler who befriends and takes in Gurney Halleck and his surviving men after the attack on the Atreides
== Themes and influences ==
The Dune series is a landmark of science fiction. Herbert deliberately suppressed technology in his Dune universe so he could address the politics of humanity, rather than the future of humanity's technology. For example, a key event in the novel's pre-history is the "Butlerian Jihad", in which all robots and computers were destroyed, eliminating these common science fiction elements from the novel so as to keep the focus on humanity. Dune considers the way humans and their institutions might change over time. Director John Harrison, who adapted Dune for Syfy's 2000 miniseries, called the novel a universal and timeless reflection of "the human condition and its moral dilemmas", and said:
A lot of people refer to Dune as science fiction. I never do. I consider it an epic adventure in the classic storytelling tradition, a story of myth and legend not unlike the Morte d'Arthur or any messiah story. It just happens to be set in the future ... The story is actually more relevant today than when Herbert wrote it. In the 1960s, there were just these two colossal superpowers duking it out. Today we're living in a more feudal, corporatized world more akin to Herbert's universe of separate families, power centers and business interests, all interrelated and kept together by the one commodity necessary to all.
But Dune has also been called a mix of soft and hard science fiction since "the attention to ecology is hard, the anthropology and the psychic abilities are soft." Hard elements include the ecology of Arrakis, suspensor technology, weapon systems, and ornithopters, while soft elements include issues relating to religion, physical and mental training, cultures, politics, and psychology.Herbert said Paul's messiah figure was inspired by the Arthurian legend, and that the scarcity of water on Arrakis was a metaphor for oil, as well as air and water itself, and for the shortages of resources caused by overpopulation. Novelist Brian Herbert, his son and biographer, wrote:
Dune is a modern-day conglomeration of familiar myths, a tale in which great sandworms guard a precious treasure of melange, the geriatric spice that represents, among other things, the finite resource of oil. The planet Arrakis features immense, ferocious worms that are like dragons of lore, with "great teeth" and a "bellows breath of cinnamon." This resembles the myth described by an unknown English poet in Beowulf, the compelling tale of a fearsome fire dragon who guarded a great treasure hoard in a lair under cliffs, at the edge of the sea. The desert of Frank Herbert's classic novel is a vast ocean of sand, with giant worms diving into the depths, the mysterious and unrevealed domain of Shai-hulud. Dune tops are like the crests of waves, and there are powerful sandstorms out there, creating extreme danger. On Arrakis, life is said to emanate from the Maker (Shai-hulud) in the desert-sea; similarly all life on Earth is believed to have evolved from our oceans. Frank Herbert drew parallels, used spectacular metaphors, and extrapolated present conditions into world systems that seem entirely alien at first blush. But close examination reveals they aren't so different from systems we know … and the book characters of his imagination are not so different from people familiar to us.
Each chapter of Dune begins with an epigraph excerpted from the fictional writings of the character Princess Irulan. In forms such as diary entries, historical commentary, biography, quotations and philosophy, these writings set tone and provide exposition, context and other details intended to enhance understanding of Herbert's complex fictional universe and themes. They act as foreshadowing and invite the reader to keep reading to close the gap between what the epigraph says and what is happening in the main narrative. The epigraphs also give the reader the feeling that the world they are reading about is epically distanced, since Irulan writes about an idealized image of Paul as if he had already passed into memory. Brian Herbert wrote: "Dad told me that you could follow any of the novel's layers as you read it, and then start the book all over again, focusing on an entirely different layer. At the end of the book, he intentionally left loose ends and said he did this to send the readers spinning out of the story with bits and pieces of it still clinging to them, so that they would want to go back and read it again."
=== Middle-Eastern and Islamic references ===
Due to the similarities between some of Herbert's terms and ideas and actual words and concepts in the Arabic language, as well as the series' "Islamic undertones" and themes, a Middle-Eastern influence on Herbert's works has been noted repeatedly. In his descriptions of the Fremen culture and language, Herbert uses both authentic Arabic words and Arabic-sounding words. For example, one of the names for the sandworm, Shai-hulud, is derived from Arabic: شيء خلود, romanized: šayʾ ḫulūd, lit. 'immortal thing' or Arabic: شيخ خلود, romanized: šayḫ ḫulūd, lit. 'old man of eternity'. The title of the Fremen housekeeper, the Shadout Mapes, is borrowed from the Arabic: شادوف, romanized: šādūf, the Egyptian term for a device used to raise water. In particular, words related to the messianic religion of the Fremen, first implanted by the Bene Gesserit, are taken from Arabic, including Muad'Dib (from Arabic: مؤدب, romanized: muʾaddib, lit. 'educator'), Usul (from Arabic: أصول, romanized: ʾuṣūl, lit. 'fundamental principles'), Shari-a (from Arabic: شريعة, romanized: šarīʿa, lit. 'sharia; path'), Shaitan (from Arabic: شيطان, romanized: šayṭān, lit. 'Shaitan; devil; fiend'), and jinn (from Arabic: جن, romanized: ǧinn, lit. 'jinn; spirit; demon; mythical being'). It is likely Herbert relied on second-hand resources such as phrasebooks and desert adventure stories to find these Arabic words and phrases for the Fremen. They are meaningful and carefully chosen, and help create an "imagined desert culture that resonates with exotic sounds, enigmas, and pseudo-Islamic references" and has a distinctly Bedouin aesthetic.
As a foreigner who adopts the ways of a desert-dwelling people and then leads them in a military capacity, Paul Atreides bears many similarities to the historical T. E. Lawrence. The 1962 biopic Lawrence of Arabia has also been identified as a potential influence. The Sabres of Paradise (1960) has likewise been identified as a potential influence upon Dune, with its depiction of Imam Shamil and the Islamic culture of the Caucasus inspiring some of the themes, characters, events and terminology of Dune.
The environment of the desert planet Arrakis was primarily inspired by the environments of the Middle East. Similarly, Arrakis as a bioregion is presented as a particular kind of political site; Herbert made it resemble a desertified petrostate. The Fremen people of Arrakis were influenced by the Bedouin tribes of Arabia, and the Mahdi prophecy originates from Islamic eschatology. Inspiration is also adopted from medieval historian Ibn Khaldun's cyclical history and his dynastic concept in North Africa, hinted at by Herbert's reference to Khaldun's book Kitāb al-ʿibar ("The Book of Lessons"). The fictionalized version of the "Kitab al-ibar" in Dune is a combination of a Fremen religious manual and a desert survival book.
==== Additional language and historic influences ====
In addition to Arabic, Dune derives words and names from a variety of other languages, including Hebrew, Navajo, Latin, Dutch ("Landsraad"), Chakobsa, the Nahuatl language of the Aztecs, Greek, Persian, Sanskrit ("prana bindu", "prajna"), Russian, Turkish, Finnish, and Old English. Bene Gesserit is simply the Latin for "It will have been well fought", also carrying the sense of "It will have been well managed", which stands as a statement of the order's goal and as a pledge of faithfulness to that goal. Critics tend to miss the literal meaning of the phrase, some positing that the term is derived from the Latin meaning "it will have been well borne", an interpretation that is not well supported by the order's doctrine in the story.
Through the inspiration from The Sabres of Paradise, there are also allusions to the tsarist-era Russian nobility and Cossacks. Frank Herbert stated that a bureaucracy that lasted long enough would become a hereditary nobility, and a significant theme behind the aristocratic families in Dune was "aristocratic bureaucracy", which he saw as analogous to the Soviet Union.
=== Environmentalism and ecology ===
Dune has been called the "first planetary ecology novel on a grand scale". Herbert hoped it would be seen as an "environmental awareness handbook" and said the title was meant to "echo the sound of 'doom'". It was reviewed in the best-selling countercultural Whole Earth Catalog in 1968 as a "rich re-readable fantasy with clear portrayal of the fierce environment it takes to cohere a community".
After the publication of Silent Spring by Rachel Carson in 1962, science fiction writers began treating the subject of ecological change and its consequences. Dune responded in 1965 with its complex descriptions of Arrakis life, from giant sandworms (for whom water is deadly) to smaller, mouse-like life forms adapted to live with limited water. Dune was followed in its creation of complex and unique ecologies by other science fiction books such as A Door into Ocean (1986) and Red Mars (1992). Environmentalists have pointed out that Dune's popularity as a novel depicting a planet as a complex—almost living—thing, in combination with the first images of Earth from space being published in the same time period, strongly influenced environmental movements such as the establishment of the international Earth Day.
While the genre of climate fiction was popularized in the 2010s in response to real global climate change, Dune as well as other early science fiction works from authors like J. G. Ballard (The Drowned World) and Kim Stanley Robinson (the Mars trilogy) have retroactively been considered pioneering examples of the genre.
=== Declining empires ===
The Imperium in Dune contains features of various empires in Europe and the Near East, including the Roman Empire, Holy Roman Empire, and Ottoman Empire. Lorenzo DiTommaso compared Dune's portrayal of the downfall of a galactic empire to Edward Gibbon's Decline and Fall of the Roman Empire, which argues that Christianity allied with the profligacy of the Roman elite led to the fall of Ancient Rome. In "The Articulation of Imperial Decadence and Decline in Epic Science Fiction" (2007), DiTommaso outlines similarities between the two works by highlighting the excesses of the Emperor on his home planet of Kaitain and of the Baron Harkonnen in his palace. The Emperor loses his effectiveness as a ruler through an excess of ceremony and pomp. The hairdressers and attendants he brings with him to Arrakis are even referred to as "parasites". The Baron Harkonnen is similarly corrupt and materially indulgent. Gibbon's Decline and Fall partly blames the fall of Rome on the rise of Christianity. Gibbon claimed that this exotic import from a conquered province weakened the soldiers of Rome and left it open to attack. The Emperor's Sardaukar fighters are little match for the Fremen of Dune not only because of the Sardaukar's overconfidence and the fact that Jessica and Paul have trained the Fremen in their battle tactics, but because of the Fremen's capacity for self-sacrifice. The Fremen put the community before themselves in every instance, while the world outside wallows in luxury at the expense of others.
The decline and long peace of the Empire sets the stage for revolution and renewal by genetic mixing of successful and unsuccessful groups through war, a process culminating in the Jihad led by Paul Atreides, described by Frank Herbert as depicting "war as a collective orgasm" (drawing on Norman Walter's 1950 The Sexual Cycle of Human Warfare), themes that would reappear in God Emperor of Dune's Scattering and Leto II's all-female Fish Speaker army.
=== Gender dynamics ===
Gender dynamics are complex in Dune. Within the Fremen sietch communities, women have almost full equality. They carry weapons and travel in raiding parties with men, fighting when necessary alongside the men. They can take positions of leadership as a Sayyadina or as a Reverend Mother (if they survive the ritual of ingesting the Water of Life). Both of these sietch religious leaders are routinely consulted by the all-male Council and can have a decisive voice in all matters of sietch life, security and internal politics. They are also protected by the entire community. Due to the high mortality rate among their men, women outnumber men in most sietches. Polygamy is common, and sexual relationships are voluntary and consensual; as Stilgar says to Jessica, "women among us are not taken against their will."
In contrast, the Imperial aristocracy leaves young women of noble birth very little agency. Frequently trained by the Bene Gesserit, they are raised to eventually marry other aristocrats. Marriages between Major and Minor Houses are political tools to forge alliances or heal old feuds; women are given very little say in the matter. Many such marriages are quietly maneuvered by the Bene Gesserit to produce offspring with some genetic characteristics needed by the sisterhood's human-breeding program. In addition, such highly-placed sisters were in a position to subtly influence their husbands' actions in ways that could move the politics of the Imperium toward Bene Gesserit goals.
The gom jabbar test of humanity is administered by the female Bene Gesserit order but rarely to males. The Bene Gesserit have seemingly mastered the unconscious and can play on the unconscious weaknesses of others using the Voice, yet their breeding program seeks after a male Kwisatz Haderach. Their plan is to produce a male who can "possess complete racial memory, both male and female," and look into the black hole in the collective unconscious that they fear. A central theme of the book is the connection, in Jessica's son, of this female aspect with his male aspect. This aligns with concepts in Jungian psychology, which features conscious/unconscious and taking/giving roles associated with males and females, as well as the idea of the collective unconscious. Paul's approach to power consistently requires his upbringing under the matriarchal Bene Gesserit, who operate as a long-dominating shadow government behind all of the great houses and their marriages or divisions. He is trained by Jessica in the Bene Gesserit Way, which includes prana-bindu training in nerve and muscle control and precise perception. Paul also receives Mentat training, thus helping prepare him to be a type of androgynous Kwisatz Haderach, a male Reverend Mother.
In a Bene Gesserit test early in the book, it is implied that people are generally "inhuman" in that they irrationally place desire over self-interest and reason. This applies Herbert's philosophy that humans are not created equal, while equal justice and equal opportunity are higher ideals than mental, physical, or moral equality.
=== Heroism ===
I am showing you the superhero syndrome and your own participation in it.
Throughout Paul's rise to superhuman status, he follows a plotline common to many stories describing the birth of a hero. He has unfortunate circumstances forced onto him. After a long period of hardship and exile, he confronts and defeats the source of evil in his tale. As such, Dune is representative of a general trend beginning in 1960s American science fiction in that it features a character who attains godlike status through scientific means. Eventually, Paul Atreides gains a level of omniscience which allows him to take over the planet and the galaxy, and causes the Fremen of Arrakis to worship him like a god. Author Frank Herbert said in 1979, "The bottom line of the Dune trilogy is: beware of heroes. Much better [to] rely on your own judgment, and your own mistakes." He wrote in 1985, "Dune was aimed at this whole idea of the infallible leader because my view of history says that mistakes made by a leader (or made in a leader's name) are amplified by the numbers who follow without question."
Juan A. Prieto-Pablos says Herbert achieves a new typology with Paul's superpowers, differentiating the heroes of Dune from earlier heroes such as Superman, van Vogt's Gilbert Gosseyn and Henry Kuttner's telepaths. Unlike previous superheroes who acquire their powers suddenly and accidentally, Paul's are the result of "painful and slow personal progress." And unlike other superheroes of the 1960s—who are the exception among ordinary people in their respective worlds—Herbert's characters grow their powers through "the application of mystical philosophies and techniques." For Herbert, the ordinary person can develop incredible fighting skills (Fremen, Ginaz swordsmen and Sardaukar) or mental abilities (Bene Gesserit, Mentats, Spacing Guild Navigators).
=== Zen and religion ===
Early in his newspaper career, Herbert was introduced to Zen by two Jungian psychologists, Ralph and Irene Slattery, who "gave a crucial boost to his thinking". Zen teachings ultimately had "a profound and continuing influence on [Herbert's] work". Throughout the Dune series and particularly in Dune, Herbert employs concepts and forms borrowed from Zen Buddhism. The Fremen are referred to as Zensunni adherents, and many of Herbert's epigraphs are Zen-spirited. In "Dune Genesis", Frank Herbert wrote:
What especially pleases me is to see the interwoven themes, the fugue like relationships of images that exactly replay the way Dune took shape. As in an Escher lithograph, I involved myself with recurrent themes that turn into paradox. The central paradox concerns the human vision of time. What about Paul's gift of prescience - the Presbyterian fixation? For the Delphic Oracle to perform, it must tangle itself in a web of predestination. Yet predestination negates surprises and, in fact, sets up a mathematically enclosed universe whose limits are always inconsistent, always encountering the unprovable. It's like a koan, a Zen mind breaker. It's like the Cretan Epimenides saying, "All Cretans are liars."
Brian Herbert called the Dune universe "a spiritual melting pot", noting that his father incorporated elements of a variety of religions, including Buddhism, Sufi mysticism and other Islamic belief systems, Catholicism, Protestantism, Judaism, and Hinduism. He added that Frank Herbert's fictional future in which "religious beliefs have combined into interesting forms" represents the author's solution to eliminating arguments between religions, each of which claimed to have "the one and only revelation."
=== Asimov's Foundation ===
Tim O'Reilly suggests that Herbert also wrote Dune as a counterpoint to Isaac Asimov's Foundation series. In his monograph on Frank Herbert, O'Reilly wrote that "Dune is clearly a commentary on the Foundation trilogy. Herbert has taken a look at the same imaginative situation that provoked Asimov's classic—the decay of a galactic empire—and restated it in a way that draws on different assumptions and suggests radically different conclusions. The twist he has introduced into Dune is that the Mule, not the Foundation, is his hero." According to O'Reilly, Herbert bases the Bene Gesserit on the scientific shamans of the Foundation, though they use biological rather than statistical science. In contrast to the Foundation series and its praise of science and rationality, Dune proposes that the unconscious and unexpected are actually what are needed for humanity.
Both Herbert and Asimov explore the implications of prescience (i.e., visions of the future) both psychologically and socially. The Foundation series deploys a broadly determinist approach to prescient vision rooted in mathematical reasoning on a macroscopic social level. Dune, by contrast, invents a biologically rooted power of prescience that becomes determinist when the user actively relies on it to navigate past an undefined threshold of detail. Herbert's eugenically produced and spice-enhanced prescience is also personalized to individual actors whose roles in later books constrain each other's visions, rendering the future more or less mutable as time progresses. In what might be a comment on Foundation, Herbert's most powerfully prescient being in God Emperor of Dune laments the boredom engendered by prescience, and values surprises, especially regarding one's death, as a psychological necessity.
However, both works contain a similar theme of the restoration of civilization and seem to make the fundamental assumption that "political maneuvering, the need to control material resources, and friendship or mating bonds will be fundamentally the same in the future as they are now."
== Critical reception ==
Dune tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and won the inaugural Nebula Award for Best Novel. Reviews of the novel have been largely positive, and Dune is considered by some critics to be the best science fiction book ever written. The novel has been translated into dozens of languages, and has sold almost 20 million copies. Dune has been regularly cited as one of the world's best-selling science fiction novels.
Arthur C. Clarke described Dune as "unique" and wrote, "I know nothing comparable to it except The Lord of the Rings." Robert A. Heinlein described the novel as "powerful, convincing, and most ingenious." It was described as "one of the monuments of modern science fiction" by the Chicago Tribune, and P. Schuyler Miller called Dune "one of the landmarks of modern science fiction ... an amazing feat of creation." The Washington Post described it as "a portrayal of an alien society more complete and deeply detailed than any other author in the field has managed ... a story absorbing equally for its action and philosophical vistas ... An astonishing science fiction phenomenon." Algis Budrys praised Dune for the vividness of its imagined setting, saying "The time lives. It breathes, it speaks, and Herbert has smelt it in his nostrils". He found that the novel, however, "turns flat and tails off at the end. ... [T]ruly effective villains simply simper and melt; fierce men and cunning statesmen and seeresses all bend before this new Messiah". Budrys faulted in particular Herbert's decision to kill Paul's infant son offstage, with no apparent emotional impact, saying "you cannot be so busy saving a world that you cannot hear an infant shriek". After criticizing unrealistic science fiction, Carl Sagan in 1978 listed Dune as among stories "that are so tautly constructed, so rich in the accommodating details of an unfamiliar society that they sweep me along before I have even a chance to be critical".
The Louisville Times wrote, "Herbert's creation of this universe, with its intricate development and analysis of ecology, religion, politics, and philosophy, remains one of the supreme and seminal achievements in science fiction." Writing for The New Yorker, Jon Michaud praised Herbert's "clever authorial decision" to exclude robots and computers ("two staples of the genre") from his fictional universe, but suggested that this may be one explanation why Dune lacks "true fandom among science-fiction fans" to the extent that it "has not penetrated popular culture in the way that The Lord of the Rings and Star Wars have". Tamara I. Hladik wrote that the story "crafts a universe where lesser novels promulgate excuses for sequels. All its rich elements are in balance and plausible—not the patchwork confederacy of made-up languages, contrived customs, and meaningless histories that are the hallmark of so many other, lesser novels."
On November 5, 2019, BBC News included Dune in its list of the 100 most influential novels.
J. R. R. Tolkien refused to review Dune, on the grounds that he disliked it "with some intensity" and thus felt it would be unfair to Herbert, another working author, if he gave an honest review of the book.
== First edition prints and manuscripts ==
The first edition of Dune is one of the most valuable in science fiction book collecting. Copies have been sold for more than $10,000 at auction. The Chilton first edition of the novel is 9+1⁄4 inches (235 mm) tall, with bluish green boards and a price of $5.95 on the dust jacket, and notes Toronto as the Canadian publisher on the copyright page. Up to this point, Chilton had been publishing only automobile repair manuals.
California State University, Fullerton's Pollak Library has several of Herbert's draft manuscripts of Dune and other works, with the author's notes, in their Frank Herbert Archives.
== Sequels and prequels ==
After Dune proved to be a critical and financial success for Herbert, he was able to devote himself full time to writing additional novels in the series. He had already drafted parts of the second and third while writing Dune. The series included Dune Messiah (1969), Children of Dune (1976), God Emperor of Dune (1981), Heretics of Dune (1984), and Chapterhouse: Dune (1985), each continuing the narrative from Dune. Herbert died on February 11, 1986.
Herbert's son, Brian Herbert, found several thousand pages of notes left by his father that outlined ideas for other narratives related to Dune. Brian Herbert enlisted author Kevin J. Anderson to help build out prequel novels to the events of Dune. Brian Herbert's and Anderson's Dune prequels began publication in 1999, and have led to additional stories that take place between those of Frank Herbert's books. The notes for what would have been Dune 7 also enabled them to publish Hunters of Dune (2006) and Sandworms of Dune (2007), sequels to Frank Herbert's final novel Chapterhouse: Dune, which complete the chronological progression of his original series and wrap up storylines that began in Heretics of Dune.
== Adaptations ==
Dune has been considered an "unfilmable" and "uncontainable" work, difficult to adapt from novel to film or other visual media. As Wired described it, "It has four appendices and a glossary of its own gibberish, and its action takes place on two planets, one of which is a desert overrun by worms the size of airport runways. Lots of important people die or try to kill each other, and they're all tethered to about eight entangled subplots." There have been several attempts at this difficult conversion, with varying degrees of success.
=== Early stalled attempts ===
In 1971, the production company Apjac International (APJ), headed by Arthur P. Jacobs, optioned the rights to film Dune. As Jacobs was busy with other projects, such as the sequel to Planet of the Apes, Dune was delayed for another year. Jacobs' first choice for director was David Lean, but Lean turned down the offer. Charles Jarrott was also considered to direct. Work on a script was under way while the hunt for a director continued. The first treatment was handled by Robert Greenhut, the producer who had lobbied Jacobs to make the movie in the first place; subsequently, Rospo Pallenberg was approached to write the script, with shooting scheduled to begin in 1974. However, Jacobs died in 1973.
In December 1974, a French consortium led by Jean-Paul Gibon purchased the film rights from APJ, with Alejandro Jodorowsky set to direct. In 1975, Jodorowsky planned to film the story as a 14-hour feature, set to star his own son Brontis Jodorowsky in the lead role of Paul Atreides, Salvador Dalí as Shaddam IV, Padishah Emperor, Amanda Lear as Princess Irulan, Orson Welles as Baron Vladimir Harkonnen, Gloria Swanson as Reverend Mother Gaius Helen Mohiam, David Carradine as Duke Leto Atreides, Geraldine Chaplin as Lady Jessica, Alain Delon as Duncan Idaho, Hervé Villechaize as Gurney Halleck, Udo Kier as Piter De Vries, and Mick Jagger as Feyd-Rautha. It was at first proposed to score the film with original music by Karlheinz Stockhausen, Henry Cow, and Magma; later on, the soundtrack was to be provided by Pink Floyd. Jodorowsky set up a pre-production unit in Paris consisting of Chris Foss, a British artist who designed covers for science fiction periodicals, Jean Giraud (Moebius), a French illustrator who created and also wrote and drew for Metal Hurlant magazine, and H. R. Giger. Moebius began designing creatures and characters for the film, while Foss was brought in to design the film's space ships and hardware. Giger began designing the Harkonnen Castle based on Moebius's storyboards. Dan O'Bannon was to head the special effects department.
Dalí was cast as the Emperor. Dalí later demanded to be paid $100,000 per hour; Jodorowsky agreed, but tailored Dalí's part to be filmed in one hour, drafting plans for other scenes of the emperor to use a mechanical mannequin as substitute for Dalí. According to Giger, Dalí was "later invited to leave the film because of his pro-Franco statements". Just as the storyboards, designs, and script were finished, the financial backing dried up. Frank Herbert traveled to Europe in 1976 to find that $2 million of the $9.5 million budget had already been spent in pre-production, and that Jodorowsky's script would result in a 14-hour movie ("It was the size of a phone book", Herbert later recalled). Jodorowsky took creative liberties with the source material, but Herbert said that he and Jodorowsky had an amicable relationship. Jodorowsky said in 1985 that he found the Dune story mythical and had intended to recreate it rather than adapt the novel; though he had an "enthusiastic admiration" for Herbert, Jodorowsky said he had done everything possible to distance the author and his input from the project. Although Jodorowsky was embittered by the experience, he said the Dune project changed his life, and some of the ideas were used in his and Moebius's The Incal. O'Bannon entered a psychiatric hospital after the production failed, then worked on 13 scripts, the last of which became Alien. A 2013 documentary, Jodorowsky's Dune, was made about Jodorowsky's failed attempt at an adaptation.
In 1976, Dino De Laurentiis acquired the rights from Gibon's consortium. De Laurentiis commissioned Herbert to write a new screenplay in 1978; the script Herbert turned in was 175 pages long, the equivalent of nearly three hours of screen time. De Laurentiis then hired director Ridley Scott in 1979, with Rudy Wurlitzer writing the screenplay and H. R. Giger retained from the Jodorowsky production; Scott and Giger had also just worked together on the film Alien, after O'Bannon recommended the artist. Scott intended to split the novel into two movies. He worked on three drafts of the script, using The Battle of Algiers as a point of reference, before moving on to direct another science fiction film, Blade Runner (1982). As he recalls, the pre-production process was slow, and finishing the project would have been even more time-intensive:
But after seven months I dropped out of Dune, by then Rudy Wurlitzer had come up with a first-draft script which I felt was a decent distillation of Frank Herbert's. But I also realised Dune was going to take a lot more work—at least two and a half years' worth. And I didn't have the heart to attack that because my older brother Frank unexpectedly died of cancer while I was prepping the De Laurentiis picture. Frankly, that freaked me out. So I went to Dino and told him the Dune script was his.
—From Ridley Scott: The Making of his Movies by Paul M. Sammon
=== 1984 film by David Lynch ===
In 1981, the nine-year film rights were set to expire. De Laurentiis re-negotiated the rights from the author, adding to them the rights to the Dune sequels, written and unwritten. After seeing The Elephant Man, De Laurentiis' daughter Raffaella decided that David Lynch should direct the movie. Around that time Lynch received several other directing offers, including Return of the Jedi. He agreed to direct Dune and write the screenplay even though he had not read the book, was not familiar with the story, and had never been interested in science fiction. Lynch worked on the script for six months with Eric Bergren and Christopher De Vore. The team produced two drafts of the script before splitting over creative differences. Lynch subsequently worked on five more drafts. Production was troubled by problems at the Mexican studio, which hampered the film's timeline. Lynch ended up producing a nearly three-hour-long film, but at the demands of Universal Pictures, the film's distributor, he cut it back to about two hours, hastily filming additional scenes to make up for some of the cut footage.
This first film of Dune, directed by Lynch, was released in 1984, nearly 20 years after the book's publication. Though Herbert said the book's depth and symbolism seemed to intimidate many filmmakers, he was pleased with the film, saying that "They've got it. It begins as Dune does. And I hear my dialogue all the way through. There are some interpretations and liberties, but you're gonna come out knowing you've seen Dune." Reviews of the film were negative, saying that it was incomprehensible to those unfamiliar with the book, and that fans would be disappointed by the way it strayed from the book's plot. Upon release for television and other forms of home media, Universal opted to reintroduce much of the footage that Lynch had cut, creating an over-three-hour-long version with extensive monologue exposition. Lynch was extremely displeased with this move, demanded that Universal replace his name on these cuts with the pseudonym "Alan Smithee", and has generally distanced himself from the film since.
=== 2000 miniseries by John Harrison ===
In 2000, John Harrison adapted the novel into Frank Herbert's Dune, a miniseries which premiered on the American Sci-Fi Channel. As of 2004, the miniseries was one of the three highest-rated programs broadcast on the Sci-Fi Channel.
=== Further film attempts ===
In 2008, Paramount Pictures announced that they would produce a new film based on the book, with Peter Berg attached to direct. Producer Kevin Misher, who spent a year securing the rights from the Herbert estate, was to be joined by Richard Rubinstein and John Harrison (of both Sci-Fi Channel miniseries) as well as Sarah Aubrey and Mike Messina. The producers stated that they were going for a "faithful adaptation" of the novel, and considered "its theme of finite ecological resources particularly timely." Science fiction author Kevin J. Anderson and Frank Herbert's son Brian Herbert, who had together written multiple Dune sequels and prequels since 1999, were attached to the project as technical advisors. In October 2009, Berg dropped out of the project, later saying that it "for a variety of reasons wasn't the right thing" for him. Subsequently, with a script draft by Joshua Zetumer, Paramount reportedly sought a new director who could do the film for under $175 million. In 2010, Pierre Morel was signed on to direct, with screenwriter Chase Palmer incorporating Morel's vision of the project into Zetumer's original draft. By November 2010, Morel left the project. Paramount finally dropped plans for a remake in March 2011.
=== Films by Denis Villeneuve ===
In November 2016, Legendary Entertainment acquired the film and TV rights for Dune. Variety reported in December 2016 that Denis Villeneuve was in negotiations to direct the project, which was confirmed in February 2017. In April 2017, Legendary announced that Eric Roth would write the screenplay. Villeneuve explained in March 2018 that his adaptation would be split into two films, with the first installment scheduled to begin production in 2019. Casting includes Timothée Chalamet as Paul Atreides, Dave Bautista as Rabban, Stellan Skarsgård as Baron Harkonnen, Rebecca Ferguson as Lady Jessica, Charlotte Rampling as Reverend Mother Mohiam, Oscar Isaac as Duke Leto Atreides, Zendaya as Chani, Javier Bardem as Stilgar, Josh Brolin as Gurney Halleck, Jason Momoa as Duncan Idaho, David Dastmalchian as Piter De Vries, Chang Chen as Dr. Yueh, and Stephen Henderson as Thufir Hawat. Warner Bros. Pictures distributed the film, which had its initial premiere on September 3, 2021, at the Venice Film Festival, and wide release in both theaters and streaming on HBO Max on October 21, 2021, as part of Warner Bros.'s approach to handling the impact of the COVID-19 pandemic on the film industry. The film received "generally favorable reviews" on Metacritic. It has gone on to win multiple awards and was named by the National Board of Review as one of the 10 best films of 2021, as well as by the American Film Institute in its annual top 10 list. The film went on to be nominated for ten Academy Awards, winning six, the most wins of the night for any film in contention.
A sequel, Dune: Part Two, was scheduled for release on November 3, 2023, but, amid the 2023 SAG-AFTRA strike, will now instead be released on March 15, 2024.
=== Audiobooks ===
In 1993, Recorded Books Inc. released a 20-disc audiobook narrated by George Guidall. In 2007, Audio Renaissance released an audio book narrated by Simon Vance with some parts performed by Scott Brick, Orlagh Cassidy, Euan Morton, and other performers.
== Cultural influence ==
Dune has been widely influential, inspiring numerous novels, music, films, television, games, and comic books. It is considered one of the greatest and most influential science fiction novels of all time, with numerous modern science fiction works such as Star Wars owing their existence to Dune. Dune has also been referenced in numerous other works of popular culture, including Star Trek, Chronicles of Riddick, The Kingkiller Chronicle and Futurama. Dune was cited as a source of inspiration for Hayao Miyazaki's anime film Nausicaä of the Valley of the Wind (1984) for its post-apocalyptic world.
Dune was parodied in 1984's National Lampoon's Doon by Ellis Weiner, which William F. Touponce called "something of a tribute to Herbert's success on college campuses", noting that "the only other book to have been so honored is Tolkien's The Lord of the Rings," which was parodied by The Harvard Lampoon in 1969.
=== Music ===
In 1978, French electronic musician Richard Pinhas released the nine-track Dune-inspired album Chronolyse, which includes the seven-part Variations sur le thème des Bene Gesserit.
In 1979, German electronic music pioneer Klaus Schulze released an LP titled Dune featuring motifs and lyrics inspired by the novel.
A similar musical project, Visions of Dune, was released also in 1979 by Zed (a pseudonym of French electronic musician Bernard Sjazner).
Heavy metal band Iron Maiden wrote the song "To Tame a Land" based on the Dune story. It appears as the closing track to their 1983 album Piece of Mind. The original working title of the song was "Dune"; however, the band was denied permission to use it, with Frank Herbert's agents stating "Frank Herbert doesn't like rock bands, particularly heavy rock bands, and especially bands like Iron Maiden".
Dune inspired the German happy hardcore band Dune, who have released several albums with space travel-themed songs.
The progressive hardcore band Shai Hulud took their name from Dune.
"Traveller in Time", from the 1991 Blind Guardian album Tales from the Twilight World, is based mostly on Paul Atreides' visions of future and past.
The title of the 1993 Fear Factory album Fear Is the Mindkiller is a quote from the "litany against fear".
The song "Near Fantastica", from the Matthew Good album Avalanche, makes reference to the "litany against fear", repeating "can't feel fear, fear's the mind killer" through a section of the song.
In the Fatboy Slim song "Weapon of Choice", the line "If you walk without rhythm/You won't attract the worm" is a near quotation from the sections of the novel in which Stilgar teaches Paul to ride sandworms.
Dune also inspired the 1999 album The 2nd Moon by the German death metal band Golem, which is a concept album about the series.
Dune influenced Thirty Seconds to Mars on their self-titled debut album.
The Youngblood Brass Band's song "Is an Elegy" on Center:Level:Roar references "Muad'Dib", "Arrakis" and other elements from the novel.
The debut album of Canadian musician Grimes, called Geidi Primes, is a concept album based on Dune.
Japanese singer Kenshi Yonezu released a song titled "Dune", also known as "Sand Planet". The song was released in 2017 and was created using the voice synthesizer Hatsune Miku for her 10th anniversary.
"Fear is the Mind Killer", a song released in 2018 by Zheani (an Australian rapper) uses a quote from Dune.
"Litany Against Fear" is a spoken track released in 2018 under the 'Eight' album by Zheani. She recites an extract from Dune.
Sleep's 2018 album The Sciences features a song, "Giza Butler", that references several aspects of Dune.
Tool's 2019 album Fear Inoculum has a song entitled "Litanie contre la peur (Litany against fear)".
"Rare to Wake", from Shannon Lay's album Geist (2019), is inspired by Dune.
Heavy metal band Diamond Head based the song "The Sleeper" and its prelude, both from the album The Coffin Train, on the series.
=== Games ===
There have been a number of games based on the book, starting with the strategy–adventure game Dune (1992). The most important game adaptation is Dune II (1992), which established the conventions of modern real-time strategy games and is considered to be among the most influential video games of all time.
The online game Lost Souls includes Dune-derived elements, including sandworms and melange—addiction to which can produce psychic talents. The 2016 game Enter the Gungeon features the spice melange as a random item which gives the player progressively stronger abilities and penalties with repeated uses, mirroring the long-term effects melange has on users.
Rick Priestley cites Dune as a major influence on his 1987 wargame, Warhammer 40,000.
In 2023, Funcom announced Dune: Awakening, an upcoming massively multiplayer online game set in the universe of Dune.
=== Space exploration ===
The Apollo 15 astronauts named a small crater on Earth's Moon after the novel during the 1971 mission, and the name was formally adopted by the International Astronomical Union in 1973. Since 2009, the names of planets from the Dune novels have been adopted for the real-world nomenclature of plains and other features on Saturn's moon Titan, like Arrakis Planitia.
== See also ==
Soft science fiction – Sub-genre of science fiction emphasizing "soft" sciences or human emotions
Hydraulic empire – Government by control of access to water
== References ==
== Further reading ==
Clute, John; Nicholls, Peter (1995). The Encyclopedia of Science Fiction. New York: St. Martin's Press. p. 1386. ISBN 978-0-312-13486-0.
Clute, John; Nicholls, Peter (1995). The Multimedia Encyclopedia of Science Fiction (CD-ROM). Danbury, CT: Grolier. ISBN 978-0-7172-3999-3.
Huddleston, Tom. The Worlds of Dune: The Places and Cultures That Inspired Frank Herbert. Minneapolis: Quarto Publishing Group UK, 2023.
Jakubowski, Maxim; Edwards, Malcolm (1983). The Complete Book of Science Fiction and Fantasy Lists. St Albans, Herts, UK: Granada Publishing Ltd. p. 350. ISBN 978-0-586-05678-3.
Kennedy, Kara. Frank Herbert's Dune: A Critical Companion. Cham, Switzerland: Palgrave Macmillan, 2022.
Kennedy, Kara. Women's Agency in the Dune Universe: Tracing Women's Liberation through Science Fiction. Cham, Switzerland: Palgrave Macmillan, 2020.
Nardi, Dominic J. & N. Trevor Brierly, eds. Discovering Dune: Essays on Frank Herbert's Epic Saga. Jefferson, NC: McFarland & Co., 2022.
Nicholas, Jeffery, ed. Dune and Philosophy: Weirding Way of Mentat. Chicago: Open Court, 2011.
Nicholls, Peter (1979). The Encyclopedia of Science Fiction. St Albans, Herts, UK: Granada Publishing Ltd. p. 672. ISBN 978-0-586-05380-5.
O'Reilly, Timothy. Frank Herbert. New York: Frederick Ungar, 1981.
Pringle, David (1990). The Ultimate Guide to Science Fiction. London: Grafton Books Ltd. p. 407. ISBN 978-0-246-13635-0.
Tuck, Donald H. (1974). The Encyclopedia of Science Fiction and Fantasy. Chicago: Advent. p. 136. ISBN 978-0-911682-20-5.
Williams, Kevin C. The Wisdom of the Sand: Philosophy and Frank Herbert's Dune. New York: Hampton Press, 2013.
== External links ==
Official website for Dune and its sequels
Dune title listing at the Internet Speculative Fiction Database
Turner, Paul (October 1973). "Vertex Interviews Frank Herbert" (Interview). Vol. 1, no. 4. Archived from the original on May 19, 2009.
Spark Notes: Dune, detailed study guide
DuneQuotes.com – Collection of quotes from the Dune series
Dune by Frank Herbert, reviewed by Ted Gioia (Conceptual Fiction)
"Frank Herbert Biography and Bibliography at LitWeb.net". www.litweb.net. Archived from the original on April 2, 2009. Retrieved January 2, 2009.
Works of Frank Herbert at Curlie
Timberg, Scott (April 18, 2010). "Frank Herbert's Dune holds timely – and timeless – appeal". Los Angeles Times. Archived from the original on December 3, 2013. Retrieved November 27, 2013.
Walton, Jo (January 12, 2011). "In league with the future: Frank Herbert's Dune (Review)". Tor.com. Retrieved November 27, 2013.
Leonard, Andrew (June 4, 2015). "To Save California, Read Dune". Nautilus. Archived from the original on November 4, 2017. Retrieved June 15, 2015.
Dune by Frank Herbert – Foreshadowing & Dedication at Fact Behind Fiction
Frank Herbert by Tim O'Reilly
DuneScholar.com – Collection of scholarly essays | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-vector-memory\dune.txt |
.md |
# neo4j-vector-memory
This template allows you to integrate an LLM with a vector-based retrieval system using Neo4j as the vector store.
Additionally, it uses the graph capabilities of the Neo4j database to store and retrieve the dialogue history of a specific user's session.
Having the dialogue history stored as a graph not only allows for seamless conversational flows but also gives you the ability to analyze user behavior and text chunk retrieval through graph analytics.
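Because the history lives in the graph, you can also query it directly with Cypher. Below is a minimal analytics sketch using the official Neo4j Python driver; the `Session`/`Message` labels and `HAS_MESSAGE` relationship are assumptions for illustration, so check the template's code for the exact schema it writes:

```python
# Count messages per session; labels and relationships here are illustrative,
# not necessarily the exact schema this template writes.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    result = session.run(
        """
        MATCH (s:Session)-[:HAS_MESSAGE]->(m:Message)
        RETURN s.id AS session_id, count(m) AS n_messages
        ORDER BY n_messages DESC
        """
    )
    for record in result:
        print(record["session_id"], record["n_messages"])
```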
## Environment Setup
You need to define the following environment variables
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data
If you want to populate the DB with some example data, you can run `python ingest.py`.
The script processes and stores sections of the text from the file `dune.txt` in a Neo4j graph database.
Additionally, a vector index named `dune` is created for efficient querying of these embeddings.
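If you want to ingest your own text instead, the core pattern is LangChain's `Neo4jVector` store, which reads the `NEO4J_*` variables above from the environment. A minimal sketch (the chunk sizes and any metadata handling in `ingest.py` may differ):

```python
# Minimal ingestion sketch; chunking parameters are illustrative.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Neo4jVector

with open("dune.txt") as f:
    text = f.read()

chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_text(text)

# Stores the chunks as nodes and creates/uses the "dune" vector index.
Neo4jVector.from_texts(chunks, OpenAIEmbeddings(), index_name="dune")
```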
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-vector-memory
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-vector-memory
```
And add the following code to your `server.py` file:
```python
from neo4j_vector_memory import chain as neo4j_vector_memory_chain
add_routes(app, neo4j_vector_memory_chain, path="/neo4j-vector-memory")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-vector-memory/playground](http://127.0.0.1:8000/neo4j-vector-memory/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-vector-memory")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-vector-memory\README.md |
.md |
# nvidia-rag-canonical
This template performs RAG using Milvus Vector Store and NVIDIA Models (Embedding and Chat).
## Environment Setup
You should export your NVIDIA API Key as an environment variable.
If you do not have an NVIDIA API Key, you can create one by following these steps:
1. Create a free account with the [NVIDIA GPU Cloud](https://catalog.ngc.nvidia.com/) service, which hosts AI solution catalogs, containers, models, etc.
2. Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`.
3. Select the `API` option and click `Generate Key`.
4. Save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.
```shell
export NVIDIA_API_KEY=...
```
For instructions on hosting the Milvus Vector Store, refer to the section at the bottom.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To use the NVIDIA models, install the Langchain NVIDIA AI Endpoints package:
```shell
pip install -U langchain_nvidia_aiplay
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package nvidia-rag-canonical
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add nvidia-rag-canonical
```
And add the following code to your `server.py` file:
```python
from nvidia_rag_canonical import chain as nvidia_rag_canonical_chain
add_routes(app, nvidia_rag_canonical_chain, path="/nvidia-rag-canonical")
```
If you want to set up an ingestion pipeline, you can add the following code to your `server.py` file:
```python
from nvidia_rag_canonical import ingest as nvidia_rag_ingest
add_routes(app, nvidia_rag_ingest, path="/nvidia-rag-ingest")
```
Note that for files ingested by the ingestion API, the server will need to be restarted for the newly ingested files to be accessible by the retriever.
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you DO NOT already have a Milvus Vector Store you want to connect to, see `Milvus Setup` section below before proceeding.
If you DO have a Milvus Vector Store you want to connect to, edit the connection details in `nvidia_rag_canonical/chain.py`
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/nvidia-rag-canonical/playground](http://127.0.0.1:8000/nvidia-rag-canonical/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/nvidia-rag-canonical")
```
## Milvus Setup
Use this step if you need to create a Milvus Vector Store and ingest data.
We will first follow the standard Milvus setup instructions [here](https://milvus.io/docs/install_standalone-docker.md).
1. Download the Docker Compose YAML file.
```shell
wget https://github.com/milvus-io/milvus/releases/download/v2.3.3/milvus-standalone-docker-compose.yml -O docker-compose.yml
```
2. Start the Milvus Vector Store container
```shell
sudo docker compose up -d
```
3. Install the PyMilvus package to interact with the Milvus container.
```shell
pip install pymilvus
```
4. Let's now ingest some data! We can do that by moving into this directory and running the code in `ingest.py`, e.g.:
```shell
python ingest.py
```
Note that you can (and should!) change this to ingest data of your choice.
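To sanity-check the setup above before ingesting, you can connect to the standalone container with PyMilvus (default host and port shown; adjust them if you changed the compose file):

```python
# Quick connectivity check against the standalone Milvus container.
from pymilvus import connections, utility

connections.connect(host="localhost", port="19530")
print(utility.list_collections())  # empty list on a fresh install
```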
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\nvidia-rag-canonical\README.md |
.md |
# openai-functions-agent
This template creates an agent that uses OpenAI function calling to communicate its decisions on what actions to take.
This example creates an agent that can optionally look up information on the internet using Tavily's search engine.
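Once mounted (see below), the agent behaves like any other runnable. As a rough sketch, assuming the executor follows the common `input`/`chat_history` schema used by LangChain agent templates (check this template's code for the exact keys):

```python
# Hypothetical local invocation; the input keys are assumptions, see the template code.
from openai_functions_agent import agent_executor

result = agent_executor.invoke({"input": "Who won the 2023 Tour de France?", "chat_history": []})
print(result["output"])
```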
## Environment Setup
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `TAVILY_API_KEY` environment variable to access Tavily.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package openai-functions-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add openai-functions-agent
```
And add the following code to your `server.py` file:
```python
from openai_functions_agent import agent_executor as openai_functions_agent_chain
add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/openai-functions-agent/playground](http://127.0.0.1:8000/openai-functions-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-agent")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\openai-functions-agent\README.md |
.md | # OpenAI Functions Agent - Gmail
Ever struggled to reach inbox zero?
Using this template, you can create and customize your very own AI assistant to manage your Gmail account. Using the default Gmail tools, it can read, search through, and draft emails to respond on your behalf. It also has access to a Tavily search engine so it can search for relevant information about any topics or people in the email thread before writing, ensuring the drafts include all the relevant information needed to sound well-informed.
![Animated GIF showing the interface of the Gmail Agent Playground with a cursor interacting with the input field.](./static/gmail-agent-playground.gif "Gmail Agent Playground Interface")
## The details
This assistant uses OpenAI's [function calling](https://python.langchain.com/docs/modules/chains/how_to/openai_functions) support to reliably select and invoke the tools you've provided
This template also imports directly from [langchain-core](https://pypi.org/project/langchain-core/) and [`langchain-community`](https://pypi.org/project/langchain-community/) where appropriate. We have restructured LangChain to let you select the specific integrations needed for your use case. While you can still import from `langchain` (we are making this transition backwards-compatible), we have separated the homes of most of the classes to reflect ownership and to make your dependency lists lighter. Most of the integrations you need can be found in the `langchain-community` package, and if you are just using the core expression language APIs, you can even build solely based on `langchain-core`.
## Environment Setup
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `TAVILY_API_KEY` environment variable to access Tavily search.
Create a [`credentials.json`](https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application) file containing your OAuth client ID from Gmail. To customize authentication, see the [Customize Auth](#customize-auth) section below.
_*Note:* The first time you run this app, it will force you to go through a user authentication flow._
(Optional): Set `GMAIL_AGENT_ENABLE_SEND` to `true` (or modify the `agent.py` file in this template) to give it access to the "Send" tool. This will give your assistant permissions to send emails on your behalf without your explicit review, which is not recommended.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package openai-functions-agent-gmail
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add openai-functions-agent-gmail
```
And add the following code to your `server.py` file:
```python
from openai_functions_agent import agent_executor as openai_functions_agent_chain
add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent-gmail")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/openai-functions-agent-gmail/playground](http://127.0.0.1:8000/openai-functions-agent-gmail/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-agent-gmail")
```
## Customize Auth
```python
from langchain_community.agent_toolkits import GmailToolkit
from langchain_community.tools.gmail.utils import build_resource_service, get_gmail_credentials
# Can review scopes here https://developers.google.com/gmail/api/auth/scopes
# For instance, readonly scope is 'https://www.googleapis.com/auth/gmail.readonly'
credentials = get_gmail_credentials(
token_file="token.json",
scopes=["https://mail.google.com/"],
client_secrets_file="credentials.json",
)
api_resource = build_resource_service(credentials=credentials)
toolkit = GmailToolkit(api_resource=api_resource)
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\openai-functions-agent-gmail\README.md |
.md | # openai-functions-tool-retrieval-agent
The novel idea introduced in this template is using retrieval to select the set of tools with which to answer an agent query. This is useful when you have many, many tools to select from. You cannot put the descriptions of all the tools in the prompt (because of context length issues), so instead you dynamically select the N tools you do want to consider using at run time.
In this template we will create a somewhat contrived example. We will have one legitimate tool (search) and then 99 fake tools which are just nonsense. We will then add a step in the prompt template that takes the user input and retrieves the tools relevant to the query.
This template is based on [this Agent How-To](https://python.langchain.com/docs/modules/agents/how_to/custom_agent_with_tool_retrieval).
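Mechanically, the retrieval step amounts to embedding the tool descriptions and querying them at run time. A minimal sketch of that idea, with illustrative names (the template's own implementation may differ):

```python
# Retrieval-based tool selection sketch; tool names/descriptions are illustrative.
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document

# One legitimate tool description plus 99 nonsense ones, mirroring the template.
tool_docs = [Document(page_content="search: look up current information", metadata={"index": 0})]
tool_docs += [
    Document(page_content=f"foo_{i}: a placeholder tool that does nothing", metadata={"index": i})
    for i in range(1, 100)
]

retriever = FAISS.from_documents(tool_docs, OpenAIEmbeddings()).as_retriever(search_kwargs={"k": 4})

# At run time, keep only the tools whose descriptions match the user query.
relevant = retriever.get_relevant_documents("what is the weather in SF?")
selected_tool_indices = [doc.metadata["index"] for doc in relevant]
```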
## Environment Setup
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `TAVILY_API_KEY` environment variable to access Tavily.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package openai-functions-tool-retrieval-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add openai-functions-tool-retrieval-agent
```
And add the following code to your `server.py` file:
```python
from openai_functions_tool_retrieval_agent import chain as openai_functions_tool_retrieval_agent_chain
add_routes(app, openai_functions_tool_retrieval_agent_chain, path="/openai-functions-tool-retrieval-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/openai-functions-tool-retrieval-agent/playground](http://127.0.0.1:8000/openai-functions-tool-retrieval-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-tool-retrieval-agent")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\openai-functions-tool-retrieval-agent\README.md |
.md | # pii-protected-chatbot
This template creates a chatbot that flags any incoming PII and doesn't pass it to the LLM.
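The detection logic lives in `pii_protected_chatbot/chain.py`; as a rough illustration of the pattern (not the template's actual implementation), a PII gate can be a set of regexes checked before the LLM is ever called:

```python
# Illustrative PII gate, not this template's actual implementation.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US-SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # credit-card-like digit runs
]

def contains_pii(text: str) -> bool:
    return any(pattern.search(text) for pattern in PII_PATTERNS)

def guarded_invoke(text: str) -> str:
    if contains_pii(text):
        return "Your message appears to contain personal information, so it was not sent to the LLM."
    return llm_chain.invoke(text)  # hypothetical: your actual chain call
```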
## Environment Setup
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package pii-protected-chatbot
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add pii-protected-chatbot
```
And add the following code to your `server.py` file:
```python
from pii_protected_chatbot.chain import chain as pii_protected_chatbot
add_routes(app, pii_protected_chatbot, path="/pii-protected-chatbot")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/pii-protected-chatbot/playground](http://127.0.0.1:8000/pii-protected-chatbot/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pii-protected-chatbot")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\pii-protected-chatbot\README.md |
.md |
# pirate-speak
This template converts user input into pirate speak.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package pirate-speak
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add pirate-speak
```
And add the following code to your `server.py` file:
```python
from pirate_speak.chain import chain as pirate_speak_chain
add_routes(app, pirate_speak_chain, path="/pirate-speak")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/pirate-speak/playground](http://127.0.0.1:8000/pirate-speak/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pirate-speak")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\pirate-speak\README.md |
.md | # pirate-speak-configurable
This template converts user input into pirate speak. It shows how you can use `configurable_alternatives` in the Runnable to select OpenAI, Anthropic, or Cohere as your LLM provider in the playground (or via API).
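At its core the pattern looks like the sketch below; the template's chain wraps a prompt around this, so see `pirate_speak_configurable/chain.py` for the real wiring (the `llm_provider` field id is illustrative):

```python
# Minimal configurable_alternatives sketch across three chat models.
from langchain_community.chat_models import ChatAnthropic, ChatCohere, ChatOpenAI
from langchain_core.runnables import ConfigurableField

llm = ChatOpenAI().configurable_alternatives(
    ConfigurableField(id="llm_provider"),
    default_key="openai",
    anthropic=ChatAnthropic(),
    cohere=ChatCohere(),
)

# Select the provider per call via the config (defaults to OpenAI):
llm.invoke("Ahoy!", config={"configurable": {"llm_provider": "anthropic"}})
```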
## Environment Setup
Set the following environment variables to access all 3 configurable alternative
model providers:
- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- `COHERE_API_KEY`
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package pirate-speak-configurable
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add pirate-speak-configurable
```
And add the following code to your `server.py` file:
```python
from pirate_speak_configurable import chain as pirate_speak_configurable_chain
add_routes(app, pirate_speak_configurable_chain, path="/pirate-speak-configurable")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/pirate-speak-configurable/playground](http://127.0.0.1:8000/pirate-speak-configurable/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pirate-speak-configurable")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\pirate-speak-configurable\README.md |
.md |
# plate-chain
This template enables parsing of data from laboratory plates.
In the context of biochemistry or molecular biology, laboratory plates are commonly used tools to hold samples in a grid-like format.
This template parses the resulting data into a standardized format (e.g., JSON) for further processing.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To utilize plate-chain, you must have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
Creating a new LangChain project and installing plate-chain as the only package can be done with:
```shell
langchain app new my-app --package plate-chain
```
If you wish to add this to an existing project, simply run:
```shell
langchain app add plate-chain
```
Then add the following code to your `server.py` file:
```python
from plate_chain import chain as plate_chain
add_routes(app, plate_chain, path="/plate-chain")
```
(Optional) For configuring LangSmith, which helps trace, monitor and debug LangChain applications, use the following code:
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you're in this directory, you can start a LangServe instance directly by:
```shell
langchain serve
```
This starts the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
All templates can be viewed at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
Access the playground at [http://127.0.0.1:8000/plate-chain/playground](http://127.0.0.1:8000/plate-chain/playground)
You can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/plate-chain")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\plate-chain\README.md |
.md | # propositional-retrieval
This template demonstrates the multi-vector indexing strategy proposed by Chen et al.'s [Dense X Retrieval: What Retrieval Granularity Should We Use?](https://arxiv.org/abs/2312.06648). The prompt, which you can [try out on the hub](https://smith.langchain.com/hub/wfh/proposal-indexing), directs an LLM to generate de-contextualized "propositions" which can be vectorized to increase the retrieval accuracy. You can see the full definition in `proposal_chain.py`.
![Diagram illustrating the multi-vector indexing strategy for information retrieval, showing the process from Wikipedia data through a Proposition-izer to FactoidWiki, and the retrieval of information units for a QA model.](https://github.com/langchain-ai/langchain/raw/master/templates/propositional-retrieval/_images/retriever_diagram.png "Retriever Diagram")
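The indexing side boils down to LangChain's multi-vector pattern: embed the generated propositions, but hand back the parent documents at retrieval time. A minimal sketch, where `generate_propositions` is a hypothetical stand-in for the template's proposal chain:

```python
# Multi-vector proposition indexing sketch; generate_propositions is hypothetical.
import uuid

from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

docs = [Document(page_content="...a full source document...")]
retriever = MultiVectorRetriever(
    vectorstore=Chroma(collection_name="propositions", embedding_function=OpenAIEmbeddings()),
    docstore=InMemoryStore(),
    id_key="doc_id",
)

doc_ids = [str(uuid.uuid4()) for _ in docs]
for doc, doc_id in zip(docs, doc_ids):
    propositions = generate_propositions(doc.page_content)  # hypothetical LLM call
    retriever.vectorstore.add_documents(
        [Document(page_content=p, metadata={"doc_id": doc_id}) for p in propositions]
    )
# Retrieval matches propositions but returns the parent documents.
retriever.docstore.mset(list(zip(doc_ids, docs)))
```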
## Storage
For this demo, we index a simple academic paper using the RecursiveUrlLoader, and store all retriever information locally (using chroma and a bytestore stored on the local filesystem). You can modify the storage layer in `storage.py`.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access `gpt-3.5` and the OpenAI Embeddings classes.
## Indexing
Create the index by running the following:
```python
poetry install
poetry run python propositional_retrieval/ingest.py
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package propositional-retrieval
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add propositional-retrieval
```
And add the following code to your `server.py` file:
```python
from propositional_retrieval import chain
add_routes(app, chain, path="/propositional-retrieval")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/propositional-retrieval/playground](http://127.0.0.1:8000/propositional-retrieval/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/propositional-retrieval")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\propositional-retrieval\README.md |
.md | # python-lint
This agent specializes in generating high-quality Python code with a focus on proper formatting and linting. It uses `black`, `ruff`, and `mypy` to ensure the code meets standard quality checks.
This streamlines the coding process by integrating and responding to these checks, resulting in reliable and consistent code output.
It cannot actually execute the code it writes, as code execution may introduce additional dependencies and potential security vulnerabilities.
This makes the agent both a secure and efficient solution for code generation tasks.
You can use it to generate Python code directly, or network it with planning and execution agents.
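The check loop this design implies can be pictured as: run the three tools over a candidate snippet and feed any complaints back to the model. A rough sketch (names are illustrative, not the template's internals):

```python
# Illustrative lint loop; not the template's internal implementation.
import subprocess
import tempfile

def lint_python(code: str) -> list[str]:
    """Return complaints from black, ruff, and mypy (empty list if clean)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    complaints = []
    for cmd in (["black", "--check", path], ["ruff", "check", path], ["mypy", path]):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            complaints.append(f"{cmd[0]}: {(result.stdout or result.stderr).strip()}")
    return complaints
```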
## Environment Setup
- Install `black`, `ruff`, and `mypy`: `pip install -U black ruff mypy`
- Set `OPENAI_API_KEY` environment variable.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package python-lint
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add python-lint
```
And add the following code to your `server.py` file:
```python
from python_lint import agent_executor as python_lint_agent
add_routes(app, python_lint_agent, path="/python-lint")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/python-lint/playground](http://127.0.0.1:8000/python-lint/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/python-lint")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\python-lint\README.md |
.md |
# rag-astradb
This template will perform RAG using Astra DB (`AstraDB` vector store class)
## Environment Setup
An [Astra DB](https://astra.datastax.com) database is required; free tier is fine.
- You need the database **API endpoint** (such as `https://0123...-us-east1.apps.astra.datastax.com`) ...
- ... and a **token** (`AstraCS:...`).
Also, an **OpenAI API Key** is required. _Note that out-of-the-box this demo supports OpenAI only, unless you tinker with the code._
Provide the connection parameters and secrets through environment variables. Please refer to `.env.template` for the variable names.
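Under the hood the template builds on the `AstraDB` vector store class; a minimal sketch of standing one up directly (the environment variable names below are illustrative, so use the ones from `.env.template`):

```python
# Minimal AstraDB vector store sketch; env var names are illustrative.
import os

from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import AstraDB

vstore = AstraDB(
    embedding=OpenAIEmbeddings(),
    collection_name="entomology_demo",  # illustrative collection name
    api_endpoint=os.environ["ASTRA_DB_API_ENDPOINT"],
    token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
)
vstore.add_texts(["Order Odonata includes dragonflies and damselflies."])
```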
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-astradb
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-astradb
```
And add the following code to your `server.py` file:
```python
from astradb_entomology_rag import chain as astradb_entomology_rag_chain
add_routes(app, astradb_entomology_rag_chain, path="/rag-astradb")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-astradb/playground](http://127.0.0.1:8000/rag-astradb/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-astradb")
```
## Reference
Stand-alone repo with LangServe chain: [here](https://github.com/hemidactylus/langserve_astradb_entomology_rag).
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-astradb\README.md |
.txt | # source: https://www.thoughtco.com/a-guide-to-the-twenty-nine-insect-orders-1968419
Order Thysanura: The silverfish and firebrats are found in the order Thysanura. They are wingless insects often found in people's attics, and have a lifespan of several years. There are about 600 species worldwide.
Order Diplura: Diplurans are the most primitive insect species, with no eyes or wings. They have the unusual ability among insects to regenerate body parts. There are over 400 members of the order Diplura in the world.
Order Protura: Another very primitive group, the proturans have no eyes, no antennae, and no wings. They are uncommon, with perhaps less than 100 species known.
Order Collembola: The order Collembola includes the springtails, primitive insects without wings. There are approximately 2,000 species of Collembola worldwide.
Order Ephemeroptera: The mayflies of order Ephemeroptera are short-lived, and undergo incomplete metamorphosis. The larvae are aquatic, feeding on algae and other plant life. Entomologists have described about 2,100 species worldwide.
Order Odonata: The order Odonata includes dragonflies and damselflies, which undergo incomplete metamorphosis. They are predators of other insects, even in their immature stage. There are about 5,000 species in the order Odonata.
Order Plecoptera: The stoneflies of order Plecoptera are aquatic and undergo incomplete metamorphosis. The nymphs live under rocks in well flowing streams. Adults are usually seen on the ground along stream and river banks. There are roughly 3,000 species in this group.
Order Grylloblatodea: Sometimes referred to as "living fossils," the insects of the order Grylloblatodea have changed little from their ancient ancestors. This order is the smallest of all the insect orders, with perhaps only 25 known species living today. Grylloblatodea live at elevations above 1500 ft., and are commonly named ice bugs or rock crawlers.
Order Orthoptera: These are familiar insects (grasshoppers, locusts, katydids, and crickets) and one of the largest orders of herbivorous insects. Many species in the order Orthoptera can produce and detect sounds. Approximately 20,000 species exist in this group.
Order Phasmida: The order Phasmida are masters of camouflage, the stick and leaf insects. They undergo incomplete metamorphosis and feed on leaves. There are some 3,000 insects in this group, but only a small fraction of this number is leaf insects. Stick insects are the longest insects in the world.
Order Dermaptera: This order contains the earwigs, an easily recognized insect that often has pincers at the end of the abdomen. Many earwigs are scavengers, eating both plant and animal matter. The order Dermaptera includes less than 2,000 species.
Order Embiidina: The order Embioptera is another ancient order with few species, perhaps only 200 worldwide. The web spinners have silk glands in their front legs and weave nests under leaf litter and in tunnels where they live. Webspinners live in tropical or subtropical climates.
Order Dictyoptera: The order Dictyoptera includes roaches and mantids. Both groups have long, segmented antennae and leathery forewings held tightly against their backs. They undergo incomplete metamorphosis. Worldwide, there are approximately 6,000 species in this order, most living in tropical regions.
Order Isoptera: Termites feed on wood and are important decomposers in forest ecosystems. They also feed on wood products and are thought of as pests for the destruction they cause to man-made structures. There are between 2,000 and 3,000 species in this order.
Order Zoraptera: Little is known about the angel insects, which belong to the order Zoraptera. Though they are grouped with winged insects, many are actually wingless. Members of this group are blind, small, and often found in decaying wood. There are only about 30 described species worldwide.
Order Psocoptera: Bark lice forage on algae, lichen, and fungus in moist, dark places. Booklice frequent human dwellings, where they feed on book paste and grains. They undergo incomplete metamorphosis. Entomologists have named about 3,200 species in the order Psocoptera.
Order Mallophaga: Biting lice are ectoparasites that feed on birds and some mammals. There are an estimated 3,000 species in the order Mallophaga, all of which undergo incomplete metamorphosis.
Order Siphunculata: The order Siphunculata are the sucking lice, which feed on the fresh blood of mammals. Their mouthparts are adapted for sucking or siphoning blood. There are only about 500 species of sucking lice.
Order Hemiptera: Most people use the term "bugs" to mean insects; an entomologist uses the term to refer to the order Hemiptera. The Hemiptera are the true bugs, and include cicadas, aphids, spittlebugs, and others. This is a large group of over 70,000 species worldwide.
Order Thysanoptera: The thrips of order Thysanoptera are small insects that feed on plant tissue. Many are considered agricultural pests for this reason. Some thrips prey on other small insects as well. This order contains about 5,000 species.
Order Neuroptera: Commonly called the order of lacewings, this group actually includes a variety of other insects, too: dobsonflies, owlflies, mantidflies, antlions, snakeflies, and alderflies. Insects in the order Neuroptera undergo complete metamorphosis. Worldwide, there are over 5,500 species in this group.
Order Mecoptera: This order includes the scorpionflies, which live in moist, wooded habitats. Scorpionflies are omnivorous in both their larval and adult forms. The larva are caterpillar-like. There are less than 500 described species in the order Mecoptera.
Order Siphonaptera: Pet lovers fear insects in the order Siphonaptera - the fleas. Fleas are blood-sucking ectoparasites that feed on mammals, and rarely, birds. There are well over 2,000 species of fleas in the world.
Order Coleoptera: This group, the beetles and weevils, is the largest order in the insect world, with over 300,000 distinct species known. The order Coleoptera includes well-known families: june beetles, lady beetles, click beetles, and fireflies. All have hardened forewings that fold over the abdomen to protect the delicate hindwings used for flight.
Order Strepsiptera: Insects in this group are parasites of other insects, particularly bees, grasshoppers, and the true bugs. The immature Strepsiptera lies in wait on a flower and quickly burrows into any host insect that comes along. Strepsiptera undergo complete metamorphosis and pupate within the host insect's body.
Order Diptera: Diptera is one of the largest orders, with nearly 100,000 insects named to the order. These are the true flies, mosquitoes, and gnats. Insects in this group have modified hindwings which are used for balance during flight. The forewings function as the propellers for flying.
Order Lepidoptera: The butterflies and moths of the order Lepidoptera comprise the second largest group in the class Insecta. These well-known insects have scaly wings with interesting colors and patterns. You can often identify an insect in this order just by the wing shape and color.
Order Trichoptera: Caddisflies are nocturnal as adults and aquatic when immature. The caddisfly adults have silky hairs on their wings and body, which is key to identifying a Trichoptera member. The larvae spin traps for prey with silk. They also make cases from the silk and other materials that they carry and use for protection.
Order Hymenoptera: The order Hymenoptera includes many of the most common insects - ants, bees, and wasps. The larvae of some wasps cause trees to form galls, which then provides food for the immature wasps. Other wasps are parasitic, living in caterpillars, beetles, or even aphids. This is the third-largest insect order with just over 100,000 species.
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-astradb\sources.txt |
.md |
# rag-aws-bedrock
This template is designed to connect with the AWS Bedrock service, a managed service that offers a set of foundation models.
It primarily uses `Anthropic Claude` for text generation and `Amazon Titan` for text embedding, and utilizes FAISS as the vectorstore.
For additional context on the RAG pipeline, refer to [this notebook](https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb).
## Environment Setup
Before you can use this package, ensure that you have configured `boto3` to work with your AWS account.
For details on how to set up and configure `boto3`, visit [this page](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration).
In addition, you need to install the `faiss-cpu` package to work with the FAISS vector store:
```bash
pip install faiss-cpu
```
You should also set the following environment variables to reflect your AWS profile and region (if you're not using the `default` AWS profile and `us-east-1` region):
* `AWS_DEFAULT_REGION`
* `AWS_PROFILE`
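As a quick sanity check (illustrative, not part of the template), you can confirm that `boto3` resolves your profile and region before serving:
```python
# Illustrative connectivity check; not part of the template itself.
import os

import boto3

session = boto3.Session(
    profile_name=os.environ.get("AWS_PROFILE", "default"),
    region_name=os.environ.get("AWS_DEFAULT_REGION", "us-east-1"),
)
client = session.client("bedrock-runtime")  # runtime API for model invocation
print(client.meta.region_name)
```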
## Usage
First, install the LangChain CLI:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package:
```shell
langchain app new my-app --package rag-aws-bedrock
```
To add this package to an existing project:
```shell
langchain app add rag-aws-bedrock
```
Then add the following code to your `server.py` file:
```python
from rag_aws_bedrock import chain as rag_aws_bedrock_chain
add_routes(app, rag_aws_bedrock_chain, path="/rag-aws-bedrock")
```
(Optional) If you have access to LangSmith, you can configure it to trace, monitor, and debug LangChain applications. If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) and access the playground at [http://127.0.0.1:8000/rag-aws-bedrock/playground](http://127.0.0.1:8000/rag-aws-bedrock/playground).
You can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-aws-bedrock")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-aws-bedrock\README.md |
.md | # rag-aws-kendra
This template is an application that utilizes Amazon Kendra, a machine learning powered search service, and Anthropic Claude for text generation. The application retrieves documents using a Retrieval chain to answer questions from your documents.
It uses the `boto3` library to connect with the Bedrock service.
For more context on building RAG applications with Amazon Kendra, check [this page](https://aws.amazon.com/blogs/machine-learning/quickly-build-high-accuracy-generative-ai-applications-on-enterprise-data-using-amazon-kendra-langchain-and-large-language-models/).
## Environment Setup
Please ensure that you set up and configure `boto3` to work with your AWS account.
You can follow the guide [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration).
You should also have a Kendra Index set up before using this template.
You can use [this Cloudformation template](https://github.com/aws-samples/amazon-kendra-langchain-extensions/blob/main/kendra_retriever_samples/kendra-docs-index.yaml) to create a sample index.
This includes sample data containing AWS online documentation for Amazon Kendra, Amazon Lex, and Amazon SageMaker. Alternatively, you can use your own Amazon Kendra index if you have indexed your own dataset.
The following environment variables need to be set:
* `AWS_DEFAULT_REGION` - This should reflect the correct AWS region. Default is `us-east-1`.
* `AWS_PROFILE` - This should reflect your AWS profile. Default is `default`.
* `KENDRA_INDEX_ID` - This should have the Index ID of the Kendra index. Note that the Index ID is a 36 character alphanumeric value that can be found in the index detail page.
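For orientation, the retrieval side of this template builds on LangChain's Kendra retriever. A minimal sketch (the query string is just an example, suited to the sample index above):
```python
# Minimal sketch of a Kendra-backed retriever; assumes boto3 is configured
# and KENDRA_INDEX_ID is set as described above.
import os

from langchain.retrievers import AmazonKendraRetriever

retriever = AmazonKendraRetriever(index_id=os.environ["KENDRA_INDEX_ID"])
docs = retriever.get_relevant_documents("What is Amazon Lex?")
print(docs[0].page_content[:200])
```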
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-aws-kendra
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-aws-kendra
```
And add the following code to your `server.py` file:
```python
from rag_aws_kendra.chain import chain as rag_aws_kendra_chain
add_routes(app, rag_aws_kendra_chain, path="/rag-aws-kendra")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-aws-kendra/playground](http://127.0.0.1:8000/rag-aws-kendra/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-aws-kendra")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-aws-kendra\README.md |
.md |
# rag-chroma
This template performs RAG using Chroma and OpenAI.
The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-chroma
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-chroma
```
And add the following code to your `server.py` file:
```python
from rag_chroma import chain as rag_chroma_chain
add_routes(app, rag_chroma_chain, path="/rag-chroma")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma/playground](http://127.0.0.1:8000/rag-chroma/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-chroma\README.md |
.md |
# rag-chroma-multi-modal
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
![Diagram illustrating the workflow of a multi-modal LLM visual assistant using OpenCLIP embeddings and GPT-4V for question-answering based on slide deck images.](https://github.com/langchain-ai/langchain/assets/122662504/b3bc8406-48ae-4707-9edf-d0b3a511b200 "Workflow Diagram for Multi-modal LLM Visual Assistant")
## Input
Supply a slide deck as pdf in the `/docs` directory.
By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
Example questions to ask can be:
```
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
```
To create an index of the slide deck, run:
```
poetry install
python ingest.py
```
## Storage
This template will use [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings to embed the images.
You can select different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).
The first time you run the app, it will automatically download the multimodal embedding model.
By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
You can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:
```
vectorstore_mmembd = Chroma(
collection_name="multi-modal-rag",
persist_directory=str(re_vectorstore_path),
embedding_function=OpenCLIPEmbeddings(
model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
),
)
```
## LLM
The app will retrieve images based on similarity between the text input and the image, which are both mapped to multi-modal embedding space. It will then pass the images to GPT-4V.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI GPT-4V.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-chroma-multi-modal
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-chroma-multi-modal
```
And add the following code to your `server.py` file:
```python
from rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain
add_routes(app, rag_chroma_multi_modal_chain, path="/rag-chroma-multi-modal")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma-multi-modal/playground](http://127.0.0.1:8000/rag-chroma-multi-modal/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-chroma-multi-modal\README.md |
.md |
# rag-chroma-multi-modal-multi-vector
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses GPT-4V to create image summaries for each slide, embeds the summaries, and stores them in Chroma.
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
![Diagram illustrating the multi-modal LLM process with a slide deck, captioning, storage, question input, and answer synthesis with year-over-year growth percentages.](https://github.com/langchain-ai/langchain/assets/122662504/5277ef6b-d637-43c7-8dc1-9b1567470503 "Multi-modal LLM Process Diagram")
## Input
Supply a slide deck as pdf in the `/docs` directory.
By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
Example questions to ask can be:
```
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
```
To create an index of the slide deck, run:
```
poetry install
python ingest.py
```
## Storage
Here is the process the template will use to create an index of the slides (see [blog](https://blog.langchain.dev/multi-modal-rag-template/)):
* Extract the slides as a collection of images
* Use GPT-4V to summarize each image
* Embed the image summaries using text embeddings with a link to the original images
* Retrieve relevant images based on similarity between the image summaries and the user input question
* Pass those images to GPT-4V for answer synthesis
By default, this will use [LocalFileStore](https://python.langchain.com/docs/integrations/stores/file_system) to store images and Chroma to store summaries.
For production, it may be desirable to use a remote option such as Redis.
You can set the `local_file_store` flag in `chain.py` and `ingest.py` to switch between the two options.
For Redis, the template will use [UpstashRedisByteStore](https://python.langchain.com/docs/integrations/stores/upstash_redis).
We will use Upstash, which offers Redis with a REST API, to store the images.
Simply login [here](https://upstash.com/) and create a database.
This will give you a REST API with:
* `UPSTASH_URL`
* `UPSTASH_TOKEN`
Set `UPSTASH_URL` and `UPSTASH_TOKEN` as environment variables to access your database.
We will use Chroma to store and index the image summaries, which will be created locally in the template directory.
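The pattern above (summaries in a vector index, raw images in a byte store, linked by a shared ID) maps onto LangChain's multi-vector retriever. A hedged sketch, with assumed collection, path, and key names:
```python
# Hedged sketch of the summary-index / image-store pattern; names are assumptions.
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import LocalFileStore
from langchain.vectorstores import Chroma

summaries = Chroma(
    collection_name="image_summaries",  # assumed
    embedding_function=OpenAIEmbeddings(),
)
image_store = LocalFileStore("./image_store")  # holds the original images

retriever = MultiVectorRetriever(
    vectorstore=summaries,  # searched over the text summaries
    docstore=image_store,   # returns the linked raw images
    id_key="doc_id",        # metadata key tying summary -> image
)
```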
## LLM
The app will retrieve images based on similarity between the text input and the image summary, and pass the images to GPT-4V.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI GPT-4V.
Set `UPSTASH_URL` and `UPSTASH_TOKEN` as environment variables to access your database if you use `UpstashRedisByteStore`.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-chroma-multi-modal-multi-vector
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-chroma-multi-modal-multi-vector
```
And add the following code to your `server.py` file:
```python
from rag_chroma_multi_modal_multi_vector import chain as rag_chroma_multi_modal_chain_mv
add_routes(app, rag_chroma_multi_modal_chain_mv, path="/rag-chroma-multi-modal-multi-vector")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma-multi-modal-multi-vector/playground](http://127.0.0.1:8000/rag-chroma-multi-modal-multi-vector/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal-multi-vector")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-chroma-multi-modal-multi-vector\README.md |
.md |
# rag-chroma-private
This template performs RAG with no reliance on external APIs.
It utilizes Ollama as the LLM, GPT4All for embeddings, and Chroma for the vectorstore.
The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering.
## Environment Setup
To set up the environment, you need to download Ollama.
Follow the instructions [here](https://python.langchain.com/docs/integrations/chat/ollama).
You can choose the desired LLM with Ollama.
This template uses `llama2:7b-chat`, which can be accessed using `ollama pull llama2:7b-chat`.
There are many other options available [here](https://ollama.ai/library).
This package also uses [GPT4All](https://python.langchain.com/docs/integrations/text_embedding/gpt4all) embeddings.
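A hedged smoke test of the local stack (model name as pulled above; the first `GPT4AllEmbeddings()` call downloads its model):
```python
# Illustrative local smoke test; requires a running Ollama with the model pulled.
from langchain.chat_models import ChatOllama
from langchain.embeddings import GPT4AllEmbeddings

llm = ChatOllama(model="llama2:7b-chat")
embeddings = GPT4AllEmbeddings()  # downloads the embedding model on first use

print(llm.invoke("Say hello in one word."))
print(len(embeddings.embed_query("hello")))  # embedding dimensionality
```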
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-chroma-private
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-chroma-private
```
And add the following code to your `server.py` file:
```python
from rag_chroma_private import chain as rag_chroma_private_chain
add_routes(app, rag_chroma_private_chain, path="/rag-chroma-private")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma-private/playground](http://127.0.0.1:8000/rag-chroma-private/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-private")
```
The package will create and add documents to the vector database in `chain.py`. By default, it will load a popular blog post on agents. However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-chroma-private\README.md |
.md |
# rag-codellama-fireworks
This template performs RAG on a codebase.
It uses codellama-34b hosted by Fireworks' [LLM inference API](https://blog.fireworks.ai/accelerating-code-completion-with-fireworks-fast-llm-inference-f4e8b5ec534a).
## Environment Setup
Set the `FIREWORKS_API_KEY` environment variable to access the Fireworks models.
You can obtain it from [here](https://app.fireworks.ai/login?callbackURL=https://app.fireworks.ai).
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-codellama-fireworks
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-codellama-fireworks
```
And add the following code to your `server.py` file:
```python
from rag_codellama_fireworks import chain as rag_codellama_fireworks_chain
add_routes(app, rag_codellama_fireworks_chain, path="/rag-codellama-fireworks")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-codellama-fireworks/playground](http://127.0.0.1:8000/rag-codellama-fireworks/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-codellama-fireworks")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-codellama-fireworks\README.md |
.md |
# rag-conversation
This template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases.
It passes both a conversation history and retrieved documents into an LLM for synthesis.
## Environment Setup
This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-conversation
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-conversation
```
And add the following code to your `server.py` file:
```python
from rag_conversation import chain as rag_conversation_chain
add_routes(app, rag_conversation_chain, path="/rag-conversation")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-conversation/playground](http://127.0.0.1:8000/rag-conversation/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-conversation")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-conversation\README.md |
.md | # rag-conversation-zep
This template demonstrates building a RAG conversation app using Zep.
Included in this template:
- Populating a [Zep Document Collection](https://docs.getzep.com/sdk/documents/) with a set of documents (a Collection is analogous to an index in other Vector Databases).
- Using Zep's [integrated embedding](https://docs.getzep.com/deployment/embeddings/) functionality to embed the documents as vectors.
- Configuring a LangChain [ZepVectorStore Retriever](https://docs.getzep.com/sdk/documents/) to retrieve documents using Zep's built-in, hardware-accelerated [Maximal Marginal Relevance](https://docs.getzep.com/sdk/search_query/) (MMR) re-ranking (see the sketch after this list).
- Prompts, a simple chat history data structure, and other components required to build a RAG conversation app.
- The RAG conversation chain.
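For reference, a minimal sketch of the MMR-configured retriever from the list above; the collection name and server URL are assumptions, so adapt them to your Zep deployment.
```python
# Hedged sketch; collection name and api_url are assumptions.
from langchain.vectorstores import ZepVectorStore

vectorstore = ZepVectorStore(
    collection_name="my_docs",
    api_url="http://localhost:8000",  # your Zep server
)
retriever = vectorstore.as_retriever(search_type="mmr")
```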
## About [Zep - Fast, scalable building blocks for LLM Apps](https://www.getzep.com/)
Zep is an open source platform for productionizing LLM apps. Go from a prototype built in LangChain or LlamaIndex, or a custom app, to production in minutes without rewriting code.
Key Features:
- Fast! Zep’s async extractors operate independently of your chat loop, ensuring a snappy user experience.
- Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
- Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
- Hybrid search over memories and metadata, with messages automatically embedded on creation.
- Entity Extractor that automatically extracts named entities from messages and stores them in the message metadata.
- Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
- Python and JavaScript SDKs.
Zep project: https://github.com/getzep/zep | Docs: https://docs.getzep.com/
## Environment Setup
Set up a Zep service by following the [Quick Start Guide](https://docs.getzep.com/deployment/quickstart/).
## Ingesting Documents into a Zep Collection
Run `python ingest.py` to ingest the test documents into a Zep Collection. Review the file to modify the Collection name and document source.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-conversation-zep
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-conversation-zep
```
And add the following code to your `server.py` file:
```python
from rag_conversation_zep import chain as rag_conversation_zep_chain
add_routes(app, rag_conversation_zep_chain, path="/rag-conversation-zep")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-conversation-zep/playground](http://127.0.0.1:8000/rag-conversation-zep/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-conversation-zep")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-conversation-zep\README.md |
.md |
# rag-elasticsearch
This template performs RAG using [ElasticSearch](https://python.langchain.com/docs/integrations/vectorstores/elasticsearch).
It relies on sentence transformer `MiniLM-L6-v2` for embedding passages and questions.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To connect to your Elasticsearch instance, use the following environment variables:
```bash
export ELASTIC_CLOUD_ID=<CLOUD_ID>
export ELASTIC_USERNAME=<CLOUD_USERNAME>
export ELASTIC_PASSWORD=<CLOUD_PASSWORD>
```
For local development with Docker, use:
```bash
export ES_URL="http://localhost:9200"
```
And run an Elasticsearch instance in Docker with
```bash
docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.9.0
```
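To verify the connection (illustrative sketch; the index name is an assumption), you can construct the store against the local Docker instance with the same MiniLM sentence-transformer family the template uses:
```python
# Hedged sketch for the local-Docker path; index name is an assumption.
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import ElasticsearchStore

store = ElasticsearchStore(
    es_url="http://localhost:9200",
    index_name="workplace_docs",
    embedding=HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-MiniLM-L6-v2"
    ),
)
```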
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-elasticsearch
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-elasticsearch
```
And add the following code to your `server.py` file:
```python
from rag_elasticsearch import chain as rag_elasticsearch_chain
add_routes(app, rag_elasticsearch_chain, path="/rag-elasticsearch")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-elasticsearch/playground](http://127.0.0.1:8000/rag-elasticsearch/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-elasticsearch")
```
For loading the fictional workplace documents, run the following command from the root of this repository:
```bash
python ingest.py
```
However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-elasticsearch\README.md |
.md |
# rag-fusion
This template enables RAG fusion using a re-implementation of the project found [here](https://github.com/Raudaschl/rag-fusion).
It performs multiple query generation and Reciprocal Rank Fusion to re-rank search results.
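Reciprocal Rank Fusion itself is compact: each document scores `1 / (k + rank)` in every ranked list it appears in, and the fused ranking sorts by the summed score. A minimal, self-contained sketch (the constant `k = 60` is the value commonly used in the RRF literature):
```python
# Minimal RRF sketch: fuse several ranked lists of document IDs.
def reciprocal_rank_fusion(result_lists: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# Two hypothetical ranked lists, e.g. from two generated queries:
print(reciprocal_rank_fusion([["a", "b", "c"], ["b", "c", "d"]]))
# -> ['b', 'c', 'a', 'd']
```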
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-fusion
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-fusion
```
And add the following code to your `server.py` file:
```python
from rag_fusion.chain import chain as rag_fusion_chain
add_routes(app, rag_fusion_chain, path="/rag-fusion")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-fusion/playground](http://127.0.0.1:8000/rag-fusion/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-fusion")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-fusion\README.md |
.md |
# rag-gemini-multi-modal
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
Given a question, relevant slides are retrieved and passed to [Google Gemini](https://deepmind.google/technologies/gemini/#introduction) for answer synthesis.
![Diagram illustrating the process of a visual assistant using multi-modal LLM, from slide deck images to OpenCLIP embedding, retrieval, and synthesis with Google Gemini, resulting in an answer.](https://github.com/langchain-ai/langchain/assets/122662504/b9e69bef-d687-4ecf-a599-937e559d5184 "Workflow Diagram for Visual Assistant Using Multi-modal LLM")
## Input
Supply a slide deck as pdf in the `/docs` directory.
By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
Example questions to ask can be:
```
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
```
To create an index of the slide deck, run:
```
poetry install
python ingest.py
```
## Storage
This template will use [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings to embed the images.
You can select different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).
The first time you run the app, it will automatically download the multimodal embedding model.
By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
You can choose alternative `OpenCLIPEmbeddings` models in `rag_gemini_multi_modal/ingest.py`:
```
vectorstore_mmembd = Chroma(
collection_name="multi-modal-rag",
persist_directory=str(re_vectorstore_path),
embedding_function=OpenCLIPEmbeddings(
model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
),
)
```
## LLM
The app will retrieve images using multi-modal embeddings, and pass them to Google Gemini.
## Environment Setup
Set your `GOOGLE_API_KEY` environment variable in order to access Gemini.
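A hedged smoke test of the Gemini model used for synthesis (requires `pip install langchain-google-genai`; the model name follows Google's public naming and may change):
```python
# Illustrative construction only; GOOGLE_API_KEY must be set.
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")  # multi-modal variant
```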
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-gemini-multi-modal
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-gemini-multi-modal
```
And add the following code to your `server.py` file:
```python
from rag_gemini_multi_modal import chain as rag_gemini_multi_modal_chain
add_routes(app, rag_gemini_multi_modal_chain, path="/rag-gemini-multi-modal")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-gemini-multi-modal/playground](http://127.0.0.1:8000/rag-gemini-multi-modal/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-gemini-multi-modal")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-gemini-multi-modal\README.md |
.md | # rag-google-cloud-sensitive-data-protection
This template is an application that utilizes Google Sensitive Data Protection, a service for detecting and redacting
sensitive data in text, and PaLM 2 for Chat (chat-bison), although you can use any model.
For more context on using Sensitive Data Protection,
check [here](https://cloud.google.com/dlp/docs/sensitive-data-protection-overview).
## Environment Setup
Before using this template, please ensure that you enable the [DLP API](https://console.cloud.google.com/marketplace/product/google/dlp.googleapis.com)
and [Vertex AI API](https://console.cloud.google.com/marketplace/product/google/aiplatform.googleapis.com) in your Google Cloud
project.
For some common environment troubleshooting steps related to Google Cloud, see the bottom
of this readme.
Set the following environment variables:
* `GOOGLE_CLOUD_PROJECT_ID` - Your Google Cloud project ID.
* `MODEL_TYPE` - The model type for Vertex AI Search (e.g. `chat-bison`)
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-google-cloud-sensitive-data-protection
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-google-cloud-sensitive-data-protection
```
And add the following code to your `server.py` file:
```python
from rag_google_cloud_sensitive_data_protection.chain import chain as rag_google_cloud_sensitive_data_protection_chain
add_routes(app, rag_google_cloud_sensitive_data_protection_chain, path="/rag-google-cloud-sensitive-data-protection")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground
at [http://127.0.0.1:8000/rag-google-cloud-sensitive-data-protection/playground](http://127.0.0.1:8000/rag-google-cloud-sensitive-data-protection/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-google-cloud-sensitive-data-protection")
```
## Troubleshooting Google Cloud
You can set your `gcloud` credentials with the `gcloud` CLI using `gcloud auth application-default login`
You can set your `gcloud` project with the following commands
```bash
gcloud config set project <your project>
gcloud auth application-default set-quota-project <your project>
export GOOGLE_CLOUD_PROJECT_ID=<your project>
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-google-cloud-sensitive-data-protection\README.md |
.md | # rag-google-cloud-vertexai-search
This template is an application that utilizes Google Vertex AI Search, a machine learning powered search service, and
PaLM 2 for Chat (chat-bison). The application uses a Retrieval chain to answer questions based on your documents.
For more context on building RAG applications with Vertex AI Search,
check [here](https://cloud.google.com/generative-ai-app-builder/docs/enterprise-search-introduction).
## Environment Setup
Before using this template, please ensure that you are authenticated with Vertex AI Search. See the authentication
guide: [here](https://cloud.google.com/generative-ai-app-builder/docs/authentication).
You will also need to create:
- A search application [here](https://cloud.google.com/generative-ai-app-builder/docs/create-engine-es)
- A data store [here](https://cloud.google.com/generative-ai-app-builder/docs/create-data-store-es)
A suitable dataset to test this template with is the Alphabet Earnings Reports, which you can
find [here](https://abc.xyz/investor/). The data is also available
at `gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs`.
Set the following environment variables:
* `GOOGLE_CLOUD_PROJECT_ID` - Your Google Cloud project ID.
* `DATA_STORE_ID` - The ID of the data store in Vertex AI Search, which is a 36-character alphanumeric value found on
the data store details page.
* `MODEL_TYPE` - The model type for Vertex AI Search.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-google-cloud-vertexai-search
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-google-cloud-vertexai-search
```
And add the following code to your `server.py` file:
```python
from rag_google_cloud_vertexai_search.chain import chain as rag_google_cloud_vertexai_search_chain
add_routes(app, rag_google_cloud_vertexai_search_chain, path="/rag-google-cloud-vertexai-search")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground
at [http://127.0.0.1:8000/rag-google-cloud-vertexai-search/playground](http://127.0.0.1:8000/rag-google-cloud-vertexai-search/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-google-cloud-vertexai-search")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-google-cloud-vertexai-search\README.md |
.md |
# rag-gpt-crawler
GPT-crawler will crawl websites to produce files for use in custom GPTs or other apps (RAG).
This template uses [gpt-crawler](https://github.com/BuilderIO/gpt-crawler) to build a RAG app.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Crawling
Run GPT-crawler to extract content from a set of URLs, using the config file in the GPT-crawler repo.
Here is example config for LangChain use-case docs:
```
export const config: Config = {
url: "https://python.langchain.com/docs/use_cases/",
match: "https://python.langchain.com/docs/use_cases/**",
selector: ".docMainContainer_gTbr",
maxPagesToCrawl: 10,
outputFileName: "output.json",
};
```
Then, run this as described in the [gpt-crawler](https://github.com/BuilderIO/gpt-crawler) README:
```
npm start
```
And copy the `output.json` file into the folder containing this README.
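For orientation, gpt-crawler's `output.json` is a list of records with `title`, `url`, and `html` fields (field names per the gpt-crawler README; treat this loader as an illustrative sketch rather than the template's actual ingestion code):
```python
# Hedged sketch of loading crawler output into LangChain documents.
import json

from langchain.schema import Document

with open("output.json") as f:
    records = json.load(f)

docs = [
    Document(page_content=r["html"], metadata={"title": r["title"], "url": r["url"]})
    for r in records
]
print(f"Loaded {len(docs)} documents")
```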
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-gpt-crawler
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-gpt-crawler
```
And add the following code to your `server.py` file:
```python
from rag_gpt_crawler import chain as rag_gpt_crawler
add_routes(app, rag_gpt_crawler, path="/rag-gpt-crawler")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-gpt-crawler/playground](http://127.0.0.1:8000/rag-gpt-crawler/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-gpt-crawler")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-gpt-crawler\README.md |
.md | # rag-lancedb
This template performs RAG using LanceDB and OpenAI.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-lancedb
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-lancedb
```
And add the following code to your `server.py` file:
```python
from rag_lancedb import chain as rag_lancedb_chain
add_routes(app, rag_lancedb_chain, path="/rag-lancedb")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-lancedb/playground](http://127.0.0.1:8000/rag-lancedb/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-lancedb")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-lancedb\README.md |
.md |
# rag-matching-engine
This template performs RAG using Google Cloud Platform's Vertex AI Matching Engine.
It will utilize a previously created index to retrieve relevant documents or contexts based on user-provided questions.
## Environment Setup
An index should be created before running the code.
The process to create this index can be found [here](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/language/use-cases/document-qa/question_answering_documents_langchain_matching_engine.ipynb).
Environment variables for Vertex should be set:
```
PROJECT_ID
ME_REGION
GCS_BUCKET
ME_INDEX_ID
ME_ENDPOINT_ID
```
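For context, here is a hedged sketch of how these variables can feed LangChain's `MatchingEngine` wrapper (the embedding model choice is an assumption; see the template's `chain.py` for the actual wiring):
```python
import os

from langchain.embeddings import VertexAIEmbeddings
from langchain.vectorstores import MatchingEngine

# build the vector store from the environment variables listed above
vectorstore = MatchingEngine.from_components(
    project_id=os.environ["PROJECT_ID"],
    region=os.environ["ME_REGION"],
    gcs_bucket_name=os.environ["GCS_BUCKET"],
    index_id=os.environ["ME_INDEX_ID"],
    endpoint_id=os.environ["ME_ENDPOINT_ID"],
    embedding=VertexAIEmbeddings(),  # assumption: the template may use a different model
)
retriever = vectorstore.as_retriever()
```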
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-matching-engine
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-matching-engine
```
And add the following code to your `server.py` file:
```python
from rag_matching_engine import chain as rag_matching_engine_chain
add_routes(app, rag_matching_engine_chain, path="/rag-matching-engine")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-matching-engine/playground](http://127.0.0.1:8000/rag-matching-engine/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-matching-engine")
```
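Once the server is running, a quick sanity check (this assumes the chain accepts a plain question string, which is typical for these templates):
```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-matching-engine")
# assumes the chain takes a plain question string as input
print(runnable.invoke("What does the indexed corpus say about pricing?"))
```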
For more details on how to connect to the template, refer to the Jupyter notebook `rag_matching_engine`. | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-matching-engine\README.md |
.md | # rag-momento-vector-index
This template performs RAG using Momento Vector Index (MVI) and OpenAI.
> MVI: the most productive, easiest-to-use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about scaling. MVI is a service that scales automatically to meet your needs. Combine it with other Momento services, such as Momento Cache to cache prompts and serve as a session store, or Momento Topics as a pub/sub system to broadcast events to your application.
To sign up and access MVI, visit the [Momento Console](https://console.gomomento.com/).
## Environment Setup
This template uses Momento Vector Index as a vectorstore and requires that `MOMENTO_API_KEY` and `MOMENTO_INDEX_NAME` are set.
Go to the [console](https://console.gomomento.com/) to get an API key.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
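For reference, a hedged sketch of constructing the vector store by hand, mirroring the Momento docs at the time of writing (the template's `chain.py` is authoritative):
```python
import os

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MomentoVectorIndex
from momento import (
    CredentialProvider,
    PreviewVectorIndexClient,
    VectorIndexConfigurations,
)

# the client reads the API key from MOMENTO_API_KEY
client = PreviewVectorIndexClient(
    VectorIndexConfigurations.Default.latest(),
    credential_provider=CredentialProvider.from_environment_variable("MOMENTO_API_KEY"),
)
vectorstore = MomentoVectorIndex(
    embedding=OpenAIEmbeddings(),
    client=client,
    index_name=os.environ["MOMENTO_INDEX_NAME"],
)
```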
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-momento-vector-index
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-momento-vector-index
```
And add the following code to your `server.py` file:
```python
from rag_momento_vector_index import chain as rag_momento_vector_index_chain
add_routes(app, rag_momento_vector_index_chain, path="/rag-momento-vector-index")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-momento-vector-index/playground](http://127.0.0.1:8000/rag-momento-vector-index/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-momento-vector-index")
```
## Indexing Data
We have included a sample module to index data. That is available at `rag_momento_vector_index/ingest.py`. You will see a commented out line in `chain.py` that invokes this. Uncomment to use.
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-momento-vector-index\README.md |
.md |
# rag-mongo
This template performs RAG using MongoDB and OpenAI.
## Environment Setup
You should export two environment variables, one being your MongoDB URI, the other being your OpenAI API KEY.
If you do not have a MongoDB URI, see the `Setup Mongo` section at the bottom for instructions on how to do so.
```shell
export MONGO_URI=...
export OPENAI_API_KEY=...
```
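For reference, a minimal sketch of how these pieces connect (the database, collection, and index names below are placeholders; edit `rag_mongo/chain.py` for the real values):
```python
import os

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch
from pymongo import MongoClient

client = MongoClient(os.environ["MONGO_URI"])
collection = client["langchain_db"]["docs"]  # placeholder database/collection names

vectorstore = MongoDBAtlasVectorSearch(
    collection,
    OpenAIEmbeddings(),
    index_name="default",  # placeholder index name
)
retriever = vectorstore.as_retriever()
```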
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-mongo
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-mongo
```
And add the following code to your `server.py` file:
```python
from rag_mongo import chain as rag_mongo_chain
add_routes(app, rag_mongo_chain, path="/rag-mongo")
```
If you want to set up an ingestion pipeline, you can add the following code to your `server.py` file:
```python
from rag_mongo import ingest as rag_mongo_ingest
add_routes(app, rag_mongo_ingest, path="/rag-mongo-ingest")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you DO NOT already have a Mongo Search Index you want to connect to, see `MongoDB Setup` section below before proceeding.
If you DO have a MongoDB Search index you want to connect to, edit the connection details in `rag_mongo/chain.py`
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-mongo/playground](http://127.0.0.1:8000/rag-mongo/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-mongo")
```
For additional context, please refer to [this notebook](https://colab.research.google.com/drive/1cr2HBAHyBmwKUerJq2if0JaNhy-hIq7I#scrollTo=TZp7_CBfxTOB).
## MongoDB Setup
Use this step if you need to set up your MongoDB account and ingest data.
We will first follow the standard MongoDB Atlas setup instructions [here](https://www.mongodb.com/docs/atlas/getting-started/).
1. Create an account (if not already done)
2. Create a new project (if not already done)
3. Locate your MongoDB URI.
This can be done by going to the deployment overview page and connecting to your database
![Screenshot highlighting the 'Connect' button in MongoDB Atlas.](_images/connect.png "MongoDB Atlas Connect Button")
We then look at the drivers available
![Screenshot showing the MongoDB Atlas drivers section for connecting to the database.](_images/driver.png "MongoDB Atlas Drivers Section")
Among which we will see our URI listed
![Screenshot displaying an example of a MongoDB URI in the connection instructions.](_images/uri.png "MongoDB URI Example")
Let's then set that as an environment variable locally:
```shell
export MONGO_URI=...
```
4. Let's also set an environment variable for OpenAI (which we will use as an LLM)
```shell
export OPENAI_API_KEY=...
```
5. Let's now ingest some data! We can do that by moving into this directory and running the code in `ingest.py`, e.g.:
```shell
python ingest.py
```
Note that you can (and should!) change this to ingest data of your choice
6. We now need to set up a vector index on our data.
We can first connect to the cluster where our database lives
![Screenshot of the MongoDB Atlas interface showing the cluster overview with a 'Connect' button.](_images/cluster.png "MongoDB Atlas Cluster Overview")
We can then navigate to where all our collections are listed
![Screenshot of the MongoDB Atlas interface showing the collections overview within a database.](_images/collections.png "MongoDB Atlas Collections Overview")
We can then find the collection we want and look at the search indexes for that collection
![Screenshot showing the search indexes section in MongoDB Atlas for a specific collection.](_images/search-indexes.png "MongoDB Atlas Search Indexes")
That should likely be empty, and we want to create a new one:
![Screenshot highlighting the 'Create Index' button in MongoDB Atlas.](_images/create.png "MongoDB Atlas Create Index Button")
We will use the JSON editor to create it
![Screenshot showing the JSON Editor option for creating a search index in MongoDB Atlas.](_images/json_editor.png "MongoDB Atlas JSON Editor Option")
And we will paste the following JSON in:
```text
{
"mappings": {
"dynamic": true,
"fields": {
"embedding": {
"dimensions": 1536,
"similarity": "cosine",
"type": "knnVector"
}
}
}
}
```
![Screenshot of the JSON configuration for a search index in MongoDB Atlas.](_images/json.png "MongoDB Atlas Search Index JSON Configuration")
From there, hit "Next" and then "Create Search Index". It will take a little bit but you should then have an index over your data! | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-mongo\README.md |
.md | # RAG with Multiple Indexes (Fusion)
A QA application that queries multiple domain-specific retrievers and selects the most relevant documents from across all retrieved results.
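To make the pattern concrete, here is a hedged sketch of fusion: query several retrievers in parallel, then keep the unique union of documents (the template may instead rank by embedding similarity, and the Kay AI retriever is omitted here since it needs an API key):
```python
from langchain.retrievers import ArxivRetriever, PubMedRetriever, WikipediaRetriever
from langchain.schema.runnable import RunnableParallel

sources = RunnableParallel(
    arxiv=ArxivRetriever(),
    pubmed=PubMedRetriever(),
    wikipedia=WikipediaRetriever(),
)

def unique_union(results: dict) -> list:
    # de-duplicate documents retrieved across all sources
    seen, docs = set(), []
    for source_docs in results.values():
        for doc in source_docs:
            if doc.page_content not in seen:
                seen.add(doc.page_content)
                docs.append(doc)
    return docs

docs = unique_union(sources.invoke("How do mRNA vaccines work?"))
```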
## Environment Setup
This application queries PubMed, ArXiv, Wikipedia, and [Kay AI](https://www.kay.ai) (for SEC filings).
You will need to create a free Kay AI account and [get your API key here](https://www.kay.ai).
Then set environment variable:
```bash
export KAY_API_KEY="<YOUR_API_KEY>"
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-multi-index-fusion
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-multi-index-fusion
```
And add the following code to your `server.py` file:
```python
from rag_multi_index_fusion import chain as rag_multi_index_fusion_chain
add_routes(app, rag_multi_index_fusion_chain, path="/rag-multi-index-fusion")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-multi-index-fusion/playground](http://127.0.0.1:8000/rag-multi-index-fusion/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-multi-index-fusion")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-multi-index-fusion\README.md |
.md | # RAG with Multiple Indexes (Routing)
A QA application that routes between different domain-specific retrievers given a user question.
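A hedged sketch of the routing idea: an LLM names the best source for the question, and a simple dispatch table picks the matching retriever (the prompt wording and the mapping are illustrative, not the template's exact code):
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.retrievers import ArxivRetriever, PubMedRetriever, WikipediaRetriever
from langchain.schema.output_parser import StrOutputParser

route_prompt = ChatPromptTemplate.from_template(
    "Given the question below, answer with exactly one of: "
    "pubmed, arxiv, wikipedia.\n\nQuestion: {question}"
)
router = route_prompt | ChatOpenAI(temperature=0) | StrOutputParser()

retrievers = {
    "pubmed": PubMedRetriever(),
    "arxiv": ArxivRetriever(),
    "wikipedia": WikipediaRetriever(),
}

question = "What are recent findings on mRNA vaccines?"
source = router.invoke({"question": question}).strip().lower()
docs = retrievers.get(source, retrievers["wikipedia"]).invoke(question)
```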
## Environment Setup
This application queries PubMed, ArXiv, Wikipedia, and [Kay AI](https://www.kay.ai) (for SEC filings).
You will need to create a free Kay AI account and [get your API key here](https://www.kay.ai).
Then set environment variable:
```bash
export KAY_API_KEY="<YOUR_API_KEY>"
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-multi-index-router
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-multi-index-router
```
And add the following code to your `server.py` file:
```python
from rag_multi_index_router import chain as rag_multi_index_router_chain
add_routes(app, rag_multi_index_router_chain, path="/rag-multi-index-router")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-multi-index-router/playground](http://127.0.0.1:8000/rag-multi-index-router/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-multi-index-router")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-multi-index-router\README.md |
.md |
# rag-multi-modal-local
Visual search is a familiar application to many with iPhones or Android devices. It allows users to search photos using natural language.
With the release of open source multi-modal LLMs, it's possible to build this kind of application yourself for your own private photo collection.
This template demonstrates how to perform private visual search and question-answering over a collection of your photos.
It uses OpenCLIP embeddings to embed all of the photos and stores them in Chroma.
Given a question, relevant photos are retrieved and passed to an open source multi-modal LLM of your choice for answer synthesis.
![Diagram illustrating the visual search process with OpenCLIP embeddings and multi-modal LLM for question-answering, featuring example food pictures and a matcha soft serve answer trace.](https://github.com/langchain-ai/langchain/assets/122662504/da543b21-052c-4c43-939e-d4f882a45d75 "Visual Search Process Diagram")
## Input
Supply a set of photos in the `/docs` directory.
By default, this template has a toy collection of 3 food pictures.
Example questions to ask can be:
```
What kind of soft serve did I have?
```
In practice, a larger corpus of images can be tested.
To create an index of the images, run:
```
poetry install
python ingest.py
```
## Storage
This template will use [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings to embed the images.
You can select different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).
The first time you run the app, it will automatically download the multimodal embedding model.
By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
You can choose alternative `OpenCLIPEmbeddings` models in `rag_multi_modal_local/ingest.py`:
```
vectorstore_mmembd = Chroma(
collection_name="multi-modal-rag",
persist_directory=str(re_vectorstore_path),
embedding_function=OpenCLIPEmbeddings(
model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
),
)
```
## LLM
This template will use [Ollama](https://python.langchain.com/docs/integrations/chat/ollama#multi-modal).
Download the latest version of Ollama: https://ollama.ai/
Pull an open source multi-modal LLM: e.g., https://ollama.ai/library/bakllava
```
ollama pull bakllava
```
The app is by default configured for `bakllava`. But you can change this in `chain.py` and `ingest.py` for different downloaded models.
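For reference, a minimal sketch of calling a multi-modal model through Ollama with an image, following the pattern from the LangChain Ollama docs (the file path is a placeholder):
```python
import base64

from langchain.llms import Ollama

# base64-encode a local photo (placeholder path)
with open("docs/example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

llm = Ollama(model="bakllava")
# bind the image as context, then ask a question about it
response = llm.bind(images=[image_b64]).invoke("What food is shown in this picture?")
print(response)
```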
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-multi-modal-local
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-multi-modal-local
```
And add the following code to your `server.py` file:
```python
from rag_multi_modal_local import chain as rag_multi_modal_local_chain
add_routes(app, rag_multi_modal_local_chain, path="/rag-multi-modal-local")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-multi-modal-local/playground](http://127.0.0.1:8000/rag-multi-modal-local/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-multi-modal-local\README.md |
.md |
# rag-multi-modal-mv-local
Visual search is a familiar application to many with iPhones or Android devices. It allows users to search photos using natural language.
With the release of open source multi-modal LLMs, it's possible to build this kind of application yourself for your own private photo collection.
This template demonstrates how to perform private visual search and question-answering over a collection of your photos.
It uses an open source multi-modal LLM of your choice to create image summaries for each photo, embeds the summaries, and stores them in Chroma.
Given a question, relevant photos are retrieved and passed to the multi-modal LLM for answer synthesis.
![Diagram illustrating the visual search process with food pictures, captioning, a database, a question input, and the synthesis of an answer using a multi-modal LLM.](https://github.com/langchain-ai/langchain/assets/122662504/cd9b3d82-9b06-4a39-8490-7482466baf43 "Visual Search Process Diagram")
## Input
Supply a set of photos in the `/docs` directory.
By default, this template has a toy collection of 3 food pictures.
The app will look up and summarize photos based upon provided keywords or questions:
```
What kind of ice cream did I have?
```
In practice, a larger corpus of images can be tested.
To create an index of the images, run:
```
poetry install
python ingest.py
```
## Storage
Here is the process the template will use to create an index of the images (see [blog](https://blog.langchain.dev/multi-modal-rag-template/)):
* Given a set of images
* It uses a local multi-modal LLM ([bakllava](https://ollama.ai/library/bakllava)) to summarize each image
* Embeds the image summaries with a link to the original images
* Given a user question, it will retrieve relevant image(s) based on similarity between the image summary and user input (using Ollama embeddings)
* It will pass those images to bakllava for answer synthesis
By default, this will use [LocalFileStore](https://python.langchain.com/docs/integrations/stores/file_system) to store images and Chroma to store summaries.
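A hedged sketch of that storage layout with LangChain's multi-vector retriever (the collection name, store path, and embedding model are assumptions; `ingest.py` is the source of truth):
```python
from langchain.embeddings import OllamaEmbeddings
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import LocalFileStore
from langchain.vectorstores import Chroma

# image summaries are embedded and indexed; the raw images live in the
# file store, linked to their summaries through a shared id_key
vectorstore = Chroma(
    collection_name="mm-rag-mv-local",  # placeholder collection name
    embedding_function=OllamaEmbeddings(model="llama2:7b"),
)
docstore = LocalFileStore("./photo_store")  # placeholder path

retriever = MultiVectorRetriever(
    vectorstore=vectorstore, docstore=docstore, id_key="doc_id"
)
```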
## LLM and Embedding Models
We will use [Ollama](https://python.langchain.com/docs/integrations/chat/ollama#multi-modal) for generating image summaries, embeddings, and the final image QA.
Download the latest version of Ollama: https://ollama.ai/
Pull an open source multi-modal LLM: e.g., https://ollama.ai/library/bakllava
Pull an open source embedding model: e.g., https://ollama.ai/library/llama2:7b
```
ollama pull bakllava
ollama pull llama2:7b
```
The app is by default configured for `bakllava`. But you can change this in `chain.py` and `ingest.py` for different downloaded models.
The app will retrieve images based on similarity between the text input and the image summary, and pass the images to `bakllava`.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-multi-modal-mv-local
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-multi-modal-mv-local
```
And add the following code to your `server.py` file:
```python
from rag_multi_modal_mv_local import chain as rag_multi_modal_mv_local_chain
add_routes(app, rag_multi_modal_mv_local_chain, path="/rag-multi-modal-mv-local")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-multi-modal-mv-local/playground](http://127.0.0.1:8000/rag-multi-modal-mv-local/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-multi-modal-mv-local")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-multi-modal-mv-local\README.md |
.md |
# rag-ollama-multi-query
This template performs RAG using Ollama and OpenAI with a multi-query retriever.
The multi-query retriever is an example of query transformation, generating multiple queries from different perspectives based on the user's input query.
For each query, it retrieves a set of relevant documents and takes the unique union across all queries for answer synthesis.
We use a private, local LLM for the narrow task of query generation to avoid excessive calls to a larger LLM API.
See an example trace for Ollama LLM performing the query expansion [here](https://smith.langchain.com/public/8017d04d-2045-4089-b47f-f2d66393a999/r).
But we use OpenAI for the more challenging task of answer synthesis (full trace example [here](https://smith.langchain.com/public/ec75793b-645b-498d-b855-e8d85e1f6738/r)).
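A hedged, self-contained sketch of that split using `MultiQueryRetriever`, with a toy Chroma store standing in for the template's real index:
```python
from langchain.chat_models import ChatOllama
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.vectorstores import Chroma

# a toy vectorstore so the sketch is self-contained
vectorstore = Chroma.from_texts(
    ["LangChain helps build LLM apps.", "Ollama runs LLMs locally."],
    OpenAIEmbeddings(),
)

# the local zephyr model generates query variants; a larger LLM is saved for synthesis
retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=ChatOllama(model="zephyr", temperature=0),
)
docs = retriever.get_relevant_documents("What runs models locally?")
```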
## Environment Setup
To set up the environment, you need to download Ollama.
Follow the instructions [here](https://python.langchain.com/docs/integrations/chat/ollama).
You can choose the desired LLM with Ollama.
This template uses `zephyr`, which can be accessed using `ollama pull zephyr`.
There are many other options available [here](https://ollama.ai/library).
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first install the LangChain CLI:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this package, do:
```shell
langchain app new my-app --package rag-ollama-multi-query
```
To add this package to an existing project, run:
```shell
langchain app add rag-ollama-multi-query
```
And add the following code to your `server.py` file:
```python
from rag_ollama_multi_query import chain as rag_ollama_multi_query_chain
add_routes(app, rag_ollama_multi_query_chain, path="/rag-ollama-multi-query")
```
(Optional) Now, let's configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
You can access the playground at [http://127.0.0.1:8000/rag-ollama-multi-query/playground](http://127.0.0.1:8000/rag-ollama-multi-query/playground)
To access the template from code, use:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-ollama-multi-query")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-ollama-multi-query\README.md |
.txt | [INFO] Initializing machine learning training job. Model: Convolutional Neural Network Dataset: MNIST Hyperparameters: ; - Learning Rate: 0.001; - Batch Size: 64
[INFO] Loading training data. Training data loaded successfully. Number of training samples: 60,000
[INFO] Loading validation data. Validation data loaded successfully. Number of validation samples: 10,000
[INFO] Training started. Epoch 1/10; - Loss: 0.532; - Accuracy: 0.812 Epoch 2/10; - Loss: 0.398; - Accuracy: 0.874 Epoch 3/10; - Loss: 0.325; - Accuracy: 0.901 ... (training progress) Training completed.
[INFO] Validation started. Validation loss: 0.287 Validation accuracy: 0.915 Model performance meets validation criteria. Saving the model.
[INFO] Testing the trained model. Test loss: 0.298 Test accuracy: 0.910
[INFO] Deploying the trained model to production. Model deployment successful. API endpoint: http://your-api-endpoint/predict
[INFO] Monitoring system initialized. Monitoring metrics:; - CPU Usage: 25%; - Memory Usage: 40%; - GPU Usage: 80%
[ALERT] High GPU Usage Detected! Scaling resources to handle increased load.
[INFO] Machine learning training job completed successfully. Total training time: 3 hours and 45 minutes.
[INFO] Cleaning up resources. Job artifacts removed. Training environment closed.
[INFO] Image processing web server started. Listening on port 8080.
[INFO] Received image processing request from client at IP address 192.168.1.100. Preprocessing image: resizing to 800x600 pixels. Image preprocessing completed successfully.
[INFO] Applying filters to enhance image details. Filters applied: sharpening, contrast adjustment. Image enhancement completed.
[INFO] Generating thumbnail for the processed image. Thumbnail generated successfully.
[INFO] Uploading processed image to the user's gallery. Image successfully added to the gallery. Image ID: 123456.
[INFO] Sending notification to the user: Image processing complete. Notification sent successfully.
[ERROR] Failed to process image due to corrupted file format. Informing the client about the issue. Client notified about the image processing failure.
[INFO] Image processing web server shutting down. Cleaning up resources. Server shutdown complete. | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-opensearch\dummy_data.txt |
.md | # rag-opensearch
This template performs RAG using [OpenSearch](https://python.langchain.com/docs/integrations/vectorstores/opensearch).
## Environment Setup
Set the following environment variables.
- `OPENAI_API_KEY` - To access OpenAI Embeddings and Models.
And optionally set the OpenSearch ones if not using defaults:
- `OPENSEARCH_URL` - URL of the hosted OpenSearch Instance
- `OPENSEARCH_USERNAME` - User name for the OpenSearch instance
- `OPENSEARCH_PASSWORD` - Password for the OpenSearch instance
- `OPENSEARCH_INDEX_NAME` - Name of the index
To run the default OpenSearch instance in Docker, you can use the command:
```shell
docker run -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" --name opensearch-node -d opensearchproject/opensearch:latest
```
Note: To load a dummy index named `langchain-test` with dummy documents, run `python dummy_index_setup.py` in the package.
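For reference, a minimal sketch of connecting to that local instance with LangChain's OpenSearch vector store (the admin credentials and certificate flag match the stock Docker image; adjust them for a hardened deployment):
```python
import os

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

vectorstore = OpenSearchVectorSearch(
    opensearch_url=os.getenv("OPENSEARCH_URL", "https://localhost:9200"),
    index_name=os.getenv("OPENSEARCH_INDEX_NAME", "langchain-test"),
    embedding_function=OpenAIEmbeddings(),
    http_auth=("admin", "admin"),  # stock docker-image defaults
    verify_certs=False,  # the demo container uses a self-signed certificate
)
retriever = vectorstore.as_retriever()
```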
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-opensearch
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-opensearch
```
And add the following code to your `server.py` file:
```python
from rag_opensearch import chain as rag_opensearch_chain
add_routes(app, rag_opensearch_chain, path="/rag-opensearch")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-opensearch/playground](http://127.0.0.1:8000/rag-opensearch/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-opensearch")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-opensearch\README.md |
.md |
# rag-pinecone
This template performs RAG using Pinecone and OpenAI.
## Environment Setup
This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
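For reference, a minimal sketch of connecting to the existing index (this uses the v2 `pinecone-client` API that LangChain's `Pinecone` wrapper targeted at the time):
```python
import os

import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)
vectorstore = Pinecone.from_existing_index(
    os.environ["PINECONE_INDEX"], OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()
```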
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-pinecone
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-pinecone
```
And add the following code to your `server.py` file:
```python
from rag_pinecone import chain as rag_pinecone_chain
add_routes(app, rag_pinecone_chain, path="/rag-pinecone")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-pinecone/playground](http://127.0.0.1:8000/rag-pinecone/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-pinecone")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-pinecone\README.md |
.md |
# rag-pinecone-multi-query
This template performs RAG using Pinecone and OpenAI with a multi-query retriever.
It uses an LLM to generate multiple queries from different perspectives based on the user's input query.
For each query, it retrieves a set of relevant documents and takes the unique union across all queries for answer synthesis.
## Environment Setup
This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first install the LangChain CLI:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this package, do:
```shell
langchain app new my-app --package rag-pinecone-multi-query
```
To add this package to an existing project, run:
```shell
langchain app add rag-pinecone-multi-query
```
And add the following code to your `server.py` file:
```python
from rag_pinecone_multi_query import chain as rag_pinecone_multi_query_chain
add_routes(app, rag_pinecone_multi_query_chain, path="/rag-pinecone-multi-query")
```
(Optional) Now, let's configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
You can access the playground at [http://127.0.0.1:8000/rag-pinecone-multi-query/playground](http://127.0.0.1:8000/rag-pinecone-multi-query/playground)
To access the template from code, use:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-pinecone-multi-query")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-pinecone-multi-query\README.md |
.md |
# rag-pinecone-rerank
This template performs RAG using Pinecone and OpenAI along with [Cohere to perform re-ranking](https://txt.cohere.com/rerank/) on returned documents.
Re-ranking provides a way to rank retrieved documents using specified filters or criteria.
## Environment Setup
This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `COHERE_API_KEY` environment variable to access the Cohere ReRank.
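A hedged sketch of the re-rank stage: wrap the Pinecone retriever in a compression retriever that re-scores results with Cohere (the `top_n` value is illustrative):
```python
import os

import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
from langchain.vectorstores import Pinecone

pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)
base_retriever = Pinecone.from_existing_index(
    os.environ["PINECONE_INDEX"], OpenAIEmbeddings()
).as_retriever()

# Cohere re-scores the retrieved documents and keeps only the best few
retriever = ContextualCompressionRetriever(
    base_compressor=CohereRerank(top_n=3),  # top_n is illustrative
    base_retriever=base_retriever,
)
```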
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-pinecone-rerank
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-pinecone-rerank
```
And add the following code to your `server.py` file:
```python
from rag_pinecone_rerank import chain as rag_pinecone_rerank_chain
add_routes(app, rag_pinecone_rerank_chain, path="/rag-pinecone-rerank")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-pinecone-rerank/playground](http://127.0.0.1:8000/rag-pinecone-rerank/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-pinecone-rerank")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-pinecone-rerank\README.md |
.md |
# rag-redis
This template performs RAG using Redis (vector database) and OpenAI (LLM) on financial 10k filings docs for Nike.
It relies on the sentence transformer `all-MiniLM-L6-v2` for embedding chunks of the pdf and user questions.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the [OpenAI](https://platform.openai.com) models:
```bash
export OPENAI_API_KEY=<YOUR OPENAI API KEY>
```
Set the following [Redis](https://redis.com/try-free) environment variables:
```bash
export REDIS_HOST=<YOUR REDIS HOST>
export REDIS_PORT=<YOUR REDIS PORT>
export REDIS_USER=<YOUR REDIS USER NAME>
export REDIS_PASSWORD=<YOUR REDIS PASSWORD>
```
## Supported Settings
We use a variety of environment variables to configure this application:
| Environment Variable | Description | Default Value |
|----------------------|-----------------------------------|---------------|
| `DEBUG` | Enable or disable Langchain debugging logs | True |
| `REDIS_HOST` | Hostname for the Redis server | "localhost" |
| `REDIS_PORT` | Port for the Redis server | 6379 |
| `REDIS_USER` | User for the Redis server | "" |
| `REDIS_PASSWORD` | Password for the Redis server | "" |
| `REDIS_URL` | Full URL for connecting to Redis | `None`, Constructed from user, password, host, and port if not provided |
| `INDEX_NAME` | Name of the vector index | "rag-redis" |
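A sketch of the `REDIS_URL` fallback described in the table (a plausible implementation, not necessarily the template's exact code):
```python
import os

def get_redis_url() -> str:
    # prefer an explicit REDIS_URL; otherwise build one from the parts
    url = os.getenv("REDIS_URL")
    if url:
        return url
    user = os.getenv("REDIS_USER", "")
    password = os.getenv("REDIS_PASSWORD", "")
    host = os.getenv("REDIS_HOST", "localhost")
    port = os.getenv("REDIS_PORT", "6379")
    auth = f"{user}:{password}@" if password else ""
    return f"redis://{auth}{host}:{port}"
```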
## Usage
To use this package, you should first have the LangChain CLI and Pydantic installed in a Python virtual environment:
```shell
pip install -U langchain-cli pydantic==1.10.13
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-redis
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-redis
```
And add the following code snippet to your `app/server.py` file:
```python
from rag_redis.chain import chain as rag_redis_chain
add_routes(app, rag_redis_chain, path="/rag-redis")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-redis/playground](http://127.0.0.1:8000/rag-redis/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-redis")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-redis\README.md |
.md | # rag-self-query
This template performs RAG using the self-query retrieval technique. The main idea is to let an LLM convert unstructured queries into structured queries. See the [docs for more on how this works](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query).
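To illustrate, here is a self-contained sketch with a toy Chroma store (the template itself uses Elasticsearch; the movie metadata fields are invented for the example, and query translation needs the `lark` package installed):
```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.schema import Document
from langchain.vectorstores import Chroma

docs = [
    Document(page_content="A sci-fi film about dreams", metadata={"year": 2010, "rating": 8.8}),
    Document(page_content="A romance set on a ship", metadata={"year": 1997, "rating": 7.9}),
]
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

metadata_field_info = [
    AttributeInfo(name="year", description="Release year", type="integer"),
    AttributeInfo(name="rating", description="Critic rating, 1-10", type="float"),
]

# the LLM turns the unstructured question into a metadata filter plus a semantic query
retriever = SelfQueryRetriever.from_llm(
    ChatOpenAI(temperature=0),
    vectorstore,
    "Brief summary of a movie",
    metadata_field_info,
)
results = retriever.get_relevant_documents("sci-fi movies released after 2005")
```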
## Environment Setup
In this template we'll use OpenAI models and an Elasticsearch vector store, but the approach generalizes to all LLMs/ChatModels and [a number of vector stores](https://python.langchain.com/docs/integrations/retrievers/self_query/).
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To connect to your Elasticsearch instance, use the following environment variables:
```bash
export ELASTIC_CLOUD_ID=<CLOUD_ID>
export ELASTIC_USERNAME=<CLOUD_USERNAME>
export ELASTIC_PASSWORD=<CLOUD_PASSWORD>
```
For local development with Docker, use:
```bash
export ES_URL="http://localhost:9200"
docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.9.0
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-self-query
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-self-query
```
And add the following code to your `server.py` file:
```python
from rag_self_query import chain
add_routes(app, chain, path="/rag-elasticsearch")
```
To populate the vector store with the sample data, from the root of the directory run:
```bash
python ingest.py
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-elasticsearch/playground](http://127.0.0.1:8000/rag-elasticsearch/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-self-query")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-self-query\README.md |
.md | # rag-semi-structured
This template performs RAG on semi-structured data, such as a PDF with text and tables.
See [this cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb) as a reference.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
This uses [Unstructured](https://unstructured-io.github.io/unstructured/) for PDF parsing, which requires some system-level package installations.
On Mac, you can install the necessary packages with the following:
```shell
brew install tesseract poppler
```
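For context, a minimal sketch of how Unstructured separates text and tables during parsing, following the pattern from the cookbook linked above (the file name and chunking parameters are illustrative):
```python
from unstructured.partition.pdf import partition_pdf

# parse a PDF, inferring table structure and chunking text by section titles
elements = partition_pdf(
    filename="example.pdf",  # placeholder file
    infer_table_structure=True,
    chunking_strategy="by_title",
    max_characters=4000,
)

# split the parsed elements into tables and text chunks by element type
tables = [e for e in elements if "Table" in str(type(e))]
texts = [e for e in elements if "CompositeElement" in str(type(e))]
```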
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-semi-structured
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-semi-structured
```
And add the following code to your `server.py` file:
```python
from rag_semi_structured import chain as rag_semi_structured_chain
add_routes(app, rag_semi_structured_chain, path="/rag-semi-structured")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-semi-structured/playground](http://127.0.0.1:8000/rag-semi-structured/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-semi-structured")
```
For more details on how to connect to the template, refer to the Jupyter notebook `rag_semi_structured`. | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-semi-structured\README.md |
.md |
# rag-singlestoredb
This template performs RAG using SingleStoreDB and OpenAI.
## Environment Setup
This template uses SingleStoreDB as a vectorstore and requires that `SINGLESTOREDB_URL` is set. It should take the form `admin:password@host:port/db_name`.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
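For reference, a minimal sketch of indexing documents into SingleStoreDB with LangChain (the table name is a placeholder):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores import SingleStoreDB

docs = [Document(page_content="SingleStoreDB supports vector search.")]

# the connection string is read from the SINGLESTOREDB_URL environment variable
vectorstore = SingleStoreDB.from_documents(
    docs,
    OpenAIEmbeddings(),
    table_name="notebook",  # placeholder table name
)
retriever = vectorstore.as_retriever()
```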
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-singlestoredb
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-singlestoredb
```
And add the following code to your `server.py` file:
```python
from rag_singlestoredb import chain as rag_singlestoredb_chain
add_routes(app, rag_singlestoredb_chain, path="/rag-singlestoredb")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-singlestoredb/playground](http://127.0.0.1:8000/rag-singlestoredb/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-singlestoredb")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-singlestoredb\README.md |
.md |
# rag_supabase
This template performs RAG with Supabase.
[Supabase](https://supabase.com/docs) is an open-source Firebase alternative. It is built on top of [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL), a free and open-source relational database management system (RDBMS) and uses [pgvector](https://github.com/pgvector/pgvector) to store embeddings within your tables.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.
To find your `SUPABASE_URL` and `SUPABASE_SERVICE_KEY`, head to your Supabase project's [API settings](https://supabase.com/dashboard/project/_/settings/api).
- `SUPABASE_URL` corresponds to the Project URL
- `SUPABASE_SERVICE_KEY` corresponds to the `service_role` API key
```shell
export SUPABASE_URL=
export SUPABASE_SERVICE_KEY=
export OPENAI_API_KEY=
```
## Setup Supabase Database
Use these steps to set up your Supabase database if you haven't already.
1. Head over to https://database.new to provision your Supabase database.
2. In the studio, jump to the [SQL editor](https://supabase.com/dashboard/project/_/sql/new) and run the following script to enable `pgvector` and set up your database as a vector store:
```sql
-- Enable the pgvector extension to work with embedding vectors
create extension if not exists vector;
-- Create a table to store your documents
create table
documents (
id uuid primary key,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector (1536) -- 1536 works for OpenAI embeddings, change as needed
);
-- Create a function to search for documents
create function match_documents (
query_embedding vector (1536),
filter jsonb default '{}'
) returns table (
id uuid,
content text,
metadata jsonb,
similarity float
) language plpgsql as $$
#variable_conflict use_column
begin
return query
select
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) as similarity
from documents
where metadata @> filter
order by documents.embedding <=> query_embedding;
end;
$$;
```
## Setup Environment Variables
Since we are using [`SupabaseVectorStore`](https://python.langchain.com/docs/integrations/vectorstores/supabase) and [`OpenAIEmbeddings`](https://python.langchain.com/docs/integrations/text_embedding/openai), we need to load their API keys.
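Putting those together, a minimal sketch of constructing the vector store against the `documents` table and `match_documents` function created above:
```python
import os

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore
from supabase.client import create_client

supabase = create_client(
    os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"]
)
vectorstore = SupabaseVectorStore(
    client=supabase,
    embedding=OpenAIEmbeddings(),
    table_name="documents",
    query_name="match_documents",
)
retriever = vectorstore.as_retriever()
```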
## Usage
First, install the LangChain CLI:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-supabase
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-supabase
```
And add the following code to your `server.py` file:
```python
from rag_supabase.chain import chain as rag_supabase_chain
add_routes(app, rag_supabase_chain, path="/rag-supabase")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-supabase/playground](http://127.0.0.1:8000/rag-supabase/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-supabase")
```
TODO: Add details about setting up the Supabase database | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-supabase\README.md |
.md |
# rag-timescale-conversation
This template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases.
It passes both a conversation history and retrieved documents into an LLM for synthesis.
## Environment Setup
This template uses Timescale Vector as a vectorstore and requires that `TIMESCALES_SERVICE_URL` is set. Sign up for a 90-day trial [here](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) if you don't yet have an account.
To load the sample dataset, set `LOAD_SAMPLE_DATA=1`. To load your own dataset see the section below.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-timescale-conversation
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-timescale-conversation
```
And add the following code to your `server.py` file:
```python
from rag_timescale_conversation import chain as rag_timescale_conversation_chain
add_routes(app, rag_timescale_conversation_chain, path="/rag-timescale-conversation")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-timescale-conversation/playground](http://127.0.0.1:8000/rag-timescale-conversation/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-timescale-conversation")
```
See the `rag_conversation.ipynb` notebook for example usage.
## Loading your own dataset
To load your own dataset, you will have to create a `load_dataset` function. You can see an example in the
`load_ts_git_dataset` function defined in the `load_sample_dataset.py` file. You can then run this as a
standalone function (e.g. in a bash script) or add it to `chain.py` (but then you should run it only once).
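As a rough illustration, here is a minimal sketch of such a loading function, assuming documents are indexed into Timescale Vector with OpenAI embeddings; the keyword arguments are assumptions and should be checked against `load_sample_dataset.py`.
```python
# A minimal sketch of a custom dataset loader -- the TimescaleVector keyword
# arguments are assumptions; compare with load_ts_git_dataset in
# load_sample_dataset.py.
import os

from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores.timescalevector import TimescaleVector
from langchain_core.documents import Document


def load_my_dataset(collection_name: str = "my_collection") -> None:
    documents = [
        Document(
            page_content="Release 1.2 shipped on 2023-05-01 with bug fixes.",
            metadata={"source": "changelog"},
        ),
    ]
    TimescaleVector.from_documents(
        documents,
        embedding=OpenAIEmbeddings(),
        collection_name=collection_name,
        service_url=os.environ["TIMESCALES_SERVICE_URL"],  # as named in this README
    )
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-timescale-conversation\README.md |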
.md | # RAG with Timescale Vector using hybrid search
This template shows how to use timescale-vector with the self-query retriever to perform hybrid search on similarity and time.
This is useful any time your data has a strong time-based component. Some examples of such data are:
- News articles (politics, business, etc)
- Blog posts, documentation or other published material (public or private).
- Social media posts
- Changelogs of any kind
- Messages
Such items are often searched by both similarity and time. For example: "Show me all news about Toyota trucks from 2022."
[Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) provides superior performance when searching for embeddings within a particular timeframe by leveraging automatic table partitioning to isolate data for particular time-ranges.
LangChain's self-query retriever allows deducing time ranges (as well as other search criteria) from the text of user queries.
## What is Timescale Vector?
**[Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) is PostgreSQL++ for AI applications.**
Timescale Vector enables you to efficiently store and query billions of vector embeddings in `PostgreSQL`.
- Enhances `pgvector` with faster and more accurate similarity search on 1B+ vectors via a DiskANN-inspired indexing algorithm.
- Enables fast time-based vector search via automatic time-based partitioning and indexing.
- Provides a familiar SQL interface for querying vector embeddings and relational data.
Timescale Vector is cloud PostgreSQL for AI that scales with you from POC to production:
- Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series data in a single database.
- Benefits from a rock-solid PostgreSQL foundation with enterprise-grade features like streaming backups and replication, high availability, and row-level security.
- Enables a worry-free experience with enterprise-grade security and compliance.
### How to access Timescale Vector
Timescale Vector is available on [Timescale](https://www.timescale.com/products?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral), the cloud PostgreSQL platform. (There is no self-hosted version at this time.)
- LangChain users get a 90-day free trial for Timescale Vector.
- To get started, [sign up](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) for Timescale, create a new database and follow this notebook!
- See the [installation instructions](https://github.com/timescale/python-vector) for more details on using Timescale Vector in Python.
## Environment Setup
This template uses Timescale Vector as a vectorstore and requires that the `TIMESCALES_SERVICE_URL` environment variable be set. Sign up for a 90-day trial [here](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) if you don't yet have an account.
To load the sample dataset, set `LOAD_SAMPLE_DATA=1`. To load your own dataset, see the section below.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-timescale-hybrid-search-time
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-timescale-hybrid-search-time
```
And add the following code to your `server.py` file:
```python
from rag_timescale_hybrid_search_time.chain import chain as rag_timescale_hybrid_search_chain
add_routes(app, rag_timescale_hybrid_search_chain, path="/rag-timescale-hybrid-search")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-timescale-hybrid-search/playground](http://127.0.0.1:8000/rag-timescale-hybrid-search/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-timescale-hybrid-search")
```
## Loading your own dataset
To load your own dataset, you will have to modify the code in the `DATASET SPECIFIC CODE` section of `chain.py`.
This code defines the name of the collection, how to load the data, and the human-language descriptions of both the
contents of the collection and all of the metadata. The self-query retriever uses these descriptions to help the LLM
convert the question into filters on the metadata when searching the data in Timescale Vector.
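For orientation, here is a hedged sketch of the kind of descriptions involved; the field names are hypothetical, not the template's actual `DATASET SPECIFIC CODE`.
```python
# A hedged sketch of self-query metadata descriptions -- the field names are
# hypothetical and should mirror the metadata your loader actually stores.
from langchain.chains.query_constructor.schema import AttributeInfo

document_content_description = "News articles about the auto industry."
metadata_field_info = [
    AttributeInfo(
        name="author", description="The author of the article", type="string"
    ),
    AttributeInfo(
        name="published_time",
        description="Publication time, used for time-based filtering",
        type="timestamp",
    ),
]
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-timescale-hybrid-search-time\README.md |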
.md |
# rag-vectara
This template performs RAG with Vectara.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Also, ensure the following environment variables are set:
* `VECTARA_CUSTOMER_ID`
* `VECTARA_CORPUS_ID`
* `VECTARA_API_KEY`
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-vectara
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-vectara
```
And add the following code to your `server.py` file:
```python
from rag_vectara import chain as rag_vectara_chain
add_routes(app, rag_vectara_chain, path="/rag-vectara")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "vectara-demo"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-vectara/playground](http://127.0.0.1:8000/rag-vectara/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-vectara")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-vectara\README.md |
.md |
# rag-vectara-multiquery
This template performs multi-query RAG with Vectara.
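To illustrate what "multi-query" means here, a minimal sketch follows; it is not the template's exact chain, just LangChain's generic `MultiQueryRetriever` wrapped around a Vectara retriever.
```python
# A minimal sketch of multi-query retrieval over Vectara -- not the
# template's exact chain. The LLM generates several variants of the user
# question and the retriever merges the results from all of them.
from langchain.chat_models import ChatOpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_community.vectorstores import Vectara

vectara = Vectara()  # reads the VECTARA_* settings from the environment
retriever = MultiQueryRetriever.from_llm(
    retriever=vectara.as_retriever(), llm=ChatOpenAI(temperature=0)
)
docs = retriever.get_relevant_documents("What does the corpus say about pricing?")
```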
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Also, ensure the following environment variables are set:
* `VECTARA_CUSTOMER_ID`
* `VECTARA_CORPUS_ID`
* `VECTARA_API_KEY`
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-vectara-multiquery
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-vectara-multiquery
```
And add the following code to your `server.py` file:
```python
from rag_vectara_multiquery import chain as rag_vectara_multiquery_chain
add_routes(app, rag_vectara_multiquery_chain, path="/rag-vectara-multiquery")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "vectara-demo"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-vectara-multiquery/playground](http://127.0.0.1:8000/rag-vectara-multiquery/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-vectara-multiquery")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-vectara-multiquery\README.md |
.md |
# rag-weaviate
This template performs RAG with Weaviate.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Also, ensure the following environment variables are set:
* `WEAVIATE_ENVIRONMENT`
* `WEAVIATE_API_KEY`
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-weaviate
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-weaviate
```
And add the following code to your `server.py` file:
```python
from rag_weaviate import chain as rag_weaviate_chain
add_routes(app, rag_weaviate_chain, path="/rag-weaviate")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-weaviate/playground](http://127.0.0.1:8000/rag-weaviate/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-weaviate")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-weaviate\README.md |
.md | # research-assistant
This template implements a version of
[GPT Researcher](https://github.com/assafelovic/gpt-researcher) that you can use
as a starting point for a research agent.
## Environment Setup
The default template relies on ChatOpenAI and DuckDuckGo, so you will need the
following environment variable:
- `OPENAI_API_KEY`
And to use the Tavily LLM-optimized search engine, you will need:
- `TAVILY_API_KEY`
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package research-assistant
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add research-assistant
```
And add the following code to your `server.py` file:
```python
from research_assistant import chain as research_assistant_chain
add_routes(app, research_assistant_chain, path="/research-assistant")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/research-assistant/playground](http://127.0.0.1:8000/research-assistant/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/research-assistant")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\research-assistant\README.md |
.md | # retrieval-agent
This package uses Azure OpenAI to do retrieval using an agent architecture.
By default, this does retrieval over arXiv.
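For a sense of the moving parts, here is a hedged sketch of exposing arXiv retrieval as an agent tool; the tool name and description are hypothetical and the template's own agent wiring may differ.
```python
# A hedged sketch of exposing arXiv retrieval as an agent tool -- the tool
# name and description are hypothetical, not the template's actual values.
from langchain.tools.retriever import create_retriever_tool
from langchain_community.retrievers import ArxivRetriever

retriever = ArxivRetriever()
arxiv_tool = create_retriever_tool(
    retriever,
    name="arxiv_search",
    description="Search arXiv for papers relevant to the question.",
)
```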
## Environment Setup
Since we are using Azure OpenAI, we will need to set the following environment variables:
```shell
export AZURE_OPENAI_ENDPOINT=...
export AZURE_OPENAI_API_VERSION=...
export AZURE_OPENAI_API_KEY=...
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package retrieval-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add retrieval-agent
```
And add the following code to your `server.py` file:
```python
from retrieval_agent import chain as retrieval_agent_chain
add_routes(app, retrieval_agent_chain, path="/retrieval-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/retrieval-agent/playground](http://127.0.0.1:8000/retrieval-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/retrieval-agent")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\retrieval-agent\README.md |
.md | # retrieval-agent-fireworks
This package uses open source models hosted on FireworksAI to do retrieval using an agent architecture. By default, this does retrieval over arXiv.
We will use `Mixtral8x7b-instruct-v0.1`, which is shown in [this blog post](https://huggingface.co/blog/open-source-llms-as-agents) to yield reasonable
results with function calling, even though it is not fine-tuned for this task.
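As a quick orientation, here is a minimal sketch of pointing LangChain's Fireworks chat model at Mixtral; the model identifier is an assumption based on Fireworks' naming scheme, and the template's agent does considerably more on top.
```python
# A minimal sketch -- the model id is an assumption based on Fireworks'
# naming scheme; the template's agent wiring is more involved.
from langchain_community.chat_models.fireworks import ChatFireworks

llm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
print(llm.invoke("Say hello in one short sentence.").content)
```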
## Environment Setup
There are various great ways to run OSS models. We will use FireworksAI as an easy way to run the models. See [here](https://python.langchain.com/docs/integrations/providers/fireworks) for more information.
Set the `FIREWORKS_API_KEY` environment variable to access Fireworks.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package retrieval-agent-fireworks
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add retrieval-agent-fireworks
```
And add the following code to your `server.py` file:
```python
from retrieval_agent_fireworks import chain as retrieval_agent_fireworks_chain
add_routes(app, retrieval_agent_fireworks_chain, path="/retrieval-agent-fireworks")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/retrieval-agent-fireworks/playground](http://127.0.0.1:8000/retrieval-agent-fireworks/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/retrieval-agent-fireworks")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\retrieval-agent-fireworks\README.md |
.md |
# rewrite_retrieve_read
This template implements the query transformation (re-writing) method from the paper [Query Rewriting for Retrieval-Augmented Large Language Models](https://arxiv.org/pdf/2305.14283.pdf) to optimize for RAG.
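In outline, the idea looks roughly like the sketch below: an LLM rewrites the raw user question into a better search query before retrieval. This is a hedged illustration, not the template's exact chain or prompt.
```python
# A hedged sketch of rewrite-retrieve-read -- not the template's exact
# chain or prompt. The LLM rewrites the question, and retrieval then uses
# the rewritten query instead of the raw one.
from langchain.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

rewrite_prompt = ChatPromptTemplate.from_template(
    "Provide a better search query for a web search engine to answer the "
    "given question. Question: {question}"
)
rewriter = rewrite_prompt | ChatOpenAI(temperature=0) | StrOutputParser()

search_query = rewriter.invoke({"question": "what is langchain?"})
# docs = retriever.invoke(search_query)  # hypothetical retrieval step
```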
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rewrite-retrieve-read
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rewrite-retrieve-read
```
And add the following code to your `server.py` file:
```python
from rewrite_retrieve_read.chain import chain as rewrite_retrieve_read_chain
add_routes(app, rewrite_retrieve_read_chain, path="/rewrite-retrieve-read")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rewrite-retrieve-read/playground](http://127.0.0.1:8000/rewrite-retrieve-read/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rewrite_retrieve_read")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rewrite-retrieve-read\README.md |
.md | # LangChain - Robocorp Action Server
This template enables using [Robocorp Action Server](https://github.com/robocorp/robocorp) served actions as tools for an Agent.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package robocorp-action-server
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add robocorp-action-server
```
And add the following code to your `server.py` file:
```python
from robocorp_action_server import agent_executor as action_server_chain
add_routes(app, action_server_chain, path="/robocorp-action-server")
```
### Running the Action Server
To run the Action Server, you need to have the Robocorp Action Server installed:
```bash
pip install -U robocorp-action-server
```
Then you can run the Action Server with:
```bash
action-server new
cd ./your-project-name
action-server start
```
### Configure LangSmith (Optional)
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
### Start LangServe instance
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/robocorp-action-server/playground](http://127.0.0.1:8000/robocorp-action-server/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/robocorp-action-server")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\robocorp-action-server\README.md |
.md |
# self-query-qdrant
This template performs [self-querying](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/)
using Qdrant and OpenAI. By default, it uses an artificial dataset of 10 documents, but you can replace it with your own dataset.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `QDRANT_URL` to the URL of your Qdrant instance. If you use [Qdrant Cloud](https://cloud.qdrant.io)
you have to set the `QDRANT_API_KEY` environment variable as well. If you do not set any of them,
the template will try to connect to a local Qdrant instance at `http://localhost:6333`.
```shell
export QDRANT_URL=
export QDRANT_API_KEY=
export OPENAI_API_KEY=
```
## Usage
To use this package, install the LangChain CLI first:
```shell
pip install -U "langchain-cli[serve]"
```
Create a new LangChain project and install this package as the only one:
```shell
langchain app new my-app --package self-query-qdrant
```
To add this to an existing project, run:
```shell
langchain app add self-query-qdrant
```
### Defaults
Before you launch the server, you need to create a Qdrant collection and index the documents.
You can do this by running the following code:
```python
from self_query_qdrant.chain import initialize
initialize()
```
Add the following code to your `app/server.py` file:
```python
from self_query_qdrant.chain import chain
add_routes(app, chain, path="/self-query-qdrant")
```
The default dataset consists of 10 documents about dishes, along with their price and restaurant information.
You can find the documents in the `packages/self-query-qdrant/self_query_qdrant/defaults.py` file.
Here is one of the documents:
```python
from langchain_core.documents import Document
Document(
page_content="Spaghetti with meatballs and tomato sauce",
metadata={
"price": 12.99,
"restaurant": {
"name": "Olive Garden",
"location": ["New York", "Chicago", "Los Angeles"],
},
},
)
```
Self-querying allows performing semantic search over the documents, with additional filtering
based on the metadata. For example, you can search for dishes that cost less than $15 and are served in New York.
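As a hypothetical invocation (after `initialize()` has indexed the default documents):
```python
# Hypothetical invocation sketch -- passing a plain string is an assumption
# about the chain's input type; adjust to its actual schema if needed.
from self_query_qdrant.chain import chain

print(chain.invoke("Dishes that cost less than $15 and are served in New York"))
```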
### Customization
All the examples above assume that you want to launch the template with just the defaults.
If you want to customize the template, you can do it by passing the parameters to the `create_chain` function
in the `app/server.py` file:
```python
from langchain_community.llms import Cohere
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain.chains.query_constructor.schema import AttributeInfo
from self_query_qdrant.chain import create_chain
chain = create_chain(
llm=Cohere(),
embeddings=HuggingFaceEmbeddings(),
document_contents="Descriptions of cats, along with their names and breeds.",
metadata_field_info=[
AttributeInfo(name="name", description="Name of the cat", type="string"),
AttributeInfo(name="breed", description="Cat's breed", type="string"),
],
collection_name="cats",
)
```
The same goes for the `initialize` function that creates a Qdrant collection and indexes the documents:
```python
from langchain_core.documents import Document
from langchain_community.embeddings import HuggingFaceEmbeddings
from self_query_qdrant.chain import initialize
initialize(
embeddings=HuggingFaceEmbeddings(),
collection_name="cats",
documents=[
Document(
page_content="A mean lazy old cat who destroys furniture and eats lasagna",
metadata={"name": "Garfield", "breed": "Tabby"},
),
...
]
)
```
The template is flexible and can easily be used for different sets of documents.
### LangSmith
(Optional) If you have access to LangSmith, configure it to help trace, monitor and debug LangChain applications. If you don't have access, skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
### Local Server
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
Access the playground at [http://127.0.0.1:8000/self-query-qdrant/playground](http://127.0.0.1:8000/self-query-qdrant/playground)
Access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/self-query-qdrant")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\self-query-qdrant\README.md |
.md |
# self-query-supabase
This template allows natural-language structured querying of Supabase.
[Supabase](https://supabase.com/docs) is an open-source alternative to Firebase, built on top of [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL).
It uses [pgvector](https://github.com/pgvector/pgvector) to store embeddings within your tables.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.
To find your `SUPABASE_URL` and `SUPABASE_SERVICE_KEY`, head to your Supabase project's [API settings](https://supabase.com/dashboard/project/_/settings/api).
- `SUPABASE_URL` corresponds to the Project URL
- `SUPABASE_SERVICE_KEY` corresponds to the `service_role` API key
```shell
export SUPABASE_URL=
export SUPABASE_SERVICE_KEY=
export OPENAI_API_KEY=
```
## Setup Supabase Database
Use these steps to set up your Supabase database if you haven't already.
1. Head over to https://database.new to provision your Supabase database.
2. In the studio, jump to the [SQL editor](https://supabase.com/dashboard/project/_/sql/new) and run the following script to enable `pgvector` and set up your database as a vector store:
```sql
-- Enable the pgvector extension to work with embedding vectors
create extension if not exists vector;
-- Create a table to store your documents
create table
documents (
id uuid primary key,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector (1536) -- 1536 works for OpenAI embeddings, change as needed
);
-- Create a function to search for documents
create function match_documents (
query_embedding vector (1536),
filter jsonb default '{}'
) returns table (
id uuid,
content text,
metadata jsonb,
similarity float
) language plpgsql as $$
#variable_conflict use_column
begin
return query
select
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) as similarity
from documents
where metadata @> filter
order by documents.embedding <=> query_embedding;
end;
$$;
```
## Usage
To use this package, install the LangChain CLI first:
```shell
pip install -U langchain-cli
```
Create a new LangChain project and install this package as the only one:
```shell
langchain app new my-app --package self-query-supabase
```
To add this to an existing project, run:
```shell
langchain app add self-query-supabase
```
Add the following code to your `server.py` file:
```python
from self_query_supabase.chain import chain as self_query_supabase_chain
add_routes(app, self_query_supabase_chain, path="/self-query-supabase")
```
(Optional) If you have access to LangSmith, configure it to help trace, monitor and debug LangChain applications. If you don't have access, skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
Access the playground at [http://127.0.0.1:8000/self-query-supabase/playground](http://127.0.0.1:8000/self-query-supabase/playground)
Access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/self-query-supabase")
```
TODO: Instructions to set up the Supabase database and install the package.
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\self-query-supabase\README.md |
.md | # shopping-assistant
This template creates a shopping assistant that helps users find products that they are looking for.
This template will use `Ionic` to search for products.
## Environment Setup
This template will use `OpenAI` by default.
Be sure that `OPENAI_API_KEY` is set in your environment.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package shopping-assistant
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add shopping-assistant
```
And add the following code to your `server.py` file:
```python
from shopping_assistant.agent import agent_executor as shopping_assistant_chain
add_routes(app, shopping_assistant_chain, path="/shopping-assistant")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/shopping-assistant/playground](http://127.0.0.1:8000/shopping-assistant/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/shopping-assistant")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\shopping-assistant\README.md |
.md | # skeleton-of-thought
Implements "Skeleton of Thought" from [this](https://sites.google.com/view/sot-llm) paper.
This technique makes it possible to generate longer generations more quickly by first generating a skeleton, then generating each point of the outline.
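In spirit, the technique looks roughly like this hedged sketch (not the template's actual prompts or chain):
```python
# A hedged sketch of skeleton-of-thought -- not the template's actual
# prompts. First generate a short outline, then expand each point; the
# expansions run in parallel via .batch().
from langchain.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(temperature=0)
skeleton_chain = (
    ChatPromptTemplate.from_template(
        "Write a numbered outline (3-5 short points) answering: {question}"
    )
    | llm
    | StrOutputParser()
)
expand_chain = (
    ChatPromptTemplate.from_template(
        "Expand this outline point into one paragraph: {point}"
    )
    | llm
    | StrOutputParser()
)

outline = skeleton_chain.invoke({"question": "Why do seasons change?"})
points = [line.strip() for line in outline.splitlines() if line.strip()]
paragraphs = expand_chain.batch([{"point": p} for p in points])
```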
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package skeleton-of-thought
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add skeleton-of-thought
```
And add the following code to your `server.py` file:
```python
from skeleton_of_thought import chain as skeleton_of_thought_chain
add_routes(app, skeleton_of_thought_chain, path="/skeleton-of-thought")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/skeleton-of-thought/playground](http://127.0.0.1:8000/skeleton-of-thought/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/skeleton-of-thought")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\skeleton-of-thought\README.md |
.md | # solo-performance-prompting-agent
This template creates an agent that transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas.
A cognitive synergist refers to an intelligent agent that collaborates with multiple minds, combining their individual strengths and knowledge, to enhance problem-solving and overall performance in complex tasks. By dynamically identifying and simulating different personas based on task inputs, Solo Performance Prompting (SPP) unleashes the potential of cognitive synergy in LLMs.
This template will use the `DuckDuckGo` search API.
## Environment Setup
This template will use `OpenAI` by default.
Be sure that `OPENAI_API_KEY` is set in your environment.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package solo-performance-prompting-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add solo-performance-prompting-agent
```
And add the following code to your `server.py` file:
```python
from solo_performance_prompting_agent.agent import agent_executor as solo_performance_prompting_agent_chain
add_routes(app, solo_performance_prompting_agent_chain, path="/solo-performance-prompting-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/solo-performance-prompting-agent/playground](http://127.0.0.1:8000/solo-performance-prompting-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/solo-performance-prompting-agent")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\solo-performance-prompting-agent\README.md |
.md |
# sql-llama2
This template enables a user to interact with a SQL database using natural language.
It uses LLaMA2-13b hosted by [Replicate](https://python.langchain.com/docs/integrations/llms/replicate), but can be adapted to any API that supports LLaMA2 including [Fireworks](https://python.langchain.com/docs/integrations/chat/fireworks).
The template includes an example database of 2023 NBA rosters.
For more information on how to build this database, see [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).
## Environment Setup
Ensure the `REPLICATE_API_TOKEN` is set in your environment.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-llama2
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-llama2
```
And add the following code to your `server.py` file:
```python
from sql_llama2 import chain as sql_llama2_chain
add_routes(app, sql_llama2_chain, path="/sql-llama2")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-llama2/playground](http://127.0.0.1:8000/sql-llama2/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-llama2")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\sql-llama2\README.md |
.md |
# sql-llamacpp
This template enables a user to interact with a SQL database using natural language.
It uses [Mistral-7b](https://mistral.ai/news/announcing-mistral-7b/) via [llama.cpp](https://github.com/ggerganov/llama.cpp) to run inference locally on a Mac laptop.
## Environment Setup
To set up the environment, use the following steps:
```shell
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh
bash Miniforge3-MacOSX-arm64.sh
conda create -n llama python=3.9.16
conda activate llama
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-llamacpp
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-llamacpp
```
And add the following code to your `server.py` file:
```python
from sql_llamacpp import chain as sql_llamacpp_chain
add_routes(app, sql_llamacpp_chain, path="/sql-llamacpp")
```
The package will download the Mistral-7b model from [here](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF). You can select other files and specify their download path (browse [here](https://huggingface.co/TheBloke)).
This package includes an example DB of 2023 NBA rosters. You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).
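For orientation, here is a minimal sketch of loading such a GGUF file with LangChain's `LlamaCpp` wrapper; the file name and parameters are assumptions, and the template's `chain.py` already handles this for you.
```python
# A minimal sketch -- the GGUF file name and parameters are assumptions;
# the template's chain.py already handles model loading.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # the downloaded file
    n_gpu_layers=1,  # enables Metal offload on Apple Silicon
    n_ctx=2048,
    temperature=0,
)
print(llm.invoke("Write one sentence about SQL."))
```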
(Optional) Configure LangSmith for tracing, monitoring and debugging LangChain applications. LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
You can access the playground at [http://127.0.0.1:8000/sql-llamacpp/playground](http://127.0.0.1:8000/sql-llamacpp/playground)
You can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-llamacpp")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\sql-llamacpp\README.md |
.md | # sql-ollama
This template enables a user to interact with a SQL database using natural language.
It uses [Zephyr-7b](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) via [Ollama](https://ollama.ai/library/zephyr) to run inference locally on a Mac laptop.
## Environment Setup
Before using this template, you need to set up Ollama and SQL database.
1. Follow instructions [here](https://python.langchain.com/docs/integrations/chat/ollama) to download Ollama.
2. Download your LLM of interest:
* This package uses `zephyr`: `ollama pull zephyr` (a quick smoke-test sketch follows this list)
* You can choose from many LLMs [here](https://ollama.ai/library)
3. This package includes an example DB of 2023 NBA rosters. You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).
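Once the model is pulled, you can optionally smoke-test it before wiring up the SQL chain; this sketch is an assumption about your setup, not part of the template itself.
```python
# A minimal smoke test (not part of the template): verify that Ollama is
# serving zephyr before running the SQL chain.
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="zephyr")
print(llm.invoke("Reply with the word: ready").content)
```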
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-ollama
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-ollama
```
And add the following code to your `server.py` file:
```python
from sql_ollama import chain as sql_ollama_chain
add_routes(app, sql_ollama_chain, path="/sql-ollama")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-ollama/playground](http://127.0.0.1:8000/sql-ollama/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-ollama")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\sql-ollama\README.md |
.md | # sql-pgvector
This template enables a user to use `pgvector` to combine PostgreSQL with semantic search / RAG.
It uses the [PGVector](https://github.com/pgvector/pgvector) extension as shown in the [RAG empowered SQL cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/retrieval_in_sql.ipynb).
## Environment Setup
If you are using `ChatOpenAI` as your LLM, make sure the `OPENAI_API_KEY` is set in your environment. You can change both the LLM and embeddings model inside `chain.py`.
You can also configure the following environment variables
for use by the template (defaults are in parentheses):
- `POSTGRES_USER` (postgres)
- `POSTGRES_PASSWORD` (test)
- `POSTGRES_DB` (vectordb)
- `POSTGRES_HOST` (localhost)
- `POSTGRES_PORT` (5432)
If you don't have a postgres instance, you can run one locally in docker:
```bash
docker run \
--name some-postgres \
-e POSTGRES_PASSWORD=test \
-e POSTGRES_USER=postgres \
-e POSTGRES_DB=vectordb \
-p 5432:5432 \
postgres:16
```
And to start again later, use the `--name` defined above:
```bash
docker start some-postgres
```
### PostgreSQL Database setup
Apart from having the `pgvector` extension enabled, you will need to do some setup before being able to run semantic search within your SQL queries.
In order to run RAG over your PostgreSQL database, you will need to generate the embeddings for the specific columns you want.
This process is covered in the [RAG empowered SQL cookbook](cookbook/retrieval_in_sql.ipynb), but the overall approach consists of (a rough sketch follows the list):
1. Querying for unique values in the column
2. Generating embeddings for those values
3. Storing the embeddings in a separate column or in an auxiliary table.
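Here is a rough sketch of those three steps; the `tracks` table, the `tracks_embeddings` auxiliary table, and the column names are hypothetical, and the cookbook shows the complete approach.
```python
# A rough sketch of the three steps above -- the table and column names are
# hypothetical; see the cookbook for the full approach.
import os

import psycopg2
from langchain_community.embeddings import OpenAIEmbeddings

conn = psycopg2.connect(
    host=os.getenv("POSTGRES_HOST", "localhost"),
    port=os.getenv("POSTGRES_PORT", "5432"),
    user=os.getenv("POSTGRES_USER", "postgres"),
    password=os.getenv("POSTGRES_PASSWORD", "test"),
    dbname=os.getenv("POSTGRES_DB", "vectordb"),
)
embeddings = OpenAIEmbeddings()

with conn, conn.cursor() as cur:
    # 1. Query for unique values in the column
    cur.execute("SELECT DISTINCT title FROM tracks")
    values = [row[0] for row in cur.fetchall()]
    # 2. Generate embeddings for those values
    vectors = embeddings.embed_documents(values)
    # 3. Store them in an auxiliary table with a pgvector column
    for value, vector in zip(values, vectors):
        cur.execute(
            "INSERT INTO tracks_embeddings (title, embedding) VALUES (%s, %s::vector)",
            (value, str(vector)),
        )
```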
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-pgvector
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-pgvector
```
And add the following code to your `server.py` file:
```python
from sql_pgvector import chain as sql_pgvector_chain
add_routes(app, sql_pgvector_chain, path="/sql-pgvector")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-pgvector/playground](http://127.0.0.1:8000/sql-pgvector/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-pgvector")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\sql-pgvector\README.md |
.md | # sql-research-assistant
This package does research over a SQL database.
## Usage
This package relies on multiple models, which have the following dependencies:
- OpenAI: set the `OPENAI_API_KEY` environment variables
- Ollama: [install and run Ollama](https://python.langchain.com/docs/integrations/chat/ollama)
- llama2 (on Ollama): `ollama pull llama2` (otherwise you will get 404 errors from Ollama)
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-research-assistant
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-research-assistant
```
And add the following code to your `server.py` file:
```python
from sql_research_assistant import chain as sql_research_assistant_chain
add_routes(app, sql_research_assistant_chain, path="/sql-research-assistant")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-research-assistant/playground](http://127.0.0.1:8000/sql-research-assistant/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-research-assistant")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\sql-research-assistant\README.md |
.md | # stepback-qa-prompting
This template replicates the "Step-Back" prompting technique that improves performance on complex questions by first asking a "step back" question.
This technique can be combined with regular question-answering applications by doing retrieval on both the original and step-back question.
Read more about this in the paper [here](https://arxiv.org/abs/2310.06117) and in an excellent blog post by Cobus Greyling [here](https://cobusgreyling.medium.com/a-new-prompt-engineering-technique-has-been-introduced-called-step-back-prompting-b00e8954cacb).
We will modify the prompts slightly to work better with chat models in this template.
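A hedged sketch of the core move (not the template's exact prompts, which are adapted for chat models):
```python
# A hedged sketch of step-back prompting -- not the template's exact
# prompts. Generate a more generic "step back" question, then retrieve on
# both the original and the step-back question.
from langchain.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

step_back_chain = (
    ChatPromptTemplate.from_template(
        "Rewrite the question as a more generic 'step back' question that "
        "is easier to answer: {question}"
    )
    | ChatOpenAI(temperature=0)
    | StrOutputParser()
)

question = "Could the members of The Police perform lawful arrests?"
step_back_question = step_back_chain.invoke({"question": question})
# docs = retriever.invoke(question) + retriever.invoke(step_back_question)  # hypothetical retriever
```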
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package stepback-qa-prompting
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add stepback-qa-prompting
```
And add the following code to your `server.py` file:
```python
from stepback_qa_prompting.chain import chain as stepback_qa_prompting_chain
add_routes(app, stepback_qa_prompting_chain, path="/stepback-qa-prompting")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/stepback-qa-prompting/playground](http://127.0.0.1:8000/stepback-qa-prompting/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/stepback-qa-prompting")
``` | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\stepback-qa-prompting\README.md |
.md |
# summarize-anthropic
This template uses Anthropic's `Claude2` to summarize long documents.
It leverages a large context window of 100k tokens, allowing for summarization of documents over 100 pages.
You can see the summarization prompt in `chain.py`.
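For a rough idea of usage, here is a hypothetical sketch; the input format is an assumption, so check the chain's actual input schema in `chain.py` before relying on it.
```python
# Hypothetical usage sketch -- the input key is an assumption; check the
# chain's actual input schema in chain.py.
from summarize_anthropic import chain

with open("long_report.txt") as f:
    text = f.read()

print(chain.invoke({"text": text}))
```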
## Environment Setup
Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package summarize-anthropic
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add summarize-anthropic
```
And add the following code to your `server.py` file:
```python
from summarize_anthropic import chain as summarize_anthropic_chain
add_routes(app, summarize_anthropic_chain, path="/summarize-anthropic")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/summarize-anthropic/playground](http://127.0.0.1:8000/summarize-anthropic/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/summarize-anthropic")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\summarize-anthropic\README.md |
.md |
# vertexai-chuck-norris
This template makes jokes about Chuck Norris using Vertex AI PaLM2.
## Environment Setup
First, make sure you have a Google Cloud project with
an active billing account, and have the [gcloud CLI installed](https://cloud.google.com/sdk/docs/install).
Configure [application default credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc):
```shell
gcloud auth application-default login
```
To set a default Google Cloud project to use, run this command and set [the project ID](https://support.google.com/googleapi/answer/7014113?hl=en) of the project you want to use:
```shell
gcloud config set project [PROJECT-ID]
```
Enable the [Vertex AI API](https://console.cloud.google.com/apis/library/aiplatform.googleapis.com) for the project:
```shell
gcloud services enable aiplatform.googleapis.com
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package vertexai-chuck-norris
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add vertexai-chuck-norris
```
And add the following code to your `server.py` file:
```python
from vertexai_chuck_norris.chain import chain as vertexai_chuck_norris_chain
add_routes(app, vertexai_chuck_norris_chain, path="/vertexai-chuck-norris")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/vertexai-chuck-norris/playground](http://127.0.0.1:8000/vertexai-chuck-norris/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/vertexai-chuck-norris")
```
| C:\Users\wesla\CodePilotAI\repositories\langchain\templates\vertexai-chuck-norris\README.md |