omarsol committed on
Commit
83e14bf
1 Parent(s): c6373ae


Files changed (50)
  1. langchain_md_files/integrations/providers/iugu.mdx +0 -19
  2. langchain_md_files/integrations/providers/jaguar.mdx +0 -62
  3. langchain_md_files/integrations/providers/javelin_ai_gateway.mdx +0 -92
  4. langchain_md_files/integrations/providers/jina.mdx +0 -20
  5. langchain_md_files/integrations/providers/johnsnowlabs.mdx +0 -117
  6. langchain_md_files/integrations/providers/joplin.mdx +0 -19
  7. langchain_md_files/integrations/providers/kdbai.mdx +0 -24
  8. langchain_md_files/integrations/providers/kinetica.mdx +0 -44
  9. langchain_md_files/integrations/providers/konko.mdx +0 -65
  10. langchain_md_files/integrations/providers/labelstudio.mdx +0 -23
  11. langchain_md_files/integrations/providers/lakefs.mdx +0 -18
  12. langchain_md_files/integrations/providers/lancedb.mdx +0 -23
  13. langchain_md_files/integrations/providers/langchain_decorators.mdx +0 -370
  14. langchain_md_files/integrations/providers/lantern.mdx +0 -25
  15. langchain_md_files/integrations/providers/llamacpp.mdx +0 -50
  16. langchain_md_files/integrations/providers/llmonitor.mdx +0 -22
  17. langchain_md_files/integrations/providers/log10.mdx +0 -104
  18. langchain_md_files/integrations/providers/maritalk.mdx +0 -21
  19. langchain_md_files/integrations/providers/mediawikidump.mdx +0 -31
  20. langchain_md_files/integrations/providers/meilisearch.mdx +0 -30
  21. langchain_md_files/integrations/providers/metal.mdx +0 -26
  22. langchain_md_files/integrations/providers/milvus.mdx +0 -25
  23. langchain_md_files/integrations/providers/mindsdb.mdx +0 -14
  24. langchain_md_files/integrations/providers/minimax.mdx +0 -33
  25. langchain_md_files/integrations/providers/mistralai.mdx +0 -34
  26. langchain_md_files/integrations/providers/mlflow.mdx +0 -119
  27. langchain_md_files/integrations/providers/mlflow_ai_gateway.mdx +0 -160
  28. langchain_md_files/integrations/providers/mlx.mdx +0 -34
  29. langchain_md_files/integrations/providers/modal.mdx +0 -95
  30. langchain_md_files/integrations/providers/modelscope.mdx +0 -24
  31. langchain_md_files/integrations/providers/modern_treasury.mdx +0 -19
  32. langchain_md_files/integrations/providers/momento.mdx +0 -65
  33. langchain_md_files/integrations/providers/mongodb_atlas.mdx +0 -82
  34. langchain_md_files/integrations/providers/motherduck.mdx +0 -53
  35. langchain_md_files/integrations/providers/motorhead.mdx +0 -16
  36. langchain_md_files/integrations/providers/myscale.mdx +0 -66
  37. langchain_md_files/integrations/providers/neo4j.mdx +0 -60
  38. langchain_md_files/integrations/providers/nlpcloud.mdx +0 -31
  39. langchain_md_files/integrations/providers/notion.mdx +0 -20
  40. langchain_md_files/integrations/providers/nuclia.mdx +0 -78
  41. langchain_md_files/integrations/providers/nvidia.mdx +0 -82
  42. langchain_md_files/integrations/providers/obsidian.mdx +0 -19
  43. langchain_md_files/integrations/providers/oci.mdx +0 -51
  44. langchain_md_files/integrations/providers/octoai.mdx +0 -37
  45. langchain_md_files/integrations/providers/ollama.mdx +0 -73
  46. langchain_md_files/integrations/providers/ontotext_graphdb.mdx +0 -21
  47. langchain_md_files/integrations/providers/openllm.mdx +0 -70
  48. langchain_md_files/integrations/providers/opensearch.mdx +0 -21
  49. langchain_md_files/integrations/providers/openweathermap.mdx +0 -44
  50. langchain_md_files/integrations/providers/oracleai.mdx +0 -67
langchain_md_files/integrations/providers/iugu.mdx DELETED
@@ -1,19 +0,0 @@
- # Iugu
-
- >[Iugu](https://www.iugu.com/) is a Brazilian services and software as a service (SaaS)
- > company. It offers payment-processing software and application programming
- > interfaces for e-commerce websites and mobile applications.
-
-
- ## Installation and Setup
-
- The `Iugu API` requires an access token, which can be found inside of the `Iugu` dashboard.
-
-
- ## Document Loader
-
- See a [usage example](/docs/integrations/document_loaders/iugu).
-
- ```python
- from langchain_community.document_loaders import IuguLoader
- ```

langchain_md_files/integrations/providers/jaguar.mdx DELETED
@@ -1,62 +0,0 @@
- # Jaguar
-
- This page describes how to use the Jaguar vector database within LangChain.
- It contains three sections: introduction, installation and setup, and the Jaguar API.
-
-
- ## Introduction
-
- The Jaguar vector database has the following characteristics:
-
- 1. It is a distributed vector database
- 2. The “ZeroMove” feature of JaguarDB enables instant horizontal scalability
- 3. Multimodal: embeddings, text, images, videos, PDFs, audio, time series, and geospatial
- 4. All-masters: allows both parallel reads and writes
- 5. Anomaly detection capabilities
- 6. RAG support: combines LLMs with proprietary and real-time data
- 7. Shared metadata: sharing of metadata across multiple vector indexes
- 8. Distance metrics: Euclidean, Cosine, InnerProduct, Manhattan, Chebyshev, Hamming, Jaccard, Minkowski
-
- [Overview of the Jaguar scalable vector database](http://www.jaguardb.com)
-
- You can run JaguarDB in a docker container, or download the software and run it on-cloud or off-cloud.
-
- ## Installation and Setup
-
- - Install JaguarDB on one or more hosts
- - Install the Jaguar HTTP Gateway server on one host
- - Install the JaguarDB HTTP Client package
-
- The steps are described in the [Jaguar Documents](http://www.jaguardb.com/support.html).
-
- Environment variables in client programs:
-
-     export OPENAI_API_KEY="......"
-     export JAGUAR_API_KEY="......"
-
-
- ## Jaguar API
-
- Together with LangChain, a Jaguar client class is provided; import it in Python:
-
- ```python
- from langchain_community.vectorstores.jaguar import Jaguar
- ```
-
- Supported API functions of the Jaguar class are:
-
- - `add_texts`
- - `add_documents`
- - `from_texts`
- - `from_documents`
- - `similarity_search`
- - `is_anomalous`
- - `create`
- - `delete`
- - `clear`
- - `drop`
- - `login`
- - `logout`
-
-
- For more details on the Jaguar API, please refer to [this notebook](/docs/integrations/vectorstores/jaguar).

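Editor's note: the listed functions follow the standard LangChain vector-store interface. A minimal sketch of a typical call sequence, assuming `vs` is a `Jaguar` instance created as shown in the notebook linked above (constructor arguments are omitted here because they are store-specific):

```python
# A minimal sketch, assuming `vs` is a configured Jaguar instance and
# JAGUAR_API_KEY is set in the environment.
vs.login()  # authenticate against the Jaguar HTTP Gateway
vs.add_texts(["JaguarDB is a distributed vector database"])
docs = vs.similarity_search("what is JaguarDB?", k=1)
print(docs[0].page_content)
vs.logout()
```
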
langchain_md_files/integrations/providers/javelin_ai_gateway.mdx DELETED
@@ -1,92 +0,0 @@
- # Javelin AI Gateway
-
- [The Javelin AI Gateway](https://www.getjavelin.io) service is a high-performance, enterprise-grade API Gateway for AI applications.
- It is designed to streamline the usage and access of various large language model (LLM) providers,
- such as OpenAI, Cohere, Anthropic, and custom large language models within an organization by incorporating
- robust access security for all interactions with LLMs.
-
- Javelin offers a high-level interface that simplifies the interaction with LLMs by providing a unified endpoint
- to handle specific LLM-related requests.
-
- See the Javelin AI Gateway [documentation](https://docs.getjavelin.io) for more details.
- The [Javelin Python SDK](https://www.github.com/getjavelin/javelin-python) is an easy-to-use client library meant to be embedded into AI applications.
-
- ## Installation and Setup
-
- Install `javelin_sdk` to interact with the Javelin AI Gateway:
-
- ```sh
- pip install 'javelin_sdk'
- ```
-
- Set the Javelin API key as an environment variable:
-
- ```sh
- export JAVELIN_API_KEY=...
- ```
-
- ## Completions Example
-
- ```python
- from langchain.chains import LLMChain
- from langchain_community.llms import JavelinAIGateway
- from langchain_core.prompts import PromptTemplate
-
- route_completions = "eng_dept03"
-
- gateway = JavelinAIGateway(
-     gateway_uri="http://localhost:8000",
-     route=route_completions,
-     model_name="text-davinci-003",
- )
-
- # the prompt was undefined in the original snippet; an illustrative
- # single-variable template is used here
- prompt = PromptTemplate.from_template("Write me a tagline for a {product}")
-
- llmchain = LLMChain(llm=gateway, prompt=prompt)
- result = llmchain.run("podcast player")
-
- print(result)
- ```
-
- ## Embeddings Example
-
- ```python
- from langchain_community.embeddings import JavelinAIGatewayEmbeddings
- from langchain_openai import OpenAIEmbeddings
-
- embeddings = JavelinAIGatewayEmbeddings(
-     gateway_uri="http://localhost:8000",
-     route="embeddings",
- )
-
- print(embeddings.embed_query("hello"))
- print(embeddings.embed_documents(["hello"]))
- ```
-
- ## Chat Example
- ```python
- from langchain_community.chat_models import ChatJavelinAIGateway
- from langchain_core.messages import HumanMessage, SystemMessage
-
- messages = [
-     SystemMessage(
-         content="You are a helpful assistant that translates English to French."
-     ),
-     HumanMessage(
-         content="Artificial Intelligence has the power to transform humanity and make the world a better place"
-     ),
- ]
-
- chat = ChatJavelinAIGateway(
-     gateway_uri="http://localhost:8000",
-     route="mychatbot_route",
-     model_name="gpt-3.5-turbo",
-     params={
-         "temperature": 0.1
-     }
- )
-
- print(chat(messages))
- ```

langchain_md_files/integrations/providers/jina.mdx DELETED
@@ -1,20 +0,0 @@
- # Jina
-
- This page covers how to use Jina Embeddings within LangChain.
- It is broken into two parts: installation and setup, and then references to specific Jina wrappers.
-
- ## Installation and Setup
- - Get a Jina AI API token from [here](https://jina.ai/embeddings/) and set it as an environment variable (`JINA_API_TOKEN`)
-
- There exists a Jina Embeddings wrapper, which you can access with
-
- ```python
- from langchain_community.embeddings import JinaEmbeddings
-
- # you can pass jina_api_key; if none is passed, it will be taken from the `JINA_API_TOKEN` environment variable
- embeddings = JinaEmbeddings(jina_api_key='jina_**', model_name='jina-embeddings-v2-base-en')
- ```
-
- You can check the list of available models [here](https://jina.ai/embeddings/).
-
- For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/jina).

langchain_md_files/integrations/providers/johnsnowlabs.mdx DELETED
@@ -1,117 +0,0 @@
- # Johnsnowlabs
-
- Gain access to the [johnsnowlabs](https://www.johnsnowlabs.com/) ecosystem of enterprise NLP libraries
- with over 21,000 enterprise NLP models in over 200 languages with the open source `johnsnowlabs` library.
- For all 24,000+ models, see the [John Snow Labs Models Hub](https://nlp.johnsnowlabs.com/models).
-
- ## Installation and Setup
-
- ```bash
- pip install johnsnowlabs
- ```
-
- To [install enterprise features](https://nlp.johnsnowlabs.com/docs/en/jsl/install_licensed_quick), run:
- ```python
- # for more details see https://nlp.johnsnowlabs.com/docs/en/jsl/install_licensed_quick
- from johnsnowlabs import nlp
- nlp.install()
- ```
-
- You can embed your queries and documents with either `gpu`, `cpu`, `apple_silicon`, or `aarch` based optimized binaries.
- By default, cpu binaries are used.
- Once a session is started, you must restart your notebook to switch between GPU and CPU, or changes will not take effect.
-
- ## Embed Query with CPU:
- ```python
- from langchain_community.embeddings import JohnSnowLabsEmbeddings
-
- document = "foo bar"
- embedding = JohnSnowLabsEmbeddings('embed_sentence.bert')
- output = embedding.embed_query(document)
- ```
-
- ## Embed Query with GPU:
-
- ```python
- document = "foo bar"
- embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','gpu')
- output = embedding.embed_query(document)
- ```
-
- ## Embed Query with Apple Silicon (M1, M2, etc.):
-
- ```python
- document = "foo bar"
- embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','apple_silicon')
- output = embedding.embed_query(document)
- ```
-
- ## Embed Query with AARCH:
-
- ```python
- document = "foo bar"
- embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','aarch')
- output = embedding.embed_query(document)
- ```
-
- ## Embed Document with CPU:
- ```python
- documents = ["foo bar", 'bar foo']
- embedding = JohnSnowLabsEmbeddings('embed_sentence.bert')
- output = embedding.embed_documents(documents)
- ```
-
- ## Embed Document with GPU:
-
- ```python
- documents = ["foo bar", 'bar foo']
- embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','gpu')
- output = embedding.embed_documents(documents)
- ```
-
- ## Embed Document with Apple Silicon (M1, M2, etc.):
-
- ```python
- documents = ["foo bar", 'bar foo']
- embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','apple_silicon')
- output = embedding.embed_documents(documents)
- ```
-
- ## Embed Document with AARCH:
-
- ```python
- documents = ["foo bar", 'bar foo']
- embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','aarch')
- output = embedding.embed_documents(documents)
- ```
-
- Models are loaded with [nlp.load](https://nlp.johnsnowlabs.com/docs/en/jsl/load_api), and a Spark session is started with [nlp.start()](https://nlp.johnsnowlabs.com/docs/en/jsl/start-a-sparksession) under the hood.

langchain_md_files/integrations/providers/joplin.mdx DELETED
@@ -1,19 +0,0 @@
- # Joplin
-
- >[Joplin](https://joplinapp.org/) is an open-source note-taking app. It captures your thoughts
- > and securely accesses them from any device.
-
-
- ## Installation and Setup
-
- The `Joplin API` requires an access token.
- You can find installation instructions [here](https://joplinapp.org/api/references/rest_api/).
-
-
- ## Document Loader
-
- See a [usage example](/docs/integrations/document_loaders/joplin).
-
- ```python
- from langchain_community.document_loaders import JoplinLoader
- ```

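Editor's note: a minimal sketch of wiring the access token into the loader, assuming a local Joplin app with the Web Clipper service enabled (the token value is a placeholder):

```python
from langchain_community.document_loaders import JoplinLoader

# the token value is a placeholder; get it from Joplin's Web Clipper settings
loader = JoplinLoader(access_token="<your-joplin-access-token>")
docs = loader.load()  # one Document per note
```
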
langchain_md_files/integrations/providers/kdbai.mdx DELETED
@@ -1,24 +0,0 @@
- # KDB.AI
-
- >[KDB.AI](https://kdb.ai) is a powerful knowledge-based vector database and search engine that allows you to build scalable, reliable AI applications, using real-time data, by providing advanced search, recommendation and personalization.
-
-
- ## Installation and Setup
-
- Install the Python SDK:
-
- ```bash
- pip install kdbai-client
- ```
-
-
- ## Vector store
-
- There exists a wrapper around KDB.AI indexes, allowing you to use it as a vectorstore,
- whether for semantic search or example selection.
-
- ```python
- from langchain_community.vectorstores import KDBAI
- ```
-
- For a more detailed walkthrough of the KDB.AI vectorstore, see [this notebook](/docs/integrations/vectorstores/kdbai).

langchain_md_files/integrations/providers/kinetica.mdx DELETED
@@ -1,44 +0,0 @@
- # Kinetica
-
- [Kinetica](https://www.kinetica.com/) is a real-time database purpose-built for enabling
- analytics and generative AI on time-series & spatial data.
-
- ## Chat Model
-
- The Kinetica LLM wrapper uses the [Kinetica SqlAssist
- LLM](https://docs.kinetica.com/7.2/sql-gpt/concepts/) to transform natural language into
- SQL to simplify the process of data retrieval.
-
- See [Kinetica Language To SQL Chat Model](/docs/integrations/chat/kinetica) for usage.
-
- ```python
- from langchain_community.chat_models.kinetica import ChatKinetica
- ```
-
- ## Vector Store
-
- The Kinetica vectorstore wrapper leverages Kinetica's native support for [vector
- similarity search](https://docs.kinetica.com/7.2/vector_search/).
-
- See [Kinetica Vectorstore API](/docs/integrations/vectorstores/kinetica) for usage.
-
- ```python
- from langchain_community.vectorstores import Kinetica
- ```
-
- ## Document Loader
-
- The Kinetica Document loader can be used to load LangChain Documents from the
- Kinetica database.
-
- See [Kinetica Document Loader](/docs/integrations/document_loaders/kinetica) for usage.
-
- ```python
- from langchain_community.document_loaders.kinetica_loader import KineticaLoader
- ```
-
- ## Retriever
-
- The Kinetica Retriever can return documents given an unstructured query.
-
- See [Kinetica VectorStore based Retriever](/docs/integrations/retrievers/kinetica) for usage.

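Editor's note: such a retriever is typically obtained from the vector store with LangChain's standard `as_retriever()` helper; a minimal sketch, assuming `vectorstore` is a configured `Kinetica` instance:

```python
# A minimal sketch, assuming `vectorstore` is a configured Kinetica vector store.
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
docs = retriever.invoke("an unstructured query")  # returns the top-k matching Documents
```
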
langchain_md_files/integrations/providers/konko.mdx DELETED
@@ -1,65 +0,0 @@
- # Konko
- All functionality related to Konko
-
- >[Konko AI](https://www.konko.ai/) provides a fully managed API to help application developers
-
- >1. **Select** the right open source or proprietary LLMs for their application
- >2. **Build** applications faster with integrations to leading application frameworks and fully managed APIs
- >3. **Fine tune** smaller open-source LLMs to achieve industry-leading performance at a fraction of the cost
- >4. **Deploy production-scale APIs** that meet security, privacy, throughput, and latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure
-
- ## Installation and Setup
-
- 1. Sign in to our web app to [create an API key](https://platform.konko.ai/settings/api-keys) to access models via our endpoints for [chat completions](https://docs.konko.ai/reference/post-chat-completions) and [completions](https://docs.konko.ai/reference/post-completions).
- 2. Enable a Python 3.8+ environment
- 3. Install the SDK
-
- ```bash
- pip install konko
- ```
-
- 4. Set API keys as environment variables (`KONKO_API_KEY`, `OPENAI_API_KEY`)
-
- ```bash
- export KONKO_API_KEY={your_KONKO_API_KEY_here}
- export OPENAI_API_KEY={your_OPENAI_API_KEY_here} # Optional
- ```
-
- Please see [the Konko docs](https://docs.konko.ai/docs/getting-started) for more details.
-
-
- ## LLM
-
- **Explore Available Models:** Start by browsing through the [available models](https://docs.konko.ai/docs/list-of-models) on Konko. Each model caters to different use cases and capabilities.
-
- Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/get-models).
-
- See a usage [example](/docs/integrations/llms/konko).
-
- ### Examples of Endpoint Usage
-
- - **Completion with mistralai/Mistral-7B-v0.1:**
-
- ```python
- from langchain_community.llms import Konko
-
- llm = Konko(max_tokens=800, model='mistralai/Mistral-7B-v0.1')
- prompt = "Generate a Product Description for Apple iPhone 15"
- response = llm.invoke(prompt)
- ```
-
- ## Chat Models
-
- See a usage [example](/docs/integrations/chat/konko).
-
-
- - **ChatCompletion with Mistral-7B:**
-
- ```python
- from langchain_core.messages import HumanMessage
- from langchain_community.chat_models import ChatKonko
-
- chat_instance = ChatKonko(max_tokens=10, model='mistralai/mistral-7b-instruct-v0.1')
- msg = HumanMessage(content="Hi")
- chat_response = chat_instance([msg])
- ```
-
- For further assistance, contact [[email protected]](mailto:[email protected]) or join our [Discord](https://discord.gg/TXV2s3z7RZ).

langchain_md_files/integrations/providers/labelstudio.mdx DELETED
@@ -1,23 +0,0 @@
- # Label Studio
-
-
- >[Label Studio](https://labelstud.io/guide/get_started) is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.
-
- ## Installation and Setup
-
- See the [Label Studio installation guide](https://labelstud.io/guide/install) for installation options.
-
- We need to install the `label-studio` and `label-studio-sdk` Python packages:
-
- ```bash
- pip install label-studio label-studio-sdk
- ```
-
-
- ## Callbacks
-
- See a [usage example](/docs/integrations/callbacks/labelstudio).
-
- ```python
- from langchain.callbacks import LabelStudioCallbackHandler
- ```

langchain_md_files/integrations/providers/lakefs.mdx DELETED
@@ -1,18 +0,0 @@
- # lakeFS
-
- >[lakeFS](https://docs.lakefs.io/) provides scalable version control over
- > the data lake, and uses Git-like semantics to create and access those versions.
-
- ## Installation and Setup
-
- Get the `ENDPOINT`, `LAKEFS_ACCESS_KEY`, and `LAKEFS_SECRET_KEY`.
- You can find installation instructions [here](https://docs.lakefs.io/quickstart/launch.html).
-
-
- ## Document Loader
-
- See a [usage example](/docs/integrations/document_loaders/lakefs).
-
- ```python
- from langchain_community.document_loaders import LakeFSLoader
- ```

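Editor's note: a rough sketch of how those three credentials are typically wired into the loader; the attribute names and repo/ref/path setters follow the linked usage example and should be treated as assumptions, with all values as placeholders:

```python
from langchain_community.document_loaders import LakeFSLoader

# credential values are placeholders
loader = LakeFSLoader(
    lakefs_access_key="<LAKEFS_ACCESS_KEY>",
    lakefs_secret_key="<LAKEFS_SECRET_KEY>",
    lakefs_endpoint="<ENDPOINT>",
)
loader.set_repo("my-repo")   # repository to read from
loader.set_ref("main")       # branch or commit
loader.set_path("docs/")     # path within the repository
docs = loader.load()
```
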
langchain_md_files/integrations/providers/lancedb.mdx DELETED
@@ -1,23 +0,0 @@
- # LanceDB
-
- This page covers how to use [LanceDB](https://github.com/lancedb/lancedb) within LangChain.
- It is broken into two parts: installation and setup, and then references to specific LanceDB wrappers.
-
- ## Installation and Setup
-
- - Install the Python SDK with `pip install lancedb`
-
- ## Wrappers
-
- ### VectorStore
-
- There exists a wrapper around LanceDB databases, allowing you to use it as a vectorstore,
- whether for semantic search or example selection.
-
- To import this vectorstore:
-
- ```python
- from langchain_community.vectorstores import LanceDB
- ```
-
- For a more detailed walkthrough of the LanceDB wrapper, see [this notebook](/docs/integrations/vectorstores/lancedb).

langchain_md_files/integrations/providers/langchain_decorators.mdx DELETED
@@ -1,370 +0,0 @@
- # LangChain Decorators ✨
-
- ~~~
- Disclaimer: `LangChain decorators` is not created by the LangChain team and is not supported by it.
- ~~~
-
- >`LangChain decorators` is a layer on top of LangChain that provides syntactic sugar 🍭 for writing custom langchain prompts and chains
- >
- >For feedback, issues, and contributions, please raise an issue here:
- >[ju-bezdek/langchain-decorators](https://github.com/ju-bezdek/langchain-decorators)
-
-
- Main principles and benefits:
-
- - more `pythonic` way of writing code
- - write multiline prompts that won't break your code flow with indentation
- - making use of the IDE's built-in support for **hinting**, **type checking** and **popup with docs** to quickly peek into the function to see the prompt, the parameters it consumes, etc.
- - leverage all the power of the 🦜🔗 LangChain ecosystem
- - adding support for **optional parameters**
- - easily share parameters between the prompts by binding them to one class
-
-
- Here is a simple example of code written with **LangChain Decorators ✨**
-
- ``` python
- @llm_prompt
- def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers")->str:
-     """
-     Write me a short header for my post about {topic} for {platform} platform.
-     It should be for {audience} audience.
-     (Max 15 words)
-     """
-     return
-
- # run it naturally
- write_me_short_post(topic="starwars")
- # or
- write_me_short_post(topic="starwars", platform="reddit")
- ```
-
- # Quick start
- ## Installation
- ```bash
- pip install langchain_decorators
- ```
-
- ## Examples
-
- A good way to start is to review the examples here:
- - [jupyter notebook](https://github.com/ju-bezdek/langchain-decorators/blob/main/example_notebook.ipynb)
- - [colab notebook](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk)
-
- # Defining other parameters
- Here we are just marking a function as a prompt with the `llm_prompt` decorator, effectively turning it into an LLMChain instead of running the function body.
-
- A standard LLMChain takes many more init parameters than just input_variables and a prompt... here this implementation detail is hidden in the decorator.
- Here is how it works:
-
- 1. Using **Global settings**:
-
- ``` python
- # define global settings for all prompts (if not set - chatGPT is the current default)
- from langchain_openai import ChatOpenAI
- from langchain_decorators import GlobalSettings
-
- GlobalSettings.define_settings(
-     default_llm=ChatOpenAI(temperature=0.0),  # this is the default... you can change it here globally
-     default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... will be used for streaming
- )
- ```
-
- 2. Using predefined **prompt types**
-
- ``` python
- # You can change the default prompt types
- from langchain_openai import ChatOpenAI
- from langchain_decorators import PromptTypes, PromptTypeSettings
-
- PromptTypes.AGENT_REASONING.llm = ChatOpenAI()
-
- # Or you can just define your own ones:
- class MyCustomPromptTypes(PromptTypes):
-     GPT4=PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))
-
- @llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
- def write_a_complicated_code(app_idea:str)->str:
-     ...
- ```
-
- 3. Define the settings **directly in the decorator**
-
- ``` python
- from langchain_openai import OpenAI
-
- @llm_prompt(
-     llm=OpenAI(temperature=0.7),
-     stop_tokens=["\nObservation"],
-     ...
- )
- def creative_writer(book_title:str)->str:
-     ...
- ```
-
- ## Passing a memory and/or callbacks:
-
- To pass any of these, just declare them in the function (or use kwargs to pass anything)
-
- ```python
- @llm_prompt()
- async def write_me_short_post(topic:str, platform:str="twitter", memory:SimpleMemory = None):
-     """
-     {history_key}
-     Write me a short header for my post about {topic} for {platform} platform.
-     It should be for {audience} audience.
-     (Max 15 words)
-     """
-     pass
-
- await write_me_short_post(topic="old movies")
- ```
-
- # Simplified streaming
-
- If we want to leverage streaming:
- - we need to define the prompt as an async function
- - turn on streaming on the decorator, or define a PromptType with streaming on
- - capture the stream using StreamingContext
-
- This way we just mark which prompt should be streamed, without needing to tinker with which LLM to use or with creating and distributing a streaming handler into a particular part of our chain... just turn the streaming on/off on the prompt/prompt type...
-
- The streaming will happen only if we call it in a streaming context... there we can define a simple function to handle the stream
-
- ``` python
- # this code example is complete and should run as it is
-
- from langchain_decorators import StreamingContext, llm_prompt
-
- # this will mark the prompt for streaming (useful if we want to stream just some prompts in our app... but don't want to distribute the callback handlers)
- # note that only async functions can be streamed (will get an error if it's not)
- @llm_prompt(capture_stream=True)
- async def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
-     """
-     Write me a short header for my post about {topic} for {platform} platform.
-     It should be for {audience} audience.
-     (Max 15 words)
-     """
-     pass
-
-
- # just an arbitrary function to demonstrate the streaming... will be some websockets code in the real world
- tokens=[]
- def capture_stream_func(new_token:str):
-     tokens.append(new_token)
-
- # if we want to capture the stream, we need to wrap the execution into StreamingContext...
- # this will allow us to capture the stream even if the prompt call is hidden inside a higher level method
- # only the prompts marked with capture_stream will be captured here
- with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
-     result = await write_me_short_post(topic="old movies")
-     print("Stream finished ... we can distinguish tokens thanks to alternating colors")
-
-
- print("\nWe've captured",len(tokens),"tokens🎉\n")
- print("Here is the result:")
- print(result)
- ```
-
-
- # Prompt declarations
- By default the prompt is the whole function docstring, unless you mark which part is your prompt.
-
- ## Documenting your prompt
-
- We can specify which part of our docs is the prompt definition, by specifying a code block with the `<prompt>` language tag
-
- ``` python
- @llm_prompt
- def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
-     """
-     Here is a good way to write a prompt as part of a function docstring, with additional documentation for devs.
-
-     It needs to be a code block, marked as a `<prompt>` language
-     ```<prompt>
-     Write me a short header for my post about {topic} for {platform} platform.
-     It should be for {audience} audience.
-     (Max 15 words)
-     ```
-
-     Now only the code block above will be used as a prompt, and the rest of the docstring will be used as a description for developers.
-     (It also has the nice benefit that the IDE (like VS Code) will display the prompt properly (not trying to parse it as markdown, and thus not showing new lines properly))
-     """
-     return
- ```
-
- ## Chat messages prompt
-
- For chat models it is very useful to define the prompt as a set of message templates... here is how to do it:
-
- ``` python
- @llm_prompt
- def simulate_conversation(human_input:str, agent_role:str="a pirate"):
-     """
-     ## System message
-     - note the `:system` suffix inside the <prompt:_role_> tag
-
-     ```<prompt:system>
-     You are a {agent_role} hacker. You must act like one.
-     You reply always in code, using python or javascript code block...
-     for example:
-
-     ... do not reply with anything else.. just with code - respecting your role.
-     ```
-
-     # human message
-     (we are using the real roles that are enforced by the LLM - GPT supports system, assistant, user)
-     ``` <prompt:user>
-     Hello, who are you
-     ```
-     a reply:
-
-     ``` <prompt:assistant>
-     \``` python <<- escaping the inner code block with \ that should be part of the prompt
-     def hello():
-         print("Argh... hello you pesky pirate")
-     \```
-     ```
-
-     we can also add some history using a placeholder
-     ```<prompt:placeholder>
-     {history}
-     ```
-     ```<prompt:user>
-     {human_input}
-     ```
-
-     Now only the code blocks above will be used as a prompt, and the rest of the docstring will be used as a description for developers.
-     (It also has the nice benefit that the IDE (like VS Code) will display the prompt properly (not trying to parse it as markdown, and thus not showing new lines properly))
-     """
-     pass
- ```
-
- the roles here are model-native roles (assistant, user, system for chatGPT)
-
-
- # Optional sections
- - you can define whole sections of your prompt that should be optional
- - if any input in the section is missing, the whole section won't be rendered
-
- the syntax for this is as follows:
-
- ``` python
- @llm_prompt
- def prompt_with_optional_partials():
-     """
-     this text will be rendered always, but
-
-     {? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "") ?}
-
-     you can also place it in between the words
-     this too will be rendered{? , but
-     this block will be rendered only if {this_value} and {this_value}
-     is not empty?} !
-     """
- ```
-
-
- # Output parsers
-
- - the llm_prompt decorator natively tries to detect the best output parser based on the output type (if not set, it returns the raw string)
- - list, dict and pydantic outputs are also supported natively (automatically)
-
- ``` python
- # this code example is complete and should run as it is
-
- from langchain_decorators import llm_prompt
-
- @llm_prompt
- def write_name_suggestions(company_business:str, count:int)->list:
-     """ Write me {count} good name suggestions for company that {company_business}
-     """
-     pass
-
- write_name_suggestions(company_business="sells cookies", count=5)
- ```
-
- ## More complex structures
-
- for dict / pydantic you need to specify the formatting instructions...
- this can be tedious; that's why you can let the output parser generate the instructions for you, based on the model (pydantic)
-
- ``` python
- from langchain_decorators import llm_prompt
- from pydantic import BaseModel, Field
-
-
- class TheOutputStructureWeExpect(BaseModel):
-     name:str = Field (description="The name of the company")
-     headline:str = Field( description="The description of the company (for landing page)")
-     employees:list[str] = Field(description="5-8 fake employee names with their positions")
-
- @llm_prompt()
- def fake_company_generator(company_business:str)->TheOutputStructureWeExpect:
-     """ Generate a fake company that {company_business}
-     {FORMAT_INSTRUCTIONS}
-     """
-     return
-
- company = fake_company_generator(company_business="sells cookies")
-
- # print the result nicely formatted
- print("Company name: ",company.name)
- print("company headline: ",company.headline)
- print("company employees: ",company.employees)
- ```
-
-
- # Binding the prompt to an object
-
- ``` python
- from pydantic import BaseModel
- from langchain_decorators import llm_prompt
-
- class AssistantPersonality(BaseModel):
-     assistant_name:str
-     assistant_role:str
-     field:str
-
-     @property
-     def a_property(self):
-         return "whatever"
-
-     def hello_world(self, function_kwarg:str=None):
-         """
-         We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method
-         """
-
-     @llm_prompt
-     def introduce_your_self(self)->str:
-         """
-         ``` <prompt:system>
-         You are an assistant named {assistant_name}.
-         Your role is to act as {assistant_role}
-         ```
-         ```<prompt:user>
-         Introduce your self (in less than 20 words)
-         ```
-         """
-
-
- personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate")
-
- print(personality.introduce_your_self(personality))
- ```
-
-
- # More examples:
-
- - these and a few more examples are also available in the [colab notebook here](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk)
- - including the [ReAct Agent re-implementation](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=3bID5fryE2Yp) using purely langchain decorators

langchain_md_files/integrations/providers/lantern.mdx DELETED
@@ -1,25 +0,0 @@
- # Lantern
-
- This page covers how to use [Lantern](https://github.com/lanterndata/lantern) within LangChain.
- It is broken into two parts: setup, and then references to specific Lantern wrappers.
-
- ## Setup
- 1. The first step is to create a database with the `lantern` extension installed.
-
- Follow the steps at the [Lantern Installation Guide](https://github.com/lanterndata/lantern#-quick-install) to install the database and the extension. The docker image is the easiest way to get started.
-
- ## Wrappers
-
- ### VectorStore
-
- There exists a wrapper around Postgres vector databases, allowing you to use it as a vectorstore,
- whether for semantic search or example selection.
-
- To import this vectorstore:
- ```python
- from langchain_community.vectorstores import Lantern
- ```
-
- ### Usage
-
- For a more detailed walkthrough of the Lantern wrapper, see [this notebook](/docs/integrations/vectorstores/lantern).

langchain_md_files/integrations/providers/llamacpp.mdx DELETED
@@ -1,50 +0,0 @@
- # Llama.cpp
-
- >The [llama.cpp python](https://github.com/abetlen/llama-cpp-python) library provides simple Python bindings for `@ggerganov`'s
- >[llama.cpp](https://github.com/ggerganov/llama.cpp).
- >
- >This package provides:
- >
- > - Low-level access to the C API via the ctypes interface.
- > - High-level Python API for text completion
- >   - `OpenAI`-like API
- >   - `LangChain` compatibility
- >   - `LlamaIndex` compatibility
- > - OpenAI compatible web server
- >   - Local Copilot replacement
- >   - Function Calling support
- >   - Vision API support
- >   - Multiple Models
-
- ## Installation and Setup
-
- - Install the Python package
-   ```bash
-   pip install llama-cpp-python
-   ```
- - Download one of the [supported models](https://github.com/ggerganov/llama.cpp#description) and convert it to the llama.cpp format per the [instructions](https://github.com/ggerganov/llama.cpp)
-
-
- ## Chat models
-
- See a [usage example](/docs/integrations/chat/llamacpp).
-
- ```python
- from langchain_community.chat_models import ChatLlamaCpp
- ```
-
- ## LLMs
-
- See a [usage example](/docs/integrations/llms/llamacpp).
-
- ```python
- from langchain_community.llms import LlamaCpp
- ```
-
- ## Embedding models
-
- See a [usage example](/docs/integrations/text_embedding/llamacpp).
-
- ```python
- from langchain_community.embeddings import LlamaCppEmbeddings
- ```

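Editor's note: a minimal sketch of the LLM wrapper, assuming a GGUF model file has already been downloaded and converted locally (the path is a placeholder):

```python
from langchain_community.llms import LlamaCpp

# model_path is a placeholder; point it at your converted GGUF file
llm = LlamaCpp(model_path="/path/to/model.gguf", n_ctx=2048)
print(llm.invoke("Q: Name the planets in the solar system. A:"))
```
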
langchain_md_files/integrations/providers/llmonitor.mdx DELETED
@@ -1,22 +0,0 @@
- # LLMonitor
-
- >[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
-
- ## Installation and Setup
-
- Create an account on [llmonitor.com](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs), then copy your new app's `tracking id`.
-
- Once you have it, set it as an environment variable by running:
-
- ```bash
- export LLMONITOR_APP_ID="..."
- ```
-
-
- ## Callbacks
-
- See a [usage example](/docs/integrations/callbacks/llmonitor).
-
- ```python
- from langchain.callbacks import LLMonitorCallbackHandler
- ```

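Editor's note: the handler is attached like any other LangChain callback; a minimal sketch, assuming `LLMONITOR_APP_ID` is set in the environment and an OpenAI key is configured:

```python
from langchain.callbacks import LLMonitorCallbackHandler
from langchain_openai import ChatOpenAI

# the handler picks up LLMONITOR_APP_ID from the environment
handler = LLMonitorCallbackHandler()
llm = ChatOpenAI(callbacks=[handler])
llm.invoke("Hello!")  # this call is traced in LLMonitor
```
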
langchain_md_files/integrations/providers/log10.mdx DELETED
@@ -1,104 +0,0 @@
- # Log10
-
- This page covers how to use [Log10](https://log10.io) within LangChain.
-
- ## What is Log10?
-
- Log10 is an [open-source](https://github.com/log10-io/log10) proxiless LLM data management and application development platform that lets you log, debug and tag your Langchain calls.
-
- ## Quick start
-
- 1. Create your free account at [log10.io](https://log10.io)
- 2. Add your `LOG10_TOKEN` and `LOG10_ORG_ID` from the Settings and Organization tabs respectively as environment variables.
- 3. Also add `LOG10_URL=https://log10.io` and your usual LLM API key, e.g. `OPENAI_API_KEY` or `ANTHROPIC_API_KEY`, to your environment
-
- ## How to enable Log10 data management for Langchain
-
- Integration with log10 is a simple one-line `log10_callback` integration as shown below:
-
- ```python
- from langchain_openai import ChatOpenAI
- from langchain_core.messages import HumanMessage
-
- from log10.langchain import Log10Callback
- from log10.llm import Log10Config
-
- log10_callback = Log10Callback(log10_config=Log10Config())
-
- messages = [
-     HumanMessage(content="You are a ping pong machine"),
-     HumanMessage(content="Ping?"),
- ]
-
- llm = ChatOpenAI(model="gpt-3.5-turbo", callbacks=[log10_callback])
- ```
-
- [Log10 + Langchain + Logs docs](https://github.com/log10-io/log10/blob/main/logging.md#langchain-logger)
-
- [More details + screenshots](https://log10.io/docs/observability/logs), including instructions for self-hosting logs
-
- ## How to use tags with Log10
-
- ```python
- from langchain_openai import OpenAI
- from langchain_community.chat_models import ChatAnthropic
- from langchain_openai import ChatOpenAI
- from langchain_core.messages import HumanMessage
-
- from log10.langchain import Log10Callback
- from log10.llm import Log10Config
-
- log10_callback = Log10Callback(log10_config=Log10Config())
-
- messages = [
-     HumanMessage(content="You are a ping pong machine"),
-     HumanMessage(content="Ping?"),
- ]
-
- llm = ChatOpenAI(model="gpt-3.5-turbo", callbacks=[log10_callback], temperature=0.5, tags=["test"])
- completion = llm.predict_messages(messages, tags=["foobar"])
- print(completion)
-
- llm = ChatAnthropic(model="claude-2", callbacks=[log10_callback], temperature=0.7, tags=["baz"])
- completion = llm.predict_messages(messages)
- print(completion)
-
- llm = OpenAI(model_name="gpt-3.5-turbo-instruct", callbacks=[log10_callback], temperature=0.5)
- completion = llm.predict("You are a ping pong machine.\nPing?\n")
- print(completion)
- ```
-
- You can also intermix direct OpenAI calls and Langchain LLM calls:
-
- ```python
- import os
- from log10.load import log10, log10_session
- import openai
- from langchain_openai import OpenAI
-
- log10(openai)
-
- with log10_session(tags=["foo", "bar"]):
-     # Log a direct OpenAI call
-     response = openai.Completion.create(
-         model="text-ada-001",
-         prompt="Where is the Eiffel Tower?",
-         temperature=0,
-         max_tokens=1024,
-         top_p=1,
-         frequency_penalty=0,
-         presence_penalty=0,
-     )
-     print(response)
-
-     # Log a call via Langchain
-     llm = OpenAI(model_name="text-ada-001", temperature=0.5)
-     response = llm.predict("You are a ping pong machine.\nPing?\n")
-     print(response)
- ```
-
- ## How to debug Langchain calls
-
- [Example of debugging](https://log10.io/docs/observability/prompt_chain_debugging)
-
- [More Langchain examples](https://github.com/log10-io/log10/tree/main/examples#langchain)

langchain_md_files/integrations/providers/maritalk.mdx DELETED
@@ -1,21 +0,0 @@
- # MariTalk
-
- >[MariTalk](https://www.maritaca.ai/en) is an LLM-based chatbot trained to meet the needs of Brazil.
-
- ## Installation and Setup
-
- You have to get the MariTalk API key.
-
- You also need to install the `httpx` Python package.
-
- ```bash
- pip install httpx
- ```
-
- ## Chat models
-
- See a [usage example](/docs/integrations/chat/maritalk).
-
- ```python
- from langchain_community.chat_models import ChatMaritalk
- ```

langchain_md_files/integrations/providers/mediawikidump.mdx DELETED
@@ -1,31 +0,0 @@
- # MediaWikiDump
-
- >[MediaWiki XML Dumps](https://www.mediawiki.org/wiki/Manual:Importing_XML_dumps) contain the content of a wiki
- > (wiki pages with all their revisions), without the site-related data. An XML dump does not create a full backup
- > of the wiki database; the dump does not contain user accounts, images, edit logs, etc.
-
-
- ## Installation and Setup
-
- We need to install several Python packages.
-
- The `mediawiki-utilities` package supports XML schema 0.11 in unmerged branches.
- ```bash
- pip install -qU git+https://github.com/mediawiki-utilities/python-mwtypes@updates_schema_0.11
- ```
-
- The `mediawiki-utilities` `mwxml` package has a bug; a fix PR is pending.
-
- ```bash
- pip install -qU git+https://github.com/gdedrouas/python-mwxml@xml_format_0.11
- pip install -qU mwparserfromhell
- ```
-
- ## Document Loader
-
- See a [usage example](/docs/integrations/document_loaders/mediawikidump).
-
-
- ```python
- from langchain_community.document_loaders import MWDumpLoader
- ```

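Editor's note: a minimal sketch of loading a dump file (the file path is a placeholder):

```python
from langchain_community.document_loaders import MWDumpLoader

# file_path is a placeholder for a downloaded MediaWiki XML dump
loader = MWDumpLoader(file_path="example_dump.xml", encoding="utf8")
documents = loader.load()  # one Document per wiki page
```
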
langchain_md_files/integrations/providers/meilisearch.mdx DELETED
@@ -1,30 +0,0 @@
- # Meilisearch
-
- > [Meilisearch](https://meilisearch.com) is an open-source, lightning-fast, and hyper-relevant
- > search engine.
- > It comes with great defaults to help developers build snappy search experiences.
- >
- > You can [self-host Meilisearch](https://www.meilisearch.com/docs/learn/getting_started/installation#local-installation)
- > or run on [Meilisearch Cloud](https://www.meilisearch.com/pricing).
- >
- >`Meilisearch v1.3` supports vector search.
-
- ## Installation and Setup
-
- See a [usage example](/docs/integrations/vectorstores/meilisearch) for detailed configuration instructions.
-
-
- We need to install the `meilisearch` Python package.
-
- ```bash
- pip install meilisearch
- ```
-
- ## Vector Store
-
- See a [usage example](/docs/integrations/vectorstores/meilisearch).
-
- ```python
- from langchain_community.vectorstores import Meilisearch
- ```

langchain_md_files/integrations/providers/metal.mdx DELETED
@@ -1,26 +0,0 @@
- # Metal
-
- This page covers how to use [Metal](https://getmetal.io) within LangChain.
-
- ## What is Metal?
-
- Metal is a managed retrieval & memory platform built for production. Easily index your data into `Metal` and run semantic search and retrieval on it.
-
- ![Screenshot of the Metal dashboard showing the Browse Index feature with sample data.](/img/MetalDash.png "Metal Dashboard Interface")
-
- ## Quick start
-
- Get started by [creating a Metal account](https://app.getmetal.io/signup).
-
- Then, you can easily take advantage of the `MetalRetriever` class to start retrieving your data for semantic search, prompting context, etc. This class takes a `Metal` instance and a dictionary of parameters to pass to the Metal API.
-
- ```python
- from langchain.retrievers import MetalRetriever
- from metal_sdk.metal import Metal
-
-
- metal = Metal("API_KEY", "CLIENT_ID", "INDEX_ID")
- retriever = MetalRetriever(metal, params={"limit": 2})
-
- docs = retriever.invoke("search term")
- ```

langchain_md_files/integrations/providers/milvus.mdx DELETED
@@ -1,25 +0,0 @@
- # Milvus
-
- >[Milvus](https://milvus.io/docs/overview.md) is a database that stores, indexes, and manages
- > massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
-
-
- ## Installation and Setup
-
- Install the Python SDK:
-
- ```bash
- pip install pymilvus
- ```
-
- ## Vector Store
-
- There exists a wrapper around `Milvus` indexes, allowing you to use it as a vectorstore,
- whether for semantic search or example selection.
-
- To import this vectorstore:
- ```python
- from langchain_community.vectorstores import Milvus
- ```
-
- For a more detailed walkthrough of the `Milvus` wrapper, see [this notebook](/docs/integrations/vectorstores/milvus).

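Editor's note: a minimal sketch of building the vector store from texts, assuming a Milvus server reachable on its default port and any LangChain embeddings object (`OpenAIEmbeddings` is used here purely as an example):

```python
from langchain_community.vectorstores import Milvus
from langchain_openai import OpenAIEmbeddings

# assumes a Milvus instance listening on localhost:19530
vector_store = Milvus.from_texts(
    ["Milvus stores embedding vectors"],
    OpenAIEmbeddings(),
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
docs = vector_store.similarity_search("what does Milvus store?")
```
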
langchain_md_files/integrations/providers/mindsdb.mdx DELETED
@@ -1,14 +0,0 @@
- # MindsDB
-
- MindsDB is the platform for customizing AI from enterprise data. With MindsDB and its nearly 200 integrations to [data sources](https://docs.mindsdb.com/integrations/data-overview) and [AI/ML frameworks](https://docs.mindsdb.com/integrations/ai-overview), any developer can use their enterprise data to customize AI for their purpose, faster and more securely.
-
- With MindsDB, you can connect any data source to any AI/ML model to implement and automate AI-powered applications. Deploy, serve, and fine-tune models in real time, utilizing data from databases, vector stores, or applications. Do all that using universal tools developers already know.
-
- MindsDB integrates with LangChain, enabling users to:
-
- - Deploy models available via LangChain within MindsDB, making them accessible to numerous data sources.
- - Fine-tune models available via LangChain within MindsDB using real-time and dynamic data.
- - Automate AI workflows with LangChain and MindsDB.
-
- Follow [our docs](https://docs.mindsdb.com/integrations/ai-engines/langchain) to learn more about MindsDB’s integration with LangChain and see examples.

langchain_md_files/integrations/providers/minimax.mdx DELETED
@@ -1,33 +0,0 @@
- # Minimax
-
- >[Minimax](https://api.minimax.chat) is a Chinese startup that provides natural language processing models
- > for companies and individuals.
-
- ## Installation and Setup
- Get a [Minimax API key](https://api.minimax.chat/user-center/basic-information/interface-key) and set it as an environment variable (`MINIMAX_API_KEY`).
- Get a [Minimax group id](https://api.minimax.chat/user-center/basic-information) and set it as an environment variable (`MINIMAX_GROUP_ID`).
-
-
- ## LLM
-
- There exists a Minimax LLM wrapper, which you can access with the import below.
- See a [usage example](/docs/integrations/llms/minimax).
-
- ```python
- from langchain_community.llms import Minimax
- ```
-
- ## Chat Models
-
- See a [usage example](/docs/integrations/chat/minimax).
-
- ```python
- from langchain_community.chat_models import MiniMaxChat
- ```
-
- ## Text Embedding Model
-
- There exists a Minimax Embedding model, which you can access with
- ```python
- from langchain_community.embeddings import MiniMaxEmbeddings
- ```

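Editor's note: these wrappers follow the standard LangChain interfaces; a minimal embeddings sketch, assuming `MINIMAX_API_KEY` and `MINIMAX_GROUP_ID` are set in the environment:

```python
from langchain_community.embeddings import MiniMaxEmbeddings

# credentials are read from MINIMAX_API_KEY / MINIMAX_GROUP_ID
embeddings = MiniMaxEmbeddings()
vector = embeddings.embed_query("hello")
vectors = embeddings.embed_documents(["hello", "world"])
```
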
langchain_md_files/integrations/providers/mistralai.mdx DELETED
@@ -1,34 +0,0 @@
- # MistralAI
-
- >[Mistral AI](https://docs.mistral.ai/api/) is a platform that offers hosting for their powerful open-source models.
-
-
- ## Installation and Setup
-
- A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API.
-
- You will also need the `langchain-mistralai` package:
-
- ```bash
- pip install langchain-mistralai
- ```
-
- ## Chat models
-
- ### ChatMistralAI
-
- See a [usage example](/docs/integrations/chat/mistralai).
-
- ```python
- from langchain_mistralai.chat_models import ChatMistralAI
- ```
-
- ## Embedding models
-
- ### MistralAIEmbeddings
-
- See a [usage example](/docs/integrations/text_embedding/mistralai).
-
- ```python
- from langchain_mistralai import MistralAIEmbeddings
- ```

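Editor's note: a minimal chat sketch, assuming `MISTRAL_API_KEY` is set in the environment (the model name is one of Mistral's hosted models):

```python
from langchain_mistralai.chat_models import ChatMistralAI

# reads MISTRAL_API_KEY from the environment
chat = ChatMistralAI(model="mistral-large-latest")
response = chat.invoke("Say hello in French.")
print(response.content)
```
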
langchain_md_files/integrations/providers/mlflow.mdx DELETED
@@ -1,119 +0,0 @@
- # MLflow Deployments for LLMs
-
- >[The MLflow Deployments for LLMs](https://www.mlflow.org/docs/latest/llms/deployments/index.html) is a powerful tool designed to streamline the usage and management of various large
- > language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface
- > that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests.
-
- ## Installation and Setup
-
- Install `mlflow` with MLflow Deployments dependencies:
-
- ```sh
- pip install 'mlflow[genai]'
- ```
-
- Set the OpenAI API key as an environment variable:
-
- ```sh
- export OPENAI_API_KEY=...
- ```
-
- Create a configuration file:
-
- ```yaml
- endpoints:
-   - name: completions
-     endpoint_type: llm/v1/completions
-     model:
-       provider: openai
-       name: text-davinci-003
-       config:
-         openai_api_key: $OPENAI_API_KEY
-
-   - name: embeddings
-     endpoint_type: llm/v1/embeddings
-     model:
-       provider: openai
-       name: text-embedding-ada-002
-       config:
-         openai_api_key: $OPENAI_API_KEY
- ```
-
- Start the deployments server:
-
- ```sh
- mlflow deployments start-server --config-path /path/to/config.yaml
- ```
-
- ## Example provided by `MLflow`
-
- >The `mlflow.langchain` module provides an API for logging and loading `LangChain` models.
- > This module exports multivariate LangChain models in the langchain flavor and univariate LangChain
- > models in the pyfunc flavor.
-
- See the [API documentation and examples](https://www.mlflow.org/docs/latest/llms/langchain/index.html) for more information.
-
- ## Completions Example
-
- ```python
- import mlflow
- from langchain.chains import LLMChain
- from langchain_core.prompts import PromptTemplate
- from langchain_community.llms import Mlflow
-
- llm = Mlflow(
-     target_uri="http://127.0.0.1:5000",
-     endpoint="completions",
- )
-
- llm_chain = LLMChain(
-     llm=llm,
-     prompt=PromptTemplate(
-         input_variables=["adjective"],
-         template="Tell me a {adjective} joke",
-     ),
- )
- result = llm_chain.run(adjective="funny")
- print(result)
-
- with mlflow.start_run():
-     model_info = mlflow.langchain.log_model(llm_chain, "model")
-
- model = mlflow.pyfunc.load_model(model_info.model_uri)
- print(model.predict([{"adjective": "funny"}]))
- ```
-
- ## Embeddings Example
-
- ```python
- from langchain_community.embeddings import MlflowEmbeddings
-
- embeddings = MlflowEmbeddings(
-     target_uri="http://127.0.0.1:5000",
-     endpoint="embeddings",
- )
-
- print(embeddings.embed_query("hello"))
- print(embeddings.embed_documents(["hello"]))
- ```
-
- ## Chat Example
-
- ```python
- from langchain_community.chat_models import ChatMlflow
- from langchain_core.messages import HumanMessage, SystemMessage
-
- chat = ChatMlflow(
-     target_uri="http://127.0.0.1:5000",
-     endpoint="chat",
- )
-
- messages = [
-     SystemMessage(
-         content="You are a helpful assistant that translates English to French."
-     ),
-     HumanMessage(
-         content="Translate this sentence from English to French: I love programming."
-     ),
- ]
- print(chat(messages))
- ```

langchain_md_files/integrations/providers/mlflow_ai_gateway.mdx DELETED
@@ -1,160 +0,0 @@
1
- # MLflow AI Gateway
2
-
3
- :::warning
4
-
5
- MLflow AI Gateway has been deprecated. Please use [MLflow Deployments for LLMs](/docs/integrations/providers/mlflow/) instead.
6
-
7
- :::
8
-
9
- >[The MLflow AI Gateway](https://www.mlflow.org/docs/latest/index.html) service is a powerful tool designed to streamline the usage and management of various large
10
- > language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface
11
- > that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests.
12
-
13
- ## Installation and Setup
14
-
15
- Install `mlflow` with MLflow AI Gateway dependencies:
16
-
17
- ```sh
18
- pip install 'mlflow[gateway]'
19
- ```
20
-
21
- Set the OpenAI API key as an environment variable:
22
-
23
- ```sh
24
- export OPENAI_API_KEY=...
25
- ```
26
-
27
- Create a configuration file:
28
-
29
- ```yaml
30
- routes:
31
- - name: completions
32
- route_type: llm/v1/completions
33
- model:
34
- provider: openai
35
- name: text-davinci-003
36
- config:
37
- openai_api_key: $OPENAI_API_KEY
38
-
39
- - name: embeddings
40
- route_type: llm/v1/embeddings
41
- model:
42
- provider: openai
43
- name: text-embedding-ada-002
44
- config:
45
- openai_api_key: $OPENAI_API_KEY
46
- ```
47
-
48
- Start the Gateway server:
49
-
50
- ```sh
51
- mlflow gateway start --config-path /path/to/config.yaml
52
- ```
53
-
54
- ## Example provided by `MLflow`
55
-
56
- >The `mlflow.langchain` module provides an API for logging and loading `LangChain` models.
57
- > This module exports multivariate LangChain models in the langchain flavor and univariate LangChain
58
- > models in the pyfunc flavor.
59
-
60
- See the [API documentation and examples](https://www.mlflow.org/docs/latest/python_api/mlflow.langchain.html?highlight=langchain#module-mlflow.langchain).
61
-
62
-
63
-
64
- ## Completions Example
65
-
66
- ```python
67
- import mlflow
68
- from langchain.chains import LLMChain
- from langchain_core.prompts import PromptTemplate
69
- from langchain_community.llms import MlflowAIGateway
70
-
71
- gateway = MlflowAIGateway(
72
- gateway_uri="http://127.0.0.1:5000",
73
- route="completions",
74
- params={
75
- "temperature": 0.0,
76
- "top_p": 0.1,
77
- },
78
- )
79
-
80
- llm_chain = LLMChain(
81
- llm=gateway,
82
- prompt=PromptTemplate(
83
- input_variables=["adjective"],
84
- template="Tell me a {adjective} joke",
85
- ),
86
- )
87
- result = llm_chain.run(adjective="funny")
88
- print(result)
89
-
90
- with mlflow.start_run():
91
- model_info = mlflow.langchain.log_model(llm_chain, "model")
92
-
93
- model = mlflow.pyfunc.load_model(model_info.model_uri)
94
- print(model.predict([{"adjective": "funny"}]))
95
- ```
96
-
97
- ## Embeddings Example
98
-
99
- ```python
100
- from langchain_community.embeddings import MlflowAIGatewayEmbeddings
101
-
102
- embeddings = MlflowAIGatewayEmbeddings(
103
- gateway_uri="http://127.0.0.1:5000",
104
- route="embeddings",
105
- )
106
-
107
- print(embeddings.embed_query("hello"))
108
- print(embeddings.embed_documents(["hello"]))
109
- ```
110
-
111
- ## Chat Example
112
-
113
- ```python
114
- from langchain_community.chat_models import ChatMLflowAIGateway
115
- from langchain_core.messages import HumanMessage, SystemMessage
116
-
117
- chat = ChatMLflowAIGateway(
118
- gateway_uri="http://127.0.0.1:5000",
119
- route="chat",
120
- params={
121
- "temperature": 0.1
122
- }
123
- )
124
-
125
- messages = [
126
- SystemMessage(
127
- content="You are a helpful assistant that translates English to French."
128
- ),
129
- HumanMessage(
130
- content="Translate this sentence from English to French: I love programming."
131
- ),
132
- ]
133
- print(chat(messages))
134
- ```
135
-
136
- ## Databricks MLflow AI Gateway
137
-
138
- Databricks MLflow AI Gateway is in private preview.
139
- Please contact a Databricks representative to enroll in the preview.
140
-
141
- ```python
142
- from langchain.chains import LLMChain
143
- from langchain_core.prompts import PromptTemplate
144
- from langchain_community.llms import MlflowAIGateway
145
-
146
- gateway = MlflowAIGateway(
147
- gateway_uri="databricks",
148
- route="completions",
149
- )
150
-
151
- llm_chain = LLMChain(
152
- llm=gateway,
153
- prompt=PromptTemplate(
154
- input_variables=["adjective"],
155
- template="Tell me a {adjective} joke",
156
- ),
157
- )
158
- result = llm_chain.run(adjective="funny")
159
- print(result)
160
- ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/mlx.mdx DELETED
@@ -1,34 +0,0 @@
1
- # MLX
2
-
3
- >[MLX](https://ml-explore.github.io/mlx/build/html/index.html) is a `NumPy`-like array framework
4
- > designed for efficient and flexible machine learning on `Apple` silicon,
5
- > brought to you by `Apple machine learning research`.
6
-
7
-
8
- ## Installation and Setup
9
-
10
- Install several Python packages:
11
-
12
- ```bash
13
- pip install mlx-lm transformers huggingface_hub
14
- ```
15
-
16
-
17
- ## Chat models
18
-
19
-
20
- See a [usage example](/docs/integrations/chat/mlx).
21
-
22
- ```python
23
- from langchain_community.chat_models.mlx import ChatMLX
24
- ```
25
-
26
- ## LLMs
27
-
28
- ### MLX Local Pipelines
29
-
30
- See a [usage example](/docs/integrations/llms/mlx_pipelines).
31
-
32
- ```python
33
- from langchain_community.llms.mlx_pipeline import MLXPipeline
34
- ```
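-
- A minimal sketch combining the two, assuming an MLX-community checkpoint (the model id is an example):
-
- ```python
- from langchain_community.llms.mlx_pipeline import MLXPipeline
- from langchain_community.chat_models.mlx import ChatMLX
- from langchain_core.messages import HumanMessage
-
- # Load a local MLX pipeline, then wrap it as a chat model
- llm = MLXPipeline.from_model_id(
-     "mlx-community/quantized-gemma-2b-it",
-     pipeline_kwargs={"max_tokens": 64},
- )
- chat = ChatMLX(llm=llm)
- print(chat.invoke([HumanMessage(content="What is MLX?")]).content)
- ```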
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/modal.mdx DELETED
@@ -1,95 +0,0 @@
1
- # Modal
2
-
3
- This page covers how to use the Modal ecosystem to run LangChain custom LLMs.
4
- It is broken into two parts:
5
-
6
- 1. Modal installation and web endpoint deployment
7
- 2. Using the deployed web endpoint with the `LLM` wrapper class.
8
-
9
- ## Installation and Setup
10
-
11
- - Install with `pip install modal`
12
- - Run `modal token new`
13
-
14
- ## Define your Modal Functions and Webhooks
15
-
16
- You must include a prompt. There is a rigid response structure:
17
-
18
- ```python
19
- class Item(BaseModel):
20
- prompt: str
21
-
22
- @stub.function()
23
- @modal.web_endpoint(method="POST")
24
- def get_text(item: Item):
25
- return {"prompt": run_gpt2.call(item.prompt)}
26
- ```
27
-
28
- The following is an example with the GPT2 model:
29
-
30
- ```python
31
- from pydantic import BaseModel
32
-
33
- import modal
34
-
35
- CACHE_PATH = "/root/model_cache"
36
-
37
- class Item(BaseModel):
38
- prompt: str
39
-
40
- stub = modal.Stub(name="example-get-started-with-langchain")
41
-
42
- def download_model():
43
- from transformers import GPT2Tokenizer, GPT2LMHeadModel
44
- tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
45
- model = GPT2LMHeadModel.from_pretrained('gpt2')
46
- tokenizer.save_pretrained(CACHE_PATH)
47
- model.save_pretrained(CACHE_PATH)
48
-
49
- # Define a container image for the LLM function below, which
50
- # downloads and stores the GPT-2 model.
51
- image = modal.Image.debian_slim().pip_install(
52
- "tokenizers", "transformers", "torch", "accelerate"
53
- ).run_function(download_model)
54
-
55
- @stub.function(
56
- gpu="any",
57
- image=image,
58
- retries=3,
59
- )
60
- def run_gpt2(text: str):
61
- from transformers import GPT2Tokenizer, GPT2LMHeadModel
62
- tokenizer = GPT2Tokenizer.from_pretrained(CACHE_PATH)
63
- model = GPT2LMHeadModel.from_pretrained(CACHE_PATH)
64
- encoded_input = tokenizer(text, return_tensors='pt').input_ids
65
- output = model.generate(encoded_input, max_length=50, do_sample=True)
66
- return tokenizer.decode(output[0], skip_special_tokens=True)
67
-
68
- @stub.function()
69
- @modal.web_endpoint(method="POST")
70
- def get_text(item: Item):
71
- return {"prompt": run_gpt2.call(item.prompt)}
72
- ```
73
-
74
- ### Deploy the web endpoint
75
-
76
- Deploy the web endpoint to Modal cloud with the [`modal deploy`](https://modal.com/docs/reference/cli/deploy) CLI command.
77
- Your web endpoint will acquire a persistent URL under the `modal.run` domain.
78
-
79
- ## LLM wrapper around Modal web endpoint
80
-
81
- The `Modal` LLM wrapper class accepts your deployed web endpoint's URL:
82
-
83
- ```python
84
- from langchain.chains import LLMChain
- from langchain_core.prompts import PromptTemplate
- from langchain_community.llms import Modal
85
-
86
- endpoint_url = "https://ecorp--custom-llm-endpoint.modal.run" # REPLACE ME with your deployed Modal web endpoint's URL
87
-
88
- prompt = PromptTemplate.from_template("Question: {question}\nAnswer:")
- llm = Modal(endpoint_url=endpoint_url)
89
- llm_chain = LLMChain(prompt=prompt, llm=llm)
90
-
91
- question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
92
-
93
- llm_chain.run(question)
94
- ```
95
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/modelscope.mdx DELETED
@@ -1,24 +0,0 @@
1
- # ModelScope
2
-
3
- >[ModelScope](https://www.modelscope.cn/home) is a large repository of models and datasets.
4
-
5
- This page covers how to use the ModelScope ecosystem within LangChain.
6
- It is broken into two parts: installation and setup, and then references to specific ModelScope wrappers.
7
-
8
- ## Installation and Setup
9
-
10
- Install the `modelscope` package.
11
-
12
- ```bash
13
- pip install modelscope
14
- ```
15
-
16
-
17
- ## Text Embedding Models
18
-
19
-
20
- ```python
21
- from langchain_community.embeddings import ModelScopeEmbeddings
22
- ```
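-
- A minimal sketch (the model id below is an example from the ModelScope hub):
-
- ```python
- from langchain_community.embeddings import ModelScopeEmbeddings
-
- embeddings = ModelScopeEmbeddings(model_id="damo/nlp_corom_sentence-embedding_english-base")
- query_vector = embeddings.embed_query("hello world")
- ```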
23
-
24
- For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/modelscope_hub)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/modern_treasury.mdx DELETED
@@ -1,19 +0,0 @@
1
- # Modern Treasury
2
-
3
- >[Modern Treasury](https://www.moderntreasury.com/) simplifies complex payment operations. It is a unified platform to power products and processes that move money.
4
- >- Connect to banks and payment systems
5
- >- Track transactions and balances in real-time
6
- >- Automate payment operations for scale
7
-
8
- ## Installation and Setup
9
-
10
- There isn't any special setup for it.
11
-
12
- ## Document Loader
13
-
14
- See a [usage example](/docs/integrations/document_loaders/modern_treasury).
15
-
16
-
17
- ```python
18
- from langchain_community.document_loaders import ModernTreasuryLoader
19
- ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/momento.mdx DELETED
@@ -1,65 +0,0 @@
1
- # Momento
2
-
3
- > [Momento Cache](https://docs.momentohq.com/) is the world's first truly serverless caching service, offering instant elasticity, scale-to-zero
4
- > capability, and blazing-fast performance.
5
- >
6
- > [Momento Vector Index](https://docs.momentohq.com/vector-index) stands out as the most productive, easiest-to-use, fully serverless vector index.
7
- >
8
- > For both services, simply grab the SDK, obtain an API key, input a few lines into your code, and you're set to go. Together, they provide a comprehensive solution for your LLM data needs.
9
-
10
- This page covers how to use the [Momento](https://gomomento.com) ecosystem within LangChain.
11
-
12
- ## Installation and Setup
13
-
14
- - Sign up for a free account [here](https://console.gomomento.com/) to get an API key
15
- - Install the Momento Python SDK with `pip install momento`
16
-
17
- ## Cache
18
-
19
- Use Momento as a serverless, distributed, low-latency cache for LLM prompts and responses. The standard cache is the primary use case for Momento users in any environment.
20
-
21
- To integrate Momento Cache into your application:
22
-
23
- ```python
24
- from langchain.cache import MomentoCache
25
- ```
26
-
27
- Then, set it up with the following code:
28
-
29
- ```python
30
- from datetime import timedelta
31
- from momento import CacheClient, Configurations, CredentialProvider
32
- from langchain.globals import set_llm_cache
33
-
34
- # Instantiate the Momento client
35
- cache_client = CacheClient(
36
- Configurations.Laptop.v1(),
37
- CredentialProvider.from_environment_variable("MOMENTO_API_KEY"),
38
- default_ttl=timedelta(days=1))
39
-
40
- # Choose a Momento cache name of your choice
41
- cache_name = "langchain"
42
-
43
- # Instantiate the LLM cache
44
- set_llm_cache(MomentoCache(cache_client, cache_name))
45
- ```
46
-
47
- ## Memory
48
-
49
- Momento can be used as a distributed memory store for LLMs.
50
-
51
- See [this notebook](/docs/integrations/memory/momento_chat_message_history) for a walkthrough of how to use Momento as a memory store for chat message history.
52
-
53
- ```python
54
- from langchain.memory import MomentoChatMessageHistory
55
- ```
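-
- A minimal sketch, assuming `MOMENTO_API_KEY` is set in the environment (`from_client_params` builds the cache client for you; the session id and cache name are placeholders):
-
- ```python
- from datetime import timedelta
- from langchain.memory import MomentoChatMessageHistory
-
- history = MomentoChatMessageHistory.from_client_params(
-     "my-session-id",    # session id
-     "langchain",        # cache name
-     timedelta(days=1),  # TTL for stored messages
- )
- history.add_user_message("hi!")
- ```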
56
-
57
- ## Vector Store
58
-
59
- Momento Vector Index (MVI) can be used as a vector store.
60
-
61
- See [this notebook](/docs/integrations/vectorstores/momento_vector_index) for a walkthrough of how to use MVI as a vector store.
62
-
63
- ```python
64
- from langchain_community.vectorstores import MomentoVectorIndex
65
- ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/mongodb_atlas.mdx DELETED
@@ -1,82 +0,0 @@
1
- # MongoDB Atlas
2
-
3
- >[MongoDB Atlas](https://www.mongodb.com/docs/atlas/) is a fully-managed cloud
4
- > database available in AWS, Azure, and GCP. It now has support for native
5
- > Vector Search on the MongoDB document data.
6
-
7
- ## Installation and Setup
8
-
9
- See [detailed configuration instructions](/docs/integrations/vectorstores/mongodb_atlas).
10
-
11
- We need to install the `langchain-mongodb` Python package.
12
-
13
- ```bash
14
- pip install langchain-mongodb
15
- ```
16
-
17
- ## Vector Store
18
-
19
- See a [usage example](/docs/integrations/vectorstores/mongodb_atlas).
20
-
21
- ```python
22
- from langchain_mongodb import MongoDBAtlasVectorSearch
23
- ```
24
-
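- A minimal sketch, assuming an Atlas cluster with a vector search index named `vector_index` defined on the target collection (the connection string, namespace, and embedding model are placeholders):
-
- ```python
- from langchain_mongodb import MongoDBAtlasVectorSearch
- from langchain_openai import OpenAIEmbeddings
-
- vector_store = MongoDBAtlasVectorSearch.from_connection_string(
-     "<YOUR_CONNECTION_STRING>",
-     "my_db.my_collection",  # namespace: "<database>.<collection>"
-     OpenAIEmbeddings(),
-     index_name="vector_index",
- )
- docs = vector_store.similarity_search("What is MongoDB Atlas?", k=3)
- ```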
25
-
26
- ## LLM Caches
27
-
28
- ### MongoDBCache
29
- An abstraction to store a simple cache in MongoDB. This does not use Semantic Caching, nor does it require an index to be made on the collection before generation.
30
-
31
- To import this cache:
32
- ```python
33
- from langchain_mongodb.cache import MongoDBCache
34
- ```
35
-
36
- To use this cache with your LLMs:
37
- ```python
38
- from langchain_core.globals import set_llm_cache
39
-
40
42
-
43
- mongodb_atlas_uri = "<YOUR_CONNECTION_STRING>"
44
- COLLECTION_NAME="<YOUR_CACHE_COLLECTION_NAME>"
45
- DATABASE_NAME="<YOUR_DATABASE_NAME>"
46
-
47
- set_llm_cache(MongoDBCache(
48
- connection_string=mongodb_atlas_uri,
49
- collection_name=COLLECTION_NAME,
50
- database_name=DATABASE_NAME,
51
- ))
52
- ```
53
-
54
-
55
- ### MongoDBAtlasSemanticCache
56
- Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it blends MongoDBAtlas as both a cache and a vectorstore.
57
- The MongoDBAtlasSemanticCache inherits from `MongoDBAtlasVectorSearch` and needs an Atlas Vector Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/mongodb_atlas) on how to set up the index.
58
-
59
- To import this cache:
60
- ```python
61
- from langchain_mongodb.cache import MongoDBAtlasSemanticCache
62
- ```
63
-
64
- To use this cache with your LLMs:
65
- ```python
66
- from langchain_core.globals import set_llm_cache
67
-
68
- # use any embedding provider, for example:
69
- from langchain_openai import OpenAIEmbeddings
70
-
71
- mongodb_atlas_uri = "<YOUR_CONNECTION_STRING>"
72
- COLLECTION_NAME="<YOUR_CACHE_COLLECTION_NAME>"
73
- DATABASE_NAME="<YOUR_DATABASE_NAME>"
74
-
75
- set_llm_cache(MongoDBAtlasSemanticCache(
76
- embedding=OpenAIEmbeddings(),
77
- connection_string=mongodb_atlas_uri,
78
- collection_name=COLLECTION_NAME,
79
- database_name=DATABASE_NAME,
80
- ))
81
- ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/motherduck.mdx DELETED
@@ -1,53 +0,0 @@
1
- # Motherduck
2
-
3
- >[Motherduck](https://motherduck.com/) is a managed DuckDB-in-the-cloud service.
4
-
5
- ## Installation and Setup
6
-
7
- First, you need to install the `duckdb` Python package.
8
-
9
- ```bash
10
- pip install duckdb
11
- ```
12
-
13
- You will also need to sign up for an account at [Motherduck](https://motherduck.com/).
14
-
15
- After that, you should set up a connection string. We mostly integrate with Motherduck through SQLAlchemy.
16
- The connection string is likely in the form:
17
-
18
- ```
19
- token="..."
20
-
21
- conn_str = f"duckdb:///md:{token}@my_db"
22
- ```
23
-
24
- ## SQLChain
25
-
26
- You can use the SQLChain to query data in your Motherduck instance in natural language.
27
-
28
- ```python
29
- from langchain_openai import OpenAI
30
- from langchain_community.utilities import SQLDatabase
31
- from langchain_experimental.sql import SQLDatabaseChain
32
- db = SQLDatabase.from_uri(conn_str)
33
- db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
34
- ```
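-
- A hypothetical query against the chain above (table and column names depend on your database):
-
- ```python
- db_chain.run("How many rows does the my_table table contain?")
- ```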
35
-
36
- From here, see the [SQL Chain](/docs/how_to#qa-over-sql--csv) documentation on how to use it.
37
-
38
-
39
- ## LLMCache
40
-
41
- You can also easily use Motherduck to cache LLM requests.
42
- Once again this is done through the SQLAlchemy wrapper.
43
-
44
- ```python
45
- import sqlalchemy
46
- from langchain.globals import set_llm_cache
- from langchain_community.cache import SQLAlchemyCache
47
- eng = sqlalchemy.create_engine(conn_str)
48
- set_llm_cache(SQLAlchemyCache(engine=eng))
49
- ```
50
-
51
- From here, see the [LLM Caching](/docs/integrations/llm_caching) documentation on how to use it.
52
-
53
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/motorhead.mdx DELETED
@@ -1,16 +0,0 @@
1
- # Motörhead
2
-
3
- >[Motörhead](https://github.com/getmetal/motorhead) is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.
4
-
5
- ## Installation and Setup
6
-
7
- See instructions at [Motörhead](https://github.com/getmetal/motorhead) for running the server locally.
8
-
9
-
10
- ## Memory
11
-
12
- See a [usage example](/docs/integrations/memory/motorhead_memory).
13
-
14
- ```python
15
- from langchain_community.memory import MotorheadMemory
16
- ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/myscale.mdx DELETED
@@ -1,66 +0,0 @@
1
- # MyScale
2
-
3
- This page covers how to use the MyScale vector database within LangChain.
4
- It is broken into two parts: installation and setup, and then references to specific MyScale wrappers.
5
-
6
- With MyScale, you can manage both structured and unstructured (vectorized) data, and perform joint queries and analytics on both types of data using SQL. Plus, MyScale's cloud-native OLAP architecture, built on top of ClickHouse, enables lightning-fast data processing even on massive datasets.
7
-
8
- ## Introduction
9
-
10
- [Overview of MyScale and high-performance vector search](https://docs.myscale.com/en/overview/)
11
-
12
- You can now register on our SaaS and [start a cluster now!](https://docs.myscale.com/en/quickstart/)
13
-
14
- If you are also interested in how we managed to integrate SQL and vector, please refer to [this document](https://docs.myscale.com/en/vector-reference/) for further syntax reference.
15
-
16
- We also provide a live demo on Hugging Face! Please check out our [Hugging Face space](https://huggingface.co/myscale), which searches millions of vectors in the blink of an eye!
17
-
18
- ## Installation and Setup
19
- - Install the Python SDK with `pip install clickhouse-connect`
20
-
21
- ### Setting up environments
22
-
23
- There are two ways to set up parameters for the MyScale index.
24
-
25
- 1. Environment Variables
26
-
27
- Before you run the app, please set the environment variable with `export`:
28
- `export MYSCALE_HOST='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...`
29
-
30
- You can easily find your account, password and other info on our SaaS. For details please refer to [this document](https://docs.myscale.com/en/cluster-management/).
31
- Every attribute under `MyScaleSettings` can be set with the `MYSCALE_` prefix and is case insensitive.
32
-
33
- 2. Create `MyScaleSettings` object with parameters
34
-
35
-
36
- ```python
37
- from langchain_community.vectorstores import MyScale, MyScaleSettings
38
- config = MyScaleSettings(host="<your-backend-url>", port=8443, ...)
39
- index = MyScale(embedding_function, config)
40
- index.add_documents(...)
41
- ```
42
-
43
- ## Wrappers
44
- Supported functions:
45
- - `add_texts`
46
- - `add_documents`
47
- - `from_texts`
48
- - `from_documents`
49
- - `similarity_search`
50
- - `asimilarity_search`
51
- - `similarity_search_by_vector`
52
- - `asimilarity_search_by_vector`
53
- - `similarity_search_with_relevance_scores`
54
- - `delete`
55
-
56
- ### VectorStore
57
-
58
- There exists a wrapper around MyScale database, allowing you to use it as a vectorstore,
59
- whether for semantic search or similar example retrieval.
60
-
61
- To import this vectorstore:
62
- ```python
63
- from langchain_community.vectorstores import MyScale
64
- ```
65
-
66
- For a more detailed walkthrough of the MyScale wrapper, see [this notebook](/docs/integrations/vectorstores/myscale)
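-
- A minimal end-to-end sketch, assuming the `MYSCALE_*` environment variables above are set (the embedding model is an assumption):
-
- ```python
- from langchain_community.vectorstores import MyScale
- from langchain_openai import OpenAIEmbeddings
-
- index = MyScale.from_texts(
-     ["MyScale supports joint SQL and vector queries."],
-     OpenAIEmbeddings(),
- )
- docs = index.similarity_search("vector search with SQL", k=1)
- ```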
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/neo4j.mdx DELETED
@@ -1,60 +0,0 @@
1
- # Neo4j
2
-
3
- >What is `Neo4j`?
4
-
5
- >- Neo4j is an `open-source database management system` that specializes in graph database technology.
6
- >- Neo4j allows you to represent and store data in nodes and edges, making it ideal for handling connected data and relationships.
7
- >- Neo4j provides a `Cypher Query Language`, making it easy to interact with and query your graph data.
8
- >- With Neo4j, you can achieve high-performance `graph traversals and queries`, suitable for production-level systems.
9
-
10
- >Get started with Neo4j by visiting [their website](https://neo4j.com/).
11
-
12
- ## Installation and Setup
13
-
14
- - Install the Python SDK with `pip install neo4j`
15
-
16
-
17
- ## VectorStore
18
-
19
- The Neo4j vector index is used as a vectorstore,
20
- whether for semantic search or example selection.
21
-
22
- ```python
23
- from langchain_community.vectorstores import Neo4jVector
24
- ```
25
-
26
- See a [usage example](/docs/integrations/vectorstores/neo4jvector)
27
-
28
- ## GraphCypherQAChain
29
-
30
- There exists a wrapper around the Neo4j graph database that allows you to generate Cypher statements based on the user input
31
- and use them to retrieve relevant information from the database.
32
-
33
- ```python
34
- from langchain_community.graphs import Neo4jGraph
35
- from langchain.chains import GraphCypherQAChain
36
- ```
37
-
38
- See a [usage example](/docs/integrations/graphs/neo4j_cypher)
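-
- A minimal sketch, assuming a local Neo4j instance and an OpenAI API key (the URL and credentials are placeholders):
-
- ```python
- from langchain_community.graphs import Neo4jGraph
- from langchain.chains import GraphCypherQAChain
- from langchain_openai import ChatOpenAI
-
- graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="<password>")
- chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
- chain.run("Who acted in the most movies?")
- ```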
39
-
40
- ## Constructing a knowledge graph from text
41
-
42
- Text data often contain rich relationships and insights that can be useful for various analytics, recommendation engines, or knowledge management applications.
43
- Diffbot's NLP API allows for the extraction of entities, relationships, and semantic meaning from unstructured text data.
44
- By coupling Diffbot's NLP API with Neo4j, a graph database, you can create powerful, dynamic graph structures based on the information extracted from text.
45
- These graph structures are fully queryable and can be integrated into various applications.
46
-
47
- ```python
48
- from langchain_community.graphs import Neo4jGraph
49
- from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformer
50
- ```
51
-
52
- See a [usage example](/docs/integrations/graphs/diffbot)
53
-
54
- ## Memory
55
-
56
- See a [usage example](/docs/integrations/memory/neo4j_chat_message_history).
57
-
58
- ```python
59
- from langchain.memory import Neo4jChatMessageHistory
60
- ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/nlpcloud.mdx DELETED
@@ -1,31 +0,0 @@
1
- # NLPCloud
2
-
3
- >[NLP Cloud](https://docs.nlpcloud.com/#introduction) is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data.
4
-
5
-
6
- ## Installation and Setup
7
-
8
- - Install the `nlpcloud` package.
9
-
10
- ```bash
11
- pip install nlpcloud
12
- ```
13
-
14
- - Get an NLPCloud API key and set it as an environment variable (`NLPCLOUD_API_KEY`)
15
-
16
-
17
- ## LLM
18
-
19
- See a [usage example](/docs/integrations/llms/nlpcloud).
20
-
21
- ```python
22
- from langchain_community.llms import NLPCloud
23
- ```
24
-
25
- ## Text Embedding Models
26
-
27
- See a [usage example](/docs/integrations/text_embedding/nlp_cloud)
28
-
29
- ```python
30
- from langchain_community.embeddings import NLPCloudEmbeddings
31
- ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/notion.mdx DELETED
@@ -1,20 +0,0 @@
1
- # Notion DB
2
-
3
- >[Notion](https://www.notion.so/) is a collaboration platform with modified Markdown support that integrates kanban
4
- > boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management,
5
- > and project and task management.
6
-
7
- ## Installation and Setup
8
-
9
- All instructions are in examples below.
10
-
11
- ## Document Loader
12
-
13
- We have two different loaders: `NotionDirectoryLoader` and `NotionDBLoader`.
14
-
15
- See [usage examples here](/docs/integrations/document_loaders/notion).
16
-
17
-
18
- ```python
19
- from langchain_community.document_loaders import NotionDirectoryLoader, NotionDBLoader
20
- ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/nuclia.mdx DELETED
@@ -1,78 +0,0 @@
1
- # Nuclia
2
-
3
- >[Nuclia](https://nuclia.com) automatically indexes your unstructured data from any internal
4
- > and external source, providing optimized search results and generative answers.
5
- > It can handle video and audio transcription, image content extraction, and document parsing.
6
-
7
-
8
-
9
- ## Installation and Setup
10
-
11
- We need to install the `nucliadb-protos` package to use the `Nuclia Understanding API`:
12
-
13
- ```bash
14
- pip install nucliadb-protos
15
- ```
16
-
17
- We need to have a `Nuclia account`.
18
- We can create one for free at [https://nuclia.cloud](https://nuclia.cloud),
19
- and then [create a NUA key](https://docs.nuclia.dev/docs/docs/using/understanding/intro).
20
-
21
-
22
- ## Document Transformer
23
-
24
- ### Nuclia
25
-
26
- >`Nuclia Understanding API` document transformer splits text into paragraphs and sentences,
27
- > identifies entities, provides a summary of the text and generates embeddings for all the sentences.
28
-
29
- To use the Nuclia document transformer, we need to instantiate a `NucliaUnderstandingAPI`
30
- tool with `enable_ml` set to `True`:
31
-
32
- ```python
33
- from langchain_community.tools.nuclia import NucliaUnderstandingAPI
34
-
35
- nua = NucliaUnderstandingAPI(enable_ml=True)
36
- ```
37
-
38
- See a [usage example](/docs/integrations/document_transformers/nuclia_transformer).
39
-
40
- ```python
41
- from langchain_community.document_transformers.nuclia_text_transform import NucliaTextTransformer
42
- ```
43
-
44
- ## Document Loaders
45
-
46
- ### Nuclia loader
47
-
48
- See a [usage example](/docs/integrations/document_loaders/nuclia).
49
-
50
- ```python
51
- from langchain_community.document_loaders.nuclia import NucliaLoader
52
- ```
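-
- A minimal sketch, assuming a `NucliaUnderstandingAPI` tool `nua` instantiated as in the Document Transformer section above (the file path is a placeholder):
-
- ```python
- from langchain_community.document_loaders.nuclia import NucliaLoader
-
- loader = NucliaLoader("./interview.mp4", nua)
- documents = loader.load()
- ```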
53
-
54
- ## Vector store
55
-
56
- ### NucliaDB
57
-
58
- We need to install a python package:
59
-
60
- ```bash
61
- pip install nuclia
62
- ```
63
-
64
- See a [usage example](/docs/integrations/vectorstores/nucliadb).
65
-
66
- ```python
67
- from langchain_community.vectorstores.nucliadb import NucliaDB
68
- ```
69
-
70
- ## Tools
71
-
72
- ### Nuclia Understanding
73
-
74
- See a [usage example](/docs/integrations/tools/nuclia).
75
-
76
- ```python
77
- from langchain_community.tools.nuclia import NucliaUnderstandingAPI
78
- ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/nvidia.mdx DELETED
@@ -1,82 +0,0 @@
1
- # NVIDIA
2
- The `langchain-nvidia-ai-endpoints` package contains LangChain integrations for building applications with models on
3
- NVIDIA NIM inference microservices. NIM supports models across domains like chat, embedding, and re-ranking models
4
- from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA
5
- accelerated infrastructure and deployed as NIMs: easy-to-use, prebuilt containers that can be deployed anywhere with a single
6
- command on NVIDIA accelerated infrastructure.
7
-
8
- NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing,
9
- NIMs can be exported from NVIDIA’s API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud,
10
- giving enterprises ownership and full control of their IP and AI applications.
11
-
12
- NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog.
13
- At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.
14
-
15
- Below is an example of how to use some common functionality surrounding text-generative and embedding models.
16
-
17
- ## Installation
18
-
19
- ```bash
20
- pip install -U --quiet langchain-nvidia-ai-endpoints
21
- ```
22
-
23
- ## Setup
24
-
25
- **To get started:**
26
-
27
- 1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.
28
-
29
- 2. Click on your model of choice.
30
-
31
- 3. Under Input select the Python tab, and click `Get API Key`. Then click `Generate Key`.
32
-
33
- 4. Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.
34
-
35
- ```python
36
- import getpass
37
- import os
38
-
39
- if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
40
- nvidia_api_key = getpass.getpass("Enter your NVIDIA API key: ")
41
- assert nvidia_api_key.startswith("nvapi-"), f"{nvidia_api_key[:5]}... is not a valid key"
42
- os.environ["NVIDIA_API_KEY"] = nvidia_api_key
43
- ```
44
- ## Working with NVIDIA API Catalog
45
-
46
- ```python
47
- from langchain_nvidia_ai_endpoints import ChatNVIDIA
48
-
49
- llm = ChatNVIDIA(model="mistralai/mixtral-8x22b-instruct-v0.1")
50
- result = llm.invoke("Write a ballad about LangChain.")
51
- print(result.content)
52
- ```
53
-
54
- Using the API, you can query live endpoints available on the NVIDIA API Catalog to get quick results from a DGX-hosted cloud compute environment. All models are source-accessible and can be deployed on your own compute cluster using NVIDIA NIM, which is part of NVIDIA AI Enterprise, as shown in the next section, [Working with NVIDIA NIMs](#working-with-nvidia-nims).
55
-
56
- ## Working with NVIDIA NIMs
57
- When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.
58
-
59
- [Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)
60
-
61
- ```python
62
- from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings, NVIDIARerank
63
-
64
- # connect to a chat NIM running at localhost:8000, specifying a model
65
- llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta/llama3-8b-instruct")
66
-
67
- # connect to an embedding NIM running at localhost:8080
68
- embedder = NVIDIAEmbeddings(base_url="http://localhost:8080/v1")
69
-
70
- # connect to a reranking NIM running at localhost:2016
71
- ranker = NVIDIARerank(base_url="http://localhost:2016/v1")
72
- ```
73
-
74
- ## Using NVIDIA AI Foundation Endpoints
75
-
76
- A selection of NVIDIA AI Foundation models are supported directly in LangChain with familiar APIs.
77
-
78
- The active models which are supported can be found [in API Catalog](https://build.nvidia.com/).
79
-
80
- **The following may be useful examples to help you get started:**
81
- - **[`ChatNVIDIA` Model](/docs/integrations/chat/nvidia_ai_endpoints).**
82
- - **[`NVIDIAEmbeddings` Model for RAG Workflows](/docs/integrations/text_embedding/nvidia_ai_endpoints).**
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/obsidian.mdx DELETED
@@ -1,19 +0,0 @@
1
- # Obsidian
2
-
3
- >[Obsidian](https://obsidian.md/) is a powerful and extensible knowledge base
4
- > that works on top of your local folder of plain text files.
5
-
6
- ## Installation and Setup
7
-
8
- All instructions are in examples below.
9
-
10
- ## Document Loader
11
-
12
-
13
- See a [usage example](/docs/integrations/document_loaders/obsidian).
14
-
15
-
16
- ```python
17
- from langchain_community.document_loaders import ObsidianLoader
18
- ```
19
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/oci.mdx DELETED
@@ -1,51 +0,0 @@
1
- # Oracle Cloud Infrastructure (OCI)
2
-
3
- The `LangChain` integrations related to [Oracle Cloud Infrastructure](https://www.oracle.com/artificial-intelligence/).
4
-
5
- ## OCI Generative AI
6
- > Oracle Cloud Infrastructure (OCI) [Generative AI](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm) is a fully managed service that provides a set of state-of-the-art,
7
- > customizable large language models (LLMs) that cover a wide range of use cases, and which are available through a single API.
8
- > Using the OCI Generative AI service you can access ready-to-use pretrained models, or create and host your own fine-tuned
9
- > custom models based on your own data on dedicated AI clusters.
10
-
11
- To use, you should have the latest `oci` Python SDK and the `langchain-community` package installed.
12
-
13
- ```bash
14
- pip install -U oci langchain-community
15
- ```
16
-
17
- See [completion](/docs/integrations/llms/oci_generative_ai), [chat](/docs/integrations/chat/oci_generative_ai), and [embedding](/docs/integrations/text_embedding/oci_generative_ai) usage examples.
18
-
19
- ```python
20
- from langchain_community.chat_models import ChatOCIGenAI
21
-
22
- from langchain_community.llms import OCIGenAI
23
-
24
- from langchain_community.embeddings import OCIGenAIEmbeddings
25
- ```
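-
- A minimal sketch, assuming the default OCI config file for authentication (the model id, service endpoint, and compartment OCID are placeholders):
-
- ```python
- from langchain_community.llms import OCIGenAI
-
- llm = OCIGenAI(
-     model_id="cohere.command",
-     service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
-     compartment_id="<compartment_ocid>",
- )
- print(llm.invoke("Tell me one fact about Oracle Cloud."))
- ```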
26
-
27
- ## OCI Data Science Model Deployment Endpoint
28
-
29
- > [OCI Data Science](https://docs.oracle.com/en-us/iaas/data-science/using/home.htm) is a
30
- > fully managed and serverless platform for data science teams. Using the OCI Data Science
31
- > platform you can build, train, and manage machine learning models, and then deploy them
32
- > as an OCI Model Deployment Endpoint using the
33
- > [OCI Data Science Model Deployment Service](https://docs.oracle.com/en-us/iaas/data-science/using/model-dep-about.htm).
34
-
35
- If you deployed an LLM with the vLLM or TGI framework, you can use the
36
- `OCIModelDeploymentVLLM` or `OCIModelDeploymentTGI` classes to interact with it.
37
-
38
- To use, you should have the latest `oracle-ads` python SDK installed.
39
-
40
- ```bash
41
- pip install -U oracle-ads
42
- ```
43
-
44
- See [usage examples](/docs/integrations/llms/oci_model_deployment_endpoint).
45
-
46
- ```python
47
- from langchain_community.llms import OCIModelDeploymentVLLM
48
-
49
- from langchain_community.llms import OCIModelDeploymentTGI
50
- ```
51
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/octoai.mdx DELETED
@@ -1,37 +0,0 @@
1
- # OctoAI
2
-
3
- >[OctoAI](https://docs.octoai.cloud/docs) offers easy access to efficient compute
4
- > and enables users to integrate their choice of AI models into applications.
5
- > The `OctoAI` compute service helps you run, tune, and scale AI applications easily.
6
-
7
-
8
- ## Installation and Setup
9
-
10
- - Install the `openai` Python package:
11
- ```bash
12
- pip install openai
13
- ```
14
- - Register on `OctoAI` and get an API Token from [your OctoAI account page](https://octoai.cloud/settings).
15
-
16
-
17
- ## Chat models
18
-
19
- See a [usage example](/docs/integrations/chat/octoai).
20
-
21
- ```python
22
- from langchain_community.chat_models import ChatOctoAI
23
- ```
24
-
25
- ## LLMs
26
-
27
- See a [usage example](/docs/integrations/llms/octoai).
28
-
29
- ```python
30
- from langchain_community.llms.octoai_endpoint import OctoAIEndpoint
31
- ```
32
-
33
- ## Embedding models
34
-
35
- ```python
36
- from langchain_community.embeddings.octoai_embeddings import OctoAIEmbeddings
37
- ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/ollama.mdx DELETED
@@ -1,73 +0,0 @@
1
- # Ollama
2
-
3
- >[Ollama](https://ollama.com/) allows you to run open-source large language models,
4
- > such as [Llama3.1](https://ai.meta.com/blog/meta-llama-3-1/), locally.
5
- >
6
- >`Ollama` bundles model weights, configuration, and data into a single package, defined by a Modelfile.
7
- >It optimizes setup and configuration details, including GPU usage.
8
- >For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).
9
-
10
- See [this guide](/docs/how_to/local_llms) for more details
11
- on how to use `Ollama` with LangChain.
12
-
13
- ## Installation and Setup
14
- ### Ollama installation
15
- Follow [these instructions](https://github.com/ollama/ollama?tab=readme-ov-file#ollama)
16
- to set up and run a local Ollama instance.
17
-
18
- Ollama will start as a background service automatically; if it is disabled, run:
19
-
20
- ```bash
21
- # export OLLAMA_HOST=127.0.0.1 # environment variable to set ollama host
22
- # export OLLAMA_PORT=11434 # environment variable to set the ollama port
23
- ollama serve
24
- ```
25
-
26
- After starting Ollama, run `ollama pull <model_checkpoint>` to download a model
27
- from the [Ollama model library](https://ollama.ai/library).
28
-
29
- ```bash
30
- ollama pull llama3.1
31
- ```
32
-
33
- We're now ready to install the `langchain-ollama` partner package and run a model.
34
-
35
- ### Ollama LangChain partner package install
36
- Install the integration package with:
37
- ```bash
38
- pip install langchain-ollama
39
- ```
40
- ## LLM
41
-
42
- ```python
43
- from langchain_ollama.llms import OllamaLLM
44
- ```
45
-
46
- See the notebook example [here](/docs/integrations/llms/ollama).
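-
- A minimal sketch, assuming the `llama3.1` model pulled above and a local Ollama server:
-
- ```python
- from langchain_ollama.llms import OllamaLLM
-
- llm = OllamaLLM(model="llama3.1")
- print(llm.invoke("The first man on the moon was ..."))
- ```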
47
-
48
- ## Chat Models
49
-
50
- ### Chat Ollama
51
-
52
- ```python
53
- from langchain_ollama.chat_models import ChatOllama
54
- ```
55
-
56
- See the notebook example [here](/docs/integrations/chat/ollama).
57
-
58
- ### Ollama tool calling
59
- [Ollama tool calling](https://ollama.com/blog/tool-support) uses the
60
- OpenAI compatible web server specification, and can be used with
61
- the default `BaseChatModel.bind_tools()` methods
62
- as described [here](/docs/how_to/tool_calling/).
63
- Make sure to select an Ollama model that supports [tool calling](https://ollama.com/search?&c=tools).
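-
- A minimal sketch of tool calling, assuming a tool-capable model such as `llama3.1`:
-
- ```python
- from langchain_core.tools import tool
- from langchain_ollama.chat_models import ChatOllama
-
- @tool
- def multiply(a: int, b: int) -> int:
-     """Multiply two integers."""
-     return a * b
-
- llm = ChatOllama(model="llama3.1").bind_tools([multiply])
- msg = llm.invoke("What is 6 times 7?")
- print(msg.tool_calls)  # the tool invocations requested by the model
- ```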
64
-
65
- ## Embedding models
66
-
67
- ```python
68
- from langchain_community.embeddings import OllamaEmbeddings
69
- ```
70
-
71
- See the notebook example [here](/docs/integrations/text_embedding/ollama).
72
-
73
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/ontotext_graphdb.mdx DELETED
@@ -1,21 +0,0 @@
1
- # Ontotext GraphDB
2
-
3
- >[Ontotext GraphDB](https://graphdb.ontotext.com/) is a graph database and knowledge discovery tool compliant with RDF and SPARQL.
4
-
5
- ## Dependencies
6
-
7
- Install the [rdflib](https://github.com/RDFLib/rdflib) package with
8
- ```bash
9
- pip install rdflib==7.0.0
10
- ```
11
-
12
- ## Graph QA Chain
13
-
14
- Connect your GraphDB database to a chat model to get insights into your data.
15
-
16
- See the notebook example [here](/docs/integrations/graphs/ontotext).
17
-
18
- ```python
19
- from langchain_community.graphs import OntotextGraphDBGraph
20
- from langchain.chains import OntotextGraphDBQAChain
21
- ```
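-
- A minimal sketch, assuming a local GraphDB repository named `langchain` (the endpoint, ontology query, and model are placeholders):
-
- ```python
- from langchain_community.graphs import OntotextGraphDBGraph
- from langchain.chains import OntotextGraphDBQAChain
- from langchain_openai import ChatOpenAI
-
- graph = OntotextGraphDBGraph(
-     query_endpoint="http://localhost:7200/repositories/langchain",
-     query_ontology="CONSTRUCT {?s ?p ?o} WHERE {?s ?p ?o}",
- )
- chain = OntotextGraphDBQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph)
- ```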
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/openllm.mdx DELETED
@@ -1,70 +0,0 @@
1
- # OpenLLM
2
-
3
- This page demonstrates how to use [OpenLLM](https://github.com/bentoml/OpenLLM)
4
- with LangChain.
5
-
6
- `OpenLLM` is an open platform for operating large language models (LLMs) in
7
- production. It enables developers to easily run inference with any open-source
8
- LLMs, deploy to the cloud or on-premises, and build powerful AI apps.
9
-
10
- ## Installation and Setup
11
-
12
- Install the OpenLLM package via PyPI:
13
-
14
- ```bash
15
- pip install openllm
16
- ```
17
-
18
- ## LLM
19
-
20
- OpenLLM supports a wide range of open-source LLMs as well as serving users' own
21
- fine-tuned LLMs. Use the `openllm models` command to see all available models that
22
- are pre-optimized for OpenLLM.
23
-
24
- ## Wrappers
25
-
26
- There is an OpenLLM wrapper which supports loading an LLM in-process or accessing a
27
- remote OpenLLM server:
28
-
29
- ```python
30
- from langchain_community.llms import OpenLLM
31
- ```
32
-
33
- ### Wrapper for OpenLLM server
34
-
35
- This wrapper supports connecting to an OpenLLM server via HTTP or gRPC. The
36
- OpenLLM server can run either locally or on the cloud.
37
-
38
- To try it out locally, start an OpenLLM server:
39
-
40
- ```bash
41
- openllm start flan-t5
42
- ```
43
-
44
- Wrapper usage:
45
-
46
- ```python
47
- from langchain_community.llms import OpenLLM
48
-
49
- llm = OpenLLM(server_url='http://localhost:3000')
50
-
51
- llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
52
- ```
53
-
54
- ### Wrapper for Local Inference
55
-
56
- You can also use the OpenLLM wrapper to load an LLM in the current Python process for
57
- running inference.
58
-
59
- ```python
60
- from langchain_community.llms import OpenLLM
61
-
62
- llm = OpenLLM(model_name="dolly-v2", model_id='databricks/dolly-v2-7b')
63
-
64
- llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
65
- ```
66
-
67
- ### Usage
68
-
69
- For a more detailed walkthrough of the OpenLLM Wrapper, see the
70
- [example notebook](/docs/integrations/llms/openllm)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/opensearch.mdx DELETED
@@ -1,21 +0,0 @@
1
- # OpenSearch
2
-
3
- This page covers how to use the OpenSearch ecosystem within LangChain.
4
- It is broken into two parts: installation and setup, and then references to specific OpenSearch wrappers.
5
-
6
- ## Installation and Setup
7
- - Install the Python package with `pip install opensearch-py`
8
- ## Wrappers
9
-
10
- ### VectorStore
11
-
12
- There exists a wrapper around OpenSearch vector databases, allowing you to use it as a vectorstore
13
- for semantic search using approximate vector search powered by the Lucene, NMSLIB, and Faiss engines,
14
- or using Painless scripting and script scoring functions for brute-force vector search.
15
-
16
- To import this vectorstore:
17
- ```python
18
- from langchain_community.vectorstores import OpenSearchVectorSearch
19
- ```
20
-
21
- For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](/docs/integrations/vectorstores/opensearch)
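-
- A minimal sketch, assuming a local OpenSearch instance at `http://localhost:9200` (the embedding model is an assumption):
-
- ```python
- from langchain_community.vectorstores import OpenSearchVectorSearch
- from langchain_openai import OpenAIEmbeddings
-
- docsearch = OpenSearchVectorSearch.from_texts(
-     ["OpenSearch supports approximate k-NN search."],
-     OpenAIEmbeddings(),
-     opensearch_url="http://localhost:9200",
- )
- docs = docsearch.similarity_search("k-NN search", k=1)
- ```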
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/openweathermap.mdx DELETED
@@ -1,44 +0,0 @@
1
- # OpenWeatherMap
2
-
3
- >[OpenWeatherMap](https://openweathermap.org/api/) provides all essential weather data for a specific location:
4
- >- Current weather
5
- >- Minute forecast for 1 hour
6
- >- Hourly forecast for 48 hours
7
- >- Daily forecast for 8 days
8
- >- National weather alerts
9
- >- Historical weather data for 40+ years back
10
-
11
- This page covers how to use the `OpenWeatherMap API` within LangChain.
12
-
13
- ## Installation and Setup
14
-
15
- - Install requirements with
16
- ```bash
17
- pip install pyowm
18
- ```
19
- - Go to OpenWeatherMap and sign up for an account to get your API key [here](https://openweathermap.org/api/)
20
- - Set your API key as the `OPENWEATHERMAP_API_KEY` environment variable
21
-
22
- ## Wrappers
23
-
24
- ### Utility
25
-
26
- There exists an `OpenWeatherMapAPIWrapper` utility which wraps this API. To import this utility:
27
-
28
- ```python
29
- from langchain_community.utilities.openweathermap import OpenWeatherMapAPIWrapper
30
- ```
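-
- A minimal sketch, assuming `OPENWEATHERMAP_API_KEY` is set:
-
- ```python
- weather = OpenWeatherMapAPIWrapper()
- print(weather.run("London,GB"))  # current weather for London
- ```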
31
-
32
- For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/openweathermap).
33
-
34
- ### Tool
35
-
36
- You can also easily load this wrapper as a Tool (to use with an Agent).
37
- You can do this with:
38
-
39
- ```python
40
- from langchain.agents import load_tools
41
- tools = load_tools(["openweathermap-api"])
42
- ```
43
-
44
- For more information on tools, see [this page](/docs/how_to/tools_builtin).
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
langchain_md_files/integrations/providers/oracleai.mdx DELETED
@@ -1,67 +0,0 @@
1
- # OracleAI Vector Search
2
-
3
- Oracle AI Vector Search is designed for Artificial Intelligence (AI) workloads and allows you to query data based on semantics rather than keywords.
4
- One of the biggest benefits of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data in one single system.
5
- This is not only powerful but also significantly more effective because you don't need to add a specialized vector database, eliminating the pain of data fragmentation between multiple systems.
6
-
7
- In addition, your vectors can benefit from all of Oracle Database’s most powerful features, like the following:
8
-
9
- * [Partitioning Support](https://www.oracle.com/database/technologies/partitioning.html)
10
- * [Real Application Clusters scalability](https://www.oracle.com/database/real-application-clusters/)
11
- * [Exadata smart scans](https://www.oracle.com/database/technologies/exadata/software/smartscan/)
12
- * [Shard processing across geographically distributed databases](https://www.oracle.com/database/distributed-database/)
13
- * [Transactions](https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/transactions.html)
14
- * [Parallel SQL](https://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/parallel-exec-intro.html#GUID-D28717E4-0F77-44F5-BB4E-234C31D4E4BA)
15
- * [Disaster recovery](https://www.oracle.com/database/data-guard/)
16
- * [Security](https://www.oracle.com/security/database-security/)
17
- * [Oracle Machine Learning](https://www.oracle.com/artificial-intelligence/database-machine-learning/)
18
- * [Oracle Graph Database](https://www.oracle.com/database/integrated-graph-database/)
19
- * [Oracle Spatial and Graph](https://www.oracle.com/database/spatial/)
20
- * [Oracle Blockchain](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_blockchain_table.html#GUID-B469E277-978E-4378-A8C1-26D3FF96C9A6)
21
- * [JSON](https://docs.oracle.com/en/database/oracle/oracle-database/23/adjsn/json-in-oracle-database.html)
22
-
23
-
24
- ## Document Loaders
25
-
26
- Please check the [usage example](/docs/integrations/document_loaders/oracleai).
27
-
28
- ```python
29
- from langchain_community.document_loaders.oracleai import OracleDocLoader
30
- ```
31
-
32
- ## Text Splitter
33
-
34
- Please check the [usage example](/docs/integrations/document_loaders/oracleai).
35
-
36
- ```python
37
- from langchain_community.document_loaders.oracleai import OracleTextSplitter
38
- ```
39
-
40
- ## Embeddings
41
-
42
- Please check the [usage example](/docs/integrations/text_embedding/oracleai).
43
-
44
- ```python
45
- from langchain_community.embeddings.oracleai import OracleEmbeddings
46
- ```
47
-
48
- ## Summary
49
-
50
- Please check the [usage example](/docs/integrations/tools/oracleai).
51
-
52
- ```python
53
- from langchain_community.utilities.oracleai import OracleSummary
54
- ```
55
-
56
- ## Vector Store
57
-
58
- Please check the [usage example](/docs/integrations/vectorstores/oracle).
59
-
60
- ```python
61
- from langchain_community.vectorstores.oraclevs import OracleVS
62
- ```
63
-
64
- ## End to End Demo
65
-
66
- Please check the [Oracle AI Vector Search End-to-End Demo Guide](https://github.com/langchain-ai/langchain/blob/master/cookbook/oracleai_demo.ipynb).
67
-