75943fcbffa88cf3125f7fb5729293e826dcfb63ac7cf15097d5124a31d2cf67
- langchain_md_files/integrations/providers/outline.mdx +22 -0
- langchain_md_files/integrations/providers/pandas.mdx +29 -0
- langchain_md_files/integrations/providers/perplexity.mdx +25 -0
- langchain_md_files/integrations/providers/petals.mdx +17 -0
- langchain_md_files/integrations/providers/pg_embedding.mdx +22 -0
- langchain_md_files/integrations/providers/pgvector.mdx +29 -0
- langchain_md_files/integrations/providers/pinecone.mdx +51 -0
- langchain_md_files/integrations/providers/pipelineai.mdx +19 -0
- langchain_md_files/integrations/providers/predictionguard.mdx +102 -0
- langchain_md_files/integrations/providers/promptlayer.mdx +49 -0
- langchain_md_files/integrations/providers/psychic.mdx +34 -0
- langchain_md_files/integrations/providers/pygmalionai.mdx +21 -0
- langchain_md_files/integrations/providers/qdrant.mdx +27 -0
- langchain_md_files/integrations/providers/rank_bm25.mdx +25 -0
- langchain_md_files/integrations/providers/reddit.mdx +22 -0
- langchain_md_files/integrations/providers/redis.mdx +138 -0
- langchain_md_files/integrations/providers/remembrall.mdx +15 -0
- langchain_md_files/integrations/providers/replicate.mdx +46 -0
- langchain_md_files/integrations/providers/roam.mdx +17 -0
- langchain_md_files/integrations/providers/robocorp.mdx +37 -0
- langchain_md_files/integrations/providers/rockset.mdx +33 -0
- langchain_md_files/integrations/providers/runhouse.mdx +29 -0
- langchain_md_files/integrations/providers/rwkv.mdx +65 -0
- langchain_md_files/integrations/providers/salute_devices.mdx +37 -0
- langchain_md_files/integrations/providers/sap.mdx +25 -0
- langchain_md_files/integrations/providers/searchapi.mdx +80 -0
- langchain_md_files/integrations/providers/searx.mdx +90 -0
- langchain_md_files/integrations/providers/semadb.mdx +19 -0
- langchain_md_files/integrations/providers/serpapi.mdx +31 -0
- langchain_md_files/integrations/providers/singlestoredb.mdx +28 -0
- langchain_md_files/integrations/providers/sklearn.mdx +35 -0
- langchain_md_files/integrations/providers/slack.mdx +32 -0
- langchain_md_files/integrations/providers/snowflake.mdx +32 -0
- langchain_md_files/integrations/providers/spacy.mdx +28 -0
- langchain_md_files/integrations/providers/sparkllm.mdx +14 -0
- langchain_md_files/integrations/providers/spreedly.mdx +15 -0
- langchain_md_files/integrations/providers/sqlite.mdx +31 -0
- langchain_md_files/integrations/providers/stackexchange.mdx +36 -0
- langchain_md_files/integrations/providers/starrocks.mdx +21 -0
- langchain_md_files/integrations/providers/stochasticai.mdx +17 -0
- langchain_md_files/integrations/providers/streamlit.mdx +30 -0
- langchain_md_files/integrations/providers/stripe.mdx +16 -0
- langchain_md_files/integrations/providers/supabase.mdx +26 -0
- langchain_md_files/integrations/providers/symblai_nebula.mdx +17 -0
- langchain_md_files/integrations/providers/tair.mdx +23 -0
- langchain_md_files/integrations/providers/telegram.mdx +25 -0
- langchain_md_files/integrations/providers/tencent.mdx +95 -0
- langchain_md_files/integrations/providers/tensorflow_datasets.mdx +31 -0
- langchain_md_files/integrations/providers/tidb.mdx +38 -0
- langchain_md_files/integrations/providers/tigergraph.mdx +25 -0
langchain_md_files/integrations/providers/outline.mdx
ADDED
@@ -0,0 +1,22 @@
# Outline

> [Outline](https://www.getoutline.com/) is an open-source collaborative knowledge base platform designed for team information sharing.

## Setup

You first need to [create an API key](https://www.getoutline.com/developers#section/Authentication) for your Outline instance. Then you need to set the following environment variables:

```python
import os

os.environ["OUTLINE_API_KEY"] = "xxx"
os.environ["OUTLINE_INSTANCE_URL"] = "https://app.getoutline.com"
```

## Retriever

See a [usage example](/docs/integrations/retrievers/outline).

```python
from langchain.retrievers import OutlineRetriever
```
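A minimal sketch of how the retriever might be used once the environment variables above are set (the query string is illustrative):

```python
from langchain.retrievers import OutlineRetriever

# Assumes OUTLINE_API_KEY and OUTLINE_INSTANCE_URL are set as above.
retriever = OutlineRetriever()
docs = retriever.invoke("What is our deployment process?")
for doc in docs:
    print(doc.metadata.get("title"), doc.page_content[:100])
```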
langchain_md_files/integrations/providers/pandas.mdx
ADDED
@@ -0,0 +1,29 @@
# Pandas

>[pandas](https://pandas.pydata.org) is a fast, powerful, flexible, and easy-to-use open-source data analysis and manipulation tool,
> built on top of the `Python` programming language.

## Installation and Setup

Install the `pandas` package using `pip`:

```bash
pip install pandas
```

## Document loader

See a [usage example](/docs/integrations/document_loaders/pandas_dataframe).

```python
from langchain_community.document_loaders import DataFrameLoader
```

## Toolkit

See a [usage example](/docs/integrations/tools/pandas).

```python
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
```
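As a brief sketch of the document loader above (the toy DataFrame and the `text` column name are illustrative; `page_content_column` selects which column becomes the document body, and the remaining columns become metadata):

```python
import pandas as pd
from langchain_community.document_loaders import DataFrameLoader

# A toy DataFrame; in practice this would be your own data.
df = pd.DataFrame({"text": ["First document.", "Second document."],
                   "source": ["a", "b"]})

loader = DataFrameLoader(df, page_content_column="text")
docs = loader.load()
print(docs[0].page_content, docs[0].metadata)  # metadata holds the "source" column
```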
langchain_md_files/integrations/providers/perplexity.mdx
ADDED
@@ -0,0 +1,25 @@
# Perplexity

>[Perplexity](https://www.perplexity.ai/pro) is the most powerful way to search
> the internet with unlimited Pro Search, upgraded AI models, unlimited file upload,
> image generation, and API credits.
>
> You can check a [list of available models](https://docs.perplexity.ai/docs/model-cards).

## Installation and Setup

Install a Python package:

```bash
pip install openai
```

Get your API key from [here](https://docs.perplexity.ai/docs/getting-started).

## Chat models

See a [usage example](/docs/integrations/chat/perplexity).

```python
from langchain_community.chat_models import ChatPerplexity
```
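A minimal usage sketch, assuming a `PPLX_API_KEY` environment variable; the model name is illustrative, so check the model cards linked above for current options:

```python
from langchain_community.chat_models import ChatPerplexity

# Assumes PPLX_API_KEY is set; the model name is illustrative.
chat = ChatPerplexity(model="llama-3.1-sonar-small-128k-online", temperature=0)
response = chat.invoke("Summarize the main idea of retrieval-augmented generation.")
print(response.content)
```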
langchain_md_files/integrations/providers/petals.mdx
ADDED
@@ -0,0 +1,17 @@
# Petals

This page covers how to use the Petals ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Petals wrappers.

## Installation and Setup
- Install with `pip install petals`
- Get a Hugging Face API key and set it as an environment variable (`HUGGINGFACE_API_KEY`)

## Wrappers

### LLM

There exists a Petals LLM wrapper, which you can access with
```python
from langchain_community.llms import Petals
```
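A short initialization sketch (the model name is illustrative):

```python
from langchain_community.llms import Petals

# Model name is illustrative; any model served over the Petals swarm should work.
llm = Petals(model_name="bigscience/bloom-petals")
print(llm.invoke("What is the capital of France?"))
```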
langchain_md_files/integrations/providers/pg_embedding.mdx
ADDED
@@ -0,0 +1,22 @@
# Postgres Embedding

> [pg_embedding](https://github.com/neondatabase/pg_embedding) is an open-source package for
> vector similarity search using `Postgres` and the `Hierarchical Navigable Small Worlds (HNSW)`
> algorithm for approximate nearest neighbor search.

## Installation and Setup

We need to install the `psycopg2-binary` Python package.

```bash
pip install psycopg2-binary
```

## Vector Store

See a [usage example](/docs/integrations/vectorstores/pgembedding).

```python
from langchain_community.vectorstores import PGEmbedding
```
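A minimal sketch of creating the store, assuming the extension is installed; the connection string, collection name, and embeddings class are all illustrative:

```python
from langchain_community.vectorstores import PGEmbedding
from langchain_openai import OpenAIEmbeddings

# Connection string and collection name are illustrative placeholders.
store = PGEmbedding.from_texts(
    texts=["pg_embedding builds HNSW indexes in Postgres."],
    embedding=OpenAIEmbeddings(),
    collection_name="demo",
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",
)
print(store.similarity_search("what builds HNSW indexes?", k=1))
```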
langchain_md_files/integrations/providers/pgvector.mdx
ADDED
@@ -0,0 +1,29 @@
# PGVector

This page covers how to use the Postgres [PGVector](https://github.com/pgvector/pgvector) ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific PGVector wrappers.

## Installation
- Install the Python package with `pip install pgvector`

## Setup
1. The first step is to create a database with the `pgvector` extension installed.

Follow the steps at [PGVector Installation Steps](https://github.com/pgvector/pgvector#installation) to install the database and the extension. The Docker image is the easiest way to get started.

## Wrappers

### VectorStore

There exists a wrapper around Postgres vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.

To import this vectorstore:
```python
from langchain_community.vectorstores.pgvector import PGVector
```

### Usage

For a more detailed walkthrough of the PGVector wrapper, see [this notebook](/docs/integrations/vectorstores/pgvector).
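A hedged end-to-end sketch, assuming a local Postgres with the extension enabled (the connection string, collection name, and embeddings class are illustrative):

```python
from langchain_community.vectorstores.pgvector import PGVector
from langchain_openai import OpenAIEmbeddings

# Connection string is illustrative; adjust to your own database.
CONNECTION_STRING = "postgresql+psycopg2://user:pass@localhost:5432/vectordb"

store = PGVector.from_texts(
    texts=["LangChain supports pgvector.", "Postgres stores the embeddings."],
    embedding=OpenAIEmbeddings(),
    collection_name="demo",
    connection_string=CONNECTION_STRING,
)
print(store.similarity_search("what stores embeddings?", k=1))
```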
langchain_md_files/integrations/providers/pinecone.mdx
ADDED
@@ -0,0 +1,51 @@
---
keywords: [pinecone]
---

# Pinecone

>[Pinecone](https://docs.pinecone.io/docs/overview) is a vector database with broad functionality.

## Installation and Setup

Install the Python SDK:

```bash
pip install langchain-pinecone
```

## Vector store

There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.

```python
from langchain_pinecone import PineconeVectorStore
```

For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](/docs/integrations/vectorstores/pinecone).

## Retrievers

### Pinecone Hybrid Search

```bash
pip install pinecone-client pinecone-text
```

```python
from langchain_community.retrievers import (
    PineconeHybridSearchRetriever,
)
```

For more detailed information, see [this notebook](/docs/integrations/retrievers/pinecone_hybrid_search).

### Self Query retriever

The Pinecone vector store can be used as a retriever for self-querying.

For more detailed information, see [this notebook](/docs/integrations/retrievers/self_query/pinecone).
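Returning to the vector store wrapper above, a minimal sketch (assumes a `PINECONE_API_KEY` environment variable and an existing index; the index name and embeddings class are illustrative):

```python
from langchain_pinecone import PineconeVectorStore
from langchain_openai import OpenAIEmbeddings

# The index name is illustrative; the index must already exist in your project.
store = PineconeVectorStore.from_texts(
    texts=["Pinecone is a vector database."],
    embedding=OpenAIEmbeddings(),
    index_name="langchain-demo",
)
print(store.similarity_search("what is Pinecone?", k=1))
```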
langchain_md_files/integrations/providers/pipelineai.mdx
ADDED
@@ -0,0 +1,19 @@
# PipelineAI

This page covers how to use the PipelineAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific PipelineAI wrappers.

## Installation and Setup

- Install with `pip install pipeline-ai`
- Get a Pipeline Cloud API key and set it as an environment variable (`PIPELINE_API_KEY`)

## Wrappers

### LLM

There exists a PipelineAI LLM wrapper, which you can access with

```python
from langchain_community.llms import PipelineAI
```
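A short initialization sketch (the pipeline key is illustrative):

```python
from langchain_community.llms import PipelineAI

# Pipeline key is illustrative; use one from your Pipeline Cloud account.
llm = PipelineAI(pipeline_key="public/gpt-j:base")
print(llm.invoke("Tell me a joke."))
```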
langchain_md_files/integrations/providers/predictionguard.mdx
ADDED
@@ -0,0 +1,102 @@
# Prediction Guard

This page covers how to use the Prediction Guard ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Prediction Guard wrappers.

## Installation and Setup
- Install the Python SDK with `pip install predictionguard`
- Get a Prediction Guard access token (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`)

## LLM Wrapper

There exists a Prediction Guard LLM wrapper, which you can access with
```python
from langchain_community.llms import PredictionGuard
```

You can provide the name of the Prediction Guard model as an argument when initializing the LLM:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct")
```

You can also provide your access token directly as an argument:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct", token="<your access token>")
```

Finally, you can provide an "output" argument that is used to structure/control the output of the LLM:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"})
```

## Example usage

Basic usage of the controlled or guarded LLM wrapper:
```python
import os

from langchain_community.llms import PredictionGuard
from langchain_core.prompts import PromptTemplate

# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

# Define a prompt template
template = """Respond to the following query based on the context.

Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦
Exclusive Candle Box - $80
Monthly Candle Box - $45 (NEW!)
Scent of The Month Box - $28 (NEW!)
Head to stories to get ALL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉

Query: {query}

Result: """
prompt = PromptTemplate.from_template(template)

# With "guarding" or controlling the output of the LLM. See the
# Prediction Guard docs (https://docs.predictionguard.com) to learn how to
# control the output with integer, float, boolean, JSON, and other types and
# structures.
pgllm = PredictionGuard(model="MPT-7B-Instruct",
                        output={
                            "type": "categorical",
                            "categories": [
                                "product announcement",
                                "apology",
                                "relational"
                            ]
                        })
pgllm(prompt.format(query="What kind of post is this?"))
```

Basic LLM chaining with the Prediction Guard wrapper:
```python
import os

from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import PredictionGuard

# Optionally add your OpenAI API key: Prediction Guard also gives you
# access to the latest open-access models (see https://docs.predictionguard.com).
os.environ["OPENAI_API_KEY"] = "<your OpenAI api key>"

# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

pgllm = PredictionGuard(model="OpenAI-gpt-3.5-turbo-instruct")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.predict(question=question)
```
langchain_md_files/integrations/providers/promptlayer.mdx
ADDED
@@ -0,0 +1,49 @@
# PromptLayer

>[PromptLayer](https://docs.promptlayer.com/introduction) is a platform for prompt engineering.
> It also helps with LLM observability: visualizing requests, versioning prompts, and tracking usage.
>
>While `PromptLayer` does have LLMs that integrate directly with LangChain (e.g.
> [`PromptLayerOpenAI`](https://docs.promptlayer.com/languages/langchain)),
> using a callback is the recommended way to integrate `PromptLayer` with LangChain.

## Installation and Setup

To work with `PromptLayer`, we have to:
- Create a `PromptLayer` account
- Create an API token and set it as an environment variable (`PROMPTLAYER_API_KEY`)

Install a Python package:

```bash
pip install promptlayer
```

## Callback

See a [usage example](/docs/integrations/callbacks/promptlayer).

```python
import promptlayer  # Don't forget this import!
from langchain.callbacks import PromptLayerCallbackHandler
```

## LLM

See a [usage example](/docs/integrations/llms/promptlayer_openai).

```python
from langchain_community.llms import PromptLayerOpenAI
```

## Chat Models

See a [usage example](/docs/integrations/chat/promptlayer_chatopenai).

```python
from langchain_community.chat_models import PromptLayerChatOpenAI
```
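As a brief sketch of wiring the recommended callback (see the Callback section above) into a model; assumes the `PROMPTLAYER_API_KEY` environment variable, and the tag is illustrative:

```python
import promptlayer  # Don't forget this import!
from langchain.callbacks import PromptLayerCallbackHandler
from langchain_openai import ChatOpenAI

# Tags are illustrative; they group requests in the PromptLayer dashboard.
chat = ChatOpenAI(callbacks=[PromptLayerCallbackHandler(pl_tags=["demo"])])
chat.invoke("What is PromptLayer?")
```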
langchain_md_files/integrations/providers/psychic.mdx
ADDED
@@ -0,0 +1,34 @@
---
sidebar_class_name: hidden
---

# Psychic

:::warning
This provider is no longer maintained, and may not work. Use with caution.
:::

>[Psychic](https://www.psychic.dev/) is a platform for integrating with SaaS tools like `Notion`, `Zendesk`,
> `Confluence`, and `Google Drive` via OAuth and syncing documents from these applications to your SQL or vector
> database. You can think of it like Plaid for unstructured data.

## Installation and Setup

```bash
pip install psychicapi
```

Psychic is easy to set up - you import the `react` library and configure it with your `Sidekick API` key, which you get
from the [Psychic dashboard](https://dashboard.psychic.dev/). Once you connect the applications, you can
view these connections from the dashboard and retrieve data using the server-side libraries.

1. Create an account in the [dashboard](https://dashboard.psychic.dev/).
2. Use the [react library](https://docs.psychic.dev/sidekick-link) to add the Psychic link modal to your frontend React app. You will use this to connect the SaaS apps.
3. Once you have created a connection, you can use the `PsychicLoader` by following the [example notebook](/docs/integrations/document_loaders/psychic).

## Advantages vs Other Document Loaders

1. **Universal API:** Instead of building OAuth flows and learning the APIs for every SaaS app, you integrate Psychic once and leverage our universal API to retrieve data.
2. **Data Syncs:** Data in your customers' SaaS apps can get stale fast. With Psychic you can configure webhooks to keep your documents up to date on a daily or realtime basis.
3. **Simplified OAuth:** Psychic handles OAuth end-to-end so that you don't have to spend time creating OAuth clients for each integration, keeping access tokens fresh, and handling OAuth redirect logic.
langchain_md_files/integrations/providers/pygmalionai.mdx
ADDED
@@ -0,0 +1,21 @@
# PygmalionAI

>[PygmalionAI](https://pygmalion.chat/) is a company supporting
> open-source models by serving the inference endpoint
> for the [Aphrodite Engine](https://github.com/PygmalionAI/aphrodite-engine).

## Installation and Setup

```bash
pip install aphrodite-engine
```

## LLMs

See a [usage example](/docs/integrations/llms/aphrodite).

```python
from langchain_community.llms import Aphrodite
```
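A short initialization sketch (the model name and parameters are illustrative):

```python
from langchain_community.llms import Aphrodite

# Model name is illustrative; any model servable by the Aphrodite Engine works.
llm = Aphrodite(model="PygmalionAI/pygmalion-2-7b", max_tokens=128)
print(llm.invoke("Once upon a time,"))
```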
langchain_md_files/integrations/providers/qdrant.mdx
ADDED
@@ -0,0 +1,27 @@
# Qdrant

>[Qdrant](https://qdrant.tech/documentation/) (read: quadrant) is a vector similarity search engine.
> It provides a production-ready service with a convenient API to store, search, and manage
> points - vectors with an additional payload. `Qdrant` is tailored to extended filtering support.

## Installation and Setup

Install the Python partner package:

```bash
pip install langchain-qdrant
```

## Vector Store

There exists a wrapper around `Qdrant` indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.

To import this vectorstore:
```python
from langchain_qdrant import QdrantVectorStore
```

For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/integrations/vectorstores/qdrant).
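A minimal sketch using an in-process Qdrant instance (the collection name and embeddings class are illustrative; point `location` at a server in production):

```python
from langchain_qdrant import QdrantVectorStore
from langchain_openai import OpenAIEmbeddings

# ":memory:" runs Qdrant in-process, which is handy for quick experiments.
store = QdrantVectorStore.from_texts(
    texts=["Qdrant stores vectors with payloads."],
    embedding=OpenAIEmbeddings(),
    location=":memory:",
    collection_name="demo",
)
print(store.similarity_search("what does Qdrant store?", k=1))
```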
langchain_md_files/integrations/providers/rank_bm25.mdx
ADDED
@@ -0,0 +1,25 @@
# rank_bm25

[rank_bm25](https://github.com/dorianbrown/rank_bm25) is an open-source collection of algorithms
designed to query documents and return the most relevant ones, commonly used for creating
search engines.

See its [project page](https://github.com/dorianbrown/rank_bm25) for available algorithms.

## Installation and Setup

First, you need to install the `rank_bm25` Python package.

```bash
pip install rank_bm25
```

## Retriever

See a [usage example](/docs/integrations/retrievers/bm25).

```python
from langchain_community.retrievers import BM25Retriever
```
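A minimal sketch (the texts are illustrative):

```python
from langchain_community.retrievers import BM25Retriever

retriever = BM25Retriever.from_texts(
    ["BM25 is a lexical ranking function.", "Vector search uses embeddings."]
)
# Returns the texts most lexically relevant to the query.
print(retriever.invoke("how does BM25 rank documents?"))
```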
langchain_md_files/integrations/providers/reddit.mdx
ADDED
@@ -0,0 +1,22 @@
# Reddit

>[Reddit](https://www.reddit.com) is an American social news aggregation, content rating, and discussion website.

## Installation and Setup

First, you need to install the `praw` Python package.

```bash
pip install praw
```

Make a [Reddit Application](https://www.reddit.com/prefs/apps/) and initialize the loader with your Reddit API credentials.

## Document Loader

See a [usage example](/docs/integrations/document_loaders/reddit).

```python
from langchain_community.document_loaders import RedditPostsLoader
```
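A brief initialization sketch (credentials and queries are placeholders):

```python
from langchain_community.document_loaders import RedditPostsLoader

# Credentials come from your Reddit application; the values here are placeholders.
loader = RedditPostsLoader(
    client_id="<client id>",
    client_secret="<client secret>",
    user_agent="langchain-demo",
    categories=["new"],            # post categories to fetch
    mode="subreddit",
    search_queries=["LangChain"],  # subreddits (or usernames, depending on mode)
    number_posts=5,
)
docs = loader.load()
```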
langchain_md_files/integrations/providers/redis.mdx
ADDED
@@ -0,0 +1,138 @@
# Redis

>[Redis (Remote Dictionary Server)](https://en.wikipedia.org/wiki/Redis) is an open-source in-memory storage,
> used as a distributed, in-memory key–value database, cache and message broker, with optional durability.
> Because it holds all data in memory and because of its design, `Redis` offers low-latency reads and writes,
> making it particularly suitable for use cases that require a cache. Redis is the most popular NoSQL database,
> and one of the most popular databases overall.

This page covers how to use the [Redis](https://redis.com) ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Redis wrappers.

## Installation and Setup

Install the Python SDK:

```bash
pip install redis
```

To run Redis locally, you can use Docker:

```bash
docker run --name langchain-redis -d -p 6379:6379 redis redis-server --save 60 1 --loglevel warning
```

To stop the container:

```bash
docker stop langchain-redis
```

And to start it again:

```bash
docker start langchain-redis
```

### Connections

We need a Redis connection URL to connect to the database. Both a standalone Redis server
and a High-Availability setup with Replication and Redis Sentinels are supported.

#### Redis Standalone connection url
For a standalone `Redis` server, the official redis connection URL formats can be used, as described in the Python redis module's
`from_url()` method [Redis.from_url](https://redis-py.readthedocs.io/en/stable/connections.html#redis.Redis.from_url).

Example: `redis_url = "redis://:secret-pass@localhost:6379/0"`

#### Redis Sentinel connection url

For [Redis sentinel setups](https://redis.io/docs/management/sentinel/) the connection scheme is "redis+sentinel".
This is an unofficial extension of the official IANA-registered protocol schemes, used as long as there is no official connection URL
for Sentinels available.

Example: `redis_url = "redis+sentinel://:secret-pass@sentinel-host:26379/mymaster/0"`

The format is `redis+sentinel://[[username]:[password]]@[host-or-ip]:[port]/[service-name]/[db-number]`
with the default values of "service-name = mymaster" and "db-number = 0" if not set explicitly.
The service-name is the Redis server monitoring group name as configured within the Sentinel.

The current URL format limits the connection string to one sentinel host only (no list can be given), and
both the Redis server and the sentinel must have the same password set (if used).

#### Redis Cluster connection url

Redis Cluster is not supported right now for all methods requiring a "redis_url" parameter.
The only way to use a Redis Cluster is with LangChain classes accepting a preconfigured Redis client like `RedisCache`
(example below).

## Cache

The Cache wrapper allows for [Redis](https://redis.io) to be used as a remote, low-latency, in-memory cache for LLM prompts and responses.

### Standard Cache
The standard cache is the Redis bread & butter use case in production for both [open-source](https://redis.io) and [enterprise](https://redis.com) users globally.

```python
from langchain.cache import RedisCache
```

To use this cache with your LLMs:
```python
from langchain.globals import set_llm_cache
import redis

redis_client = redis.Redis.from_url(...)
set_llm_cache(RedisCache(redis_client))
```

### Semantic Cache
Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it blends Redis as both a cache and a vectorstore.

```python
from langchain.cache import RedisSemanticCache
```

To use this cache with your LLMs:
```python
from langchain.globals import set_llm_cache

# use any embedding provider...
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings

redis_url = "redis://localhost:6379"

set_llm_cache(RedisSemanticCache(
    embedding=FakeEmbeddings(),
    redis_url=redis_url
))
```

## VectorStore

The vectorstore wrapper turns Redis into a low-latency [vector database](https://redis.com/solutions/use-cases/vector-database/) for semantic search or LLM content retrieval.

```python
from langchain_community.vectorstores import Redis
```

For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/integrations/vectorstores/redis).
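A minimal vectorstore sketch (assumes the Docker container above is running; the index name and embeddings class are illustrative):

```python
from langchain_community.vectorstores import Redis
from langchain_openai import OpenAIEmbeddings

# Index name is illustrative; assumes Redis is reachable at this URL.
rds = Redis.from_texts(
    texts=["Redis can store vectors.", "LangChain wraps Redis as a vectorstore."],
    embedding=OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="demo",
)
print(rds.similarity_search("what can Redis store?", k=1))
```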
## Retriever

The Redis vector store retriever wrapper generalizes the vectorstore class to perform
low-latency document retrieval. To create the retriever, simply
call `.as_retriever()` on the base vectorstore class.

## Memory

Redis can be used to persist LLM conversations.

### Vector Store Retriever Memory

For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](https://python.langchain.com/v0.2/api_reference/langchain/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html).

### Chat Message History Memory
For a detailed example of using Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history).
langchain_md_files/integrations/providers/remembrall.mdx
ADDED
@@ -0,0 +1,15 @@
# Remembrall

>[Remembrall](https://remembrall.dev/) is a platform that gives a language model
> long-term memory, retrieval-augmented generation, and complete observability.

## Installation and Setup

To get started, [sign in with GitHub on the Remembrall platform](https://remembrall.dev/login)
and copy your [API key from the settings page](https://remembrall.dev/dashboard/settings).

## Memory

See a [usage example](/docs/integrations/memory/remembrall).
langchain_md_files/integrations/providers/replicate.mdx
ADDED
@@ -0,0 +1,46 @@
# Replicate
This page covers how to run models on Replicate within LangChain.

## Installation and Setup
- Create a [Replicate](https://replicate.com) account. Get your API key and set it as an environment variable (`REPLICATE_API_TOKEN`)
- Install the [Replicate python client](https://github.com/replicate/replicate-python) with `pip install replicate`

## Calling a model

Find a model on the [Replicate explore page](https://replicate.com/explore), and then paste in the model name and version in this format: `owner-name/model-name:version`

For example, for this [dolly model](https://replicate.com/replicate/dolly-v2-12b), click on the API tab. The model name/version would be: `"replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"`

Only the `model` param is required, but any other model parameters can also be passed in with the format `input={model_param: value, ...}`

For example, if we were running stable diffusion and wanted to change the image dimensions:

```python
Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
```

*Note that only the first output of a model will be returned.*
From here, we can initialize our model:

```python
from langchain_community.llms import Replicate

llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
```

And run it:

```python
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
```

We can call any Replicate model (not just LLMs) using this syntax. For example, we can call [Stable Diffusion](https://replicate.com/stability-ai/stable-diffusion):

```python
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})

image_output = text2image("A cat riding a motorcycle by Picasso")
```
langchain_md_files/integrations/providers/roam.mdx
ADDED
@@ -0,0 +1,17 @@
# Roam

>[ROAM](https://roamresearch.com/) is a note-taking tool for networked thought, designed to create a personal knowledge base.

## Installation and Setup

There isn't any special setup for it.

## Document Loader

See a [usage example](/docs/integrations/document_loaders/roam).

```python
from langchain_community.document_loaders import RoamLoader
```
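A short sketch (the directory path is a placeholder for your unzipped Roam export):

```python
from langchain_community.document_loaders import RoamLoader

# Path is a placeholder; point it at your unzipped Roam export directory.
loader = RoamLoader("Roam_DB")
docs = loader.load()
```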
langchain_md_files/integrations/providers/robocorp.mdx
ADDED
@@ -0,0 +1,37 @@
# Robocorp

>[Robocorp](https://robocorp.com/) helps build and operate Python workers that run seamlessly anywhere at any scale.

## Installation and Setup

You need to install the `langchain-robocorp` Python package:

```bash
pip install langchain-robocorp
```

You will need a running instance of `Action Server` to communicate with from your agent application.
See the [Robocorp Quickstart](https://github.com/robocorp/robocorp#quickstart) on how to set up Action Server and create your Actions.

You can bootstrap a new project using the Action Server `new` command.

```bash
action-server new
cd ./your-project-name
action-server start
```

## Tool

```python
from langchain_robocorp.toolkits import ActionServerRequestTool
```

## Toolkit

See a [usage example](/docs/integrations/tools/robocorp).

```python
from langchain_robocorp import ActionServerToolkit
```
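A brief toolkit sketch (the URL assumes the local Action Server started above):

```python
from langchain_robocorp import ActionServerToolkit

# URL assumes the local Action Server started above.
toolkit = ActionServerToolkit(url="http://localhost:8080")
tools = toolkit.get_tools()
```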
langchain_md_files/integrations/providers/rockset.mdx
ADDED
@@ -0,0 +1,33 @@
# Rockset

>[Rockset](https://rockset.com/product/) is a real-time analytics database service for serving low latency, high concurrency analytical queries at scale. It builds a Converged Index™ on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters.

## Installation and Setup

Make sure you have a Rockset account and go to the web console to get the API key. Details can be found on [the website](https://rockset.com/docs/rest-api/).

```bash
pip install rockset
```

## Vector Store

See a [usage example](/docs/integrations/vectorstores/rockset).

```python
from langchain_community.vectorstores import Rockset
```

## Document Loader

See a [usage example](/docs/integrations/document_loaders/rockset).
```python
from langchain_community.document_loaders import RocksetLoader
```

## Chat Message History

See a [usage example](/docs/integrations/memory/rockset_chat_message_history).
```python
from langchain_community.chat_message_histories import RocksetChatMessageHistory
```
langchain_md_files/integrations/providers/runhouse.mdx
ADDED
@@ -0,0 +1,29 @@
# Runhouse

This page covers how to use the [Runhouse](https://github.com/run-house/runhouse) ecosystem within LangChain.
It is broken into three parts: installation and setup, LLMs, and Embeddings.

## Installation and Setup
- Install the Python SDK with `pip install runhouse`
- If you'd like to use an on-demand cluster, check your cloud credentials with `sky check`

## Self-hosted LLMs
For a basic self-hosted LLM, you can use the `SelfHostedHuggingFaceLLM` class. For more
custom LLMs, you can use the `SelfHostedPipeline` parent class.

```python
from langchain_community.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```

For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](/docs/integrations/llms/runhouse).

## Self-hosted Embeddings
There are several ways to use self-hosted embeddings with LangChain via Runhouse.

For a basic self-hosted embedding from a Hugging Face Transformers model, you can use
the `SelfHostedEmbeddings` class.
```python
from langchain_community.embeddings import SelfHostedEmbeddings
```

For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](/docs/integrations/text_embedding/self-hosted).
langchain_md_files/integrations/providers/rwkv.mdx
ADDED
@@ -0,0 +1,65 @@
# RWKV-4

This page covers how to use the `RWKV-4` wrapper within LangChain.
It is broken into two parts: installation and setup, and then usage with an example.

## Installation and Setup
- Install the Python package with `pip install rwkv`
- Install the tokenizer Python package with `pip install tokenizer`
- Download a [RWKV model](https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main) and place it in your desired directory
- Download the [tokens file](https://raw.githubusercontent.com/BlinkDL/ChatRWKV/main/20B_tokenizer.json)

## Usage

### RWKV

To use the RWKV wrapper, you need to provide the path to the pre-trained model file and the tokenizer's configuration.

```python
from langchain_community.llms import RWKV

# Test the model


def generate_prompt(instruction, input=None):
    if input:
        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

# Instruction:
{instruction}

# Input:
{input}

# Response:
"""
    else:
        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

# Instruction:
{instruction}

# Response:
"""


model = RWKV(model="./models/RWKV-4-Raven-3B-v7-Eng-20230404-ctx4096.pth", strategy="cpu fp32", tokens_path="./rwkv/20B_tokenizer.json")
response = model.invoke(generate_prompt("Once upon a time, "))
```

## Model File

You can find links to model file downloads at the [RWKV-4-Raven](https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main) repository.

### Rwkv-4 models -> recommended VRAM

```
RWKV VRAM
Model | 8bit | bf16/fp16 | fp32
14B   | 16GB | 28GB      | >50GB
7B    |  8GB | 14GB      | 28GB
3B    | 2.8GB|  6GB      | 12GB
1b5   | 1.3GB|  3GB      |  6GB
```

See the [rwkv pip](https://pypi.org/project/rwkv/) page for more information about strategies, including streaming and CUDA support.
langchain_md_files/integrations/providers/salute_devices.mdx
ADDED
@@ -0,0 +1,37 @@
# Salute Devices

Salute Devices provides GigaChat LLM models.

For more info on how to get access to GigaChat, [follow here](https://developers.sber.ru/docs/ru/gigachat/api/integration).

## Installation and Setup

The GigaChat package can be installed via pip from PyPI:

```bash
pip install gigachat
```

## LLMs

See a [usage example](/docs/integrations/llms/gigachat).

```python
from langchain_community.llms import GigaChat
```

## Chat models

See a [usage example](/docs/integrations/chat/gigachat).

```python
from langchain_community.chat_models import GigaChat
```

## Embeddings

See a [usage example](/docs/integrations/text_embedding/gigachat).

```python
from langchain_community.embeddings import GigaChatEmbeddings
```
langchain_md_files/integrations/providers/sap.mdx
ADDED
@@ -0,0 +1,25 @@
# SAP

>[SAP SE (Wikipedia)](https://www.sap.com/about/company.html) is a German multinational
> software company. It develops enterprise software to manage business operations and
> customer relations. The company is the world's leading
> `enterprise resource planning (ERP)` software vendor.

## Installation and Setup

We need to install the `hdbcli` Python package.

```bash
pip install hdbcli
```

## Vectorstore

>[SAP HANA Cloud Vector Engine](https://www.sap.com/events/teched/news-guide/ai.html#article8) is
> a vector store fully integrated into the `SAP HANA Cloud` database.

See a [usage example](/docs/integrations/vectorstores/sap_hanavector).

```python
from langchain_community.vectorstores.hanavector import HanaDB
```
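A brief connection sketch (the connection details are placeholders; the embeddings class is illustrative):

```python
from hdbcli import dbapi
from langchain_community.vectorstores.hanavector import HanaDB
from langchain_openai import OpenAIEmbeddings

# Connection details are placeholders for your HANA Cloud instance.
connection = dbapi.connect(
    address="<hostname>", port=443, user="<user>", password="<password>"
)
db = HanaDB(embedding=OpenAIEmbeddings(), connection=connection, table_name="DEMO")
```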
langchain_md_files/integrations/providers/searchapi.mdx
ADDED
@@ -0,0 +1,80 @@
# SearchApi

This page covers how to use the [SearchApi](https://www.searchapi.io/) Google Search API within LangChain. SearchApi is a real-time SERP API for easy SERP scraping.

## Setup

- Go to [https://www.searchapi.io/](https://www.searchapi.io/) to sign up for a free account
- Get the API key and set it as an environment variable (`SEARCHAPI_API_KEY`)

## Wrappers

### Utility

There is a `SearchApiAPIWrapper` utility which wraps this API. To import this utility:

```python
from langchain_community.utilities import SearchApiAPIWrapper
```

You can use it as part of a Self Ask chain:

```python
import os

from langchain_community.utilities import SearchApiAPIWrapper
from langchain_openai import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType

os.environ["SEARCHAPI_API_KEY"] = ""
os.environ["OPENAI_API_KEY"] = ""

llm = OpenAI(temperature=0)
search = SearchApiAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search",
    )
]

self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
self_ask_with_search.run("Who lived longer: Plato, Socrates, or Aristotle?")
```

#### Output

```
> Entering new AgentExecutor chain...
Yes.
Follow up: How old was Plato when he died?
Intermediate answer: eighty
Follow up: How old was Socrates when he died?
Intermediate answer: | Socrates |
| -------- |
| Born | c. 470 BC Deme Alopece, Athens |
| Died | 399 BC (aged approximately 71) Athens |
| Cause of death | Execution by forced suicide by poisoning |
| Spouse(s) | Xanthippe, Myrto |

Follow up: How old was Aristotle when he died?
Intermediate answer: 62 years
So the final answer is: Plato

> Finished chain.
'Plato'
```

### Tool

You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:

```python
from langchain.agents import load_tools
tools = load_tools(["searchapi"])
```

For more information on tools, see [this page](/docs/how_to/tools_builtin).
langchain_md_files/integrations/providers/searx.mdx
ADDED
@@ -0,0 +1,90 @@
# SearxNG Search API

This page covers how to use the SearxNG search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SearxNG API wrapper.

## Installation and Setup

While it is possible to utilize the wrapper in conjunction with [public searx
instances](https://searx.space/), these instances frequently do not permit API
access (see the note on output format below) and have limitations on the frequency
of requests. It is recommended to opt for a self-hosted instance instead.

### Self Hosted Instance:

See [this page](https://searxng.github.io/searxng/admin/installation.html) for installation instructions.

When you install SearxNG, the only active output format by default is the HTML format.
You need to activate the `json` format to use the API. This can be done by adding the following line to the `settings.yml` file:
```yaml
search:
    formats:
        - html
        - json
```
You can make sure that the API is working by issuing a curl request to the API endpoint:

`curl -kLX GET --data-urlencode q='langchain' -d format=json http://localhost:8888`

This should return a JSON object with the results.

## Wrappers

### Utility

To use the wrapper, we need to pass the host of the SearxNG instance to the wrapper with either:
1. the named parameter `searx_host` when creating the instance, or
2. the environment variable `SEARXNG_HOST`.

You can use the wrapper to get results from a SearxNG instance.

```python
from langchain_community.utilities import SearxSearchWrapper
s = SearxSearchWrapper(searx_host="http://localhost:8888")
s.run("what is a large language model?")
```

### Tool

You can also load this wrapper as a Tool (to use with an Agent).

You can do this with:

```python
from langchain.agents import load_tools
tools = load_tools(["searx-search"],
                   searx_host="http://localhost:8888",
                   engines=["github"])
```

Note that we could _optionally_ pass custom engines to use.

If you want to obtain results with metadata as *json* you can use:
```python
tools = load_tools(["searx-search-results-json"],
                   searx_host="http://localhost:8888",
                   num_results=5)
```

#### Quickly creating tools

This example showcases a quick way to create multiple tools from the same
wrapper.

```python
from langchain_community.tools.searx_search.tool import SearxSearchResults
from langchain_community.utilities import SearxSearchWrapper

wrapper = SearxSearchWrapper(searx_host="**")
github_tool = SearxSearchResults(name="Github", wrapper=wrapper,
                                 kwargs={
                                     "engines": ["github"],
                                 })

arxiv_tool = SearxSearchResults(name="Arxiv", wrapper=wrapper,
                                kwargs={
                                    "engines": ["arxiv"]
                                })
```

For more information on tools, see [this page](/docs/how_to/tools_builtin).
langchain_md_files/integrations/providers/semadb.mdx
ADDED
@@ -0,0 +1,19 @@
# SemaDB

>[SemaDB](https://semafind.com/) is a no-fuss vector similarity search engine. It provides a low-cost cloud-hosted version to help you build AI applications with ease.

With SemaDB Cloud, our hosted version, no fuss means no pod size calculations, no schema definitions, no partition settings, no parameter tuning, no search algorithm tuning, no complex installation, and no complex API. It is integrated with [RapidAPI](https://rapidapi.com/semafind-semadb/api/semadb), providing transparent billing, automatic sharding, and an interactive API playground.

## Installation

None required; get started directly with SemaDB Cloud at [RapidAPI](https://rapidapi.com/semafind-semadb/api/semadb).

## Vector Store

There is a basic wrapper around `SemaDB` collections allowing you to use it as a vectorstore.

```python
from langchain_community.vectorstores import SemaDB
```

You can follow a tutorial on how to use the wrapper in [this notebook](/docs/integrations/vectorstores/semadb).
langchain_md_files/integrations/providers/serpapi.mdx
ADDED
@@ -0,0 +1,31 @@
# SerpAPI

This page covers how to use the SerpAPI search APIs within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SerpAPI wrapper.

## Installation and Setup
- Install requirements with `pip install google-search-results`
- Get a SerpAPI API key and set it as an environment variable (`SERPAPI_API_KEY`)

## Wrappers

### Utility

There exists a SerpAPI utility which wraps this API. To import this utility:

```python
from langchain_community.utilities import SerpAPIWrapper
```

For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/serpapi).

### Tool

You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["serpapi"])
```

For more information on this, see [this page](/docs/how_to/tools_builtin).
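As a minimal sketch of the utility described above (assumes `SERPAPI_API_KEY` is set):

```python
from langchain_community.utilities import SerpAPIWrapper

# Assumes SERPAPI_API_KEY is set in the environment.
search = SerpAPIWrapper()
print(search.run("capital of France"))
```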
langchain_md_files/integrations/providers/singlestoredb.mdx
ADDED
@@ -0,0 +1,28 @@
# SingleStoreDB

>[SingleStoreDB](https://singlestore.com/) is a high-performance distributed SQL database that supports deployment both in the [cloud](https://www.singlestore.com/cloud/) and on-premises. It provides vector storage and vector functions, including [dot_product](https://docs.singlestore.com/managed-service/en/reference/sql-reference/vector-functions/dot_product.html) and [euclidean_distance](https://docs.singlestore.com/managed-service/en/reference/sql-reference/vector-functions/euclidean_distance.html), thereby supporting AI applications that require text similarity matching.

## Installation and Setup

There are several ways to establish a [connection](https://singlestoredb-python.labs.singlestore.com/generated/singlestoredb.connect.html) to the database. You can either set up environment variables or pass named parameters to the `SingleStoreDB` constructor.
Alternatively, you may provide these parameters to the `from_documents` and `from_texts` methods.

```bash
pip install singlestoredb
```

## Vector Store

See a [usage example](/docs/integrations/vectorstores/singlestoredb).

```python
from langchain_community.vectorstores import SingleStoreDB
```

## Memory

See a [usage example](/docs/integrations/memory/singlestoredb_chat_message_history).

```python
from langchain.memory import SingleStoreDBChatMessageHistory
```
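A minimal sketch of creating the store from raw texts, passing the connection as a named parameter; the host string and table name below are placeholders:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import SingleStoreDB

# Placeholder connection details; these can also come from environment
# variables understood by the singlestoredb client.
db = SingleStoreDB.from_texts(
    ["SingleStoreDB supports dot_product and euclidean_distance."],
    FakeEmbeddings(size=16),
    host="admin:password@host.example.com:3306/db",
    table_name="langchain_docs",  # hypothetical table name
)
print(db.similarity_search("vector functions", k=1))
```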
langchain_md_files/integrations/providers/sklearn.mdx
ADDED
@@ -0,0 +1,35 @@
# scikit-learn

>[scikit-learn](https://scikit-learn.org/stable/) is an open-source collection of machine learning algorithms,
> including an implementation of [k-nearest neighbors](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html). `SKLearnVectorStore` wraps this implementation and adds the possibility to persist the vector store in JSON, BSON (binary JSON), or Apache Parquet format.

## Installation and Setup

- Install the Python package with `pip install scikit-learn`


## Vector Store

`SKLearnVectorStore` provides a simple wrapper around the nearest neighbor implementation in the
scikit-learn package, allowing you to use it as a vectorstore.

To import this vectorstore:

```python
from langchain_community.vectorstores import SKLearnVectorStore
```

For a more detailed walkthrough of the SKLearnVectorStore wrapper, see [this notebook](/docs/integrations/vectorstores/sklearn).


## Retriever

Support vector machines (SVMs) are supervised learning
methods used for classification, regression, and outlier detection.

See a [usage example](/docs/integrations/retrievers/svm).

```python
from langchain_community.retrievers import SVMRetriever
```
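A minimal sketch of the persistence feature mentioned above; the file path and serializer are illustrative choices:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import SKLearnVectorStore

embeddings = FakeEmbeddings(size=32)  # stand-in embedding model
store = SKLearnVectorStore.from_texts(
    ["scikit-learn wraps a k-nearest-neighbors index."],
    embeddings,
    persist_path="/tmp/sklearn_store.parquet",  # illustrative path
    serializer="parquet",
)
store.persist()  # write the index to disk so it can be reloaded later
```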
langchain_md_files/integrations/providers/slack.mdx
ADDED
@@ -0,0 +1,32 @@
# Slack

>[Slack](https://slack.com/) is an instant messaging program.

## Installation and Setup

No special setup is required.


## Document loader

See a [usage example](/docs/integrations/document_loaders/slack).

```python
from langchain_community.document_loaders import SlackDirectoryLoader
```

## Toolkit

See a [usage example](/docs/integrations/tools/slack).

```python
from langchain_community.agent_toolkits import SlackToolkit
```

## Chat loader

See a [usage example](/docs/integrations/chat_loaders/slack).

```python
from langchain_community.chat_loaders.slack import SlackChatLoader
```
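For orientation, `SlackDirectoryLoader` reads a Slack workspace export zip; a minimal sketch, where the zip path and workspace URL are placeholders:

```python
from langchain_community.document_loaders import SlackDirectoryLoader

# Both arguments are placeholders: point them at your own export.
loader = SlackDirectoryLoader(
    zip_path="slack_export.zip",
    workspace_url="https://myworkspace.slack.com",
)
docs = loader.load()  # one Document per message, with channel metadata
```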
langchain_md_files/integrations/providers/snowflake.mdx
ADDED
@@ -0,0 +1,32 @@
# Snowflake

> [Snowflake](https://www.snowflake.com/) is a cloud-based data-warehousing platform
> that allows you to store and query large amounts of data.

This page covers how to use the `Snowflake` ecosystem within `LangChain`.

## Embedding models

Snowflake offers its open-weight `arctic` line of embedding models for free
on [Hugging Face](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5). The most recent model, `snowflake-arctic-embed-m-v1.5`, features [matryoshka embeddings](https://arxiv.org/abs/2205.13147), which allow for effective vector truncation.
You can use these models via the
[HuggingFaceEmbeddings](/docs/integrations/text_embedding/huggingfacehub) connector:

```shell
pip install langchain-huggingface sentence-transformers
```

```python
from langchain_huggingface import HuggingFaceEmbeddings

model = HuggingFaceEmbeddings(model_name="snowflake/arctic-embed-m-v1.5")
```

## Document loader

You can use the [`SnowflakeLoader`](/docs/integrations/document_loaders/snowflake)
to load data from Snowflake:

```python
from langchain_community.document_loaders import SnowflakeLoader
```
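A minimal sketch of the vector truncation that matryoshka embeddings enable; the 256-dimension cutoff is an illustrative choice:

```python
from langchain_huggingface import HuggingFaceEmbeddings

model = HuggingFaceEmbeddings(model_name="snowflake/arctic-embed-m-v1.5")
vector = model.embed_query("What is a data warehouse?")

# Matryoshka training keeps the leading dimensions informative, so the
# vector can simply be cut short (re-normalize before cosine similarity).
truncated = vector[:256]
```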
langchain_md_files/integrations/providers/spacy.mdx
ADDED
@@ -0,0 +1,28 @@
# spaCy

>[spaCy](https://spacy.io/) is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.

## Installation and Setup

```bash
pip install spacy
```


## Text Splitter

See a [usage example](/docs/how_to/split_by_token/#spacy).

```python
from langchain_text_splitters import SpacyTextSplitter
```

## Text Embedding Models

See a [usage example](/docs/integrations/text_embedding/spacy_embedding).

```python
from langchain_community.embeddings.spacy_embeddings import SpacyEmbeddings
```
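A minimal sketch of the splitter, assuming the default English pipeline has been downloaded with `python -m spacy download en_core_web_sm`:

```python
from langchain_text_splitters import SpacyTextSplitter

# Splits on sentence boundaries detected by the spaCy pipeline.
splitter = SpacyTextSplitter(chunk_size=1000)
chunks = splitter.split_text("spaCy is fast. It is written in Python and Cython.")
print(chunks)
```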
langchain_md_files/integrations/providers/sparkllm.mdx
ADDED
@@ -0,0 +1,14 @@
# SparkLLM

>[SparkLLM](https://xinghuo.xfyun.cn/spark) is a large-scale cognitive model independently developed by iFLYTEK.
>It has acquired cross-domain knowledge and language understanding by learning from large amounts of text, code, and images.
>It can understand and perform tasks based on natural dialogue.

## SparkLLM LLM Model

See a [usage example](/docs/integrations/llms/sparkllm).

## SparkLLM Chat Model

See a [usage example](/docs/integrations/chat/sparkllm).

## SparkLLM Text Embedding Model

See a [usage example](/docs/integrations/text_embedding/sparkllm).
langchain_md_files/integrations/providers/spreedly.mdx
ADDED
@@ -0,0 +1,15 @@
# Spreedly

>[Spreedly](https://docs.spreedly.com/) is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at `Spreedly`, allowing you to independently store a card and then pass that card to different end points based on your business requirements.

## Installation and Setup

See [setup instructions](/docs/integrations/document_loaders/spreedly).

## Document Loader

See a [usage example](/docs/integrations/document_loaders/spreedly).

```python
from langchain_community.document_loaders import SpreedlyLoader
```
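A minimal sketch of the loader; the token is a placeholder, and `resource` names a Spreedly API endpoint (here, hypothetically, `gateways_options`):

```python
from langchain_community.document_loaders import SpreedlyLoader

# The access token is a placeholder; use your Spreedly API credentials.
loader = SpreedlyLoader(
    access_token="YOUR_SPREEDLY_ACCESS_TOKEN",
    resource="gateways_options",  # which API resource to load
)
docs = loader.load()
```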
langchain_md_files/integrations/providers/sqlite.mdx
ADDED
@@ -0,0 +1,31 @@
# SQLite

>[SQLite](https://en.wikipedia.org/wiki/SQLite) is a database engine written in the
> C programming language. It is not a standalone app; rather, it is a library that
> software developers embed in their apps. As such, it belongs to the family of
> embedded databases. It is the most widely deployed database engine, as it is
> used by several of the top web browsers, operating systems, mobile phones, and other embedded systems.

## Installation and Setup

We need to install the `SQLAlchemy` Python package.

```bash
pip install SQLAlchemy
```

## Vector Store

See a [usage example](/docs/integrations/vectorstores/sqlitevss).

```python
from langchain_community.vectorstores import SQLiteVSS
```

## Memory

See a [usage example](/docs/integrations/memory/sqlite).

```python
from langchain_community.chat_message_histories import SQLChatMessageHistory
```
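A minimal sketch of the chat message history backed by a local SQLite file; the database path and session id are placeholders:

```python
from langchain_community.chat_message_histories import SQLChatMessageHistory

history = SQLChatMessageHistory(
    session_id="demo-session",  # placeholder session identifier
    connection_string="sqlite:///chat_history.db",
)
history.add_user_message("Hello!")
history.add_ai_message("Hi, how can I help?")
print(history.messages)
```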
langchain_md_files/integrations/providers/stackexchange.mdx
ADDED
@@ -0,0 +1,36 @@
# Stack Exchange

>[Stack Exchange](https://en.wikipedia.org/wiki/Stack_Exchange) is a network of
> question-and-answer (Q&A) websites on topics in diverse fields, each site covering
> a specific topic, where questions, answers, and users are subject to a reputation award process.

This page covers how to use the `Stack Exchange API` within LangChain.

## Installation and Setup

- Install requirements with

```bash
pip install stackapi
```

## Wrappers

### Utility

There exists a StackExchangeAPIWrapper utility which wraps this API. To import this utility:

```python
from langchain_community.utilities import StackExchangeAPIWrapper
```

For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/stackexchange).

### Tool

You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:

```python
from langchain.agents import load_tools

tools = load_tools(["stackexchange"])
```

For more information on tools, see [this page](/docs/how_to/tools_builtin).
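As a quick illustration of the utility, a minimal sketch; no API key is required for light use, and Stack Overflow is queried by default:

```python
from langchain_community.utilities import StackExchangeAPIWrapper

# Returns excerpts of matching questions and answers as text.
stackexchange = StackExchangeAPIWrapper()
print(stackexchange.run("zsh: command not found: python"))
```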
langchain_md_files/integrations/providers/starrocks.mdx
ADDED
@@ -0,0 +1,21 @@
# StarRocks

>[StarRocks](https://www.starrocks.io/) is a High-Performance Analytical Database.
>`StarRocks` is a next-gen, sub-second MPP database for full analytics scenarios, including multi-dimensional analytics, real-time analytics, and ad-hoc queries.

>Usually `StarRocks` is categorized into OLAP, and it has shown excellent performance in [ClickBench — a Benchmark For Analytical DBMS](https://benchmark.clickhouse.com/). Since it has a super-fast vectorized execution engine, it can also be used as a fast vectordb.

## Installation and Setup

```bash
pip install pymysql
```

## Vector Store

See a [usage example](/docs/integrations/vectorstores/starrocks).

```python
from langchain_community.vectorstores import StarRocks
```
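A minimal connection sketch, assuming a reachable StarRocks instance; all connection values below are placeholders:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import StarRocks
from langchain_community.vectorstores.starrocks import StarRocksSettings

# Placeholder connection details; point these at your own cluster.
settings = StarRocksSettings(
    host="127.0.0.1",
    port=41003,
    username="root",
    password="",
    database="demo",
    table="langchain_demo",
)
docs = StarRocks.from_texts(
    ["StarRocks has a vectorized execution engine."],
    FakeEmbeddings(size=16),
    config=settings,
)
```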
langchain_md_files/integrations/providers/stochasticai.mdx
ADDED
@@ -0,0 +1,17 @@
# StochasticAI

This page covers how to use the StochasticAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific StochasticAI wrappers.

## Installation and Setup

- Install with `pip install stochasticx`
- Get a StochasticAI API key and set it as an environment variable (`STOCHASTICAI_API_KEY`)

## Wrappers

### LLM

There exists a StochasticAI LLM wrapper, which you can access with
```python
from langchain_community.llms import StochasticAI
```
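A minimal sketch of using the wrapper; the `api_url` must point at a model you have deployed on StochasticAI, and the one below is a placeholder:

```python
from langchain_community.llms import StochasticAI

# Assumes STOCHASTICAI_API_KEY is set; the api_url is a placeholder.
llm = StochasticAI(api_url="https://api.stochastic.ai/v1/modelApi/submit/your-model")
print(llm.invoke("Tell me a joke."))
```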
langchain_md_files/integrations/providers/streamlit.mdx
ADDED
@@ -0,0 +1,30 @@
# Streamlit

>[Streamlit](https://streamlit.io/) is a faster way to build and share data apps.
>`Streamlit` turns data scripts into shareable web apps in minutes. All in pure Python. No front‑end experience required.
>See more examples at [streamlit.io/generative-ai](https://streamlit.io/generative-ai).

## Installation and Setup

We need to install the `streamlit` Python package:

```bash
pip install streamlit
```


## Memory

See a [usage example](/docs/integrations/memory/streamlit_chat_message_history).

```python
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
```

## Callbacks

See a [usage example](/docs/integrations/callbacks/streamlit).

```python
from langchain_community.callbacks import StreamlitCallbackHandler
```
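A minimal sketch of the message history inside a Streamlit app (run with `streamlit run app.py`); the session-state key is an illustrative choice:

```python
import streamlit as st
from langchain_community.chat_message_histories import StreamlitChatMessageHistory

# Messages live in st.session_state under the given key,
# so they survive Streamlit's script re-runs.
history = StreamlitChatMessageHistory(key="chat_messages")
if len(history.messages) == 0:
    history.add_ai_message("How can I help you?")

for msg in history.messages:
    st.chat_message(msg.type).write(msg.content)
```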
langchain_md_files/integrations/providers/stripe.mdx
ADDED
@@ -0,0 +1,16 @@
# Stripe

>[Stripe](https://stripe.com/en-ca) is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.


## Installation and Setup

See [setup instructions](/docs/integrations/document_loaders/stripe).

## Document Loader

See a [usage example](/docs/integrations/document_loaders/stripe).

```python
from langchain_community.document_loaders import StripeLoader
```
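A minimal sketch of the loader; the token is a placeholder, and `resource` names a Stripe API resource (here `charge`, following the integration notebook):

```python
from langchain_community.document_loaders import StripeLoader

# The access token is a placeholder; use your own Stripe API key.
loader = StripeLoader("charge", access_token="sk_test_placeholder")
docs = loader.load()
```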
langchain_md_files/integrations/providers/supabase.mdx
ADDED
@@ -0,0 +1,26 @@
# Supabase (Postgres)

>[Supabase](https://supabase.com/docs) is an open-source `Firebase` alternative.
> `Supabase` is built on top of `PostgreSQL`, which offers strong `SQL`
> querying capabilities and enables a simple interface with already-existing tools and frameworks.

>[PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL), also known as `Postgres`,
> is a free and open-source relational database management system (RDBMS)
> emphasizing extensibility and `SQL` compliance.

## Installation and Setup

We need to install the `supabase` Python package.

```bash
pip install supabase
```

## Vector Store

See a [usage example](/docs/integrations/vectorstores/supabase).

```python
from langchain_community.vectorstores import SupabaseVectorStore
```
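A minimal sketch of wiring the store to a Supabase project; the URL, key, and the `documents`/`match_documents` table and function names follow the integration notebook's setup and are placeholders here:

```python
import os

from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import SupabaseVectorStore
from supabase.client import create_client

# Placeholders: point these at your own Supabase project.
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

vector_store = SupabaseVectorStore(
    client=supabase,
    embedding=FakeEmbeddings(size=1536),  # stand-in embedding model
    table_name="documents",
    query_name="match_documents",  # SQL function created during setup
)
```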
langchain_md_files/integrations/providers/symblai_nebula.mdx
ADDED
@@ -0,0 +1,17 @@
# Nebula

This page covers how to use [Nebula](https://symbl.ai/nebula), [Symbl.ai](https://symbl.ai/)'s LLM, within LangChain.
It is broken into two parts: installation and setup, and then references to specific Nebula wrappers.

## Installation and Setup

- Get a [Nebula API Key](https://info.symbl.ai/Nebula_Private_Beta.html) and set it as the environment variable `NEBULA_API_KEY`
- Please see the [Nebula documentation](https://docs.symbl.ai/docs/nebula-llm) for more details.

### LLM

There exists a Nebula LLM wrapper, which you can access with
```python
from langchain_community.llms import Nebula
llm = Nebula()
```
langchain_md_files/integrations/providers/tair.mdx
ADDED
@@ -0,0 +1,23 @@
# Tair

>[Alibaba Cloud Tair](https://www.alibabacloud.com/help/en/tair/latest/what-is-tair) is a cloud-native in-memory database service
> developed by `Alibaba Cloud`. It provides rich data models and enterprise-grade capabilities to
> support your real-time online scenarios while maintaining full compatibility with open-source `Redis`.
> `Tair` also introduces persistent memory-optimized instances that are based on
> a new non-volatile memory (NVM) storage medium.

## Installation and Setup

Install the Tair Python SDK:

```bash
pip install tair
```

## Vector Store

```python
from langchain_community.vectorstores import Tair
```

See a [usage example](/docs/integrations/vectorstores/tair).
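A minimal sketch; the connection URL is a placeholder and can also be supplied via the `TAIR_URL` environment variable:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import Tair

vector_store = Tair.from_texts(
    ["Tair is compatible with open-source Redis."],
    FakeEmbeddings(size=16),  # stand-in embedding model
    tair_url="redis://username:password@localhost:6379",  # placeholder
)
print(vector_store.similarity_search("Redis", k=1))
```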
langchain_md_files/integrations/providers/telegram.mdx
ADDED
@@ -0,0 +1,25 @@
# Telegram

>[Telegram Messenger](https://web.telegram.org/a/) is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.


## Installation and Setup

See [setup instructions](/docs/integrations/document_loaders/telegram).

## Document Loader

See a [usage example](/docs/integrations/document_loaders/telegram).

```python
from langchain_community.document_loaders import TelegramChatFileLoader
from langchain_community.document_loaders import TelegramChatApiLoader
```

## Chat loader

See a [usage example](/docs/integrations/chat_loaders/telegram).

```python
from langchain_community.chat_loaders.telegram import TelegramChatLoader
```
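A minimal sketch of loading an exported chat; the JSON path is a placeholder, produced by Telegram Desktop's "export chat history" feature:

```python
from langchain_community.document_loaders import TelegramChatFileLoader

loader = TelegramChatFileLoader("telegram_export.json")  # placeholder path
docs = loader.load()
```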
langchain_md_files/integrations/providers/tencent.mdx
ADDED
@@ -0,0 +1,95 @@
# Tencent

>[Tencent Holdings Ltd. (Wikipedia)](https://en.wikipedia.org/wiki/Tencent) (Chinese: 腾讯; pinyin: Téngxùn)
> is a Chinese multinational technology conglomerate and holding company headquartered
> in Shenzhen. `Tencent` is one of the highest-grossing multimedia companies in the
> world based on revenue. It is also the world's largest company in the video game industry
> based on its equity investments.


## Chat model

>[Tencent's hybrid model API](https://cloud.tencent.com/document/product/1729) (`Hunyuan API`)
> implements dialogue communication, content generation,
> analysis and understanding, and can be widely used in various scenarios such as intelligent
> customer service, intelligent marketing, role playing, advertising, copywriting, product description,
> script creation, resume generation, article writing, code generation, data analysis, and content
> analysis.


For more information, see [this notebook](/docs/integrations/chat/tencent_hunyuan).

```python
from langchain_community.chat_models import ChatHunyuan
```


## Document Loaders

### Tencent COS

>[Tencent Cloud Object Storage (COS)](https://www.tencentcloud.com/products/cos) is a distributed
> storage service that enables you to store any amount of data from anywhere via HTTP/HTTPS protocols.
> `COS` has no restrictions on data structure or format. It also has no bucket size limit and
> partition management, making it suitable for virtually any use case, such as data delivery,
> data processing, and data lakes. `COS` provides a web-based console, multi-language SDKs and APIs,
> a command line tool, and graphical tools. It works well with Amazon S3 APIs, allowing you to quickly
> access community tools and plugins.

Install the Python SDK:

```bash
pip install cos-python-sdk-v5
```

#### Tencent COS Directory

For more information, see [this notebook](/docs/integrations/document_loaders/tencent_cos_directory).

```python
from langchain_community.document_loaders import TencentCOSDirectoryLoader
from qcloud_cos import CosConfig
```
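A minimal sketch of wiring up the directory loader; all configuration values below are placeholders:

```python
from langchain_community.document_loaders import TencentCOSDirectoryLoader
from qcloud_cos import CosConfig

# Placeholder credentials, region, and bucket; use your own COS settings.
conf = CosConfig(
    Region="ap-guangzhou",
    SecretId="your-secret-id",
    SecretKey="your-secret-key",
)
loader = TencentCOSDirectoryLoader(conf=conf, bucket="my-bucket", prefix="docs/")
docs = loader.load()
```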

#### Tencent COS File

For more information, see [this notebook](/docs/integrations/document_loaders/tencent_cos_file).

```python
from langchain_community.document_loaders import TencentCOSFileLoader
from qcloud_cos import CosConfig
```

## Vector Store

### Tencent VectorDB

>[Tencent Cloud VectorDB](https://www.tencentcloud.com/products/vdb) is a fully managed,
> self-developed enterprise-level distributed database service
> dedicated to storing, retrieving, and analyzing multidimensional vector data. The database supports a variety of index
> types and similarity calculation methods, and a single index supports 1 billion vectors, millions of QPS, and
> millisecond query latency. `Tencent Cloud Vector Database` can not only provide an external knowledge base for large
> models and improve the accuracy of large models' answers, but also be widely used in AI fields such as
> recommendation systems, NLP services, computer vision, and intelligent customer service.

Install the Python SDK:

```bash
pip install tcvectordb
```

For more information, see [this notebook](/docs/integrations/vectorstores/tencentvectordb).

```python
from langchain_community.vectorstores import TencentVectorDB
```
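A minimal connection sketch, following the pattern in the integration notebook; the URL, key, and timeout values are placeholders:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import TencentVectorDB
from langchain_community.vectorstores.tencentvectordb import ConnectionParams

conn_params = ConnectionParams(
    url="http://10.0.0.1",  # placeholder instance URL
    key="your-api-key",
    username="root",
    timeout=20,
)
vector_db = TencentVectorDB.from_texts(
    ["Tencent Cloud VectorDB supports billion-scale indexes."],
    FakeEmbeddings(size=128),  # stand-in embedding model
    connection_params=conn_params,
)
```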

## Chat loader

### WeChat

>[WeChat](https://www.wechat.com/), or `Weixin` in Chinese, is a Chinese
> instant messaging, social media, and mobile payment app developed by `Tencent`.

See a [usage example](/docs/integrations/chat_loaders/wechat).
langchain_md_files/integrations/providers/tensorflow_datasets.mdx
ADDED
@@ -0,0 +1,31 @@
# TensorFlow Datasets

>[TensorFlow Datasets](https://www.tensorflow.org/datasets) is a collection of datasets ready to use
> with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed
> as [tf.data.Datasets](https://www.tensorflow.org/api_docs/python/tf/data/Dataset),
> enabling easy-to-use and high-performance input pipelines. To get started, see
> the [guide](https://www.tensorflow.org/datasets/overview) and
> the [list of datasets](https://www.tensorflow.org/datasets/catalog/overview#all_datasets).



## Installation and Setup

You need to install the `tensorflow` and `tensorflow-datasets` Python packages.

```bash
pip install tensorflow
```

```bash
pip install tensorflow-datasets
```


## Document Loader

See a [usage example](/docs/integrations/document_loaders/tensorflow_datasets).

```python
from langchain_community.document_loaders import TensorflowDatasetLoader
```
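A minimal sketch of the loader; the dataset name and field names are illustrative (here the `mlqa/en` question-answering dataset), and the converter function maps each raw TFDS sample to a LangChain `Document`:

```python
from langchain_community.document_loaders import TensorflowDatasetLoader
from langchain_core.documents import Document

def sample_to_document(sample) -> Document:
    # Illustrative converter: TFDS yields tensors, so decode them to strings.
    return Document(
        page_content=sample["context"].numpy().decode("utf-8"),
        metadata={"question": sample["question"].numpy().decode("utf-8")},
    )

loader = TensorflowDatasetLoader(
    dataset_name="mlqa/en",
    split_name="test",
    load_max_docs=3,
    sample_to_document_function=sample_to_document,
)
docs = loader.load()
```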
langchain_md_files/integrations/providers/tidb.mdx
ADDED
@@ -0,0 +1,38 @@
# TiDB

> [TiDB Cloud](https://www.pingcap.com/tidb-serverless) is a comprehensive Database-as-a-Service (DBaaS) solution
> that provides dedicated and serverless options. `TiDB Serverless` is now integrating
> a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly
> develop AI applications using `TiDB Serverless` without the need for a new database or additional
> technical stacks. Create a free TiDB Serverless cluster and start using the vector search feature at https://pingcap.com/ai.


## Installation and Setup

You have to get the connection details for the TiDB database.
Visit [TiDB Cloud](https://tidbcloud.com/) to get the connection details.

## Document loader

```python
from langchain_community.document_loaders import TiDBLoader
```

Please refer to the details [here](/docs/integrations/document_loaders/tidb).

## Vector store

```python
from langchain_community.vectorstores import TiDBVectorStore
```

Please refer to the details [here](/docs/integrations/vectorstores/tidb_vector).


## Memory

```python
from langchain_community.chat_message_histories import TiDBChatMessageHistory
```

Please refer to the details [here](/docs/integrations/memory/tidb_chat_message_history).
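A minimal sketch of the vector store; the connection string is a placeholder in the format shown in the TiDB Cloud console:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import TiDBVectorStore

# Placeholder connection string; copy the real one from the TiDB Cloud console.
connection_string = "mysql+pymysql://user:password@gateway.tidbcloud.com:4000/test"

store = TiDBVectorStore.from_texts(
    texts=["TiDB Serverless has built-in vector search."],
    embedding=FakeEmbeddings(size=64),  # stand-in embedding model
    table_name="langchain_vectors",
    connection_string=connection_string,
)
```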
langchain_md_files/integrations/providers/tigergraph.mdx
ADDED
@@ -0,0 +1,25 @@
# TigerGraph

>[TigerGraph](https://www.tigergraph.com/tigergraph-db/) is a natively distributed and high-performance graph database.
> Storing data in a graph format of vertices and edges leads to rich relationships,
> ideal for grounding LLM responses.

## Installation and Setup

Follow the instructions on [how to connect to the `TigerGraph` database](https://docs.tigergraph.com/pytigergraph/current/getting-started/connection).

Install the Python SDK:

```bash
pip install pyTigerGraph
```

## Graph store

### TigerGraph

See a [usage example](/docs/integrations/graphs/tigergraph).

```python
from langchain_community.graphs import TigerGraph
```
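A minimal connection sketch, wrapping a `pyTigerGraph` connection in the LangChain graph store; the host, graph name, and credentials are placeholders:

```python
import pyTigerGraph as tg
from langchain_community.graphs import TigerGraph

# Placeholder connection details; see the pyTigerGraph docs for options.
conn = tg.TigerGraphConnection(
    host="https://your-instance.i.tgcloud.io",
    graphname="DemoGraph",
    username="tigergraph",
    password="password",
)
graph = TigerGraph(conn)
```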
|