omarsol commited on
Commit
7677e00
1 Parent(s): e31b5f3


Files changed (50)
  1. langchain_md_files/contributing/code/setup.mdx +213 -0
  2. langchain_md_files/contributing/documentation/index.mdx +7 -0
  3. langchain_md_files/contributing/documentation/setup.mdx +181 -0
  4. langchain_md_files/contributing/documentation/style_guide.mdx +160 -0
  5. langchain_md_files/contributing/faq.mdx +26 -0
  6. langchain_md_files/contributing/index.mdx +54 -0
  7. langchain_md_files/contributing/integrations.mdx +203 -0
  8. langchain_md_files/contributing/repo_structure.mdx +65 -0
  9. langchain_md_files/contributing/testing.mdx +147 -0
  10. langchain_md_files/how_to/document_loader_json.mdx +402 -0
  11. langchain_md_files/how_to/document_loader_office_file.mdx +35 -0
  12. langchain_md_files/how_to/embed_text.mdx +154 -0
  13. langchain_md_files/how_to/index.mdx +361 -0
  14. langchain_md_files/how_to/installation.mdx +107 -0
  15. langchain_md_files/how_to/toolkits.mdx +21 -0
  16. langchain_md_files/how_to/vectorstores.mdx +178 -0
  17. langchain_md_files/integrations/chat/index.mdx +32 -0
  18. langchain_md_files/integrations/document_loaders/index.mdx +69 -0
  19. langchain_md_files/integrations/graphs/tigergraph.mdx +37 -0
  20. langchain_md_files/integrations/llms/index.mdx +30 -0
  21. langchain_md_files/integrations/llms/layerup_security.mdx +85 -0
  22. langchain_md_files/integrations/platforms/anthropic.mdx +43 -0
  23. langchain_md_files/integrations/platforms/aws.mdx +381 -0
  24. langchain_md_files/integrations/platforms/google.mdx +1079 -0
  25. langchain_md_files/integrations/platforms/huggingface.mdx +126 -0
  26. langchain_md_files/integrations/platforms/microsoft.mdx +561 -0
  27. langchain_md_files/integrations/platforms/openai.mdx +123 -0
  28. langchain_md_files/integrations/providers/acreom.mdx +15 -0
  29. langchain_md_files/integrations/providers/activeloop_deeplake.mdx +38 -0
  30. langchain_md_files/integrations/providers/ai21.mdx +67 -0
  31. langchain_md_files/integrations/providers/ainetwork.mdx +23 -0
  32. langchain_md_files/integrations/providers/airbyte.mdx +32 -0
  33. langchain_md_files/integrations/providers/alchemy.mdx +20 -0
  34. langchain_md_files/integrations/providers/aleph_alpha.mdx +36 -0
  35. langchain_md_files/integrations/providers/alibaba_cloud.mdx +91 -0
  36. langchain_md_files/integrations/providers/analyticdb.mdx +31 -0
  37. langchain_md_files/integrations/providers/annoy.mdx +21 -0
  38. langchain_md_files/integrations/providers/anyscale.mdx +42 -0
  39. langchain_md_files/integrations/providers/apache_doris.mdx +22 -0
  40. langchain_md_files/integrations/providers/apify.mdx +41 -0
  41. langchain_md_files/integrations/providers/arangodb.mdx +25 -0
  42. langchain_md_files/integrations/providers/arcee.mdx +30 -0
  43. langchain_md_files/integrations/providers/arcgis.mdx +27 -0
  44. langchain_md_files/integrations/providers/argilla.mdx +25 -0
  45. langchain_md_files/integrations/providers/arize.mdx +24 -0
  46. langchain_md_files/integrations/providers/arxiv.mdx +36 -0
  47. langchain_md_files/integrations/providers/ascend.mdx +24 -0
  48. langchain_md_files/integrations/providers/asknews.mdx +33 -0
  49. langchain_md_files/integrations/providers/assemblyai.mdx +42 -0
  50. langchain_md_files/integrations/providers/astradb.mdx +150 -0
langchain_md_files/contributing/code/setup.mdx ADDED
@@ -0,0 +1,213 @@
 + # Setup
 +
 + This guide walks through how to run the repository locally and check in your first code.
 + For a [development container](https://containers.dev/), see the [.devcontainer folder](https://github.com/langchain-ai/langchain/tree/master/.devcontainer).
 +
 + ## Dependency Management: Poetry and other env/dependency managers
 +
 + This project utilizes [Poetry](https://python-poetry.org/) v1.7.1+ as a dependency manager.
 +
 + ❗Note: *Before installing Poetry*, if you use `Conda`, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
 +
 + Install Poetry: **[documentation on how to install it](https://python-poetry.org/docs/#installation)**.
 +
 + ❗Note: If you use `Conda` or `Pyenv` as your environment/package manager, after installing Poetry,
 + tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
 +
 + ## Different packages
 +
 + This repository contains multiple packages:
 + - `langchain-core`: Base interfaces for key abstractions as well as logic for combining them in chains (LangChain Expression Language).
 + - `langchain-community`: Third-party integrations of various components.
 + - `langchain`: Chains, agents, and retrieval logic that make up the cognitive architecture of your applications.
 + - `langchain-experimental`: Components and chains that are experimental, either because the techniques are novel and still being tested, or because they require giving the LLM more access than would be possible in most production systems.
 + - Partner integrations: Partner packages in `libs/partners` that are independently version controlled.
 +
 + Each of these has its own development environment. Docs are run from the top-level makefile, but development
 + is split across separate test & release flows.
 +
 + For this quickstart, start with langchain-community:
 +
 + ```bash
 + cd libs/community
 + ```
 +
 + ## Local Development Dependencies
 +
 + Install langchain-community development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
 +
 + ```bash
 + poetry install --with lint,typing,test,test_integration
 + ```
 +
 + Then verify that the dependencies installed correctly:
 +
 + ```bash
 + make test
 + ```
 +
 + If during installation you receive a `WheelFileValidationError` for `debugpy`, please make sure you are running
 + Poetry v1.6.1+. This bug was present in older versions of Poetry (e.g. 1.4.1) and has been resolved in newer releases.
 + If you are still seeing this bug on v1.6.1+, you may also try disabling "modern installation"
 + (`poetry config installer.modern-installation false`) and re-installing requirements.
 + See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
 +
 + ## Testing
 +
 + **Note:** In `langchain`, `langchain-community`, and `langchain-experimental`, some test dependencies are optional. See the following section about optional dependencies.
 +
 + Unit tests cover modular logic that does not require calls to outside APIs.
 + If you add new logic, please add a unit test.
 +
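 + As a sketch of what such a unit test can look like, here is a pytest-style test for a hypothetical pure-Python helper (`truncate_words` is illustrative, not a LangChain API); it exercises logic directly, with no outside API calls:
 +
 + ```python
 + # test_truncate_words.py -- a hypothetical, self-contained unit test.
 + # It exercises pure logic only: no network calls, no optional dependencies.
 +
 +
 + def truncate_words(text: str, max_words: int) -> str:
 +     """Return at most the first ``max_words`` whitespace-separated words."""
 +     return " ".join(text.split()[:max_words])
 +
 +
 + def test_truncate_words() -> None:
 +     assert truncate_words("one two three four", 2) == "one two"
 +     # Inputs shorter than the limit are returned unchanged.
 +     assert truncate_words("short", 10) == "short"
 + ```
 +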
 + To run unit tests:
 +
 + ```bash
 + make test
 + ```
 +
 + To run unit tests in Docker:
 +
 + ```bash
 + make docker_tests
 + ```
 +
 + There are also [integration tests and code-coverage](/docs/contributing/testing/) available.
 +
 + ### Only develop langchain_core or langchain_experimental
 +
 + If you are only developing `langchain_core` or `langchain_experimental`, you can simply install the dependencies for the respective projects and run tests:
 +
 + ```bash
 + cd libs/core
 + poetry install --with test
 + make test
 + ```
 +
 + Or:
 +
 + ```bash
 + cd libs/experimental
 + poetry install --with test
 + make test
 + ```
 +
 + ## Formatting and Linting
 +
 + Run these locally before submitting a PR; the CI system will run them as well.
 +
 + ### Code Formatting
 +
 + Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules/).
 +
 + To run formatting for docs, cookbook and templates:
 +
 + ```bash
 + make format
 + ```
 +
 + To run formatting for a library, run the same command from the relevant library directory:
 +
 + ```bash
 + cd libs/{LIBRARY}
 + make format
 + ```
 +
 + Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the `format_diff` command:
 +
 + ```bash
 + make format_diff
 + ```
 +
 + This is especially useful when you have made changes to a subset of the project and want to ensure your changes are properly formatted without affecting the rest of the codebase.
 +
 + #### Linting
 +
 + Linting for this project is done via a combination of [ruff](https://docs.astral.sh/ruff/rules/) and [mypy](http://mypy-lang.org/).
 +
 + To run linting for docs, cookbook and templates:
 +
 + ```bash
 + make lint
 + ```
 +
 + To run linting for a library, run the same command from the relevant library directory:
 +
 + ```bash
 + cd libs/{LIBRARY}
 + make lint
 + ```
 +
 + In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the `lint_diff` command:
 +
 + ```bash
 + make lint_diff
 + ```
 +
 + This can be very helpful when you've made changes to only certain parts of the project and want to ensure your changes meet the linting standards without having to check the entire codebase.
 +
 + We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
 +
 + ### Spellcheck
 +
 + Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
 + Note that `codespell` only finds common typos, so it can produce false positives (correctly spelled but rarely used words) and false negatives (misspellings it does not catch).
 +
 + To check spelling for this project:
 +
 + ```bash
 + make spell_check
 + ```
 +
 + To fix spelling in place:
 +
 + ```bash
 + make spell_fix
 + ```
 +
 + If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file.
 +
 + ```toml
 + [tool.codespell]
 + ...
 + # Add here:
 + ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogyny,unsecure'
 + ```
 +
 + ## Working with Optional Dependencies
 +
 + `langchain`, `langchain-community`, and `langchain-experimental` rely on optional dependencies to keep these packages lightweight.
 +
 + `langchain-core` and partner packages **do not use** optional dependencies in this way.
 +
 + You'll notice that `pyproject.toml` and `poetry.lock` are **not** touched when you add optional dependencies below.
 +
 + If you're adding a new dependency to Langchain, assume that it will be an optional dependency, and
 + that most users won't have it installed.
 +
 + Users who do not have the dependency installed should be able to **import** your code without
 + any side effects (no warnings, no errors, no exceptions).
 +
 + To introduce the dependency to a library, please do the following:
 +
 + 1. Open `extended_testing_deps.txt` and add the dependency
 + 2. Add a unit test that at the very least attempts to import the new code. Ideally, the unit
 + test makes use of lightweight fixtures to test the logic of the code.
 + 3. Please use the `@pytest.mark.requires(package_name)` decorator for any unit tests that require the dependency.
 +
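 + One common way to keep imports side-effect free is to defer loading the optional package until it is actually used. Here is a minimal sketch of that pattern (the `import_optional` helper and package names are illustrative, not LangChain APIs):
 +
 + ```python
 + import importlib
 + from types import ModuleType
 + from typing import Optional
 +
 +
 + def import_optional(name: str, pip_name: Optional[str] = None) -> ModuleType:
 +     """Import an optional dependency lazily, with a helpful error if absent.
 +
 +     Importing the *calling* module stays side-effect free: the optional
 +     package is only touched when this function is called.
 +     """
 +     try:
 +         return importlib.import_module(name)
 +     except ImportError as e:
 +         raise ImportError(
 +             f"Could not import {name}. "
 +             f"Install it with `pip install {pip_name or name}`."
 +         ) from e
 + ```
 +
 + A caller would invoke something like `import_optional("faiss", "faiss-cpu")` inside the method that needs the package, rather than importing it at module top level.
 +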
 + ## Adding a Jupyter Notebook
 +
 + If you are adding a Jupyter Notebook example, you'll want to install the optional `dev` dependencies.
 +
 + To install dev dependencies:
 +
 + ```bash
 + poetry install --with dev
 + ```
 +
 + Launch a notebook:
 +
 + ```bash
 + poetry run jupyter notebook
 + ```
 +
 + When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
langchain_md_files/contributing/documentation/index.mdx ADDED
@@ -0,0 +1,7 @@
 + # Contribute Documentation
 +
 + Documentation is a vital part of LangChain. We welcome both new documentation for new features and
 + community improvements to our current documentation. Please read the resources below before getting started:
 +
 + - [Documentation style guide](/docs/contributing/documentation/style_guide/)
 + - [Setup](/docs/contributing/documentation/setup/)
langchain_md_files/contributing/documentation/setup.mdx ADDED
@@ -0,0 +1,181 @@
 + ---
 + sidebar_class_name: "hidden"
 + ---
 +
 + # Setup
 +
 + LangChain documentation consists of two components:
 +
 + 1. Main Documentation: Hosted at [python.langchain.com](https://python.langchain.com/),
 + this comprehensive resource serves as the primary user-facing documentation.
 + It covers a wide array of topics, including tutorials, use cases, integrations,
 + and more, offering extensive guidance on building with LangChain.
 + The content for this documentation lives in the `/docs` directory of the monorepo.
 + 2. In-code Documentation: This is documentation of the codebase itself, which is also
 + used to generate the externally facing [API Reference](https://python.langchain.com/v0.2/api_reference/langchain/index.html).
 + The content for the API reference is autogenerated by scanning the docstrings in the codebase. For this reason, we ask that
 + developers document their code well.
 +
 + The `API Reference` is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/)
 + from the code and is hosted by [Read the Docs](https://readthedocs.org/).
 +
 + We appreciate all contributions to the documentation, whether it be fixing a typo or
 + adding a new tutorial or example, and whether it be in the main documentation or the API Reference.
 +
 + Similar to linting, we recognize documentation can be annoying. If you do not want
 + to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
 +
 + ## 📜 Main Documentation
 +
 + The content for the main documentation is located in the `/docs` directory of the monorepo.
 +
 + The documentation is written using a combination of IPython notebooks (`.ipynb` files)
 + and markdown (`.mdx` files). The notebooks are converted to markdown
 + and then built using [Docusaurus 2](https://docusaurus.io/).
 +
 + Feel free to make contributions to the main documentation! 🥰
 +
 + After modifying the documentation:
 +
 + 1. Run the linting and formatting commands (see below) to ensure that the documentation is well-formatted and free of errors.
 + 2. Optionally build the documentation locally to verify that the changes look good.
 + 3. Make a pull request with the changes.
 + 4. You can preview and verify that the changes are what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page. This will take you to a preview of the documentation changes.
 +
 + ## ⚒️ Linting and Building Documentation Locally
 +
 + After writing up the documentation, you may want to lint and build the documentation
 + locally to ensure that it looks good and is free of errors.
 +
 + If you're unable to build it locally, that's okay as well, as you will be able to
 + see a preview of the documentation on the pull request page.
 +
 + From the **monorepo root**, run the following command to install the dependencies:
 +
 + ```bash
 + poetry install --with lint,docs --no-root
 + ```
 +
 + ### Building
 +
 + The code that builds the documentation is located in the `/docs` directory of the monorepo.
 +
 + In the following commands, the prefix `api_` indicates that those are operations for the API Reference.
 +
 + Before building the documentation, it is always a good idea to clean the build directory:
 +
 + ```bash
 + make docs_clean
 + make api_docs_clean
 + ```
 +
 + Next, you can build the documentation as outlined below:
 +
 + ```bash
 + make docs_build
 + make api_docs_build
 + ```
 +
 + :::tip
 +
 + The `make api_docs_build` command takes a long time. If you're making cosmetic changes to the API docs and want to see how they look, use:
 +
 + ```bash
 + make api_docs_quick_preview
 + ```
 +
 + which will just build a small subset of the API reference.
 +
 + :::
 +
 + Finally, run the link checker to ensure all links are valid:
 +
 + ```bash
 + make docs_linkcheck
 + make api_docs_linkcheck
 + ```
 +
 + ### Linting and Formatting
 +
 + The Main Documentation is linted from the **monorepo root**. To lint the main documentation, run the following from there:
 +
 + ```bash
 + make lint
 + ```
 +
 + If you have formatting-related errors, you can fix them automatically with:
 +
 + ```bash
 + make format
 + ```
 +
 + ## ⌨️ In-code Documentation
 +
 + The in-code documentation is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code and is hosted by [Read the Docs](https://readthedocs.org/).
 +
 + For the API reference to be useful, the codebase must be well-documented. This means that all functions, classes, and methods should have a docstring that explains what they do, what the arguments are, and what the return value is. This is a good practice in general, but it is especially important for LangChain because the API reference is the primary resource for developers to understand how to use the codebase.
 +
 + We generally follow the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings) for docstrings.
 +
 + Here is an example of a well-documented function:
 +
 + ```python
 + def my_function(arg1: int, arg2: str) -> float:
 +     """This is a short description of the function. (It should be a single sentence.)
 +
 +     This is a longer description of the function. It should explain what
 +     the function does, what the arguments are, and what the return value is.
 +     It should wrap at 88 characters.
 +
 +     Examples:
 +         This is a section for examples of how to use the function.
 +
 +         .. code-block:: python
 +
 +             my_function(1, "hello")
 +
 +     Args:
 +         arg1: This is a description of arg1. We do not need to specify the type since
 +             it is already specified in the function signature.
 +         arg2: This is a description of arg2.
 +
 +     Returns:
 +         This is a description of the return value.
 +     """
 +     return 3.14
 + ```
 +
 + ### Linting and Formatting
 +
 + The in-code documentation is linted from the directories belonging to the packages
 + being documented.
 +
 + For example, if you're working on the `langchain-community` package, you would change
 + the working directory to the `langchain-community` directory:
 +
 + ```bash
 + cd [root]/libs/community
 + ```
 +
 + Set up a virtual environment for the package if you haven't done so already.
 +
 + Install the dependencies for the package:
 +
 + ```bash
 + poetry install --with lint
 + ```
 +
 + Then you can run the following commands to lint and format the in-code documentation:
 +
 + ```bash
 + make format
 + make lint
 + ```
 +
 + ## Verify Documentation Changes
 +
 + After pushing documentation changes to the repository, you can preview and verify that the changes are
 + what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page.
 + This will take you to a preview of the documentation changes.
 + This preview is created by [Vercel](https://vercel.com/docs/getting-started-with-vercel).
langchain_md_files/contributing/documentation/style_guide.mdx ADDED
@@ -0,0 +1,160 @@
 + ---
 + sidebar_class_name: "hidden"
 + ---
 +
 + # Documentation Style Guide
 +
 + As LangChain continues to grow, the surface area of documentation required to cover it continues to grow too.
 + This page provides guidelines for anyone writing documentation for LangChain, as well as some of our philosophies around
 + organization and structure.
 +
 + ## Philosophy
 +
 + LangChain's documentation follows the [Diataxis framework](https://diataxis.fr).
 + Under this framework, all documentation falls under one of four categories: [Tutorials](/docs/contributing/documentation/style_guide/#tutorials),
 + [How-to guides](/docs/contributing/documentation/style_guide/#how-to-guides),
 + [References](/docs/contributing/documentation/style_guide/#references), and [Explanations](/docs/contributing/documentation/style_guide/#conceptual-guide).
 +
 + ### Tutorials
 +
 + Tutorials are lessons that take the reader through a practical activity. Their purpose is to help the user
 + gain understanding of concepts and how they interact by showing one way to achieve some goal in a hands-on way. They should **avoid** giving
 + multiple permutations of ways to achieve that goal in depth. Instead, a tutorial should guide a new user through a recommended path to accomplishing its goal. While the end result of a tutorial does not necessarily need to
 + be completely production-ready, it should be useful and practically satisfy the goal that you clearly stated in the tutorial's introduction. Information on how to address additional scenarios
 + belongs in how-to guides.
 +
 + To quote the Diataxis website:
 +
 + > A tutorial serves the user’s *acquisition* of skills and knowledge - their study. Its purpose is not to help the user get something done, but to help them learn.
 +
 + In LangChain, these are often higher level guides that show off end-to-end use cases.
 +
 + Some examples include:
 +
 + - [Build a Simple LLM Application with LCEL](/docs/tutorials/llm_chain/)
 + - [Build a Retrieval Augmented Generation (RAG) App](/docs/tutorials/rag/)
 +
 + A good structural rule of thumb is to follow the structure of this [example from Numpy](https://numpy.org/numpy-tutorials/content/tutorial-svd.html).
 +
 + Here are some high-level tips on writing a good tutorial:
 +
 + - Focus on guiding the user to get something done, but keep in mind the end-goal is more to impart principles than to create a perfect production system.
 + - Be specific, not abstract, and follow one path.
 + - No need to go deeply into alternative approaches, but it’s ok to reference them, ideally with a link to an appropriate how-to guide.
 + - Get "a point on the board" as soon as possible - something the user can run that outputs something.
 + - You can iterate and expand afterwards.
 + - Try to frequently checkpoint at given steps where the user can run code and see progress.
 + - Focus on results, not technical explanation.
 + - Crosslink heavily to appropriate conceptual/reference pages.
 + - The first time you mention a LangChain concept, use its full name (e.g. "LangChain Expression Language (LCEL)"), and link to its conceptual/other documentation page.
 + - It's also helpful to add a prerequisite callout that links to any pages with necessary background information.
 + - End with a recap/next steps section summarizing what the tutorial covered and future reading, such as related how-to guides.
 +
 + ### How-to guides
 +
 + A how-to guide, as the name implies, demonstrates how to do something discrete and specific.
 + It should assume that the user is already familiar with underlying concepts, and is trying to solve an immediate problem, but
 + should still give some background or list the scenarios where the information contained within can be relevant.
 + They can and should discuss alternatives if one approach may be better than another in certain cases.
 +
 + To quote the Diataxis website:
 +
 + > A how-to guide serves the work of the already-competent user, whom you can assume to know what they want to do, and to be able to follow your instructions correctly.
 +
 + Some examples include:
 +
 + - [How to: return structured data from a model](/docs/how_to/structured_output/)
 + - [How to: write a custom chat model](/docs/how_to/custom_chat_model/)
 +
 + Here are some high-level tips on writing a good how-to guide:
 +
 + - Clearly explain what you are guiding the user through at the start.
 + - Assume higher intent than a tutorial and show what the user needs to do to get that task done.
 + - Assume familiarity with concepts, but explain why suggested actions are helpful.
 + - Crosslink heavily to conceptual/reference pages.
 + - Discuss alternatives and responses to real-world tradeoffs that may arise when solving a problem.
 + - Use lots of example code.
 + - Prefer full code blocks that the reader can copy and run.
 + - End with a recap/next steps section summarizing what the guide covered and future reading, such as other related how-to guides.
 +
 + ### Conceptual guide
 +
 + LangChain's conceptual guides fall under the **Explanation** quadrant of Diataxis. They should cover LangChain terms and concepts
 + in a more abstract way than how-to guides or tutorials, and should be geared towards curious users interested in
 + gaining a deeper understanding of the framework. Try to avoid excessively large code examples - the goal here is to
 + impart perspective to the user rather than to finish a practical project. These guides should cover **why** things work the way they do.
 +
 + This guide on documentation style is meant to fall under this category.
 +
 + To quote the Diataxis website:
 +
 + > The perspective of explanation is higher and wider than that of the other types. It does not take the user’s eye-level view, as in a how-to guide, or a close-up view of the machinery, like reference material. Its scope in each case is a topic - “an area of knowledge”, that somehow has to be bounded in a reasonable, meaningful way.
 +
 + Some examples include:
 +
 + - [Retrieval conceptual docs](/docs/concepts/#retrieval)
 + - [Chat model conceptual docs](/docs/concepts/#chat-models)
 +
 + Here are some high-level tips on writing a good conceptual guide:
 +
 + - Explain design decisions. Why does concept X exist and why was it designed this way?
 + - Use analogies and reference other concepts and alternatives
 + - Avoid blending in too much reference content
 + - You can and should reference content covered in other guides, but make sure to link to them
 +
 + ### References
 +
 + References contain detailed, low-level information that describes exactly what functionality exists and how to use it.
 + In LangChain, this is mainly our API reference pages, which are populated from docstrings within code.
 + Reference pages are generally not read end-to-end, but are consulted as necessary when a user needs to know
 + how to use something specific.
 +
 + To quote the Diataxis website:
 +
 + > The only purpose of a reference guide is to describe, as succinctly as possible, and in an orderly way. Whereas the content of tutorials and how-to guides are led by needs of the user, reference material is led by the product it describes.
 +
 + Many of the reference pages in LangChain are automatically generated from code,
 + but here are some high-level tips on writing a good docstring:
 +
 + - Be concise
 + - Discuss special cases and deviations from a user's expectations
 + - Go into detail on required inputs and outputs
 + - Light details on when one might use the feature are fine, but in-depth details belong in other sections.
 +
 + Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.
 +
 + ## General guidelines
 +
 + Here are some other guidelines you should think about when writing and organizing documentation.
 +
 + We generally do not merge new tutorials from outside contributors without an acute need.
 + We welcome updates as well as new integration docs, how-tos, and references.
 +
 + ### Avoid duplication
 +
 + Multiple pages that cover the same material in depth are difficult to maintain and cause confusion. There should
 + be only one canonical page (very rarely two) for a given concept or feature. Instead of duplicating material, you should link to other guides.
 +
 + ### Link to other sections
 +
 + Because sections of the docs do not exist in a vacuum, it is important to link to other sections as often as possible
 + to allow a developer to learn more about an unfamiliar topic inline.
 +
 + This includes linking to the API references as well as conceptual sections!
 +
 + ### Be concise
 +
 + In general, take a less-is-more approach. If a section with a good explanation of a concept already exists, you should link to it rather than
 + re-explain it, unless the concept you are documenting presents some new wrinkle.
 +
 + Be concise, including in code samples.
 +
 + ### General style
 +
 + - Use active voice and present tense whenever possible
 + - Use examples and code snippets to illustrate concepts and usage
 + - Use appropriate header levels (`#`, `##`, `###`, etc.) to organize the content hierarchically
 + - Use fewer cells with more code to make copy/paste easier
 + - Use bullet points and numbered lists to break down information into easily digestible chunks
 + - Use tables (especially for **Reference** sections) and diagrams often to present information visually
 + - Include the table of contents for longer documentation pages to help readers navigate the content, but hide it for shorter pages
langchain_md_files/contributing/faq.mdx ADDED
@@ -0,0 +1,26 @@
1
+ ---
2
+ sidebar_position: 6
3
+ sidebar_label: FAQ
4
+ ---
5
+ # Frequently Asked Questions
6
+
7
+ ## Pull Requests (PRs)
8
+
9
+ ### How do I allow maintainers to edit my PR?
10
+
11
+ When you submit a pull request, there may be additional changes
12
+ necessary before merging it. Oftentimes, it is more efficient for the
13
+ maintainers to make these changes themselves before merging, rather than asking you
14
+ to do so in code review.
15
+
16
+ By default, most pull requests will have a
17
+ `✅ Maintainers are allowed to edit this pull request.`
18
+ badge in the right-hand sidebar.
19
+
20
+ If you do not see this badge, you may have this setting off for the fork you are
21
+ pull-requesting from. See [this Github docs page](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
22
+ for more information.
23
+
24
+ Notably, GitHub doesn't allow this setting to be enabled for forks in **organizations** ([issue](https://github.com/orgs/community/discussions/5634)).
25
+ If you are working in an organization, we recommend submitting your PR from a personal
26
+ fork in order to enable this setting.
langchain_md_files/contributing/index.mdx ADDED
@@ -0,0 +1,54 @@
1
+ ---
2
+ sidebar_position: 0
3
+ ---
4
+ # Welcome Contributors
5
+
6
+ Hi there! Thank you for even being interested in contributing to LangChain.
7
+ As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
8
+
9
+ ## 🗺️ Guidelines
10
+
11
+ ### 👩‍💻 Ways to contribute
12
+
13
+ There are many ways to contribute to LangChain. Here are some common ways people contribute:
14
+
15
+ - [**Documentation**](/docs/contributing/documentation/): Help improve our docs, including this one!
16
+ - [**Code**](/docs/contributing/code/): Help us write code, fix bugs, or improve our infrastructure.
17
+ - [**Integrations**](integrations.mdx): Help us integrate with your favorite vendors and tools.
18
+ - [**Discussions**](https://github.com/langchain-ai/langchain/discussions): Help answer usage questions and discuss issues with users.
19
+
20
+ ### 🚩 GitHub Issues
21
+
22
+ Our [issues](https://github.com/langchain-ai/langchain/issues) page is kept up to date with bugs, improvements, and feature requests.
23
+
24
+ There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help organize issues.
25
+
26
+ If you start working on an issue, please assign it to yourself.
27
+
28
+ If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature.
29
+ If two issues are related, or blocking, please link them rather than combining them.
30
+
31
+ We will try to keep these issues as up-to-date as possible, though
32
+ with the rapid rate of development in this field some may get out of date.
33
+ If you notice this happening, please let us know.
34
+
35
+ ### 💭 GitHub Discussions
36
+
37
+ We have a [discussions](https://github.com/langchain-ai/langchain/discussions) page where users can ask usage questions, discuss design decisions, and propose new features.
38
+
39
+ If you are able to help answer questions, please do so! This will allow the maintainers to spend more time focused on development and bug fixing.
40
+
41
+ ### 🙋 Getting Help
42
+
43
+ Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting set up, please
44
+ contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is
45
+ smooth for future contributors.
46
+
47
+ In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase.
48
+ If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
49
+ we do not want these to get in the way of getting good code into the codebase.
50
+
51
+ ### 🌟 Recognition
52
+
53
+ If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
54
+ If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.
langchain_md_files/contributing/integrations.mdx ADDED
@@ -0,0 +1,203 @@
1
+ ---
2
+ sidebar_position: 5
3
+ ---
4
+
5
+ # Contribute Integrations
6
+
7
+ To begin, make sure you have all the dependencies outlined in the guide on [Contributing Code](/docs/contributing/code/).
8
+
9
+ There are a few different places you can contribute integrations for LangChain:
10
+
11
+ - **Community**: For lighter-weight integrations that are primarily maintained by LangChain and the Open Source Community.
12
+ - **Partner Packages**: For independent packages that are co-maintained by LangChain and a partner.
13
+
14
+ For the most part, **new integrations should be added to the Community package**. Partner packages require more maintenance as separate packages, so please confirm with the LangChain team before creating a new partner package.
15
+
16
+ In the following sections, we'll walk through how to contribute to each of these packages from a fake company, `Parrot Link AI`.
17
+
18
+ ## Community package
19
+
20
+ The `langchain-community` package is in `libs/community` and contains most integrations.
21
+
22
+ It can be installed with `pip install langchain-community`, and exported members can be imported with code like
23
+
24
+ ```python
25
+ from langchain_community.chat_models import ChatParrotLink
26
+ from langchain_community.llms import ParrotLinkLLM
27
+ from langchain_community.vectorstores import ParrotLinkVectorStore
28
+ ```
29
+
30
+ The `community` package relies on manually-installed dependent packages, so you will see errors
31
+ if you try to import a package that is not installed. In our fake example, if you try to import `ParrotLinkLLM` without installing `parrot-link-sdk`, you will see an `ImportError` telling you to install it when you try to use it.
32
+
33
+ Let's say we wanted to implement a chat model for Parrot Link AI. We would create a new file in `libs/community/langchain_community/chat_models/parrot_link.py` with the following code:
34
+
35
+ ```python
36
+ from langchain_core.language_models.chat_models import BaseChatModel
37
+
38
+ class ChatParrotLink(BaseChatModel):
+     """ChatParrotLink chat model.
+
+     Example:
+         .. code-block:: python
+
+             from langchain_community.chat_models import ChatParrotLink
+
+             model = ChatParrotLink()
+     """
+
+     ...
50
+ ```
51
+
52
+ And we would write tests in:
53
+
54
+ - Unit tests: `libs/community/tests/unit_tests/chat_models/test_parrot_link.py`
55
+ - Integration tests: `libs/community/tests/integration_tests/chat_models/test_parrot_link.py`
56
+
57
+ And add documentation to:
58
+
59
+ - `docs/docs/integrations/chat/parrot_link.ipynb`
60
+
61
+ ## Partner package in LangChain repo
62
+
63
+ :::caution
64
+ Before starting a **partner** package, please confirm your intent with the LangChain team. Partner packages require more maintenance as separate packages, so we will close PRs that add new partner packages without prior discussion. See the above section for how to add a community integration.
65
+ :::
66
+
67
+ Partner packages can be hosted in the `LangChain` monorepo or in an external repo.
68
+
69
+ A partner package in the `LangChain` repo is placed in `libs/partners/{partner}`
70
+ and the package source code is in `libs/partners/{partner}/langchain_{partner}`.
71
+
72
+ A package is
73
+ installed by users with `pip install langchain-{partner}`, and the package members
74
+ can be imported with code like:
75
+
76
+ ```python
77
+ from langchain_{partner} import X
78
+ ```
79
+
80
+ ### Set up a new package
81
+
82
+ To set up a new partner package, use the latest version of the LangChain CLI. You can install or update it with:
83
+
84
+ ```bash
85
+ pip install -U langchain-cli
86
+ ```
87
+
88
+ Let's say you want to create a new partner package working for a company called Parrot Link AI.
89
+
90
+ Then, run the following command to create a new partner package:
91
+
92
+ ```bash
93
+ cd libs/partners
94
+ langchain-cli integration new
95
+ > Name: parrot-link
96
+ > Name of integration in PascalCase [ParrotLink]: ParrotLink
97
+ ```
98
+
99
+ This will create a new package in `libs/partners/parrot-link` with the following structure:
100
+
101
+ ```
102
+ libs/partners/parrot-link/
103
+ langchain_parrot_link/ # folder containing your package
104
+ ...
105
+ tests/
106
+ ...
107
+ docs/ # bootstrapped docs notebooks, must be moved to /docs in monorepo root
108
+ ...
109
+ scripts/ # scripts for CI
110
+ ...
111
+ LICENSE
112
+ README.md # fill out with information about your package
113
+ Makefile # default commands for CI
114
+ pyproject.toml # package metadata, mostly managed by Poetry
115
+ poetry.lock # package lockfile, managed by Poetry
116
+ .gitignore
117
+ ```
118
+
119
+ ### Implement your package
120
+
121
+ First, add any dependencies your package needs, such as your company's SDK:
122
+
123
+ ```bash
124
+ poetry add parrot-link-sdk
125
+ ```
126
+
127
+ If you need separate dependencies for type checking, you can add them to the `typing` group with:
128
+
129
+ ```bash
130
+ poetry add --group typing types-parrot-link-sdk
131
+ ```
132
+
133
+ Then, implement your package in `libs/partners/parrot-link/langchain_parrot_link`.
134
+
135
+ By default, this will include stubs for a Chat Model, an LLM, and/or a Vector Store. You should delete any of the files you won't use and remove them from `__init__.py`.
136
+
137
+ ### Write Unit and Integration Tests
138
+
139
+ Some basic tests are presented in the `tests/` directory. You should add more tests to cover your package's functionality.
140
+
141
+ For information on running and implementing tests, see the [Testing guide](/docs/contributing/testing/).
142
+
143
+ ### Write documentation
144
+
145
+ Documentation is generated from Jupyter notebooks in the `docs/` directory. You should place the notebooks with examples
146
+ in the relevant `docs/docs/integrations` directory in the monorepo root.
147
+
148
+ ### (If Necessary) Deprecate community integration
149
+
150
+ Note: this is only necessary if you're migrating an existing community integration into
151
+ a partner package. If the component you're integrating is net-new to LangChain (i.e.
152
+ not already in the `community` package), you can skip this step.
153
+
154
+ Let's pretend we migrated our `ChatParrotLink` chat model from the community package to
155
+ the partner package. We would need to deprecate the old model in the community package.
156
+
157
+ We would do that by adding a `@deprecated` decorator to the old model as follows, in
158
+ `libs/community/langchain_community/chat_models/parrot_link.py`.
159
+
160
+ Before our change, our chat model might look like this:
161
+
162
+ ```python
163
+ class ChatParrotLink(BaseChatModel):
164
+ ...
165
+ ```
166
+
167
+ After our change, it would look like this:
168
+
169
+ ```python
170
+ from langchain_core._api.deprecation import deprecated
171
+
172
+ @deprecated(
173
+ since="0.0.<next community version>",
174
+ removal="0.2.0",
175
+ alternative_import="langchain_parrot_link.ChatParrotLink"
176
+ )
177
+ class ChatParrotLink(BaseChatModel):
178
+ ...
179
+ ```
180
+
181
+ You should do this for *each* component that you're migrating to the partner package.
182
+
183
+ ### Additional steps
184
+
185
+ Contributor steps:
186
+
187
+ - [ ] Add secret names to manual integrations workflow in `.github/workflows/_integration_test.yml`
188
+ - [ ] Add secrets to release workflow (for pre-release testing) in `.github/workflows/_release.yml`
189
+
190
+ Maintainer steps (Contributors should **not** do these):
191
+
192
+ - [ ] set up PyPI and Test PyPI projects
193
+ - [ ] add credential secrets to GitHub Actions
194
+ - [ ] add package to conda-forge
195
+
196
+ ## Partner package in external repo
197
+
198
+ Partner packages in external repos must be coordinated between the LangChain team and
199
+ the partner organization to ensure that they are maintained and updated.
200
+
201
+ If you're interested in creating a partner package in an external repo, please start
202
+ with one in the LangChain repo, and then reach out to the LangChain team to discuss
203
+ how to move it to an external repo.
langchain_md_files/contributing/repo_structure.mdx ADDED
@@ -0,0 +1,65 @@
1
+ ---
2
+ sidebar_position: 0.5
3
+ ---
4
+ # Repository Structure
5
+
6
+ If you plan on contributing to LangChain code or documentation, it can be useful
7
+ to understand the high level structure of the repository.
8
+
9
+ LangChain is organized as a [monorepo](https://en.wikipedia.org/wiki/Monorepo) that contains multiple packages.
10
+ You can check out our [installation guide](/docs/how_to/installation/) for more on how they fit together.
11
+
12
+ Here's the structure visualized as a tree:
13
+
14
+ ```text
15
+ .
16
+ ├── cookbook # Tutorials and examples
17
+ ├── docs # Contains content for the documentation here: https://python.langchain.com/
18
+ ├── libs
19
+ │ ├── langchain
20
+ │ │ ├── langchain
21
+ │ │ ├── tests/unit_tests # Unit tests (present in each package; not shown for brevity)
22
+ │ │ ├── tests/integration_tests # Integration tests (present in each package; not shown for brevity)
23
+ │ ├── community # Third-party integrations
24
+ │ │ ├── langchain-community
25
+ │ ├── core # Base interfaces for key abstractions
26
+ │ │ ├── langchain-core
27
+ │ ├── experimental # Experimental components and chains
28
+ │ │ ├── langchain-experimental
29
+ | ├── cli # Command line interface
30
+ │ │ ├── langchain-cli
31
+ │ ├── text-splitters
32
+ │ │ ├── langchain-text-splitters
33
+ │ ├── standard-tests
34
+ │ │ ├── langchain-standard-tests
35
+ │ ├── partners
36
+ │ ├── langchain-partner-1
37
+ │ ├── langchain-partner-2
38
+ │ ├── ...
39
+
40
+ ├── templates # A collection of easily deployable reference architectures for a wide variety of tasks.
41
+ ```
42
+
43
+ The root directory also contains the following files:
44
+
45
+ * `pyproject.toml`: Dependencies for building and linting the docs and cookbook.
46
+ * `Makefile`: A file that contains shortcuts for building and linting the docs and cookbook.
47
+
48
+ There are other files in the root directory level, but their presence should be self-explanatory. Feel free to browse around!
49
+
50
+ ## Documentation
51
+
52
+ The `/docs` directory contains the content for the documentation that is shown
53
+ at https://python.langchain.com/ and the associated API Reference https://python.langchain.com/v0.2/api_reference/langchain/index.html.
54
+
55
+ See the [documentation](/docs/contributing/documentation/) guidelines to learn how to contribute to the documentation.
56
+
57
+ ## Code
58
+
59
+ The `/libs` directory contains the code for the LangChain packages.
60
+
61
+ To learn more about how to contribute code see the following guidelines:
62
+
63
+ - [Code](/docs/contributing/code/): Learn how to develop in the LangChain codebase.
64
+ - [Integrations](./integrations.mdx): Learn how to contribute to third-party integrations to `langchain-community` or to start a new partner package.
65
+ - [Testing](./testing.mdx): Guidelines to learn how to write tests for the packages.
langchain_md_files/contributing/testing.mdx ADDED
@@ -0,0 +1,147 @@
1
+ ---
2
+ sidebar_position: 6
3
+ ---
4
+
5
+ # Testing
6
+
7
+ All of our packages have unit tests and integration tests, and we favor unit tests over integration tests.
8
+
9
+ Unit tests run on every pull request, so they should be fast and reliable.
10
+
11
+ Integration tests run once a day, and they require more setup, so they should be reserved for confirming interface points with external services.
12
+
13
+ ## Unit Tests
14
+
15
+ Unit tests cover modular logic that does not require calls to outside APIs.
16
+ If you add new logic, please add a unit test.
17
+
18
+ To install dependencies for unit tests:
19
+
20
+ ```bash
21
+ poetry install --with test
22
+ ```
23
+
24
+ To run unit tests:
25
+
26
+ ```bash
27
+ make test
28
+ ```
29
+
30
+ To run unit tests in Docker:
31
+
32
+ ```bash
33
+ make docker_tests
34
+ ```
35
+
36
+ To run a specific test:
37
+
38
+ ```bash
39
+ TEST_FILE=tests/unit_tests/test_imports.py make test
40
+ ```
41
+
42
+ ## Integration Tests
43
+
44
+ Integration tests cover logic that requires making calls to outside APIs (often integration with other services).
45
+ If you add support for a new external API, please add a new integration test.
46
+
47
+ **Warning:** Almost no tests should be integration tests.
48
+
49
+ Tests that require making network connections make it difficult for other
50
+ developers to test the code.
51
+
52
+ Instead, favor relying on the `responses` library and/or `mock.patch` to mock
53
+ requests using small fixtures.
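For example, a unit test can exercise code that normally hits the network by patching the HTTP call. A stdlib-only sketch (`fetch_completion` and the URL are made up for illustration):

```python
from unittest.mock import patch


def fetch_completion(prompt: str) -> str:
    """Stand-in for production code that calls an external API."""
    import urllib.request

    with urllib.request.urlopen("https://api.example.com/complete") as resp:
        return resp.read().decode()


def test_fetch_completion_without_network():
    # Patch urlopen so the test never opens a real connection.
    with patch("urllib.request.urlopen") as mock_urlopen:
        cm = mock_urlopen.return_value  # object returned by urlopen(...)
        cm.__enter__.return_value.read.return_value = b"Polly wants a cracker"
        assert fetch_completion("hi") == "Polly wants a cracker"
```

The test runs instantly and deterministically, with the fake response acting as a small fixture.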
54
+
55
+ To install dependencies for integration tests:
56
+
57
+ ```bash
58
+ poetry install --with test,test_integration
59
+ ```
60
+
61
+ To run integration tests:
62
+
63
+ ```bash
64
+ make integration_tests
65
+ ```
66
+
67
+ ### Prepare
68
+
69
+ The integration tests use several search engines and databases. The tests
70
+ aim to verify the correct behavior of the engines and databases according to
71
+ their specifications and requirements.
72
+
73
+ To run some integration tests, such as tests located in
74
+ `tests/integration_tests/vectorstores/`, you will need to install the following
75
+ software:
76
+
77
+ - Docker
78
+ - Python 3.8.1 or later
79
+
80
+ Any new dependencies should be added by running:
81
+
82
+ ```bash
83
+ # add package and install it after adding:
84
+ poetry add tiktoken@latest --group "test_integration" && poetry install --with test_integration
85
+ ```
86
+
87
+ Before running any tests, you should start a specific Docker container that has all the
88
+ necessary dependencies installed. For instance, we use the `elasticsearch.yml` compose file
89
+ for `test_elasticsearch.py`:
90
+
91
+ ```bash
92
+ cd tests/integration_tests/vectorstores/docker-compose
93
+ docker-compose -f elasticsearch.yml up
94
+ ```
95
+
96
+ For environments that require more involved preparation, look for `*.sh` scripts. For instance,
97
+ `opensearch.sh` builds the required Docker image and then launches OpenSearch.
98
+
99
+
100
+ ### Prepare environment variables for local testing:
101
+
102
+ - copy `tests/integration_tests/.env.example` to `tests/integration_tests/.env`
103
+ - set variables in the `tests/integration_tests/.env` file, e.g. `OPENAI_API_KEY`
104
+
105
+ Additionally, it's important to note that some integration tests may require certain
106
+ environment variables to be set, such as `OPENAI_API_KEY`. Be sure to set any required
107
+ environment variables before running the tests to ensure they run correctly.
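If you want to see what loading such a file amounts to, here is a minimal, dependency-free sketch (real setups often use a dedicated loader such as `python-dotenv`; `PARROT_LINK_API_KEY` is just an illustrative key):

```python
import os
import tempfile


def load_env_file(path: str) -> None:
    """Set KEY=VALUE pairs from a dotenv-style file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and anything that is not KEY=VALUE.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())


# Demo with a throwaway file standing in for tests/integration_tests/.env
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# integration test secrets\nPARROT_LINK_API_KEY=fake-key-123\n")
    env_path = f.name

load_env_file(env_path)
print(os.environ["PARROT_LINK_API_KEY"])  # fake-key-123
```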
108
+
109
+ ### Recording HTTP interactions with pytest-vcr
110
+
111
+ Some of the integration tests in this repository involve making HTTP requests to
112
+ external services. To prevent these requests from being made every time the tests are
113
+ run, we use pytest-vcr to record and replay HTTP interactions.
114
+
115
+ When running tests in a CI/CD pipeline, you may not want to modify the existing
116
+ cassettes. You can use the `--vcr-record=none` command-line option to disable recording
117
+ new cassettes. Here's an example:
118
+
119
+ ```bash
120
+ pytest --log-cli-level=10 tests/integration_tests/vectorstores/test_pinecone.py --vcr-record=none
121
+ pytest tests/integration_tests/vectorstores/test_elasticsearch.py --vcr-record=none
122
+
123
+ ```
124
+
125
+ ### Run some tests with coverage:
126
+
127
+ ```bash
128
+ pytest tests/integration_tests/vectorstores/test_elasticsearch.py --cov=langchain --cov-report=html
129
+ start "" htmlcov/index.html || open htmlcov/index.html
130
+
131
+ ```
132
+
133
+ ## Coverage
134
+
135
+ Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.
136
+
137
+ Coverage requires the dependencies for integration tests:
138
+
139
+ ```bash
140
+ poetry install --with test_integration
141
+ ```
142
+
143
+ To get a report of current coverage, run the following:
144
+
145
+ ```bash
146
+ make coverage
147
+ ```
langchain_md_files/how_to/document_loader_json.mdx ADDED
@@ -0,0 +1,402 @@
1
+ # How to load JSON
2
+
3
+ [JSON (JavaScript Object Notation)](https://en.wikipedia.org/wiki/JSON) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
4
+
5
+ [JSON Lines](https://jsonlines.org/) is a file format where each line is a valid JSON value.
6
+
7
+ LangChain implements a [JSONLoader](https://python.langchain.com/v0.2/api_reference/community/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html)
8
+ to convert JSON and JSONL data into LangChain [Document](https://python.langchain.com/v0.2/api_reference/core/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document)
9
+ objects. It uses a specified [jq schema](https://en.wikipedia.org/wiki/Jq_(programming_language)) to parse the JSON files, allowing for the extraction of specific fields into the content
10
+ and metadata of the LangChain Document.
11
+
12
+ It uses the `jq` Python package. Check out this [manual](https://stedolan.github.io/jq/manual/#Basicfilters) for detailed documentation of the `jq` syntax.
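If you're new to `jq`, it helps to see what a filter does in plain Python terms. A filter like `.messages[].content` selects the `content` field of every element of the top-level `messages` array; roughly (using an inline sample rather than the example file):

```python
import json

raw = '{"messages": [{"content": "Bye!"}, {"content": "Oh no worries! Bye"}]}'
parsed = json.loads(raw)

# Equivalent of the jq filter `.messages[].content`:
contents = [message["content"] for message in parsed["messages"]]
print(contents)  # ['Bye!', 'Oh no worries! Bye']
```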
13
+
14
+ Here we will demonstrate:
15
+
16
+ - How to load JSON and JSONL data into the content of a LangChain `Document`;
17
+ - How to load JSON and JSONL data into metadata associated with a `Document`.
18
+
19
+
20
+ ```python
21
+ #!pip install jq
22
+ ```
23
+
24
+
25
+ ```python
26
+ from langchain_community.document_loaders import JSONLoader
27
+ ```
28
+
29
+
30
+ ```python
31
+ import json
32
+ from pathlib import Path
33
+ from pprint import pprint
34
+
35
+
36
+ file_path='./example_data/facebook_chat.json'
37
+ data = json.loads(Path(file_path).read_text())
38
+ ```
39
+
40
+
41
+ ```python
42
+ pprint(data)
43
+ ```
44
+
45
+ <CodeOutputBlock lang="python">
46
+
47
+ ```
48
+ {'image': {'creation_timestamp': 1675549016, 'uri': 'image_of_the_chat.jpg'},
49
+ 'is_still_participant': True,
50
+ 'joinable_mode': {'link': '', 'mode': 1},
51
+ 'magic_words': [],
52
+ 'messages': [{'content': 'Bye!',
53
+ 'sender_name': 'User 2',
54
+ 'timestamp_ms': 1675597571851},
55
+ {'content': 'Oh no worries! Bye',
56
+ 'sender_name': 'User 1',
57
+ 'timestamp_ms': 1675597435669},
58
+ {'content': 'No Im sorry it was my mistake, the blue one is not '
59
+ 'for sale',
60
+ 'sender_name': 'User 2',
61
+ 'timestamp_ms': 1675596277579},
62
+ {'content': 'I thought you were selling the blue one!',
63
+ 'sender_name': 'User 1',
64
+ 'timestamp_ms': 1675595140251},
65
+ {'content': 'Im not interested in this bag. Im interested in the '
66
+ 'blue one!',
67
+ 'sender_name': 'User 1',
68
+ 'timestamp_ms': 1675595109305},
69
+ {'content': 'Here is $129',
70
+ 'sender_name': 'User 2',
71
+ 'timestamp_ms': 1675595068468},
72
+ {'photos': [{'creation_timestamp': 1675595059,
73
+ 'uri': 'url_of_some_picture.jpg'}],
74
+ 'sender_name': 'User 2',
75
+ 'timestamp_ms': 1675595060730},
76
+ {'content': 'Online is at least $100',
77
+ 'sender_name': 'User 2',
78
+ 'timestamp_ms': 1675595045152},
79
+ {'content': 'How much do you want?',
80
+ 'sender_name': 'User 1',
81
+ 'timestamp_ms': 1675594799696},
82
+ {'content': 'Goodmorning! $50 is too low.',
83
+ 'sender_name': 'User 2',
84
+ 'timestamp_ms': 1675577876645},
85
+ {'content': 'Hi! Im interested in your bag. Im offering $50. Let '
86
+ 'me know if you are interested. Thanks!',
87
+ 'sender_name': 'User 1',
88
+ 'timestamp_ms': 1675549022673}],
89
+ 'participants': [{'name': 'User 1'}, {'name': 'User 2'}],
90
+ 'thread_path': 'inbox/User 1 and User 2 chat',
91
+ 'title': 'User 1 and User 2 chat'}
92
+ ```
93
+
94
+ </CodeOutputBlock>
95
+
96
+
97
+ ## Using `JSONLoader`
98
+
99
+ Suppose we are interested in extracting the values under the `content` field within the `messages` key of the JSON data. This can easily be done through the `JSONLoader` as shown below.
100
+
101
+
102
+ ### JSON file
103
+
104
+ ```python
105
+ loader = JSONLoader(
106
+     file_path='./example_data/facebook_chat.json',
+     jq_schema='.messages[].content',
+     text_content=False)
109
+
110
+ data = loader.load()
111
+ ```
112
+
113
+
114
+ ```python
115
+ pprint(data)
116
+ ```
117
+
118
+ <CodeOutputBlock lang="python">
119
+
120
+ ```
121
+ [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1}),
122
+ Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2}),
123
+ Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3}),
124
+ Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4}),
125
+ Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5}),
126
+ Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6}),
127
+ Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7}),
128
+ Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8}),
129
+ Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9}),
130
+ Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10}),
131
+ Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11})]
132
+ ```
133
+
134
+ </CodeOutputBlock>
135
+
136
+
137
+ ### JSON Lines file
138
+
139
+ If you want to load documents from a JSON Lines file, you pass `json_lines=True`
140
+ and specify `jq_schema` to extract `page_content` from a single JSON object.
141
+
142
+ ```python
143
+ file_path = './example_data/facebook_chat_messages.jsonl'
144
+ pprint(Path(file_path).read_text())
145
+ ```
146
+
147
+ <CodeOutputBlock lang="python">
148
+
149
+ ```
150
+ ('{"sender_name": "User 2", "timestamp_ms": 1675597571851, "content": "Bye!"}\n'
151
+ '{"sender_name": "User 1", "timestamp_ms": 1675597435669, "content": "Oh no '
152
+ 'worries! Bye"}\n'
153
+ '{"sender_name": "User 2", "timestamp_ms": 1675596277579, "content": "No Im '
154
+ 'sorry it was my mistake, the blue one is not for sale"}\n')
155
+ ```
156
+
157
+ </CodeOutputBlock>
158
+
159
+
160
+ ```python
161
+ loader = JSONLoader(
162
+     file_path='./example_data/facebook_chat_messages.jsonl',
+     jq_schema='.content',
+     text_content=False,
+     json_lines=True)
166
+
167
+ data = loader.load()
168
+ ```
169
+
170
+ ```python
171
+ pprint(data)
172
+ ```
173
+
174
+ <CodeOutputBlock lang="python">
175
+
176
+ ```
177
+ [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}),
178
+ Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}),
179
+ Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})]
180
+ ```
181
+
182
+ </CodeOutputBlock>
183
+
184
+
185
+ Another option is to set `jq_schema='.'` and provide `content_key`:
186
+
187
+ ```python
188
+ loader = JSONLoader(
189
+     file_path='./example_data/facebook_chat_messages.jsonl',
+     jq_schema='.',
+     content_key='sender_name',
+     json_lines=True)
193
+
194
+ data = loader.load()
195
+ ```
196
+
197
+ ```python
198
+ pprint(data)
199
+ ```
200
+
201
+ <CodeOutputBlock lang="python">
202
+
203
+ ```
204
+ [Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}),
205
+ Document(page_content='User 1', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}),
206
+ Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})]
207
+ ```
208
+
209
+ </CodeOutputBlock>
210
+
211
+ ### JSON file with jq schema `content_key`
212
+
213
+ To load documents from a JSON file using the `content_key` within the jq schema, set `is_content_key_jq_parsable=True`.
214
+ Ensure that `content_key` is compatible and can be parsed using the jq schema.
215
+
216
+ ```python
217
+ file_path = './sample.json'
218
+ pprint(Path(file_path).read_text())
219
+ ```
220
+
221
+ <CodeOutputBlock lang="python">
222
+
223
+ ```json
224
+ {"data": [
225
+ {"attributes": {
226
+ "message": "message1",
227
+ "tags": [
228
+ "tag1"]},
229
+ "id": "1"},
230
+ {"attributes": {
231
+ "message": "message2",
232
+ "tags": [
233
+ "tag2"]},
234
+ "id": "2"}]}
235
+ ```
236
+
237
+ </CodeOutputBlock>
238
+
239
+
240
+ ```python
241
+ loader = JSONLoader(
242
+     file_path=file_path,
+     jq_schema=".data[]",
+     content_key=".attributes.message",
+     is_content_key_jq_parsable=True,
+ )
247
+
248
+ data = loader.load()
249
+ ```
250
+
251
+ ```python
252
+ pprint(data)
253
+ ```
254
+
255
+ <CodeOutputBlock lang="python">
256
+
257
+ ```
258
+ [Document(page_content='message1', metadata={'source': '/path/to/sample.json', 'seq_num': 1}),
259
+ Document(page_content='message2', metadata={'source': '/path/to/sample.json', 'seq_num': 2})]
260
+ ```
261
+
262
+ </CodeOutputBlock>
263
+
264
+ ## Extracting metadata
265
+
266
+ Generally, we want to include metadata available in the JSON file in the documents we create from the content.
267
+
268
+ The following demonstrates how metadata can be extracted using the `JSONLoader`.
269
+
270
+ There are some key changes to note. In the previous example, where we didn't collect metadata, we could specify directly in the schema where the value for `page_content` should be extracted from:
271
+
272
+ ```
273
+ .messages[].content
274
+ ```
275
+
276
+ In the current example, we have to tell the loader to iterate over the records in the `messages` field, so the `jq_schema` has to be:
277
+
278
+ ```
279
+ .messages[]
280
+ ```
281
+
282
+ This allows us to pass each record (a dict) into the `metadata_func` that we have to implement. The `metadata_func` is responsible for identifying which pieces of information in the record should be included in the metadata stored in the final `Document` object.
283
+
284
+ Additionally, we now have to explicitly specify, via the loader's `content_key` argument, which key in the record holds the value for `page_content`.
285
+
286
+
287
+ ```python
288
+ # Define the metadata extraction function.
289
+ def metadata_func(record: dict, metadata: dict) -> dict:
290
+
291
+ metadata["sender_name"] = record.get("sender_name")
292
+ metadata["timestamp_ms"] = record.get("timestamp_ms")
293
+
294
+ return metadata
295
+
296
+
297
+ loader = JSONLoader(
298
+ file_path='./example_data/facebook_chat.json',
299
+ jq_schema='.messages[]',
300
+ content_key="content",
301
+ metadata_func=metadata_func
302
+ )
303
+
304
+ data = loader.load()
305
+ ```
306
+
307
+
308
+ ```python
309
+ pprint(data)
310
+ ```
311
+
312
+ <CodeOutputBlock lang="python">
313
+
314
+ ```
315
+ [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}),
316
+ Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}),
317
+ Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}),
318
+ Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}),
319
+ Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}),
320
+ Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}),
321
+ Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}),
322
+ Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}),
323
+ Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}),
324
+ Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}),
325
+ Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]
326
+ ```
327
+
328
+ </CodeOutputBlock>
329
+
330
+ Now, you will see that the documents contain the metadata associated with the content we extracted.
331
+
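Conceptually, the loader builds each document's metadata roughly as follows. This is a simplified, hypothetical sketch for illustration — not `JSONLoader`'s actual implementation:

```python
# Define the metadata extraction function, as above.
def metadata_func(record: dict, metadata: dict) -> dict:
    metadata["sender_name"] = record.get("sender_name")
    metadata["timestamp_ms"] = record.get("timestamp_ms")
    return metadata


# Simplified sketch of what the loader does per record: start from the
# default metadata (source, seq_num), then let metadata_func amend it.
records = [{"sender_name": "User 2", "timestamp_ms": 1675597571851, "content": "Bye!"}]
docs = []
for i, record in enumerate(records, start=1):
    default_metadata = {"source": "./example_data/facebook_chat.json", "seq_num": i}
    docs.append({
        "page_content": record["content"],
        "metadata": metadata_func(record, default_metadata),
    })

assert docs[0]["metadata"]["sender_name"] == "User 2"
assert docs[0]["metadata"]["seq_num"] == 1
```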
332
+ ## The `metadata_func`
333
+
334
+ As shown above, the `metadata_func` accepts the default metadata generated by the `JSONLoader`. This gives the user full control over how the metadata is formatted.
335
+
336
+ For example, the default metadata contains the `source` and the `seq_num` keys. However, the JSON data may contain these keys as well. The user can then use the `metadata_func` to rename the default keys and use the ones from the JSON data.
337
+
338
+ The example below shows how we can modify `source` to contain only the file path relative to the `langchain` directory.
339
+
340
+
341
+ ```python
342
+ # Define the metadata extraction function.
343
+ def metadata_func(record: dict, metadata: dict) -> dict:
344
+
345
+ metadata["sender_name"] = record.get("sender_name")
346
+ metadata["timestamp_ms"] = record.get("timestamp_ms")
347
+
348
+ if "source" in metadata:
349
+ source = metadata["source"].split("/")
350
+ source = source[source.index("langchain"):]
351
+ metadata["source"] = "/".join(source)
352
+
353
+ return metadata
354
+
355
+
356
+ loader = JSONLoader(
357
+ file_path='./example_data/facebook_chat.json',
358
+ jq_schema='.messages[]',
359
+ content_key="content",
360
+ metadata_func=metadata_func
361
+ )
362
+
363
+ data = loader.load()
364
+ ```
365
+
366
+
367
+ ```python
368
+ pprint(data)
369
+ ```
370
+
371
+ <CodeOutputBlock lang="python">
372
+
373
+ ```
374
+ [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}),
375
+ Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}),
376
+ Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}),
377
+ Document(page_content='I thought you were selling the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}),
378
+ Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}),
379
+ Document(page_content='Here is $129', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}),
380
+ Document(page_content='', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}),
381
+ Document(page_content='Online is at least $100', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}),
382
+ Document(page_content='How much do you want?', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}),
383
+ Document(page_content='Goodmorning! $50 is too low.', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}),
384
+ Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]
385
+ ```
386
+
387
+ </CodeOutputBlock>
388
+
389
+ ## Common JSON structures with jq schema
390
+
391
+ The list below provides a reference for the `jq_schema` values you can use to extract content from JSON data, depending on its structure.
392
+
393
+ ```
394
+ JSON -> [{"text": ...}, {"text": ...}, {"text": ...}]
395
+ jq_schema -> ".[].text"
396
+
397
+ JSON -> {"key": [{"text": ...}, {"text": ...}, {"text": ...}]}
398
+ jq_schema -> ".key[].text"
399
+
400
+ JSON -> ["...", "...", "..."]
401
+ jq_schema -> ".[]"
402
+ ```
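If you are unsure what a given `jq_schema` selects, the same extractions can be sketched in plain Python with the standard `json` module. This is illustrative only — `JSONLoader` itself delegates to jq:

```python
import json

# JSON -> [{"text": ...}, ...]   jq_schema -> ".[].text"
data = json.loads('[{"text": "a"}, {"text": "b"}]')
assert [item["text"] for item in data] == ["a", "b"]

# JSON -> {"key": [{"text": ...}, ...]}   jq_schema -> ".key[].text"
data = json.loads('{"key": [{"text": "a"}, {"text": "b"}]}')
assert [item["text"] for item in data["key"]] == ["a", "b"]

# JSON -> ["...", "..."]   jq_schema -> ".[]"
data = json.loads('["a", "b"]')
assert list(data) == ["a", "b"]
```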
langchain_md_files/how_to/document_loader_office_file.mdx ADDED
@@ -0,0 +1,35 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # How to load Microsoft Office files
2
+
3
+ The [Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS.
4
+
5
+ This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a LangChain
6
+ [Document](https://python.langchain.com/v0.2/api_reference/core/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document)
7
+ object that we can use downstream.
8
+
9
+
10
+ ## Loading DOCX, XLSX, PPTX with AzureAIDocumentIntelligenceLoader
11
+
12
+ [Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is a machine-learning-based
13
+ service that extracts text (including handwriting), tables, document structures (e.g., titles, section headings) and key-value pairs from
14
+ digital or scanned PDFs, images, Office and HTML files. Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.
15
+
16
+ This [current implementation](https://aka.ms/di-langchain) of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode="single"` or `mode="page"` to return plain text as a single document or as one document per page.
17
+
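To see why the markdown output chains well with header-based splitting, here is a stdlib-only sketch of chunking markdown by its headings. In practice you would use `MarkdownHeaderTextSplitter`; the `md` string below is a made-up stand-in for the loader's output:

```python
# Hypothetical markdown as the loader might return it.
md = "# Title\nIntro text.\n## Section A\nBody A.\n## Section B\nBody B."

# Group lines under the most recent heading.
chunks, current = [], {"header": None, "lines": []}
for line in md.splitlines():
    if line.startswith("#"):
        if current["lines"]:
            chunks.append(current)
        current = {"header": line.lstrip("# "), "lines": []}
    else:
        current["lines"].append(line)
if current["lines"]:
    chunks.append(current)

assert [c["header"] for c in chunks] == ["Title", "Section A", "Section B"]
```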
18
+ ### Prerequisite
19
+
20
+ An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don't have one already. You will pass `<endpoint>` and `<key>` as parameters to the loader.
21
+
22
+ ```python
23
+ %pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence
24
+
25
+ from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
26
+
27
+ file_path = "<filepath>"
28
+ endpoint = "<endpoint>"
29
+ key = "<key>"
30
+ loader = AzureAIDocumentIntelligenceLoader(
31
+ api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout"
32
+ )
33
+
34
+ documents = loader.load()
35
+ ```
langchain_md_files/how_to/embed_text.mdx ADDED
@@ -0,0 +1,154 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Text embedding models
2
+
3
+ :::info
4
+ Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers.
5
+ :::
6
+
7
+ The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.
8
+
9
+ Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
10
+
11
+ The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former, `.embed_documents`, takes as input multiple texts, while the latter, `.embed_query`, takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
12
+ `.embed_query` will return a list of floats, whereas `.embed_documents` returns a list of lists of floats.
13
+
14
+ ## Get started
15
+
16
+ ### Setup
17
+
18
+ import Tabs from '@theme/Tabs';
19
+ import TabItem from '@theme/TabItem';
20
+
21
+ <Tabs>
22
+ <TabItem value="openai" label="OpenAI" default>
23
+ To start we'll need to install the OpenAI partner package:
24
+
25
+ ```bash
26
+ pip install langchain-openai
27
+ ```
28
+
29
+ Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running:
30
+
31
+ ```bash
32
+ export OPENAI_API_KEY="..."
33
+ ```
34
+
35
+ If you'd prefer not to set an environment variable, you can pass the key in directly via the `api_key` named parameter when instantiating the OpenAI embeddings class:
36
+
37
+ ```python
38
+ from langchain_openai import OpenAIEmbeddings
39
+
40
+ embeddings_model = OpenAIEmbeddings(api_key="...")
41
+ ```
42
+
43
+ Otherwise you can initialize without any params:
44
+ ```python
45
+ from langchain_openai import OpenAIEmbeddings
46
+
47
+ embeddings_model = OpenAIEmbeddings()
48
+ ```
49
+
50
+ </TabItem>
51
+ <TabItem value="cohere" label="Cohere">
52
+
53
+ To start we'll need to install the Cohere SDK package:
54
+
55
+ ```bash
56
+ pip install langchain-cohere
57
+ ```
58
+
59
+ Accessing the API requires an API key, which you can get by creating an account and heading [here](https://dashboard.cohere.com/api-keys). Once we have a key we'll want to set it as an environment variable by running:
60
+
61
+ ```shell
62
+ export COHERE_API_KEY="..."
63
+ ```
64
+
65
+ If you'd prefer not to set an environment variable, you can pass the key in directly via the `cohere_api_key` named parameter when instantiating the Cohere embeddings class:
66
+
67
+ ```python
68
+ from langchain_cohere import CohereEmbeddings
69
+
70
+ embeddings_model = CohereEmbeddings(cohere_api_key="...", model='embed-english-v3.0')
71
+ ```
72
+
73
+ Otherwise you can initialize simply as shown below:
74
+ ```python
75
+ from langchain_cohere import CohereEmbeddings
76
+
77
+ embeddings_model = CohereEmbeddings(model='embed-english-v3.0')
78
+ ```
79
+ Do note that it is mandatory to pass the `model` parameter when initializing the `CohereEmbeddings` class.
80
+
81
+ </TabItem>
82
+ <TabItem value="huggingface" label="Hugging Face">
83
+
84
+ To start we'll need to install the Hugging Face partner package:
85
+
86
+ ```bash
87
+ pip install langchain-huggingface
88
+ ```
89
+
90
+ You can then load any [Sentence Transformers model](https://huggingface.co/models?library=sentence-transformers) from the Hugging Face Hub.
91
+
92
+ ```python
93
+ from langchain_huggingface import HuggingFaceEmbeddings
94
+
95
+ embeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
96
+ ```
97
+
98
+ You can also leave the `model_name` blank to use the default [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) model.
99
+
100
+ ```python
101
+ from langchain_huggingface import HuggingFaceEmbeddings
102
+
103
+ embeddings_model = HuggingFaceEmbeddings()
104
+ ```
105
+
106
+ </TabItem>
107
+ </Tabs>
108
+
109
+ ### `embed_documents`
110
+ #### Embed list of texts
111
+
112
+ Use `.embed_documents` to embed a list of strings, recovering a list of embeddings:
113
+
114
+ ```python
115
+ embeddings = embeddings_model.embed_documents(
116
+ [
117
+ "Hi there!",
118
+ "Oh, hello!",
119
+ "What's your name?",
120
+ "My friends call me World",
121
+ "Hello World!"
122
+ ]
123
+ )
124
+ len(embeddings), len(embeddings[0])
125
+ ```
126
+
127
+ <CodeOutputBlock language="python">
128
+
129
+ ```
130
+ (5, 1536)
131
+ ```
132
+
133
+ </CodeOutputBlock>
134
+
135
+ ### `embed_query`
136
+ #### Embed single query
137
+ Use `.embed_query` to embed a single piece of text (e.g., for the purpose of comparing to other embedded pieces of texts).
138
+
139
+ ```python
140
+ embedded_query = embeddings_model.embed_query("What was the name mentioned in the conversation?")
141
+ embedded_query[:5]
142
+ ```
143
+
144
+ <CodeOutputBlock language="python">
145
+
146
+ ```
147
+ [0.0053587136790156364,
148
+ -0.0004999046213924885,
149
+ 0.038883671164512634,
150
+ -0.003001077566295862,
151
+ -0.00900818221271038]
152
+ ```
153
+
154
+ </CodeOutputBlock>
langchain_md_files/how_to/index.mdx ADDED
@@ -0,0 +1,361 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ sidebar_position: 0
3
+ sidebar_class_name: hidden
4
+ ---
5
+
6
+ # How-to guides
7
+
8
+ Here you’ll find answers to “How do I….?” types of questions.
9
+ These guides are *goal-oriented* and *concrete*; they're meant to help you complete a specific task.
10
+ For conceptual explanations see the [Conceptual guide](/docs/concepts/).
11
+ For end-to-end walkthroughs see [Tutorials](/docs/tutorials).
12
+ For comprehensive descriptions of every class and function see the [API Reference](https://python.langchain.com/v0.2/api_reference/).
13
+
14
+ ## Installation
15
+
16
+ - [How to: install LangChain packages](/docs/how_to/installation/)
17
+ - [How to: use LangChain with different Pydantic versions](/docs/how_to/pydantic_compatibility)
18
+
19
+ ## Key features
20
+
21
+ This highlights functionality that is core to using LangChain.
22
+
23
+ - [How to: return structured data from a model](/docs/how_to/structured_output/)
24
+ - [How to: use a model to call tools](/docs/how_to/tool_calling)
25
+ - [How to: stream runnables](/docs/how_to/streaming)
26
+ - [How to: debug your LLM apps](/docs/how_to/debugging/)
27
+
28
+ ## LangChain Expression Language (LCEL)
29
+
30
+ [LangChain Expression Language](/docs/concepts/#langchain-expression-language-lcel) is a way to create arbitrary custom chains. It is built on the [Runnable](https://python.langchain.com/v0.2/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html) protocol.
31
+
32
+ [**LCEL cheatsheet**](/docs/how_to/lcel_cheatsheet/): For a quick overview of how to use the main LCEL primitives.
33
+
34
+ [**Migration guide**](/docs/versions/migrating_chains): For migrating legacy chain abstractions to LCEL.
35
+
36
+ - [How to: chain runnables](/docs/how_to/sequence)
37
+ - [How to: stream runnables](/docs/how_to/streaming)
38
+ - [How to: invoke runnables in parallel](/docs/how_to/parallel/)
39
+ - [How to: add default invocation args to runnables](/docs/how_to/binding/)
40
+ - [How to: turn any function into a runnable](/docs/how_to/functions)
41
+ - [How to: pass through inputs from one chain step to the next](/docs/how_to/passthrough)
42
+ - [How to: configure runnable behavior at runtime](/docs/how_to/configure)
43
+ - [How to: add message history (memory) to a chain](/docs/how_to/message_history)
44
+ - [How to: route between sub-chains](/docs/how_to/routing)
45
+ - [How to: create a dynamic (self-constructing) chain](/docs/how_to/dynamic_chain/)
46
+ - [How to: inspect runnables](/docs/how_to/inspect)
47
+ - [How to: add fallbacks to a runnable](/docs/how_to/fallbacks)
48
+ - [How to: pass runtime secrets to a runnable](/docs/how_to/runnable_runtime_secrets)
49
+
50
+ ## Components
51
+
52
+ These are the core building blocks you can use when building applications.
53
+
54
+ ### Prompt templates
55
+
56
+ [Prompt Templates](/docs/concepts/#prompt-templates) are responsible for formatting user input into a format that can be passed to a language model.
57
+
58
+ - [How to: use few shot examples](/docs/how_to/few_shot_examples)
59
+ - [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
60
+ - [How to: partially format prompt templates](/docs/how_to/prompts_partial)
61
+ - [How to: compose prompts together](/docs/how_to/prompts_composition)
62
+
63
+ ### Example selectors
64
+
65
+ [Example Selectors](/docs/concepts/#example-selectors) are responsible for selecting the correct few shot examples to pass to the prompt.
66
+
67
+ - [How to: use example selectors](/docs/how_to/example_selectors)
68
+ - [How to: select examples by length](/docs/how_to/example_selectors_length_based)
69
+ - [How to: select examples by semantic similarity](/docs/how_to/example_selectors_similarity)
70
+ - [How to: select examples by semantic ngram overlap](/docs/how_to/example_selectors_ngram)
71
+ - [How to: select examples by maximal marginal relevance](/docs/how_to/example_selectors_mmr)
72
+ - [How to: select examples from LangSmith few-shot datasets](/docs/how_to/example_selectors_langsmith/)
73
+
74
+ ### Chat models
75
+
76
+ [Chat Models](/docs/concepts/#chat-models) are newer forms of language models that take messages in and output a message.
77
+
78
+ - [How to: do function/tool calling](/docs/how_to/tool_calling)
79
+ - [How to: get models to return structured output](/docs/how_to/structured_output)
80
+ - [How to: cache model responses](/docs/how_to/chat_model_caching)
81
+ - [How to: get log probabilities](/docs/how_to/logprobs)
82
+ - [How to: create a custom chat model class](/docs/how_to/custom_chat_model)
83
+ - [How to: stream a response back](/docs/how_to/chat_streaming)
84
+ - [How to: track token usage](/docs/how_to/chat_token_usage_tracking)
85
+ - [How to: track response metadata across providers](/docs/how_to/response_metadata)
86
+ - [How to: use chat model to call tools](/docs/how_to/tool_calling)
87
+ - [How to: stream tool calls](/docs/how_to/tool_streaming)
88
+ - [How to: handle rate limits](/docs/how_to/chat_model_rate_limiting)
89
+ - [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
90
+ - [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
91
+ - [How to: force a specific tool call](/docs/how_to/tool_choice)
92
+ - [How to: work with local models](/docs/how_to/local_llms)
93
+ - [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
94
+
95
+ ### Messages
96
+
97
+ [Messages](/docs/concepts/#messages) are the input and output of chat models. They have some `content` and a `role`, which describes the source of the message.
98
+
99
+ - [How to: trim messages](/docs/how_to/trim_messages/)
100
+ - [How to: filter messages](/docs/how_to/filter_messages/)
101
+ - [How to: merge consecutive messages of the same type](/docs/how_to/merge_message_runs/)
102
+
103
+ ### LLMs
104
+
105
+ What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language models that take a string in and output a string.
106
+
107
+ - [How to: cache model responses](/docs/how_to/llm_caching)
108
+ - [How to: create a custom LLM class](/docs/how_to/custom_llm)
109
+ - [How to: stream a response back](/docs/how_to/streaming_llm)
110
+ - [How to: track token usage](/docs/how_to/llm_token_usage_tracking)
111
+ - [How to: work with local models](/docs/how_to/local_llms)
112
+
113
+ ### Output parsers
114
+
115
+ [Output Parsers](/docs/concepts/#output-parsers) are responsible for taking the output of an LLM and parsing it into a more structured format.
116
+
117
+ - [How to: use output parsers to parse an LLM response into structured format](/docs/how_to/output_parser_structured)
118
+ - [How to: parse JSON output](/docs/how_to/output_parser_json)
119
+ - [How to: parse XML output](/docs/how_to/output_parser_xml)
120
+ - [How to: parse YAML output](/docs/how_to/output_parser_yaml)
121
+ - [How to: retry when output parsing errors occur](/docs/how_to/output_parser_retry)
122
+ - [How to: try to fix errors in output parsing](/docs/how_to/output_parser_fixing)
123
+ - [How to: write a custom output parser class](/docs/how_to/output_parser_custom)
124
+
125
+ ### Document loaders
126
+
127
+ [Document Loaders](/docs/concepts/#document-loaders) are responsible for loading documents from a variety of sources.
128
+
129
+ - [How to: load CSV data](/docs/how_to/document_loader_csv)
130
+ - [How to: load data from a directory](/docs/how_to/document_loader_directory)
131
+ - [How to: load HTML data](/docs/how_to/document_loader_html)
132
+ - [How to: load JSON data](/docs/how_to/document_loader_json)
133
+ - [How to: load Markdown data](/docs/how_to/document_loader_markdown)
134
+ - [How to: load Microsoft Office data](/docs/how_to/document_loader_office_file)
135
+ - [How to: load PDF files](/docs/how_to/document_loader_pdf)
136
+ - [How to: write a custom document loader](/docs/how_to/document_loader_custom)
137
+
138
+ ### Text splitters
139
+
140
+ [Text Splitters](/docs/concepts/#text-splitters) take a document and split it into chunks that can be used for retrieval.
141
+
142
+ - [How to: recursively split text](/docs/how_to/recursive_text_splitter)
143
+ - [How to: split by HTML headers](/docs/how_to/HTML_header_metadata_splitter)
144
+ - [How to: split by HTML sections](/docs/how_to/HTML_section_aware_splitter)
145
+ - [How to: split by character](/docs/how_to/character_text_splitter)
146
+ - [How to: split code](/docs/how_to/code_splitter)
147
+ - [How to: split Markdown by headers](/docs/how_to/markdown_header_metadata_splitter)
148
+ - [How to: recursively split JSON](/docs/how_to/recursive_json_splitter)
149
+ - [How to: split text into semantic chunks](/docs/how_to/semantic-chunker)
150
+ - [How to: split by tokens](/docs/how_to/split_by_token)
151
+
152
+ ### Embedding models
153
+
154
+ [Embedding Models](/docs/concepts/#embedding-models) take a piece of text and create a numerical representation of it.
155
+
156
+ - [How to: embed text data](/docs/how_to/embed_text)
157
+ - [How to: cache embedding results](/docs/how_to/caching_embeddings)
158
+
159
+ ### Vector stores
160
+
161
+ [Vector stores](/docs/concepts/#vector-stores) are databases that can efficiently store and retrieve embeddings.
162
+
163
+ - [How to: use a vector store to retrieve data](/docs/how_to/vectorstores)
164
+
165
+ ### Retrievers
166
+
167
+ [Retrievers](/docs/concepts/#retrievers) are responsible for taking a query and returning relevant documents.
168
+
169
+ - [How to: use a vector store to retrieve data](/docs/how_to/vectorstore_retriever)
170
+ - [How to: generate multiple queries to retrieve data for](/docs/how_to/MultiQueryRetriever)
171
+ - [How to: use contextual compression to compress the data retrieved](/docs/how_to/contextual_compression)
172
+ - [How to: write a custom retriever class](/docs/how_to/custom_retriever)
173
+ - [How to: add similarity scores to retriever results](/docs/how_to/add_scores_retriever)
174
+ - [How to: combine the results from multiple retrievers](/docs/how_to/ensemble_retriever)
175
+ - [How to: reorder retrieved results to mitigate the "lost in the middle" effect](/docs/how_to/long_context_reorder)
176
+ - [How to: generate multiple embeddings per document](/docs/how_to/multi_vector)
177
+ - [How to: retrieve the whole document for a chunk](/docs/how_to/parent_document_retriever)
178
+ - [How to: generate metadata filters](/docs/how_to/self_query)
179
+ - [How to: create a time-weighted retriever](/docs/how_to/time_weighted_vectorstore)
180
+ - [How to: use hybrid vector and keyword retrieval](/docs/how_to/hybrid)
181
+
182
+ ### Indexing
183
+
184
+ Indexing is the process of keeping your vectorstore in sync with the underlying data source.
185
+
186
+ - [How to: reindex data to keep your vectorstore in-sync with the underlying data source](/docs/how_to/indexing)
187
+
188
+ ### Tools
189
+
190
+ LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-built tools.
191
+
192
+ - [How to: create tools](/docs/how_to/custom_tools)
193
+ - [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
194
+ - [How to: use chat models to call tools](/docs/how_to/tool_calling)
195
+ - [How to: pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model)
196
+ - [How to: pass run time values to tools](/docs/how_to/tool_runtime)
197
+ - [How to: add a human-in-the-loop for tools](/docs/how_to/tools_human)
198
+ - [How to: handle tool errors](/docs/how_to/tools_error)
199
+ - [How to: force models to call a tool](/docs/how_to/tool_choice)
200
+ - [How to: disable parallel tool calling](/docs/how_to/tool_calling_parallel)
201
+ - [How to: access the `RunnableConfig` from a tool](/docs/how_to/tool_configure)
202
+ - [How to: stream events from a tool](/docs/how_to/tool_stream_events)
203
+ - [How to: return artifacts from a tool](/docs/how_to/tool_artifacts/)
204
+ - [How to: convert Runnables to tools](/docs/how_to/convert_runnable_to_tool)
205
+ - [How to: add ad-hoc tool calling capability to models](/docs/how_to/tools_prompting)
206
+ - [How to: pass in runtime secrets](/docs/how_to/runnable_runtime_secrets)
207
+
208
+ ### Multimodal
209
+
210
+ - [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
211
+ - [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
212
+
213
+
214
+ ### Agents
215
+
216
+ :::note
217
+
218
+ For in-depth how-to guides for agents, please check out the [LangGraph](https://langchain-ai.github.io/langgraph/) documentation.
219
+
220
+ :::
221
+
222
+ - [How to: use legacy LangChain Agents (AgentExecutor)](/docs/how_to/agent_executor)
223
+ - [How to: migrate from legacy LangChain agents to LangGraph](/docs/how_to/migrate_agent)
224
+
225
+ ### Callbacks
226
+
227
+ [Callbacks](/docs/concepts/#callbacks) allow you to hook into the various stages of your LLM application's execution.
228
+
229
+ - [How to: pass in callbacks at runtime](/docs/how_to/callbacks_runtime)
230
+ - [How to: attach callbacks to a module](/docs/how_to/callbacks_attach)
231
+ - [How to: pass callbacks into a module constructor](/docs/how_to/callbacks_constructor)
232
+ - [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
233
+ - [How to: use callbacks in async environments](/docs/how_to/callbacks_async)
234
+ - [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events)
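The hook-in pattern behind all of these guides can be sketched without LangChain at all: a handler object exposes `on_*` methods, and the runtime fires them at each stage of execution. Illustrative only; LangChain's real interface is `BaseCallbackHandler`:

```python
class CollectingHandler:
    """Collects events fired at different stages of a (mock) LLM run."""

    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt):
        self.events.append(("start", prompt))

    def on_llm_end(self, output):
        self.events.append(("end", output))


def run_llm(prompt, callbacks):
    # Fire start hooks, do the "model call", then fire end hooks.
    for cb in callbacks:
        cb.on_llm_start(prompt)
    output = prompt.upper()  # stand-in for a real model call
    for cb in callbacks:
        cb.on_llm_end(output)
    return output


handler = CollectingHandler()
run_llm("hi", callbacks=[handler])
print(handler.events)  # [('start', 'hi'), ('end', 'HI')]
```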
235
+
236
+ ### Custom
237
+
238
+ All LangChain components can be easily extended to support your own versions.
239
+
240
+ - [How to: create a custom chat model class](/docs/how_to/custom_chat_model)
241
+ - [How to: create a custom LLM class](/docs/how_to/custom_llm)
242
+ - [How to: write a custom retriever class](/docs/how_to/custom_retriever)
243
+ - [How to: write a custom document loader](/docs/how_to/document_loader_custom)
244
+ - [How to: write a custom output parser class](/docs/how_to/output_parser_custom)
245
+ - [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
246
+ - [How to: define a custom tool](/docs/how_to/custom_tools)
247
+ - [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events)
248
+
249
+ ### Serialization
250
+ - [How to: save and load LangChain objects](/docs/how_to/serialization)
251
+
252
+ ## Use cases
253
+
254
+ These guides cover use-case specific details.
255
+
256
+ ### Q&A with RAG
257
+
258
+ Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data.
259
+ For a high-level tutorial on RAG, check out [this guide](/docs/tutorials/rag/).
260
+
261
+ - [How to: add chat history](/docs/how_to/qa_chat_history_how_to/)
262
+ - [How to: stream](/docs/how_to/qa_streaming/)
263
+ - [How to: return sources](/docs/how_to/qa_sources/)
264
+ - [How to: return citations](/docs/how_to/qa_citations/)
265
+ - [How to: do per-user retrieval](/docs/how_to/qa_per_user/)
266
+
267
+
268
+ ### Extraction
269
+
270
+ Extraction is when you use LLMs to extract structured information from unstructured text.
271
+ For a high level tutorial on extraction, check out [this guide](/docs/tutorials/extraction/).
272
+
273
+ - [How to: use reference examples](/docs/how_to/extraction_examples/)
274
+ - [How to: handle long text](/docs/how_to/extraction_long_text/)
275
+ - [How to: do extraction without using function calling](/docs/how_to/extraction_parse)
276
+
277
+ ### Chatbots
278
+
279
+ Chatbots involve using an LLM to have a conversation.
280
+ For a high-level tutorial on building chatbots, check out [this guide](/docs/tutorials/chatbot/).
281
+
282
+ - [How to: manage memory](/docs/how_to/chatbots_memory)
283
+ - [How to: do retrieval](/docs/how_to/chatbots_retrieval)
284
+ - [How to: use tools](/docs/how_to/chatbots_tools)
285
+ - [How to: manage large chat history](/docs/how_to/trim_messages/)
286
+
287
+ ### Query analysis
288
+
289
+ Query Analysis is the task of using an LLM to generate a query to send to a retriever.
290
+ For a high-level tutorial on query analysis, check out [this guide](/docs/tutorials/query_analysis/).
291
+
292
+ - [How to: add examples to the prompt](/docs/how_to/query_few_shot)
293
+ - [How to: handle cases where no queries are generated](/docs/how_to/query_no_queries)
294
+ - [How to: handle multiple queries](/docs/how_to/query_multiple_queries)
295
+ - [How to: handle multiple retrievers](/docs/how_to/query_multiple_retrievers)
296
+ - [How to: construct filters](/docs/how_to/query_constructing_filters)
297
+ - [How to: deal with high cardinality categorical variables](/docs/how_to/query_high_cardinality)
298
+
299
+ ### Q&A over SQL + CSV
300
+
301
+ You can use LLMs to do question answering over tabular data.
302
+ For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/).
303
+
304
+ - [How to: use prompting to improve results](/docs/how_to/sql_prompting)
305
+ - [How to: do query validation](/docs/how_to/sql_query_checking)
306
+ - [How to: deal with large databases](/docs/how_to/sql_large_db)
307
+ - [How to: deal with CSV files](/docs/how_to/sql_csv)
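The query-validation step linked above boils down to checking model-generated SQL before executing it. A minimal illustrative guard is shown below; the actual guide uses an LLM to check the query, and this crude keyword filter is only a sketch of the idea:

```python
import sqlite3

DANGEROUS = ("drop", "delete", "update", "insert", "alter")


def validate_sql(query: str) -> bool:
    """Crude guard for model-generated SQL: read-only SELECTs only."""
    lowered = query.strip().lower()
    return lowered.startswith("select") and not any(kw in lowered for kw in DANGEROUS)


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('Ada')")

generated = "SELECT name FROM users"  # pretend this came from the model
if validate_sql(generated):
    print(conn.execute(generated).fetchall())  # [('Ada',)]
```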
308
+
309
+ ### Q&A over graph databases
310
+
311
+ You can use an LLM to do question answering over graph databases.
312
+ For a high-level tutorial, check out [this guide](/docs/tutorials/graph/).
313
+
314
+ - [How to: map values to a database](/docs/how_to/graph_mapping)
315
+ - [How to: add a semantic layer over the database](/docs/how_to/graph_semantic)
316
+ - [How to: improve results with prompting](/docs/how_to/graph_prompting)
317
+ - [How to: construct knowledge graphs](/docs/how_to/graph_constructing)
318
+
319
+ ### Summarization
320
+
321
+ LLMs can summarize and otherwise distill desired information from text, including
322
+ large volumes of text. For a high-level tutorial, check out [this guide](/docs/tutorials/summarization).
323
+
324
+ - [How to: summarize text in a single LLM call](/docs/how_to/summarize_stuff)
325
+ - [How to: summarize text through parallelization](/docs/how_to/summarize_map_reduce)
326
+ - [How to: summarize text through iterative refinement](/docs/how_to/summarize_refine)
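The parallelization approach above follows a map-reduce shape: summarize each chunk independently, then combine the partial summaries. It can be sketched without an LLM, with `fake_summarize` standing in for a model call:

```python
from concurrent.futures import ThreadPoolExecutor


def fake_summarize(text: str) -> str:
    # Stand-in for an LLM call: keep just the first sentence.
    return text.split(".")[0].strip() + "."


def map_reduce_summary(chunks: list[str]) -> str:
    # Map: summarize each chunk independently (this part parallelizes).
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(fake_summarize, chunks))
    # Reduce: collapse the partial summaries into one.
    return fake_summarize(" ".join(partials))


chunks = ["First point. Extra detail.", "Second point. More detail."]
print(map_reduce_summary(chunks))  # "First point."
```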
327
+
328
+ ## [LangGraph](https://langchain-ai.github.io/langgraph)
329
+
330
+ LangGraph is an extension of LangChain aimed at
331
+ building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
332
+
333
+ LangGraph documentation is currently hosted on a separate site.
334
+ You can peruse [LangGraph how-to guides here](https://langchain-ai.github.io/langgraph/how-tos/).
335
+
336
+ ## [LangSmith](https://docs.smith.langchain.com/)
337
+
338
+ LangSmith allows you to closely trace, monitor and evaluate your LLM application.
339
+ It seamlessly integrates with LangChain and LangGraph, and you can use it to inspect and debug individual steps of your chains and agents as you build.
340
+
341
+ LangSmith documentation is hosted on a separate site.
342
+ You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/), but we'll highlight a few sections that are particularly
343
+ relevant to LangChain below:
344
+
345
+ ### Evaluation
346
+ <span data-heading-keywords="evaluation,evaluate"></span>
347
+
348
+ Evaluating performance is a vital part of building LLM-powered applications.
349
+ LangSmith helps with every step of the process from creating a dataset to defining metrics to running evaluators.
350
+
351
+ To learn more, check out the [LangSmith evaluation how-to guides](https://docs.smith.langchain.com/how_to_guides#evaluation).
352
+
353
+ ### Tracing
354
+ <span data-heading-keywords="trace,tracing"></span>
355
+
356
+ Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues.
357
+
358
+ - [How to: trace with LangChain](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain)
359
+ - [How to: add metadata and tags to traces](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain#add-metadata-and-tags-to-traces)
360
+
361
+ You can see general tracing-related how-tos [in this section of the LangSmith docs](https://docs.smith.langchain.com/how_to_guides/tracing).
langchain_md_files/how_to/installation.mdx ADDED
@@ -0,0 +1,107 @@
1
+ ---
2
+ sidebar_position: 2
3
+ ---
4
+
5
+ # How to install LangChain packages
6
+
7
+ The LangChain ecosystem is split into different packages, which allow you to choose exactly which pieces of
8
+ functionality to install.
9
+
10
+ ## Official release
11
+
12
+ To install the main LangChain package, run:
13
+
14
+ import Tabs from '@theme/Tabs';
15
+ import TabItem from '@theme/TabItem';
16
+ import CodeBlock from "@theme/CodeBlock";
17
+
18
+ <Tabs>
19
+ <TabItem value="pip" label="Pip" default>
20
+ <CodeBlock language="bash">pip install langchain</CodeBlock>
21
+ </TabItem>
22
+ <TabItem value="conda" label="Conda">
23
+ <CodeBlock language="bash">conda install langchain -c conda-forge</CodeBlock>
24
+ </TabItem>
25
+ </Tabs>
26
+
27
+ While this package acts as a sane starting point to using LangChain,
28
+ much of the value of LangChain comes when integrating it with various model providers, datastores, etc.
29
+ By default, the dependencies needed to do that are NOT installed. You will need to install the dependencies for specific integrations separately.
30
+ We'll show how to do that in the next sections of this guide.
31
+
32
+ ## Ecosystem packages
33
+
34
+ With the exception of the `langsmith` SDK, all packages in the LangChain ecosystem depend on `langchain-core`, which contains base
35
+ classes and abstractions that other packages use. The dependency graph below shows how the different packages are related.
36
+ A directed arrow indicates that the source package depends on the target package:
37
+
38
+ ![](/img/ecosystem_packages.png)
39
+
40
+ When installing a package, you do not need to explicitly install that package's dependencies (such as `langchain-core`).
41
+ However, you may choose to if you are using a feature only available in a certain version of that dependency.
42
+ If you do, you should make sure that the installed or pinned version is compatible with any other integration packages you use.
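For example, a requirements file that pins `langchain-core` alongside integration packages might look like this (version numbers are illustrative assumptions, not recommendations):

```text
# requirements.txt (illustrative pins only)
langchain==0.2.*
# Pinned explicitly to use a feature from a specific core version;
# must stay compatible with the integration packages below.
langchain-core>=0.2.0,<0.3
langchain-openai==0.1.*
```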
43
+
44
+ ### From source
45
+
46
+ If you want to install from source, clone the repo, make sure your working directory is `PATH/TO/REPO/langchain/libs/langchain`, and run:
47
+
48
+ ```bash
49
+ pip install -e .
50
+ ```
51
+
52
+ ### LangChain core
53
+ The `langchain-core` package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It is automatically installed by `langchain`, but can also be used separately. Install with:
54
+
55
+ ```bash
56
+ pip install langchain-core
57
+ ```
58
+
59
+ ### LangChain community
60
+ The `langchain-community` package contains third-party integrations. Install with:
61
+
62
+ ```bash
63
+ pip install langchain-community
64
+ ```
65
+
66
+ ### LangChain experimental
67
+ The `langchain-experimental` package holds experimental LangChain code, intended for research and experimental uses.
68
+ Install with:
69
+
70
+ ```bash
71
+ pip install langchain-experimental
72
+ ```
73
+
74
+ ### LangGraph
75
+ `langgraph` is a library for building stateful, multi-actor applications with LLMs. It integrates smoothly with LangChain, but can be used without it.
76
+ Install with:
77
+
78
+ ```bash
79
+ pip install langgraph
80
+ ```
81
+
82
+ ### LangServe
83
+ LangServe helps developers deploy LangChain runnables and chains as a REST API.
84
+ LangServe is automatically installed by LangChain CLI.
85
+ If not using LangChain CLI, install with:
86
+
87
+ ```bash
88
+ pip install "langserve[all]"
89
+ ```
90
+ This installs both client and server dependencies. Alternatively, run `pip install "langserve[client]"` for client code only, or `pip install "langserve[server]"` for server code only.
91
+
92
+ ## LangChain CLI
93
+ The LangChain CLI is useful for working with LangChain templates and other LangServe projects.
94
+ Install with:
95
+
96
+ ```bash
97
+ pip install langchain-cli
98
+ ```
99
+
100
+ ### LangSmith SDK
101
+ The LangSmith SDK is automatically installed by LangChain. However, it does not depend on
102
+ `langchain-core`, and can be installed and used independently if desired.
103
+ If you are not using LangChain, you can install it with:
104
+
105
+ ```bash
106
+ pip install langsmith
107
+ ```
langchain_md_files/how_to/toolkits.mdx ADDED
@@ -0,0 +1,21 @@
1
+ ---
2
+ sidebar_position: 3
3
+ ---
4
+ # How to use toolkits
5
+
6
+
7
+ Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
8
+
9
+ All Toolkits expose a `get_tools` method which returns a list of tools.
10
+ You can therefore do:
11
+
12
+ ```python
13
+ # Initialize a toolkit
14
+ toolkit = ExampleToolkit(...)
15
+
16
+ # Get list of tools
17
+ tools = toolkit.get_tools()
18
+
19
+ # Create agent
20
+ agent = create_agent_method(llm, tools, prompt)
21
+ ```
langchain_md_files/how_to/vectorstores.mdx ADDED
@@ -0,0 +1,178 @@
1
+ # How to create and query vector stores
2
+
3
+ :::info
4
+ Head to [Integrations](/docs/integrations/vectorstores/) for documentation on built-in integrations with 3rd-party vector stores.
5
+ :::
6
+
7
+ One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding
8
+ vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are
9
+ 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search
10
+ for you.
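Under the hood, "most similar" typically means vector similarity such as cosine similarity. A minimal sketch of what a vector store does, using toy 2-dimensional vectors instead of real embeddings:

```python
import math


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Tiny in-memory "vector store": text -> embedding (toy 2-d vectors)
store = {"dog": [1.0, 0.1], "cat": [0.9, 0.2], "car": [0.0, 1.0]}


def similarity_search(query_vec, k=2):
    # Rank stored texts by cosine similarity to the query vector.
    ranked = sorted(store, key=lambda t: cosine(query_vec, store[t]), reverse=True)
    return ranked[:k]


print(similarity_search([1.0, 0.0]))  # ['dog', 'cat']
```

Real vector stores add persistence, approximate-nearest-neighbor indexes, and metadata filtering, but the retrieval step is this ranking.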
11
+
12
+ ## Get started
13
+
14
+ This guide showcases basic functionality related to vector stores. A key part of working with vector stores is creating the vector to put in them,
15
+ which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the [text embedding model interfaces](/docs/how_to/embed_text) before diving into this.
16
+
17
+ Before using the vectorstore at all, we need to load some data and initialize an embedding model.
18
+
19
+ We want to use `OpenAIEmbeddings`, so we have to get the OpenAI API key.
20
+
21
+ ```python
22
+ import os
23
+ import getpass
24
+
25
+ os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
26
+ ```
27
+
28
+ ```python
29
+ from langchain_community.document_loaders import TextLoader
30
+ from langchain_openai import OpenAIEmbeddings
31
+ from langchain_text_splitters import CharacterTextSplitter
32
+
33
+ # Load the document, split it into chunks, embed each chunk and load it into the vector store.
34
+ raw_documents = TextLoader('state_of_the_union.txt').load()
35
+ text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
36
+ documents = text_splitter.split_documents(raw_documents)
37
+ ```
38
+
39
+ import Tabs from '@theme/Tabs';
40
+ import TabItem from '@theme/TabItem';
41
+
42
+ There are many great vector store options; here are a few that are free, open source, and run entirely on your local machine. Review all integrations for many great hosted offerings.
43
+
44
+
45
+ <Tabs>
46
+ <TabItem value="chroma" label="Chroma" default>
47
+
48
+ This walkthrough uses the `chroma` vector database, which runs on your local machine as a library.
49
+
50
+ ```bash
51
+ pip install langchain-chroma
52
+ ```
53
+
54
+ ```python
55
+ from langchain_chroma import Chroma
56
+
57
+ db = Chroma.from_documents(documents, OpenAIEmbeddings())
58
+ ```
59
+
60
+ </TabItem>
61
+ <TabItem value="faiss" label="FAISS">
62
+
63
+ This walkthrough uses the `FAISS` vector database, which makes use of the Facebook AI Similarity Search (FAISS) library.
64
+
65
+ ```bash
66
+ pip install faiss-cpu
67
+ ```
68
+
69
+ ```python
70
+ from langchain_community.vectorstores import FAISS
71
+
72
+ db = FAISS.from_documents(documents, OpenAIEmbeddings())
73
+ ```
74
+
75
+ </TabItem>
76
+ <TabItem value="lance" label="Lance">
77
+
78
+ This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format.
79
+
80
+ ```bash
81
+ pip install lancedb
82
+ ```
83
+
84
+ ```python
85
+ from langchain_community.vectorstores import LanceDB
86
+
87
+ import lancedb
88
+
89
+ db = lancedb.connect("/tmp/lancedb")
90
+ table = db.create_table(
91
+ "my_table",
92
+ data=[
93
+ {
94
+ "vector": embeddings.embed_query("Hello World"),
95
+ "text": "Hello World",
96
+ "id": "1",
97
+ }
98
+ ],
99
+ mode="overwrite",
100
+ )
101
+ db = LanceDB.from_documents(documents, OpenAIEmbeddings())
102
+ ```
103
+
104
+ </TabItem>
105
+ </Tabs>
106
+
107
+
108
+ ## Similarity search
109
+
110
+ All vectorstores expose a `similarity_search` method.
111
+ This will take incoming documents, create an embedding of them, and then find all documents with the most similar embedding.
112
+
113
+ ```python
114
+ query = "What did the president say about Ketanji Brown Jackson"
115
+ docs = db.similarity_search(query)
116
+ print(docs[0].page_content)
117
+ ```
118
+
119
+ <CodeOutputBlock lang="python">
120
+
121
+ ```
122
+ Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
123
+
124
+ Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
125
+
126
+ One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
127
+
128
+ And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
129
+ ```
130
+
131
+ </CodeOutputBlock>
132
+
133
+ ### Similarity search by vector
134
+
135
+ It is also possible to do a search for documents similar to a given embedding vector using `similarity_search_by_vector` which accepts an embedding vector as a parameter instead of a string.
136
+
137
+ ```python
138
+ embedding_vector = OpenAIEmbeddings().embed_query(query)
139
+ docs = db.similarity_search_by_vector(embedding_vector)
140
+ print(docs[0].page_content)
141
+ ```
142
+
143
+ <CodeOutputBlock lang="python">
144
+
145
+ ```
146
+ Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
147
+
148
+ Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
149
+
150
+ One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
151
+
152
+ And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
153
+ ```
154
+
155
+ </CodeOutputBlock>
156
+
157
+ ## Async Operations
158
+
159
+
160
+ Vector stores are usually run as a separate service that requires some IO operations, and therefore they might be called asynchronously. That gives performance benefits as you don't waste time waiting for responses from external services. That might also be important if you work with an asynchronous framework, such as [FastAPI](https://fastapi.tiangolo.com/).
161
+
162
+ LangChain supports async operation on vector stores. All the methods might be called using their async counterparts, with the prefix `a`, meaning `async`.
163
+
164
+ ```python
165
+ docs = await db.asimilarity_search(query)
166
+ docs
167
+ ```
168
+
169
+ <CodeOutputBlock lang="python">
170
+
171
+ ```
172
+ [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': 'state_of_the_union.txt'}),
173
+ Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': 'state_of_the_union.txt'}),
174
+ Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': 'state_of_the_union.txt'}),
175
+ Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': 'state_of_the_union.txt'})]
176
+ ```
177
+
178
+ </CodeOutputBlock>
langchain_md_files/integrations/chat/index.mdx ADDED
@@ -0,0 +1,32 @@
1
+ ---
2
+ sidebar_position: 0
3
+ sidebar_class_name: hidden
4
+ keywords: [compatibility]
5
+ ---
6
+
7
+ # Chat models
8
+
9
+ [Chat models](/docs/concepts/#chat-models) are language models that use a sequence of [messages](/docs/concepts/#messages) as inputs and return messages as outputs (as opposed to using plain text). These are generally newer models.
10
+
11
+ :::info
12
+
13
+ If you'd like to write your own chat model, see [this how-to](/docs/how_to/custom_chat_model/).
14
+ If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing/integrations/).
15
+
16
+ :::
17
+
18
+ ## Featured Providers
19
+
20
+ :::info
21
+ While all these LangChain classes support the indicated advanced feature, you may have
22
+ to open the provider-specific documentation to learn which hosted models or backends support
23
+ the feature.
24
+ :::
25
+
26
+ import { CategoryTable, IndexTable } from "@theme/FeatureTables";
27
+
28
+ <CategoryTable category="chat" />
29
+
30
+ ## All chat models
31
+
32
+ <IndexTable />
langchain_md_files/integrations/document_loaders/index.mdx ADDED
@@ -0,0 +1,69 @@
1
+ ---
2
+ sidebar_position: 0
3
+ sidebar_class_name: hidden
4
+ ---
5
+
6
+ # Document loaders
7
+
8
+ import { CategoryTable, IndexTable } from "@theme/FeatureTables";
9
+
10
+ DocumentLoaders load data into the standard LangChain Document format.
11
+
12
+ Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method.
13
+ An example use case is as follows:
14
+
15
+ ```python
16
+ from langchain_community.document_loaders.csv_loader import CSVLoader
17
+
18
+ loader = CSVLoader(
19
+ ... # <-- Integration specific parameters here
20
+ )
21
+ data = loader.load()
22
+ ```
23
+
24
+ ## Webpages
25
+
26
+ The below document loaders allow you to load webpages.
27
+
28
+ <CategoryTable category="webpage_loaders" />
29
+
30
+ ## PDFs
31
+
32
+ The below document loaders allow you to load PDF documents.
33
+
34
+ <CategoryTable category="pdf_loaders" />
35
+
36
+ ## Cloud Providers
37
+
38
+ The below document loaders allow you to load documents from your favorite cloud providers.
39
+
40
+ <CategoryTable category="cloud_provider_loaders"/>
41
+
42
+ ## Social Platforms
43
+
44
+ The below document loaders allow you to load documents from different social media platforms.
45
+
46
+ <CategoryTable category="social_loaders"/>
47
+
48
+ ## Messaging Services
49
+
50
+ The below document loaders allow you to load data from different messaging platforms.
51
+
52
+ <CategoryTable category="messaging_loaders"/>
53
+
54
+ ## Productivity tools
55
+
56
+ The below document loaders allow you to load data from commonly used productivity tools.
57
+
58
+ <CategoryTable category="productivity_loaders"/>
59
+
60
+ ## Common File Types
61
+
62
+ The below document loaders allow you to load data from common data formats.
63
+
64
+ <CategoryTable category="common_loaders" />
65
+
66
+
67
+ ## All document loaders
68
+
69
+ <IndexTable />
langchain_md_files/integrations/graphs/tigergraph.mdx ADDED
@@ -0,0 +1,37 @@
1
+ # TigerGraph
2
+
3
+ >[TigerGraph](https://www.tigergraph.com/tigergraph-db/) is a natively distributed and high-performance graph database.
4
+ > The storage of data in a graph format of vertices and edges leads to rich relationships,
5
+ > ideal for grouding LLM responses.
6
+
7
+ A big example of the `TigerGraph` and `LangChain` integration [presented here](https://github.com/tigergraph/graph-ml-notebooks/blob/main/applications/large_language_models/TigerGraph_LangChain_Demo.ipynb).
8
+
9
+ ## Installation and Setup
10
+
11
+ Follow instructions [how to connect to the `TigerGraph` database](https://docs.tigergraph.com/pytigergraph/current/getting-started/connection).
12
+
13
+ Install the Python SDK:
14
+
15
+ ```bash
16
+ pip install pyTigerGraph
17
+ ```
18
+
19
+ ## Example
20
+
21
+ To utilize the `TigerGraph InquiryAI` functionality, you can import `TigerGraph` from `langchain_community.graphs`.
22
+
23
+ ```python
24
+ import pyTigerGraph as tg
25
+
26
+ conn = tg.TigerGraphConnection(host="DATABASE_HOST_HERE", graphname="GRAPH_NAME_HERE", username="USERNAME_HERE", password="PASSWORD_HERE")
27
+
28
+ ### ==== CONFIGURE INQUIRYAI HOST ====
29
+ conn.ai.configureInquiryAIHost("INQUIRYAI_HOST_HERE")
30
+
31
+ from langchain_community.graphs import TigerGraph
32
+
33
+ graph = TigerGraph(conn)
34
+ result = graph.query("How many servers are there?")
35
+ print(result)
36
+ ```
37
+
langchain_md_files/integrations/llms/index.mdx ADDED
@@ -0,0 +1,30 @@
1
+ ---
2
+ sidebar_position: 0
3
+ sidebar_class_name: hidden
4
+ keywords: [compatibility]
5
+ ---
6
+
7
+ # LLMs
8
+
9
+ :::caution
10
+ You are currently on a page documenting the use of [text completion models](/docs/concepts/#llms). Many of the latest and most popular models are [chat completion models](/docs/concepts/#chat-models).
11
+
12
+ Unless you are specifically using more advanced prompting techniques, you are probably looking for [this page instead](/docs/integrations/chat/).
13
+ :::
14
+
15
+ [LLMs](/docs/concepts/#llms) are language models that take a string as input and return a string as output.
16
+
17
+ :::info
18
+
19
+ If you'd like to write your own LLM, see [this how-to](/docs/how_to/custom_llm/).
20
+ If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing/integrations/).
21
+
22
+ :::
23
+
24
+ import { CategoryTable, IndexTable } from "@theme/FeatureTables";
25
+
26
+ <CategoryTable category="llms" />
27
+
28
+ ## All LLMs
29
+
30
+ <IndexTable />
langchain_md_files/integrations/llms/layerup_security.mdx ADDED
1
+ # Layerup Security
2
+
3
+ The [Layerup Security](https://uselayerup.com) integration allows you to secure your calls to any LangChain LLM, LLM chain or LLM agent. The LLM object wraps around any existing LLM object, allowing for a secure layer between your users and your LLMs.
4
+
5
+ While the Layerup Security object is designed as an LLM, it is not actually an LLM itself; it simply wraps around an LLM, allowing it to provide the same functionality as the underlying LLM.
6
+
7
+ ## Setup
8
+ First, you'll need a Layerup Security account from the Layerup [website](https://uselayerup.com).
9
+
10
+ Next, create a project via the [dashboard](https://dashboard.uselayerup.com), and copy your API key. We recommend putting your API key in your project's environment.
11
+
12
+ Install the Layerup Security SDK:
13
+ ```bash
14
+ pip install LayerupSecurity
15
+ ```
16
+
17
+ And install LangChain Community:
18
+ ```bash
19
+ pip install langchain-community
20
+ ```
21
+
22
+ And now you're ready to start protecting your LLM calls with Layerup Security!
23
+
24
+ ```python
25
+ from langchain_community.llms.layerup_security import LayerupSecurity
26
+ from langchain_openai import OpenAI
27
+
28
+ # Create an instance of your favorite LLM
29
+ openai = OpenAI(
30
+ model_name="gpt-3.5-turbo",
31
+ openai_api_key="OPENAI_API_KEY",
32
+ )
33
+
34
+ # Configure Layerup Security
35
+ layerup_security = LayerupSecurity(
36
+ # Specify a LLM that Layerup Security will wrap around
37
+ llm=openai,
38
+
39
+ # Layerup API key, from the Layerup dashboard
40
+ layerup_api_key="LAYERUP_API_KEY",
41
+
42
+ # Custom base URL, if self hosting
43
+ layerup_api_base_url="https://api.uselayerup.com/v1",
44
+
45
+ # List of guardrails to run on prompts before the LLM is invoked
46
+ prompt_guardrails=[],
47
+
48
+ # List of guardrails to run on responses from the LLM
49
+ response_guardrails=["layerup.hallucination"],
50
+
51
+ # Whether or not to mask the prompt for PII & sensitive data before it is sent to the LLM
52
+ mask=False,
53
+
54
+ # Metadata for abuse tracking, customer tracking, and scope tracking.
55
+ metadata={"customer": "[email protected]"},
56
+
57
+ # Handler for guardrail violations on the prompt guardrails
58
+ handle_prompt_guardrail_violation=(
59
+ lambda violation: {
60
+ "role": "assistant",
61
+ "content": (
62
+ "There was sensitive data! I cannot respond. "
63
+ "Here's a dynamic canned response. Current date: {}"
64
+ ).format(datetime.now())
65
+ }
66
+ if violation["offending_guardrail"] == "layerup.sensitive_data"
67
+ else None
68
+ ),
69
+
70
+ # Handler for guardrail violations on the response guardrails
71
+ handle_response_guardrail_violation=(
72
+ lambda violation: {
73
+ "role": "assistant",
74
+ "content": (
75
+ "Custom canned response with dynamic data! "
76
+ "The violation rule was {}."
77
+ ).format(violation["offending_guardrail"])
78
+ }
79
+ ),
80
+ )
81
+
82
+ response = layerup_security.invoke(
83
+ "Summarize this message: my name is Bob Dylan. My SSN is 123-45-6789."
84
+ )
85
+ ```
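The violation handlers are ordinary callables that receive a violation dict and return either a replacement chat message or `None`. As an illustration only (the standalone function name is hypothetical; the `offending_guardrail` key mirrors the example above), the prompt handler's branching can be written as a named function and unit-tested without calling any LLM:

```python
# Hypothetical standalone version of the prompt-guardrail handler above,
# shown so its branching logic can be tested in isolation.
def handle_prompt_violation(violation: dict):
    if violation.get("offending_guardrail") == "layerup.sensitive_data":
        # Replace the LLM response with a canned message.
        return {
            "role": "assistant",
            "content": "There was sensitive data! I cannot respond.",
        }
    # Returning None lets the call proceed normally for other guardrails.
    return None

blocked = handle_prompt_violation({"offending_guardrail": "layerup.sensitive_data"})
passed = handle_prompt_violation({"offending_guardrail": "layerup.hallucination"})
```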
langchain_md_files/integrations/platforms/anthropic.mdx ADDED
@@ -0,0 +1,43 @@
# Anthropic

>[Anthropic](https://www.anthropic.com/) is an AI safety and research company, and is the creator of `Claude`.

This page covers all integrations between `Anthropic` models and `LangChain`.

## Installation and Setup

To use `Anthropic` models, you need to install the `langchain-anthropic` python package:

```bash
pip install -U langchain-anthropic
```

You need to set the `ANTHROPIC_API_KEY` environment variable.
You can get an Anthropic API key [here](https://console.anthropic.com/settings/keys).
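For example (replace the placeholder with your actual key):

```bash
export ANTHROPIC_API_KEY=your-api-key
```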

## Chat Models

### ChatAnthropic

See a [usage example](/docs/integrations/chat/anthropic).

```python
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model='claude-3-opus-20240229')
```

## LLMs

### [Legacy] AnthropicLLM

**NOTE**: `AnthropicLLM` only supports legacy `Claude 2` models.
To use the newest `Claude 3` models, please use `ChatAnthropic` instead.

See a [usage example](/docs/integrations/llms/anthropic).

```python
from langchain_anthropic import AnthropicLLM

model = AnthropicLLM(model='claude-2.1')
```
langchain_md_files/integrations/platforms/aws.mdx ADDED
@@ -0,0 +1,381 @@
# AWS

All `LangChain` integrations related to the [Amazon AWS](https://aws.amazon.com/) platform.

First-party AWS integrations are available in the `langchain_aws` package.

```bash
pip install langchain-aws
```

Community integrations are also available in the `langchain_community` package with the `boto3` optional dependency.

```bash
pip install langchain-community boto3
```

## Chat models

### Bedrock Chat

>[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of
> high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`,
> `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to
> build generative AI applications with security, privacy, and responsible AI. Using `Amazon Bedrock`,
> you can easily experiment with and evaluate top FMs for your use case, privately customize them with
> your data using techniques such as fine-tuning and `Retrieval Augmented Generation` (`RAG`), and build
> agents that execute tasks using your enterprise systems and data sources. Since `Amazon Bedrock` is
> serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy
> generative AI capabilities into your applications using the AWS services you are already familiar with.

See a [usage example](/docs/integrations/chat/bedrock).

```python
from langchain_aws import ChatBedrock
```

### Bedrock Converse

AWS has recently released the Bedrock Converse API, which provides a unified conversational interface for Bedrock models. This API does not yet support custom models. You can see a list of all [models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html). To improve reliability, the ChatBedrock integration will switch to using the Bedrock Converse API as soon as it has feature parity with the existing Bedrock API. Until then, a separate [ChatBedrockConverse](https://python.langchain.com/v0.2/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) integration has been released.

We recommend using `ChatBedrockConverse` for users who do not need to use custom models. See the [docs](/docs/integrations/chat/bedrock/#bedrock-converse-api) and [API reference](https://python.langchain.com/v0.2/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) for more detail.

```python
from langchain_aws import ChatBedrockConverse
```
## LLMs

### Bedrock

See a [usage example](/docs/integrations/llms/bedrock).

```python
from langchain_aws import BedrockLLM
```

### Amazon API Gateway

>[Amazon API Gateway](https://aws.amazon.com/api-gateway/) is a fully managed service that makes it easy for
> developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door"
> for applications to access data, business logic, or functionality from your backend services. Using
> `API Gateway`, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication
> applications. `API Gateway` supports containerized and serverless workloads, as well as web applications.
>
> `API Gateway` handles all the tasks involved in accepting and processing up to hundreds of thousands of
> concurrent API calls, including traffic management, CORS support, authorization and access control,
> throttling, monitoring, and API version management. `API Gateway` has no minimum fees or startup costs.
> You pay for the API calls you receive and the amount of data transferred out and, with the `API Gateway`
> tiered pricing model, you can reduce your cost as your API usage scales.

See a [usage example](/docs/integrations/llms/amazon_api_gateway).

```python
from langchain_community.llms import AmazonAPIGateway
```

### SageMaker Endpoint

>[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a system that can build, train, and deploy
> machine learning (ML) models with fully managed infrastructure, tools, and workflows.

We use `SageMaker` to host our model and expose it as the `SageMaker Endpoint`.

See a [usage example](/docs/integrations/llms/sagemaker).

```python
from langchain_aws import SagemakerEndpoint
```

## Embedding Models

### Bedrock

See a [usage example](/docs/integrations/text_embedding/bedrock).

```python
from langchain_community.embeddings import BedrockEmbeddings
```

### SageMaker Endpoint

See a [usage example](/docs/integrations/text_embedding/sagemaker-endpoint).

```python
from langchain_community.embeddings import SagemakerEndpointEmbeddings
from langchain_community.llms.sagemaker_endpoint import ContentHandlerBase
```

## Document loaders

### AWS S3 Directory and File

>[Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)
> is an object storage service.
>
>[AWS S3 Directory](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)
>
>[AWS S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html)

See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory).

See a [usage example for S3FileLoader](/docs/integrations/document_loaders/aws_s3_file).

```python
from langchain_community.document_loaders import S3DirectoryLoader, S3FileLoader
```

### Amazon Textract

>[Amazon Textract](https://docs.aws.amazon.com/managedservices/latest/userguide/textract.html) is a machine
> learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.

See a [usage example](/docs/integrations/document_loaders/amazon_textract).

```python
from langchain_community.document_loaders import AmazonTextractPDFLoader
```

### Amazon Athena

>[Amazon Athena](https://aws.amazon.com/athena/) is a serverless, interactive analytics service built
> on open-source frameworks, supporting open-table and file formats.

See a [usage example](/docs/integrations/document_loaders/athena).

```python
from langchain_community.document_loaders.athena import AthenaLoader
```

### AWS Glue

>The [AWS Glue Data Catalog](https://docs.aws.amazon.com/en_en/glue/latest/dg/catalog-and-crawler.html) is a centralized metadata
> repository that allows you to manage, access, and share metadata about
> your data stored in AWS. It acts as a metadata store for your data assets,
> enabling various AWS services and your applications to query and connect
> to the data they need efficiently.

See a [usage example](/docs/integrations/document_loaders/glue_catalog).

```python
from langchain_community.document_loaders.glue_catalog import GlueCatalogLoader
```

## Vector stores

### Amazon OpenSearch Service

> [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) performs
> interactive log analytics, real-time application monitoring, website search, and more. `OpenSearch` is
> an open source,
> distributed search and analytics suite derived from `Elasticsearch`. `Amazon OpenSearch Service` offers the
> latest versions of `OpenSearch`, support for many versions of `Elasticsearch`, as well as
> visualization capabilities powered by `OpenSearch Dashboards` and `Kibana`.

We need to install several python libraries.

```bash
pip install boto3 requests requests-aws4auth
```

See a [usage example](/docs/integrations/vectorstores/opensearch#using-aos-amazon-opensearch-service).

```python
from langchain_community.vectorstores import OpenSearchVectorSearch
```
### Amazon DocumentDB Vector Search

>[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.
> With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB.
> Vector search for Amazon DocumentDB combines the flexibility and rich querying capability of a JSON-based document database with the power of vector search.

#### Installation and Setup

See [detailed configuration instructions](/docs/integrations/vectorstores/documentdb).

We need to install the `pymongo` python package.

```bash
pip install pymongo
```

#### Deploy DocumentDB on AWS

[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.

AWS offers services for computing, databases, storage, analytics, and other functionality. For an overview of all AWS services, see [Cloud Computing with Amazon Web Services](https://aws.amazon.com/what-is-aws/).

See a [usage example](/docs/integrations/vectorstores/documentdb).

```python
from langchain_community.vectorstores import DocumentDBVectorSearch
```

### Amazon MemoryDB

[Amazon MemoryDB](https://aws.amazon.com/memorydb/) is a durable, in-memory database service that delivers ultra-fast performance. MemoryDB is compatible with Redis OSS, a popular open source data store,
enabling you to quickly build applications using the same flexible and friendly Redis OSS APIs and commands that they already use today.

The `InMemoryVectorStore` class provides a vector store to connect with Amazon MemoryDB.

```python
from langchain_aws.vectorstores.inmemorydb import InMemoryVectorStore

vds = InMemoryVectorStore.from_documents(
    chunks,
    embeddings,
    redis_url="rediss://cluster_endpoint:6379/ssl=True ssl_cert_reqs=none",
    vector_schema=vector_schema,
    index_name=INDEX_NAME,
)
```

See a [usage example](/docs/integrations/vectorstores/memorydb).

## Retrievers

### Amazon Kendra

> [Amazon Kendra](https://docs.aws.amazon.com/kendra/latest/dg/what-is-kendra.html) is an intelligent search service
> provided by `Amazon Web Services` (`AWS`). It utilizes advanced natural language processing (NLP) and machine
> learning algorithms to enable powerful search capabilities across various data sources within an organization.
> `Kendra` is designed to help users find the information they need quickly and accurately,
> improving productivity and decision-making.

> With `Kendra`, we can search across a wide range of content types, including documents, FAQs, knowledge bases,
> manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and
> contextual meanings to provide highly relevant search results.

We need to install the `langchain-aws` library.

```bash
pip install langchain-aws
```

See a [usage example](/docs/integrations/retrievers/amazon_kendra_retriever).

```python
from langchain_aws import AmazonKendraRetriever
```

### Amazon Bedrock (Knowledge Bases)

> [Knowledge bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an
> `Amazon Web Services` (`AWS`) offering which lets you quickly build RAG applications by using your
> private data to customize foundation model responses.

We need to install the `langchain-aws` library.

```bash
pip install langchain-aws
```

See a [usage example](/docs/integrations/retrievers/bedrock).

```python
from langchain_aws import AmazonKnowledgeBasesRetriever
```

## Tools

### AWS Lambda

>[`Amazon AWS Lambda`](https://aws.amazon.com/pm/lambda/) is a serverless computing service provided by
> `Amazon Web Services` (`AWS`). It helps developers to build and run applications and services without
> provisioning or managing servers. This serverless architecture enables you to focus on writing and
> deploying code, while AWS automatically takes care of scaling, patching, and managing the
> infrastructure required to run your applications.

We need to install the `boto3` python library.

```bash
pip install boto3
```

See a [usage example](/docs/integrations/tools/awslambda).

## Memory

### AWS DynamoDB

>[AWS DynamoDB](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/index.html)
> is a fully managed `NoSQL` database service that provides fast and predictable performance with seamless scalability.

We have to configure the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).

We need to install the `boto3` library.

```bash
pip install boto3
```

See a [usage example](/docs/integrations/memory/aws_dynamodb).

```python
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
```

## Graphs

### Amazon Neptune with Cypher

See a [usage example](/docs/integrations/graphs/amazon_neptune_open_cypher).

```python
from langchain_community.graphs import NeptuneGraph
from langchain_community.graphs import NeptuneAnalyticsGraph
from langchain_community.chains.graph_qa.neptune_cypher import NeptuneOpenCypherQAChain
```

### Amazon Neptune with SPARQL

See a [usage example](/docs/integrations/graphs/amazon_neptune_sparql).

```python
from langchain_community.graphs import NeptuneRdfGraph
from langchain_community.chains.graph_qa.neptune_sparql import NeptuneSparqlQAChain
```

## Callbacks

### Bedrock token usage

```python
from langchain_community.callbacks.bedrock_anthropic_callback import BedrockAnthropicTokenUsageCallbackHandler
```

### SageMaker Tracking

>[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly
> and easily build, train and deploy machine learning (ML) models.

>[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability
> of `Amazon SageMaker` that lets you organize, track,
> compare and evaluate ML experiments and model versions.

We need to install several python libraries.

```bash
pip install google-search-results sagemaker
```

See a [usage example](/docs/integrations/callbacks/sagemaker_tracking).

```python
from langchain_community.callbacks import SageMakerCallbackHandler
```

## Chains

### Amazon Comprehend Moderation Chain

>[Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service that
> uses machine learning to uncover valuable insights and connections in text.

We need to install the `boto3` and `nltk` libraries.

```bash
pip install boto3 nltk
```

See a [usage example](https://python.langchain.com/v0.1/docs/guides/productionization/safety/amazon_comprehend_chain/).

```python
from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain
```
langchain_md_files/integrations/platforms/google.mdx ADDED
@@ -0,0 +1,1079 @@
# Google

All functionality related to [Google Cloud Platform](https://cloud.google.com/) and other `Google` products.

## Chat models

We recommend that individual developers start with the Gemini API (`langchain-google-genai`) and move to Vertex AI (`langchain-google-vertexai`) when they need access to commercial support and higher rate limits. If you're already Cloud-friendly or Cloud-native, you can get started in Vertex AI straight away.
Please see [here](https://ai.google.dev/gemini-api/docs/migrate-to-cloud) for more information.

### Google Generative AI

Access GoogleAI `Gemini` models such as `gemini-pro` and `gemini-pro-vision` through the `ChatGoogleGenerativeAI` class.

```bash
pip install -U langchain-google-genai
```

Configure your API key.

```bash
export GOOGLE_API_KEY=your-api-key
```
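The key can equivalently be set from Python before the client is constructed; a minimal sketch (the placeholder value is illustrative — in practice, read the real key from a secret store):

```python
import os

# Set the key only if it is not already present in the environment.
os.environ.setdefault("GOOGLE_API_KEY", "your-api-key")
```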

```python
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")
```

The Gemini vision model supports image inputs when provided in a single chat message.

```python
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")

message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "What's in this image?",
        },  # You can optionally provide text parts
        {"type": "image_url", "image_url": "https://picsum.photos/seed/picsum/200/300"},
    ]
)
llm.invoke([message])
```

The value of `image_url` can be any of the following:

- A public image URL
- A gcs file (e.g., "gcs://path/to/file.png")
- A local file path
- A base64 encoded image (e.g., `data:image/png;base64,abcd124`)
- A PIL image

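For the base64 option, a data URL can be built with the standard library; a minimal sketch (the one-pixel-header bytes below are only a stand-in for a real image file):

```python
import base64

# Stand-in for bytes read from a real image file,
# e.g. open("photo.png", "rb").read()
png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16

encoded = base64.b64encode(png_bytes).decode("utf-8")
image_url = f"data:image/png;base64,{encoded}"
# This string can be passed as the "image_url" value in the message above.
```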
59
+ ### Vertex AI
60
+
61
+ Access PaLM chat models like `chat-bison` and `codechat-bison` via Google Cloud.
62
+
63
+ We need to install `langchain-google-vertexai` python package.
64
+
65
+ ```bash
66
+ pip install langchain-google-vertexai
67
+ ```
68
+
69
+ See a [usage example](/docs/integrations/chat/google_vertex_ai_palm).
70
+
71
+ ```python
72
+ from langchain_google_vertexai import ChatVertexAI
73
+ ```
74
+
75
+ ### Chat Anthropic on Vertex AI
76
+
77
+ See a [usage example](/docs/integrations/llms/google_vertex_ai_palm).
78
+
79
+ ```python
80
+ from langchain_google_vertexai.model_garden import ChatAnthropicVertex
81
+ ```
82
+
83
+ ## LLMs
84
+
85
+ ### Google Generative AI
86
+
87
+ Access GoogleAI `Gemini` models such as `gemini-pro` and `gemini-pro-vision` through the `GoogleGenerativeAI` class.
88
+
89
+ Install python package.
90
+
91
+ ```bash
92
+ pip install langchain-google-genai
93
+ ```
94
+
95
+ See a [usage example](/docs/integrations/llms/google_ai).
96
+
97
+ ```python
98
+ from langchain_google_genai import GoogleGenerativeAI
99
+ ```
100
+
101
+ ### Vertex AI Model Garden
102
+
103
+ Access `PaLM` and hundreds of OSS models via `Vertex AI Model Garden` service.
104
+
105
+ We need to install `langchain-google-vertexai` python package.
106
+
107
+ ```bash
108
+ pip install langchain-google-vertexai
109
+ ```
110
+
111
+ See a [usage example](/docs/integrations/llms/google_vertex_ai_palm#vertex-model-garden).
112
+
113
+ ```python
114
+ from langchain_google_vertexai import VertexAIModelGarden
115
+ ```
116
+
117
+ ## Embedding models
118
+
119
+ ### Google Generative AI Embeddings
120
+
121
+ See a [usage example](/docs/integrations/text_embedding/google_generative_ai).
122
+
123
+ ```bash
124
+ pip install -U langchain-google-genai
125
+ ```
126
+
127
+ Configure your API key.
128
+
129
+ ```bash
130
+ export GOOGLE_API_KEY=your-api-key
131
+ ```
132
+
133
+ ```python
134
+ from langchain_google_genai import GoogleGenerativeAIEmbeddings
135
+ ```
136
+
137
+ ### Vertex AI
138
+
139
+ We need to install `langchain-google-vertexai` python package.
140
+
141
+ ```bash
142
+ pip install langchain-google-vertexai
143
+ ```
144
+
145
+ See a [usage example](/docs/integrations/text_embedding/google_vertex_ai_palm).
146
+
147
+ ```python
148
+ from langchain_google_vertexai import VertexAIEmbeddings
149
+ ```
150
+
151
+ ### Palm Embedding
152
+
153
+ We need to install `langchain-community` python package.
154
+
155
+ ```bash
156
+ pip install langchain-community
157
+ ```
158
+
159
+ ```python
160
+ from langchain_community.embeddings.google_palm import GooglePalmEmbeddings
161
+ ```
162
+
163
+ ## Document Loaders
164
+
165
+ ### AlloyDB for PostgreSQL
166
+
167
+ > [Google Cloud AlloyDB](https://cloud.google.com/alloydb) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability on Google Cloud. AlloyDB is 100% compatible with PostgreSQL.
168
+
169
+ Install the python package:
170
+
171
+ ```bash
172
+ pip install langchain-google-alloydb-pg
173
+ ```
174
+
175
+ See [usage example](/docs/integrations/document_loaders/google_alloydb).
176
+
177
+ ```python
178
+ from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBLoader
179
+ ```
180
+
181
+ ### BigQuery
182
+
183
+ > [Google Cloud BigQuery](https://cloud.google.com/bigquery) is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data in Google Cloud.
184
+
185
+ We need to install `langchain-google-community` with Big Query dependencies:
186
+
187
+ ```bash
188
+ pip install langchain-google-community[bigquery]
189
+ ```
190
+
191
+ See a [usage example](/docs/integrations/document_loaders/google_bigquery).
192
+
193
+ ```python
194
+ from langchain_google_community import BigQueryLoader
195
+ ```
196
+
197
+ ### Bigtable
198
+
199
+ > [Google Cloud Bigtable](https://cloud.google.com/bigtable/docs) is Google's fully managed NoSQL Big Data database service in Google Cloud.
200
+ Install the python package:
201
+
202
+ ```bash
203
+ pip install langchain-google-bigtable
204
+ ```
205
+
206
+ See [Googel Cloud usage example](/docs/integrations/document_loaders/google_bigtable).
207
+
208
+ ```python
209
+ from langchain_google_bigtable import BigtableLoader
210
+ ```
211
+
212
+ ### Cloud SQL for MySQL
213
+
214
+ > [Google Cloud SQL for MySQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud.
215
+ Install the python package:
216
+
217
+ ```bash
218
+ pip install langchain-google-cloud-sql-mysql
219
+ ```
220
+
221
+ See [usage example](/docs/integrations/document_loaders/google_cloud_sql_mysql).
222
+
223
+ ```python
224
+ from langchain_google_cloud_sql_mysql import MySQLEngine, MySQLDocumentLoader
225
+ ```
226
+
227
+ ### Cloud SQL for SQL Server
228
+
229
+ > [Google Cloud SQL for SQL Server](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your SQL Server databases on Google Cloud.
230
+ Install the python package:
231
+
232
+ ```bash
233
+ pip install langchain-google-cloud-sql-mssql
234
+ ```
235
+
236
+ See [usage example](/docs/integrations/document_loaders/google_cloud_sql_mssql).
237
+
238
+ ```python
239
+ from langchain_google_cloud_sql_mssql import MSSQLEngine, MSSQLLoader
240
+ ```
241
+
242
+ ### Cloud SQL for PostgreSQL
243
+
244
+ > [Google Cloud SQL for PostgreSQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud.
245
+ Install the python package:
246
+
247
+ ```bash
248
+ pip install langchain-google-cloud-sql-pg
249
+ ```
250
+
251
+ See [usage example](/docs/integrations/document_loaders/google_cloud_sql_pg).
252
+
253
+ ```python
254
+ from langchain_google_cloud_sql_pg import PostgresEngine, PostgresLoader
255
+ ```
256
+
257
+ ### Cloud Storage
258
+
259
+ >[Cloud Storage](https://en.wikipedia.org/wiki/Google_Cloud_Storage) is a managed service for storing unstructured data in Google Cloud.
260
+
261
+ We need to install `langchain-google-community` with Google Cloud Storage dependencies.
262
+
263
+ ```bash
264
+ pip install langchain-google-community[gcs]
265
+ ```
266
+
267
+ There are two loaders for the `Google Cloud Storage`: the `Directory` and the `File` loaders.
268
+
269
+ See a [usage example](/docs/integrations/document_loaders/google_cloud_storage_directory).
270
+
271
+ ```python
272
+ from langchain_google_community import GCSDirectoryLoader
273
+ ```
274
+ See a [usage example](/docs/integrations/document_loaders/google_cloud_storage_file).
275
+
276
+ ```python
277
+ from langchain_google_community import GCSFileLoader
278
+ ```
279
+
280
+ ### Cloud Vision loader
281
+
282
+ Install the python package:
283
+
284
+ ```bash
285
+ pip install langchain-google-community[vision]
286
+ ```
287
+
288
+ ```python
289
+ from langchain_google_community.vision import CloudVisionLoader
290
+ ```
291
+
292
### El Carro for Oracle Workloads

> Google [El Carro Oracle Operator](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator)
> offers a way to run Oracle databases in Kubernetes as a portable, open source,
> community-driven, no vendor lock-in container orchestration system.

```bash
pip install langchain-google-el-carro
```

See [usage example](/docs/integrations/document_loaders/google_el_carro).

```python
from langchain_google_el_carro import ElCarroLoader
```

### Google Drive

>[Google Drive](https://en.wikipedia.org/wiki/Google_Drive) is a file storage and synchronization service developed by Google.

Currently, only `Google Docs` are supported.

We need to install `langchain-google-community` with Google Drive dependencies.

```bash
pip install langchain-google-community[drive]
```

See a [usage example and authorization instructions](/docs/integrations/document_loaders/google_drive).

```python
from langchain_google_community import GoogleDriveLoader
```

### Firestore (Native Mode)

> [Google Cloud Firestore](https://cloud.google.com/firestore/docs/) is a NoSQL document database built for automatic scaling, high performance, and ease of application development.

Install the python package:

```bash
pip install langchain-google-firestore
```

See [usage example](/docs/integrations/document_loaders/google_firestore).

```python
from langchain_google_firestore import FirestoreLoader
```

### Firestore (Datastore Mode)

> [Google Cloud Firestore in Datastore mode](https://cloud.google.com/datastore/docs) is a NoSQL document database built for automatic scaling, high performance, and ease of application development.
> Firestore is the newest version of Datastore and introduces several improvements over Datastore.

Install the python package:

```bash
pip install langchain-google-datastore
```

See [usage example](/docs/integrations/document_loaders/google_datastore).

```python
from langchain_google_datastore import DatastoreLoader
```

### Memorystore for Redis

> [Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis) is a fully managed Redis service for Google Cloud. Applications running on Google Cloud can achieve extreme performance by leveraging the highly scalable, available, secure Redis service without the burden of managing complex Redis deployments.

Install the python package:

```bash
pip install langchain-google-memorystore-redis
```

See [usage example](/docs/integrations/document_loaders/google_memorystore_redis).

```python
from langchain_google_memorystore_redis import MemorystoreLoader
```

### Spanner

> [Google Cloud Spanner](https://cloud.google.com/spanner/docs) is a fully managed, mission-critical, relational database service on Google Cloud that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011 with extensions) and PostgreSQL.

Install the python package:

```bash
pip install langchain-google-spanner
```

See [usage example](/docs/integrations/document_loaders/google_spanner).

```python
from langchain_google_spanner import SpannerLoader
```

### Speech-to-Text

> [Google Cloud Speech-to-Text](https://cloud.google.com/speech-to-text) is an audio transcription API powered by Google's speech recognition models in Google Cloud.

This document loader transcribes audio files and outputs the text results as Documents.

First, we need to install `langchain-google-community` with speech-to-text dependencies.

```bash
pip install langchain-google-community[speech]
```

See a [usage example and authorization instructions](/docs/integrations/document_loaders/google_speech_to_text).

```python
from langchain_google_community import SpeechToTextLoader
```

## Document Transformers

### Document AI

>[Google Cloud Document AI](https://cloud.google.com/document-ai/docs/overview) is a Google Cloud
> service that transforms unstructured data from documents into structured data, making it easier
> to understand, analyze, and consume.

We need to set up a [`GCS` bucket and create your own OCR processor](https://cloud.google.com/document-ai/docs/create-processor).
The `GCS_OUTPUT_PATH` should be a path to a folder on GCS (starting with `gs://`),
and the processor name should look like `projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID`.
We can get it either programmatically or copy it from the `Prediction endpoint` section of the `Processor details`
tab in the Google Cloud Console.

```bash
pip install langchain-google-community[docai]
```

See a [usage example](/docs/integrations/document_transformers/google_docai).

```python
from langchain_core.document_loaders.blob_loaders import Blob
from langchain_google_community import DocAIParser
```

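The processor name format above is easy to get wrong. Here is a small, purely illustrative pure-Python sketch (the helper names are our own, not part of the LangChain or Document AI APIs) that assembles the fully qualified name and checks the `gs://` convention for `GCS_OUTPUT_PATH`:

```python
# Illustrative helpers only -- these names are not part of the LangChain
# or Document AI APIs; they just encode the conventions described above.
def processor_name(project_number: str, location: str, processor_id: str) -> str:
    # Fully qualified processor name expected by Document AI.
    return f"projects/{project_number}/locations/{location}/processors/{processor_id}"


def is_valid_gcs_output_path(path: str) -> bool:
    # GCS_OUTPUT_PATH must point to a folder on GCS, i.e. start with gs://
    return path.startswith("gs://")


print(processor_name("123456789", "us", "0123456789abcdef"))
```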
### Google Translate

> [Google Translate](https://translate.google.com/) is a multilingual neural machine
> translation service developed by Google to translate text, documents and websites
> from one language into another.

The `GoogleTranslateTransformer` allows you to translate text and HTML with the [Google Cloud Translation API](https://cloud.google.com/translate).

First, we need to install `langchain-google-community` with translate dependencies.

```bash
pip install langchain-google-community[translate]
```

See a [usage example and authorization instructions](/docs/integrations/document_transformers/google_translate).

```python
from langchain_google_community import GoogleTranslateTransformer
```

## Vector Stores

### AlloyDB for PostgreSQL

> [Google Cloud AlloyDB](https://cloud.google.com/alloydb) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability on Google Cloud. AlloyDB is 100% compatible with PostgreSQL.

Install the python package:

```bash
pip install langchain-google-alloydb-pg
```

See [usage example](/docs/integrations/vectorstores/google_alloydb).

```python
from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBVectorStore
```

### BigQuery Vector Search

> [Google Cloud BigQuery](https://cloud.google.com/bigquery) is a serverless and cost-effective enterprise data warehouse in Google Cloud.
>
> [Google Cloud BigQuery Vector Search](https://cloud.google.com/bigquery/docs/vector-search-intro)
> lets you use GoogleSQL to do semantic search, using vector indexes for fast but approximate results,
> or using brute force for exact results.
>
> It can calculate Euclidean or Cosine distance. With LangChain, we default to Euclidean distance.

We need to install the `google-cloud-bigquery` python package.

```bash
pip install google-cloud-bigquery
```

See a [usage example](/docs/integrations/vectorstores/google_bigquery_vector_search).

```python
from langchain.vectorstores import BigQueryVectorSearch
```

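For intuition about the two distance measures mentioned above, here is a small pure-Python sketch (independent of BigQuery and LangChain) of Euclidean and cosine distance:

```python
import math


def euclidean_distance(a, b):
    # Straight-line distance between two vectors (the LangChain default).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def cosine_distance(a, b):
    # 1 - cosine similarity: depends only on the angle between the vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)


a, b = [1.0, 0.0], [2.0, 0.0]
print(euclidean_distance(a, b))  # the vectors differ in magnitude
print(cosine_distance(a, b))     # but point in the same direction
```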
### Memorystore for Redis

> [Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis) is a fully managed Redis service for Google Cloud. Applications running on Google Cloud can achieve extreme performance by leveraging the highly scalable, available, secure Redis service without the burden of managing complex Redis deployments.

Install the python package:

```bash
pip install langchain-google-memorystore-redis
```

See [usage example](/docs/integrations/vectorstores/google_memorystore_redis).

```python
from langchain_google_memorystore_redis import RedisVectorStore
```

### Spanner

> [Google Cloud Spanner](https://cloud.google.com/spanner/docs) is a fully managed, mission-critical, relational database service on Google Cloud that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011 with extensions) and PostgreSQL.

Install the python package:

```bash
pip install langchain-google-spanner
```

See [usage example](/docs/integrations/vectorstores/google_spanner).

```python
from langchain_google_spanner import SpannerVectorStore
```

### Firestore (Native Mode)

> [Google Cloud Firestore](https://cloud.google.com/firestore/docs/) is a NoSQL document database built for automatic scaling, high performance, and ease of application development.

Install the python package:

```bash
pip install langchain-google-firestore
```

See [usage example](/docs/integrations/vectorstores/google_firestore).

```python
from langchain_google_firestore import FirestoreVectorStore
```

### Cloud SQL for MySQL

> [Google Cloud SQL for MySQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud.

Install the python package:

```bash
pip install langchain-google-cloud-sql-mysql
```

See [usage example](/docs/integrations/vectorstores/google_cloud_sql_mysql).

```python
from langchain_google_cloud_sql_mysql import MySQLEngine, MySQLVectorStore
```

### Cloud SQL for PostgreSQL

> [Google Cloud SQL for PostgreSQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud.

Install the python package:

```bash
pip install langchain-google-cloud-sql-pg
```

See [usage example](/docs/integrations/vectorstores/google_cloud_sql_pg).

```python
from langchain_google_cloud_sql_pg import PostgresEngine, PostgresVectorStore
```

### Vertex AI Vector Search

> [Google Cloud Vertex AI Vector Search](https://cloud.google.com/vertex-ai/docs/vector-search/overview),
> formerly known as `Vertex AI Matching Engine`, provides the industry's leading high-scale,
> low-latency vector database. These vector databases are commonly
> referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.

Install the python package:

```bash
pip install langchain-google-vertexai
```

See a [usage example](/docs/integrations/vectorstores/google_vertex_ai_vector_search).

```python
from langchain_google_vertexai import VectorSearchVectorStore
```

### ScaNN

>[Google ScaNN](https://github.com/google-research/google-research/tree/master/scann)
> (Scalable Nearest Neighbors) is a python package.
>
>`ScaNN` is a method for efficient vector similarity search at scale.
>
>`ScaNN` includes search space pruning and quantization for Maximum Inner
> Product Search and also supports other distance functions such as
> Euclidean distance. The implementation is optimized for x86 processors
> with AVX2 support. See its [Google Research github](https://github.com/google-research/google-research/tree/master/scann)
> for more details.

We need to install the `scann` python package.

```bash
pip install scann
```

See a [usage example](/docs/integrations/vectorstores/scann).

```python
from langchain_community.vectorstores import ScaNN
```

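As a baseline for what `ScaNN` accelerates, exact Maximum Inner Product Search is just a full scan of the corpus. The sketch below (illustrative only, unrelated to ScaNN's actual API) shows the O(n·d) computation that ScaNN's pruning and quantization are designed to avoid:

```python
def brute_force_nearest(query, vectors):
    # Exact Maximum Inner Product Search by scanning every vector:
    # the O(n * d) baseline that ScaNN's pruning and quantization avoid.
    best_idx, best_score = -1, float("-inf")
    for i, v in enumerate(vectors):
        score = sum(q * x for q, x in zip(query, v))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score


corpus = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(brute_force_nearest([1.0, 0.2], corpus))
```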
## Retrievers

### Google Drive

We need to install several python packages.

```bash
pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
```

See a [usage example and authorization instructions](/docs/integrations/retrievers/google_drive).

```python
from langchain_googledrive.retrievers import GoogleDriveRetriever
```

### Vertex AI Search

> [Vertex AI Search](https://cloud.google.com/generative-ai-app-builder/docs/introduction)
> from Google Cloud allows developers to quickly build generative AI powered search engines for customers and employees.

We need to install the `google-cloud-discoveryengine` python package.

```bash
pip install google-cloud-discoveryengine
```

See a [usage example](/docs/integrations/retrievers/google_vertex_ai_search).

```python
from langchain.retrievers import GoogleVertexAISearchRetriever
```

### Document AI Warehouse

> [Document AI Warehouse](https://cloud.google.com/document-ai-warehouse)
> from Google Cloud allows enterprises to search, store, govern, and manage documents and their AI-extracted
> data and metadata in a single platform.

Note: `GoogleDocumentAIWarehouseRetriever` is deprecated, use `DocumentAIWarehouseRetriever` (see below).

```python
from langchain.retrievers import GoogleDocumentAIWarehouseRetriever

docai_wh_retriever = GoogleDocumentAIWarehouseRetriever(
    project_number=...
)
query = ...
documents = docai_wh_retriever.invoke(
    query, user_ldap=...
)
```

```python
from langchain_google_community.documentai_warehouse import DocumentAIWarehouseRetriever
```

## Tools

### Text-to-Speech

>[Google Cloud Text-to-Speech](https://cloud.google.com/text-to-speech) is a Google Cloud service that enables developers to
> synthesize natural-sounding speech with 100+ voices, available in multiple languages and variants.
> It applies DeepMind’s groundbreaking research in WaveNet and Google’s powerful neural networks
> to deliver the highest fidelity possible.

We need to install a python package.

```bash
pip install google-cloud-text-to-speech
```

See a [usage example and authorization instructions](/docs/integrations/tools/google_cloud_texttospeech).

```python
from langchain_google_community import TextToSpeechTool
```

### Google Drive

We need to install several python packages.

```bash
pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
```

See a [usage example and authorization instructions](/docs/integrations/tools/google_drive).

```python
from langchain_community.utilities.google_drive import GoogleDriveAPIWrapper
from langchain_community.tools.google_drive.tool import GoogleDriveSearchTool
```

### Google Finance

We need to install a python package.

```bash
pip install google-search-results
```

See a [usage example and authorization instructions](/docs/integrations/tools/google_finance).

```python
from langchain_community.tools.google_finance import GoogleFinanceQueryRun
from langchain_community.utilities.google_finance import GoogleFinanceAPIWrapper
```

### Google Jobs

We need to install a python package.

```bash
pip install google-search-results
```

See a [usage example and authorization instructions](/docs/integrations/tools/google_jobs).

```python
from langchain_community.tools.google_jobs import GoogleJobsQueryRun
from langchain_community.utilities.google_jobs import GoogleJobsAPIWrapper
```

### Google Lens

See a [usage example and authorization instructions](/docs/integrations/tools/google_lens).

```python
from langchain_community.tools.google_lens import GoogleLensQueryRun
from langchain_community.utilities.google_lens import GoogleLensAPIWrapper
```

### Google Places

We need to install a python package.

```bash
pip install googlemaps
```

See a [usage example and authorization instructions](/docs/integrations/tools/google_places).

```python
from langchain.tools import GooglePlacesTool
```

### Google Scholar

We need to install a python package.

```bash
pip install google-search-results
```

See a [usage example and authorization instructions](/docs/integrations/tools/google_scholar).

```python
from langchain_community.tools.google_scholar import GoogleScholarQueryRun
from langchain_community.utilities.google_scholar import GoogleScholarAPIWrapper
```

### Google Search

- Set up a Custom Search Engine, following [these instructions](https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search)
- Get an API Key and Custom Search Engine ID from the previous step, and set them as environment variables
  `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` respectively.

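Both environment variables from the steps above can also be set from Python before the wrapper is constructed; the values below are placeholders, not working credentials:

```python
import os

# Placeholder values for illustration -- substitute the API key and
# Custom Search Engine ID obtained in the setup steps above.
os.environ["GOOGLE_API_KEY"] = "your-google-api-key"
os.environ["GOOGLE_CSE_ID"] = "your-custom-search-engine-id"
```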
```python
from langchain_google_community import GoogleSearchAPIWrapper
```

For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_search).

We can easily load this wrapper as a Tool (to use with an Agent):

```python
from langchain.agents import load_tools
tools = load_tools(["google-search"])
```

### Google Trends

We need to install a python package.

```bash
pip install google-search-results
```

See a [usage example and authorization instructions](/docs/integrations/tools/google_trends).

```python
from langchain_community.tools.google_trends import GoogleTrendsQueryRun
from langchain_community.utilities.google_trends import GoogleTrendsAPIWrapper
```

## Toolkits

### Gmail

> [Google Gmail](https://en.wikipedia.org/wiki/Gmail) is a free email service provided by Google.

This toolkit works with emails through the `Gmail API`.

We need to install `langchain-google-community` with required dependencies:

```bash
pip install langchain-google-community[gmail]
```

See a [usage example and authorization instructions](/docs/integrations/tools/gmail).

```python
from langchain_google_community import GmailToolkit
```

## Memory

### AlloyDB for PostgreSQL

> [AlloyDB for PostgreSQL](https://cloud.google.com/alloydb) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability on Google Cloud. AlloyDB is 100% compatible with PostgreSQL.

Install the python package:

```bash
pip install langchain-google-alloydb-pg
```

See [usage example](/docs/integrations/memory/google_alloydb).

```python
from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBChatMessageHistory
```

### Cloud SQL for PostgreSQL

> [Cloud SQL for PostgreSQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud.

Install the python package:

```bash
pip install langchain-google-cloud-sql-pg
```

See [usage example](/docs/integrations/memory/google_sql_pg).

```python
from langchain_google_cloud_sql_pg import PostgresEngine, PostgresChatMessageHistory
```

### Cloud SQL for MySQL

> [Cloud SQL for MySQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud.

Install the python package:

```bash
pip install langchain-google-cloud-sql-mysql
```

See [usage example](/docs/integrations/memory/google_sql_mysql).

```python
from langchain_google_cloud_sql_mysql import MySQLEngine, MySQLChatMessageHistory
```

### Cloud SQL for SQL Server

> [Cloud SQL for SQL Server](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your SQL Server databases on Google Cloud.

Install the python package:

```bash
pip install langchain-google-cloud-sql-mssql
```

See [usage example](/docs/integrations/memory/google_sql_mssql).

```python
from langchain_google_cloud_sql_mssql import MSSQLEngine, MSSQLChatMessageHistory
```

### Spanner

> [Google Cloud Spanner](https://cloud.google.com/spanner/docs) is a fully managed, mission-critical, relational database service on Google Cloud that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011 with extensions) and PostgreSQL.

Install the python package:

```bash
pip install langchain-google-spanner
```

See [usage example](/docs/integrations/memory/google_spanner).

```python
from langchain_google_spanner import SpannerChatMessageHistory
```

### Memorystore for Redis

> [Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis) is a fully managed Redis service for Google Cloud. Applications running on Google Cloud can achieve extreme performance by leveraging the highly scalable, available, secure Redis service without the burden of managing complex Redis deployments.

Install the python package:

```bash
pip install langchain-google-memorystore-redis
```

See [usage example](/docs/integrations/memory/google_memorystore_redis).

```python
from langchain_google_memorystore_redis import MemorystoreChatMessageHistory
```

### Bigtable

> [Google Cloud Bigtable](https://cloud.google.com/bigtable/docs) is Google's fully managed NoSQL Big Data database service in Google Cloud.

Install the python package:

```bash
pip install langchain-google-bigtable
```

See [usage example](/docs/integrations/memory/google_bigtable).

```python
from langchain_google_bigtable import BigtableChatMessageHistory
```

### Firestore (Native Mode)

> [Google Cloud Firestore](https://cloud.google.com/firestore/docs/) is a NoSQL document database built for automatic scaling, high performance, and ease of application development.

Install the python package:

```bash
pip install langchain-google-firestore
```

See [usage example](/docs/integrations/memory/google_firestore).

```python
from langchain_google_firestore import FirestoreChatMessageHistory
```

### Firestore (Datastore Mode)

> [Google Cloud Firestore in Datastore mode](https://cloud.google.com/datastore/docs) is a NoSQL document database built for automatic scaling, high performance, and ease of application development.
> Firestore is the newest version of Datastore and introduces several improvements over Datastore.

Install the python package:

```bash
pip install langchain-google-datastore
```

See [usage example](/docs/integrations/memory/google_firestore_datastore).

```python
from langchain_google_datastore import DatastoreChatMessageHistory
```

### El Carro: The Oracle Operator for Kubernetes

> Google [El Carro Oracle Operator for Kubernetes](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator)
> offers a way to run `Oracle` databases in `Kubernetes` as a portable, open source,
> community-driven, no vendor lock-in container orchestration system.

```bash
pip install langchain-google-el-carro
```

See [usage example](/docs/integrations/memory/google_el_carro).

```python
from langchain_google_el_carro import ElCarroChatMessageHistory
```

## Chat Loaders

### Gmail

> [Gmail](https://en.wikipedia.org/wiki/Gmail) is a free email service provided by Google.

This loader works with emails through the `Gmail API`.

We need to install `langchain-google-community` with the required dependencies.

```bash
pip install langchain-google-community[gmail]
```

See a [usage example and authorization instructions](/docs/integrations/chat_loaders/gmail).

```python
from langchain_google_community import GMailLoader
```

## 3rd Party Integrations

### SearchApi

>[SearchApi](https://www.searchapi.io/) provides a 3rd-party API to access Google search results, YouTube search & transcripts, and other Google-related engines.

See [usage examples and authorization instructions](/docs/integrations/tools/searchapi).

```python
from langchain_community.utilities import SearchApiAPIWrapper
```

### SerpApi

>[SerpApi](https://serpapi.com/) provides a 3rd-party API to access Google search results.

See a [usage example and authorization instructions](/docs/integrations/tools/serpapi).

```python
from langchain_community.utilities import SerpAPIWrapper
```

### Serper.dev

See a [usage example and authorization instructions](/docs/integrations/tools/google_serper).

```python
from langchain_community.utilities import GoogleSerperAPIWrapper
```

### YouTube

>The [YouTube Search](https://github.com/joetats/youtube_search) package searches `YouTube` videos while avoiding their heavily rate-limited API.
>
>It uses the form on the YouTube homepage and scrapes the resulting page.

We need to install a python package.

```bash
pip install youtube_search
```

See a [usage example](/docs/integrations/tools/youtube).

```python
from langchain.tools import YouTubeSearchTool
```

### YouTube audio

>[YouTube](https://www.youtube.com/) is an online video sharing and social media platform created by `Google`.

Use `YoutubeAudioLoader` to fetch / download the audio files.

Then, use `OpenAIWhisperParser` to transcribe them to text.

We need to install several python packages.

```bash
pip install yt_dlp pydub librosa
```

See a [usage example and authorization instructions](/docs/integrations/document_loaders/youtube_audio).

```python
from langchain_community.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
from langchain_community.document_loaders.parsers import OpenAIWhisperParser, OpenAIWhisperParserLocal
```

### YouTube transcripts

>[YouTube](https://www.youtube.com/) is an online video sharing and social media platform created by `Google`.

We need to install the `youtube-transcript-api` python package.

```bash
pip install youtube-transcript-api
```

See a [usage example](/docs/integrations/document_loaders/youtube_transcript).

```python
from langchain_community.document_loaders import YoutubeLoader
```
langchain_md_files/integrations/platforms/huggingface.mdx ADDED
@@ -0,0 +1,126 @@
# Hugging Face

All functionality related to the [Hugging Face Platform](https://huggingface.co/).

## Installation

Most of the Hugging Face integrations are available in the `langchain-huggingface` package.

```bash
pip install langchain-huggingface
```

## Chat models

### Models from Hugging Face

We can use the `Hugging Face` LLM classes or directly use the `ChatHuggingFace` class.

See a [usage example](/docs/integrations/chat/huggingface).

```python
from langchain_huggingface import ChatHuggingFace
```

## LLMs

### Hugging Face Local Pipelines

Hugging Face models can be run locally through the `HuggingFacePipeline` class.

See a [usage example](/docs/integrations/llms/huggingface_pipelines).

```python
from langchain_huggingface import HuggingFacePipeline
```

## Embedding Models

### HuggingFaceEmbeddings

See a [usage example](/docs/integrations/text_embedding/huggingfacehub).

```python
from langchain_huggingface import HuggingFaceEmbeddings
```

### HuggingFaceInstructEmbeddings

See a [usage example](/docs/integrations/text_embedding/instruct_embeddings).

```python
from langchain_community.embeddings import HuggingFaceInstructEmbeddings
```

### HuggingFaceBgeEmbeddings

>[BGE models on Hugging Face](https://huggingface.co/BAAI/bge-large-en) are [among the best open-source embedding models](https://huggingface.co/spaces/mteb/leaderboard).
>BGE models are created by the [Beijing Academy of Artificial Intelligence (BAAI)](https://en.wikipedia.org/wiki/Beijing_Academy_of_Artificial_Intelligence). `BAAI` is a private non-profit organization engaged in AI research and development.

See a [usage example](/docs/integrations/text_embedding/bge_huggingface).

```python
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
```

### Hugging Face Text Embeddings Inference (TEI)

>[Hugging Face Text Embeddings Inference (TEI)](https://huggingface.co/docs/text-embeddings-inference/index) is a toolkit for deploying and serving open-source
> text embeddings and sequence classification models. `TEI` enables high-performance extraction for the most popular models,
> including `FlagEmbedding`, `Ember`, `GTE` and `E5`.

We need to install the `huggingface-hub` python package.

```bash
pip install huggingface-hub
```

See a [usage example](/docs/integrations/text_embedding/text_embeddings_inference).

```python
from langchain_community.embeddings import HuggingFaceHubEmbeddings
```
+
84
+
85
+ ## Document Loaders
86
+
87
+ ### Hugging Face dataset
88
+
89
+ >[Hugging Face Hub](https://huggingface.co/docs/hub/index) is home to over 75,000
90
+ > [datasets](https://huggingface.co/docs/hub/index#datasets) in more than 100 languages
91
+ > that can be used for a broad range of tasks across NLP, Computer Vision, and Audio.
92
+ > They are used for a diverse range of tasks such as translation, automatic speech
93
+ > recognition, and image classification.
94
+
95
+ We need to install the `datasets` python package.
96
+
97
+ ```bash
98
+ pip install datasets
99
+ ```
100
+
101
+ See a [usage example](/docs/integrations/document_loaders/hugging_face_dataset).
102
+
103
+ ```python
104
+ from langchain_community.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader
105
+ ```
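For example, loading a Hub dataset into LangChain `Document` objects might look like this (the dataset name and text column are illustrative):

```python
from langchain_community.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader

# "imdb" and its "text" column are example values; use any Hub dataset
loader = HuggingFaceDatasetLoader(path="imdb", page_content_column="text")
docs = loader.load()  # each row becomes a Document; other columns land in metadata
```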
106
+
107
+
108
+
109
+ ## Tools
110
+
111
+ ### Hugging Face Hub Tools
112
+
113
+ >[Hugging Face Tools](https://huggingface.co/docs/transformers/v4.29.0/en/custom_tools)
114
+ > support text I/O and are loaded using the `load_huggingface_tool` function.
115
+
116
+ We need to install several python packages.
117
+
118
+ ```bash
119
+ pip install transformers huggingface_hub
120
+ ```
121
+
122
+ See a [usage example](/docs/integrations/tools/huggingface_tools).
123
+
124
+ ```python
125
+ from langchain_community.agent_toolkits.load_tools import load_huggingface_tool
126
+ ```
langchain_md_files/integrations/platforms/microsoft.mdx ADDED
@@ -0,0 +1,561 @@
1
+ ---
2
+ keywords: [azure]
3
+ ---
4
+
5
+ # Microsoft
6
+
7
+ All functionality related to `Microsoft Azure` and other `Microsoft` products.
8
+
9
+ ## Chat Models
10
+
11
+ ### Azure OpenAI
12
+
13
+ >[Microsoft Azure](https://en.wikipedia.org/wiki/Microsoft_Azure), often referred to as `Azure`, is a cloud computing platform run by `Microsoft`, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). `Microsoft Azure` supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.
14
+
15
+ >[Azure OpenAI](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/) is an `Azure` service with powerful language models from `OpenAI` including the `GPT-3`, `Codex` and `Embeddings model` series for content generation, summarization, semantic search, and natural language to code translation.
16
+
17
+ ```bash
18
+ pip install langchain-openai
19
+ ```
20
+
21
+ Set the environment variables to get access to the `Azure OpenAI` service.
22
+
23
+ ```python
24
+ import os
25
+
26
+ os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-endpoint>.openai.azure.com/"
27
+ os.environ["AZURE_OPENAI_API_KEY"] = "your AzureOpenAI key"
28
+ ```
29
+
30
+ See a [usage example](/docs/integrations/chat/azure_chat_openai).
31
+
32
+
33
+ ```python
34
+ from langchain_openai import AzureChatOpenAI
35
+ ```
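With the environment variables above set, a minimal sketch looks like this (the deployment name and API version are placeholders for your Azure resource):

```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="my-gpt-4o-deployment",  # placeholder deployment name
    api_version="2024-02-01",                 # placeholder API version
)
response = llm.invoke("Translate 'hello' to French.")
```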
36
+
37
+ ### Azure ML Chat Online Endpoint
38
+
39
+ See the documentation [here](/docs/integrations/chat/azureml_chat_endpoint) for accessing chat
40
+ models hosted with [Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning/).
41
+
42
+
43
+ ## LLMs
44
+
45
+ ### Azure ML
46
+
47
+ See a [usage example](/docs/integrations/llms/azure_ml).
48
+
49
+ ```python
50
+ from langchain_community.llms.azureml_endpoint import AzureMLOnlineEndpoint
51
+ ```
52
+
53
+ ### Azure OpenAI
54
+
55
+ See a [usage example](/docs/integrations/llms/azure_openai).
56
+
57
+ ```python
58
+ from langchain_openai import AzureOpenAI
59
+ ```
60
+
61
+ ## Embedding Models
62
+ ### Azure OpenAI
63
+
64
+ See a [usage example](/docs/integrations/text_embedding/azureopenai).
65
+
66
+ ```python
67
+ from langchain_openai import AzureOpenAIEmbeddings
68
+ ```
69
+
70
+ ## Document Loaders
71
+
72
+ ### Azure AI Data
73
+
74
+ >[Azure AI Studio](https://ai.azure.com/) provides the capability to upload data assets
75
+ > to cloud storage and register existing data assets from the following sources:
76
+ >
77
+ >- `Microsoft OneLake`
78
+ >- `Azure Blob Storage`
79
+ >- `Azure Data Lake gen 2`
80
+
81
+ First, you need to install several python packages.
82
+
83
+ ```bash
84
+ pip install azureml-fsspec azure-ai-generative
85
+ ```
86
+
87
+ See a [usage example](/docs/integrations/document_loaders/azure_ai_data).
88
+
89
+ ```python
90
+ from langchain_community.document_loaders import AzureAIDataLoader
91
+ ```
92
+
93
+
94
+ ### Azure AI Document Intelligence
95
+
96
+ >[Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known
97
+ > as `Azure Form Recognizer`) is a machine-learning-based
98
+ > service that extracts text (including handwriting), tables, document structures,
99
+ > and key-value-pairs
100
+ > from digital or scanned PDFs, images, Office and HTML files.
101
+ >
102
+ > Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.
103
+
104
+ First, you need to install a python package.
105
+
106
+ ```bash
107
+ pip install azure-ai-documentintelligence
108
+ ```
109
+
110
+ See a [usage example](/docs/integrations/document_loaders/azure_document_intelligence).
111
+
112
+ ```python
113
+ from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
114
+ ```
115
+
116
+
117
+ ### Azure Blob Storage
118
+
119
+ >[Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction) is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.
120
+
121
+ >[Azure Files](https://learn.microsoft.com/en-us/azure/storage/files/storage-files-introduction) offers fully managed
122
+ > file shares in the cloud that are accessible via the industry standard Server Message Block (`SMB`) protocol,
123
+ > Network File System (`NFS`) protocol, and the `Azure Files REST API`. `Azure Files` is based on `Azure Blob Storage`.
124
+
125
+ `Azure Blob Storage` is designed for:
126
+ - Serving images or documents directly to a browser.
127
+ - Storing files for distributed access.
128
+ - Streaming video and audio.
129
+ - Writing to log files.
130
+ - Storing data for backup and restore, disaster recovery, and archiving.
131
+ - Storing data for analysis by an on-premises or Azure-hosted service.
132
+
133
+ ```bash
134
+ pip install azure-storage-blob
135
+ ```
136
+
137
+ See a [usage example for the Azure Blob Storage](/docs/integrations/document_loaders/azure_blob_storage_container).
138
+
139
+ ```python
140
+ from langchain_community.document_loaders import AzureBlobStorageContainerLoader
141
+ ```
142
+
143
+ See a [usage example for the Azure Files](/docs/integrations/document_loaders/azure_blob_storage_file).
144
+
145
+ ```python
146
+ from langchain_community.document_loaders import AzureBlobStorageFileLoader
147
+ ```
148
+
149
+
150
+ ### Microsoft OneDrive
151
+
152
+ >[Microsoft OneDrive](https://en.wikipedia.org/wiki/OneDrive) (formerly `SkyDrive`) is a file-hosting service operated by Microsoft.
153
+
154
+ First, you need to install a python package.
155
+
156
+ ```bash
157
+ pip install o365
158
+ ```
159
+
160
+ See a [usage example](/docs/integrations/document_loaders/microsoft_onedrive).
161
+
162
+ ```python
163
+ from langchain_community.document_loaders import OneDriveLoader
164
+ ```
165
+
166
+ ### Microsoft OneDrive File
167
+
168
+ >[Microsoft OneDrive](https://en.wikipedia.org/wiki/OneDrive) (formerly `SkyDrive`) is a file-hosting service operated by Microsoft.
169
+
170
+ First, you need to install a python package.
171
+
172
+ ```bash
173
+ pip install o365
174
+ ```
175
+
176
+ ```python
177
+ from langchain_community.document_loaders import OneDriveFileLoader
178
+ ```
179
+
180
+
181
+ ### Microsoft Word
182
+
183
+ >[Microsoft Word](https://www.microsoft.com/en-us/microsoft-365/word) is a word processor developed by Microsoft.
184
+
185
+ See a [usage example](/docs/integrations/document_loaders/microsoft_word).
186
+
187
+ ```python
188
+ from langchain_community.document_loaders import UnstructuredWordDocumentLoader
189
+ ```
190
+
191
+
192
+ ### Microsoft Excel
193
+
194
+ >[Microsoft Excel](https://en.wikipedia.org/wiki/Microsoft_Excel) is a spreadsheet editor developed by
195
+ > Microsoft for Windows, macOS, Android, iOS and iPadOS.
196
+ > It features calculation or computation capabilities, graphing tools, pivot tables, and a macro programming
197
+ > language called Visual Basic for Applications (VBA). Excel forms part of the Microsoft 365 suite of software.
198
+
199
+ The `UnstructuredExcelLoader` is used to load `Microsoft Excel` files. The loader works with both `.xlsx` and `.xls` files.
200
+ The page content will be the raw text of the Excel file. If you use the loader in `"elements"` mode, an HTML
201
+ representation of the Excel file will be available in the document metadata under the `text_as_html` key.
202
+
203
+ See a [usage example](/docs/integrations/document_loaders/microsoft_excel).
204
+
205
+ ```python
206
+ from langchain_community.document_loaders import UnstructuredExcelLoader
207
+ ```
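A short sketch of the two modes described above (`example.xlsx` is a hypothetical file):

```python
from langchain_community.document_loaders import UnstructuredExcelLoader

# Default mode: Documents containing the raw text of the workbook
docs = UnstructuredExcelLoader("example.xlsx").load()

# "elements" mode: an HTML rendering is exposed in the metadata
element_docs = UnstructuredExcelLoader("example.xlsx", mode="elements").load()
html = element_docs[0].metadata.get("text_as_html")
```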
208
+
209
+
210
+ ### Microsoft SharePoint
211
+
212
+ >[Microsoft SharePoint](https://en.wikipedia.org/wiki/SharePoint) is a website-based collaboration system,
213
+ > developed by Microsoft, that uses workflow applications, “list” databases, and other web parts
214
+ > and security features to empower business teams to work together.
215
+
216
+ See a [usage example](/docs/integrations/document_loaders/microsoft_sharepoint).
217
+
218
+ ```python
219
+ from langchain_community.document_loaders.sharepoint import SharePointLoader
220
+ ```
221
+
222
+
223
+ ### Microsoft PowerPoint
224
+
225
+ >[Microsoft PowerPoint](https://en.wikipedia.org/wiki/Microsoft_PowerPoint) is a presentation program by Microsoft.
226
+
227
+ See a [usage example](/docs/integrations/document_loaders/microsoft_powerpoint).
228
+
229
+ ```python
230
+ from langchain_community.document_loaders import UnstructuredPowerPointLoader
231
+ ```
232
+
233
+ ### Microsoft OneNote
234
+
235
+ First, let's install dependencies:
236
+
237
+ ```bash
238
+ pip install bs4 msal
239
+ ```
240
+
241
+ See a [usage example](/docs/integrations/document_loaders/microsoft_onenote).
242
+
243
+ ```python
244
+ from langchain_community.document_loaders.onenote import OneNoteLoader
245
+ ```
246
+
247
+ ### Playwright URL Loader
248
+
249
+ >[Playwright](https://github.com/microsoft/playwright) is an open-source automation tool
250
+ > developed by `Microsoft` that allows you to programmatically control and automate
251
+ > web browsers. It is designed for end-to-end testing, scraping, and automating
252
+ > tasks across various web browsers such as `Chromium`, `Firefox`, and `WebKit`.
253
+
254
+
255
+ First, let's install dependencies:
256
+
257
+ ```bash
258
+ pip install playwright unstructured
259
+ ```
260
+
261
+ See a [usage example](/docs/integrations/document_loaders/url/#playwright-url-loader).
262
+
263
+ ```python
264
+ from langchain_community.document_loaders import PlaywrightURLLoader
265
+ ```
266
+
267
+ ## AI Agent Memory System
268
+
269
+ [AI agents](https://learn.microsoft.com/en-us/azure/cosmos-db/ai-agents) need robust memory systems that support multi-modality, offer strong operational performance, and enable agent memory sharing as well as separation.
270
+
271
+ ### Azure Cosmos DB
272
+ AI agents can rely on Azure Cosmos DB as a unified [memory system](https://learn.microsoft.com/en-us/azure/cosmos-db/ai-agents#memory-can-make-or-break-agents) solution, enjoying speed, scale, and simplicity. This service successfully [enabled OpenAI's ChatGPT service](https://www.youtube.com/watch?v=6IIUtEFKJec&t) to scale dynamically with high reliability and low maintenance. Powered by an atom-record-sequence engine, it is the world's first globally distributed [NoSQL](https://learn.microsoft.com/en-us/azure/cosmos-db/distributed-nosql), [relational](https://learn.microsoft.com/en-us/azure/cosmos-db/distributed-relational), and [vector database](https://learn.microsoft.com/en-us/azure/cosmos-db/vector-database) service that offers a serverless mode.
273
+
274
+ Below are two available Azure Cosmos DB APIs that can provide vector store functionalities.
275
+
276
+ ### Azure Cosmos DB for MongoDB (vCore)
277
+
278
+ >[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/) makes it easy to create a database with full native MongoDB support.
279
+ > You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account's connection string.
280
+ > Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications with your data that's stored in Azure Cosmos DB.
281
+
282
+ #### Installation and Setup
283
+
284
+ See [detailed configuration instructions](/docs/integrations/vectorstores/azure_cosmos_db).
285
+
286
+ We need to install the `pymongo` python package.
287
+
288
+ ```bash
289
+ pip install pymongo
290
+ ```
291
+
292
+ #### Deploy Azure Cosmos DB on Microsoft Azure
293
+
294
+ Azure Cosmos DB for MongoDB vCore provides developers with a fully managed MongoDB-compatible database service for building modern applications with a familiar architecture.
295
+
296
+ With Cosmos DB for MongoDB vCore, developers can enjoy the benefits of native Azure integrations, low total cost of ownership (TCO), and the familiar vCore architecture when migrating existing applications or building new ones.
297
+
298
+ [Sign Up](https://azure.microsoft.com/en-us/free/) for free to get started today.
299
+
300
+ See a [usage example](/docs/integrations/vectorstores/azure_cosmos_db).
301
+
302
+ ```python
303
+ from langchain_community.vectorstores import AzureCosmosDBVectorSearch
304
+ ```
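A minimal sketch of building the vector store from a vCore connection string (the connection string and the `database.collection` namespace are placeholders, and any LangChain embedding model can be used):

```python
from langchain_community.vectorstores import AzureCosmosDBVectorSearch
from langchain_openai import OpenAIEmbeddings

vector_store = AzureCosmosDBVectorSearch.from_connection_string(
    connection_string="<your-vcore-connection-string>",  # placeholder
    namespace="langchain_db.docs",                       # placeholder database.collection
    embedding=OpenAIEmbeddings(),
)
results = vector_store.similarity_search("What is a vector index?", k=3)
```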
305
+
306
+ ### Azure Cosmos DB NoSQL
307
+
308
+ >[Azure Cosmos DB for NoSQL](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/vector-search) now offers vector indexing and search in preview.
309
+ > This feature is designed to handle high-dimensional vectors, enabling efficient and accurate vector search at any scale. You can now store vectors
310
+ > directly in the documents alongside your data. This means that each document in your database can contain not only traditional schema-free data,
311
+ > but also high-dimensional vectors as other properties of the documents. This colocation of data and vectors allows for efficient indexing and searching,
312
+ > as the vectors are stored in the same logical unit as the data they represent. This simplifies data management and AI application architectures, and improves the
313
+ > efficiency of vector-based operations.
314
+
315
+ #### Installation and Setup
316
+
317
+ See [detailed configuration instructions](/docs/integrations/vectorstores/azure_cosmos_db_no_sql).
318
+
319
+ We need to install the `azure-cosmos` python package.
320
+
321
+ ```bash
322
+ pip install azure-cosmos
323
+ ```
324
+
325
+ #### Deploy Azure Cosmos DB on Microsoft Azure
326
+
327
+ Azure Cosmos DB offers a solution for modern apps and intelligent workloads: it is highly responsive, with dynamic and elastic autoscale. It is available
328
+ in every Azure region and can automatically replicate data closer to users. It offers SLA-guaranteed low latency and high availability.
329
+
330
+ [Sign Up](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-python?pivots=devcontainer-codespace) for free to get started today.
331
+
332
+ See a [usage example](/docs/integrations/vectorstores/azure_cosmos_db_no_sql).
333
+
334
+ ```python
335
+ from langchain_community.vectorstores import AzureCosmosDBNoSQLVectorSearch
336
+ ```
337
+
338
+ ### Azure Database for PostgreSQL
339
+ >[Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/service-overview) is a relational database service based on the open-source Postgres database engine. It's a fully managed database-as-a-service that can handle mission-critical workloads with predictable performance, security, high availability, and dynamic scalability.
340
+
341
+ See [set up instructions](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/quickstart-create-server-portal) for Azure Database for PostgreSQL.
342
+
343
+ See a [usage example](/docs/integrations/memory/postgres_chat_message_history/). Simply use the [connection string](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/connect-python?tabs=cmd%2Cpassword#add-authentication-code) from your Azure Portal.
344
+
345
+ Since Azure Database for PostgreSQL is open-source Postgres, you can use the [LangChain's Postgres support](/docs/integrations/vectorstores/pgvector/) to connect to Azure Database for PostgreSQL.
346
+
347
+
348
+ ### Azure AI Search
349
+
350
+ [Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search) is a cloud search service
351
+ that gives developers infrastructure, APIs, and tools for information retrieval of vector, keyword, and hybrid
352
+ queries at scale. See [here](/docs/integrations/vectorstores/azuresearch) for usage examples.
353
+
354
+ ```python
355
+ from langchain_community.vectorstores.azuresearch import AzureSearch
356
+ ```
357
+
358
+ ## Retrievers
359
+
360
+ ### Azure AI Search
361
+
362
+ >[Azure AI Search](https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search) (formerly known as `Azure Search` or `Azure Cognitive Search` ) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
363
+
364
+ >Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:
365
+ >- A search engine for full text search over a search index containing user-owned content
366
+ >- Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation
367
+ >- Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more
368
+ >- Programmability through REST APIs and client libraries in Azure SDKs
369
+ >- Azure integration at the data layer, machine learning layer, and AI (AI Services)
370
+
371
+ See [set up instructions](https://learn.microsoft.com/en-us/azure/search/search-create-service-portal).
372
+
373
+ See a [usage example](/docs/integrations/retrievers/azure_ai_search).
374
+
375
+ ```python
376
+ from langchain_community.retrievers import AzureAISearchRetriever
377
+ ```
378
+
379
+ ## Vector Store
380
+ ### Azure Database for PostgreSQL
381
+ >[Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/service-overview) is a relational database service based on the open-source Postgres database engine. It's a fully managed database-as-a-service that can handle mission-critical workloads with predictable performance, security, high availability, and dynamic scalability.
382
+
383
+ See [set up instructions](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/quickstart-create-server-portal) for Azure Database for PostgreSQL.
384
+
385
+ You need to [enable pgvector extension](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-use-pgvector) in your database to use Postgres as a vector store. Once you have the extension enabled, you can use the [PGVector in LangChain](/docs/integrations/vectorstores/pgvector/) to connect to Azure Database for PostgreSQL.
386
+
387
+ See a [usage example](/docs/integrations/vectorstores/pgvector/). Simply use the [connection string](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/connect-python?tabs=cmd%2Cpassword#add-authentication-code) from your Azure Portal.
388
+
389
+
390
+ ## Tools
391
+
392
+ ### Azure Container Apps dynamic sessions
393
+
394
+ We need to get the `POOL_MANAGEMENT_ENDPOINT` environment variable from the Azure Container Apps service.
395
+ See the instructions [here](/docs/integrations/tools/azure_dynamic_sessions/#setup).
396
+
397
+ We need to install a python package.
398
+
399
+ ```bash
400
+ pip install langchain-azure-dynamic-sessions
401
+ ```
402
+
403
+ See a [usage example](/docs/integrations/tools/azure_dynamic_sessions).
404
+
405
+ ```python
406
+ from langchain_azure_dynamic_sessions import SessionsPythonREPLTool
407
+ ```
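A sketch of running code in a session (the endpoint value is read from the `POOL_MANAGEMENT_ENDPOINT` environment variable described above):

```python
import os

from langchain_azure_dynamic_sessions import SessionsPythonREPLTool

tool = SessionsPythonREPLTool(
    pool_management_endpoint=os.environ["POOL_MANAGEMENT_ENDPOINT"]
)
result = tool.invoke("print(1 + 1)")  # executed in the remote sandbox
```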
408
+
409
+ ### Bing Search
410
+
411
+ Follow the documentation [here](/docs/integrations/tools/bing_search) for a detailed explanation and instructions for this tool.
412
+
413
+ The environment variables `BING_SUBSCRIPTION_KEY` and `BING_SEARCH_URL` are required; take them from your Bing Search resource.
414
+
415
+ ```python
416
+ from langchain_community.tools.bing_search import BingSearchResults
417
+ from langchain_community.utilities import BingSearchAPIWrapper
418
+
419
+ api_wrapper = BingSearchAPIWrapper()
420
+ tool = BingSearchResults(api_wrapper=api_wrapper)
421
+ ```
422
+
423
+ ## Toolkits
424
+
425
+ ### Azure AI Services
426
+
427
+ We need to install several python packages.
428
+
429
+ ```bash
430
+ pip install azure-ai-formrecognizer azure-cognitiveservices-speech azure-ai-vision-imageanalysis
431
+ ```
432
+
433
+ See a [usage example](/docs/integrations/tools/azure_ai_services).
434
+
435
+ ```python
436
+ from langchain_community.agent_toolkits import azure_ai_services
437
+ ```
438
+
439
+ The `azure_ai_services` toolkit includes the following tools:
440
+
441
+ - Image Analysis: [AzureAiServicesImageAnalysisTool](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.azure_ai_services.image_analysis.AzureAiServicesImageAnalysisTool.html)
442
+ - Document Intelligence: [AzureAiServicesDocumentIntelligenceTool](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.azure_ai_services.document_intelligence.AzureAiServicesDocumentIntelligenceTool.html)
443
+ - Speech to Text: [AzureAiServicesSpeechToTextTool](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.azure_ai_services.speech_to_text.AzureAiServicesSpeechToTextTool.html)
444
+ - Text to Speech: [AzureAiServicesTextToSpeechTool](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.azure_ai_services.text_to_speech.AzureAiServicesTextToSpeechTool.html)
445
+ - Text Analytics for Health: [AzureAiServicesTextAnalyticsForHealthTool](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.azure_ai_services.text_analytics_for_health.AzureAiServicesTextAnalyticsForHealthTool.html)
446
+
447
+
448
+ ### Microsoft Office 365 email and calendar
449
+
450
+ We need to install the `O365` python package.
451
+
452
+ ```bash
453
+ pip install O365
454
+ ```
455
+
456
+
457
+ See a [usage example](/docs/integrations/tools/office365).
458
+
459
+ ```python
460
+ from langchain_community.agent_toolkits import O365Toolkit
461
+ ```
462
+
463
+ ### Microsoft Azure PowerBI
464
+
465
+ We need to install the `azure-identity` python package.
466
+
467
+ ```bash
468
+ pip install azure-identity
469
+ ```
470
+
471
+ See a [usage example](/docs/integrations/tools/powerbi).
472
+
473
+ ```python
474
+ from langchain_community.agent_toolkits import PowerBIToolkit
475
+ from langchain_community.utilities.powerbi import PowerBIDataset
476
+ ```
477
+
478
+ ### PlayWright Browser Toolkit
479
+
480
+ >[Playwright](https://github.com/microsoft/playwright) is an open-source automation tool
481
+ > developed by `Microsoft` that allows you to programmatically control and automate
482
+ > web browsers. It is designed for end-to-end testing, scraping, and automating
483
+ > tasks across various web browsers such as `Chromium`, `Firefox`, and `WebKit`.
484
+
485
+ We need to install several python packages.
486
+
487
+ ```bash
488
+ pip install playwright lxml
489
+ ```
490
+
491
+ See a [usage example](/docs/integrations/tools/playwright).
492
+
493
+ ```python
494
+ from langchain_community.agent_toolkits import PlayWrightBrowserToolkit
495
+ ```
496
+
497
+ #### PlayWright Browser individual tools
498
+
499
+ You can use individual tools from the PlayWright Browser Toolkit.
500
+
501
+ ```python
502
+ from langchain_community.tools.playwright import ClickTool
503
+ from langchain_community.tools.playwright import CurrentWebPageTool
504
+ from langchain_community.tools.playwright import ExtractHyperlinksTool
505
+ from langchain_community.tools.playwright import ExtractTextTool
506
+ from langchain_community.tools.playwright import GetElementsTool
507
+ from langchain_community.tools.playwright import NavigateTool
508
+ from langchain_community.tools.playwright import NavigateBackTool
509
+ ```
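A sketch of wiring the toolkit to a browser instance (requires running `playwright install` once to download browser binaries):

```python
from langchain_community.agent_toolkits import PlayWrightBrowserToolkit
from langchain_community.tools.playwright.utils import create_sync_playwright_browser

browser = create_sync_playwright_browser()
toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=browser)
tools = toolkit.get_tools()  # NavigateTool, ClickTool, ExtractTextTool, ...
```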
510
+
511
+ ## Graphs
512
+
513
+ ### Azure Cosmos DB for Apache Gremlin
514
+
515
+ We need to install a python package.
516
+
517
+ ```bash
518
+ pip install gremlinpython
519
+ ```
520
+
521
+ See a [usage example](/docs/integrations/graphs/azure_cosmosdb_gremlin).
522
+
523
+ ```python
524
+ from langchain_community.graphs import GremlinGraph
525
+ from langchain_community.graphs.graph_document import GraphDocument, Node, Relationship
526
+ ```
527
+
528
+ ## Utilities
529
+
530
+ ### Bing Search API
531
+
532
+ >[Microsoft Bing](https://www.bing.com/), commonly referred to as `Bing` or `Bing Search`,
533
+ > is a web search engine owned and operated by `Microsoft`.
534
+
535
+ See a [usage example](/docs/integrations/tools/bing_search).
536
+
537
+ ```python
538
+ from langchain_community.utilities import BingSearchAPIWrapper
539
+ ```
540
+
541
+ ## More
542
+
543
+ ### Microsoft Presidio
544
+
545
+ >[Presidio](https://microsoft.github.io/presidio/) (from the Latin praesidium, ‘protection, garrison’)
546
+ > helps to ensure sensitive data is properly managed and governed. It provides fast identification and
547
+ > anonymization modules for private entities in text and images such as credit card numbers, names,
548
+ > locations, social security numbers, bitcoin wallets, US phone numbers, financial data and more.
549
+
550
+ First, you need to install several python packages and download a `SpaCy` model.
551
+
552
+ ```bash
553
+ pip install langchain-experimental openai presidio-analyzer presidio-anonymizer spacy Faker
554
+ python -m spacy download en_core_web_lg
555
+ ```
556
+
557
+ See [usage examples](https://python.langchain.com/v0.1/docs/guides/productionization/safety/presidio_data_anonymization).
558
+
559
+ ```python
560
+ from langchain_experimental.data_anonymizer import PresidioAnonymizer, PresidioReversibleAnonymizer
561
+ ```
langchain_md_files/integrations/platforms/openai.mdx ADDED
@@ -0,0 +1,123 @@
1
+ ---
2
+ keywords: [openai]
3
+ ---
4
+
5
+ # OpenAI
6
+
7
+ All functionality related to OpenAI
8
+
9
+ >[OpenAI](https://en.wikipedia.org/wiki/OpenAI) is an American artificial intelligence (AI) research laboratory
10
+ > consisting of the non-profit `OpenAI Incorporated`
11
+ > and its for-profit subsidiary corporation `OpenAI Limited Partnership`.
12
+ > `OpenAI` conducts AI research with the declared intention of promoting and developing a friendly AI.
13
+ > `OpenAI` systems run on an `Azure`-based supercomputing platform from `Microsoft`.
14
+
15
+ >The [OpenAI API](https://platform.openai.com/docs/models) is powered by a diverse set of models with different capabilities and price points.
16
+ >
17
+ >[ChatGPT](https://chat.openai.com) is the Artificial Intelligence (AI) chatbot developed by `OpenAI`.
18
+
19
+ ## Installation and Setup
20
+
21
+ Install the integration package with
22
+ ```bash
23
+ pip install langchain-openai
24
+ ```
25
+
26
+ Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`).
27
+
28
+ ## Chat model
29
+
30
+ See a [usage example](/docs/integrations/chat/openai).
31
+
32
+ ```python
33
+ from langchain_openai import ChatOpenAI
34
+ ```
35
+
36
+ If you are using a model hosted on `Azure`, you should use a different wrapper:
37
+ ```python
38
+ from langchain_openai import AzureChatOpenAI
39
+ ```
40
+ For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/chat/azure_chat_openai).
41
+
42
+ ## LLM
43
+
44
+ See a [usage example](/docs/integrations/llms/openai).
45
+
46
+ ```python
47
+ from langchain_openai import OpenAI
48
+ ```
49
+
50
+ If you are using a model hosted on `Azure`, you should use a different wrapper:
51
+ ```python
52
+ from langchain_openai import AzureOpenAI
53
+ ```
54
+ For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/llms/azure_openai).
55
+
56
+ ## Embedding Model
57
+
58
+ See a [usage example](/docs/integrations/text_embedding/openai).
59
+
60
+ ```python
61
+ from langchain_openai import OpenAIEmbeddings
62
+ ```
63
+
64
+ ## Document Loader
65
+
66
+ See a [usage example](/docs/integrations/document_loaders/chatgpt_loader).
67
+
68
+ ```python
69
+ from langchain_community.document_loaders.chatgpt import ChatGPTLoader
70
+ ```
71
+
72
+ ## Retriever
73
+
74
+ See a [usage example](/docs/integrations/retrievers/chatgpt-plugin).
75
+
76
+ ```python
77
+ from langchain.retrievers import ChatGPTPluginRetriever
78
+ ```
79
+
80
+ ## Tools
81
+
82
+ ### Dall-E Image Generator
83
+
84
+ >[OpenAI Dall-E](https://openai.com/dall-e-3) is a series of text-to-image models developed by `OpenAI`
85
+ > using deep learning methodologies to generate digital images from natural language descriptions,
86
+ > called "prompts".
87
+
88
+
89
+ See a [usage example](/docs/integrations/tools/dalle_image_generator).
90
+
91
+ ```python
92
+ from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper
93
+ ```
94
+
95
+ ## Adapter
96
+
97
+ See a [usage example](/docs/integrations/adapters/openai).
98
+
99
+ ```python
100
+ from langchain.adapters import openai as lc_openai
101
+ ```
102
+
103
+ ## Tokenizer
104
+
105
+ There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
106
+ for OpenAI LLMs.
107
+
108
+ You can also use it to count tokens when splitting documents with
109
+ ```python
110
+ from langchain.text_splitter import CharacterTextSplitter
111
+ CharacterTextSplitter.from_tiktoken_encoder(...)
112
+ ```
113
+ For a more detailed walkthrough of this, see [this notebook](/docs/how_to/split_by_token/#tiktoken)
114
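The token-counting idea behind `from_tiktoken_encoder` can be sketched in plain Python. In this minimal sketch a naive whitespace tokenizer stands in for `tiktoken`'s model-specific BPE encoding, and `chunk_by_tokens` is a hypothetical helper, not a LangChain or tiktoken API:

```python
# Sketch: split text so that no chunk exceeds a token budget.
# A whitespace tokenizer stands in for tiktoken's BPE tokenizer here.

def count_tokens(text: str) -> int:
    """Stand-in for len(encoding.encode(text)) with a real tiktoken encoding."""
    return len(text.split())

def chunk_by_tokens(text: str, max_tokens: int) -> list[str]:
    """Greedily pack words into chunks of at most max_tokens 'tokens'."""
    chunks, current = [], []
    for word in text.split():
        if len(current) + 1 > max_tokens:
            chunks.append(" ".join(current))
            current = []
        current.append(word)
    if current:
        chunks.append(" ".join(current))
    return chunks

chunks = chunk_by_tokens("one two three four five six seven", max_tokens=3)
print(chunks)  # ['one two three', 'four five six', 'seven']
```

The real splitter measures chunk length in model tokens rather than characters or words, which is what keeps each chunk within an LLM's context budget.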
+
+ ## Chain
+
+ See a [usage example](https://python.langchain.com/v0.1/docs/guides/productionization/safety/moderation).
+
+ ```python
+ from langchain.chains import OpenAIModerationChain
+ ```
+
langchain_md_files/integrations/providers/acreom.mdx ADDED
@@ -0,0 +1,15 @@
+ # Acreom
+
+ [acreom](https://acreom.com) is a dev-first knowledge base with tasks running on local `markdown` files.
+
+ ## Installation and Setup
+
+ No installation is required.
+
+ ## Document Loader
+
+ See a [usage example](/docs/integrations/document_loaders/acreom).
+
+ ```python
+ from langchain_community.document_loaders import AcreomLoader
+ ```
langchain_md_files/integrations/providers/activeloop_deeplake.mdx ADDED
@@ -0,0 +1,38 @@
+ # Activeloop Deep Lake
+
+ >[Activeloop Deep Lake](https://docs.activeloop.ai/) is a data lake for Deep Learning applications, allowing you to use it
+ > as a vector store.
+
+ ## Why Deep Lake?
+
+ - More than just a (multi-modal) vector store. You can later use the dataset to fine-tune your own LLM models.
+ - Stores not only the embeddings but also the original data, with automatic version control.
+ - Truly serverless. It doesn't require another service and can be used with major cloud providers (`AWS S3`, `GCS`, etc.)
+
+ `Activeloop Deep Lake` supports `SelfQuery Retrieval`:
+ [Activeloop Deep Lake Self Query Retrieval](/docs/integrations/retrievers/self_query/activeloop_deeplake_self_query)
+
+ ## More Resources
+
+ 1. [Ultimate Guide to LangChain & Deep Lake: Build ChatGPT to Answer Questions on Your Financial Data](https://www.activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/)
+ 2. [Twitter the-algorithm codebase analysis with Deep Lake](https://github.com/langchain-ai/langchain/blob/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb)
+ 3. Here are the [whitepaper](https://www.deeplake.ai/whitepaper) and [academic paper](https://arxiv.org/pdf/2209.10785.pdf) for Deep Lake
+ 4. Here is a set of additional resources available for review: [Deep Lake](https://github.com/activeloopai/deeplake), [Get started](https://docs.activeloop.ai/getting-started) and [Tutorials](https://docs.activeloop.ai/hub-tutorials)
+
+ ## Installation and Setup
+
+ Install the Python package:
+
+ ```bash
+ pip install deeplake
+ ```
+
+ ## VectorStore
+
+ ```python
+ from langchain_community.vectorstores import DeepLake
+ ```
+
+ See a [usage example](/docs/integrations/vectorstores/activeloop_deeplake).
langchain_md_files/integrations/providers/ai21.mdx ADDED
@@ -0,0 +1,67 @@
+ # AI21 Labs
+
+ >[AI21 Labs](https://www.ai21.com/about) is a company specializing in Natural
+ > Language Processing (NLP), which develops AI systems
+ > that can understand and generate natural language.
+
+ This page covers how to use the `AI21` ecosystem within `LangChain`.
+
+ ## Installation and Setup
+
+ - Get an AI21 API key and set it as an environment variable (`AI21_API_KEY`)
+ - Install the Python package:
+
+ ```bash
+ pip install langchain-ai21
+ ```
+
+ ## LLMs
+
+ See a [usage example](/docs/integrations/llms/ai21).
+
+ ### AI21 LLM
+
+ ```python
+ from langchain_ai21 import AI21LLM
+ ```
+
+ ### AI21 Contextual Answers
+
+ You can use AI21's Contextual Answers model, which takes a text or document
+ serving as context, plus a question, and returns an answer based entirely on that context.
+
+ ```python
+ from langchain_ai21 import AI21ContextualAnswers
+ ```
+
+ ## Chat models
+
+ ### AI21 Chat
+
+ See a [usage example](/docs/integrations/chat/ai21).
+
+ ```python
+ from langchain_ai21 import ChatAI21
+ ```
+
+ ## Embedding models
+
+ ### AI21 Embeddings
+
+ See a [usage example](/docs/integrations/text_embedding/ai21).
+
+ ```python
+ from langchain_ai21 import AI21Embeddings
+ ```
+
+ ## Text splitters
+
+ ### AI21 Semantic Text Splitter
+
+ See a [usage example](/docs/integrations/document_transformers/ai21_semantic_text_splitter).
+
+ ```python
+ from langchain_ai21 import AI21SemanticTextSplitter
+ ```
langchain_md_files/integrations/providers/ainetwork.mdx ADDED
@@ -0,0 +1,23 @@
+ # AINetwork
+
+ >[AI Network](https://www.ainetwork.ai/build-on-ain) is a layer 1 blockchain designed to accommodate
+ > large-scale AI models, utilizing a decentralized GPU network powered by the
+ > [$AIN token](https://www.ainetwork.ai/token), enriching AI-driven `NFTs` (`AINFTs`).
+
+ ## Installation and Setup
+
+ You need to install the `ain-py` Python package.
+
+ ```bash
+ pip install ain-py
+ ```
+
+ You need to set the `AIN_BLOCKCHAIN_ACCOUNT_PRIVATE_KEY` environment variable to your AIN Blockchain account private key.
+
+ ## Toolkit
+
+ See a [usage example](/docs/integrations/tools/ainetwork).
+
+ ```python
+ from langchain_community.agent_toolkits.ainetwork.toolkit import AINetworkToolkit
+ ```
langchain_md_files/integrations/providers/airbyte.mdx ADDED
@@ -0,0 +1,32 @@
+ # Airbyte
+
+ >[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs,
+ > databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
+
+ ## Installation and Setup
+
+ ```bash
+ pip install -U langchain-airbyte
+ ```
+
+ :::note
+
+ Currently, the `langchain-airbyte` library does not support Pydantic v2.
+ Please downgrade to Pydantic v1 to use this package.
+
+ This package also currently requires Python 3.10+.
+
+ :::
+
+ The integration package doesn't require any global environment variables to be
+ set, but some integrations (e.g. `source-github`) may need credentials passed in.
+
+ ## Document loader
+
+ ### AirbyteLoader
+
+ See a [usage example](/docs/integrations/document_loaders/airbyte).
+
+ ```python
+ from langchain_airbyte import AirbyteLoader
+ ```
langchain_md_files/integrations/providers/alchemy.mdx ADDED
@@ -0,0 +1,20 @@
+ # Alchemy
+
+ >[Alchemy](https://www.alchemy.com) is a platform for building blockchain applications.
+
+ ## Installation and Setup
+
+ Check out the [installation guide](/docs/integrations/document_loaders/blockchain).
+
+ ## Document loader
+
+ ### BlockchainLoader on the Alchemy platform
+
+ See a [usage example](/docs/integrations/document_loaders/blockchain).
+
+ ```python
+ from langchain_community.document_loaders.blockchain import (
+     BlockchainDocumentLoader,
+     BlockchainType,
+ )
+ ```
langchain_md_files/integrations/providers/aleph_alpha.mdx ADDED
@@ -0,0 +1,36 @@
+ # Aleph Alpha
+
+ >[Aleph Alpha](https://docs.aleph-alpha.com/) was founded in 2019 with the mission to research and build the foundational technology for an era of strong AI. The team of international scientists, engineers, and innovators researches, develops, and deploys transformative AI like large language and multimodal models and runs the fastest European commercial AI cluster.
+
+ >[The Luminous series](https://docs.aleph-alpha.com/docs/introduction/luminous/) is a family of large language models.
+
+ ## Installation and Setup
+
+ ```bash
+ pip install aleph-alpha-client
+ ```
+
+ You have to create a new token. Please see the [instructions](https://docs.aleph-alpha.com/docs/account/#create-a-new-token).
+
+ ```python
+ from getpass import getpass
+
+ ALEPH_ALPHA_API_KEY = getpass()
+ ```
+
+ ## LLM
+
+ See a [usage example](/docs/integrations/llms/aleph_alpha).
+
+ ```python
+ from langchain_community.llms import AlephAlpha
+ ```
+
+ ## Text Embedding Models
+
+ See a [usage example](/docs/integrations/text_embedding/aleph_alpha).
+
+ ```python
+ from langchain_community.embeddings import AlephAlphaSymmetricSemanticEmbedding, AlephAlphaAsymmetricSemanticEmbedding
+ ```
langchain_md_files/integrations/providers/alibaba_cloud.mdx ADDED
@@ -0,0 +1,91 @@
+ # Alibaba Cloud
+
+ >[Alibaba Group Holding Limited (Wikipedia)](https://en.wikipedia.org/wiki/Alibaba_Group), or `Alibaba`
+ > (Chinese: 阿里巴巴), is a Chinese multinational technology company specializing in e-commerce, retail,
+ > Internet, and technology.
+ >
+ > [Alibaba Cloud (Wikipedia)](https://en.wikipedia.org/wiki/Alibaba_Cloud), also known as `Aliyun`
+ > (Chinese: 阿里云; pinyin: Ālǐyún; lit. 'Ali Cloud'), is a cloud computing company, a subsidiary
+ > of `Alibaba Group`. `Alibaba Cloud` provides cloud computing services to online businesses and
+ > Alibaba's own e-commerce ecosystem.
+
+ ## LLMs
+
+ ### Alibaba Cloud PAI EAS
+
+ See [installation instructions and a usage example](/docs/integrations/llms/alibabacloud_pai_eas_endpoint).
+
+ ```python
+ from langchain_community.llms.pai_eas_endpoint import PaiEasEndpoint
+ ```
+
+ ### Tongyi Qwen
+
+ See [installation instructions and a usage example](/docs/integrations/llms/tongyi).
+
+ ```python
+ from langchain_community.llms import Tongyi
+ ```
+
+ ## Chat Models
+
+ ### Alibaba Cloud PAI EAS
+
+ See [installation instructions and a usage example](/docs/integrations/chat/alibaba_cloud_pai_eas).
+
+ ```python
+ from langchain_community.chat_models import PaiEasChatEndpoint
+ ```
+
+ ### Tongyi Qwen Chat
+
+ See [installation instructions and a usage example](/docs/integrations/chat/tongyi).
+
+ ```python
+ from langchain_community.chat_models.tongyi import ChatTongyi
+ ```
+
+ ## Document Loaders
+
+ ### Alibaba Cloud MaxCompute
+
+ See [installation instructions and a usage example](/docs/integrations/document_loaders/alibaba_cloud_maxcompute).
+
+ ```python
+ from langchain_community.document_loaders import MaxComputeLoader
+ ```
+
+ ## Vector stores
+
+ ### Alibaba Cloud OpenSearch
+
+ See [installation instructions and a usage example](/docs/integrations/vectorstores/alibabacloud_opensearch).
+
+ ```python
+ from langchain_community.vectorstores import AlibabaCloudOpenSearch, AlibabaCloudOpenSearchSettings
+ ```
+
+ ### Alibaba Cloud Tair
+
+ See [installation instructions and a usage example](/docs/integrations/vectorstores/tair).
+
+ ```python
+ from langchain_community.vectorstores import Tair
+ ```
+
+ ### AnalyticDB
+
+ See [installation instructions and a usage example](/docs/integrations/vectorstores/analyticdb).
+
+ ```python
+ from langchain_community.vectorstores import AnalyticDB
+ ```
+
+ ### Hologres
+
+ See [installation instructions and a usage example](/docs/integrations/vectorstores/hologres).
+
+ ```python
+ from langchain_community.vectorstores import Hologres
+ ```
langchain_md_files/integrations/providers/analyticdb.mdx ADDED
@@ -0,0 +1,31 @@
+ # AnalyticDB
+
+ >[AnalyticDB for PostgreSQL](https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/latest/product-introduction-overview)
+ > is a massively parallel processing (MPP) data warehousing service
+ > from [Alibaba Cloud](https://www.alibabacloud.com/)
+ > that is designed to analyze large volumes of data online.
+
+ >`AnalyticDB for PostgreSQL` is developed based on the open-source `Greenplum Database`
+ > project and is enhanced with in-depth extensions by `Alibaba Cloud`. AnalyticDB
+ > for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and
+ > Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and
+ > column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a
+ > high performance level and supports high concurrency.
+
+ This page covers how to use the AnalyticDB ecosystem within LangChain.
+
+ ## Installation and Setup
+
+ You need to install the `sqlalchemy` Python package.
+
+ ```bash
+ pip install sqlalchemy
+ ```
+
+ ## VectorStore
+
+ See a [usage example](/docs/integrations/vectorstores/analyticdb).
+
+ ```python
+ from langchain_community.vectorstores import AnalyticDB
+ ```
langchain_md_files/integrations/providers/annoy.mdx ADDED
@@ -0,0 +1,21 @@
+ # Annoy
+
+ > [Annoy](https://github.com/spotify/annoy) (`Approximate Nearest Neighbors Oh Yeah`)
+ > is a C++ library with Python bindings to search for points in space that are
+ > close to a given query point. It also creates large read-only file-based data
+ > structures that are mapped into memory so that many processes may share the same data.
+
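For intuition, the exact nearest-neighbor query that Annoy approximates can be written as a brute-force scan in plain Python. This is an illustrative sketch with hypothetical helper names; it does not use the `annoy` API:

```python
import math

def euclidean(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(points: list[list[float]], query: list[float]) -> int:
    """Index of the point closest to `query` (exact, O(n) scan)."""
    return min(range(len(points)), key=lambda i: euclidean(points[i], query))

points = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
print(nearest(points, [0.9, 1.2]))  # 1, i.e. the point [1.0, 1.0]
```

Annoy answers the same kind of query approximately, using a forest of random-projection trees instead of a full scan, which is what keeps lookups fast over millions of points.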
+ ## Installation and Setup
+
+ ```bash
+ pip install annoy
+ ```
+
+ ## Vectorstore
+
+ See a [usage example](/docs/integrations/vectorstores/annoy).
+
+ ```python
+ from langchain_community.vectorstores import Annoy
+ ```
langchain_md_files/integrations/providers/anyscale.mdx ADDED
@@ -0,0 +1,42 @@
+ # Anyscale
+
+ >[Anyscale](https://www.anyscale.com) is a platform to run, fine-tune and scale LLMs via production-ready APIs.
+ > [Anyscale Endpoints](https://docs.anyscale.com/endpoints/overview) serve many open-source models in a cost-effective way.
+
+ `Anyscale` also provides [an example](https://docs.anyscale.com/endpoints/model-serving/examples/langchain-integration)
+ of how to set up `LangChain` with `Anyscale` for advanced chat agents.
+
+ ## Installation and Setup
+
+ - Get an Anyscale Service URL, route, and API key and set them as environment variables (`ANYSCALE_SERVICE_URL`, `ANYSCALE_SERVICE_ROUTE`, `ANYSCALE_SERVICE_TOKEN`).
+ - Please see [the Anyscale docs](https://www.anyscale.com/get-started) for more details.
+
+ We have to install the `openai` package:
+
+ ```bash
+ pip install openai
+ ```
+
+ ## LLM
+
+ See a [usage example](/docs/integrations/llms/anyscale).
+
+ ```python
+ from langchain_community.llms.anyscale import Anyscale
+ ```
+
+ ## Chat Models
+
+ See a [usage example](/docs/integrations/chat/anyscale).
+
+ ```python
+ from langchain_community.chat_models.anyscale import ChatAnyscale
+ ```
+
+ ## Embeddings
+
+ See a [usage example](/docs/integrations/text_embedding/anyscale).
+
+ ```python
+ from langchain_community.embeddings import AnyscaleEmbeddings
+ ```
langchain_md_files/integrations/providers/apache_doris.mdx ADDED
@@ -0,0 +1,22 @@
+ # Apache Doris
+
+ >[Apache Doris](https://doris.apache.org/) is a modern data warehouse for real-time analytics.
+ > It delivers lightning-fast analytics on real-time data at scale.
+
+ >Usually `Apache Doris` is categorized as an OLAP database, and it has shown excellent performance
+ > in [ClickBench — a Benchmark For Analytical DBMS](https://benchmark.clickhouse.com/).
+ > Since it has a super-fast vectorized execution engine, it can also be used as a fast vector database.
+
+ ## Installation and Setup
+
+ ```bash
+ pip install pymysql
+ ```
+
+ ## Vector Store
+
+ See a [usage example](/docs/integrations/vectorstores/apache_doris).
+
+ ```python
+ from langchain_community.vectorstores import ApacheDoris
+ ```
langchain_md_files/integrations/providers/apify.mdx ADDED
@@ -0,0 +1,41 @@
+ # Apify
+
+ >[Apify](https://apify.com) is a cloud platform for web scraping and data extraction,
+ >which provides an [ecosystem](https://apify.com/store) of more than a thousand
+ >ready-made apps called *Actors* for various scraping, crawling, and extraction use cases.
+
+ [![Apify Actors](/img/ApifyActors.png)](https://apify.com/store)
+
+ This integration enables you to run Actors on the `Apify` platform and load their results into LangChain to feed your vector
+ indexes with documents and data from the web, e.g. to generate answers from websites with documentation,
+ blogs, or knowledge bases.
+
+ ## Installation and Setup
+
+ - Install the Apify API client for Python with `pip install apify-client`
+ - Get your [Apify API token](https://console.apify.com/account/integrations) and either set it as
+ an environment variable (`APIFY_API_TOKEN`) or pass it to the `ApifyWrapper` as `apify_api_token` in the constructor.
+
+ ## Utility
+
+ You can use the `ApifyWrapper` to run Actors on the Apify platform.
+
+ ```python
+ from langchain_community.utilities import ApifyWrapper
+ ```
+
+ For more information on this wrapper, see [the API reference](https://python.langchain.com/v0.2/api_reference/community/utilities/langchain_community.utilities.apify.ApifyWrapper.html).
+
+ ## Document loader
+
+ You can also use our `ApifyDatasetLoader` to get data from an Apify dataset.
+
+ ```python
+ from langchain_community.document_loaders import ApifyDatasetLoader
+ ```
+
+ For a more detailed walkthrough of this loader, see [this notebook](/docs/integrations/document_loaders/apify_dataset).
langchain_md_files/integrations/providers/arangodb.mdx ADDED
@@ -0,0 +1,25 @@
+ # ArangoDB
+
+ >[ArangoDB](https://github.com/arangodb/arangodb) is a scalable graph database system to
+ > drive value from connected data, faster. It combines native graphs, an integrated search engine, and JSON support via a single query language. ArangoDB runs on-prem, in the cloud – anywhere.
+
+ ## Installation and Setup
+
+ Install the [ArangoDB Python Driver](https://github.com/ArangoDB-Community/python-arango) package with
+
+ ```bash
+ pip install python-arango
+ ```
+
+ ## Graph QA Chain
+
+ Connect your `ArangoDB` database to a chat model to get insights into your data.
+
+ See the notebook example [here](/docs/integrations/graphs/arangodb).
+
+ ```python
+ from arango import ArangoClient
+
+ from langchain_community.graphs import ArangoGraph
+ from langchain.chains import ArangoGraphQAChain
+ ```
langchain_md_files/integrations/providers/arcee.mdx ADDED
@@ -0,0 +1,30 @@
+ # Arcee
+
+ >[Arcee](https://www.arcee.ai/about/about-us) enables the development and advancement
+ > of what it calls SLMs: small, specialized, secure, and scalable language models.
+ > By offering an SLM Adaptation System and a seamless, secure integration,
+ > `Arcee` empowers enterprises to harness the full potential of
+ > domain-adapted language models, driving transformative
+ > innovation in operations.
+
+ ## Installation and Setup
+
+ Get your `Arcee API` key.
+
+ ## LLMs
+
+ See a [usage example](/docs/integrations/llms/arcee).
+
+ ```python
+ from langchain_community.llms import Arcee
+ ```
+
+ ## Retrievers
+
+ See a [usage example](/docs/integrations/retrievers/arcee).
+
+ ```python
+ from langchain_community.retrievers import ArceeRetriever
+ ```
langchain_md_files/integrations/providers/arcgis.mdx ADDED
@@ -0,0 +1,27 @@
+ # ArcGIS
+
+ >[ArcGIS](https://www.esri.com/en-us/arcgis/about-arcgis/overview) is a family of client,
+ > server and online geographic information system software developed and maintained by [Esri](https://www.esri.com/).
+ >
+ >`ArcGISLoader` uses the `arcgis` package.
+ > `arcgis` is a Python library for vector and raster analysis, geocoding, map making,
+ > routing and directions. It administers, organizes and manages users,
+ > groups and information items in your GIS.
+ > It enables access to ready-to-use maps and curated geographic data from `Esri`
+ > and other authoritative sources, and works with your own data as well.
+
+ ## Installation and Setup
+
+ We have to install the `arcgis` package.
+
+ ```bash
+ pip install -U arcgis
+ ```
+
+ ## Document Loader
+
+ See a [usage example](/docs/integrations/document_loaders/arcgis).
+
+ ```python
+ from langchain_community.document_loaders import ArcGISLoader
+ ```
langchain_md_files/integrations/providers/argilla.mdx ADDED
@@ -0,0 +1,25 @@
+ # Argilla
+
+ >[Argilla](https://argilla.io/) is an open-source data curation platform for LLMs.
+ > Using `Argilla`, everyone can build robust language models through faster data curation
+ > using both human and machine feedback. `Argilla` provides support for each step in the MLOps cycle,
+ > from data labeling to model monitoring.
+
+ ## Installation and Setup
+
+ Get your Argilla API key.
+
+ Install the Python package:
+
+ ```bash
+ pip install argilla
+ ```
+
+ ## Callbacks
+
+ ```python
+ from langchain.callbacks import ArgillaCallbackHandler
+ ```
+
+ See an [example](/docs/integrations/callbacks/argilla).
langchain_md_files/integrations/providers/arize.mdx ADDED
@@ -0,0 +1,24 @@
+ # Arize
+
+ [Arize](https://arize.com) is an AI observability and LLM evaluation platform that offers
+ support for LangChain applications, providing detailed traces of input, embeddings, retrieval,
+ functions, and output messages.
+
+ ## Installation and Setup
+
+ First, you need to install the `arize` Python package.
+
+ ```bash
+ pip install arize
+ ```
+
+ Second, you need to set up your [Arize account](https://app.arize.com/auth/join)
+ and get your `API_KEY` or `SPACE_KEY`.
+
+ ## Callback handler
+
+ ```python
+ from langchain_community.callbacks import ArizeCallbackHandler
+ ```
langchain_md_files/integrations/providers/arxiv.mdx ADDED
@@ -0,0 +1,36 @@
+ # Arxiv
+
+ >[arXiv](https://arxiv.org/) is an open-access archive for 2 million scholarly articles in the fields of physics,
+ > mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and
+ > systems science, and economics.
+
+ ## Installation and Setup
+
+ First, you need to install the `arxiv` Python package.
+
+ ```bash
+ pip install arxiv
+ ```
+
+ Second, you need to install the `PyMuPDF` Python package, which converts PDF files downloaded from arxiv.org into text.
+
+ ```bash
+ pip install pymupdf
+ ```
+
+ ## Document Loader
+
+ See a [usage example](/docs/integrations/document_loaders/arxiv).
+
+ ```python
+ from langchain_community.document_loaders import ArxivLoader
+ ```
+
+ ## Retriever
+
+ See a [usage example](/docs/integrations/retrievers/arxiv).
+
+ ```python
+ from langchain_community.retrievers import ArxivRetriever
+ ```
langchain_md_files/integrations/providers/ascend.mdx ADDED
@@ -0,0 +1,24 @@
+ # Ascend
+
+ >[Ascend](https://www.hiascend.com/) is a Neural Processing Unit (NPU) provided by Huawei.
+
+ This page covers how to use Ascend NPUs with LangChain.
+
+ ## Installation and Setup
+
+ Install `torch-npu`:
+
+ ```bash
+ pip install torch-npu
+ ```
+
+ Please follow the installation instructions as specified below:
+ * Install CANN as shown [here](https://www.hiascend.com/document/detail/zh/canncommercial/700/quickstart/quickstart/quickstart_18_0002.html).
+
+ ## Embedding Models
+
+ See a [usage example](/docs/integrations/text_embedding/ascend).
+
+ ```python
+ from langchain_community.embeddings import AscendEmbeddings
+ ```
langchain_md_files/integrations/providers/asknews.mdx ADDED
@@ -0,0 +1,33 @@
+ # AskNews
+
+ [AskNews](https://asknews.app/) enhances language models with up-to-date global or historical news
+ by processing and indexing over 300,000 articles daily, providing prompt-optimized responses
+ through a low-latency endpoint, and ensuring transparency and diversity in its news coverage.
+
+ ## Installation and Setup
+
+ First, you need to install the `asknews` Python package.
+
+ ```bash
+ pip install asknews
+ ```
+
+ You also need to set your AskNews API credentials, which can be generated at
+ the [AskNews console](https://my.asknews.app/).
+
+ ## Retriever
+
+ See a [usage example](/docs/integrations/retrievers/asknews).
+
+ ```python
+ from langchain_community.retrievers import AskNewsRetriever
+ ```
+
+ ## Tool
+
+ See a [usage example](/docs/integrations/tools/asknews).
+
+ ```python
+ from langchain_community.tools.asknews import AskNewsSearch
+ ```
langchain_md_files/integrations/providers/assemblyai.mdx ADDED
@@ -0,0 +1,42 @@
+ # AssemblyAI
+
+ >[AssemblyAI](https://www.assemblyai.com/) builds `Speech AI` models for tasks like
+ > speech-to-text, speaker diarization, speech summarization, and more.
+ > `AssemblyAI's` Speech AI models include accurate speech-to-text for voice data
+ > (such as calls, virtual meetings, and podcasts), speaker detection, sentiment analysis,
+ > chapter detection, and PII redaction.
+
+ ## Installation and Setup
+
+ Get your [API key](https://www.assemblyai.com/dashboard/signup).
+
+ Install the `assemblyai` package.
+
+ ```bash
+ pip install -U assemblyai
+ ```
+
+ ## Document Loader
+
+ ### AssemblyAI Audio Transcript
+
+ The `AssemblyAIAudioTranscriptLoader` transcribes audio files with the `AssemblyAI API`
+ and loads the transcribed text into documents.
+
+ See a [usage example](/docs/integrations/document_loaders/assemblyai).
+
+ ```python
+ from langchain_community.document_loaders import AssemblyAIAudioTranscriptLoader
+ ```
+
+ ### AssemblyAI Audio Loader By Id
+
+ The `AssemblyAIAudioLoaderById` uses the AssemblyAI API to get an existing
+ transcription and loads the transcribed text into one or more Documents,
+ depending on the specified format.
+
+ ```python
+ from langchain_community.document_loaders import AssemblyAIAudioLoaderById
+ ```
langchain_md_files/integrations/providers/astradb.mdx ADDED
@@ -0,0 +1,150 @@
+ # Astra DB
+
+ > [DataStax Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless
+ > vector-capable database built on `Apache Cassandra®` and made conveniently available
+ > through an easy-to-use JSON API.
+
+ See a [tutorial provided by DataStax](https://docs.datastax.com/en/astra/astra-db-vector/tutorials/chatbot.html).
+
+ ## Installation and Setup
+
+ Install the following Python package:
+
+ ```bash
+ pip install "langchain-astradb>=0.1.0"
+ ```
+
+ Get the [connection secrets](https://docs.datastax.com/en/astra/astra-db-vector/get-started/quickstart.html).
+ Set up the following environment variables:
+
+ ```python
+ ASTRA_DB_APPLICATION_TOKEN="TOKEN"
+ ASTRA_DB_API_ENDPOINT="API_ENDPOINT"
+ ```
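At runtime these values are typically read back from the environment; a minimal, stdlib-only sketch (the `require_env` helper and the placeholder values are illustrative, not part of `langchain-astradb`):

```python
import os

def require_env(name: str) -> str:
    """Read a required environment variable, failing fast if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set")
    return value

# Placeholder values for illustration; in practice these come from your shell.
os.environ.setdefault("ASTRA_DB_APPLICATION_TOKEN", "TOKEN")
os.environ.setdefault("ASTRA_DB_API_ENDPOINT", "API_ENDPOINT")

token = require_env("ASTRA_DB_APPLICATION_TOKEN")
endpoint = require_env("ASTRA_DB_API_ENDPOINT")
```

Failing fast on a missing variable gives a clearer error than a connection failure deep inside a client call.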
+
+ ## Vector Store
+
+ ```python
+ from langchain_astradb import AstraDBVectorStore
+
+ vector_store = AstraDBVectorStore(
+     embedding=my_embedding,
+     collection_name="my_store",
+     api_endpoint=ASTRA_DB_API_ENDPOINT,
+     token=ASTRA_DB_APPLICATION_TOKEN,
+ )
+ ```
+
+ Learn more in the [example notebook](/docs/integrations/vectorstores/astradb).
+
+ See the [example provided by DataStax](https://docs.datastax.com/en/astra/astra-db-vector/integrations/langchain.html).
+
+ ## Chat message history
+
+ ```python
+ from langchain_astradb import AstraDBChatMessageHistory
+
+ message_history = AstraDBChatMessageHistory(
+     session_id="test-session",
+     api_endpoint=ASTRA_DB_API_ENDPOINT,
+     token=ASTRA_DB_APPLICATION_TOKEN,
+ )
+ ```
+
+ See the [usage example](/docs/integrations/memory/astradb_chat_message_history#example).
+
+ ## LLM Cache
+
+ ```python
+ from langchain.globals import set_llm_cache
+ from langchain_astradb import AstraDBCache
+
+ set_llm_cache(AstraDBCache(
+     api_endpoint=ASTRA_DB_API_ENDPOINT,
+     token=ASTRA_DB_APPLICATION_TOKEN,
+ ))
+ ```
+
+ Learn more in the [example notebook](/docs/integrations/llm_caching#astra-db-caches) (scroll to the Astra DB section).
+
+ ## Semantic LLM Cache
+
+ ```python
+ from langchain.globals import set_llm_cache
+ from langchain_astradb import AstraDBSemanticCache
+
+ set_llm_cache(AstraDBSemanticCache(
+     embedding=my_embedding,
+     api_endpoint=ASTRA_DB_API_ENDPOINT,
+     token=ASTRA_DB_APPLICATION_TOKEN,
+ ))
+ ```
+
+ Learn more in the [example notebook](/docs/integrations/llm_caching#astra-db-caches) (scroll to the appropriate section).
+
+ ## Document loader
+
+ ```python
+ from langchain_astradb import AstraDBLoader
+
+ loader = AstraDBLoader(
+     collection_name="my_collection",
+     api_endpoint=ASTRA_DB_API_ENDPOINT,
+     token=ASTRA_DB_APPLICATION_TOKEN,
+ )
+ ```
+
+ Learn more in the [example notebook](/docs/integrations/document_loaders/astradb).
+
+ ## Self-querying retriever
+
+ ```python
+ from langchain_astradb import AstraDBVectorStore
+ from langchain.retrievers.self_query.base import SelfQueryRetriever
+
+ vector_store = AstraDBVectorStore(
+     embedding=my_embedding,
+     collection_name="my_store",
+     api_endpoint=ASTRA_DB_API_ENDPOINT,
+     token=ASTRA_DB_APPLICATION_TOKEN,
+ )
+
+ retriever = SelfQueryRetriever.from_llm(
+     my_llm,
+     vector_store,
+     document_content_description,
+     metadata_field_info,
+ )
+ ```
+
+ Learn more in the [example notebook](/docs/integrations/retrievers/self_query/astradb).
+
+ ## Store
+
+ ```python
+ from langchain_astradb import AstraDBStore
+
+ store = AstraDBStore(
+     collection_name="my_kv_store",
+     api_endpoint=ASTRA_DB_API_ENDPOINT,
+     token=ASTRA_DB_APPLICATION_TOKEN,
+ )
+ ```
+
+ Learn more in the [example notebook](/docs/integrations/stores/astradb#astradbstore).
+
+ ## Byte Store
+
+ ```python
+ from langchain_astradb import AstraDBByteStore
+
+ store = AstraDBByteStore(
+     collection_name="my_kv_store",
+     api_endpoint=ASTRA_DB_API_ENDPOINT,
+     token=ASTRA_DB_APPLICATION_TOKEN,
+ )
+ ```
+
+ Learn more in the [example notebook](/docs/integrations/stores/astradb#astradbbytestore).