Unnamed: 0 | link | text |
---|---|---|
595 | https://python.langchain.com/docs/integrations/text_embedding/fake | Fake Embeddings: LangChain also provides a fake embedding class. You can use this to test your pipelines. from langchain.embeddings import FakeEmbeddings; embeddings = FakeEmbeddings(size=1352); query_result = embeddings.embed_query("foo"); doc_results = embeddings.embed_documents(["foo"]) |
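The fake embedding class above is meant for testing pipelines. A minimal sketch of such a test, assuming a pytest-style runner (the test name and the second document are illustrative):

```python
from langchain.embeddings import FakeEmbeddings

def test_embedding_pipeline_shapes():
    # FakeEmbeddings returns random vectors of the requested size,
    # so this exercises the plumbing without any model or network.
    embeddings = FakeEmbeddings(size=1352)
    query_vec = embeddings.embed_query("foo")
    doc_vecs = embeddings.embed_documents(["foo", "bar"])
    assert len(query_vec) == 1352
    assert len(doc_vecs) == 2
    assert all(len(v) == 1352 for v in doc_vecs)
```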
596 | https://python.langchain.com/docs/integrations/text_embedding/google_vertex_ai_palm | Google Vertex AI PaLM: the Vertex AI PaLM API is a service on Google Cloud exposing the embedding models. Note: this integration is separate from the Google PaLM integration. By default, Google Cloud does not use customer data to train its foundation models, as part of Google Cloud's AI/ML Privacy Commitment. More details about how Google processes data can be found in Google's Customer Data Processing Addendum (CDPA). To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either have credentials configured for your environment (gcloud, workload identity, etc.) or store the path to a service account JSON file in the GOOGLE_APPLICATION_CREDENTIALS environment variable. This codebase uses the google.auth library, which first looks for the application credentials variable mentioned above, and then looks for system-level auth. For more information, see https://cloud.google.com/docs/authentication/application-default-credentials#GAC and https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth. Install with #!pip install google-cloud-aiplatform, then: from langchain.embeddings import VertexAIEmbeddings; embeddings = VertexAIEmbeddings(); text = "This is a test document."; query_result = embeddings.embed_query(text); doc_result = embeddings.embed_documents([text]) |
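Since google.auth checks the GOOGLE_APPLICATION_CREDENTIALS variable before falling back to system-level auth, one way to wire up a service account is a sketch like this (the JSON path is illustrative, not part of the original page):

```python
import os

# Illustrative path; point this at your own service-account key file.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"

from langchain.embeddings import VertexAIEmbeddings

# google.auth picks up the variable set above.
embeddings = VertexAIEmbeddings()
print(len(embeddings.embed_query("This is a test document.")))
```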
597 | https://python.langchain.com/docs/integrations/text_embedding/gpt4all | GPT4All is a free-to-use, locally running, privacy-aware chatbot. No GPU or internet is required. It features popular models and its own models such as GPT4All Falcon, Wizard, etc. This notebook explains how to use GPT4All embeddings with LangChain. Install GPT4All's Python bindings: %pip install gpt4all > /dev/null (note: you may need to restart the kernel to use updated packages). from langchain.embeddings import GPT4AllEmbeddings; gpt4all_embd = GPT4AllEmbeddings(). On first use the model is downloaded (output: Model downloaded at: /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin). Embed the textual data: text = "This is a test document."; query_result = gpt4all_embd.embed_query(text). With embed_documents you can embed multiple pieces of text; you can also map these embeddings with Nomic's Atlas to see a visual representation of your data. doc_result = gpt4all_embd.embed_documents([text]) |
598 | https://python.langchain.com/docs/integrations/text_embedding/gradient | Gradient allows you to create embeddings, as well as fine-tune and get completions on LLMs, with a simple web API. This notebook goes over how to use LangChain with Gradient embeddings. Imports: from langchain.embeddings import GradientEmbeddings. Set the environment API key; make sure to get your API key from Gradient AI. You are given $10 in free credits to test and fine-tune different models. from getpass import getpass; import os; if not os.environ.get("GRADIENT_ACCESS_TOKEN", None): os.environ["GRADIENT_ACCESS_TOKEN"] = getpass("gradient.ai access token:") # access token under https://auth.gradient.ai/select-workspace; if not os.environ.get("GRADIENT_WORKSPACE_ID", None): os.environ["GRADIENT_WORKSPACE_ID"] = getpass("gradient.ai workspace id:") # `ID` listed in `$ gradient workspace list`, also displayed after login at https://auth.gradient.ai/select-workspace. Optional: validate your environment variables GRADIENT_ACCESS_TOKEN and GRADIENT_WORKSPACE_ID to get the currently deployed models, using the gradientai Python package (pip install gradientai). Create the Gradient instance: documents = ["Pizza is a dish.", "Paris is the capital of France", "numpy is a lib for linear algebra"]; query = "Where is Paris?"; embeddings = GradientEmbeddings(model="bge-large"); documents_embedded = embeddings.embed_documents(documents); query_result = embeddings.embed_query(query). (Demo) compute similarity: import numpy as np; scores = np.array(documents_embedded) @ np.array(query_result).T; dict(zip(documents, scores)) |
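The demo above scores documents with a raw dot product; if the vectors are not unit-normalized, cosine similarity additionally requires dividing by the norms. A small sketch of that variant, reusing the names from the row (whether bge-large vectors come back normalized is an assumption to verify):

```python
import numpy as np

def cosine_scores(doc_vectors, query_vector):
    # Normalize both sides so the dot product becomes cosine similarity.
    docs = np.asarray(doc_vectors, dtype=float)
    query = np.asarray(query_vector, dtype=float)
    docs = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    query = query / np.linalg.norm(query)
    return docs @ query

# e.g. dict(zip(documents, cosine_scores(documents_embedded, query_result)))
```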
599 | https://python.langchain.com/docs/integrations/text_embedding/huggingfacehub | Hugging Face: Let's load the Hugging Face Embedding class. pip install langchain sentence_transformers; from langchain.embeddings import HuggingFaceEmbeddings; embeddings = HuggingFaceEmbeddings(); text = "This is a test document."; query_result = embeddings.embed_query(text); query_result[:3] [-0.04895168915390968, -0.03986193612217903, -0.021562768146395683]; doc_result = embeddings.embed_documents([text]). Hugging Face Inference API: we can also access embedding models via the Hugging Face Inference API, which does not require us to install sentence_transformers and download models locally. import getpass; inference_api_key = getpass.getpass("Enter your HF Inference API Key:\n\n"); from langchain.embeddings import HuggingFaceInferenceAPIEmbeddings; embeddings = HuggingFaceInferenceAPIEmbeddings(api_key=inference_api_key, model_name="sentence-transformers/all-MiniLM-l6-v2"); query_result = embeddings.embed_query(text); query_result[:3] [-0.038338541984558105, 0.1234646737575531, -0.028642963618040085] |
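HuggingFaceEmbeddings also accepts encode_kwargs that are passed through to sentence-transformers; a sketch requesting unit-normalized vectors, so that plain dot products can be read as cosine similarities (the model name and kwargs here are illustrative, not from the page above):

```python
from langchain.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2",
    encode_kwargs={"normalize_embeddings": True},  # unit-length output vectors
)
vec = embeddings.embed_query("This is a test document.")
```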
600 | https://python.langchain.com/docs/integrations/text_embedding/instruct_embeddings | InstructEmbeddings: Let's load the HuggingFace instruct embeddings class. from langchain.embeddings import HuggingFaceInstructEmbeddings; embeddings = HuggingFaceInstructEmbeddings(query_instruction="Represent the query for retrieval: ") (output: load INSTRUCTOR_Transformer, max_seq_length 512); text = "This is a test document."; query_result = embeddings.embed_query(text) |
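The row above only sets the query-side instruction. HuggingFaceInstructEmbeddings also takes a document-side instruction; a sketch pairing the two (the document instruction string is illustrative):

```python
from langchain.embeddings import HuggingFaceInstructEmbeddings

embeddings = HuggingFaceInstructEmbeddings(
    embed_instruction="Represent the document for retrieval: ",
    query_instruction="Represent the query for retrieval: ",
)
doc_result = embeddings.embed_documents(["This is a test document."])
```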
601 | https://python.langchain.com/docs/integrations/text_embedding/jina | Jina: Let's load the Jina Embedding class. from langchain.embeddings import JinaEmbeddings; embeddings = JinaEmbeddings(jina_auth_token=jina_auth_token, model_name="ViT-B-32::openai"); text = "This is a test document."; query_result = embeddings.embed_query(text); doc_result = embeddings.embed_documents([text]). In the above example, ViT-B-32::openai, OpenAI's pretrained ViT-B-32 model, is used. For a full list of models, see here. |
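The snippet above references jina_auth_token without defining it. One way to supply it, mirroring the getpass pattern used on the other pages in this table, is:

```python
from getpass import getpass

# Prompt for the token at runtime instead of hard-coding it.
jina_auth_token = getpass("Jina auth token: ")
```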
602 | https://python.langchain.com/docs/integrations/text_embedding/llamacpp | Llama-cpp: This notebook goes over how to use Llama-cpp embeddings within LangChain. pip install llama-cpp-python; from langchain.embeddings import LlamaCppEmbeddings; llama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin"); text = "This is a test document."; query_result = llama.embed_query(text); doc_result = llama.embed_documents([text]) |
603 | https://python.langchain.com/docs/integrations/text_embedding/llm_rails | LLMRails: Let's load the LLMRails Embeddings class. To use LLMRails embeddings you need to pass an API key as an argument or set it in the environment under the LLM_RAILS_API_KEY key. To get an API key, sign up at https://console.llmrails.com/signup, then go to https://console.llmrails.com/api-keys and copy the key after creating one in the platform. from langchain.embeddings import LLMRailsEmbeddings; embeddings = LLMRailsEmbeddings(model='embedding-english-v1') # or embedding-multi-v1; text = "This is a test document.". To generate embeddings, you can either query an individual text, or you can query a list of texts. query_result = embeddings.embed_query(text); query_result[:5] [-0.09996652603149414, 0.015568195842206478, 0.17670190334320068, 0.16521021723747253, 0.21193109452724457]; doc_result = embeddings.embed_documents([text]); doc_result[0][:5] [-0.04242777079343796, 0.016536075621843338, 0.10052520781755447, 0.18272875249385834, 0.2079043835401535] |
604 | https://python.langchain.com/docs/integrations/text_embedding/localai | LocalAI: Let's load the LocalAI Embedding class. In order to use it, you need to have the LocalAI service hosted somewhere and the embedding models configured. See the documentation at https://localai.io/basics/getting_started/index.html and https://localai.io/features/embeddings/index.html. from langchain.embeddings import LocalAIEmbeddings; embeddings = LocalAIEmbeddings(openai_api_base="http://localhost:8080", model="embedding-model-name"); text = "This is a test document."; query_result = embeddings.embed_query(text); doc_result = embeddings.embed_documents([text]). If you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass through: import os; os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080" |
605 | https://python.langchain.com/docs/integrations/text_embedding/minimax | MiniMax offers an embeddings service. This example goes over how to use LangChain to interact with MiniMax inference for text embedding. import os; os.environ["MINIMAX_GROUP_ID"] = "MINIMAX_GROUP_ID"; os.environ["MINIMAX_API_KEY"] = "MINIMAX_API_KEY"; from langchain.embeddings import MiniMaxEmbeddings; embeddings = MiniMaxEmbeddings(); query_text = "This is a test query."; query_result = embeddings.embed_query(query_text); document_text = "This is a test document."; document_result = embeddings.embed_documents([document_text]); import numpy as np; query_numpy = np.array(query_result); document_numpy = np.array(document_result[0]); similarity = np.dot(query_numpy, document_numpy) / (np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy)); print(f"Cosine similarity between document and query: {similarity}") (output: Cosine similarity between document and query: 0.1573236279277012) |
606 | https://python.langchain.com/docs/integrations/text_embedding/modelscope_hub | ModelScope is a large repository of models and datasets. Let's load the ModelScope Embedding class. from langchain.embeddings import ModelScopeEmbeddings; model_id = "damo/nlp_corom_sentence-embedding_english-base"; embeddings = ModelScopeEmbeddings(model_id=model_id); text = "This is a test document."; query_result = embeddings.embed_query(text); doc_results = embeddings.embed_documents(["foo"]) |
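As with the neighboring MiniMax and MosaicML rows, a quick sanity check of the ModelScope vectors is a cosine similarity between the query and document embeddings; a sketch reusing query_result and doc_results from the row above:

```python
import numpy as np

# Assumes query_result and doc_results from the ModelScope snippet above.
query_numpy = np.array(query_result)
document_numpy = np.array(doc_results[0])
similarity = np.dot(query_numpy, document_numpy) / (
    np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy)
)
print(f"Cosine similarity between document and query: {similarity}")
```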
607 | https://python.langchain.com/docs/integrations/text_embedding/mosaicml | MosaicML offers a managed inference service. You can either use a variety of open-source models, or deploy your own. This example goes over how to use LangChain to interact with MosaicML inference for text embedding. Sign up for an account at https://forms.mosaicml.com/demo?utm_source=langchain. from getpass import getpass; MOSAICML_API_TOKEN = getpass(); import os; os.environ["MOSAICML_API_TOKEN"] = MOSAICML_API_TOKEN; from langchain.embeddings import MosaicMLInstructorEmbeddings; embeddings = MosaicMLInstructorEmbeddings(query_instruction="Represent the query for retrieval: "); query_text = "This is a test query."; query_result = embeddings.embed_query(query_text); document_text = "This is a test document."; document_result = embeddings.embed_documents([document_text]); import numpy as np; query_numpy = np.array(query_result); document_numpy = np.array(document_result[0]); similarity = np.dot(query_numpy, document_numpy) / (np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy)); print(f"Cosine similarity between document and query: {similarity}") |
608 | https://python.langchain.com/docs/integrations/text_embedding/nlp_cloud | NLP Cloud is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data. The embeddings endpoint offers the following model: paraphrase-multilingual-mpnet-base-v2. Paraphrase Multilingual MPNet Base V2 is a very fast model based on Sentence Transformers that is perfectly suited for embeddings extraction in more than 50 languages (see the full list here). pip install nlpcloud; from langchain.embeddings import NLPCloudEmbeddings; import os; os.environ["NLPCLOUD_API_KEY"] = "xxx"; nlpcloud_embd = NLPCloudEmbeddings(); text = "This is a test document."; query_result = nlpcloud_embd.embed_query(text); doc_result = nlpcloud_embd.embed_documents([text]) |
609 | https://python.langchain.com/docs/integrations/text_embedding/ollama | Ollama: Let's load the Ollama Embeddings class. from langchain.embeddings import OllamaEmbeddings; embeddings = OllamaEmbeddings(); text = "This is a test document.". To generate embeddings, you can either query an individual text, or you can query a list of texts. query_result = embeddings.embed_query(text); query_result[:5] [-0.09996652603149414, 0.015568195842206478, 0.17670190334320068, 0.16521021723747253, 0.21193109452724457]; doc_result = embeddings.embed_documents([text]); doc_result[0][:5] [-0.04242777079343796, 0.016536075621843338, 0.10052520781755447, 0.18272875249385834, 0.2079043835401535]. Let's load the Ollama Embeddings class with a smaller model (e.g. llama2:7b). Note: see other supported models at https://ollama.ai/library. embeddings = OllamaEmbeddings(model="llama2:7b"); text = "This is a test document."; query_result = embeddings.embed_query(text); query_result[:5] [-0.09996627271175385, 0.015567859634757042, 0.17670205235481262, 0.16521376371383667, 0.21193283796310425]; doc_result = embeddings.embed_documents([text]); doc_result[0][:5] [-0.042427532374858856, 0.01653730869293213, 0.10052604228258133, 0.18272635340690613, 0.20790338516235352] |
610 | https://python.langchain.com/docs/integrations/text_embedding/openai | OpenAI: Let's load the OpenAI Embedding class. from langchain.embeddings import OpenAIEmbeddings; embeddings = OpenAIEmbeddings(); text = "This is a test document."; query_result = embeddings.embed_query(text); query_result[:5] [-0.003186025367556387, 0.011071979803637493, -0.004020420763285827, -0.011658221276953042, -0.0010534035786864363]; doc_result = embeddings.embed_documents([text]); doc_result[0][:5] [-0.003186025367556387, 0.011071979803637493, -0.004020420763285827, -0.011658221276953042, -0.0010534035786864363]. Let's load the OpenAI Embedding class with first-generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: these are not recommended models; see here. from langchain.embeddings.openai import OpenAIEmbeddings; embeddings = OpenAIEmbeddings(model="text-search-ada-doc-001"); text = "This is a test document."; query_result = embeddings.embed_query(text); query_result[:5] [0.004452846988523035, 0.034550655976098514, -0.015029939040690051, 0.03827273883655212, 0.005785414075152477]; doc_result = embeddings.embed_documents([text]); doc_result[0][:5] [0.004452846988523035, 0.034550655976098514, -0.015029939040690051, 0.03827273883655212, 0.005785414075152477]. If you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass through: import os; os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080" |
611 | https://python.langchain.com/docs/integrations/text_embedding/sagemaker-endpoint | SageMaker: Let's load the SageMaker Endpoints Embeddings class. The class can be used if you host, for example, your own Hugging Face model on SageMaker. For instructions on how to do this, please see here. Note: in order to handle batched requests, you will need to adjust the return line in the predict_fn() function within the custom inference.py script: change return {"vectors": sentence_embeddings[0].tolist()} to return {"vectors": sentence_embeddings.tolist()}. pip3 install langchain boto3; from typing import Dict, List; from langchain.embeddings import SagemakerEndpointEmbeddings; from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler; import json. class ContentHandler(EmbeddingsContentHandler): content_type = "application/json"; accepts = "application/json"; def transform_input(self, inputs: list[str], model_kwargs: Dict) -> bytes: transforms the input into bytes that can be consumed by the SageMaker endpoint (inference.py expects a JSON string with an "inputs" key): input_str = json.dumps({"inputs": inputs, **model_kwargs}); return input_str.encode("utf-8"); def transform_output(self, output: bytes) -> List[List[float]]: transforms the bytes output from the endpoint into a list of embeddings (the outer list has one entry per input string, and the inner lists have the embedding dimension; inference.py returns a JSON string with the list of embeddings in a "vectors" key): response_json = json.loads(output.read().decode("utf-8")); return response_json["vectors"]. content_handler = ContentHandler(); embeddings = SagemakerEndpointEmbeddings(endpoint_name="huggingface-pytorch-inference-2023-03-21-16-14-03-834", region_name="us-east-1", content_handler=content_handler) # optionally pass credentials_profile_name="credentials-profile-name"; query_result = embeddings.embed_query("foo"); doc_results = embeddings.embed_documents(["foo"]); doc_results |
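The batching note above asks you to change the return line of predict_fn in the custom inference.py. A hedged sketch of what that function might look like after the change; the mean-pooling step and the (model, tokenizer) argument layout are assumptions about a typical sentence-transformers-style inference script, not the original file:

```python
import torch

def predict_fn(data, model_and_tokenizer):
    # Assumed layout: model_fn returned a (model, tokenizer) pair.
    model, tokenizer = model_and_tokenizer
    inputs = tokenizer(
        data["inputs"], padding=True, truncation=True, return_tensors="pt"
    )
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool token embeddings into one vector per input sentence.
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    sentence_embeddings = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
    # Batched return, per the note above: one vector per input string.
    return {"vectors": sentence_embeddings.tolist()}
```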
612 | https://python.langchain.com/docs/integrations/text_embedding/self-hosted | Self Hosted: Let's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes. from langchain.embeddings import (SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, SelfHostedHuggingFaceInstructEmbeddings); import runhouse as rh. For an on-demand A100 with GCP, Azure, or Lambda: gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False). For an on-demand A10G with AWS (no single A100s on AWS): gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws'). For an existing cluster: gpu = rh.cluster(ips=['<ip of the cluster>'], ssh_creds={'ssh_user': '...', 'ssh_private_key': '<path_to_key>'}, name='my-cluster'). embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu); text = "This is a test document."; query_result = embeddings.embed_query(text). And similarly for SelfHostedHuggingFaceInstructEmbeddings: embeddings = SelfHostedHuggingFaceInstructEmbeddings(hardware=gpu). Now let's load an embedding model with a custom load function: def get_pipeline(): from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline # must be inside the function in notebooks; model_id = "facebook/bart-base"; tokenizer = AutoTokenizer.from_pretrained(model_id); model = AutoModelForCausalLM.from_pretrained(model_id); return pipeline("feature-extraction", model=model, tokenizer=tokenizer). def inference_fn(pipeline, prompt): # return the last hidden state of the model; if isinstance(prompt, list): return [emb[0][-1] for emb in pipeline(prompt)]; return pipeline(prompt)[0][-1]. embeddings = SelfHostedEmbeddings(model_load_fn=get_pipeline, hardware=gpu, model_reqs=["./", "torch", "transformers"], inference_fn=inference_fn); query_result = embeddings.embed_query(text) |
613 | https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers | Sentence Transformers: SentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias, SentenceTransformerEmbeddings, for users who are more familiar with directly using that package. SentenceTransformers is a Python package that can generate text and image embeddings, originating from Sentence-BERT. pip install sentence_transformers > /dev/null; from langchain.embeddings import HuggingFaceEmbeddings, SentenceTransformerEmbeddings; embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2") # equivalent to SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2"); text = "This is a test document."; query_result = embeddings.embed_query(text); doc_result = embeddings.embed_documents([text, "This is not a test document."]) |
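To see that the two documents embedded above land at different distances from the query, a short check reusing query_result and doc_result from the row (the labels are just for printing):

```python
import numpy as np

# Assumes query_result and doc_result from the snippet above.
query = np.array(query_result)
for vec, label in zip(doc_result, ["test document", "not a test document"]):
    vec = np.array(vec)
    score = np.dot(query, vec) / (np.linalg.norm(query) * np.linalg.norm(vec))
    print(f"{label}: cosine similarity {score:.4f}")
```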
614 | https://python.langchain.com/docs/integrations/text_embedding/spacy_embedding | SpaCy: spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython. Installation and setup: #!pip install spacy. Import the necessary class: from langchain.embeddings.spacy_embeddings import SpacyEmbeddings. Example: initialize SpacyEmbeddings; this will load the spaCy model into memory. embedder = SpacyEmbeddings(). Define some example texts; these could be any documents that you want to analyze, for example news articles, social media posts, or product reviews. texts = ["The quick brown fox jumps over the lazy dog.", "Pack my box with five dozen liquor jugs.", "How vexingly quick daft zebras jump!", "Bright vixens jump; dozy fowl quack."]. Generate and print embeddings for the texts. The SpacyEmbeddings class generates an embedding for each document, which is a numerical representation of the document's content. These embeddings can be used for various natural language processing tasks, such as document similarity comparison or text classification. embeddings = embedder.embed_documents(texts); for i, embedding in enumerate(embeddings): print(f"Embedding for document {i+1}: {embedding}"). Generate and print an embedding for a single piece of text, such as a search query; this can be useful for tasks like information retrieval, where you want to find documents that are similar to a given query. query = "Quick foxes and lazy dogs."; query_embedding = embedder.embed_query(query); print(f"Embedding for query: {query_embedding}") |
615 | https://python.langchain.com/docs/integrations/text_embedding/tensorflowhub | TensorflowHub: Let's load the TensorflowHub Embedding class. from langchain.embeddings import TensorflowHubEmbeddings; embeddings = TensorflowHubEmbeddings() (logs: This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.); text = "This is a test document."; query_result = embeddings.embed_query(text); doc_results = embeddings.embed_documents(["foo"]); doc_results |
616 | https://python.langchain.com/docs/integrations/text_embedding/xinference | Xorbits inference (Xinference): This notebook goes over how to use Xinference embeddings within LangChain. Installation: install Xinference through PyPI with %pip install "xinference[all]". Deploy Xinference locally or in a distributed cluster: for local deployment, run xinference. To deploy Xinference in a cluster, first start an Xinference supervisor using xinference-supervisor; you can use the option -p to specify the port and -H to specify the host (the default port is 9997). Then start the Xinference workers using xinference-worker on each server you want to run them on. You can consult the README file from Xinference for more information. Wrapper: to use Xinference with LangChain, you need to first launch a model. You can use the command line interface (CLI) to do so: xinference launch -n vicuna-v1.3 -f ggmlv3 -q q4_0 (output: Model uid: 915845ee-2a04-11ee-8ed4-d29396a3f064). A model UID is returned for you to use. Now you can use Xinference embeddings with LangChain: from langchain.embeddings import XinferenceEmbeddings; xinference = XinferenceEmbeddings(server_url="http://0.0.0.0:9997", model_uid="915845ee-2a04-11ee-8ed4-d29396a3f064"); query_result = xinference.embed_query("This is a test query"); doc_result = xinference.embed_documents(["text A", "text B"]). Lastly, terminate the model when you do not need it: xinference terminate --model-uid "915845ee-2a04-11ee-8ed4-d29396a3f064" |
617 | https://python.langchain.com/docs/integrations/vectorstores | Vector stores: 📄️ Activeloop Deep Lake: Activeloop Deep Lake is a multi-modal vector store that stores embeddings and their metadata, including text, JSONs, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. 📄️ Alibaba Cloud OpenSearch: Alibaba Cloud OpenSearch is a one-stop platform to develop intelligent search services. OpenSearch was built on the large-scale distributed search engine developed by Alibaba. OpenSearch serves more than 500 business cases in Alibaba Group and thousands of Alibaba Cloud customers. OpenSearch helps develop search services in different search scenarios, including e-commerce, O2O, multimedia, the content industry, communities and forums, and big data query in enterprises. 📄️ AnalyticDB: AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online. 📄️ Annoy: Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. 📄️ Atlas: Atlas is a platform by Nomic made for interacting with both small and internet-scale unstructured datasets. It enables anyone to visualize, search, and share massive datasets in their browser. 📄️ AwaDB: AwaDB is an AI-native database for the search and storage of embedding vectors used by LLM applications. 📄️ Azure Cognitive Search: Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications. 📄️ BagelDB: BagelDB (Open Vector Database for AI) is like GitHub for AI data. 📄️ Cassandra: Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database. 📄️ Chroma: Chroma is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0. 📄️ Clarifai: Clarifai is an AI platform that provides the full AI lifecycle, ranging from data exploration, data labeling, model training, evaluation, and inference. A Clarifai application can be used as a vector database after uploading inputs. 📄️ ClickHouse: ClickHouse is the fastest and most resource-efficient open-source database for real-time apps and analytics, with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance-search functions (like L2Distance), as well as approximate nearest-neighbor search indexes, enable ClickHouse to be used as a high-performance and scalable vector database to store and search vectors with SQL. 📄️ DashVector: DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion, and filtered search. It is built to scale automatically and can adapt to different application requirements. 📄️ Dingo: Dingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (key-value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data. 📄️ DocArray HnswSearch: DocArrayHnswSearch is a lightweight document index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite. 📄️ DocArray InMemorySearch: DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server. 📄️ Elasticsearch: Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library. 📄️ Epsilla: Epsilla is an open-source vector database that leverages advanced parallel graph-traversal techniques for vector indexing. Epsilla is licensed under GPL-3.0. 📄️ Faiss: Facebook AI Similarity Search (Faiss) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. 📄️ Hologres: Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. 📄️ LanceDB: LanceDB is an open-source database for vector search, built with persistent storage, which greatly simplifies retrieval, filtering, and management of embeddings. Fully open source. 📄️ LLMRails: LLMRails is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by LLMRails and is optimized for performance and accuracy. 📄️ Marqo: This notebook shows how to use functionality related to the Marqo vector store. 📄️ Google Vertex AI MatchingEngine: This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database. 📄️ Meilisearch: Meilisearch is an open-source, lightning-fast, and hyper-relevant search engine. It comes with great defaults to help developers build snappy search experiences. 📄️ Milvus: Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models. 📄️ Momento Vector Index (MVI): MVI is the most productive, easiest-to-use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about scaling. MVI is a service that scales automatically to meet your needs. 📄️ MongoDB Atlas: MongoDB Atlas is a fully managed cloud database available in AWS, Azure, and GCP. It now has support for native vector search on your MongoDB document data. 📄️ MyScale: MyScale is a cloud-based database optimized for AI applications and solutions, built on the open-source ClickHouse. 📄️ Neo4j Vector Index: Neo4j is an open-source graph database with integrated support for vector similarity search. 📄️ NucliaDB: You can use a local NucliaDB instance or use Nuclia Cloud. 📄️ OpenSearch: OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications, licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. 📄️ Postgres Embedding: Postgres Embedding is an open-source vector similarity search for Postgres that uses Hierarchical Navigable Small Worlds (HNSW) for approximate nearest-neighbor search. 📄️ PGVector: PGVector is an open-source vector similarity search for Postgres. 📄️ Pinecone: Pinecone is a vector database with broad functionality. 📄️ Qdrant: Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points (vectors with an additional payload). Qdrant is tailored to extended filtering support, which makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications. 📄️ Redis: Redis vector database introduction and LangChain integration guide. 📄️ Rockset: Rockset is a real-time search and analytics database built for the cloud. Rockset uses a Converged Index™ with an efficient store for vector embeddings to serve low-latency, high-concurrency search queries at scale. Rockset has full support for metadata filtering and handles real-time ingestion for constantly updating, streaming data. 📄️ ScaNN: ScaNN (Scalable Nearest Neighbors) is a method for efficient vector similarity search at scale. 📄️ SingleStoreDB: SingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premises. It provides vector storage and vector functions, including dot_product and euclidean_distance, thereby supporting AI applications that require text similarity matching. 📄️ scikit-learn: scikit-learn is an open-source collection of machine learning algorithms, including some implementations of k-nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in JSON, BSON (binary JSON), or Apache Parquet format. 📄️ sqlite-vss: sqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities. 📄️ StarRocks: StarRocks is a high-performance analytical database. 📄️ Supabase (Postgres): Supabase is an open-source Firebase alternative. Supabase is built on top of PostgreSQL, which offers strong SQL querying capabilities and enables a simple interface with already-existing tools and frameworks. 📄️ Tair: Tair is a cloud-native in-memory database service developed by Alibaba Cloud. 📄️ Tencent Cloud VectorDB: Tencent Cloud VectorDB is a fully managed, self-developed, enterprise-level distributed database service designed for storing, retrieving, and analyzing multi-dimensional vector data. The database supports multiple index types and similarity calculation methods. A single index can support a vector scale of up to 1 billion and can support millions of QPS and millisecond-level query latency. Tencent Cloud VectorDB can not only provide an external knowledge base for large models to improve the accuracy of large model responses, but can also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service. 📄️ Tigris: Tigris is an open-source serverless NoSQL database and search platform designed to simplify building high-performance vector search applications. 📄️ Timescale Vector (Postgres): This notebook shows how to use the Postgres vector database Timescale Vector. You'll learn how to use TimescaleVector for (1) semantic search, (2) time-based vector search, (3) self-querying, and (4) how to create indexes to speed up queries. 📄️ Typesense: Typesense is an open-source, in-memory search engine that you can either self-host or run on Typesense Cloud. 📄️ USearch: USearch is a smaller and faster single-file vector search engine. 📄️ Vald: Vald is a highly scalable distributed fast approximate nearest neighbor (ANN) dense vector search engine. 📄️ vearch 📄️ Vectara: Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. 📄️ Vespa: Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query. 📄️ Weaviate: Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML models, and scale seamlessly into billions of data objects. 📄️ Xata: Xata is a serverless data platform based on PostgreSQL. It provides a Python SDK for interacting with your database, and a UI for managing your data. 📄️ Zep: Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, 📄️ Zilliz: Zilliz Cloud is a fully managed service on cloud for LF AI Milvus®, |
618 | https://python.langchain.com/docs/integrations/vectorstores/activeloop_deeplake | ComponentsVector storesActiveloop Deep LakeOn this pageActiveloop Deep LakeActiveloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes.This notebook showcases basic functionality related to Activeloop Deep Lake. While Deep Lake can store embeddings, it is capable of storing any type of data. It is a serverless data lake with version control, query engine and streaming dataloaders to deep learning frameworks. For more information, please see the Deep Lake documentation or api referenceSetting uppip install openai 'deeplake[enterprise]' tiktokenExample provided by ActiveloopIntegration with LangChain.Deep Lake locallyfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import DeepLakeimport osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")activeloop_token = getpass.getpass("activeloop token:")embeddings = OpenAIEmbeddings()from langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Create a local datasetCreate a dataset locally at ./deeplake/, then run similarity search. The Deeplake+LangChain integration uses Deep Lake datasets under the hood, so dataset and vector store are used interchangeably. To create a dataset in your own cloud, or in the Deep Lake storage, adjust the path accordingly.db = DeepLake( dataset_path="./my_deeplake/", embedding=embeddings, overwrite=True)db.add_documents(docs)# or shorter# db = DeepLake.from_documents(docs, dataset_path="./my_deeplake/", embedding=embeddings, overwrite=True)Query datasetquery = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query) Dataset(path='./my_deeplake/', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None To disable dataset summary printings all the time, you can specify verbose=False during VectorStore initialization.print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. 
One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Later, you can reload the dataset without recomputing embeddingsdb = DeepLake( dataset_path="./my_deeplake/", embedding=embeddings, read_only=True)docs = db.similarity_search(query) Deep Lake Dataset in ./my_deeplake/ already exists, loading from the storageDeep Lake, for now, is single writer and multiple reader. Setting read_only=True helps to avoid acquiring the writer lock.Retrieval Question/Answeringfrom langchain.chains import RetrievalQAfrom langchain.llms import OpenAIChatqa = RetrievalQA.from_chain_type( llm=OpenAIChat(model="gpt-3.5-turbo"), chain_type="stuff", retriever=db.as_retriever(),) /home/ubuntu/langchain_activeloop/langchain/libs/langchain/langchain/llms/openai.py:786: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI` warnings.warn(query = "What did the president say about Ketanji Brown Jackson"qa.run(query) 'The president said that Ketanji Brown Jackson is a former top litigator in private practice and a former federal public defender. She comes from a family of public school educators and police officers. She is a consensus builder and has received a broad range of support since being nominated.'Attribute based filtering in metadataLet's create another vector store containing metadata with the year the documents were created.import randomfor d in docs: d.metadata["year"] = random.randint(2012, 2014)db = DeepLake.from_documents( docs, embeddings, dataset_path="./my_deeplake/", overwrite=True) Dataset(path='./my_deeplake/', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (4, 1536) float32 None id text (4, 1) str None metadata json (4, 1) str None text text (4, 1) str None db.similarity_search( "What did the president say about Ketanji Brown Jackson", filter={"metadata": {"year": 2013}},) 100%|██████████| 4/4 [00:00<00:00, 2936.16it/s] [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}), Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. 
At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}), Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013})]Choosing distance functionDistance function L2 for Euclidean, L1 for Nuclear, Max l-infinity distance, cos for cosine similarity, dot for dot product db.similarity_search( "What did the president say about Ketanji Brown Jackson?", distance_metric="cos") [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}), Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. 
\n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}), Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}), Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2012})]Maximal Marginal relevanceUsing maximal marginal relevancedb.max_marginal_relevance_search( "What did the president say about Ketanji Brown Jackson?") [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}), Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. 
\n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}), Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}), Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2012})]Delete datasetdb.delete_dataset() and if delete fails you can also force deleteDeepLake.force_delete_by_path("./my_deeplake") Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memoryBy default, Deep Lake datasets are stored locally. To store them in memory, in the Deep Lake Managed DB, or in any object storage, you can provide the corresponding path and credentials when creating the vector store. 
Some paths require registration with Activeloop and creation of an API token that can be retrieved hereos.environ["ACTIVELOOP_TOKEN"] = activeloop_token# Embed and store the textsusername = "<USERNAME_OR_ORG>" # your username on app.activeloop.aidataset_path = f"hub://{username}/langchain_testing_python" # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.docs = text_splitter.split_documents(documents)embedding = OpenAIEmbeddings()db = DeepLake(dataset_path=dataset_path, embedding=embeddings, overwrite=True)ids = db.add_documents(docs) Your Deep Lake dataset has been successfully created! Dataset(path='hub://adilkhan/langchain_testing_python', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.tensor_db execution optionIn order to utilize Deep Lake's Managed Tensor Database, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps.# Embed and store the textsusername = "<USERNAME_OR_ORG>" # your username on app.activeloop.aidataset_path = f"hub://{username}/langchain_testing"docs = text_splitter.split_documents(documents)embedding = OpenAIEmbeddings()db = DeepLake( dataset_path=dataset_path, embedding=embeddings, overwrite=True, runtime={"tensor_db": True},)ids = db.add_documents(docs) Your Deep Lake dataset has been successfully created! 
Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None TQL SearchQueries can also be executed from within the similarity_search method, with the query specified in Deep Lake's Tensor Query Language (TQL).search_id = db.vectorstore.dataset.id[0].numpy()search_id[0] '8a6ff326-3a85-11ee-b840-13905694aaaf'docs = db.similarity_search( query=None, tql=f"SELECT * WHERE id == '{search_id[0]}'",)db.vectorstore.summary() Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None Creating vector stores on AWS S3dataset_path = f"s3://BUCKET/langchain_test" # could also be ./local/path (much faster locally), hub://bucket/path/to/dataset, gcs://path/to/dataset, etc.embeddings = OpenAIEmbeddings()db = DeepLake.from_documents( docs, dataset_path=dataset_path, embedding=embeddings, overwrite=True, creds={ "aws_access_key_id": os.environ["AWS_ACCESS_KEY_ID"], "aws_secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"], "aws_session_token": os.environ["AWS_SESSION_TOKEN"], # Optional },) s3://hub-2.0-datasets-n/langchain_test loaded successfully. Evaluating ingest: 100%|██████████| 1/1 [00:10<00:00 \ Dataset(path='s3://hub-2.0-datasets-n/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (4, 1536) float32 None ids text (4, 1) str None metadata json (4, 1) str None text text (4, 1) str None Deep Lake APIYou can access the underlying Deep Lake dataset at db.vectorstore# get structure of the datasetdb.vectorstore.summary() Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None # get embeddings numpy arrayembeds = db.vectorstore.dataset.embedding.numpy()Transfer local dataset to cloudCopy an already-created dataset to the cloud. You can also transfer from cloud to local.import deeplakeusername = "davitbun" # your username on app.activeloop.aisource = f"hub://{username}/langchain_testing" # could be local, s3, gcs, etc.destination = f"hub://{username}/langchain_test_copy" # could be local, s3, gcs, etc.deeplake.deepcopy(src=source, dest=destination, overwrite=True) Copying dataset: 100%|██████████| 56/56 [00:38<00:00 This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy Your Deep Lake dataset has been successfully created! The dataset is private so make sure you are logged in! Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])db = DeepLake(dataset_path=destination, embedding=embeddings)db.add_documents(docs) This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy / hub://davitbun/langchain_test_copy loaded successfully. 
Deep Lake Dataset in hub://davitbun/langchain_test_copy already exists, loading from the storage Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (4, 1536) float32 None ids text (4, 1) str None metadata json (4, 1) str None text text (4, 1) str None Evaluating ingest: 100%|██████████| 1/1 [00:31<00:00 - Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (8, 1536) float32 None ids text (8, 1) str None metadata json (8, 1) str None text text (8, 1) str None ['ad42f3fe-e188-11ed-b66d-41c5f7b85421', 'ad42f3ff-e188-11ed-b66d-41c5f7b85421', 'ad42f400-e188-11ed-b66d-41c5f7b85421', 'ad42f401-e188-11ed-b66d-41c5f7b85421']PreviousVector storesNextAlibaba Cloud OpenSearchSetting upExample provided by ActiveloopDeep Lake locallyCreate a local datasetQuery datasetRetrieval Question/AnsweringAttribute based filtering in metadataChoosing distance functionMaximal Marginal relevanceDelete datasetDeep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memoryTQL SearchCreating vector stores on AWS S3Deep Lake APITransfer local dataset to cloud |
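To reconnect later to any of the datasets created above without re-ingesting the documents, point the constructor at the existing path. A minimal sketch (assuming OPENAI_API_KEY and ACTIVELOOP_TOKEN are set; the dataset path reuses the placeholder from the examples above):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

# read_only avoids accidental writes when you only need to query.
db = DeepLake(
    dataset_path="hub://<USERNAME_OR_ORG>/langchain_testing",
    embedding=OpenAIEmbeddings(),
    read_only=True,
)
docs = db.similarity_search("What did the president say about Ketanji Brown Jackson", k=2)
print(docs[0].page_content[:100])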
619 | https://python.langchain.com/docs/integrations/vectorstores/alibabacloud_opensearch | ComponentsVector storesAlibaba Cloud OpenSearchAlibaba Cloud OpenSearchAlibaba Cloud OpenSearch is a one-stop platform to develop intelligent search services. OpenSearch was built on the large-scale distributed search engine developed by Alibaba. OpenSearch serves more than 500 business cases in Alibaba Group and thousands of Alibaba Cloud customers. OpenSearch helps develop search services in different search scenarios, including e-commerce, O2O, multimedia, the content industry, communities and forums, and big data queries in enterprises.OpenSearch helps you develop high-quality, maintenance-free, and high-performance intelligent search services that provide your users with high search efficiency and accuracy.OpenSearch provides the vector search feature. In specific scenarios, especially test question search and image search scenarios, you can use the vector search feature together with the multimodal search feature to improve the accuracy of search results.This notebook shows how to use functionality related to the Alibaba Cloud OpenSearch Vector Search Edition.
To run, you should have an OpenSearch Vector Search Edition instance up and running:Read the help document to quickly familiarize yourself with and configure an OpenSearch Vector Search Edition instance.After the instance is up and running, follow these steps to split documents, get embeddings, connect to the Alibaba Cloud OpenSearch instance, index documents, and perform vector retrieval.We need to install the following Python packages first.#!pip install alibabacloud-ha3engineWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import ( AlibabaCloudOpenSearch, AlibabaCloudOpenSearchSettings,)Split documents and get embeddings.from langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Create OpenSearch settings.settings = AlibabaCloudOpenSearchSettings( endpoint="The endpoint of the OpenSearch instance; you can find it in the Alibaba Cloud OpenSearch console.", instance_id="The identifier of the OpenSearch instance; you can find it in the Alibaba Cloud OpenSearch console.", datasource_name="The name of the data source specified when creating it.", username="The username specified when purchasing the instance.", password="The password specified when purchasing the instance.", embedding_index_name="The name of the vector attribute specified when configuring the instance attributes.", field_name_mapping={ "id": "id", # The id field name mapping of the index document. "document": "document", # The text field name mapping of the index document. "embedding": "embedding", # The embedding field name mapping of the index document. "name_of_the_metadata_specified_during_search": "opensearch_metadata_field_name,=", # The metadata field name mapping of the index document; multiple mappings can be specified. The value contains the mapping name and an operator; the operator is used when executing metadata filter queries. 
},)# for example# settings = AlibabaCloudOpenSearchSettings(# endpoint="ha-cn-5yd39d83c03.public.ha.aliyuncs.com",# instance_id="ha-cn-5yd39d83c03",# datasource_name="ha-cn-5yd39d83c03_test",# username="this is a user name",# password="this is a password",# embedding_index_name="index_embedding",# field_name_mapping={# "id": "id",# "document": "document",# "embedding": "embedding",# "metadata_a": "metadata_a,=", # The value contains the mapping name and an operator; the operator is used when executing metadata filter queries# "metadata_b": "metadata_b,>",# "metadata_c": "metadata_c,<",# "metadata_else": "metadata_else,="# })Create an OpenSearch access instance from the settings.# Create an opensearch instance and index docs.opensearch = AlibabaCloudOpenSearch.from_documents( documents=docs, embedding=embeddings, config=settings)or# Create an opensearch instance.opensearch = AlibabaCloudOpenSearch(embedding=embeddings, config=settings)Add texts and build the index.metadatas = [{"md_key_a": "md_val_a", "md_key_b": "md_val_b"} for _ in docs]# the keys of each metadata dict must match field_name_mapping in settings.opensearch.add_texts(texts=[d.page_content for d in docs], metadatas=metadatas)Query and retrieve data.query = "What did the president say about Ketanji Brown Jackson"docs = opensearch.similarity_search(query)print(docs[0].page_content)Query and retrieve data with metadata.query = "What did the president say about Ketanji Brown Jackson"metadatas = {"md_key_a": "md_val_a"}docs = opensearch.similarity_search(query, filter=metadatas)print(docs[0].page_content)If you encounter any problems during use, please feel free to contact [email protected], and we will do our best to provide you with assistance and support.PreviousActiveloop Deep LakeNextAnalyticDB |
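Like any LangChain vector store, the instance can also be exposed as a retriever for use in chains. A minimal sketch reusing the opensearch object created above (the k value is illustrative):

# as_retriever comes from the shared VectorStore interface.
retriever = opensearch.as_retriever(search_kwargs={"k": 3})
relevant_docs = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
print(len(relevant_docs))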
620 | https://python.langchain.com/docs/integrations/vectorstores/analyticdb | ComponentsVector storesAnalyticDBAnalyticDBAnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.AnalyticDB for PostgreSQL is developed based on the open source Greenplum Database project and is enhanced with in-depth extensions by Alibaba Cloud. AnalyticDB for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.This notebook shows how to use functionality related to the AnalyticDB vector database.
To run, you should have an AnalyticDB instance up and running:Using AnalyticDB Cloud Vector Database. Click here to deploy it quickly.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import AnalyticDBSplit documents and get embeddings by calling the OpenAI APIfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Connect to AnalyticDB by setting the related environment variables.export PG_HOST={your_analyticdb_hostname}export PG_PORT={your_analyticdb_port} # Optional, default is 5432export PG_DATABASE={your_database} # Optional, default is postgresexport PG_USER={database_username}export PG_PASSWORD={database_password}Then store your embeddings and documents into AnalyticDBimport osconnection_string = AnalyticDB.connection_string_from_db_params( driver=os.environ.get("PG_DRIVER", "psycopg2cffi"), host=os.environ.get("PG_HOST", "localhost"), port=int(os.environ.get("PG_PORT", "5432")), database=os.environ.get("PG_DATABASE", "postgres"), user=os.environ.get("PG_USER", "postgres"), password=os.environ.get("PG_PASSWORD", "postgres"),)vector_db = AnalyticDB.from_documents( docs, embeddings, connection_string=connection_string,)Query and retrieve dataquery = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.PreviousAlibaba Cloud OpenSearchNextAnnoy |
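If you also need the match quality, the generic similarity_search_with_score method is available on the store. A minimal sketch reusing vector_db and query from above (score semantics can vary by version; here the scores are distances, so lower generally means more similar):

docs_with_scores = vector_db.similarity_search_with_score(query, k=4)
for doc, score in docs_with_scores:
    # Print the distance alongside a snippet of the matched chunk.
    print(f"{score:.4f}  {doc.page_content[:60]}")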
621 | https://python.langchain.com/docs/integrations/vectorstores/annoy | ComponentsVector storesAnnoyOn this pageAnnoyAnnoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.This notebook shows how to use functionality related to the Annoy vector database.NOTE: Annoy is read-only - once the index is built you cannot add any more embeddings!If you want to progressively add new entries to your VectorStore, choose an alternative instead!#!pip install annoyCreate VectorStore from textsfrom langchain.embeddings import HuggingFaceEmbeddingsfrom langchain.vectorstores import Annoyembeddings_func = HuggingFaceEmbeddings()texts = ["pizza is great", "I love salad", "my car", "a dog"]# default metric is angularvector_store = Annoy.from_texts(texts, embeddings_func)# allows for custom annoy parameters, defaults are n_trees=100, n_jobs=-1, metric="angular"vector_store_v2 = Annoy.from_texts( texts, embeddings_func, metric="dot", n_trees=100, n_jobs=1)vector_store.similarity_search("food", k=3) [Document(page_content='pizza is great', metadata={}), Document(page_content='I love salad', metadata={}), Document(page_content='my car', metadata={})]# the score is a distance metric, so lower is bettervector_store.similarity_search_with_score("food", k=3) [(Document(page_content='pizza is great', metadata={}), 1.0944390296936035), (Document(page_content='I love salad', metadata={}), 1.1273186206817627), (Document(page_content='my car', metadata={}), 1.1580758094787598)]Create VectorStore from docsfrom langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)docs[:5] [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n\nIn this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. \n\nLet each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. 
\n\nPlease rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n\nThroughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n\nThey keep moving. \n\nAnd the costs and the threats to America and the world keep rising. \n\nThat’s why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \n\nThe United States is a member along with 29 other nations. \n\nIt matters. American diplomacy matters. American resolve matters.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='Putin’s latest attack on Ukraine was premeditated and unprovoked. \n\nHe rejected repeated efforts at diplomacy. \n\nHe thought the West and NATO wouldn’t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n\nWe prepared extensively and carefully. \n\nWe spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \n\nI spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \n\nWe countered Russia’s lies with truth. \n\nAnd now that he has acted the free world is holding him accountable. \n\nAlong with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n\nTogether with our allies –we are right now enforcing powerful economic sanctions. \n\nWe are cutting off Russia’s largest banks from the international financial system. \n\nPreventing Russia’s central bank from defending the Russian Ruble making Putin’s $630 Billion “war fund” worthless. \n\nWe are choking off Russia’s access to technology that will sap its economic strength and weaken its military for years to come. \n\nTonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n\nThe U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n\nWe are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights – further isolating Russia – and adding an additional squeeze –on their economy. The Ruble has lost 30% of its value. \n\nThe Russian stock market has lost 40% of its value and trading remains suspended. Russia’s economy is reeling and Putin alone is to blame. \n\nTogether with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n\nWe are giving more than $1 Billion in direct assistance to Ukraine. \n\nAnd we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. 
\n\nLet me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. \n\nOur forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies – in the event that Putin decides to keep moving west.', metadata={'source': '../../../state_of_the_union.txt'})]vector_store_from_docs = Annoy.from_documents(docs, embeddings_func)query = "What did the president say about Ketanji Brown Jackson"docs = vector_store_from_docs.similarity_search(query)print(docs[0].page_content[:100]) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights AcCreate VectorStore via existing embeddingsembs = embeddings_func.embed_documents(texts)data = list(zip(texts, embs))vector_store_from_embeddings = Annoy.from_embeddings(data, embeddings_func)vector_store_from_embeddings.similarity_search_with_score("food", k=3) [(Document(page_content='pizza is great', metadata={}), 1.0944390296936035), (Document(page_content='I love salad', metadata={}), 1.1273186206817627), (Document(page_content='my car', metadata={}), 1.1580758094787598)]Search via embeddingsmotorbike_emb = embeddings_func.embed_query("motorbike")vector_store.similarity_search_by_vector(motorbike_emb, k=3) [Document(page_content='my car', metadata={}), Document(page_content='a dog', metadata={}), Document(page_content='pizza is great', metadata={})]vector_store.similarity_search_with_score_by_vector(motorbike_emb, k=3) [(Document(page_content='my car', metadata={}), 1.0870471000671387), (Document(page_content='a dog', metadata={}), 1.2095637321472168), (Document(page_content='pizza is great', metadata={}), 1.3254905939102173)]Search via docstore idvector_store.index_to_docstore_id {0: '2d1498a8-a37c-4798-acb9-0016504ed798', 1: '2d30aecc-88e0-4469-9d51-0ef7e9858e6d', 2: '927f1120-985b-4691-b577-ad5cb42e011c', 3: '3056ddcf-a62f-48c8-bd98-b9e57a3dfcae'}some_docstore_id = 0 # texts[0]vector_store.docstore._dict[vector_store.index_to_docstore_id[some_docstore_id]] Document(page_content='pizza is great', metadata={})# same document has distance 0vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3) [(Document(page_content='pizza is great', metadata={}), 0.0), (Document(page_content='I love salad', metadata={}), 1.0734446048736572), (Document(page_content='my car', metadata={}), 1.2895267009735107)]Save and loadvector_store.save_local("my_annoy_index_and_docstore") saving configloaded_vector_store = Annoy.load_local( "my_annoy_index_and_docstore", embeddings=embeddings_func)# same document has distance 0loaded_vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3) [(Document(page_content='pizza is great', metadata={}), 0.0), (Document(page_content='I love salad', metadata={}), 1.0734446048736572), (Document(page_content='my car', metadata={}), 1.2895267009735107)]Construct from scratchimport uuidfrom annoy import AnnoyIndexfrom langchain.docstore.document import Documentfrom langchain.docstore.in_memory import InMemoryDocstoremetadatas = [{"x": "food"}, {"x": "food"}, {"x": "stuff"}, {"x": "animal"}]# embeddingsembeddings = embeddings_func.embed_documents(texts)# embedding dimf = len(embeddings[0])# indexmetric = "angular"index = AnnoyIndex(f, metric=metric)for i, emb in enumerate(embeddings): index.add_item(i, emb)index.build(10)# docstoredocuments = []for i, text in enumerate(texts): metadata = metadatas[i] if metadatas else {} documents.append(Document(page_content=text, metadata=metadata))index_to_docstore_id = {i: 
str(uuid.uuid4()) for i in range(len(documents))}docstore = InMemoryDocstore( {index_to_docstore_id[i]: doc for i, doc in enumerate(documents)})db_manually = Annoy( embeddings_func.embed_query, index, metric, docstore, index_to_docstore_id)db_manually.similarity_search_with_score("eating!", k=3) [(Document(page_content='pizza is great', metadata={'x': 'food'}), 1.1314140558242798), (Document(page_content='I love salad', metadata={'x': 'food'}), 1.1668788194656372), (Document(page_content='my car', metadata={'x': 'stuff'}), 1.226445198059082)]PreviousAnalyticDBNextAtlasCreate VectorStore from textsCreate VectorStore from docsCreate VectorStore via existing embeddingsSearch via embeddingsSearch via docstore idSave and loadConstruct from scratch |
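Annoy's LangChain wrapper also exposes maximal marginal relevance, which trades a little raw similarity for diversity among the returned documents. A minimal sketch reusing vector_store from the beginning of this page (k and fetch_k are illustrative):

# fetch_k candidates are retrieved first, then k diverse ones are kept.
mmr_docs = vector_store.max_marginal_relevance_search("food", k=2, fetch_k=4)
for doc in mmr_docs:
    print(doc.page_content)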
622 | https://python.langchain.com/docs/integrations/vectorstores/atlas | ComponentsVector storesAtlasOn this pageAtlasAtlas is a platform by Nomic made for interacting with both small and internet scale unstructured datasets. It enables anyone to visualize, search, and share massive datasets in their browser.This notebook shows you how to use functionality related to the AtlasDB vectorstore.pip install spacypython3 -m spacy download en_core_web_smpip install nomicLoad Packagesimport timefrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import SpacyTextSplitterfrom langchain.vectorstores import AtlasDBfrom langchain.document_loaders import TextLoaderATLAS_TEST_API_KEY = "7xDPkYXSYDc1_ErdTPIcoAR9RNd8YDlkS3nVNXcVoIMZ6"Prepare the Dataloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = SpacyTextSplitter(separator="|")texts = []for doc in text_splitter.split_documents(documents): texts.extend(doc.page_content.split("|"))texts = [e.strip() for e in texts]Map the Data using Nomic's Atlasdb = AtlasDB.from_texts( texts=texts, name="test_index_" + str(time.time()), # unique name for your vector store description="test_index", # a description for your vector store api_key=ATLAS_TEST_API_KEY, index_kwargs={"build_topic_model": True},)db.project.wait_for_project_lock()db.projectHere is a map with the result of this code. This map displays the texts of the State of the Union.
https://atlas.nomic.ai/map/3e4de075-89ff-486a-845c-36c23f30bb67/d8ce2284-8edb-4050-8b9b-9bb543d7f647PreviousAnnoyNextAwaDBLoad PackagesPrepare the DataMap the Data using Nomic's Atlas |
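Besides the map visualization, the db object created above behaves like any other LangChain vector store. A minimal sketch (the query string is illustrative):

# Standard similarity search against the Atlas-backed store.
docs = db.similarity_search("economy and jobs", k=3)
for doc in docs:
    print(doc.page_content[:80])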
623 | https://python.langchain.com/docs/integrations/vectorstores/awadb | ComponentsVector storesAwaDBOn this pageAwaDBAwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.This notebook shows how to use functionality related to AwaDB.pip install awadbfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import AwaDBfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)docs = text_splitter.split_documents(documents)db = AwaDB.from_documents(docs)query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search with scoreThe returned distance score is between 0 and 1: 0 is most dissimilar and 1 is most similar.docs = db.similarity_search_with_score(query)print(docs[0]) (Document(page_content='And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.561813814013747)Restore the table created and added data beforeAwaDB automatically persists added document data.To restore a table that you previously created and populated, do the following:import awadbawadb_client = awadb.Client()ret = awadb_client.Load("langchain_awadb")if ret: print("awadb load table success")else: print("awadb load table failed")awadb load table successPreviousAtlasNextAzure Cognitive SearchSimilarity search with scoreRestore the table created and added data before |
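Alternatively, you can reattach to the persisted table through the LangChain wrapper itself rather than the raw client. A minimal sketch, assuming the wrapper accepts a table_name argument in your installed version (verify against the AwaDB integration you have):

from langchain.vectorstores import AwaDB

# Hypothetical reattach: points the wrapper at the table persisted earlier.
db_restored = AwaDB(table_name="langchain_awadb")
docs = db_restored.similarity_search(query, k=1)
print(docs[0].page_content[:80])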
624 | https://python.langchain.com/docs/integrations/vectorstores/azuresearch | ComponentsVector storesAzure Cognitive SearchOn this pageAzure Cognitive SearchAzure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.Vector search is currently in public preview. It's available through the Azure portal, preview REST API and beta client libraries. More info Beta client libraries are subject to potential breaking changes, please be sure to use the SDK package version identified below. azure-search-documents==11.4.0b8Install Azure Cognitive Search SDKpip install azure-search-documents==11.4.0b8pip install azure-identityImport required librariesimport openaiimport osfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.vectorstores.azuresearch import AzureSearchConfigure OpenAI settingsConfigure the OpenAI settings to use Azure OpenAI or OpenAIos.environ["OPENAI_API_TYPE"] = "azure"os.environ["OPENAI_API_BASE"] = "YOUR_OPENAI_ENDPOINT"os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"os.environ["OPENAI_API_VERSION"] = "2023-05-15"model: str = "text-embedding-ada-002"Configure vector store settingsSet up the vector store settings using environment variables:vector_store_address: str = "YOUR_AZURE_SEARCH_ENDPOINT"vector_store_password: str = "YOUR_AZURE_SEARCH_ADMIN_KEY"Create embeddings and vector store instancesCreate instances of the OpenAIEmbeddings and AzureSearch classes:embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)index_name: str = "langchain-vector-demo"vector_store: AzureSearch = AzureSearch( azure_search_endpoint=vector_store_address, azure_search_key=vector_store_password, index_name=index_name, embedding_function=embeddings.embed_query,)Insert text and embeddings into vector storeAdd texts and metadata from the JSON data to the vector store:from langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../../state_of_the_union.txt", encoding="utf-8")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)vector_store.add_documents(documents=docs)Perform a vector similarity searchExecute a pure vector similarity search using the similarity_search() method:# Perform a similarity searchdocs = vector_store.similarity_search( query="What did the president say about Ketanji Brown Jackson", k=3, search_type="similarity",)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. 
One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Perform a vector similarity search with relevance scoresExecute a pure vector similarity search using the similarity_search_with_relevance_scores() method:docs_and_scores = vector_store.similarity_search_with_relevance_scores(query="What did the president say about Ketanji Brown Jackson", k=4, score_threshold=0.80)from pprint import pprintpprint(docs_and_scores) [(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': 'C:\\repos\\langchain-fruocco-acs\\langchain\\docs\\extras\\modules\\state_of_the_union.txt'}), 0.8441472), (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': 'C:\\repos\\langchain-fruocco-acs\\langchain\\docs\\extras\\modules\\state_of_the_union.txt'}), 0.8441472), (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': 'C:\\repos\\langchain-fruocco-acs\\langchain\\docs\\extras\\modules\\state_of_the_union.txt'}), 0.82153815), (Document(page_content='A former top litigator in private practice. A former federal public defender. 
And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': 'C:\\repos\\langchain-fruocco-acs\\langchain\\docs\\extras\\modules\\state_of_the_union.txt'}), 0.82153815)]Perform a Hybrid SearchExecute hybrid search using the search_type or hybrid_search() method:# Perform a hybrid searchdocs = vector_store.similarity_search( query="What did the president say about Ketanji Brown Jackson", k=3, search_type="hybrid")print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.# Perform a hybrid searchdocs = vector_store.hybrid_search( query="What did the president say about Ketanji Brown Jackson", k=3)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. 
One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Create a new index with custom filterable fieldsfrom azure.search.documents.indexes.models import ( SearchableField, SearchField, SearchFieldDataType, SimpleField, ScoringProfile, TextWeights,)embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)embedding_function = embeddings.embed_queryfields = [ SimpleField( name="id", type=SearchFieldDataType.String, key=True, filterable=True, ), SearchableField( name="content", type=SearchFieldDataType.String, searchable=True, ), SearchField( name="content_vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=len(embedding_function("Text")), vector_search_configuration="default", ), SearchableField( name="metadata", type=SearchFieldDataType.String, searchable=True, ), # Additional field to store the title SearchableField( name="title", type=SearchFieldDataType.String, searchable=True, ), # Additional field for filtering on document source SimpleField( name="source", type=SearchFieldDataType.String, filterable=True, ),]index_name: str = "langchain-vector-demo-custom"vector_store: AzureSearch = AzureSearch( azure_search_endpoint=vector_store_address, azure_search_key=vector_store_password, index_name=index_name, embedding_function=embedding_function, fields=fields,)Perform a query with a custom filter# Data in the metadata dictionary with a corresponding field in the index will be added to the index# In this example, the metadata dictionary contains a title, a source and a random field# The title and the source will be added to the index as separate fields, but the random won't. (as it is not defined in the fields list)# The random field will be only stored in the metadata fieldvector_store.add_texts( ["Test 1", "Test 2", "Test 3"], [ {"title": "Title 1", "source": "A", "random": "10290"}, {"title": "Title 2", "source": "A", "random": "48392"}, {"title": "Title 3", "source": "B", "random": "32893"}, ],)res = vector_store.similarity_search(query="Test 3 source1", k=3, search_type="hybrid")res [Document(page_content='Test 3', metadata={'title': 'Title 3', 'source': 'B', 'random': '32893'}), Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'A', 'random': '10290'}), Document(page_content='Test 2', metadata={'title': 'Title 2', 'source': 'A', 'random': '48392'})]res = vector_store.similarity_search(query="Test 3 source1", k=3, search_type="hybrid", filters="source eq 'A'")res [Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'A', 'random': '10290'}), Document(page_content='Test 2', metadata={'title': 'Title 2', 'source': 'A', 'random': '48392'})]Create a new index with a Scoring Profilefrom azure.search.documents.indexes.models import ( SearchableField, SearchField, SearchFieldDataType, SimpleField, ScoringProfile, TextWeights, ScoringFunction, FreshnessScoringFunction, FreshnessScoringParameters)embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)embedding_function = embeddings.embed_queryfields = [ SimpleField( name="id", type=SearchFieldDataType.String, key=True, filterable=True, ), SearchableField( name="content", type=SearchFieldDataType.String, searchable=True, ), SearchField( name="content_vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=len(embedding_function("Text")), vector_search_configuration="default", ), SearchableField( 
name="metadata", type=SearchFieldDataType.String, searchable=True, ), # Additional field to store the title SearchableField( name="title", type=SearchFieldDataType.String, searchable=True, ), # Additional field for filtering on document source SimpleField( name="source", type=SearchFieldDataType.String, filterable=True, ), # Additional data field for last doc update SimpleField( name="last_update", type=SearchFieldDataType.DateTimeOffset, searchable=True, filterable=True )]# Adding a custom scoring profile with a freshness functionsc_name = "scoring_profile"sc = ScoringProfile( name=sc_name, text_weights=TextWeights(weights={"title": 5}), function_aggregation="sum", functions=[ FreshnessScoringFunction( field_name="last_update", boost=100, parameters=FreshnessScoringParameters(boosting_duration="P2D"), interpolation="linear" ) ])index_name = "langchain-vector-demo-custom-scoring-profile"vector_store: AzureSearch = AzureSearch( azure_search_endpoint=vector_store_address, azure_search_key=vector_store_password, index_name=index_name, embedding_function=embeddings.embed_query, fields=fields, scoring_profiles = [sc], default_scoring_profile = sc_name)# Adding same data with different last_update to show Scoring Profile effectfrom datetime import datetime, timedeltatoday = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S-00:00')yesterday = (datetime.utcnow() - timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%S-00:00')one_month_ago = (datetime.utcnow() - timedelta(days=30)).strftime('%Y-%m-%dT%H:%M:%S-00:00')vector_store.add_texts( ["Test 1", "Test 1", "Test 1"], [ {"title": "Title 1", "source": "source1", "random": "10290", "last_update": today}, {"title": "Title 1", "source": "source1", "random": "48392", "last_update": yesterday}, {"title": "Title 1", "source": "source1", "random": "32893", "last_update": one_month_ago}, ],) ['NjQyNTI5ZmMtNmVkYS00Njg5LTk2ZDgtMjM3OTY4NTJkYzFj', 'M2M0MGExZjAtMjhiZC00ZDkwLThmMTgtODNlN2Y2ZDVkMTMw', 'ZmFhMDE1NzMtMjZjNS00MTFiLTk0MTEtNGRkYjgwYWQwOTI0']res = vector_store.similarity_search(query="Test 1", k=3, search_type="similarity")res [Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '10290', 'last_update': '2023-07-13T10:47:39-00:00'}), Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '48392', 'last_update': '2023-07-12T10:47:39-00:00'}), Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '32893', 'last_update': '2023-06-13T10:47:39-00:00'})]PreviousAwaDBNextBagelDBImport required librariesConfigure OpenAI settingsConfigure vector store settingsCreate embeddings and vector store instancesInsert text and embeddings into vector storePerform a vector similarity searchPerform a vector similarity search with relevance scoresPerform a Hybrid Search |
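Once the index is populated, the store drops into the usual retrieval chains. A minimal sketch, assuming a standard OpenAI key is configured (swap in an Azure OpenAI deployment if that is what you set up above):

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# Wire the Azure Cognitive Search store into a question-answering chain.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    chain_type="stuff",
    retriever=vector_store.as_retriever(),
)
print(qa.run("What did the president say about Ketanji Brown Jackson?"))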
624 | https://python.langchain.com/docs/integrations/vectorstores/bageldb | ComponentsVector storesBagelDBOn this pageBagelDBBagelDB (Open Vector Database for AI) is like GitHub for AI data.
It is a collaborative platform where users can create,
share, and manage vector datasets. It can support private projects for independent developers,
internal collaborations for enterprises, and public contributions for data DAOs.Installation and Setuppip install betabageldbCreate VectorStore from textsfrom langchain.vectorstores import Bageltexts = ["hello bagel", "hello langchain", "I love salad", "my car", "a dog"]# create cluster and add textscluster = Bagel.from_texts(cluster_name="testing", texts=texts)# similarity searchcluster.similarity_search("bagel", k=3) [Document(page_content='hello bagel', metadata={}), Document(page_content='my car', metadata={}), Document(page_content='I love salad', metadata={})]# the score is a distance metric, so lower is bettercluster.similarity_search_with_score("bagel", k=3) [(Document(page_content='hello bagel', metadata={}), 0.27392977476119995), (Document(page_content='my car', metadata={}), 1.4783176183700562), (Document(page_content='I love salad', metadata={}), 1.5342965126037598)]# delete the clustercluster.delete_cluster()Create VectorStore from docsfrom langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)[:10]# create cluster with docscluster = Bagel.from_documents(cluster_name="testing_with_docs", documents=docs)# similarity searchquery = "What did the president say about Ketanji Brown Jackson"docs = cluster.similarity_search(query)print(docs[0].page_content[:102]) Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Get all text/doc from Clustertexts = ["hello bagel", "this is langchain"]cluster = Bagel.from_texts(cluster_name="testing", texts=texts)cluster_data = cluster.get()# all keyscluster_data.keys() dict_keys(['ids', 'embeddings', 'metadatas', 'documents'])# all values and keyscluster_data {'ids': ['578c6d24-3763-11ee-a8ab-b7b7b34f99ba', '578c6d25-3763-11ee-a8ab-b7b7b34f99ba', 'fb2fc7d8-3762-11ee-a8ab-b7b7b34f99ba', 'fb2fc7d9-3762-11ee-a8ab-b7b7b34f99ba', '6b40881a-3762-11ee-a8ab-b7b7b34f99ba', '6b40881b-3762-11ee-a8ab-b7b7b34f99ba', '581e691e-3762-11ee-a8ab-b7b7b34f99ba', '581e691f-3762-11ee-a8ab-b7b7b34f99ba'], 'embeddings': None, 'metadatas': [{}, {}, {}, {}, {}, {}, {}, {}], 'documents': ['hello bagel', 'this is langchain', 'hello bagel', 'this is langchain', 'hello bagel', 'this is langchain', 'hello bagel', 'this is langchain']}cluster.delete_cluster()Create cluster with metadata & filter using metadatatexts = ["hello bagel", "this is langchain"]metadatas = [{"source": "notion"}, {"source": "google"}]cluster = Bagel.from_texts(cluster_name="testing", texts=texts, metadatas=metadatas)cluster.similarity_search_with_score("hello bagel", where={"source": "notion"}) [(Document(page_content='hello bagel', metadata={'source': 'notion'}), 0.0)]# delete the clustercluster.delete_cluster()PreviousAzure Cognitive SearchNextCassandraInstallation and SetupCreate VectorStore from textsCreate VectorStore from docsGet all text/doc from ClusterCreate cluster with metadata & filter using metadata |
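The examples above reuse the fixed cluster name "testing"; a timestamped name is a simple way to keep repeated runs separate. A minimal sketch using only the calls shown above:

import time
from langchain.vectorstores import Bagel

# A unique, timestamped name sidesteps clashes with clusters from earlier runs.
cluster = Bagel.from_texts(
    cluster_name=f"demo_{int(time.time())}",
    texts=["hello bagel", "hello langchain"],
)
print(cluster.similarity_search("bagel", k=1)[0].page_content)
cluster.delete_cluster()  # clean up when finished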
626 | https://python.langchain.com/docs/integrations/vectorstores/cassandra | ComponentsVector storesCassandraOn this pageCassandraApache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database.The newest Cassandra releases natively support Vector Similarity Search.To run this notebook you need either a running Cassandra cluster equipped with Vector Search capabilities (in pre-release at the time of writing) or a DataStax Astra DB instance running in the cloud (you can get one for free at datastax.com). Check cassio.org for more information.pip install "cassio>=0.1.0"Please provide database connection parameters and secrets:import osimport getpassdatabase_mode = (input("\n(C)assandra or (A)stra DB? ")).upper()keyspace_name = input("\nKeyspace name? ")if database_mode == "A": ASTRA_DB_APPLICATION_TOKEN = getpass.getpass('\nAstra DB Token ("AstraCS:...") ') ASTRA_DB_SECURE_BUNDLE_PATH = input("Full path to your Secure Connect Bundle? ")elif database_mode == "C": CASSANDRA_CONTACT_POINTS = input( "Contact points? (comma-separated, empty for localhost) " ).strip()Depending on whether you are using a local cluster or cloud-based Astra DB, create the corresponding database connection "Session" objectfrom cassandra.cluster import Clusterfrom cassandra.auth import PlainTextAuthProviderif database_mode == "C": if CASSANDRA_CONTACT_POINTS: cluster = Cluster( [cp.strip() for cp in CASSANDRA_CONTACT_POINTS.split(",") if cp.strip()] ) else: cluster = Cluster() session = cluster.connect()elif database_mode == "A": ASTRA_DB_CLIENT_ID = "token" cluster = Cluster( cloud={ "secure_connect_bundle": ASTRA_DB_SECURE_BUNDLE_PATH, }, auth_provider=PlainTextAuthProvider( ASTRA_DB_CLIENT_ID, ASTRA_DB_APPLICATION_TOKEN, ), ) session = cluster.connect()else: raise NotImplementedErrorPlease provide OpenAI access keyWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Creation and usage of the Vector Storefrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Cassandrafrom langchain.document_loaders import TextLoaderSOURCE_FILE_NAME = "../../modules/state_of_the_union.txt"loader = TextLoader(SOURCE_FILE_NAME)documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embedding_function = OpenAIEmbeddings()table_name = "my_vector_db_table"docsearch = Cassandra.from_documents( documents=docs, embedding=embedding_function, session=session, keyspace=keyspace_name, table_name=table_name,)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)## if you already have an index, you can load it and use it like this:# docsearch_preexisting = Cassandra(# embedding=embedding_function,# session=session,# keyspace=keyspace_name,# table_name=table_name,# )# docs = docsearch_preexisting.similarity_search(query, k=2)print(docs[0].page_content)Maximal Marginal Relevance SearchesIn addition to using similarity search in the retriever object, you can also use mmr as the retriever.retriever = docsearch.as_retriever(search_type="mmr")matched_docs = retriever.get_relevant_documents(query)for i, d in enumerate(matched_docs): print(f"\n## Document {i}\n") print(d.page_content)Or use max_marginal_relevance_search directly:found_docs = docsearch.max_marginal_relevance_search(query, k=2, 
fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n")Metadata filteringYou can specify filtering on metadata when running searches in the vector store. By default, when inserting documents, the only metadata is the "source" (but you can customize the metadata at insertion time).Since only one file was inserted, this is just a demonstration of how filters are passed:filter = {"source": SOURCE_FILE_NAME}filtered_docs = docsearch.similarity_search(query, filter=filter, k=5)print(f"{len(filtered_docs)} documents retrieved.")print(f"{filtered_docs[0].page_content[:64]} ...")filter = {"source": "nonexisting_file.txt"}filtered_docs2 = docsearch.similarity_search(query, filter=filter)print(f"{len(filtered_docs2)} documents retrieved.")Please visit the cassIO documentation for more on using vector stores with LangChain.PreviousBagelDBNextChromaPlease provide database connection parameters and secrets:Please provide OpenAI access keyCreation and usage of the Vector StoreMaximal Marginal Relevance SearchesMetadata filtering |
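Since metadata is customizable at insertion time, you can append further rows with add_texts and then filter on them. A minimal sketch reusing the docsearch object above (the note text and source name are illustrative):

# Each inserted text gets its own metadata dict; keys become filterable.
docsearch.add_texts(
    ["A short follow-up note about the speech."],
    metadatas=[{"source": "follow_up_note.txt"}],
)
hits = docsearch.similarity_search(
    "follow-up note", k=1, filter={"source": "follow_up_note.txt"}
)
print(hits[0].page_content)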
627 | https://python.langchain.com/docs/integrations/vectorstores/chroma | ComponentsVector storesChromaOn this pageChromaChroma is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.Install Chroma with:pip install chromadbChroma runs in various modes. See below for examples of each integrated with LangChain.in-memory - in a Python script or Jupyter notebookin-memory with persistence - in a script or notebook and save/load to diskin a docker container - as a server running on your local machine or in the cloudLike any other database, you can .add, .get, .update, .upsert, .delete, and .peek; .query runs the similarity search.View full docs at docs. To access these methods directly, you can do ._collection.method()Basic ExampleIn this basic example, we take the most recent State of the Union Address, split it into chunks, embed it using an open-source embedding model, load it into Chroma, and then query it.# importfrom langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Chromafrom langchain.document_loaders import TextLoader# load the document and split it into chunksloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()# split it into chunkstext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)# create the open-source embedding functionembedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")# load it into Chromadb = Chroma.from_documents(docs, embedding_function)# query itquery = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)# print resultsprint(docs[0].page_content) /Users/jeff/.pyenv/versions/3.10.10/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Basic Example (including saving to disk)Extending the previous example, if you want to save to disk, simply initialize the Chroma client and pass the directory where you want the data to be saved. Caution: Chroma makes a best effort to automatically save data to disk; however, multiple in-memory clients can stomp on each other's work. 
As a best practice, only have one client per path running at any given time.# save to diskdb2 = Chroma.from_documents(docs, embedding_function, persist_directory="./chroma_db")docs = db2.similarity_search(query)# load from diskdb3 = Chroma(persist_directory="./chroma_db", embedding_function=embedding_function)docs = db3.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Passing a Chroma Client into LangchainYou can also create a Chroma Client and pass it to LangChain. This is particularly useful if you want easier access to the underlying database.You can also specify the collection name that you want LangChain to use.import chromadbpersistent_client = chromadb.PersistentClient()collection = persistent_client.get_or_create_collection("collection_name")collection.add(ids=["1", "2", "3"], documents=["a", "b", "c"])langchain_chroma = Chroma( client=persistent_client, collection_name="collection_name", embedding_function=embedding_function,)print("There are", langchain_chroma._collection.count(), "in the collection") Add of existing embedding ID: 1 Add of existing embedding ID: 2 Add of existing embedding ID: 3 Add of existing embedding ID: 1 Add of existing embedding ID: 2 Add of existing embedding ID: 3 Add of existing embedding ID: 1 Insert of existing embedding ID: 1 Add of existing embedding ID: 2 Insert of existing embedding ID: 2 Add of existing embedding ID: 3 Insert of existing embedding ID: 3 There are 3 in the collectionBasic Example (using the Docker Container)You can also run the Chroma Server in a Docker container separately, create a Client to connect to it, and then pass that to LangChain. Chroma has the ability to handle multiple Collections of documents, but the LangChain interface expects one, so we need to specify the collection name. The default collection name used by LangChain is "langchain".Here is how to clone, build, and run the Docker Image:git clone [email protected]:chroma-core/chroma.gitEdit the docker-compose.yml file and add ALLOW_RESET=TRUE under environment ... 
command: uvicorn chromadb.app:app --reload --workers 1 --host 0.0.0.0 --port 8000 --log-config log_config.yml environment: - IS_PERSISTENT=TRUE - ALLOW_RESET=TRUE ports: - 8000:8000 ...Then run docker-compose up -d --build# create the chroma clientimport chromadbimport uuidfrom chromadb.config import Settingsclient = chromadb.HttpClient(settings=Settings(allow_reset=True))client.reset() # resets the databasecollection = client.create_collection("my_collection")for doc in docs: collection.add( ids=[str(uuid.uuid1())], metadatas=doc.metadata, documents=doc.page_content )# tell LangChain to use our client and collection namedb4 = Chroma(client=client, collection_name="my_collection", embedding_function=embedding_function)query = "What did the president say about Ketanji Brown Jackson"docs = db4.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Update and DeleteWhile building toward a real application, you will want to go beyond adding data and also update and delete data. Chroma has users provide ids to simplify the bookkeeping here. ids can be the name of the file, or a combined hash like filename_paragraphNumber, etc.Chroma supports all these operations - though some of them are still being integrated all the way through the LangChain interface. Additional workflow improvements will be added soon.Here is a basic example showing how to do various operations:# create simple idsids = [str(i) for i in range(1, len(docs) + 1)]# add dataexample_db = Chroma.from_documents(docs, embedding_function, ids=ids)docs = example_db.similarity_search(query)print(docs[0].metadata)# update the metadata for a documentdocs[0].metadata = { "source": "../../../state_of_the_union.txt", "new_value": "hello world",}example_db.update_document(ids[0], docs[0])print(example_db._collection.get(ids=[ids[0]]))# delete the last documentprint("count before", example_db._collection.count())example_db._collection.delete(ids=[ids[-1]])print("count after", example_db._collection.count()) {'source': '../../../state_of_the_union.txt'} {'ids': ['1'], 'embeddings': None, 'metadatas': [{'new_value': 'hello world', 'source': '../../../state_of_the_union.txt'}], 'documents': ['Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. 
\n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.']} count before 46 count after 45Use OpenAI EmbeddingsMany people like to use OpenAIEmbeddings; here is how to set that up.# get a token: https://platform.openai.com/account/api-keysfrom getpass import getpassfrom langchain.embeddings.openai import OpenAIEmbeddingsOPENAI_API_KEY = getpass()import osos.environ["OPENAI_API_KEY"] = OPENAI_API_KEYembeddings = OpenAIEmbeddings()new_client = chromadb.EphemeralClient()openai_lc_client = Chroma.from_documents( docs, embeddings, client=new_client, collection_name="openai_collection")query = "What did the president say about Ketanji Brown Jackson"docs = openai_lc_client.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Other InformationSimilarity search with scoreThe returned score is a distance from the underlying Chroma collection (L2 distance by default), so a lower score is better.docs = db.similarity_search_with_score(query)docs[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 1.1972057819366455)Retriever optionsThis section goes over different options for how to use Chroma as a retriever.MMRIn addition to using similarity search in the retriever object, you can also use mmr.retriever = db.as_retriever(search_type="mmr")retriever.get_relevant_documents(query)[0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. 
\n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})Filtering on metadataIt can be helpful to narrow down the collection before working with it.For example, collections can be filtered on metadata using the get method.# filter collection for updated sourceexample_db.get(where={"source": "some_other_source"}) {'ids': [], 'embeddings': None, 'metadatas': [], 'documents': []}PreviousCassandraNextClarifaiBasic ExampleBasic Example (including saving to disk)Passing a Chroma Client into LangChainBasic Example (using the Docker Container)Update and DeleteUse OpenAI EmbeddingsOther InformationSimilarity search with scoreRetriever optionsFiltering on metadata |
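A minimal sketch tying the persistence and metadata-filtering examples together. It assumes the "./chroma_db" directory created in the saving-to-disk example above still exists, and it assumes similarity_search accepts a Chroma where-style dict through its filter keyword; verify both against the Chroma integration docs before relying on them.
# Hedged sketch: reopen the persisted store and run a metadata-filtered search.
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.vectorstores import Chroma

embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# Reattach to the on-disk collection written by the "saving to disk" example.
db = Chroma(persist_directory="./chroma_db", embedding_function=embedding_function)
# `filter` (assumed to take a Chroma `where`-style dict) narrows results by metadata.
docs = db.similarity_search(
    "What did the president say about Ketanji Brown Jackson",
    k=4,
    filter={"source": "../../../state_of_the_union.txt"},
)
print(len(docs), docs[0].metadata)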
628 | https://python.langchain.com/docs/integrations/vectorstores/clarifai | ComponentsVector storesClarifaiOn this pageClarifaiClarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference. A Clarifai application can be used as a vector database after uploading inputs. This notebook shows how to use functionality related to the Clarifai vector database. Examples are shown to demonstrate text semantic search capabilities. Clarifai also supports semantic search with images, video frames, and localized search (see Rank) and attribute search (see Filter).To use Clarifai, you must have an account and a Personal Access Token (PAT) key.
Check here to get or create a PAT.Dependencies# Install required dependenciespip install clarifaiImportsHere we will be setting the personal access token. You can find your PAT under settings/security on the platform.# Please log in and get your API key from https://clarifai.com/settings/securityfrom getpass import getpassCLARIFAI_PAT = getpass() ········# Import the required modulesfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.document_loaders import TextLoaderfrom langchain.vectorstores import ClarifaiSetupSet up the user id and app id where the text data will be uploaded. Note: when creating that application please select an appropriate base workflow for indexing your text documents such as the Language-Understanding workflow.You will have to first create an account on Clarifai and then create an application.USER_ID = "USERNAME_ID"APP_ID = "APPLICATION_ID"NUMBER_OF_DOCS = 4From TextsCreate a Clarifai vectorstore from a list of texts. This section will upload each text with its respective metadata to a Clarifai Application. The Clarifai Application can then be used for semantic search to find relevant texts.texts = [ "I really enjoy spending time with you", "I hate spending time with my dog", "I want to go for a run", "I went to the movies yesterday", "I love playing soccer with my friends",]metadatas = [{"id": i, "text": text, "source": "book 1", "category": ["books", "modern"]} for i, text in enumerate(texts)]clarifai_vector_db = Clarifai.from_texts( user_id=USER_ID, app_id=APP_ID, texts=texts, pat=CLARIFAI_PAT, number_of_docs=NUMBER_OF_DOCS, metadatas=metadatas,)docs = clarifai_vector_db.similarity_search("I would love to see you")docs [Document(page_content='I really enjoy spending time with you', metadata={'text': 'I really enjoy spending time with you', 'id': 0.0, 'source': 'book 1', 'category': ['books', 'modern']}), Document(page_content='I went to the movies yesterday', metadata={'text': 'I went to the movies yesterday', 'id': 3.0, 'source': 'book 1', 'category': ['books', 'modern']})]# There is lots of powerful filtering you can do within an app by leveraging metadata filters. # This one will limit the similarity query to only the texts that have a key of "source" matching the value of "book 1"book1_similar_docs = clarifai_vector_db.similarity_search("I would love to see you", filter={"source": "book 1"})# you can also use lists in the input's metadata and then select things that match an item in the list. This is useful for categories like below:book_category_similar_docs = clarifai_vector_db.similarity_search("I would love to see you", filter={"category": ["books"]})From DocumentsCreate a Clarifai vectorstore from a list of Documents. This section will upload each document with its respective metadata to a Clarifai Application. The Clarifai Application can then be used for semantic search to find relevant documents.loader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)docs[:4] [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. 
\n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n\nIn this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. \n\nLet each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n\nPlease rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n\nThroughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n\nThey keep moving. \n\nAnd the costs and the threats to America and the world keep rising. \n\nThat’s why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \n\nThe United States is a member along with 29 other nations. \n\nIt matters. American diplomacy matters. American resolve matters.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='Putin’s latest attack on Ukraine was premeditated and unprovoked. \n\nHe rejected repeated efforts at diplomacy. \n\nHe thought the West and NATO wouldn’t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n\nWe prepared extensively and carefully. \n\nWe spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \n\nI spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \n\nWe countered Russia’s lies with truth. \n\nAnd now that he has acted the free world is holding him accountable. \n\nAlong with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n\nTogether with our allies –we are right now enforcing powerful economic sanctions. \n\nWe are cutting off Russia’s largest banks from the international financial system. \n\nPreventing Russia’s central bank from defending the Russian Ruble making Putin’s $630 Billion “war fund” worthless. \n\nWe are choking off Russia’s access to technology that will sap its economic strength and weaken its military for years to come. \n\nTonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n\nThe U.S. 
Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n\nWe are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.', metadata={'source': '../../../state_of_the_union.txt'})]USER_ID = "USERNAME_ID"APP_ID = "APPLICATION_ID"NUMBER_OF_DOCS = 4clarifai_vector_db = Clarifai.from_documents( user_id=USER_ID, app_id=APP_ID, documents=docs, pat=CLARIFAI_PAT, number_of_docs=NUMBER_OF_DOCS,)docs = clarifai_vector_db.similarity_search("Texts related to criminals and violence")docs [Document(page_content='And I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home—they have no serial numbers and can’t be traced. \n\nAnd I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon? \n\nBan assault weapons and high-capacity magazines. \n\nRepeal the liability shield that makes gun manufacturers the only industry in America that can’t be sued. \n\nThese laws don’t infringe on the Second Amendment. They save lives. \n\nThe most fundamental right in America is the right to vote – and to have it counted. And it’s under assault. \n\nIn state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n\nI’ve worked on these issues a long time. \n\nI know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. 
\n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='So let’s not abandon our streets. Or choose between safety and equal justice. \n\nLet’s come together to protect our communities, restore trust, and hold law enforcement accountable. \n\nThat’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers. \n\nThat’s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption—trusted messengers breaking the cycle of violence and trauma and giving young people hope. \n\nWe should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities. \n\nI ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe.', metadata={'source': '../../../state_of_the_union.txt'})]From existing AppWithin Clarifai we have great tools for adding data to applications (essentially projects) via API or UI. Most users will already have done that before interacting with LangChain, so this example will use the data in an existing app to perform searches. Check out our API docs and UI docs. The Clarifai Application can then be used for semantic search to find relevant documents.USER_ID = "USERNAME_ID"APP_ID = "APPLICATION_ID"NUMBER_OF_DOCS = 4clarifai_vector_db = Clarifai( user_id=USER_ID, app_id=APP_ID, documents=docs, pat=CLARIFAI_PAT, number_of_docs=NUMBER_OF_DOCS,)docs = clarifai_vector_db.similarity_search("Texts related to criminals and violence")docsPreviousChromaNextClickHouseFrom TextsFrom DocumentsFrom existing App |
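Because Clarifai implements the standard LangChain VectorStore interface, the app-backed store can also be wrapped as a retriever for use in chains; the sketch below relies only on the generic as_retriever / get_relevant_documents API, nothing Clarifai-specific.
# Sketch: expose the existing-app store through the generic retriever interface.
retriever = clarifai_vector_db.as_retriever(search_kwargs={"k": NUMBER_OF_DOCS})
relevant_docs = retriever.get_relevant_documents("Texts related to criminals and violence")
for doc in relevant_docs:
    print(doc.page_content[:80])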
629 | https://python.langchain.com/docs/integrations/vectorstores/clickhouse | ComponentsVector storesClickHouseOn this pageClickHouseClickHouse is the fastest and most resource-efficient open-source database for real-time apps and analytics, with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance), as well as approximate nearest neighbor search indexes, enable ClickHouse to be used as a high-performance, scalable vector database to store and search vectors with SQL.This notebook shows how to use functionality related to the ClickHouse vector search.Setting up environmentsSetting up a local clickhouse server with docker (optional)docker run -d -p 8123:8123 -p 9000:9000 --name langchain-clickhouse-server --ulimit nofile=262144:262144 clickhouse/clickhouse-server:23.4.2.11Set up the clickhouse client driverpip install clickhouse-connectWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassif not os.environ.get("OPENAI_API_KEY"): os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Clickhouse, ClickhouseSettingsfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()for d in docs: d.metadata = {"some": "metadata"}settings = ClickhouseSettings(table="clickhouse_vector_search_example")docsearch = Clickhouse.from_documents(docs, embeddings, config=settings)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query) Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 2801.49it/s]print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Get connection info and data schemaprint(str(docsearch)) default.clickhouse_vector_search_example @ localhost:8123 username: None Table Schema: --------------------------------------------------- |id |Nullable(String) | |document |Nullable(String) | |embedding |Array(Float32) | |metadata |Object('json') | |uuid |UUID | --------------------------------------------------- Clickhouse table schemaThe ClickHouse table will be created automatically by default if it does not exist. Advanced users could pre-create the table with optimized settings. 
For a distributed ClickHouse cluster with sharding, the table engine should be configured as Distributed.print(f"Clickhouse Table DDL:\n\n{docsearch.schema}") Clickhouse Table DDL: CREATE TABLE IF NOT EXISTS default.clickhouse_vector_search_example( id Nullable(String), document Nullable(String), embedding Array(Float32), metadata JSON, uuid UUID DEFAULT generateUUIDv4(), CONSTRAINT cons_vec_len CHECK length(embedding) = 1536, INDEX vec_idx embedding TYPE annoy(100,'L2Distance') GRANULARITY 1000 ) ENGINE = MergeTree ORDER BY uuid SETTINGS index_granularity = 8192FilteringYou have direct access to the ClickHouse SQL WHERE statement and can write a WHERE clause following standard SQL.NOTE: Please be aware of SQL injection; this interface must not be called directly by end users.If you customized your column_map in your settings, you can search with a filter like this:from langchain.vectorstores import Clickhouse, ClickhouseSettingsfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()for i, d in enumerate(docs): d.metadata = {"doc_id": i}docsearch = Clickhouse.from_documents(docs, embeddings) Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 6939.56it/s]meta = docsearch.metadata_columnoutput = docsearch.similarity_search_with_relevance_scores( "What did the president say about Ketanji Brown Jackson?", k=4, where_str=f"{meta}.doc_id<10",)for d, dist in output: print(dist, d.metadata, d.page_content[:20] + "...") 0.6779101415357189 {'doc_id': 0} Madam Speaker, Madam... 0.6997970363474885 {'doc_id': 8} And so many families... 0.7044504914336727 {'doc_id': 1} Groups of citizens b... 0.7053558702165094 {'doc_id': 6} And I’m taking robus...Deleting your datadocsearch.drop()PreviousClarifaiNextDashVectorSetting up environmentsGet connection info and data schemaClickhouse table schemaFilteringDeleting your data |
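The store also slots into higher-level chains. Below is a hedged sketch of retrieval-augmented QA over the same docsearch object; RetrievalQA and the OpenAI LLM wrapper are standard LangChain components, but treat the exact wiring as illustrative rather than the only way to do it.
# Hedged sketch: question answering backed by the ClickHouse retriever.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    retriever=docsearch.as_retriever(search_kwargs={"k": 4}),
)
print(qa.run("What did the president say about Ketanji Brown Jackson?"))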
630 | https://python.langchain.com/docs/integrations/vectorstores/dashvector | ComponentsVector storesDashVectorOn this pageDashVectorDashVector is a fully managed vector database service that supports high-dimensional dense and sparse vectors, real-time insertion, and filtered search. It is built to scale automatically and can adapt to different application requirements.This notebook shows how to use functionality related to the DashVector vector database.To use DashVector, you must have an API key.
Here are the installation instructions.Installpip install dashvector dashscopeWe want to use DashScopeEmbeddings so we also have to get the DashScope API Key.import osimport getpassos.environ["DASHVECTOR_API_KEY"] = getpass.getpass("DashVector API Key:")os.environ["DASHSCOPE_API_KEY"] = getpass.getpass("DashScope API Key:")Examplefrom langchain.embeddings.dashscope import DashScopeEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import DashVectorfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = DashScopeEmbeddings()We can create a DashVector store from documents.dashvector = DashVector.from_documents(docs, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = dashvector.similarity_search(query)print(docs)We can add texts with metadata and ids, and search with a metadata filter.texts = ["foo", "bar", "baz"]metadatas = [{"key": i} for i in range(len(texts))]ids = ["0", "1", "2"]dashvector.add_texts(texts, metadatas=metadatas, ids=ids)docs = dashvector.similarity_search("foo", filter="key = 2")print(docs) [Document(page_content='baz', metadata={'key': 2})]PreviousClickHouseNextDingoInstallExample |
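Because the demo texts above were added with explicit ids, cleanup is straightforward; the sketch below assumes DashVector supports the base VectorStore delete method with an ids argument, so verify against the integration's API reference before relying on it.
# Hedged sketch: remove the demo texts by the ids passed to add_texts above.
success = dashvector.delete(ids=["0", "1", "2"])
print(success)  # expected to be truthy on success, assuming delete is implemented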
631 | https://python.langchain.com/docs/integrations/vectorstores/dingo | ComponentsVector storesDingoOn this pageDingoDingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data.This notebook shows how to use functionality related to the DingoDB vector database.To run, you should have a DingoDB instance up and running.pip install dingodbor install the latest version:pip install git+https://git@github.com/dingodb/pydingo.gitWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key:········from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Dingofrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()from dingodb import DingoDBindex_name = "langchain-demo"dingo_client = DingoDB(user="", password="", host=["127.0.0.1:13000"])# First, check if our index already exists. If it doesn't, we create itif index_name not in dingo_client.get_index(): # we create a new index, modify to your own dingo_client.create_index( index_name=index_name, dimension=1536, metric_type='cosine', auto_id=False)# The OpenAI embedding model `text-embedding-ada-002` uses 1536 dimensionsdocsearch = Dingo.from_documents(docs, embeddings, client=dingo_client, index_name=index_name)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)Adding More Text to an Existing IndexMore text can be embedded and upserted to an existing Dingo index using the add_texts functionvectorstore = Dingo(embeddings, "text", client=dingo_client, index_name=index_name)vectorstore.add_texts(["More text!"])Maximal Marginal Relevance SearchesIn addition to using similarity search in the retriever object, you can also use mmr as the retriever.retriever = docsearch.as_retriever(search_type="mmr")matched_docs = retriever.get_relevant_documents(query)for i, d in enumerate(matched_docs): print(f"\n## Document {i}\n") print(d.page_content)Or use max_marginal_relevance_search directly:found_docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n")PreviousDashVectorNextDocArray HnswSearchAdding More Text to an Existing IndexMaximal Marginal Relevance Searches |
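MMR behaviour can also be tuned through the generic retriever search_kwargs; k, fetch_k, and lambda_mult are standard LangChain MMR parameters (lambda_mult trades relevance against diversity), though how they interact with Dingo's index is worth verifying for your deployment.
# Sketch: tune MMR (lambda_mult=1.0 favors pure relevance, 0.0 favors diversity).
retriever = docsearch.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 2, "fetch_k": 10, "lambda_mult": 0.5},
)
for d in retriever.get_relevant_documents(query):
    print(d.page_content[:80])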
632 | https://python.langchain.com/docs/integrations/vectorstores/docarray_hnsw | ComponentsVector storesDocArray HnswSearchOn this pageDocArray HnswSearchDocArrayHnswSearch is a lightweight Document Index implementation provided by DocArray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite.This notebook shows how to use functionality related to the DocArrayHnswSearch.SetupUncomment the cells below to install docarray and get/set your OpenAI API key if you haven't already done so.# !pip install "docarray[hnswlib]"# Get an OpenAI token: https://platform.openai.com/account/api-keys# import os# from getpass import getpass# OPENAI_API_KEY = getpass()# os.environ["OPENAI_API_KEY"] = OPENAI_API_KEYUsing DocArrayHnswSearchfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import DocArrayHnswSearchfrom langchain.document_loaders import TextLoaderdocuments = TextLoader("../../../state_of_the_union.txt").load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = DocArrayHnswSearch.from_documents( docs, embeddings, work_dir="hnswlib_store/", n_dim=1536)Similarity searchquery = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search with scoreThe returned distance score is cosine distance. Therefore, a lower score is better.docs = db.similarity_search_with_score(query)docs[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={}), 0.36962226)import shutil# delete the dirshutil.rmtree("hnswlib_store")PreviousDingoNextDocArray InMemorySearchSetupUsing DocArrayHnswSearchSimilarity searchSimilarity search with score |
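Because the index lives on disk under work_dir, it can be reopened in a later session without re-embedding the documents; the hedged sketch below assumes the from_params constructor reattaches to an existing work_dir (run it before the rmtree cleanup above, and check the DocArrayHnswSearch API before relying on it).
# Hedged sketch: reopen the on-disk hnswlib index without re-indexing documents.
db_reloaded = DocArrayHnswSearch.from_params(
    embedding=embeddings, work_dir="hnswlib_store/", n_dim=1536
)
print(db_reloaded.similarity_search(query)[0].page_content[:80])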
633 | https://python.langchain.com/docs/integrations/vectorstores/docarray_in_memory | ComponentsVector storesDocArray InMemorySearchOn this pageDocArray InMemorySearchDocArrayInMemorySearch is a document index provided by DocArray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.This notebook shows how to use functionality related to the DocArrayInMemorySearch.SetupUncomment the cells below to install docarray and get/set your OpenAI API key if you haven't already done so.# !pip install "docarray"# Get an OpenAI token: https://platform.openai.com/account/api-keys# import os# from getpass import getpass# OPENAI_API_KEY = getpass()# os.environ["OPENAI_API_KEY"] = OPENAI_API_KEYUsing DocArrayInMemorySearchfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import DocArrayInMemorySearchfrom langchain.document_loaders import TextLoaderdocuments = TextLoader("../../../state_of_the_union.txt").load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = DocArrayInMemorySearch.from_documents(docs, embeddings)Similarity searchquery = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search with scoreThe returned score depends on the configured metric; with the default cosine similarity metric used here, a higher score indicates a closer match.docs = db.similarity_search_with_score(query)docs[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={}), 0.8154190158347903)PreviousDocArray HnswSearchNextElasticsearchSetupUsing DocArrayInMemorySearchSimilarity searchSimilarity search with score |
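For quick experiments you can skip the loader and splitter entirely and build the store straight from strings with the standard from_texts constructor (the two strings below are purely illustrative).
# Quick-start: build an in-memory store directly from raw strings.
texts = ["harrison worked at kensho", "bears like to eat honey"]
db_small = DocArrayInMemorySearch.from_texts(texts, embeddings)
print(db_small.similarity_search("where did harrison work?", k=1)[0].page_content)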
634 | https://python.langchain.com/docs/integrations/vectorstores/elasticsearch | ComponentsVector storesElasticsearchOn this pageElasticsearchElasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library. This notebook shows how to use functionality related to the Elasticsearch database.pip install elasticsearch openai tiktoken langchainRunning and connecting to ElasticsearchThere are two main ways to set up an Elasticsearch instance for use with LangChain:Elastic Cloud: Elastic Cloud is a managed Elasticsearch service. Sign up for a free trial.To connect to an Elasticsearch instance that does not require
login credentials (starting the docker instance with security disabled), pass the Elasticsearch URL and index name along with the
embedding object to the constructor.Local Install Elasticsearch: Get started with Elasticsearch by running it locally. The easiest way is to use the official Elasticsearch Docker image. See the Elasticsearch Docker documentation for more information.Running Elasticsearch via DockerExample: Run a single-node Elasticsearch instance with security disabled. This is not recommended for production use. docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.9.0Once the Elasticsearch instance is running, you can connect to it by passing the Elasticsearch URL and index name along with the embedding object to the constructor.Example: from langchain.vectorstores.elasticsearch import ElasticsearchStore from langchain.embeddings.openai import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_vector_search = ElasticsearchStore( es_url="http://localhost:9200", index_name="test_index", embedding=embedding )AuthenticationFor production, we recommend you run with security enabled. To connect with login credentials, you can use the parameters api_key or es_user and es_password.Example: from langchain.vectorstores import ElasticsearchStore from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_vector_search = ElasticsearchStore( es_url="http://localhost:9200", index_name="test_index", embedding=embedding, es_user="elastic", es_password="changeme" )How to obtain a password for the default "elastic" user?To obtain your Elastic Cloud password for the default "elastic" user:Log in to the Elastic Cloud console at https://cloud.elastic.coGo to "Security" > "Users"Locate the "elastic" user and click "Edit"Click "Reset password"Follow the prompts to reset the passwordHow to obtain an API key?To obtain an API key:Log in to the Elastic Cloud console at https://cloud.elastic.coOpen Kibana and go to Stack Management > API KeysClick "Create API key"Enter a name for the API key and click "Create"Copy the API key and paste it into the api_key parameterElastic CloudTo connect to an Elasticsearch instance on Elastic Cloud, you can use either the es_cloud_id parameter or es_url.Example: from langchain.vectorstores.elasticsearch import ElasticsearchStore from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_vector_search = ElasticsearchStore( es_cloud_id="<cloud_id>", index_name="test_index", embedding=embedding, es_user="elastic", es_password="changeme" )We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Basic ExampleIn this example we are going to load "state_of_the_union.txt" via the TextLoader, chunk the text into 500-character chunks, and then index each chunk into Elasticsearch.Once the data is indexed, we perform a simple query to find the top 4 chunks that are most similar to the query "What did the president say about Ketanji Brown Jackson".Elasticsearch is running locally on localhost:9200 with docker. 
For more details on how to connect to Elasticsearch from Elastic Cloud, see connecting with authentication above.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import ElasticsearchStorefrom langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = ElasticsearchStore.from_documents( docs, embeddings, es_url="http://localhost:9200", index_name="test-basic", )db.client.indices.refresh(index="test-basic")query = "What did the president say about Ketanji Brown Jackson"results = db.similarity_search(query)print(results) [Document(page_content='One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt'}), Document(page_content='One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt', 'date': '2016-01-01', 'rating': 2, 'author': 'John Doe'}), Document(page_content='One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt', 'date': '2010-01-01', 'rating': 1, 'author': 'John Doe'}), Document(page_content='As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.', metadata={'source': '../../modules/state_of_the_union.txt'})]MetadataElasticsearchStore supports storing metadata along with the document. This metadata dict object is stored in a metadata object field in the Elasticsearch document. Based on the metadata value, Elasticsearch will automatically set up the mapping by inferring the data type of the metadata value. 
For example, if the metadata value is a string, Elasticsearch will set up the mapping for the metadata object field as a string type.# Adding metadata to documentsfor i, doc in enumerate(docs): doc.metadata["date"] = f"{range(2010, 2020)[i % 10]}-01-01" doc.metadata["rating"] = range(1, 6)[i % 5] doc.metadata["author"] = ["John Doe", "Jane Doe"][i % 2]db = ElasticsearchStore.from_documents( docs, embeddings, es_url="http://localhost:9200", index_name="test-metadata")query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].metadata) {'source': '../../modules/state_of_the_union.txt', 'date': '2016-01-01', 'rating': 2, 'author': 'John Doe'}Filtering MetadataWith metadata added to the documents, you can add metadata filtering at query time. Example: Filter by keyworddocs = db.similarity_search(query, filter=[{ "match": { "metadata.author": "John Doe"}}])print(docs[0].metadata) {'source': '../../modules/state_of_the_union.txt', 'date': '2010-01-01', 'rating': 1, 'author': 'John Doe', 'geo_location': {'lat': 40.12, 'lon': -71.34}}Example: Filter by Date Rangedocs = db.similarity_search("Any mention about Fred?", filter=[{ "range": { "metadata.date": { "gte": "2010-01-01" }}}])print(docs[0].metadata) {'source': '../../modules/state_of_the_union.txt', 'date': '2012-01-01', 'rating': 3, 'author': 'John Doe', 'geo_location': {'lat': 40.12, 'lon': -71.34}}Example: Filter by Numeric Rangedocs = db.similarity_search("Any mention about Fred?", filter=[{ "range": { "metadata.rating": { "gte": 2 }}}])print(docs[0].metadata) {'source': '../../modules/state_of_the_union.txt', 'date': '2012-01-01', 'rating': 3, 'author': 'John Doe', 'geo_location': {'lat': 40.12, 'lon': -71.34}}Example: Filter by Geo DistanceRequires an index with a geo_point mapping to be declared for metadata.geo_location.docs = db.similarity_search("Any mention about Fred?", filter=[{ "geo_distance": { "distance": "200km", "metadata.geo_location": { "lat": 40, "lon": -70 } } }])print(docs[0].metadata)Filter supports many more types of queries than above. Read more about them in the documentation.Distance Similarity AlgorithmElasticsearch supports the following vector distance similarity algorithms: cosine, euclidean, and dot_product. The cosine similarity algorithm is the default.You can specify the algorithm you need via the distance_strategy parameter.NOTE
Depending on the retrieval strategy, the similarity algorithm cannot be changed at query time; it needs to be set when creating the index mapping for the field. If you need to change the similarity algorithm, you need to delete the index and recreate it with the correct distance_strategy.db = ElasticsearchStore.from_documents( docs, embeddings, es_url="http://localhost:9200", index_name="test", distance_strategy="COSINE" # distance_strategy="EUCLIDEAN_DISTANCE" # distance_strategy="DOT_PRODUCT")Retrieval StrategiesElasticsearch has big advantages over vector-only databases thanks to its ability to support a wide range of retrieval strategies. In this notebook we will configure ElasticsearchStore to support some of the most common retrieval strategies. By default, ElasticsearchStore uses the ApproxRetrievalStrategy.ApproxRetrievalStrategyThis will return the top k most similar vectors to the query vector. The k parameter is set at query time; the example below requests 10 results.db = ElasticsearchStore.from_documents( docs, embeddings, es_url="http://localhost:9200", index_name="test", strategy=ElasticsearchStore.ApproxRetrievalStrategy())docs = db.similarity_search(query="What did the president say about Ketanji Brown Jackson?", k=10)Example: Approx with hybridThis example will show how to configure ElasticsearchStore to perform a hybrid retrieval, using a combination of approximate semantic search and keyword-based search. We use RRF to balance the two scores from different retrieval methods.To enable hybrid retrieval, we need to set hybrid=True in the ElasticsearchStore ApproxRetrievalStrategy constructor.db = ElasticsearchStore.from_documents( docs, embeddings, es_url="http://localhost:9200", index_name="test", strategy=ElasticsearchStore.ApproxRetrievalStrategy( hybrid=True, ))When hybrid is enabled, the query performed will be a combination of approximate semantic search and keyword-based search. It will use rrf (Reciprocal Rank Fusion) to balance the two scores from different retrieval methods.Note RRF requires Elasticsearch 8.9.0 or above.{ "knn": { "field": "vector", "filter": [], "k": 1, "num_candidates": 50, "query_vector": [1.0, ..., 0.0], }, "query": { "bool": { "filter": [], "must": [{"match": {"text": {"query": "foo"}}}], } }, "rank": {"rrf": {}},}Example: Approx with Embedding Model in ElasticsearchThis example will show how to configure ElasticsearchStore to use the embedding model deployed in Elasticsearch for approximate retrieval. To use this, specify the model_id in the ElasticsearchStore ApproxRetrievalStrategy constructor via the query_model_id argument.NOTE This requires the model to be deployed and running on an Elasticsearch ML node. 
See the notebook example on how to deploy the model with eland.APPROX_SELF_DEPLOYED_INDEX_NAME = "test-approx-self-deployed"# Note: This does not have an embedding function specified# Instead, we will use the embedding model deployed in Elasticsearchdb = ElasticsearchStore( es_cloud_id="<your cloud id>", es_user="elastic", es_password="<your password>", index_name=APPROX_SELF_DEPLOYED_INDEX_NAME, query_field="text_field", vector_query_field="vector_query_field.predicted_value", strategy=ElasticsearchStore.ApproxRetrievalStrategy( query_model_id="sentence-transformers__all-minilm-l6-v2" ))# Set up an ingest pipeline to perform the embedding# of the text fielddb.client.ingest.put_pipeline( id="test_pipeline", processors=[ { "inference": { "model_id": "sentence-transformers__all-minilm-l6-v2", "field_map": {"query_field": "text_field"}, "target_field": "vector_query_field", } } ],)# creating a new index with the pipeline,# not relying on langchain to create the indexdb.client.indices.create( index=APPROX_SELF_DEPLOYED_INDEX_NAME, mappings={ "properties": { "text_field": {"type": "text"}, "vector_query_field": { "properties": { "predicted_value": { "type": "dense_vector", "dims": 384, "index": True, "similarity": "l2_norm", } } }, } }, settings={"index": {"default_pipeline": "test_pipeline"}},)db.from_texts(["hello world"], es_cloud_id="<cloud id>", es_user="elastic", es_password="<cloud password>", index_name=APPROX_SELF_DEPLOYED_INDEX_NAME, query_field="text_field", vector_query_field="vector_query_field.predicted_value", strategy=ElasticsearchStore.ApproxRetrievalStrategy( query_model_id="sentence-transformers__all-minilm-l6-v2" ))# Perform searchdb.similarity_search("hello world", k=10)SparseVectorRetrievalStrategy (ELSER)This strategy uses Elasticsearch's sparse vector retrieval to retrieve the top-k results. We only support our own "ELSER" embedding model for now.NOTE This requires the ELSER model to be deployed and running on an Elasticsearch ML node. To use this, specify SparseVectorRetrievalStrategy in the ElasticsearchStore constructor.# Note that this example doesn't have an embedding function. This is because we infer the tokens at index time and at query time within Elasticsearch. # This requires the ELSER model to be loaded and running in Elasticsearch.db = ElasticsearchStore.from_documents( docs, es_cloud_id="<cloud id>", es_user="elastic", es_password="<your password>", index_name="test-elser", strategy=ElasticsearchStore.SparseVectorRetrievalStrategy())db.client.indices.refresh(index="test-elser")results = db.similarity_search("What did the president say about Ketanji Brown Jackson", k=4)print(results[0]) page_content='One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.' 
metadata={'source': '../../modules/state_of_the_union.txt'}ExactRetrievalStrategyThis strategy uses Elasticsearch's exact retrieval (also known as brute force) to retrieve the top-k results.To use this, specify ExactRetrievalStrategy in the ElasticsearchStore constructor.db = ElasticsearchStore.from_documents( docs, embeddings, es_url="http://localhost:9200", index_name="test", strategy=ElasticsearchStore.ExactRetrievalStrategy())Customise the QueryWith the custom_query parameter at search, you are able to adjust the query that is used to retrieve documents from Elasticsearch. This is useful if you want to use a more complex query, for example to support linear boosting of fields.# Example of a custom query that's just doing a BM25 search on the text field.def custom_query(query_body: dict, query: str): """Custom query to be used in Elasticsearch. Args: query_body (dict): Elasticsearch query body. query (str): Query string. Returns: dict: Elasticsearch query body. """ print("Query Retriever created by the retrieval strategy:") print(query_body) print() new_query_body = { "query": { "match": { "text": query } } } print("Query that's actually used in Elasticsearch:") print(new_query_body) print() return new_query_bodyresults = db.similarity_search("What did the president say about Ketanji Brown Jackson", k=4, custom_query=custom_query)print("Results:")print(results[0]) Query Retriever created by the retrieval strategy: {'query': {'bool': {'must': [{'text_expansion': {'vector.tokens': {'model_id': '.elser_model_1', 'model_text': 'What did the president say about Ketanji Brown Jackson'}}}], 'filter': []}}} Query that's actually used in Elasticsearch: {'query': {'match': {'text': 'What did the president say about Ketanji Brown Jackson'}}} Results: page_content='One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.' metadata={'source': '../../modules/state_of_the_union.txt'}FAQQuestion: I'm getting timeout errors when indexing documents into Elasticsearch. How do I fix this?One possible issue is your documents might take longer to index into Elasticsearch. ElasticsearchStore uses the Elasticsearch bulk API, which has a few defaults that you can adjust to reduce the chance of timeout errors.This is also a good idea when you're using SparseVectorRetrievalStrategy.The defaults are:chunk_size: 500max_chunk_bytes: 100MBTo adjust these, you can pass in the chunk_size and max_chunk_bytes parameters to the ElasticsearchStore add_texts method. vector_store.add_texts( texts, bulk_kwargs={ "chunk_size": 50, "max_chunk_bytes": 200000000 } )Upgrading to ElasticsearchStoreIf you're already using Elasticsearch in your LangChain-based project, you may be using the old implementations: ElasticVectorSearch and ElasticKNNSearch, which are now deprecated. We've introduced a new implementation called ElasticsearchStore which is more flexible and easier to use. 
This notebook will guide you through the process of upgrading to the new implementation.What's new?The new implementation is now one class, ElasticsearchStore, which can be used for approx, exact, and ELSER search retrieval, via strategies.I'm using ElasticKNNSearchOld implementation:from langchain.vectorstores.elastic_vector_search import ElasticKNNSearchdb = ElasticKNNSearch( elasticsearch_url="http://localhost:9200", index_name="test_index", embedding=embedding)New implementation:from langchain.vectorstores.elasticsearch import ElasticsearchStoredb = ElasticsearchStore( es_url="http://localhost:9200", index_name="test_index", embedding=embedding, # if you use the model_id # strategy=ElasticsearchStore.ApproxRetrievalStrategy( query_model_id="test_model" ) # if you use hybrid search # strategy=ElasticsearchStore.ApproxRetrievalStrategy( hybrid=True ))I'm using ElasticVectorSearchOld implementation:from langchain.vectorstores.elastic_vector_search import ElasticVectorSearchdb = ElasticVectorSearch( elasticsearch_url="http://localhost:9200", index_name="test_index", embedding=embedding)New implementation:from langchain.vectorstores.elasticsearch import ElasticsearchStoredb = ElasticsearchStore( es_url="http://localhost:9200", index_name="test_index", embedding=embedding, strategy=ElasticsearchStore.ExactRetrievalStrategy())db.client.indices.delete(index='test-metadata, test-elser, test-basic', ignore_unavailable=True, allow_no_indices=True) ObjectApiResponse({'acknowledged': True})PreviousDocArray InMemorySearchNextEpsillaRunning and connecting to ElasticsearchRunning Elasticsearch via DockerAuthenticationElastic CloudBasic ExampleFiltering MetadataExample: Filter by keywordExample: Filter by Date RangeExample: Filter by Numeric RangeExample: Filter by Geo DistanceApproxRetrievalStrategyExample: Approx with hybridExample: Approx with Embedding Model in ElasticsearchSparseVectorRetrievalStrategy (ELSER)ExactRetrievalStrategyCustomise the QueryQuestion: I'm getting timeout errors when indexing documents into Elasticsearch. How do I fix this?What's new?I'm using ElasticKNNSearchI'm using ElasticVectorSearch |
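The metadata filters shown earlier can also travel through the retriever interface when wiring ElasticsearchStore into chains; the sketch below passes the same match filter via search_kwargs over the metadata-enabled store from the filtering examples. The pass-through of filter to similarity_search is assumed here; verify against the LangChain version you use.
# Hedged sketch: a retriever that only surfaces chunks authored by "John Doe".
retriever = db.as_retriever(
    search_kwargs={
        "k": 4,
        "filter": [{"match": {"metadata.author": "John Doe"}}],
    }
)
docs = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
print(docs[0].metadata)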
635 | https://python.langchain.com/docs/integrations/vectorstores/epsilla | ComponentsVector storesEpsillaEpsillaEpsilla is an open-source vector database that leverages advanced parallel graph traversal techniques for vector indexing. Epsilla is licensed under GPL-3.0.This notebook shows how to use the functionalities related to the Epsilla vector database.As a prerequisite, you need to have a running Epsilla vector database (for example, through our docker image), and install the pyepsilla package. View full docs at docs.pip/pip3 install pyepsillaWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")OpenAI API Key: ········from langchain.embeddings import OpenAIEmbeddingsfrom langchain.vectorstores import Epsillafrom langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()documents = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(documents)embeddings = OpenAIEmbeddings()The Epsilla vectordb is running with the default host "localhost" and port "8888". Here we use a custom db path, db name, and collection name instead of the defaults.from pyepsilla import vectordbclient = vectordb.Client()vector_store = Epsilla.from_documents( documents, embeddings, client, db_path="/tmp/mypath", db_name="MyDB", collection_name="MyCollection")query = "What did the president say about Ketanji Brown Jackson"docs = vector_store.similarity_search(query)print(docs[0].page_content)In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.We cannot let this happen.Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.PreviousElasticsearchNextFaiss |
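One addition to the Epsilla example above: because every LangChain vector store inherits the generic as_retriever interface, the store can also be plugged into retrieval chains. A minimal sketch, assuming the vector_store object from the example:

# as_retriever() comes from the shared VectorStore base class.
retriever = vector_store.as_retriever(search_kwargs={"k": 2})
relevant_docs = retriever.get_relevant_documents(query)
print(relevant_docs[0].page_content)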
636 | https://python.langchain.com/docs/integrations/vectorstores/faiss | ComponentsVector storesFaissOn this pageFaissFacebook AI Similarity Search (Faiss) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning.Faiss documentation.This notebook shows how to use functionality related to the FAISS vector database.pip install faiss-gpu # For CUDA 7.5+ supported GPUs.# ORpip install faiss-cpu # For CPU InstallationWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")# Uncomment the following line if you need to initialize FAISS with no AVX2 optimization# os.environ['FAISS_NO_AVX2'] = '1'from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import FAISSfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../extras/modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = FAISS.from_documents(docs, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity Search with scoreThere are some FAISS-specific methods. One of them is similarity_search_with_score, which allows you to return not only the documents but also the distance score of the query to them. The returned distance score is L2 distance. Therefore, a lower score is better.docs_and_scores = db.similarity_search_with_score(query)docs_and_scores[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. 
One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.36913747)It is also possible to do a search for documents similar to a given embedding vector using similarity_search_by_vector, which accepts an embedding vector as a parameter instead of a string.embedding_vector = embeddings.embed_query(query)docs_and_scores = db.similarity_search_by_vector(embedding_vector)Saving and loadingYou can also save and load a FAISS index. This is useful so you don't have to recreate it every time you use it.db.save_local("faiss_index")new_db = FAISS.load_local("faiss_index", embeddings)docs = new_db.similarity_search(query)docs[0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})Serializing and De-Serializing to bytesYou can pickle the FAISS index with these functions. If you use an embeddings model that is around 90 MB in size (sentence-transformers/all-MiniLM-L6-v2 or any other model), the resultant pickle would be more than 90 MB, because the size of the model is included in the overall size. To overcome this, use the functions below. These functions only serialize the FAISS index, so the result is much smaller. This can be helpful if you wish to store the index in a database like SQL.from langchain.embeddings import HuggingFaceEmbeddingspkl = db.serialize_to_bytes() # serializes the faiss indexembeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")db = FAISS.deserialize_from_bytes(embeddings = embeddings, serialized = pkl) # Load the indexMergingYou can also merge two FAISS vectorstoresdb1 = FAISS.from_texts(["foo"], embeddings)db2 = FAISS.from_texts(["bar"], embeddings)db1.docstore._dict {'068c473b-d420-487a-806b-fb0ccea7f711': Document(page_content='foo', metadata={})}db2.docstore._dict {'807e0c63-13f6-4070-9774-5c6f0fbb9866': Document(page_content='bar', metadata={})}db1.merge_from(db2)db1.docstore._dict {'068c473b-d420-487a-806b-fb0ccea7f711': Document(page_content='foo', metadata={}), '807e0c63-13f6-4070-9774-5c6f0fbb9866': Document(page_content='bar', metadata={})}Similarity Search with filteringThe FAISS vectorstore can also support filtering; since FAISS does not natively support filtering, we have to do it manually. This is done by first fetching more results than k and then filtering them. You can filter the documents based on metadata. You can also set the fetch_k parameter when calling any search method to set how many documents you want to fetch before filtering. 
Here is a small example:from langchain.schema import Documentlist_of_documents = [ Document(page_content="foo", metadata=dict(page=1)), Document(page_content="bar", metadata=dict(page=1)), Document(page_content="foo", metadata=dict(page=2)), Document(page_content="barbar", metadata=dict(page=2)), Document(page_content="foo", metadata=dict(page=3)), Document(page_content="bar burr", metadata=dict(page=3)), Document(page_content="foo", metadata=dict(page=4)), Document(page_content="bar bruh", metadata=dict(page=4)),]db = FAISS.from_documents(list_of_documents, embeddings)results_with_scores = db.similarity_search_with_score("foo")for doc, score in results_with_scores: print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}") Content: foo, Metadata: {'page': 1}, Score: 5.159960813797904e-15 Content: foo, Metadata: {'page': 2}, Score: 5.159960813797904e-15 Content: foo, Metadata: {'page': 3}, Score: 5.159960813797904e-15 Content: foo, Metadata: {'page': 4}, Score: 5.159960813797904e-15Now we make the same query call, but filter for only page = 1 results_with_scores = db.similarity_search_with_score("foo", filter=dict(page=1))for doc, score in results_with_scores: print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}") Content: foo, Metadata: {'page': 1}, Score: 5.159960813797904e-15 Content: bar, Metadata: {'page': 1}, Score: 0.3131446838378906The same thing can be done with max_marginal_relevance_search as well.results = db.max_marginal_relevance_search("foo", filter=dict(page=1))for doc in results: print(f"Content: {doc.page_content}, Metadata: {doc.metadata}") Content: foo, Metadata: {'page': 1} Content: bar, Metadata: {'page': 1}Here is an example of how to set the fetch_k parameter when calling similarity_search. Usually you would want the fetch_k parameter to be much larger than the k parameter. This is because the fetch_k parameter is the number of documents that will be fetched before filtering. If you set fetch_k to a low number, you might not get enough documents to filter from.results = db.similarity_search("foo", filter=dict(page=1), k=1, fetch_k=4)for doc in results: print(f"Content: {doc.page_content}, Metadata: {doc.metadata}") Content: foo, Metadata: {'page': 1}DeleteYou can also delete records by id. Note that the ids to delete should be the ids in the docstore.db.delete([db.index_to_docstore_id[0]]) True# Is now missing0 in db.index_to_docstore_id FalsePreviousEpsillaNextHologresSimilarity Search with scoreSaving and loadingMergingSimilarity Search with filteringDelete |
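Because FAISS filtering is applied by the LangChain wrapper rather than by FAISS itself, the same filter and fetch_k arguments can plausibly be forwarded through as_retriever as well. A minimal sketch, assuming the db from the filtering example above and that search_kwargs are passed straight through to similarity_search:

# Forward the manual-filtering arguments through the retriever interface.
retriever = db.as_retriever(
    search_kwargs={"k": 1, "fetch_k": 4, "filter": dict(page=1)}
)
for doc in retriever.get_relevant_documents("foo"):
    print(f"Content: {doc.page_content}, Metadata: {doc.metadata}")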
637 | https://python.langchain.com/docs/integrations/vectorstores/hologres | ComponentsVector storesHologresHologresHologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time.
Hologres supports standard SQL syntax, is compatible with PostgreSQL, and supports most PostgreSQL functions. Hologres supports online analytical processing (OLAP) and ad hoc analysis for up to petabytes of data, and provides high-concurrency and low-latency online data services. Hologres provides vector database functionality by adopting Proxima.
Proxima is a high-performance software library developed by Alibaba DAMO Academy. It allows you to search for the nearest neighbors of vectors. Proxima provides higher stability and performance than similar open source software such as Faiss. Proxima allows you to search for similar text or image embeddings with high throughput and low latency. Hologres is deeply integrated with Proxima to provide a high-performance vector search service.This notebook shows how to use functionality related to the Hologres Proxima vector database.
Click here to fast deploy a Hologres cloud instance.#!pip install psycopg2from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import HologresSplit documents and get embeddings by calling the OpenAI APIfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Connect to Hologres by setting the related environment variables.export PGHOST={host}export PGPORT={port} # Optional, default is 80export PGDATABASE={db_name} # Optional, default is postgresexport PGUSER={username}export PGPASSWORD={password}Then store your embeddings and documents into Hologres:import osconnection_string = Hologres.connection_string_from_db_params( host=os.environ.get("PGHOST", "localhost"), port=int(os.environ.get("PGPORT", "80")), database=os.environ.get("PGDATABASE", "postgres"), user=os.environ.get("PGUSER", "postgres"), password=os.environ.get("PGPASSWORD", "postgres"),)vector_db = Hologres.from_documents( docs, embeddings, connection_string=connection_string, table_name="langchain_example_embeddings",)Query and retrieve dataquery = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.PreviousFaissNextLanceDB |
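Once documents are stored in Hologres, the vector store can be wired into a question-answering chain like any other LangChain retriever. A sketch of the standard RetrievalQA pattern (not specific to Hologres), assuming the vector_db object created above and an OpenAI API key in the environment:

from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

# Standard RetrievalQA wiring over the Hologres-backed retriever.
qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vector_db.as_retriever(),
)
qa_chain({"query": "What did the president say about Ketanji Brown Jackson"})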
638 | https://python.langchain.com/docs/integrations/vectorstores/lancedb | ComponentsVector storesLanceDBLanceDBLanceDB is an open-source database for vector search built with persistent storage, which greatly simplifies retrieval, filtering, and management of embeddings. Fully open source.This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format.pip install lancedbWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ········from langchain.embeddings import OpenAIEmbeddingsfrom langchain.vectorstores import LanceDBfrom langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()documents = CharacterTextSplitter().split_documents(documents)embeddings = OpenAIEmbeddings()import lancedbdb = lancedb.connect("/tmp/lancedb")table = db.create_table( "my_table", data=[ { "vector": embeddings.embed_query("Hello World"), "text": "Hello World", "id": "1", } ], mode="overwrite",)docsearch = LanceDB.from_documents(documents, embeddings, connection=table)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content) They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers. That’s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption—trusted messengers breaking the cycle of violence and trauma and giving young people hope. We should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities. I ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe. And I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home—they have no serial numbers and can’t be traced. And I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon? Ban assault weapons and high-capacity magazines. Repeal the liability shield that makes gun manufacturers the only industry in America that can’t be sued. These laws don’t infringe on the Second Amendment. They save lives. The most fundamental right in America is the right to vote – and to have it counted. And it’s under assault. 
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.PreviousHologresNextLLMRails |
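Since the LanceDB table above is persisted on disk at /tmp/lancedb, a later session should be able to reopen it rather than re-embedding everything. A hedged sketch: open_table is assumed from the lancedb client API, and the LanceDB constructor arguments are assumed to mirror the from_documents call above; verify both against your installed versions.

import lancedb
from langchain.vectorstores import LanceDB

# Reconnect to the on-disk database and reopen the table created earlier.
db = lancedb.connect("/tmp/lancedb")
table = db.open_table("my_table")  # assumed lancedb API for reopening a table
docsearch = LanceDB(connection=table, embedding=embeddings)
docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson")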
639 | https://python.langchain.com/docs/integrations/vectorstores/llm_rails | ComponentsVector storesLLMRailsOn this pageLLMRailsLLMRails is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by LLMRails and is optimized for performance and accuracy.
See the LLMRails API documentation for more information on how to use the API.This notebook shows how to use functionality related to the LLMRails integration with LangChain.
Note that unlike many other integrations in this category, LLMRails provides an end-to-end managed service for retrieval augmented generation, which includes:A way to extract text from document files and chunk them into sentences.Its own embeddings model and vector store - each text segment is encoded into a vector embedding and stored in the LLMRails internal vector storeA query service that automatically encodes the query into an embedding, and retrieves the most relevant text segments (including support for Hybrid Search)All of these are supported in this LangChain integration.SetupYou will need an LLMRails account to use LLMRails with LangChain. To get started, use the following steps:Sign up for an LLMRails account if you don't already have one.Next you'll need to create API keys to access the API. Click on the "API Keys" tab in the corpus view and then the "Create API Key" button. Give your key a name. Click "Create key" and you now have an active API key. Keep this key confidential. To use LangChain with LLMRails, you'll need to have this value: api_key.
You can provide those to LangChain in two ways:Include in your environment these two variables: LLM_RAILS_API_KEY, LLM_RAILS_DATASTORE_ID.For example, you can set these variables using os.environ and getpass as follows:import osimport getpassos.environ["LLM_RAILS_API_KEY"] = getpass.getpass("LLMRails API Key:")os.environ["LLM_RAILS_DATASTORE_ID"] = getpass.getpass("LLMRails Datastore Id:")Provide them as arguments when creating the LLMRails vectorstore object:vectorstore = LLMRails( api_key=llm_rails_api_key, datastore_id=datastore_id)Adding textTo add text to your datastore, first go to the Datastores page and create one. Click the Create Datastore button and choose a name and embedding model for your datastore. Then get your datastore id from the newly created datastore's settings.from langchain.vectorstores import LLMRailsimport osos.environ['LLM_RAILS_DATASTORE_ID'] = 'Your datastore id'os.environ['LLM_RAILS_API_KEY'] = 'Your API Key'llm_rails = LLMRails.from_texts(['Your text here'])Similarity searchThe simplest scenario for using LLMRails is to perform a similarity search. query = "What do you plan to do about national security?"found_docs = llm_rails.similarity_search( query, k=5)print(found_docs[0].page_content) Others may not be democratic but nevertheless depend upon a rules-based international system. Yet what we share in common, and the prospect of a freer and more open world, makes such a broad coalition necessary and worthwhile. We will listen to and consider ideas that our partners suggest about how to do this. Building this inclusive coalition requires reinforcing the multilateral system to uphold the founding principles of the United Nations, including respect for international law. 141 countries expressed support at the United Nations General Assembly for a resolution condemning Russia’s unprovoked aggression against Ukraine. We continue to demonstrate this approach by engaging all regions across all issues, not in terms of what we are against but what we are for. This year, we partnered with ASEAN to advance clean energy infrastructure and maritime security in the region. We kickstarted the Prosper Africa Build Together Campaign to fuel economic growth across the continent and bolster trade and investment in the clean energy, health, and digital technology sectors. We are working to develop a partnership with countries on the Atlantic Ocean to establish and carry out a shared approach to advancing our joint development, economic, environmental, scientific, and maritime governance goals. We galvanized regional action to address the core challenges facing the Western Hemisphere by spearheading the Americas Partnership for Economic Prosperity to drive economic recovery and by mobilizing the region behind a bold and unprecedented approach to migration through the Los Angeles Declaration on Migration and Protection. In the Middle East, we have worked to enhance deterrence toward Iran, de-escalate regional conflicts, deepen integration among a diverse set of partners in the region, and bolster energy stability. 
A prime example of an inclusive coalition is IPEF, which we launched alongside a dozen regional partners that represent 40 percent of the world’s GDP.Similarity search with scoreSometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is.query = "What is your approach to national defense"found_docs = llm_rails.similarity_search_with_score( query, k=5,)document, score = found_docs[0]print(document.page_content)print(f"\nScore: {score}") But we will do so as the last resort and only when the objectives and mission are clear and achievable, consistent with our values and laws, alongside non-military tools, and the mission is undertaken with the informed consent of the American people. Our approach to national defense is described in detail in the 2022 National Defense Strategy. Our starting premise is that a powerful U.S. military helps advance and safeguard vital U.S. national interests by backstopping diplomacy, confronting aggression, deterring conflict, projecting strength, and protecting the American people and their economic interests. Amid intensifying competition, the military’s role is to maintain and gain warfighting advantages while limiting those of our competitors. The military will act urgently to sustain and strengthen deterrence, with the PRC as its pacing challenge. We will make disciplined choices regarding our national defense and focus our attention on the military’s primary responsibilities: to defend the homeland, and deter attacks and aggression against the United States, our allies and partners, while being prepared to fight and win the Nation’s wars should diplomacy and deterrence fail. To do so, we will combine our strengths to achieve maximum effect in deterring acts of aggression—an approach we refer to as integrated deterrence (see text box on page 22). We will operate our military using a campaigning mindset—sequencing logically linked military activities to advance strategy-aligned priorities. And, we will build a resilient force and defense ecosystem to ensure we can perform these functions for decades to come. We ended America’s longest war in Afghanistan, and with it an era of major military operations to remake other societies, even as we have maintained the capacity to address terrorist threats to the American people as they emerge. 20 NATIONAL SECURITY STRATEGY Page 21 A combat-credible military is the foundation of deterrence and America’s ability to prevail in conflict. Score: 0.5040982687179959LLMRails as a RetrieverLLMRails, like all the other LangChain vectorstores, is most often used as a LangChain Retriever:retriever = llm_rails.as_retriever()retriever LLMRailsRetriever(tags=None, metadata=None, vectorstore=<langchain.vectorstores.llm_rails.LLMRails object at 0x107b9c040>, search_type='similarity', search_kwargs={'k': 5})query = "What is your approach to national defense"retriever.get_relevant_documents(query)[0] Document(page_content='But we will do so as the last resort and only when the objectives and mission are clear and achievable, consistent with our values and laws, alongside non-military tools, and the mission is undertaken with the informed consent of the American people.\n\nOur approach to national defense is described in detail in the 2022 National Defense Strategy.\n\nOur starting premise is that a powerful U.S. military helps advance and safeguard vital U.S. 
national interests by backstopping diplomacy, confronting aggression, deterring conflict, projecting strength, and protecting the American people and their economic interests.\n\nAmid intensifying competition, the military’s role is to maintain and gain warfighting advantages while limiting those of our competitors.\n\nThe military will act urgently to sustain and strengthen deterrence, with the PRC as its pacing challenge.\n\nWe will make disciplined choices regarding our national defense and focus our attention on the military’s primary responsibilities: to defend the homeland, and deter attacks and aggression against the United States, our allies and partners, while being prepared to fight and win the Nation’s wars should diplomacy and deterrence fail.\n\nTo do so, we will combine our strengths to achieve maximum effect in deterring acts of aggression—an approach we refer to as integrated deterrence (see text box on page 22).\n\nWe will operate our military using a campaigning mindset—sequencing logically linked military activities to advance strategy-aligned priorities.\n\nAnd, we will build a resilient force and defense ecosystem to ensure we can perform these functions for decades to come.\n\nWe ended America’s longest war in Afghanistan, and with it an era of major military operations to remake other societies, even as we have maintained the capacity to address terrorist threats to the American people as they emerge.\n\n20 NATIONAL SECURITY STRATEGY Page 21 \x90\x90\x90\x90\x90\x90\n\nA combat-credible military is the foundation of deterrence and America’s ability to prevail in conflict.', metadata={'type': 'file', 'url': 'https://cdn.llmrails.com/dst_d94b490c-4638-4247-ad5e-9aa0e7ef53c1/c2d63a2ea3cd406cb522f8312bc1535d', 'name': 'Biden-Harris-Administrations-National-Security-Strategy-10.2022.pdf'})PreviousLanceDBNextMarqoAdding textSimilarity searchSimilarity search with scoreLLMRails as a Retriever |
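Because LLMRails exposes a standard retriever, it can be dropped into a question-answering chain. A minimal sketch of the usual RetrievalQA wiring, assuming the llm_rails object from the examples above and an OpenAI API key in the environment:

from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Any chain that accepts a retriever should work with LLMRailsRetriever.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=llm_rails.as_retriever(),
)
qa({"query": "What is your approach to national defense?"})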
640 | https://python.langchain.com/docs/integrations/vectorstores/marqo | ComponentsVector storesMarqoOn this pageMarqoThis notebook shows how to use functionality related to the Marqo vectorstore.Marqo is an open-source vector search engine. Marqo allows you to store and query multimodal data such as text and images. Marqo creates the vectors for you using a huge selection of open-source models; you can also provide your own fine-tuned models and Marqo will handle the loading and inference for you.To run this notebook with our docker image please run the following commands first to get Marqo:docker pull marqoai/marqo:latestdocker rm -f marqodocker run --name marqo -it --privileged -p 8882:8882 --add-host host.docker.internal:host-gateway marqoai/marqo:latestpip install marqofrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Marqofrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)import marqo# initialize marqomarqo_url = "http://localhost:8882" # if using marqo cloud replace with your endpoint (console.marqo.ai)marqo_api_key = "" # if using marqo cloud replace with your api key (console.marqo.ai)client = marqo.Client(url=marqo_url, api_key=marqo_api_key)index_name = "langchain-demo"docsearch = Marqo.from_documents(docs, index_name=index_name)query = "What did the president say about Ketanji Brown Jackson"result_docs = docsearch.similarity_search(query) Index langchain-demo exists.print(result_docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.result_docs = docsearch.similarity_search_with_score(query)print(result_docs[0][0].page_content, result_docs[0][1], sep="\n") Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. 0.68647254Additional featuresOne of the powerful features of Marqo as a vectorstore is that you can use indexes created externally. 
For example:If you had a database of image and text pairs from another application, you can simply use it in LangChain with the Marqo vectorstore. Note that bringing your own multimodal indexes will disable the add_texts method.If you had a database of text documents, you can bring it into the LangChain framework and add more texts through add_texts.The documents that are returned are customised by passing your own function to the page_content_builder callback in the search methods.Multimodal Example# use a new indexindex_name = "langchain-multimodal-demo"# in case the demo is re-runtry: client.delete_index(index_name)except Exception: print(f"Creating {index_name}")# This index could have been created by another systemsettings = {"treat_urls_and_pointers_as_images": True, "model": "ViT-L/14"}client.create_index(index_name, **settings)client.index(index_name).add_documents( [ # image of a bus { "caption": "Bus", "image": "https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image4.jpg", }, # image of a plane { "caption": "Plane", "image": "https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image2.jpg", }, ],) {'errors': False, 'processingTimeMs': 2090.2822139996715, 'index_name': 'langchain-multimodal-demo', 'items': [{'_id': 'aa92fc1c-1fb2-4d86-b027-feb507c419f7', 'result': 'created', 'status': 201}, {'_id': '5142c258-ef9f-4bf2-a1a6-2307280173a0', 'result': 'created', 'status': 201}]}def get_content(res): """Helper to format Marqo's documents into text to be used as page_content""" return f"{res['caption']}: {res['image']}"docsearch = Marqo(client, index_name, page_content_builder=get_content)query = "vehicles that fly"doc_results = docsearch.similarity_search(query)for doc in doc_results: print(doc.page_content) Plane: https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image2.jpg Bus: https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image4.jpgText only example# use a new indexindex_name = "langchain-byo-index-demo"# in case the demo is re-runtry: client.delete_index(index_name)except Exception: print(f"Creating {index_name}")# This index could have been created by another systemclient.create_index(index_name)client.index(index_name).add_documents( [ { "Title": "Smartphone", "Description": "A smartphone is a portable computer device that combines mobile telephone " "functions and computing functions into one unit.", }, { "Title": "Telephone", "Description": "A telephone is a telecommunications device that permits two or more users to" "conduct a conversation when they are too far apart to be easily heard directly.", }, ],) {'errors': False, 'processingTimeMs': 139.2144540004665, 'index_name': 'langchain-byo-index-demo', 'items': [{'_id': '27c05a1c-b8a9-49a5-ae73-fbf1eb51dc3f', 'result': 'created', 'status': 201}, {'_id': '6889afe0-e600-43c1-aa3b-1d91bf6db274', 'result': 'created', 'status': 201}]}# Note text indexes retain the ability to use add_texts despite different field names in documents# this is because the page_content_builder callback lets you handle these document fields as requireddef get_content(res): """Helper to format Marqo's documents into text to be used as page_content""" if "text" in res: return res["text"] return res["Description"]docsearch = Marqo(client, index_name, page_content_builder=get_content)docsearch.add_texts(["This is a document that is about elephants"]) ['9986cc72-adcd-4080-9d74-265c173a9ec3']query = 
"modern communications devices"doc_results = docsearch.similarity_search(query)print(doc_results[0].page_content) A smartphone is a portable computer device that combines mobile telephone functions and computing functions into one unit.query = "elephants"doc_results = docsearch.similarity_search(query, page_content_builder=get_content)print(doc_results[0].page_content) This is a document that is about elephantsWeighted QueriesWe also expose marqos weighted queries which are a powerful way to compose complex semantic searches.query = {"communications devices": 1.0}doc_results = docsearch.similarity_search(query)print(doc_results[0].page_content) A smartphone is a portable computer device that combines mobile telephone functions and computing functions into one unit.query = {"communications devices": 1.0, "technology post 2000": -1.0}doc_results = docsearch.similarity_search(query)print(doc_results[0].page_content) A telephone is a telecommunications device that permits two or more users toconduct a conversation when they are too far apart to be easily heard directly.Question Answering with SourcesThis section shows how to use Marqo as part of a RetrievalQAWithSourcesChain. Marqo will perform the searches for information in the sources.from langchain.chains import RetrievalQAWithSourcesChainfrom langchain.llms import OpenAIimport osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key:········with open("../../../state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)index_name = "langchain-qa-with-retrieval"docsearch = Marqo.from_documents(docs, index_name=index_name) Index langchain-qa-with-retrieval exists.chain = RetrievalQAWithSourcesChain.from_chain_type( OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())chain( {"question": "What did the president say about Justice Breyer"}, return_only_outputs=True,) {'answer': ' The president honored Justice Breyer, thanking him for his service and noting that he is a retiring Justice of the United States Supreme Court.\n', 'sources': '../../../state_of_the_union.txt'}PreviousLLMRailsNextGoogle Vertex AI MatchingEngineAdditional featuresWeighted Queries |
641 | https://python.langchain.com/docs/integrations/vectorstores/matchingengine | ComponentsVector storesGoogle Vertex AI MatchingEngineOn this pageGoogle Vertex AI MatchingEngineThis notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database.Vertex AI Matching Engine provides the industry's leading high-scale low latency vector database. These vector databases are commonly referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.Note: This module expects an endpoint and deployed index already created as the creation time takes close to one hour. To see how to create an index refer to the section Create Index and deploy it to an EndpointCreate VectorStore from textsfrom langchain.vectorstores import MatchingEnginetexts = [ "The cat sat on", "the mat.", "I like to", "eat pizza for", "dinner.", "The sun sets", "in the west.",]vector_store = MatchingEngine.from_components( texts=texts, project_id="<my_project_id>", region="<my_region>", gcs_bucket_uri="<my_gcs_bucket>", index_id="<my_matching_engine_index_id>", endpoint_id="<my_matching_engine_endpoint_id>",)vector_store.add_texts(texts=texts)vector_store.similarity_search("lunch", k=2)Create Index and deploy it to an EndpointImports, Constants and Configs# Installing dependencies.pip install tensorflow \ google-cloud-aiplatform \ tensorflow-hub \ tensorflow-textimport osimport jsonfrom google.cloud import aiplatformimport tensorflow_hub as hubimport tensorflow_textPROJECT_ID = "<my_project_id>"REGION = "<my_region>"VPC_NETWORK = "<my_vpc_network_name>"PEERING_RANGE_NAME = "ann-langchain-me-range" # Name for creating the VPC peering.BUCKET_URI = "gs://<bucket_uri>"# The number of dimensions for the tensorflow universal sentence encoder.# If other embedder is used, the dimensions would probably need to change.DIMENSIONS = 512DISPLAY_NAME = "index-test-name"EMBEDDING_DIR = f"{BUCKET_URI}/banana"DEPLOYED_INDEX_ID = "endpoint-test-name"PROJECT_NUMBER = !gcloud projects list --filter="PROJECT_ID:'{PROJECT_ID}'" --format='value(PROJECT_NUMBER)'PROJECT_NUMBER = PROJECT_NUMBER[0]VPC_NETWORK_FULL = f"projects/{PROJECT_NUMBER}/global/networks/{VPC_NETWORK}"# Change this if you need the VPC to be created.CREATE_VPC = False# Set the project id gcloud config set project {PROJECT_ID}# Remove the if condition to run the encapsulated codeif CREATE_VPC: # Create a VPC network gcloud compute networks create {VPC_NETWORK} --bgp-routing-mode=regional --subnet-mode=auto --project={PROJECT_ID} # Add necessary firewall rules gcloud compute firewall-rules create {VPC_NETWORK}-allow-icmp --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow icmp gcloud compute firewall-rules create {VPC_NETWORK}-allow-internal --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow all --source-ranges 10.128.0.0/9 gcloud compute firewall-rules create {VPC_NETWORK}-allow-rdp --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow tcp:3389 gcloud compute firewall-rules create {VPC_NETWORK}-allow-ssh --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow tcp:22 # Reserve IP range gcloud compute addresses create {PEERING_RANGE_NAME} --global --prefix-length=16 --network={VPC_NETWORK} --purpose=VPC_PEERING --project={PROJECT_ID} --description="peering range" # Set up peering with service networking # Your account must have the "Compute Network Admin" role to run the following. 
gcloud services vpc-peerings connect --service=servicenetworking.googleapis.com --network={VPC_NETWORK} --ranges={PEERING_RANGE_NAME} --project={PROJECT_ID}# Creating bucket. gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URIUsing Tensorflow Universal Sentence Encoder as an Embedder# Load the Universal Sentence Encoder modulemodule_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"model = hub.load(module_url)# Generate embeddings for each wordembeddings = model(["banana"])Inserting a test embeddinginitial_config = { "id": "banana_id", "embedding": [float(x) for x in list(embeddings.numpy()[0])],}with open("data.json", "w") as f: json.dump(initial_config, f)gsutil cp data.json {EMBEDDING_DIR}/file.jsonaiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)Creating Indexmy_index = aiplatform.MatchingEngineIndex.create_tree_ah_index( display_name=DISPLAY_NAME, contents_delta_uri=EMBEDDING_DIR, dimensions=DIMENSIONS, approximate_neighbors_count=150, distance_measure_type="DOT_PRODUCT_DISTANCE",)Creating Endpointmy_index_endpoint = aiplatform.MatchingEngineIndexEndpoint.create( display_name=f"{DISPLAY_NAME}-endpoint", network=VPC_NETWORK_FULL,)Deploy Indexmy_index_endpoint = my_index_endpoint.deploy_index( index=my_index, deployed_index_id=DEPLOYED_INDEX_ID)my_index_endpoint.deployed_indexesPreviousMarqoNextMeilisearchCreate VectorStore from textsCreate Index and deploy it to an EndpointImports, Constants and ConfigsUsing Tensorflow Universal Sentence Encoder as an EmbedderInserting a test embeddingCreating IndexCreating EndpointDeploy Index |
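Deployed Matching Engine indexes and endpoints keep accruing cost, so it is worth tearing them down after experimentation. A hedged clean-up sketch; the method names below are taken from the google-cloud-aiplatform SDK as we understand it, so verify them against your installed version:

# Undeploy the index from the endpoint, then delete both resources.
my_index_endpoint.undeploy_index(deployed_index_id=DEPLOYED_INDEX_ID)
my_index_endpoint.delete()
my_index.delete()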
642 | https://python.langchain.com/docs/integrations/vectorstores/meilisearch | ComponentsVector storesMeilisearchOn this pageMeilisearchMeilisearch is an open-source, lightning-fast, and hyper-relevant search engine. It comes with great defaults to help developers build snappy search experiences. You can self-host Meilisearch or run on Meilisearch Cloud.Meilisearch v1.3 supports vector search. This page guides you through integrating Meilisearch as a vector store and using it to perform vector search.SetupLaunching a Meilisearch instanceYou will need a running Meilisearch instance to use as your vector store. You can run Meilisearch locally or create a Meilisearch Cloud account.As of Meilisearch v1.3, vector storage is an experimental feature. After launching your Meilisearch instance, you need to enable vector storage. For self-hosted Meilisearch, read the docs on enabling experimental features. On Meilisearch Cloud, enable Vector Store via your project Settings page.You should now have a running Meilisearch instance with vector storage enabled. 🎉CredentialsTo interact with your Meilisearch instance, the Meilisearch SDK needs a host (URL of your instance) and an API key.HostLocally, the default host is localhost:7700On Meilisearch Cloud, find the host in your project Settings pageAPI keysA Meilisearch instance provides you with three API keys out of the box: A MASTER KEY — it should only be used to create your Meilisearch instanceAn ADMIN KEY — use it only server-side to update your database and its settingsA SEARCH KEY — a key that you can safely share in front-end applicationsYou can create additional API keys as needed.Installing dependenciesThis guide uses the Meilisearch Python SDK. You can install it by running:pip install meilisearchFor more information, refer to the Meilisearch Python SDK documentation.ExamplesThere are multiple ways to initialize the Meilisearch vector store: providing a Meilisearch client or the URL and API key as needed. In our examples, the credentials will be loaded from the environment.You can make environment variables available in your Notebook environment by using os and getpass. You can use this technique for all the following examples.import osimport getpassos.environ["MEILI_HTTP_ADDR"] = getpass.getpass("Meilisearch HTTP address and port:")os.environ["MEILI_MASTER_KEY"] = getpass.getpass("Meilisearch API Key:")We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Adding text and embeddingsThis example adds text to the Meilisearch vector database without having to initialize a Meilisearch vector store.from langchain.vectorstores import Meilisearchfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterembeddings = OpenAIEmbeddings()with open("../../../state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)# Use Meilisearch vector store to store texts & associated embeddings as vectorvector_store = Meilisearch.from_texts(texts=texts, embedding=embeddings)Behind the scenes, Meilisearch will convert the text to multiple vectors. This will bring us to the same result as the following example.Adding documents and embeddingsIn this example, we'll use the LangChain TextSplitter to split the text into multiple documents. 
Then, we'll store these documents along with their embeddings.from langchain.document_loaders import TextLoader# Load textloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)# Create documentsdocs = text_splitter.split_documents(documents)# Import documents & embeddings in the vector storevector_store = Meilisearch.from_documents(documents=docs, embedding=embeddings)# Search in our vector storequery = "What did the president say about Ketanji Brown Jackson"docs = vector_store.similarity_search(query)print(docs[0].page_content)Add documents by creating a Meilisearch VectorstoreIn this approach, we create a vector store object and add documents to it.from langchain.vectorstores import Meilisearchimport meilisearchclient = meilisearch.Client(url="http://127.0.0.1:7700", api_key="***")vector_store = Meilisearch( embedding=embeddings, client=client, index_name="langchain_demo", text_key="text")vector_store.add_documents(documents)Similarity Search with scoreThis specific method allows you to return the documents and the distance score of the query to them.docs_and_scores = vector_store.similarity_search_with_score(query)docs_and_scores[0]Similarity Search by vectorembedding_vector = embeddings.embed_query(query)docs_and_scores = vector_store.similarity_search_by_vector(embedding_vector)docs_and_scores[0]Additional resourcesDocumentationMeilisearchMeilisearch Python SDKOpen-source repositoriesMeilisearch repositoryMeilisearch Python SDKPreviousGoogle Vertex AI MatchingEngineNextMilvusSetupLaunching a Meilisearch instanceCredentialsInstalling dependenciesExamplesAdding text and embeddingsAdding documents and embeddingsAdd documents by creating a Meilisearch VectorstoreSimilarity Search with scoreSimilarity Search by vectorAdditional resources |
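As with the other vector stores in this section, the Meilisearch store also inherits the generic retriever interface, which is what most chains consume. A minimal sketch, assuming the vector_store and query from the examples above:

# as_retriever() is the shared VectorStore method; search_kwargs flow to similarity_search.
retriever = vector_store.as_retriever(search_kwargs={"k": 4})
docs = retriever.get_relevant_documents(query)
print(docs[0].page_content)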
643 | https://python.langchain.com/docs/integrations/vectorstores/milvus | ComponentsVector storesMilvusOn this pageMilvusMilvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.This notebook shows how to use functionality related to the Milvus vector database.To run, you should have a Milvus instance up and running.pip install pymilvusWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key:········from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Milvusfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()vector_db = Milvus.from_documents( docs, embeddings, connection_args={"host": "127.0.0.1", "port": "19530"},)query = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)docs[0].page_content 'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'Compartmentalize the data with Milvus CollectionsYou can store different unrelated documents in different collections within the same Milvus instance to maintain context.Here's how you can create a new collection:vector_db = Milvus.from_documents( docs, embeddings, collection_name = 'collection_1', connection_args={"host": "127.0.0.1", "port": "19530"},)And here is how you retrieve that stored collection:vector_db = Milvus( embeddings, connection_args={"host": "127.0.0.1", "port": "19530"}, collection_name = 'collection_1' )After retrieval you can go on querying it as usual.PreviousMeilisearchNextMomento Vector Index (MVI)Compartmentalize the data with Milvus Collections
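Beyond plain similarity search, the Milvus integration also appears to implement max-marginal-relevance search, which trades a little relevance for more diverse results. A hedged sketch, assuming the vector_db and query from the examples above and that the method follows the shared VectorStore signature:

# MMR fetches fetch_k candidates, then re-ranks them down to k diverse results.
docs = vector_db.max_marginal_relevance_search(query, k=2, fetch_k=10)
for doc in docs:
    print(doc.page_content[:100])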
644 | https://python.langchain.com/docs/integrations/vectorstores/momento_vector_index | ComponentsVector storesMomento Vector Index (MVI)On this pageMomento Vector Index (MVI)MVI: the most productive, easiest to use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about scaling. MVI is a service that scales automatically to meet your needs.To sign up and access MVI, visit the Momento Console.SetupInstall prerequisitesYou will need:the momento package for interacting with MVI, andthe openai package for interacting with the OpenAI API.the tiktoken package for tokenizing text.pip install momento openai tiktokenEnter API keysimport osimport getpassMomento: for indexing dataVisit the Momento Console to get your API key.os.environ["MOMENTO_API_KEY"] = getpass.getpass("Momento API Key:")OpenAI: for text embeddingsos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Load your dataHere we use the example dataset from Langchain, the state of the union address.First we load relevant modules:from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import MomentoVectorIndexfrom langchain.document_loaders import TextLoaderThen we load the data:loader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()len(documents) 1Note the data is one large file, hence there is only one document:len(documents[0].page_content) 38539Because this is one large text file, we split it into chunks for question answering. That way, user questions will be answered from the most relevant chunk.text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)len(docs) 42Index your dataIndexing your data is as simple as instantiating the MomentoVectorIndex object. Here we use the from_documents helper to both instantiate and index the data:vector_db = MomentoVectorIndex.from_documents( docs, OpenAIEmbeddings(), index_name="sotu")This connects to the Momento Vector Index service using your API key and indexes the data. If the index did not exist before, this process creates it for you. The data is now searchable.Query your dataAsk a question directly against the indexThe most direct way to query the data is to search against the index. We can do that as follows using the VectorStore API:query = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)docs[0].page_content 'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'While this does contain relevant information about Ketanji Brown Jackson, we don't have a concise, human-readable answer. 
We'll tackle that in the next section.Use an LLM to generate fluent answersWith the data indexed in MVI, we can integrate with any chain that leverages vector similarity search. Here we use the RetrievalQA chain to demonstrate how to answer questions from the indexed data.First we load the relevant modules:from langchain.chat_models import ChatOpenAIfrom langchain.chains import RetrievalQAThen we instantiate the retrieval QA chain:llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)qa_chain = RetrievalQA.from_chain_type(llm, retriever=vector_db.as_retriever())qa_chain({"query": "What did the president say about Ketanji Brown Jackson?"}) {'query': 'What did the president say about Ketanji Brown Jackson?', 'result': "The President said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court. He described her as one of the nation's top legal minds and mentioned that she has received broad support from various groups, including the Fraternal Order of Police and former judges appointed by Democrats and Republicans."}Next StepsThat's it! You've now indexed your data and can query it using the Momento Vector Index. You can use the same index to query your data from any chain that supports vector similarity search.With Momento you can not only index your vector data, but also cache your API calls and store your chat message history. Check out the other Momento langchain integrations to learn more.To learn more about the Momento Vector Index, visit the Momento Documentation.PreviousMilvusNextMongoDB AtlasInstall prerequisitesEnter API keysMomento: for indexing dataOpenAI: for text embeddingsAsk a question directly against the indexUse an LLM to generate fluent answers |
645 | https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas | ComponentsVector storesMongoDB AtlasMongoDB AtlasMongoDB Atlas is a fully-managed cloud database available in AWS, Azure, and GCP. It now has support for native Vector Search on your MongoDB document data.This notebook shows how to use MongoDB Atlas Vector Search to store your embeddings in MongoDB documents, create a vector search index, and perform KNN search with an approximate nearest neighbor algorithm.It uses the knnBeta Operator available in MongoDB Atlas Search. This feature is in Public Preview and available for evaluation purposes, to validate functionality, and to gather feedback from public preview users. It is not recommended for production deployments as we may introduce breaking changes.To use MongoDB Atlas, you must first deploy a cluster. We have a Forever-Free tier of clusters available.
To get started, head over to Atlas here: quick start.pip install pymongoimport osimport getpassMONGODB_ATLAS_CLUSTER_URI = getpass.getpass("MongoDB Atlas Cluster URI:")We want to use OpenAIEmbeddings so we need to set up our OpenAI API Key. os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Now, let's create a vector search index on your cluster. In the example below, embedding is the name of the field that contains the embedding vector. Please refer to the documentation to get more details on how to define an Atlas Vector Search index.
You can name the index langchain_demo and create the index on the namespace langchain_db.langchain_col. Finally, write the following definition in the JSON editor on MongoDB Atlas:{ "mappings": { "dynamic": true, "fields": { "embedding": { "dimensions": 1536, "similarity": "cosine", "type": "knnVector" } } }}from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import MongoDBAtlasVectorSearchfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()from pymongo import MongoClient# initialize MongoDB python clientclient = MongoClient(MONGODB_ATLAS_CLUSTER_URI)db_name = "langchain_db"collection_name = "langchain_col"collection = client[db_name][collection_name]index_name = "langchain_demo"# insert the documents in MongoDB Atlas with their embeddingdocsearch = MongoDBAtlasVectorSearch.from_documents( docs, embeddings, collection=collection, index_name=index_name)# perform a similarity search between the embedding of the query and the embeddings of the documentsquery = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)You can also instantiate the vector store directly and execute a query as follows:# initialize vector storevectorstore = MongoDBAtlasVectorSearch( collection, OpenAIEmbeddings(), index_name=index_name)# perform a similarity search between a query and the ingested documentsquery = "What did the president say about Ketanji Brown Jackson"docs = vectorstore.similarity_search(query)print(docs[0].page_content)PreviousMomento Vector Index (MVI)NextMyScale |
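To go beyond raw similarity search, the Atlas vector store can be wired into a QA chain the same way as the other stores in this section. A minimal sketch; as_retriever and RetrievalQA are standard LangChain APIs, and the model name is an illustrative choice:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Build a QA chain on top of the Atlas-backed retriever
qa_chain = RetrievalQA.from_chain_type(
    ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=vectorstore.as_retriever(),
)
qa_chain({"query": "What did the president say about Ketanji Brown Jackson?"})
```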
646 | https://python.langchain.com/docs/integrations/vectorstores/myscale | ComponentsVector storesMyScaleOn this pageMyScaleMyScale is a cloud-based database optimized for AI applications and solutions, built on the open-source ClickHouse. This notebook shows how to use functionality related to the MyScale vector database.Setting up environmentspip install clickhouse-connectWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")There are two ways to set up parameters for the MyScale index.Environment VariablesBefore you run the app, please set the environment variable with export:
export MYSCALE_HOST='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...You can easily find your account, password and other info on our SaaS. For details please refer to this documentEvery attribute under MyScaleSettings can be set with the prefix MYSCALE_ and is case-insensitive.Create MyScaleSettings object with parameters```pythonfrom langchain.vectorstores import MyScale, MyScaleSettingsconfig = MyScaleSettings(host="<your-backend-url>", port=8443, ...)index = MyScale(embedding_function, config)index.add_documents(...)```from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import MyScalefrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()for d in docs: d.metadata = {"some": "metadata"}docsearch = MyScale.from_documents(docs, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)Get connection info and data schemaprint(str(docsearch))FilteringYou have direct access to the MyScale SQL WHERE statement; you can write a WHERE clause following standard SQL.NOTE: Please be aware of SQL injection; this interface must not be called directly by end users.If you customized your column_map in your settings, you can search with a filter like this:from langchain.vectorstores import MyScale, MyScaleSettingsfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()for i, d in enumerate(docs): d.metadata = {"doc_id": i}docsearch = MyScale.from_documents(docs, embeddings)Similarity search with scoreThe returned distance score is cosine distance. Therefore, a lower score is better.meta = docsearch.metadata_columnoutput = docsearch.similarity_search_with_relevance_scores( "What did the president say about Ketanji Brown Jackson?", k=4, where_str=f"{meta}.doc_id<10",)for d, dist in output: print(dist, d.metadata, d.page_content[:20] + "...")Deleting your datadocsearch.drop()PreviousMongoDB AtlasNextNeo4j Vector IndexSetting up environmentsGet connection info and data schemaFilteringSimilarity search with scoreDeleting your data |
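If you prefer to configure the connection from Python rather than the shell, the same environment variables can be set before constructing the store. A minimal sketch; the values are placeholders you must fill in:

```python
import os

# Equivalent to the `export` commands shown earlier; MyScaleSettings
# reads any variable prefixed with MYSCALE_ (case-insensitive).
os.environ["MYSCALE_HOST"] = "<your-endpoints-url>"
os.environ["MYSCALE_PORT"] = "<your-endpoints-port>"
os.environ["MYSCALE_USERNAME"] = "<your-username>"
os.environ["MYSCALE_PASSWORD"] = "<your-password>"
```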
647 | https://python.langchain.com/docs/integrations/vectorstores/neo4jvector | ComponentsVector storesNeo4j Vector IndexOn this pageNeo4j Vector IndexNeo4j is an open-source graph database with integrated support for vector similarity searchIt supports:approximate nearest neighbor searchEuclidean similarity and cosine similarityHybrid search combining vector and keyword searchesThis notebook shows how to use the Neo4j vector index (Neo4jVector).See the installation instruction.# Pip install necessary packagepip install neo4jpip install openaipip install tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ········from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Neo4jVectorfrom langchain.document_loaders import TextLoaderfrom langchain.docstore.document import Documentloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()# Neo4jVector requires the Neo4j database credentialsurl = "bolt://localhost:7687"username = "neo4j"password = "pleaseletmein"# You can also use environment variables instead of directly passing named parameters#os.environ["NEO4J_URI"] = "bolt://localhost:7687"#os.environ["NEO4J_USERNAME"] = "neo4j"#os.environ["NEO4J_PASSWORD"] = "pleaseletmein"Similarity Search with Cosine Distance (Default)# The Neo4jVector Module will connect to Neo4j and create a vector index if needed.db = Neo4jVector.from_documents( docs, OpenAIEmbeddings(), url=url, username=username, password=password) /home/tomaz/neo4j/langchain/libs/langchain/langchain/vectorstores/neo4j_vector.py:165: ExperimentalWarning: The configuration may change in the future. self._driver.verify_connectivity()query = "What did the president say about Ketanji Brown Jackson"docs_with_score = db.similarity_search_with_score(query, k=2)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.9099836349487305 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.9099686145782471 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. 
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. --------------------------------------------------------------------------------Working with vectorstoreAbove, we created a vectorstore from scratch. However, often we want to work with an existing vectorstore.
In order to do that, we can initialize it directly.index_name = "vector" # default index namestore = Neo4jVector.from_existing_index( OpenAIEmbeddings(), url=url, username=username, password=password, index_name=index_name,) /home/tomaz/neo4j/langchain/libs/langchain/langchain/vectorstores/neo4j_vector.py:165: ExperimentalWarning: The configuration may change in the future. self._driver.verify_connectivity()We can also initialize a vectorstore from an existing graph using the from_existing_graph method. This method pulls relevant text information from the database, and calculates and stores the text embeddings back to the database.# First we create sample data in graphstore.query( "CREATE (p:Person {name: 'Tomaz', location:'Slovenia', hobby:'Bicycle'})") []# Now we initialize from existing graphexisting_graph = Neo4jVector.from_existing_graph( embedding=OpenAIEmbeddings(), url=url, username=username, password=password, index_name="person_index", node_label="Person", text_node_properties=["name", "location"], embedding_node_property="embedding", )result = existing_graph.similarity_search("Slovenia", k = 1) /home/tomaz/neo4j/langchain/libs/langchain/langchain/vectorstores/neo4j_vector.py:165: ExperimentalWarning: The configuration may change in the future. self._driver.verify_connectivity()result[0] Document(page_content='\nname: Tomaz\nlocation: Slovenia', metadata={'hobby': 'Bicycle'})Add documentsWe can add documents to the existing vectorstore.store.add_documents([Document(page_content="foo")]) ['187fc53a-5dde-11ee-ad78-1f6b05bf8513']docs_with_score = store.similarity_search_with_score("foo")docs_with_score[0] (Document(page_content='foo', metadata={}), 1.0)Hybrid search (vector + keyword)Neo4j integrates both vector and keyword indexes, which allows you to use a hybrid search approach.# The Neo4jVector Module will connect to Neo4j and create vector and keyword indices if needed.hybrid_db = Neo4jVector.from_documents( docs, OpenAIEmbeddings(), url=url, username=username, password=password, search_type="hybrid") /home/tomaz/neo4j/langchain/libs/langchain/langchain/vectorstores/neo4j_vector.py:165: ExperimentalWarning: The configuration may change in the future. self._driver.verify_connectivity()To load the hybrid search from existing indexes, you have to provide both the vector and keyword indices.index_name = "vector" # default index namekeyword_index_name = "keyword" #default keyword index namestore = Neo4jVector.from_existing_index( OpenAIEmbeddings(), url=url, username=username, password=password, index_name=index_name, keyword_index_name=keyword_index_name, search_type="hybrid") /home/tomaz/neo4j/langchain/libs/langchain/langchain/vectorstores/neo4j_vector.py:165: ExperimentalWarning: The configuration may change in the future. self._driver.verify_connectivity()Retriever optionsThis section shows how to use Neo4jVector as a retriever.retriever = store.as_retriever()retriever.get_relevant_documents(query)[0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. 
\n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt'})Question Answering with SourcesThis section goes over how to do question-answering with sources over an Index. It does this by using the RetrievalQAWithSourcesChain, which does the lookup of the documents from an Index. from langchain.chains import RetrievalQAWithSourcesChainfrom langchain.chat_models import ChatOpenAIchain = RetrievalQAWithSourcesChain.from_chain_type( ChatOpenAI(temperature=0), chain_type="stuff", retriever=retriever)chain( {"question": "What did the president say about Justice Breyer"}, return_only_outputs=True,) {'answer': "The president honored Justice Stephen Breyer, who is retiring from the United States Supreme Court. He thanked him for his service and mentioned that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to continue Justice Breyer's legacy of excellence. \n", 'sources': '../../modules/state_of_the_union.txt'}PreviousMyScaleNextNucliaDBSimilarity Search with Cosine Distance (Default)Working with vectorstoreAdd documentsHybrid search (vector + keyword)Retriever optionsQuestion Answering with Sources |
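The retriever shown above can also be narrowed to a fixed number of chunks. A minimal sketch using search_kwargs, a standard as_retriever option in LangChain:

```python
# Limit retrieval to the top-2 most similar chunks
retriever = store.as_retriever(search_kwargs={"k": 2})
docs = retriever.get_relevant_documents(
    "What did the president say about Justice Breyer"
)
```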
648 | https://python.langchain.com/docs/integrations/vectorstores/nucliadb | ComponentsVector storesNucliaDBOn this pageNucliaDBYou can use a local NucliaDB instance or use Nuclia Cloud.When using a local instance, you need a Nuclia Understanding API key, so your texts are properly vectorized and indexed. You can get a key by creating a free account at https://nuclia.cloud, and then create a NUA key.#!pip install langchain nucliaUsage with nuclia.cloudfrom langchain.vectorstores.nucliadb import NucliaDBAPI_KEY = "YOUR_API_KEY"ndb = NucliaDB(knowledge_box="YOUR_KB_ID", local=False, api_key=API_KEY)Usage with a local instanceNote: By default, the backend is set to http://localhost:8080.from langchain.vectorstores.nucliadb import NucliaDBndb = NucliaDB(knowledge_box="YOUR_KB_ID", local=True, backend="http://my-local-server")Add and delete texts to your Knowledge Boxids = ndb.add_texts(["This is a new test", "This is a second test"])ndb.delete(ids=ids)Search in your Knowledge Boxresults = ndb.similarity_search("Who was inspired by Ada Lovelace?")print(results[0].page_content)PreviousNeo4j Vector IndexNextOpenSearchUsage with nuclia.cloudUsage with a local instanceAdd and delete texts to your Knowledge BoxSearch in your Knowledge Box |
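Beyond direct similarity search, the store should plug into retrieval chains like the other vector stores in this section. A minimal sketch, assuming NucliaDB inherits the standard VectorStore as_retriever interface:

```python
# Hedged sketch: treat the Knowledge Box as a LangChain retriever.
retriever = ndb.as_retriever()
docs = retriever.get_relevant_documents("Who was inspired by Ada Lovelace?")
for doc in docs:
    print(doc.page_content)
```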
649 | https://python.langchain.com/docs/integrations/vectorstores/opensearch | ComponentsVector storesOpenSearchOn this pageOpenSearchOpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.This notebook shows how to use functionality related to the OpenSearch database.To run, you should have an OpenSearch instance up and running: see here for an easy Docker installation.By default, similarity_search performs an approximate k-NN search, using one of several algorithms (lucene, nmslib, faiss) recommended for
large datasets. To perform a brute-force search, use the alternative search methods known as Script Scoring and Painless Scripting.
Check this for more details.InstallationInstall the Python client.pip install opensearch-pyWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import OpenSearchVectorSearchfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()similarity_search using Approximate k-NNsimilarity_search using Approximate k-NN Search with Custom Parametersdocsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="http://localhost:9200")# If using the default Docker installation, use this instantiation instead:# docsearch = OpenSearchVectorSearch.from_documents(# docs,# embeddings,# opensearch_url="https://localhost:9200",# http_auth=("admin", "admin"),# use_ssl = False,# verify_certs = False,# ssl_assert_hostname = False,# ssl_show_warn = False,# )query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query, k=10)print(docs[0].page_content)docsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="http://localhost:9200", engine="faiss", space_type="innerproduct", ef_construction=256, m=48,)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)similarity_search using Script Scoringsimilarity_search using Script Scoring with Custom Parametersdocsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search( "What did the president say about Ketanji Brown Jackson", k=1, search_type="script_scoring",)print(docs[0].page_content)similarity_search using Painless Scriptingsimilarity_search using Painless Scripting with Custom Parametersdocsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)filter = {"bool": {"filter": {"term": {"text": "smuggling"}}}}query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search( "What did the president say about Ketanji Brown Jackson", search_type="painless_scripting", space_type="cosineSimilarity", pre_filter=filter,)print(docs[0].page_content)Maximum marginal relevance search (MMR)If you’d like to look up some similar documents, but you’d also like to receive diverse results, MMR is a method you should consider. 
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10, lambda_param=0.5)Using a preexisting OpenSearch instanceIt's also possible to use a preexisting OpenSearch instance with documents that already have vectors present.# this is just an example, you would need to change these values to point to another opensearch instancedocsearch = OpenSearchVectorSearch( index_name="index-*", embedding_function=embeddings, opensearch_url="http://localhost:9200",)# you can specify custom field names to match the fields you're using to store your embedding, document text value, and metadatadocs = docsearch.similarity_search( "Who was asking about getting lunch today?", search_type="script_scoring", space_type="cosinesimil", vector_field="message_embedding", text_field="message", metadata_field="message_metadata",)Using AOSS (Amazon OpenSearch Service Serverless)# This is just an example to show how to use AOSS with faiss engine and efficient_filter, you need to set proper values.service = 'aoss' # must set the service as 'aoss'region = 'us-east-2'credentials = boto3.Session(aws_access_key_id='xxxxxx',aws_secret_access_key='xxxxx').get_credentials()awsauth = AWS4Auth('xxxxx', 'xxxxxx', region,service, session_token=credentials.token)docsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="host url", http_auth=awsauth, timeout = 300, use_ssl = True, verify_certs = True, connection_class = RequestsHttpConnection, index_name="test-index-using-aoss", engine="faiss",)docs = docsearch.similarity_search( "What is feature selection", efficient_filter=filter, k=200,)Using AOS (Amazon OpenSearch Service)# This is just an example to show how to use AOS , you need to set proper values.service = 'es' # must set the service as 'es'region = 'us-east-2'credentials = boto3.Session(aws_access_key_id='xxxxxx',aws_secret_access_key='xxxxx').get_credentials()awsauth = AWS4Auth('xxxxx', 'xxxxxx', region,service, session_token=credentials.token)docsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="host url", http_auth=awsauth, timeout = 300, use_ssl = True, verify_certs = True, connection_class = RequestsHttpConnection, index_name="test-index",)docs = docsearch.similarity_search( "What is feature selection", k=200,)PreviousNucliaDBNextPostgres EmbeddingInstallationsimilarity_search using Approximate k-NNsimilarity_search using Script Scoringsimilarity_search using Painless ScriptingMaximum marginal relevance search (MMR)Using a preexisting OpenSearch instanceUsing AOSS (Amazon OpenSearch Service Serverless)Using AOS (Amazon OpenSearch Service) |
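The AOSS and AOS snippets above elide their imports and the efficient_filter definition. A minimal setup sketch under stated assumptions: AWS4Auth comes from the requests-aws4auth package, RequestsHttpConnection from opensearch-py, and the filter shown is just an illustrative clause, not a required shape:

```python
import boto3
from requests_aws4auth import AWS4Auth
from opensearchpy import RequestsHttpConnection

service = "aoss"  # use "es" for Amazon OpenSearch Service (AOS)
region = "us-east-2"

# Resolve credentials from the default boto3 chain instead of
# hard-coding keys as in the snippets above.
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    region,
    service,
    session_token=credentials.token,
)

# Illustrative filter for the efficient_filter parameter
filter = {"bool": {"filter": {"term": {"text": "smuggling"}}}}
```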
650 | https://python.langchain.com/docs/integrations/vectorstores/pgembedding | ComponentsVector storesPostgres EmbeddingOn this pagePostgres EmbeddingPostgres Embedding is an open-source vector similarity search for Postgres that uses Hierarchical Navigable Small Worlds (HNSW) for approximate nearest neighbor search.It supports:exact and approximate nearest neighbor search using HNSWL2 distanceThis notebook shows how to use the Postgres vector database (PGEmbedding).The PGEmbedding integration creates the pg_embedding extension for you, or you can run the following Postgres query to add it yourself:CREATE EXTENSION embedding;# Pip install necessary packagepip install openaipip install psycopg2-binarypip install tiktokenAdd the OpenAI API Key to the environment variables to use OpenAIEmbeddings.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key:········## Loading Environment Variablesfrom typing import List, Tuplefrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import PGEmbeddingfrom langchain.document_loaders import TextLoaderfrom langchain.docstore.document import Documentos.environ["DATABASE_URL"] = getpass.getpass("Database Url:") Database Url:········loader = TextLoader("state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()connection_string = os.environ.get("DATABASE_URL")collection_name = "state_of_the_union"db = PGEmbedding.from_documents( embedding=embeddings, documents=docs, collection_name=collection_name, connection_string=connection_string,)query = "What did the president say about Ketanji Brown Jackson"docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print("-" * 80)Working with vectorstore in PostgresUploading a vectorstore in PGdb = PGEmbedding.from_documents( embedding=embeddings, documents=docs, collection_name=collection_name, connection_string=connection_string, pre_delete_collection=False,)Create HNSW IndexBy default, the extension performs a sequential scan search, with 100% recall. You might consider creating an HNSW index for approximate nearest neighbor (ANN) search to speed up similarity_search_with_score execution time. To create the HNSW index on your vector column, use the create_hnsw_index function:PGEmbedding.create_hnsw_index( max_elements=10000, dims=1536, m=8, ef_construction=16, ef_search=16)The function above is equivalent to running the below SQL query:CREATE INDEX ON vectors USING hnsw(vec) WITH (maxelements=10000, dims=1536, m=8, efconstruction=16, efsearch=16);The HNSW index options used in the statement above include:maxelements: Defines the maximum number of elements indexed. This is a required parameter. The example shown above has a value of 10000. A real-world example would have a much larger value, such as 1000000. An "element" refers to a data point (a vector) in the dataset, which is represented as a node in the HNSW graph. Typically, you would set this option to a value able to accommodate the number of rows in your dataset.dims: Defines the number of dimensions in your vector data. This is a required parameter. If you are storing data generated using OpenAI's text-embedding-ada-002 model, which produces 1536-dimensional vectors, you would define a value of 1536, as in the example above.m: Defines the maximum number of bi-directional links (also referred to as "edges") created for each node during graph construction.
The following additional index options are supported:efconstruction: Defines the number of nearest neighbors considered during index construction. The default value is 32.efsearch: Defines the number of nearest neighbors considered during index search. The default value is 32.
For information about how you can configure these options to influence the HNSW algorithm, refer to Tuning the HNSW algorithm.Retrieving a vectorstore in PGstore = PGEmbedding( connection_string=connection_string, embedding_function=embeddings, collection_name=collection_name,)retriever = store.as_retriever()retriever VectorStoreRetriever(vectorstore=<langchain.vectorstores.pghnsw.HNSWVectoreStore object at 0x121d3c8b0>, search_type='similarity', search_kwargs={})db1 = PGEmbedding.from_existing_index( embedding=embeddings, collection_name=collection_name, pre_delete_collection=False, connection_string=connection_string,)query = "What did the president say about Ketanji Brown Jackson"docs_with_score: List[Tuple[Document, float]] = db1.similarity_search_with_score(query)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print("-" * 80)PreviousOpenSearchNextPGVectorWorking with vectorstore in PostgresUploading a vectorstore in PGCreate HNSW IndexRetrieving a vectorstore in PG |
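Tying the tuning notes above together, here is a hedged example of creating an HNSW index sized for a larger corpus. The parameter names follow the create_hnsw_index function shown earlier; the specific numbers are illustrative, not recommendations:

```python
# Sized for roughly one million 1536-dimensional vectors
PGEmbedding.create_hnsw_index(
    max_elements=1000000,  # upper bound on indexed vectors
    dims=1536,             # matches text-embedding-ada-002 output
    m=16,                  # bi-directional links per node
    ef_construction=64,    # neighbors considered at build time
    ef_search=64,          # neighbors considered at query time
)
```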
651 | https://python.langchain.com/docs/integrations/vectorstores/pgvector | ComponentsVector storesPGVectorOn this pagePGVectorPGVector is an open-source vector similarity search for PostgresIt supports:exact and approximate nearest neighbor searchL2 distance, inner product, and cosine distanceThis notebook shows how to use the Postgres vector database (PGVector).See the installation instructions.# Pip install necessary packagepip install pgvectorpip install openaipip install psycopg2-binarypip install tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")## Loading Environment Variablesfrom typing import List, Tuplefrom dotenv import load_dotenvload_dotenv() Falsefrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores.pgvector import PGVectorfrom langchain.document_loaders import TextLoaderfrom langchain.docstore.document import Documentloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()# PGVector needs the connection string to the database.CONNECTION_STRING = "postgresql+psycopg2://harrisonchase@localhost:5432/test3"# # Alternatively, you can create it from environment variables.# import os# CONNECTION_STRING = PGVector.connection_string_from_db_params(# driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),# host=os.environ.get("PGVECTOR_HOST", "localhost"),# port=int(os.environ.get("PGVECTOR_PORT", "5432")),# database=os.environ.get("PGVECTOR_DATABASE", "postgres"),# user=os.environ.get("PGVECTOR_USER", "postgres"),# password=os.environ.get("PGVECTOR_PASSWORD", "postgres"),# )Similarity Search with Euclidean Distance (Default)# The PGVector Module will try to create a table with the name of the collection.# So, make sure that the collection name is unique and the user has the permission to create a table.COLLECTION_NAME = "state_of_the_union_test"db = PGVector.from_documents( embedding=embeddings, documents=docs, collection_name=COLLECTION_NAME, connection_string=CONNECTION_STRING,)query = "What did the president say about Ketanji Brown Jackson"docs_with_score = db.similarity_search_with_score(query)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.18456886638850434 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. 
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.21742627672631343 A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.22641793174529334 And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.22670040608054465 Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. That ends on my watch. Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. Let’s pass the Paycheck Fairness Act and paid leave. Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges. 
--------------------------------------------------------------------------------Maximal Marginal Relevance Search (MMR)Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.docs_with_score = db.max_marginal_relevance_search_with_score(query)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.18453882564037527 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.23523731441720075 We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.2448441215698569 One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. When they came home, many of the world’s fittest and best trained warriors were never the same. Headaches. Numbness. Dizziness. A cancer that would put them in a flag-draped coffin. I know. One of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can. Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. Stationed near Baghdad, just yards from burn pits the size of football fields. Heath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. 
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.2513994424701056 And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. These steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. But I want you to know that we are going to be okay. When the history of this era is written Putin’s war on Ukraine will have left Russia weaker and the rest of the world stronger. While it shouldn’t have taken something so terrible for people around the world to see what’s at stake now everyone sees it clearly. --------------------------------------------------------------------------------Working with vectorstoreAbove, we created a vectorstore from scratch. However, often we want to work with an existing vectorstore.
In order to do that, we can initialize it directly.store = PGVector( collection_name=COLLECTION_NAME, connection_string=CONNECTION_STRING, embedding_function=embeddings,)Add documentsWe can add documents to the existing vectorstore.store.add_documents([Document(page_content="foo")]) ['048c2e14-1cf3-11ee-8777-e65801318980']docs_with_score = db.similarity_search_with_score("foo")docs_with_score[0] (Document(page_content='foo', metadata={}), 3.3203430005457335e-09)docs_with_score[1] (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.2404395365581814)Overriding a vectorstoreIf you have an existing collection, you override it by doing from_documents and setting pre_delete_collection = Truedb = PGVector.from_documents( documents=docs, embedding=embeddings, collection_name=COLLECTION_NAME, connection_string=CONNECTION_STRING, pre_delete_collection=True,)docs_with_score = db.similarity_search_with_score("foo")docs_with_score[0] (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.2404115088144465)Using a VectorStore as a Retrieverretriever = store.as_retriever()print(retriever) tags=None metadata=None vectorstore=<langchain.vectorstores.pgvector.PGVector object at 0x29f94f880> search_type='similarity' search_kwargs={}PreviousPostgres EmbeddingNextPineconeSimilarity Search with Euclidean Distance (Default)Maximal Marginal Relevance Search (MMR)Working with vectorstoreAdd documentsOverriding a vectorstoreUsing a VectorStore as a Retriever |
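The retriever above can be queried directly; get_relevant_documents is the standard LangChain retriever entry point:

```python
# Retrieve the most relevant chunks through the retriever interface
docs = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
print(docs[0].page_content)
```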
652 | https://python.langchain.com/docs/integrations/vectorstores/pinecone | ComponentsVector storesPineconeOn this pagePineconePinecone is a vector database with broad functionality.This notebook shows how to use functionality related to the Pinecone vector database.To use Pinecone, you must have an API key.
Here are the installation instructions.pip install pinecone-client openai tiktoken langchainimport osimport getpassos.environ["PINECONE_API_KEY"] = getpass.getpass("Pinecone API Key:")os.environ["PINECONE_ENV"] = getpass.getpass("Pinecone Environment:")We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Pineconefrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()import pinecone# initialize pineconepinecone.init( api_key=os.getenv("PINECONE_API_KEY"), # find at app.pinecone.io environment=os.getenv("PINECONE_ENV"), # next to api key in console)index_name = "langchain-demo"# First, check if our index already exists. If it doesn't, we create itif index_name not in pinecone.list_indexes(): # we create a new index pinecone.create_index( name=index_name, metric='cosine', dimension=1536 )# The OpenAI embedding model `text-embedding-ada-002` uses 1536 dimensionsdocsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)# if you already have an index, you can load it like this# docsearch = Pinecone.from_existing_index(index_name, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)Adding More Text to an Existing IndexMore text can be embedded and upserted to an existing Pinecone index using the add_texts functionindex = pinecone.Index("langchain-demo")vectorstore = Pinecone(index, embeddings.embed_query, "text")vectorstore.add_texts(["More text!"])Maximal Marginal Relevance SearchesIn addition to using similarity search in the retriever object, you can also use MMR as a retriever.retriever = docsearch.as_retriever(search_type="mmr")matched_docs = retriever.get_relevant_documents(query)for i, d in enumerate(matched_docs): print(f"\n## Document {i}\n") print(d.page_content)Or use max_marginal_relevance_search directly:found_docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n")PreviousPGVectorNextQdrantAdding More Text to an Existing IndexMaximal Marginal Relevance Searches |
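Pinecone can also return relevance scores alongside documents. A minimal sketch using the standard similarity_search_with_score API:

```python
# Retrieve the top-3 chunks together with their similarity scores
docs_and_scores = docsearch.similarity_search_with_score(query, k=3)
for doc, score in docs_and_scores:
    print(score, doc.page_content[:80])
```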
653 | https://python.langchain.com/docs/integrations/vectorstores/qdrant | ComponentsVector storesQdrantOn this pageQdrantQdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. This makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.This notebook shows how to use functionality related to the Qdrant vector database. There are various modes of how to run Qdrant, and depending on the chosen one, there will be some subtle differences. The options include:Local mode, no server requiredOn-premise server deploymentQdrant CloudSee the installation instructions.pip install qdrant-clientWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ········from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Qdrantfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Connecting to Qdrant from LangChainLocal modeThe Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging, or if you plan to store just a small number of vectors. The embeddings might be fully kept in memory or persisted on disk.In-memoryFor some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook.qdrant = Qdrant.from_documents( docs, embeddings, location=":memory:", # Local mode with in-memory storage only collection_name="my_documents",)On-disk storageLocal mode, without using the Qdrant server, may also store your vectors on disk so they're persisted between runs.qdrant = Qdrant.from_documents( docs, embeddings, path="/tmp/local_qdrant", collection_name="my_documents",)On-premise server deploymentNo matter if you choose to launch Qdrant locally with a Docker container, or select a Kubernetes deployment with the official Helm chart, the way you're going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service.url = "<---qdrant url here --->"qdrant = Qdrant.from_documents( docs, embeddings, url=url, prefer_grpc=True, collection_name="my_documents",)Qdrant CloudIf you prefer not to keep yourself busy with managing the infrastructure, you can choose to set up a fully-managed Qdrant cluster on Qdrant Cloud. There is a free-forever 1GB cluster included for trying it out. The main difference with using a managed version of Qdrant is that you'll need to provide an API key to secure your deployment from being accessed publicly.url = "<---qdrant cloud cluster url here --->"api_key = "<---api key here--->"qdrant = Qdrant.from_documents( docs, embeddings, url=url, prefer_grpc=True, api_key=api_key, collection_name="my_documents",)Recreating the collectionBoth Qdrant.from_texts and Qdrant.from_documents methods are great to start using Qdrant with Langchain. 
In previous versions, the collection was recreated every time you called either of them. That behaviour has changed. Currently, the collection is going to be reused if it already exists. Setting force_recreate to True allows you to remove the old collection and start from scratch.url = "<---qdrant url here --->"qdrant = Qdrant.from_documents( docs, embeddings, url=url, prefer_grpc=True, collection_name="my_documents", force_recreate=True,)Similarity searchThe simplest scenario for using the Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the embedding_function and used to find similar documents in the Qdrant collection.query = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.similarity_search(query)print(found_docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search with scoreSometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is.
The returned distance score is cosine distance. Therefore, a lower score is better.query = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.similarity_search_with_score(query)document, score = found_docs[0]print(document.page_content)print(f"\nScore: {score}") Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. Score: 0.8153784913324512Metadata filteringQdrant has an extensive filtering system with rich type support. It is also possible to use the filters in Langchain, by passing an additional param to both the similarity_search_with_score and similarity_search methods.from qdrant_client.http import models as restquery = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.similarity_search_with_score(query, filter=rest.Filter(...))Maximum marginal relevance search (MMR)If you'd like to look up some similar documents, but you'd also like to receive diverse results, MMR is a method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.query = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n") 1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. 2. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. 
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. Qdrant as a RetrieverQdrant, like all the other vector stores, can be used as a LangChain Retriever, using cosine similarity. retriever = qdrant.as_retriever()retriever VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='similarity', search_kwargs={})You can also specify MMR as the search strategy instead of similarity.retriever = qdrant.as_retriever(search_type="mmr")retriever VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='mmr', search_kwargs={})query = "What did the president say about Ketanji Brown Jackson"retriever.get_relevant_documents(query)[0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})Customizing QdrantThere are some options to use an existing Qdrant collection within your Langchain application. In such cases you may need to define how to map a Qdrant point into a LangChain Document.Named vectorsQdrant supports multiple vectors per point via named vectors. Langchain requires just a single embedding per document and, by default, uses a single vector. However, if you work with a collection created externally or want to have the named vector used, you can configure it by providing its name.Qdrant.from_documents( docs, embeddings, location=":memory:", collection_name="my_documents_2", vector_name="custom_vector",)As a Langchain user, you won't see any difference whether you use named vectors or not. The Qdrant integration will handle the conversion under the hood.MetadataQdrant stores your vector embeddings along with the optional JSON-like payload. Payloads are optional, but since LangChain assumes the embeddings are generated from the documents, we keep the context data, so you can extract the original texts as well.By default, your document is going to be stored in the following payload structure:{ "page_content": "Lorem ipsum dolor sit amet", "metadata": { "foo": "bar" }}You can, however, decide to use different keys for the page content and metadata. 
That's useful if you already have a collection that you'd like to reuse.Qdrant.from_documents( docs, embeddings, location=":memory:", collection_name="my_documents_2", content_payload_key="my_page_content_key", metadata_payload_key="my_meta",) <langchain.vectorstores.qdrant.Qdrant at 0x7fc4e2baa230>PreviousPineconeNextRedisConnecting to Qdrant from LangChainLocal modeOn-premise server deploymentQdrant CloudRecreating the collectionSimilarity searchSimilarity search with scoreMetadata filteringMaximum marginal relevance search (MMR)Qdrant as a RetrieverCustomizing QdrantNamed vectorsMetadata |
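The rest.Filter(...) placeholder in the metadata filtering section above accepts any Qdrant filter object. Below is a minimal sketch of one concrete filter, assuming the default payload layout described under Metadata (document metadata nested beneath the "metadata" payload key); the source value is purely illustrative.
from qdrant_client.http import models as rest

# Hypothetical filter: keep only points whose payload has
# metadata.source == "../../../state_of_the_union.txt". With the default
# payload layout, Document.metadata lives under the "metadata" key, so
# nested fields are addressed with dot notation.
source_filter = rest.Filter(
    must=[
        rest.FieldCondition(
            key="metadata.source",
            match=rest.MatchValue(value="../../../state_of_the_union.txt"),
        )
    ]
)
found_docs = qdrant.similarity_search_with_score(query, filter=source_filter)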
654 | https://python.langchain.com/docs/integrations/vectorstores/redis | ComponentsVector storesRedisOn this pageRedisRedis vector database introduction and LangChain integration guide.What is Redis?Most developers from a web services background are probably familiar with Redis. At its core, Redis is an open-source key-value store that can be used as a cache, message broker, and database. Developers choose Redis because it is fast, has a large ecosystem of client libraries, and has been deployed by major enterprises for years.On top of these traditional use cases, Redis provides additional capabilities like the Search and Query capability that allows users to create secondary index structures within Redis. This allows Redis to be a Vector Database, at the speed of a cache. Redis as a Vector DatabaseRedis uses compressed, inverted indexes for fast indexing with a low memory footprint. It also supports a number of advanced features such as:
Indexing of multiple fields in Redis hashes and JSON
Vector similarity search (with HNSW (ANN) or FLAT (KNN))
Vector Range Search (e.g. find all vectors within a radius of a query vector)
Incremental indexing without performance loss
Document ranking (using tf-idf, with optional user-provided weights)
Field weighting
Complex boolean queries with AND, OR, and NOT operators
Prefix matching, fuzzy matching, and exact-phrase queries
Support for double-metaphone phonetic matching
Auto-complete suggestions (with fuzzy prefix suggestions)
Stemming-based query expansion in many languages (using Snowball)
Support for Chinese-language tokenization and querying (using Friso)
Numeric filters and ranges
Geospatial searches using Redis geospatial indexing
A powerful aggregations engine
Support for all utf-8 encoded text
Retrieve full documents, selected fields, or only the document IDs
Sorting results (for example, by creation date)
ClientsSince Redis is much more than just a vector database, there are often use cases that call for a Redis client beyond the LangChain integration. You can use any standard Redis client library to run Search and Query commands, but it's easiest to use a library that wraps the Search and Query API. Below are a few examples, but you can find more client libraries here.
Project | Language | License | Author
jedis | Java | MIT | Redis
redisvl | Python | MIT | Redis
redis-py | Python | MIT | Redis
node-redis | Node.js | MIT | Redis
nredisstack | .NET | MIT | Redis
Deployment OptionsThere are many ways to deploy Redis with RediSearch. 
The easiest way to get started is to use Docker, but there are many potential options for deployment, such as:
Redis Cloud
Docker (Redis Stack)
Cloud marketplaces: AWS Marketplace, Google Marketplace, or Azure Marketplace
On-premise: Redis Enterprise Software
Kubernetes: Redis Enterprise Software on Kubernetes
ExamplesMany examples can be found in the Redis AI team's GitHub
Awesome Redis AI Resources - List of examples of using Redis in AI workloads
Azure OpenAI Embeddings Q&A - OpenAI and Redis as a Q&A service on Azure.
ArXiv Paper Search - Semantic search over arXiv scholarly papers
Vector Search on Azure - Vector search on Azure using Azure Cache for Redis and Azure OpenAI
More ResourcesFor more information on how to use Redis as a vector database, check out the following resources:
RedisVL Documentation - Documentation for the Redis Vector Library Client
Redis Vector Similarity Docs - Redis official docs for Vector Search.
Redis-py Search Docs - Documentation for the redis-py client library
Vector Similarity Search: From Basics to Production - Introductory blog post to VSS and Redis as a VectorDB.
Install Redis Python ClientRedis-py is the client officially supported by Redis. The recently released RedisVL client is purpose-built for vector database use cases. Both can be installed with pip.pip install redis redisvl openai tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()Sample DataFirst we will describe some sample data so that the various attributes of the Redis vector store can be demonstrated.metadata = [ { "user": "john", "age": 18, "job": "engineer", "credit_score": "high", }, { "user": "derrick", "age": 45, "job": "doctor", "credit_score": "low", }, { "user": "nancy", "age": 94, "job": "doctor", "credit_score": "high", }, { "user": "tyler", "age": 100, "job": "engineer", "credit_score": "high", }, { "user": "joe", "age": 35, "job": "dentist", "credit_score": "medium", },]texts = ["foo", "foo", "foo", "bar", "bar"]Initializing RedisTo locally deploy Redis, run:docker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latestIf things are running correctly you should see a nice Redis UI at http://localhost:8001. See the Deployment Options section above for other ways to deploy.The Redis VectorStore instance can be initialized in a number of ways. There are multiple class methods that can be used to initialize a Redis VectorStore instance.
Redis.__init__ - Initialize directly
Redis.from_documents - Initialize from a list of Langchain.docstore.Document objects
Redis.from_texts - Initialize from a list of texts (optionally with metadata)
Redis.from_texts_return_keys - Initialize from a list of texts (optionally with metadata) and return the keys
Redis.from_existing_index - Initialize from an existing Redis index
Below we will use the Redis.from_texts method.from langchain.vectorstores.redis import Redisrds = Redis.from_texts( texts, embeddings, metadatas=metadata, redis_url="redis://localhost:6379", index_name="users")rds.index_name 'users'Inspecting the Created IndexOnce the Redis VectorStore object has been constructed, an index will have been created in Redis if it did not already exist. The index can be inspected with both the rvl and the redis-cli command line tools. 
If you installed redisvl above, you can use the rvl command line tool to inspect the index.# assumes you're running Redis locally (use --host, --port, --password, --username, to change this)rvl index listall 16:58:26 [RedisVL] INFO Indices: 16:58:26 [RedisVL] INFO 1. usersThe Redis VectorStore implementation will attempt to generate an index schema (fields for filtering) for any metadata passed through the from_texts, from_texts_return_keys, and from_documents methods. This way, whatever metadata is passed will be indexed into the Redis search index, allowing
for filtering on those fields.Below we show what fields were created from the metadata we defined above.rvl index info -i users
Index Information:
╭──────────────┬────────────────┬───────────────┬─────────────────┬────────────╮
│ Index Name   │ Storage Type   │ Prefixes      │ Index Options   │ Indexing   │
├──────────────┼────────────────┼───────────────┼─────────────────┼────────────┤
│ users        │ HASH           │ ['doc:users'] │ []              │ 0          │
╰──────────────┴────────────────┴───────────────┴─────────────────┴────────────╯
Index Fields:
╭────────────────┬────────────────┬─────────┬────────────────┬────────────────╮
│ Name           │ Attribute      │ Type    │ Field Option   │ Option Value   │
├────────────────┼────────────────┼─────────┼────────────────┼────────────────┤
│ user           │ user           │ TEXT    │ WEIGHT         │ 1              │
│ job            │ job            │ TEXT    │ WEIGHT         │ 1              │
│ credit_score   │ credit_score   │ TEXT    │ WEIGHT         │ 1              │
│ content        │ content        │ TEXT    │ WEIGHT         │ 1              │
│ age            │ age            │ NUMERIC │                │                │
│ content_vector │ content_vector │ VECTOR  │                │                │
╰────────────────┴────────────────┴─────────┴────────────────┴────────────────╯
rvl stats -i users
Statistics:
╭─────────────────────────────┬─────────────╮
│ Stat Key                    │ Value       │
├─────────────────────────────┼─────────────┤
│ num_docs                    │ 5           │
│ num_terms                   │ 15          │
│ max_doc_id                  │ 5           │
│ num_records                 │ 33          │
│ percent_indexed             │ 1           │
│ hash_indexing_failures      │ 0           │
│ number_of_uses              │ 4           │
│ bytes_per_record_avg        │ 4.60606     │
│ doc_table_size_mb           │ 0.000524521 │
│ inverted_sz_mb              │ 0.000144958 │
│ key_table_size_mb           │ 0.000193596 │
│ offset_bits_per_record_avg  │ 8           │
│ offset_vectors_sz_mb        │ 2.19345e-05 │
│ offsets_per_term_avg        │ 0.69697     │
│ records_per_doc_avg         │ 6.6         │
│ sortable_values_size_mb     │ 0           │
│ total_indexing_time         │ 0.32        │
│ total_inverted_index_blocks │ 16          │
│ vector_index_sz_mb          │ 6.0126      │
╰─────────────────────────────┴─────────────╯
It's important to note that we did not specify that the user, job, credit_score and age fields in the metadata should be indexed; this is because the Redis VectorStore object automatically generates the index schema from the passed metadata. 
For more information on the generation of index fields, see the API documentation.QueryingThere are multiple ways to query the Redis VectorStore implementation based on what use case you have:
similarity_search: Find the most similar vectors to a given vector.
similarity_search_with_score: Find the most similar vectors to a given vector and return the vector distance.
similarity_search_limit_score: Find the most similar vectors to a given vector and limit the results to those within the score_threshold.
similarity_search_with_relevance_scores: Find the most similar vectors to a given vector and return the vector similarities.
max_marginal_relevance_search: Find the most similar vectors to a given vector while also optimizing for diversity.
results = rds.similarity_search("foo")print(results[0].page_content) foo# return metadataresults = rds.similarity_search("foo", k=3)meta = results[1].metadataprint("Key of the document in Redis: ", meta.pop("id"))print("Metadata of the document: ", meta) Key of the document in Redis: doc:users:a70ca43b3a4e4168bae57c78753a200f Metadata of the document: {'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '45'}# with scores (distances)results = rds.similarity_search_with_score("foo", k=5)for result in results: print(f"Content: {result[0].page_content} --- Score: {result[1]}") Content: foo --- Score: 0.0 Content: foo --- Score: 0.0 Content: foo --- Score: 0.0 Content: bar --- Score: 0.1566 Content: bar --- Score: 0.1566# limit the vector distance that can be returnedresults = rds.similarity_search_with_score("foo", k=5, distance_threshold=0.1)for result in results: print(f"Content: {result[0].page_content} --- Score: {result[1]}") Content: foo --- Score: 0.0 Content: foo --- Score: 0.0 Content: foo --- Score: 0.0# with scoresresults = rds.similarity_search_with_relevance_scores("foo", k=5)for result in results: print(f"Content: {result[0].page_content} --- Similarity: {result[1]}") Content: foo --- Similarity: 1.0 Content: foo --- Similarity: 1.0 Content: foo --- Similarity: 1.0 Content: bar --- Similarity: 0.8434 Content: bar --- Similarity: 0.8434# limit scores (similarities have to be over .9)results = rds.similarity_search_with_relevance_scores("foo", k=5, score_threshold=0.9)for result in results: print(f"Content: {result[0].page_content} --- Similarity: {result[1]}") Content: foo --- Similarity: 1.0 Content: foo --- Similarity: 1.0 Content: foo --- Similarity: 1.0# you can also add new documents as followsnew_document = ["baz"]new_metadata = [{ "user": "sam", "age": 50, "job": "janitor", "credit_score": "high"}]# both the document and metadata must be listsrds.add_texts(new_document, new_metadata) ['doc:users:b9c71d62a0a34241a37950b448dafd38']# now query the new documentresults = rds.similarity_search("baz", k=3)print(results[0].metadata) {'id': 'doc:users:b9c71d62a0a34241a37950b448dafd38', 'user': 'sam', 'job': 'janitor', 'credit_score': 'high', 'age': '50'}# use maximal marginal relevance search to diversify resultsresults = rds.max_marginal_relevance_search("foo")# the lambda_mult parameter controls the diversity of the results, the lower the more diverseresults = rds.max_marginal_relevance_search("foo", lambda_mult=0.1)Connect to an Existing IndexIn order to have the same metadata indexed when using the Redis VectorStore, you will need to pass in the same index_schema, either as a path to a yaml file or as a dictionary. 
The following shows how to obtain the schema from an index and connect to an existing index.# write the schema to a yaml filerds.write_schema("redis_schema.yaml")The schema file for this example should look something like:
numeric:
- name: age
  no_index: false
  sortable: false
text:
- name: user
  no_index: false
  no_stem: false
  sortable: false
  weight: 1
  withsuffixtrie: false
- name: job
  no_index: false
  no_stem: false
  sortable: false
  weight: 1
  withsuffixtrie: false
- name: credit_score
  no_index: false
  no_stem: false
  sortable: false
  weight: 1
  withsuffixtrie: false
- name: content
  no_index: false
  no_stem: false
  sortable: false
  weight: 1
  withsuffixtrie: false
vector:
- algorithm: FLAT
  block_size: 1000
  datatype: FLOAT32
  dims: 1536
  distance_metric: COSINE
  initial_cap: 20000
  name: content_vector
Notice that this includes all possible fields for the schema. You can remove any fields that you don't need.# now we can connect to our existing index as followsnew_rds = Redis.from_existing_index( embeddings, index_name="users", redis_url="redis://localhost:6379", schema="redis_schema.yaml")results = new_rds.similarity_search("foo", k=3)print(results[0].metadata) {'id': 'doc:users:8484c48a032d4c4cbe3cc2ed6845fabb', 'user': 'john', 'job': 'engineer', 'credit_score': 'high', 'age': '18'}# see the schemas are the samenew_rds.schema == rds.schema TrueCustom Metadata IndexingIn some cases, you may want to control what fields the metadata maps to. For example, you may want the credit_score field to be a categorical field instead of a text field (which is the default behavior for all string fields). In this case, you can use the index_schema parameter in each of the initialization methods above to specify the schema for the index. A custom index schema can either be passed as a dictionary or as a path to a yaml file.All arguments in the schema have defaults besides the name, so you can specify only the fields you want to change. All the names correspond to the snake/lowercase versions of the arguments you would use on the command line with redis-cli or in redis-py. For more on the arguments for each field, see the documentation.The example below shows how to specify the schema for the credit_score field as a Tag (categorical) field instead of a text field.
# index_schema.yml
tag:
  - name: credit_score
text:
  - name: user
  - name: job
numeric:
  - name: age
In Python this would look like:index_schema = { "tag": [{"name": "credit_score"}], "text": [{"name": "user"}, {"name": "job"}], "numeric": [{"name": "age"}],}Notice that only the name field needs to be specified. All other fields have defaults.# create a new index with the new schema defined aboveindex_schema = { "tag": [{"name": "credit_score"}], "text": [{"name": "user"}, {"name": "job"}], "numeric": [{"name": "age"}],}rds, keys = Redis.from_texts_return_keys( texts, embeddings, metadatas=metadata, redis_url="redis://localhost:6379", index_name="users_modified", index_schema=index_schema, # pass in the new index schema) `index_schema` does not match generated metadata schema. If you meant to manually override the schema, please ignore this message. index_schema: {'tag': [{'name': 'credit_score'}], 'text': [{'name': 'user'}, {'name': 'job'}], 'numeric': [{'name': 'age'}]} generated_schema: {'text': [{'name': 'user'}, {'name': 'job'}, {'name': 'credit_score'}], 'numeric': [{'name': 'age'}], 'tag': []} The above warning is meant to notify users when they are overriding the default behavior. 
Ignore it if you are intentionally overriding the behavior.Hybrid FilteringWith the Redis Filter Expression language built into LangChain, you can create arbitrarily long chains of hybrid filters
that can be used to filter your search results. The expression language is derived from the RedisVL Expression Syntax
and is designed to be easy to use and understand.The following are the available filter types:
RedisText: Filter by full-text search against metadata fields. Supports exact, fuzzy, and wildcard matching.
RedisNum: Filter by numeric range against metadata fields.
RedisTag: Filter by exact match against string based categorical metadata fields. Multiple tags can be specified like "tag1,tag2,tag3".
The following are examples of utilizing these filters.from langchain.vectorstores.redis import RedisText, RedisNum, RedisTag# exact matchinghas_high_credit = RedisTag("credit_score") == "high"does_not_have_high_credit = RedisTag("credit_score") != "low"# fuzzy matchingjob_starts_with_eng = RedisText("job") % "eng*"job_is_engineer = RedisText("job") == "engineer"job_is_not_engineer = RedisText("job") != "engineer"# numeric filteringage_is_18 = RedisNum("age") == 18age_is_not_18 = RedisNum("age") != 18age_is_greater_than_18 = RedisNum("age") > 18age_is_less_than_18 = RedisNum("age") < 18age_is_greater_than_or_equal_to_18 = RedisNum("age") >= 18age_is_less_than_or_equal_to_18 = RedisNum("age") <= 18The RedisFilter class can be used to simplify the import of these filters as follows.from langchain.vectorstores.redis import RedisFilter# similar examples to the abovehas_high_credit = RedisFilter.tag("credit_score") == "high"does_not_have_high_credit = RedisFilter.num("age") > 8job_starts_with_eng = RedisFilter.text("job") % "eng*"The following are examples of using hybrid filters for search.from langchain.vectorstores.redis import RedisTextis_engineer = RedisText("job") == "engineer"results = rds.similarity_search("foo", k=3, filter=is_engineer)print("Job:", results[0].metadata["job"])print("Engineers in the dataset:", len(results)) Job: engineer Engineers in the dataset: 2# fuzzy matchstarts_with_doc = RedisText("job") % "doc*"results = rds.similarity_search("foo", k=3, filter=starts_with_doc)for result in results: print("Job:", result.metadata["job"])print("Jobs in dataset that start with 'doc':", len(results)) Job: doctor Job: doctor Jobs in dataset that start with 'doc': 2from langchain.vectorstores.redis import RedisNumis_over_18 = RedisNum("age") > 18is_under_99 = RedisNum("age") < 99age_range = is_over_18 & is_under_99results = rds.similarity_search("foo", filter=age_range)for result in results: print("User:", result.metadata["user"], "is", result.metadata["age"]) User: derrick is 45 User: nancy is 94 User: joe is 35# make sure to use parentheses around FilterExpressions# if initializing them while constructing themage_range = (RedisNum("age") > 18) & (RedisNum("age") < 99)results = rds.similarity_search("foo", filter=age_range)for result in results: print("User:", result.metadata["user"], "is", result.metadata["age"]) User: derrick is 45 User: nancy is 94 User: joe is 35Redis as RetrieverHere we go over different options for using the vector store as a retriever.There are three different search methods we can use to do retrieval. 
By default, it will use semantic similarity.query = "foo"results = rds.similarity_search_with_score(query, k=3, return_metadata=True)for result in results: print("Content:", result[0].page_content, " --- Score: ", result[1]) Content: foo --- Score: 0.0 Content: foo --- Score: 0.0 Content: foo --- Score: 0.0retriever = rds.as_retriever(search_type="similarity", search_kwargs={"k": 4})docs = retriever.get_relevant_documents(query)docs [Document(page_content='foo', metadata={'id': 'doc:users_modified:988ecca7574048e396756efc0e79aeca', 'user': 'john', 'job': 'engineer', 'credit_score': 'high', 'age': '18'}), Document(page_content='foo', metadata={'id': 'doc:users_modified:009b1afeb4084cc6bdef858c7a99b48e', 'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '45'}), Document(page_content='foo', metadata={'id': 'doc:users_modified:7087cee9be5b4eca93c30fbdd09a2731', 'user': 'nancy', 'job': 'doctor', 'credit_score': 'high', 'age': '94'}), Document(page_content='bar', metadata={'id': 'doc:users_modified:01ef6caac12b42c28ad870aefe574253', 'user': 'tyler', 'job': 'engineer', 'credit_score': 'high', 'age': '100'})]There is also the similarity_distance_threshold retriever which allows the user to specify the vector distanceretriever = rds.as_retriever(search_type="similarity_distance_threshold", search_kwargs={"k": 4, "distance_threshold": 0.1})docs = retriever.get_relevant_documents(query)docs [Document(page_content='foo', metadata={'id': 'doc:users_modified:988ecca7574048e396756efc0e79aeca', 'user': 'john', 'job': 'engineer', 'credit_score': 'high', 'age': '18'}), Document(page_content='foo', metadata={'id': 'doc:users_modified:009b1afeb4084cc6bdef858c7a99b48e', 'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '45'}), Document(page_content='foo', metadata={'id': 'doc:users_modified:7087cee9be5b4eca93c30fbdd09a2731', 'user': 'nancy', 'job': 'doctor', 'credit_score': 'high', 'age': '94'})]Lastly, the similarity_score_threshold allows the user to define the minimum score for similar documentsretriever = rds.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.9, "k": 10})retriever.get_relevant_documents("foo") [Document(page_content='foo', metadata={'id': 'doc:users_modified:988ecca7574048e396756efc0e79aeca', 'user': 'john', 'job': 'engineer', 'credit_score': 'high', 'age': '18'}), Document(page_content='foo', metadata={'id': 'doc:users_modified:009b1afeb4084cc6bdef858c7a99b48e', 'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '45'}), Document(page_content='foo', metadata={'id': 'doc:users_modified:7087cee9be5b4eca93c30fbdd09a2731', 'user': 'nancy', 'job': 'doctor', 'credit_score': 'high', 'age': '94'})]retriever = rds.as_retriever(search_type="mmr", search_kwargs={"fetch_k": 20, "k": 4, "lambda_mult": 0.1})retriever.get_relevant_documents("foo") [Document(page_content='foo', metadata={'id': 'doc:users:8f6b673b390647809d510112cde01a27', 'user': 'john', 'job': 'engineer', 'credit_score': 'high', 'age': '18'}), Document(page_content='bar', metadata={'id': 'doc:users:93521560735d42328b48c9c6f6418d6a', 'user': 'tyler', 'job': 'engineer', 'credit_score': 'high', 'age': '100'}), Document(page_content='foo', metadata={'id': 'doc:users:125ecd39d07845eabf1a699d44134a5b', 'user': 'nancy', 'job': 'doctor', 'credit_score': 'high', 'age': '94'}), Document(page_content='foo', metadata={'id': 'doc:users:d6200ab3764c466082fde3eaab972a2a', 'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '45'})]Delete keysTo delete 
your entries you have to address them by their keys.Redis.delete(keys, redis_url="redis://localhost:6379") True# delete the indices tooRedis.drop_index(index_name="users", delete_documents=True, redis_url="redis://localhost:6379")Redis.drop_index(index_name="users_modified", delete_documents=True, redis_url="redis://localhost:6379") TrueRedis connection URL examplesValid Redis URL schemes are:
redis:// - Connection to Redis standalone, unencrypted
rediss:// - Connection to Redis standalone, with TLS encryption
redis+sentinel:// - Connection to Redis server via Redis Sentinel, unencrypted
rediss+sentinel:// - Connection to Redis server via Redis Sentinel, both connections with TLS encryption
More information about additional connection parameters can be found in the redis-py documentation at https://redis-py.readthedocs.io/en/stable/connections.html# connection to redis standalone at localhost, db 0, no passwordredis_url = "redis://localhost:6379"# connection to host "redis" port 7379 with db 2 and password "secret" (old style authentication scheme without username / pre 6.x)redis_url = "redis://:secret@redis:7379/2"# connection to host redis on default port with user "joe", pass "secret" using redis version 6+ ACLsredis_url = "redis://joe:secret@redis/0"# connection to sentinel at localhost with default group mymaster and db 0, no passwordredis_url = "redis+sentinel://localhost:26379"# connection to sentinel at host redis with default port 26379 and user "joe" with password "secret" with default group mymaster and db 0redis_url = "redis+sentinel://joe:secret@redis"# connection to sentinel, no auth with sentinel monitoring group "zone-1" and database 2redis_url = "redis+sentinel://redis:26379/zone-1/2"# connection to redis standalone at localhost, db 0, no password but with TLS supportredis_url = "rediss://localhost:6379"# connection to redis sentinel at localhost and default port, db 0, no password# but with TLS support for both Sentinel and Redis serverredis_url = "rediss+sentinel://localhost"PreviousQdrantNextRocksetWhat is Redis?Redis as a Vector DatabaseClientsDeployment OptionsExamplesMore ResourcesInstall Redis Python ClientSample DataInitializing RedisInspecting the Created IndexQueryingConnect to an Existing IndexCustom Metadata IndexingHybrid FilteringRedis as RetrieverRedis connection URL examples |
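Tying the filtering and retriever sections above together, here is a hedged sketch that applies a hybrid filter expression to every retrieval by passing it through search_kwargs; it assumes the populated rds store from earlier and that the retriever forwards the filter keyword to the underlying similarity_search call.
from langchain.vectorstores.redis import RedisFilter

# Hypothetical combined filter: high credit score AND age of at least 18
high_credit_adults = (RedisFilter.tag("credit_score") == "high") & (RedisFilter.num("age") >= 18)

# Assumption: search_kwargs are forwarded to similarity_search, so the
# filter is applied on every call to the retriever
retriever = rds.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 4, "filter": high_credit_adults},
)
for doc in retriever.get_relevant_documents("foo"):
    print(doc.metadata["user"], doc.metadata["credit_score"], doc.metadata["age"])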
655 | https://python.langchain.com/docs/integrations/vectorstores/rockset | ComponentsVector storesRocksetOn this pageRocksetRockset is a real-time search and analytics database built for the cloud. Rockset uses a Converged Index™ with an efficient store for vector embeddings to serve low-latency, high-concurrency search queries at scale. Rockset has full support for metadata filtering and handles real-time ingestion for constantly updating, streaming data.This notebook demonstrates how to use Rockset as a vector store in LangChain. Before getting started, make sure you have a Rockset account and an API key available. Start your free trial today.Setting Up Your EnvironmentLeverage the Rockset console to create a collection with the Write API as your source. In this walkthrough, we create a collection named langchain_demo. Configure the following ingest transformation to mark your embeddings field and take advantage of performance and storage optimizations: (We used OpenAI text-embedding-ada-002 for this example, where #length_of_vector_embedding = 1536)SELECT _input.* EXCEPT(_meta), VECTOR_ENFORCE(_input.description_embedding, #length_of_vector_embedding, 'float') as description_embedding FROM _inputAfter creating your collection, use the console to retrieve an API key. For the purpose of this notebook, we assume you are using the Oregon(us-west-2) region.Install the rockset-python-client to enable LangChain to communicate directly with Rockset.pip install rocksetLangChain TutorialFollow along in your own Python notebook to generate and store vector embeddings in Rockset.
Start using Rockset to search for documents similar to your search queries.1. Define Key Variablesimport osimport rocksetROCKSET_API_KEY = os.environ.get("ROCKSET_API_KEY") # Verify ROCKSET_API_KEY environment variableROCKSET_API_SERVER = rockset.Regions.usw2a1 # Verify Rockset regionrockset_client = rockset.RocksetClient(ROCKSET_API_SERVER, ROCKSET_API_KEY)COLLECTION_NAME='langchain_demo'TEXT_KEY='description'EMBEDDING_KEY='description_embedding'2. Prepare Documentsfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.document_loaders import TextLoaderfrom langchain.vectorstores import Rocksetloader = TextLoader('../../../state_of_the_union.txt')documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)3. Insert Documentsembeddings = OpenAIEmbeddings() # Verify OPENAI_API_KEY environment variabledocsearch = Rockset( client=rockset_client, embeddings=embeddings, collection_name=COLLECTION_NAME, text_key=TEXT_KEY, embedding_key=EMBEDDING_KEY,)ids=docsearch.add_texts( texts=[d.page_content for d in docs], metadatas=[d.metadata for d in docs],)4. Search for Similar Documentsquery = "What did the president say about Ketanji Brown Jackson"output = docsearch.similarity_search_with_relevance_scores( query, 4, Rockset.DistanceFunction.COSINE_SIM)print("output length:", len(output))for d, dist in output: print(dist, d.metadata, d.page_content[:20] + '...')### output length: 4# 0.764990692109871 {'source': '../../../state_of_the_union.txt'} Madam Speaker, Madam...# 0.7485416901622112 {'source': '../../../state_of_the_union.txt'} And I’m taking robus...# 0.7468678973398306 {'source': '../../../state_of_the_union.txt'} And so many families...# 0.7436231261419488 {'source': '../../../state_of_the_union.txt'} Groups of citizens b...5. Search for Similar Documents with Filteringoutput = docsearch.similarity_search_with_relevance_scores( query, 4, Rockset.DistanceFunction.COSINE_SIM, where_str="{} NOT LIKE '%citizens%'".format(TEXT_KEY),)print("output length:", len(output))for d, dist in output: print(dist, d.metadata, d.page_content[:20] + '...')### output length: 4# 0.7651359650263554 {'source': '../../../state_of_the_union.txt'} Madam Speaker, Madam...# 0.7486265516824893 {'source': '../../../state_of_the_union.txt'} And I’m taking robus...# 0.7469625542348115 {'source': '../../../state_of_the_union.txt'} And so many families...# 0.7344177777547739 {'source': '../../../state_of_the_union.txt'} We see the unity amo...6. [Optional] Delete Inserted DocumentsYou must have the unique ID associated with each document to delete them from your collection.
Define IDs when inserting documents with Rockset.add_texts(). Rockset will otherwise generate a unique ID for each document. Regardless, Rockset.add_texts() returns the IDs of inserted documents.To delete these docs, simply use the Rockset.delete_texts() function.docsearch.delete_texts(ids)SummaryIn this tutorial, we successfully created a Rockset collection, inserted documents with OpenAI embeddings, and searched for similar documents with and without metadata filters.Keep an eye on https://rockset.com/ for future updates in this space.PreviousRedisNextScaNNSetting Up Your EnvironmentLangChain Tutorial1. Define Key Variables2. Prepare Documents3. Insert Documents4. Search for Similar Documents5. Search for Similar Documents with Filtering6. Optional Delete Inserted DocumentsSummary |
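As a capstone to the tutorial above, a minimal sketch of using the Rockset-backed store behind a RetrievalQA chain; it assumes the docsearch instance from step 3 and a valid OPENAI_API_KEY, and the chain and LLM classes are the generic LangChain ones rather than anything Rockset-specific.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Question answering over the documents stored in Rockset
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=docsearch.as_retriever(search_kwargs={"k": 4}),
)
print(qa.run("What did the president say about Ketanji Brown Jackson?"))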
656 | https://python.langchain.com/docs/integrations/vectorstores/scann | ComponentsVector storesScaNNOn this pageScaNNScaNN (Scalable Nearest Neighbors) is a method for efficient vector similarity search at scale.ScaNN includes search space pruning and quantization for Maximum Inner Product Search and also supports other distance functions such as Euclidean distance. The implementation is optimized for x86 processors with AVX2 support. See its Google Research GitHub for more details.InstallationInstall ScaNN through pip. Alternatively, you can follow instructions on the ScaNN Website to install from source.pip install scannRetrieval DemoBelow we show how to use ScaNN in conjunction with Hugging Face embeddings.from langchain.embeddings import HuggingFaceEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import ScaNNfrom langchain.document_loaders import TextLoaderloader = TextLoader("state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = HuggingFaceEmbeddings()db = ScaNN.from_documents(docs, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)docs[0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': 'state_of_the_union.txt'})RetrievalQA DemoNext, we demonstrate using ScaNN in conjunction with the Google PaLM API.You can obtain an API key from https://developers.generativeai.google/tutorials/setupfrom langchain.chains import RetrievalQAfrom langchain.chat_models import google_palmpalm_client = google_palm.ChatGooglePalm(google_api_key='YOUR_GOOGLE_PALM_API_KEY')qa = RetrievalQA.from_chain_type( llm=palm_client, chain_type="stuff", retriever=db.as_retriever(search_kwargs={'k': 10}))print(qa.run('What did the president say about Ketanji Brown Jackson?')) The president said that Ketanji Brown Jackson is one of our nation's top legal minds, who will continue Justice Breyer's legacy of excellence.print(qa.run('What did the president say about Michael Phelps?')) The president did not mention Michael Phelps in his speech.Saving and loading a local retrieval indexdb.save_local('/tmp/db', 'state_of_union')restored_db = ScaNN.load_local('/tmp/db', embeddings, index_name='state_of_union')PreviousRocksetNextSingleStoreDBInstallationRetrieval DemoRetrievalQA DemoSaving and loading a local retrieval index |
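If you want the raw distances along with the documents, a short sketch of the scored search variant, assuming ScaNN exposes similarity_search_with_score like the other local vector stores covered on these pages, with lower scores meaning closer matches.
# Scored variant of the retrieval demo above
docs_and_scores = db.similarity_search_with_score(query, k=3)
for doc, score in docs_and_scores:
    # assumption: distance-style score, so lower is closer
    print(f"{score:.4f}", doc.page_content[:60], "...")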
657 | https://python.langchain.com/docs/integrations/vectorstores/singlestoredb | ComponentsVector storesSingleStoreDBSingleStoreDBSingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premises. It provides vector storage, and vector functions including dot_product and euclidean_distance, thereby supporting AI applications that require text similarity matching. This tutorial illustrates how to work with vector data in SingleStoreDB.# Establishing a connection to the database is facilitated through the singlestoredb Python connector.# Please ensure that this connector is installed in your working environment.pip install singlestoredbimport osimport getpass# We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import SingleStoreDBfrom langchain.document_loaders import TextLoader# Load text samplesloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()There are several ways to establish a connection to the database. You can either set up environment variables or pass named parameters to the SingleStoreDB constructor. Alternatively, you may provide these parameters to the from_documents and from_texts methods.# Setup connection url as environment variableos.environ["SINGLESTOREDB_URL"] = "root:pass@localhost:3306/db"# Load documents to the storedocsearch = SingleStoreDB.from_documents( docs, embeddings, table_name="notebook", # use table with a custom name)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query) # Find documents that correspond to the queryprint(docs[0].page_content)PreviousScaNNNextscikit-learn |
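As an alternative to the SINGLESTOREDB_URL environment variable used above, a hedged sketch of connecting with named parameters; the parameter names (host, port, user, password, database) follow the singlestoredb connector and should be treated as assumptions rather than a fixed contract.
# Hypothetical: pass connection settings directly instead of a URL.
# Extra keyword arguments are assumed to be forwarded to the
# singlestoredb connection.
docsearch = SingleStoreDB.from_documents(
    docs,
    embeddings,
    table_name="notebook",
    host="localhost",
    port=3306,
    user="root",
    password="pass",
    database="db",
)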
658 | https://python.langchain.com/docs/integrations/vectorstores/sklearn | ComponentsVector storesscikit-learnOn this pagescikit-learnscikit-learn is an open source collection of machine learning algorithms, including some implementations of the k-nearest neighbors algorithm. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format.This notebook shows how to use the SKLearnVectorStore vector database.# # if you plan to use bson serialization, install also:# %pip install bson# # if you plan to use parquet serialization, install also:%pip install pandas pyarrowTo use OpenAI embeddings, you will need an OpenAI key. You can get one at https://platform.openai.com/account/api-keys or feel free to use any other embeddings.import osfrom getpass import getpassos.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI key:")Basic usageLoad a sample document corpusfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import SKLearnVectorStorefrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Create the SKLearnVectorStore, index the document corpus and run a sample queryimport tempfilepersist_path = os.path.join(tempfile.gettempdir(), "union.parquet")vector_store = SKLearnVectorStore.from_documents( documents=docs, embedding=embeddings, persist_path=persist_path, # persist_path and serializer are optional serializer="parquet",)query = "What did the president say about Ketanji Brown Jackson"docs = vector_store.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Saving and loading a vector storevector_store.persist()print("Vector store was persisted to", persist_path) Vector store was persisted to /var/folders/6r/wc15p6m13nl_nl_n_xfqpc5c0000gp/T/union.parquetvector_store2 = SKLearnVectorStore( embedding=embeddings, persist_path=persist_path, serializer="parquet")print("A new instance of vector store was loaded from", persist_path) A new instance of vector store was loaded from /var/folders/6r/wc15p6m13nl_nl_n_xfqpc5c0000gp/T/union.parquetdocs = vector_store2.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. 
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Clean-upos.remove(persist_path)PreviousSingleStoreDBNextsqlite-vssBasic usageLoad a sample document corpusCreate the SKLearnVectorStore, index the document corpus and run a sample querySaving and loading a vector storeClean-up |
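Because the serializer is pluggable, here is a small sketch that persists the same corpus as plain JSON instead of Parquet; per the install notes above, the json serializer is assumed to need no extra dependencies.
import os
import tempfile

# Same store, JSON persistence instead of Parquet
json_path = os.path.join(tempfile.gettempdir(), "union.json")
json_store = SKLearnVectorStore.from_documents(
    documents=docs,
    embedding=embeddings,
    persist_path=json_path,
    serializer="json",
)
json_store.persist()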
659 | https://python.langchain.com/docs/integrations/vectorstores/sqlitevss | ComponentsVector storessqlite-vssOn this pagesqlite-vsssqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities.This notebook shows how to use the SQLiteVSS vector database.# You need to install sqlite-vss as a dependency.%pip install sqlite-vssQuickstartfrom langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import SQLiteVSSfrom langchain.document_loaders import TextLoader# load the document and split it into chunksloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()# split it into chunkstext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)texts = [doc.page_content for doc in docs]# create the open-source embedding functionembedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")# load it in sqlite-vss in a table named state_union.# the db_file parameter is the name of the file you want# as your sqlite database.db = SQLiteVSS.from_texts( texts=texts, embedding=embedding_function, table="state_union", db_file="/tmp/vss.db")# query itquery = "What did the president say about Ketanji Brown Jackson"data = db.similarity_search(query)# print resultsdata[0].page_content 'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. 
One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'Using existing sqlite connectionfrom langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import SQLiteVSSfrom langchain.document_loaders import TextLoader# load the document and split it into chunksloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()# split it into chunkstext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)texts = [doc.page_content for doc in docs]# create the open-source embedding functionembedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")connection = SQLiteVSS.create_connection(db_file="/tmp/vss.db")db1 = SQLiteVSS( table="state_union", embedding=embedding_function, connection=connection)db1.add_texts(["Ketanji Brown Jackson is awesome"])# query it againquery = "What did the president say about Ketanji Brown Jackson"data = db1.similarity_search(query)# print resultsdata[0].page_content 'Ketanji Brown Jackson is awesome'# Cleaning upimport osos.remove("/tmp/vss.db")Previousscikit-learnNextStarRocksQuickstartUsing existing sqlite connection |
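For use inside chains, a minimal sketch that wraps the sqlite-vss store as a retriever; as_retriever comes from the common VectorStore interface, so nothing here is sqlite-vss specific, and it assumes the db instance from the Quickstart is still open.
# Expose the sqlite-vss store through the generic retriever interface
retriever = db.as_retriever(search_kwargs={"k": 2})
relevant = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
print(relevant[0].page_content[:80])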
660 | https://python.langchain.com/docs/integrations/vectorstores/starrocks | ComponentsVector storesStarRocksOn this pageStarRocksStarRocks is a High-Performance Analytical Database.
StarRocks is a next-gen sub-second MPP database for full analytics scenarios, including multi-dimensional analytics, real-time analytics and ad-hoc queries.StarRocks is usually categorized as an OLAP database, and it has shown excellent performance in ClickBench — a Benchmark For Analytical DBMS. Since it has a super-fast vectorized execution engine, it could also be used as a fast vectordb.Here we'll show how to use the StarRocks Vector Store.Setup#!pip install pymysqlSet update_vectordb = False at the beginning. If no docs have been updated, we don't need to rebuild the doc embeddings.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import StarRocksfrom langchain.vectorstores.starrocks import StarRocksSettingsfrom langchain.vectorstores import Chromafrom langchain.text_splitter import CharacterTextSplitter, TokenTextSplitterfrom langchain.llms import OpenAIfrom langchain.chains import VectorDBQAfrom langchain.document_loaders import DirectoryLoaderfrom langchain.chains import RetrievalQAfrom langchain.document_loaders import TextLoader, UnstructuredMarkdownLoaderupdate_vectordb = False /Users/dirlt/utils/py3env/lib/python3.9/site-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.7) or chardet (5.1.0)/charset_normalizer (2.0.9) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "Load docs and split them into tokensLoad all markdown files under the docs directory. For the StarRocks documents, you can clone the repo from https://github.com/StarRocks/starrocks; there is a docs directory in it.loader = DirectoryLoader( "./docs", glob="**/*.md", loader_cls=UnstructuredMarkdownLoader)documents = loader.load()Split docs into tokens, and set update_vectordb = True because there are new docs/tokens.# load text splitter and split docs into snippets of texttext_splitter = TokenTextSplitter(chunk_size=400, chunk_overlap=50)split_docs = text_splitter.split_documents(documents)# tell vectordb to update text embeddingsupdate_vectordb = Truesplit_docs[-20] Document(page_content='Compile StarRocks with Docker\n\nThis topic describes how to compile StarRocks using Docker.\n\nOverview\n\nStarRocks provides development environment images for both Ubuntu 22.04 and CentOS 7.9. 
With the image, you can launch a Docker container and compile StarRocks in the container.\n\nStarRocks version and DEV ENV image\n\nDifferent branches of StarRocks correspond to different development environment images provided on StarRocks Docker Hub.\n\nFor Ubuntu 22.04:\n\n| Branch name | Image name |\n | --------------- | ----------------------------------- |\n | main | starrocks/dev-env-ubuntu:latest |\n | branch-3.0 | starrocks/dev-env-ubuntu:3.0-latest |\n | branch-2.5 | starrocks/dev-env-ubuntu:2.5-latest |\n\nFor CentOS 7.9:\n\n| Branch name | Image name |\n | --------------- | ------------------------------------ |\n | main | starrocks/dev-env-centos7:latest |\n | branch-3.0 | starrocks/dev-env-centos7:3.0-latest |\n | branch-2.5 | starrocks/dev-env-centos7:2.5-latest |\n\nPrerequisites\n\nBefore compiling StarRocks, make sure the following requirements are satisfied:\n\nHardware\n\n', metadata={'source': 'docs/developers/build-starrocks/Build_in_docker.md'})print("# docs = %d, # splits = %d" % (len(documents), len(split_docs))) # docs = 657, # splits = 2802Create vectordb instanceUse StarRocks as vectordbdef gen_starrocks(update_vectordb, embeddings, settings): if update_vectordb: docsearch = StarRocks.from_documents(split_docs, embeddings, config=settings) else: docsearch = StarRocks(embeddings, settings) return docsearchConvert tokens into embeddings and put them into vectordbHere we use StarRocks as vectordb, you can configure StarRocks instance via StarRocksSettings.Configuring StarRocks instance is pretty much like configuring mysql instance. You need to specify:host/portusername(default: 'root')password(default: '')database(default: 'default')table(default: 'langchain')embeddings = OpenAIEmbeddings()# configure starrocks settings(host/port/user/pw/db)settings = StarRocksSettings()settings.port = 41003settings.host = "127.0.0.1"settings.username = "root"settings.password = ""settings.database = "zya"docsearch = gen_starrocks(update_vectordb, embeddings, settings)print(docsearch)update_vectordb = False Inserting data...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2802/2802 [02:26<00:00, 19.11it/s] zya.langchain @ 127.0.0.1:41003 username: root Table Schema: ---------------------------------------------------------------------------- |name |type |key | ---------------------------------------------------------------------------- |id |varchar(65533) |true | |document |varchar(65533) |false | |embedding |array<float> |false | |metadata |varchar(65533) |false | ---------------------------------------------------------------------------- Build QA and ask question to itllm = OpenAI()qa = RetrievalQA.from_chain_type( llm=llm, chain_type="stuff", retriever=docsearch.as_retriever())query = "is profile enabled by default? if not, how to enable profile?"resp = qa.run(query)print(resp) No, profile is not enabled by default. To enable profile, set the variable `enable_profile` to `true` using the command `set enable_profile = true;`Previoussqlite-vssNextSupabase (Postgres)SetupLoad docs and split them into tokensCreate vectordb instanceUse StarRocks as vectordbConvert tokens into embeddings and put them into vectordbBuild QA and ask question to it |
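Before wiring up the QA chain, the store can be sanity-checked directly; a hedged sketch using the common similarity_search interface against the docsearch instance created above, with an illustrative query.
# Direct query against the StarRocks-backed store
hits = docsearch.similarity_search("how to compile StarRocks with Docker", k=3)
for d in hits:
    # metadata round-trips through the varchar metadata column shown above
    print(d.metadata.get("source"), "-", d.page_content[:80])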
661 | https://python.langchain.com/docs/integrations/vectorstores/supabase | ComponentsVector storesSupabase (Postgres)On this pageSupabase (Postgres)Supabase is an open source Firebase alternative. Supabase is built on top of PostgreSQL, which offers strong SQL querying capabilities and enables a simple interface with already-existing tools and frameworks.PostgreSQL, also known as Postgres, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance.This notebook shows how to use Supabase and pgvector as your VectorStore.To run this notebook, please ensure that:
the pgvector extension is enabled
the supabase-py package is installed
you have created a match_documents function in your database
you have a documents table in your public schema similar to the one below.
The following function determines cosine similarity, but you can adjust it to your needs.
-- Enable the pgvector extension to work with embedding vectors
create extension if not exists vector;

-- Create a table to store your documents
create table documents (
  id uuid primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding vector (1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding vector (1536),
  filter jsonb default '{}'
) returns table (
  id uuid,
  content text,
  metadata jsonb,
  similarity float
) language plpgsql as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding;
end;
$$;
# with pippip install supabase# with conda# !conda install -c conda-forge supabaseWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")os.environ["SUPABASE_URL"] = getpass.getpass("Supabase URL:")os.environ["SUPABASE_SERVICE_KEY"] = getpass.getpass("Supabase Service Key:")# If you're storing your Supabase and OpenAI API keys in a .env file, you can load them with dotenvfrom dotenv import load_dotenvload_dotenv()First we'll create a Supabase client and instantiate an OpenAI embeddings class.import osfrom supabase.client import Client, create_clientfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import SupabaseVectorStoresupabase_url = os.environ.get("SUPABASE_URL")supabase_key = os.environ.get("SUPABASE_SERVICE_KEY")supabase: Client = create_client(supabase_url, supabase_key)embeddings = OpenAIEmbeddings()Next we'll load and parse some data for our vector store (skip if you already have documents with embeddings stored in your DB).from langchain.text_splitter import CharacterTextSplitterfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)Insert the above documents into the database. 
Embeddings will automatically be generated for each document.vector_store = SupabaseVectorStore.from_documents(docs, embeddings, client=supabase, table_name="documents", query_name="match_documents")Alternatively if you already have documents with embeddings in your database, simply instantiate a new SupabaseVectorStore directly:vector_store = SupabaseVectorStore(embedding=embeddings, client=supabase, table_name="documents", query_name="match_documents")Finally, test it out by performing a similarity search:query = "What did the president say about Ketanji Brown Jackson"matched_docs = vector_store.similarity_search(query)print(matched_docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search with scoreThe returned distance score is cosine distance. Therefore, a lower score is better.matched_docs = vector_store.similarity_search_with_relevance_scores(query)matched_docs[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.802509746274066)Retriever optionsThis section goes over different options for how to use SupabaseVectorStore as a retriever.Maximal Marginal Relevance SearchesIn addition to using similarity search in the retriever object, you can also use mmr.retriever = vector_store.as_retriever(search_type="mmr")matched_docs = retriever.get_relevant_documents(query)for i, d in enumerate(matched_docs): print(f"\n## Document {i}\n") print(d.page_content) ## Document 0 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. 
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ## Document 1 One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. When they came home, many of the world’s fittest and best trained warriors were never the same. Headaches. Numbness. Dizziness. A cancer that would put them in a flag-draped coffin. I know. One of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can. Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. Stationed near Baghdad, just yards from burn pits the size of football fields. Heath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. ## Document 2 And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. These steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. But I want you to know that we are going to be okay. When the history of this era is written Putin’s war on Ukraine will have left Russia weaker and the rest of the world stronger. While it shouldn’t have taken something so terrible for people around the world to see what’s at stake now everyone sees it clearly. ## Document 3 We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.PreviousStarRocksNextTairSimilarity search with scoreRetriever optionsMaximal Marginal Relevance Searches |
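To grow the table after the initial load, a minimal sketch using the generic add_documents helper; it assumes the vector_store instance from above, and that embeddings are generated client-side before the rows are inserted into the same documents table.
from langchain.schema import Document

# Hypothetical extra document appended to the existing store
vector_store.add_documents(
    [
        Document(
            page_content="LangChain stores embeddings in Supabase via pgvector.",
            metadata={"source": "example-note"},
        )
    ]
)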
662 | https://python.langchain.com/docs/integrations/vectorstores/tair | ComponentsVector storesTairTairTair is a cloud native in-memory database service developed by Alibaba Cloud.
It provides rich data models and enterprise-grade capabilities to support your real-time online scenarios while maintaining full compatibility with open source Redis. Tair also introduces persistent memory-optimized instances that are based on the new non-volatile memory (NVM) storage medium.This notebook shows how to use functionality related to the Tair vector database.To run, you should have a Tair instance up and running.from langchain.embeddings.fake import FakeEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Tairfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = FakeEmbeddings(size=128)Connect to Tair using the TAIR_URL environment variable export TAIR_URL="redis://{username}:{password}@{tair_address}:{tair_port}"or the keyword argument tair_url.Then store documents and embeddings into Tair.tair_url = "redis://localhost:6379"# drop first if index already existsTair.drop_index(tair_url=tair_url)vector_store = Tair.from_documents(docs, embeddings, tair_url=tair_url)Query similar documents.query = "What did the president say about Ketanji Brown Jackson"docs = vector_store.similarity_search(query)docs[0] Document(page_content='We’re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans. \n\nAnd tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nBy the end of this year, the deficit will be down to less than half what it was before I took office. \n\nThe only president ever to cut the deficit by more than one trillion dollars in a single year. \n\nLowering your costs also means demanding more competition. 
\n\nI’m a capitalist, but capitalism without competition isn’t capitalism. \n\nIt’s exploitation—and it drives up prices. \n\nWhen corporations don’t have to compete, their profits go up, your prices go up, and small businesses and family farmers and ranchers go under. \n\nWe see it happening with ocean carriers moving goods in and out of America. \n\nDuring the pandemic, these foreign-owned companies raised prices by as much as 1,000% and made record profits.', metadata={'source': '../../../state_of_the_union.txt'})Tair Hybrid Search Index build# drop first if index already existsTair.drop_index(tair_url=tair_url)vector_store = Tair.from_documents(docs, embeddings, tair_url=tair_url, index_params={"lexical_algorithm":"bm25"})Tair Hybrid Searchquery = "What did the president say about Ketanji Brown Jackson"# hybrid_ratio: 0.5 hybrid search, 0.9999 vector search, 0.0001 text searchkwargs = {"TEXT" : query, "hybrid_ratio" : 0.5}docs = vector_store.similarity_search(query, **kwargs)docs[0]PreviousSupabase (Postgres)NextTencent Cloud VectorDB |
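As a supplementary sketch (not from the original notebook), the hybrid_ratio keyword documented above can be swept to compare text-leaning and vector-leaning results; vector_store is the hybrid-indexed store created above, and the ratio values are illustrative:
query = "What did the president say about Ketanji Brown Jackson"
for ratio in (0.0001, 0.5, 0.9999):  # ~text-only search, balanced hybrid, ~vector-only search
    kwargs = {"TEXT": query, "hybrid_ratio": ratio}
    docs = vector_store.similarity_search(query, **kwargs)
    print(ratio, docs[0].page_content[:80])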
663 | https://python.langchain.com/docs/integrations/vectorstores/tencentvectordb | ComponentsVector storesTencent Cloud VectorDBTencent Cloud VectorDBTencent Cloud VectorDB is a fully managed, self-developed, enterprise-level distributed database service designed for storing, retrieving, and analyzing multi-dimensional vector data. The database supports multiple index types and similarity calculation methods. A single index can support a vector scale of up to 1 billion and can support millions of QPS and millisecond-level query latency. Tencent Cloud Vector Database can not only provide an external knowledge base for large models to improve the accuracy of large model responses but can also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service.This notebook shows how to use functionality related to the Tencent vector database.To run, you should have a database instance up and running.pip3 install tcvectordbfrom langchain.embeddings.fake import FakeEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import TencentVectorDBfrom langchain.vectorstores.tencentvectordb import ConnectionParamsfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = FakeEmbeddings(size=128)conn_params = ConnectionParams(url="http://10.0.X.X", key="eC4bLRy2va******************************", username="root", timeout=20)vector_db = TencentVectorDB.from_documents( docs, embeddings, connection_params=conn_params, # drop_old=True,)query = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)docs[0].page_contentvector_db = TencentVectorDB(embeddings, conn_params)vector_db.add_texts(["Ankush went to Princeton"])query = "Where did Ankush go to college?"docs = vector_db.max_marginal_relevance_search(query)docs[0].page_contentPreviousTairNextTigris |
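A minimal sketch (not in the original notebook) of wrapping the store as a retriever via the generic LangChain as_retriever API; vector_db is the store created above, and k=4 is illustrative:
retriever = vector_db.as_retriever(search_kwargs={"k": 4})  # return the top 4 matches
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
print(docs[0].page_content)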
664 | https://python.langchain.com/docs/integrations/vectorstores/tigris | ComponentsVector storesTigrisOn this pageTigrisTigris is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.
Tigris eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.This notebook guides you through using Tigris as your VectorStorePrerequisitesAn OpenAI account. You can sign up for an account hereSign up for a free Tigris account. Once you have signed up for the Tigris account, create a new project called vectordemo. Next, make a note of the URI for the region you've created your project in, the clientId and clientSecret. You can get all this information from the Application Keys section of the project.Let's first install our dependencies:pip install tigrisdb openapi-schema-pydantic openai tiktokenWe will load the OpenAI API key and Tigris credentials in our environmentimport osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")os.environ["TIGRIS_PROJECT"] = getpass.getpass("Tigris Project Name:")os.environ["TIGRIS_CLIENT_ID"] = getpass.getpass("Tigris Client Id:")os.environ["TIGRIS_CLIENT_SECRET"] = getpass.getpass("Tigris Client Secret:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Tigrisfrom langchain.document_loaders import TextLoaderInitialize Tigris vector storeLet's import our test dataset:loader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()vector_store = Tigris.from_documents(docs, embeddings, index_name="my_embeddings")Similarity Searchquery = "What did the president say about Ketanji Brown Jackson"found_docs = vector_store.similarity_search(query)print(found_docs)Similarity Search with score (vector distance)query = "What did the president say about Ketanji Brown Jackson"result = vector_store.similarity_search_with_score(query)for doc, score in result: print(f"document={doc}, score={score}")PreviousTencent Cloud VectorDBNextTimescale Vector (Postgres)Initialize Tigris vector storeSimilarity SearchSimilarity Search with score (vector distance) |
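A small sketch (not in the original notebook) of appending new texts to the existing index via the generic LangChain add_texts API; vector_store is the Tigris store created above, and the text and metadata are illustrative:
vector_store.add_texts(
    ["Ketanji Brown Jackson was confirmed to the Supreme Court in April 2022."],  # illustrative text
    metadatas=[{"source": "manual-note"}],  # illustrative metadata
)
print(vector_store.similarity_search("When was Ketanji Brown Jackson confirmed?")[0].page_content)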
665 | https://python.langchain.com/docs/integrations/vectorstores/timescalevector | ComponentsVector storesTimescale Vector (Postgres)On this pageTimescale Vector (Postgres)This notebook shows how to use the Postgres vector database Timescale Vector. You'll learn how to use TimescaleVector for (1) semantic search, (2) time-based vector search, (3) self-querying, and (4) how to create indexes to speed up queries.What is Timescale Vector?Timescale Vector is PostgreSQL++ for AI applications.Timescale Vector enables you to efficiently store and query millions of vector embeddings in PostgreSQL.Enhances pgvector with faster and more accurate similarity search on 100M+ vectors via a DiskANN-inspired indexing algorithm.Enables fast time-based vector search via automatic time-based partitioning and indexing.Provides a familiar SQL interface for querying vector embeddings and relational data.Timescale Vector is cloud PostgreSQL for AI that scales with you from POC to production:Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series data in a single database.Benefits from rock-solid PostgreSQL foundation with enterprise-grade features like streaming backups and replication, high availability, and row-level security.Enables a worry-free experience with enterprise-grade security and compliance.How to access Timescale VectorTimescale Vector is available on Timescale, the cloud PostgreSQL platform. (There is no self-hosted version at this time.)LangChain users get a 90-day free trial for Timescale Vector.To get started, sign up for Timescale, create a new database, and follow this notebook!See the Timescale Vector explainer blog for more details and performance benchmarks.See the installation instructions for more details on using Timescale Vector in Python.SetupFollow these steps to get set up for this tutorial.# Pip install necessary packages
pip install timescale-vector
pip install openai
pip install tiktoken
In this example, we'll use OpenAIEmbeddings, so let's load your OpenAI API key.import os# Run export OPENAI_API_KEY=sk-YOUR_OPENAI_API_KEY...# Get openAI api key by reading local .env filefrom dotenv import load_dotenv, find_dotenv_ = load_dotenv(find_dotenv())OPENAI_API_KEY = os.environ['OPENAI_API_KEY']# Get the API key and save it as an environment variable#import os#import getpass#os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from typing import List, TupleNext, we'll import the needed Python libraries and libraries from LangChain. Note that we import the timescale-vector library as well as the TimescaleVector LangChain vectorstore.import timescale_vectorfrom datetime import datetime, timedeltafrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders.json_loader import JSONLoaderfrom langchain.docstore.document import Documentfrom langchain.vectorstores.timescalevector import TimescaleVector1. Similarity Search with Euclidean Distance (Default)First, we'll look at an example of doing a similarity search query on the State of the Union speech to find the most similar sentences to a given query sentence. 
We'll use the Euclidean distance as our similarity metric.# Load the text and split it into chunksloader = TextLoader("../../../extras/modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Next, we'll load the service URL for our Timescale database. If you haven't already, sign up for Timescale and create a new database.Then, to connect to your PostgreSQL database, you'll need your service URI, which can be found in the cheatsheet or .env file you downloaded after creating a new database. The URI will look something like this: postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require. # Timescale Vector needs the service url to your cloud database. You can see this as soon as you create the # service in the cloud UI or in your credentials.sql fileSERVICE_URL = os.environ['TIMESCALE_SERVICE_URL']# Specify directly if testing#SERVICE_URL = "postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require"# You can also get it from an environment variable. We suggest using a .env file.# import os# SERVICE_URL = os.environ.get("TIMESCALE_SERVICE_URL", "")Next, we create a TimescaleVector vectorstore. We specify a collection name, which will be the name of the table our data is stored in. Note: When creating a new instance of TimescaleVector, the TimescaleVector Module will try to create a table with the name of the collection. So, make sure that the collection name is unique (i.e. it doesn't already exist).# The TimescaleVector Module will create a table with the name of the collection.COLLECTION_NAME = "state_of_the_union_test"# Create a Timescale Vector instance from the collection of documentsdb = TimescaleVector.from_documents( embedding=embeddings, documents=docs, collection_name=COLLECTION_NAME, service_url=SERVICE_URL,)Now that we've loaded our data, we can perform a similarity search.query = "What did the president say about Ketanji Brown Jackson"docs_with_score = db.similarity_search_with_score(query)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.18443380687035138 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.18452197313308139 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. 
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.21720781018594182 A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.21724902288621384 A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. 
--------------------------------------------------------------------------------Using a Timescale Vector as a RetrieverAfter initializing a TimescaleVector store, you can use it as a retriever.# Use TimescaleVector as a retrieverretriever = db.as_retriever()print(retriever) tags=['TimescaleVector', 'OpenAIEmbeddings'] metadata=None vectorstore=<langchain.vectorstores.timescalevector.TimescaleVector object at 0x10fc8d070> search_type='similarity' search_kwargs={}Let's look at an example of using Timescale Vector as a retriever with the RetrievalQA chain and the stuff chain.In this example, we'll ask the same query as above, but this time we'll pass the relevant documents returned from Timescale Vector to an LLM to use as context to answer our question.First, we'll create our stuff chain:# Initialize GPT3.5 modelfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature = 0.1, model = 'gpt-3.5-turbo-16k')# Initialize a RetrievalQA class from a stuff chainfrom langchain.chains import RetrievalQAqa_stuff = RetrievalQA.from_chain_type( llm=llm, chain_type="stuff", retriever=retriever, verbose=True,)query = "What did the president say about Ketanji Brown Jackson?"response = qa_stuff.run(query) > Entering new RetrievalQA chain... > Finished chain.print(response) The President said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who is one of our nation's top legal minds and will continue Justice Breyer's legacy of excellence. He also mentioned that since her nomination, she has received a broad range of support from various groups, including the Fraternal Order of Police and former judges appointed by Democrats and Republicans.2. Similarity Search with time-based filteringA key use case for Timescale Vector is efficient time-based vector search. Timescale Vector enables this by automatically partitioning vectors (and associated metadata) by time. This allows you to efficiently query vectors by both similarity to a query vector and time.Time-based vector search functionality is helpful for applications like:Storing and retrieving LLM response history (e.g. chatbots)Finding the most recent embeddings that are similar to a query vector (e.g. recent news).Constraining similarity search to a relevant time range (e.g. asking time-based questions about a knowledge base)To illustrate how to use TimescaleVector's time-based vector search functionality, we'll ask questions about the git log history for TimescaleDB. We'll illustrate how to add documents with a time-based uuid and how to run similarity searches with time range filters.Extract content and metadata from git log JSONFirst, let's load the git log data into a new collection in our PostgreSQL database named timescale_commits.import jsonWe'll define a helper function to create a uuid for a document and associated vector embedding based on its timestamp. We'll use this function to create a uuid for each git log entry.Important note: If you are working with documents and want the current date and time associated with the vector for time-based search, you can skip this step. 
A uuid will be automatically generated when the documents are ingested by default.from timescale_vector import client# Function to take in a date string in the past and return a uuid v1def create_uuid(date_string: str): if date_string is None: return None time_format = '%a %b %d %H:%M:%S %Y %z' datetime_obj = datetime.strptime(date_string, time_format) uuid = client.uuid_from_time(datetime_obj) return str(uuid)Next, we'll define a metadata function to extract the relevant metadata from the JSON record. We'll pass this function to the JSONLoader. See the JSON document loader docs for more details.# Helper function to split name and email given an author string consisting of Name Lastname <email>def split_name(input_string: str) -> Tuple[str, str]: if input_string is None: return None, None start = input_string.find("<") end = input_string.find(">") name = input_string[:start].strip() email = input_string[start+1:end].strip() return name, email# Helper function to transform a date string into a timestamp_tz stringdef create_date(input_string: str) -> datetime: if input_string is None: return None # Define a dictionary to map month abbreviations to their numerical equivalents month_dict = { "Jan": "01", "Feb": "02", "Mar": "03", "Apr": "04", "May": "05", "Jun": "06", "Jul": "07", "Aug": "08", "Sep": "09", "Oct": "10", "Nov": "11", "Dec": "12", } # Split the input string into its components components = input_string.split() # Extract relevant information day = components[2] month = month_dict[components[1]] year = components[4] time = components[3] tz_offset = components[5] # Timezone offset in +HHMM/-HHMM format, e.g. "+0530" sign = tz_offset[0] timezone_hours = int(tz_offset[1:3]) # Hours part of the offset timezone_minutes = int(tz_offset[3:5]) # Minutes part of the offset # Create a formatted string for the timestamptz in PostgreSQL format timestamp_tz_str = f"{year}-{month}-{day} {time}{sign}{timezone_hours:02}{timezone_minutes:02}" return timestamp_tz_str# Metadata extraction function to extract metadata from a JSON recorddef extract_metadata(record: dict, metadata: dict) -> dict: record_name, record_email = split_name(record["author"]) metadata["id"] = create_uuid(record["date"]) metadata["date"] = create_date(record["date"]) metadata["author_name"] = record_name metadata["author_email"] = record_email metadata["commit_hash"] = record["commit"] return metadataNext, you'll need to download the sample dataset and place it in the same directory as this notebook.You can use the following command:# Download the file using curl and save it as ts_git_log.json# Note: Execute this command in your terminal, in the same directory as the notebookcurl -O https://s3.amazonaws.com/assets.timescale.com/ai/ts_git_log.jsonFinally, we can initialize the JSON loader to parse the JSON records. 
We also remove empty records for simplicity.# Define path to the JSON file relative to this notebook# Change this to the path to your JSON fileFILE_PATH = "../../../../../ts_git_log.json"# Load data from JSON file and extract metadataloader = JSONLoader( file_path=FILE_PATH, jq_schema='.commit_history[]', text_content=False, metadata_func=extract_metadata)documents = loader.load()# Remove documents with None datesdocuments = [doc for doc in documents if doc.metadata["date"] is not None]print(documents[0]) page_content='{"commit": "44e41c12ab25e36c202f58e068ced262eadc8d16", "author": "Lakshmi Narayanan Sreethar<[email protected]>", "date": "Tue Sep 5 21:03:21 2023 +0530", "change summary": "Fix segfault in set_integer_now_func", "change details": "When an invalid function oid is passed to set_integer_now_func, it finds out that the function oid is invalid but before throwing the error, it calls ReleaseSysCache on an invalid tuple causing a segfault. Fixed that by removing the invalid call to ReleaseSysCache. Fixes #6037 "}' metadata={'source': '/Users/avtharsewrathan/sideprojects2023/timescaleai/tsv-langchain/ts_git_log.json', 'seq_num': 1, 'id': '8b407680-4c01-11ee-96a6-b82284ddccc6', 'date': '2023-09-5 21:03:21+0850', 'author_name': 'Lakshmi Narayanan Sreethar', 'author_email': '[email protected]', 'commit_hash': '44e41c12ab25e36c202f58e068ced262eadc8d16'}Load documents and metadata into TimescaleVector vectorstoreNow that we have prepared our documents, let's process them and load them, along with their vector embedding representations, into our TimescaleVector vectorstore.Since this is a demo, we will only load the first 500 records. In practice, you can load as many records as you want.NUM_RECORDS = 500documents = documents[:NUM_RECORDS]Then we use the CharacterTextSplitter to split the documents into smaller chunks if needed for easier embedding. Note that this splitting process retains the metadata for each document.# Split the documents into chunks for embeddingtext_splitter = CharacterTextSplitter( chunk_size=1000, chunk_overlap=200,)docs = text_splitter.split_documents(documents)Next, we'll create a Timescale Vector instance from the collection of documents that we finished pre-processing.First, we'll define a collection name, which will be the name of our table in the PostgreSQL database. We'll also define a time delta, which we pass to the time_partition_interval argument, which will be used as the interval for partitioning the data by time. Each partition will consist of data for the specified length of time. We'll use 7 days for simplicity, but you can pick whatever value makes sense for your use case -- for example, if you query recent vectors frequently you might want to use a smaller time delta like 1 day, or if you query vectors over a decade-long time period then you might want to use a larger time delta like 6 months or 1 year.Finally, we'll create the TimescaleVector instance. We specify the ids argument to be the uuid field in our metadata that we created in the pre-processing step above. We do this because we want the time part of our uuids to reflect dates in the past (i.e. when the commit was made). 
However, if we wanted the current date and time to be associated with our document, we can remove the id argument and uuids will be automatically created with the current date and time.# Define collection nameCOLLECTION_NAME = "timescale_commits"embeddings = OpenAIEmbeddings()# Create a Timescale Vector instance from the collection of documentsdb = TimescaleVector.from_documents( embedding=embeddings, ids = [doc.metadata["id"] for doc in docs], documents=docs, collection_name=COLLECTION_NAME, service_url=SERVICE_URL, time_partition_interval=timedelta(days = 7),)Querying vectors by time and similarityNow that we have loaded our documents into TimescaleVector, we can query them by time and similarity.TimescaleVector provides multiple methods for querying vectors by doing similarity search with time-based filtering.Let's take a look at each method below:# Time filter variablesstart_dt = datetime(2023, 8, 1, 22, 10, 35) # Start date = 1 August 2023, 22:10:35end_dt = datetime(2023, 8, 30, 22, 10, 35) # End date = 30 August 2023, 22:10:35td = timedelta(days=7) # Time delta = 7 daysquery = "What's new with TimescaleDB functions?"Method 1: Filter within a provided start date and end date.# Method 1: Query for vectors between start_date and end_datedocs_with_score = db.similarity_search_with_score(query, start_date=start_dt, end_date=end_dt)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print("Date: ", doc.metadata["date"]) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.17488396167755127 Date: 2023-08-29 18:13:24+0320 {"commit": " e4facda540286b0affba47ccc63959fefe2a7b26", "author": "Sven Klemm<[email protected]>", "date": "Tue Aug 29 18:13:24 2023 +0200", "change summary": "Add compatibility layer for _timescaledb_internal functions", "change details": "With timescaledb 2.12 all the functions present in _timescaledb_internal were moved into the _timescaledb_functions schema to improve schema security. This patch adds a compatibility layer so external callers of these internal functions will not break and allow for more flexibility when migrating. "} -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.18102192878723145 Date: 2023-08-20 22:47:10+0320 {"commit": " 0a66bdb8d36a1879246bd652e4c28500c4b951ab", "author": "Sven Klemm<[email protected]>", "date": "Sun Aug 20 22:47:10 2023 +0200", "change summary": "Move functions to _timescaledb_functions schema", "change details": "To increase schema security we do not want to mix our own internal objects with user objects. Since chunks are created in the _timescaledb_internal schema our internal functions should live in a different dedicated schema. 
This patch make the necessary adjustments for the following functions: - to_unix_microseconds(timestamptz) - to_timestamp(bigint) - to_timestamp_without_timezone(bigint) - to_date(bigint) - to_interval(bigint) - interval_to_usec(interval) - time_to_internal(anyelement) - subtract_integer_from_now(regclass, bigint) "} -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.18150119891755445 Date: 2023-08-22 12:01:19+0320 {"commit": " cf04496e4b4237440274eb25e4e02472fc4e06fc", "author": "Sven Klemm<[email protected]>", "date": "Tue Aug 22 12:01:19 2023 +0200", "change summary": "Move utility functions to _timescaledb_functions schema", "change details": "To increase schema security we do not want to mix our own internal objects with user objects. Since chunks are created in the _timescaledb_internal schema our internal functions should live in a different dedicated schema. This patch make the necessary adjustments for the following functions: - generate_uuid() - get_git_commit() - get_os_info() - tsl_loaded() "} -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.18422493887617963 Date: 2023-08-9 15:26:03+0500 {"commit": " 44eab9cf9bef34274c88efd37a750eaa74cd8044", "author": "Konstantina Skovola<[email protected]>", "date": "Wed Aug 9 15:26:03 2023 +0300", "change summary": "Release 2.11.2", "change details": "This release contains bug fixes since the 2.11.1 release. We recommend that you upgrade at the next available opportunity. **Features** * #5923 Feature flags for TimescaleDB features **Bugfixes** * #5680 Fix DISTINCT query with JOIN on multiple segmentby columns * #5774 Fixed two bugs in decompression sorted merge code * #5786 Ensure pg_config --cppflags are passed * #5906 Fix quoting owners in sql scripts. 
* #5912 Fix crash in 1-step integer policy creation **Thanks** * @mrksngl for submitting a PR to fix extension upgrade scripts * @ericdevries for reporting an issue with DISTINCT queries using segmentby columns of compressed hypertable "} --------------------------------------------------------------------------------Note how the query only returns results within the specified date range.Method 2: Filter within a provided start date, and a time delta later.# Method 2: Query for vectors between start_dt and a time delta td later# Most relevant vectors between 1 August and 7 days laterdocs_with_score = db.similarity_search_with_score(query, start_date=start_dt, time_delta=td)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print("Date: ", doc.metadata["date"]) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.18458807468414307 Date: 2023-08-3 14:30:23+0500 {"commit": " 7aeed663b9c0f337b530fd6cad47704a51a9b2ec", "author": "Dmitry Simonenko<[email protected]>", "date": "Thu Aug 3 14:30:23 2023 +0300", "change summary": "Feature flags for TimescaleDB features", "change details": "This PR adds several GUCs which allow to enable/disable major timescaledb features: - enable_hypertable_create - enable_hypertable_compression - enable_cagg_create - enable_policy_create "} -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.20492422580718994 Date: 2023-08-7 18:31:40+0320 {"commit": " 07762ea4cedefc88497f0d1f8712d1515cdc5b6e", "author": "Sven Klemm<[email protected]>", "date": "Mon Aug 7 18:31:40 2023 +0200", "change summary": "Test timescaledb debian 12 packages in CI", "change details": ""} -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.21106326580047607 Date: 2023-08-3 14:36:39+0500 {"commit": " 2863daf3df83c63ee36c0cf7b66c522da5b4e127", "author": "Dmitry Simonenko<[email protected]>", "date": "Thu Aug 3 14:36:39 2023 +0300", "change summary": "Support CREATE INDEX ONLY ON main table", "change details": "This PR adds support for CREATE INDEX ONLY ON clause which allows to create index only on the main table excluding chunks. Fix #5908 "} -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.21698051691055298 Date: 2023-08-2 20:24:14+0140 {"commit": " 3af0d282ea71d9a8f27159a6171e9516e62ec9cb", "author": "Lakshmi Narayanan Sreethar<[email protected]>", "date": "Wed Aug 2 20:24:14 2023 +0100", "change summary": "PG16: ExecInsertIndexTuples requires additional parameter", "change details": "PG16 adds a new boolean parameter to the ExecInsertIndexTuples function to denote if the index is a BRIN index, which is then used to determine if the index update can be skipped. The fix also removes the INDEX_ATTR_BITMAP_ALL enum value. Adapt these changes by updating the compat function to accomodate the new parameter added to the ExecInsertIndexTuples function and using an alternative for the removed INDEX_ATTR_BITMAP_ALL enum value. 
postgres/postgres@19d8e23 "} --------------------------------------------------------------------------------Once again, notice how we get results within the specified time filter, different from the previous query.Method 3: Filter within a provided end date and a time delta earlier.# Method 3: Query for vectors between end_dt and a time delta td earlier# Most relevant vectors between 30 August and 7 days earlierdocs_with_score = db.similarity_search_with_score(query, end_date=end_dt, time_delta=td)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print("Date: ", doc.metadata["date"]) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.17488396167755127 Date: 2023-08-29 18:13:24+0320 {"commit": " e4facda540286b0affba47ccc63959fefe2a7b26", "author": "Sven Klemm<[email protected]>", "date": "Tue Aug 29 18:13:24 2023 +0200", "change summary": "Add compatibility layer for _timescaledb_internal functions", "change details": "With timescaledb 2.12 all the functions present in _timescaledb_internal were moved into the _timescaledb_functions schema to improve schema security. This patch adds a compatibility layer so external callers of these internal functions will not break and allow for more flexibility when migrating. "} -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.18496227264404297 Date: 2023-08-29 10:49:47+0320 {"commit": " a9751ccd5eb030026d7b975d22753f5964972389", "author": "Sven Klemm<[email protected]>", "date": "Tue Aug 29 10:49:47 2023 +0200", "change summary": "Move partitioning functions to _timescaledb_functions schema", "change details": "To increase schema security we do not want to mix our own internal objects with user objects. Since chunks are created in the _timescaledb_internal schema our internal functions should live in a different dedicated schema. This patch make the necessary adjustments for the following functions: - get_partition_for_key(val anyelement) - get_partition_hash(val anyelement) "} -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.1871250867843628 Date: 2023-08-28 23:26:23+0320 {"commit": " b2a91494a11d8b82849b6f11f9ea6dc26ef8a8cb", "author": "Sven Klemm<[email protected]>", "date": "Mon Aug 28 23:26:23 2023 +0200", "change summary": "Move ddl_internal functions to _timescaledb_functions schema", "change details": "To increase schema security we do not want to mix our own internal objects with user objects. Since chunks are created in the _timescaledb_internal schema our internal functions should live in a different dedicated schema. 
This patch make the necessary adjustments for the following functions: - chunk_constraint_add_table_constraint(_timescaledb_catalog.chunk_constraint) - chunk_drop_replica(regclass,name) - chunk_index_clone(oid) - chunk_index_replace(oid,oid) - create_chunk_replica_table(regclass,name) - drop_stale_chunks(name,integer[]) - health() - hypertable_constraint_add_table_fk_constraint(name,name,name,integer) - process_ddl_event() - wait_subscription_sync(name,name,integer,numeric) "} -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.18867712088363497 Date: 2023-08-27 13:20:04+0320 {"commit": " e02b1f348eb4c48def00b7d5227238b4d9d41a4a", "author": "Sven Klemm<[email protected]>", "date": "Sun Aug 27 13:20:04 2023 +0200", "change summary": "Simplify schema move update script", "change details": "Use dynamic sql to create the ALTER FUNCTION statements for those functions that may not exist in previous versions. "} --------------------------------------------------------------------------------Method 4: We can also filter for all vectors after a given date by only specifying a start date in our query.Method 5: Similarly, we can filter for all vectors before a given date by only specifying an end date in our query.# Method 4: Query all vectors after start_datedocs_with_score = db.similarity_search_with_score(query, start_date=start_dt)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print("Date: ", doc.metadata["date"]) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.17488396167755127 Date: 2023-08-29 18:13:24+0320 {"commit": " e4facda540286b0affba47ccc63959fefe2a7b26", "author": "Sven Klemm<[email protected]>", "date": "Tue Aug 29 18:13:24 2023 +0200", "change summary": "Add compatibility layer for _timescaledb_internal functions", "change details": "With timescaledb 2.12 all the functions present in _timescaledb_internal were moved into the _timescaledb_functions schema to improve schema security. This patch adds a compatibility layer so external callers of these internal functions will not break a |
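Since the notebook shows code for Method 4 only, here is a sketch of Method 5 following the same call pattern: pass only end_date to restrict the search to vectors before that date. query and end_dt are the variables defined earlier in this section.
# Method 5: Query all vectors before end_dt
docs_with_score = db.similarity_search_with_score(query, end_date=end_dt)
for doc, score in docs_with_score:
    print("-" * 80)
    print("Score: ", score)
    print("Date: ", doc.metadata["date"])
    print(doc.page_content)
    print("-" * 80)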
666 | https://python.langchain.com/docs/integrations/vectorstores/typesense | ComponentsVector storesTypesenseOn this pageTypesenseTypesense is an open source, in-memory search engine that you can either self-host or run on Typesense Cloud.Typesense focuses on performance by storing the entire index in RAM (with a backup on disk) and also focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.It also lets you combine attribute-based filtering together with vector queries, to fetch the most relevant documents.This notebook shows you how to use Typesense as your VectorStore.Let's first install our dependencies:pip install typesense openapi-schema-pydantic openai tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Typesensefrom langchain.document_loaders import TextLoaderLet's import our test dataset:loader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()docsearch = Typesense.from_documents( docs, embeddings, typesense_client_params={ "host": "localhost", # Use xxx.a1.typesense.net for Typesense Cloud "port": "8108", # Use 443 for Typesense Cloud "protocol": "http", # Use https for Typesense Cloud "typesense_api_key": "xyz", "typesense_collection_name": "lang-chain", },)Similarity Searchquery = "What did the president say about Ketanji Brown Jackson"found_docs = docsearch.similarity_search(query)print(found_docs[0].page_content)Typesense as a RetrieverTypesense, like all the other vector stores, can be used as a LangChain Retriever, using cosine similarity.retriever = docsearch.as_retriever()retrieverquery = "What did the president say about Ketanji Brown Jackson"retriever.get_relevant_documents(query)[0]PreviousTimescale Vector (Postgres)NextUSearchSimilarity SearchTypesense as a Retriever |
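A minimal sketch (not part of the original notebook) of configuring the retriever through the generic LangChain search_kwargs option; docsearch is the store created above, and k=2 is illustrative:
retriever = docsearch.as_retriever(search_kwargs={"k": 2})  # return only the top 2 matches
for d in retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson"):
    print(d.page_content[:100])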
667 | https://python.langchain.com/docs/integrations/vectorstores/usearch | ComponentsVector storesUSearchOn this pageUSearchUSearch is a Smaller & Faster Single-File Vector Search EngineUSearch's base functionality is identical to FAISS, and the interface should look familiar if you have ever investigated Approximate Nearest Neighbors search. FAISS is a widely recognized standard for high-performance vector search engines. USearch and FAISS both employ the same HNSW algorithm, but they differ significantly in their design principles. USearch is compact and broadly compatible without sacrificing performance, with a primary focus on user-defined metrics and fewer dependencies.pip install usearchWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import USearchfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../extras/modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = USearch.from_documents(docs, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity Search with scoreThe similarity_search_with_score method allows you to return not only the documents but also the distance score of the query to them. The returned distance score is L2 distance. Therefore, a lower score is better.docs_and_scores = db.similarity_search_with_score(query)docs_and_scores[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. 
One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../extras/modules/state_of_the_union.txt'}), 0.1845687)PreviousTypesenseNextValdSimilarity Search with score |
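Since the introduction notes that USearch's interface mirrors FAISS, a search with a precomputed query vector should look like the sketch below. This assumes USearch implements the FAISS-style similarity_search_by_vector method; db and embeddings are the objects created above.
# Assumption: similarity_search_by_vector is available, mirroring the FAISS vectorstore API
embedding_vector = embeddings.embed_query(query)  # embed the query text once
docs = db.similarity_search_by_vector(embedding_vector)  # search with the raw vector
print(docs[0].page_content)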
668 | https://python.langchain.com/docs/integrations/vectorstores/vald | ComponentsVector storesValdOn this pageValdVald is a highly scalable distributed fast approximate nearest neighbor (ANN) dense vector search engine.This notebook shows how to use functionality related to the Vald database.To run this notebook you need a running Vald cluster.
Check Get Started for more information.See the installation instructions.pip install vald-client-pythonBasic Examplefrom langchain.document_loaders import TextLoaderfrom langchain.embeddings import HuggingFaceEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Valdraw_documents = TextLoader('state_of_the_union.txt').load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents)embeddings = HuggingFaceEmbeddings()db = Vald.from_documents(documents, embeddings, host="localhost", port=8080)query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)docs[0].page_contentSimilarity search by vectorembedding_vector = embeddings.embed_query(query)docs = db.similarity_search_by_vector(embedding_vector)docs[0].page_contentSimilarity search with scoredocs_and_scores = db.similarity_search_with_score(query)docs_and_scores[0]Maximal Marginal Relevance Search (MMR)In addition to using similarity search in the retriever object, you can also use mmr as retriever.retriever = db.as_retriever(search_type="mmr")retriever.get_relevant_documents(query)Or use max_marginal_relevance_search directly:db.max_marginal_relevance_search(query, k=2, fetch_k=10)PreviousUSearchNextvearchBasic ExampleSimilarity search by vectorSimilarity search with scoreMaximal Marginal Relevance Search (MMR) |
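A short sketch (not in the original notebook) combining the two patterns above: an MMR retriever that passes the same k/fetch_k values through the generic LangChain search_kwargs option; db and query are defined in the Basic Example.
retriever = db.as_retriever(search_type="mmr", search_kwargs={"k": 2, "fetch_k": 10})
docs = retriever.get_relevant_documents(query)  # equivalent to max_marginal_relevance_search above
print(docs[0].page_content)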
669 | https://python.langchain.com/docs/integrations/vectorstores/vearch | ComponentsVector storesvearchvearchfrom langchain.document_loaders import TextLoaderfrom langchain.embeddings.huggingface import HuggingFaceEmbeddingsfrom langchain.text_splitter import RecursiveCharacterTextSplitterfrom transformers import AutoModel, AutoTokenizerfrom langchain.vectorstores.vearch import Vearch# replace with your local model pathmodel_path ="/data/zhx/zhx/langchain-ChatGLM_new/chatglm2-6b" tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda(0) /export/anaconda3/envs/vearch_cluster_langchain/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm Loading checkpoint shards: 100%|██████████| 7/7 [00:07<00:00, 1.01s/it]query = "你好!"response, history = model.chat(tokenizer, query, history=[])print(f"Human: {query}\nChatGLM:{response}\n")query = "你知道凌波微步吗,你知道都有谁学会了吗?"response, history = model.chat(tokenizer, query, history=history)print(f"Human: {query}\nChatGLM:{response}\n") Human: 你好! ChatGLM:你好👋!我是人工智能助手 ChatGLM2-6B,很高兴见到你,欢迎问我任何问题。 Human: 你知道凌波微步吗,你知道都有谁学会了吗? ChatGLM:凌波微步是一种步伐,最早出自《倚天屠龙记》。在电视剧《人民的名义》中,侯亮平也学会了凌波微步。 # Add your local knowledge filesfile_path = "/data/zhx/zhx/langchain-ChatGLM_new/knowledge_base/天龙八部/lingboweibu.txt" # your local file pathloader = TextLoader(file_path,encoding="utf-8")documents = loader.load()# split text into chunks and embed the chunkstext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)texts = text_splitter.split_documents(documents)# replace with your model pathembedding_path = '/data/zhx/zhx/langchain-ChatGLM_new/text2vec/text2vec-large-chinese'embeddings = HuggingFaceEmbeddings(model_name=embedding_path) No sentence-transformers model found with name /data/zhx/zhx/langchain-ChatGLM_new/text2vec/text2vec-large-chinese. 
Creating a new one with MEAN pooling.# first add your documents into the vearch vectorstorevearch_standalone = Vearch.from_documents( texts,embeddings,path_or_url="/data/zhx/zhx/langchain-ChatGLM_new/knowledge_base/localdb_new_test",table_name="localdb_new_test",flag=0)print("***************after is cluster res*****************")vearch_cluster = Vearch.from_documents( texts,embeddings,path_or_url="http://test-vearch-langchain-router.vectorbase.svc.ht1.n.jd.local",db_name="vearch_cluster_langchian",table_name="tobenumone",flag=1) docids ['18ce6747dca04a2c833e60e8dfd83c04', 'aafacb0e46574b378a9f433877ab06a8', '9776bccfdd8643a8b219ccee0596f370'] ***************after is cluster res***************** docids ['1841638988191686991', '-4519586577642625749', '5028230008472292907']query = "你知道凌波微步吗,你知道都有谁会凌波微步?"vearch_standalone_res=vearch_standalone.similarity_search(query, 3)for idx,tmp in enumerate(vearch_standalone_res): print(f"{'#'*20}第{idx+1}段相关文档{'#'*20}\n\n{tmp.page_content}\n")# combine your local knowledge and the query context = "".join([tmp.page_content for tmp in vearch_standalone_res])new_query = f"基于以下信息,尽可能准确的来回答用户的问题。背景信息:\n {context} \n 回答用户这个问题:{query}\n\n"response, history = model.chat(tokenizer, new_query, history=[])print(f"********ChatGLM:{response}\n")print("***************************after is cluster res******************************")query_c = "你知道凌波微步吗,你知道都有谁会凌波微步?"cluster_res=vearch_cluster.similarity_search(query_c, 3)for idx,tmp in enumerate(cluster_res): print(f"{'#'*20}第{idx+1}段相关文档{'#'*20}\n\n{tmp.page_content}\n")# combine your local knowledge and the query context_c = "".join([tmp.page_content for tmp in cluster_res])new_query_c = f"基于以下信息,尽可能准确的来回答用户的问题。背景信息:\n {context_c} \n 回答用户这个问题:{query_c}\n\n"response_c, history_c = model.chat(tokenizer, new_query_c, history=[])print(f"********ChatGLM:{response_c}\n") ####################第1段相关文档#################### 午饭过后,段誉又练“凌波微步”,走一步,吸一口气,走第二步时将气呼出,六十四卦走完,四肢全无麻痹之感,料想呼吸顺畅,便无害处。第二次再走时连走两步吸一口气,再走两步始行呼出。这“凌波微步”是以动功修习内功,脚步踏遍六十四卦一个周天,内息自然而然地也转了一个周天。因此他每走一遍,内力便有一分进益。 这般练了几天,“凌波微步”已走得颇为纯熟,不须再数呼吸,纵然疾行,气息也已无所窒滞。心意既畅,跨步时渐渐想到《洛神赋》中那些与“凌波微步”有关的句子:“仿佛兮若轻云之蔽月,飘飘兮若流风之回雪”,“竦轻躯以鹤立,若将飞而未翔”,“体迅飞凫,飘忽若神”,“动无常则,若危若安。进止难期,若往若还”。 百度简介 凌波微步是「逍遥派」独门轻功身法,精妙异常。 凌波微步乃是一门极上乘的轻功,所以列于卷轴之末,以易经八八六十四卦为基础,使用者按特定顺序踏着卦象方位行进,从第一步到最后一步正好行走一个大圈。此步法精妙异常,原是要待人练成「北冥神功」,吸人内力,自身内力已【颇为深厚】之后再练。 ####################第2段相关文档#################### 《天龙八部》第五回 微步縠纹生 卷轴中此外诸种经脉修习之法甚多,皆是取人内力的法门,段誉虽自语宽解,总觉习之有违本性,单是贪多务得,便非好事,当下暂不理会。 卷到卷轴末端,又见到了“凌波微步”那四字,登时便想起《洛神赋》中那些句子来:“凌波微步,罗袜生尘……转眄流精,光润玉颜。含辞未吐,气若幽兰。华容婀娜,令我忘餐。”曹子建那些千古名句,在脑海中缓缓流过:“秾纤得衷,修短合度,肩若削成,腰如约素。延颈秀项,皓质呈露。芳泽无加,铅华弗御。云髻峨峨,修眉连娟。丹唇外朗,皓齿内鲜。明眸善睐,靥辅承权。瑰姿艳逸,仪静体闲。柔情绰态,媚于语言……”这些句子用在木婉清身上,“这话倒也有理”;但如用之于神仙姊姊,只怕更为适合。想到神仙姊姊的姿容体态,“皎若太阳升朝霞,灼若芙蓉出绿波”,但觉依她吩咐行事,实为人生至乐,心想:“我先来练这‘凌波微步’,此乃逃命之妙法,非害人之手段也,练之有百利而无一害。” ####################第3段相关文档#################### 《天龙八部》第二回 玉壁月华明 再展帛卷,长卷上源源皆是裸女画像,或立或卧,或现前胸,或见后背。人像的面容都是一般,但或喜或愁,或含情凝眸,或轻嗔薄怒,神情各异。一共有三十六幅图像,每幅像上均有颜色细线,注明穴道部位及练功法诀。 帛卷尽处题着“凌波微步”四字,其后绘的是无数足印,注明“妇妹”、“无妄”等等字样,尽是《易经》中的方位。段誉前几日还正全心全意地钻研《易经》,一见到这些名称,登时精神大振,便似遇到故交良友一般。只见足印密密麻麻,不知有几千百个,自一个足印至另一个足印均有绿线贯串,线上绘有箭头,最后写着一行字道:“步法神妙,保身避敌,待积内力,再取敌命。” 段誉心道:“神仙姊姊所遗的步法,必定精妙之极,遇到强敌时脱身逃走,那就很好,‘再取敌命’也就不必了。” 卷好帛卷,对之作了两个揖,珍而重之地揣入怀中,转身对那玉像道:“神仙姊姊,你吩咐我朝午晚三次练功,段誉不敢有违。今后我对人加倍客气,别人不会来打我,我自然也不会去吸他内力。你这套‘凌波微步’我更要用心练熟,眼见不对,立刻溜之大吉,就吸不到他内力了。”至于“杀尽我逍遥派弟子”一节,却想也不敢去想。 ********ChatGLM:凌波微步是一门极上乘的轻功,源于《易经》八八六十四卦。使用者按照特定顺序踏着卦象方位行进,从第一步到最后一步正好行走一个大圈。这门轻功精妙异常,可以使人内力大为提升,但需在练成“北冥神功”后才能真正掌握。凌波微步在金庸先生的《天龙八部》中得到了充分的描写。 ***************************after is cluster res****************************** 
####################第1段相关文档#################### 午饭过后,段誉又练“凌波微步”,走一步,吸一口气,走第二步时将气呼出,六十四卦走完,四肢全无麻痹之感,料想呼吸顺畅,便无害处。第二次再走时连走两步吸一口气,再走两步始行呼出。这“凌波微步”是以动功修习内功,脚步踏遍六十四卦一个周天,内息自然而然地也转了一个周天。因此他每走一遍,内力便有一分进益。 这般练了几天,“凌波微步”已走得颇为纯熟,不须再数呼吸,纵然疾行,气息也已无所窒滞。心意既畅,跨步时渐渐想到《洛神赋》中那些与“凌波微步”有关的句子:“仿佛兮若轻云之蔽月,飘飘兮若流风之回雪”,“竦轻躯以鹤立,若将飞而未翔”,“体迅飞凫,飘忽若神”,“动无常则,若危若安。进止难期,若往若还”。 百度简介 凌波微步是「逍遥派」独门轻功身法,精妙异常。 凌波微步乃是一门极上乘的轻功,所以列于卷轴之末,以易经八八六十四卦为基础,使用者按特定顺序踏着卦象方位行进,从第一步到最后一步正好行走一个大圈。此步法精妙异常,原是要待人练成「北冥神功」,吸人内力,自身内力已【颇为深厚】之后再练。 ####################第2段相关文档#################### 《天龙八部》第五回 微步縠纹生 卷轴中此外诸种经脉修习之法甚多,皆是取人内力的法门,段誉虽自语宽解,总觉习之有违本性,单是贪多务得,便非好事,当下暂不理会。 卷到卷轴末端,又见到了“凌波微步”那四字,登时便想起《洛神赋》中那些句子来:“凌波微步,罗袜生尘……转眄流精,光润玉颜。含辞未吐,气若幽兰。华容婀娜,令我忘餐。”曹子建那些千古名句,在脑海中缓缓流过:“秾纤得衷,修短合度,肩若削成,腰如约素。延颈秀项,皓质呈露。芳泽无加,铅华弗御。云髻峨峨,修眉连娟。丹唇外朗,皓齿内鲜。明眸善睐,靥辅承权。瑰姿艳逸,仪静体闲。柔情绰态,媚于语言……”这些句子用在木婉清身上,“这话倒也有理”;但如用之于神仙姊姊,只怕更为适合。想到神仙姊姊的姿容体态,“皎若太阳升朝霞,灼若芙蓉出绿波”,但觉依她吩咐行事,实为人生至乐,心想:“我先来练这‘凌波微步’,此乃逃命之妙法,非害人之手段也,练之有百利而无一害。” ####################第3段相关文档#################### 《天龙八部》第二回 玉壁月华明 再展帛卷,长卷上源源皆是裸女画像,或立或卧,或现前胸,或见后背。人像的面容都是一般,但或喜或愁,或含情凝眸,或轻嗔薄怒,神情各异。一共有三十六幅图像,每幅像上均有颜色细线,注明穴道部位及练功法诀。 帛卷尽处题着“凌波微步”四字,其后绘的是无数足印,注明“妇妹”、“无妄”等等字样,尽是《易经》中的方位。段誉前几日还正全心全意地钻研《易经》,一见到这些名称,登时精神大振,便似遇到故交良友一般。只见足印密密麻麻,不知有几千百个,自一个足印至另一个足印均有绿线贯串,线上绘有箭头,最后写着一行字道:“步法神妙,保身避敌,待积内力,再取敌命。” 段誉心道:“神仙姊姊所遗的步法,必定精妙之极,遇到强敌时脱身逃走,那就很好,‘再取敌命’也就不必了。” 卷好帛卷,对之作了两个揖,珍而重之地揣入怀中,转身对那玉像道:“神仙姊姊,你吩咐我朝午晚三次练功,段誉不敢有违。今后我对人加倍客气,别人不会来打我,我自然也不会去吸他内力。你这套‘凌波微步’我更要用心练熟,眼见不对,立刻溜之大吉,就吸不到他内力了。”至于“杀尽我逍遥派弟子”一节,却想也不敢去想。 ********ChatGLM:凌波微步是一门极上乘的轻功,源于《易经》中的六十四卦。使用者按照特定顺序踏着卦象方位行进,从第一步到最后一步正好行走一个大圈。这门轻功精妙异常,可以使人内力增进,但需要谨慎练习,避免伤害他人。凌波微步在逍遥派中尤为流行,但并非所有逍遥派弟子都会凌波微步。 query = "你知道vearch是什么吗?"response, history = model.chat(tokenizer, query, history=history)print(f"Human: {query}\nChatGLM:{response}\n")vearch_info = ["Vearch 是一款存储大语言模型数据的向量数据库,用于存储和快速搜索模型embedding后的向量,可用于基于个人知识库的大模型应用", "Vearch 支持OpenAI, Llama, ChatGLM等模型,以及LangChain库", "vearch 是基于C语言,go语言开发的,并提供python接口,可以直接通过pip安装"]vearch_source=[{'source': '/data/zhx/zhx/langchain-ChatGLM_new/knowledge_base/tlbb/three_body.txt'},{'source': '/data/zhx/zhx/langchain-ChatGLM_new/knowledge_base/tlbb/three_body.txt'},{'source': '/data/zhx/zhx/langchain-ChatGLM_new/knowledge_base/tlbb/three_body.txt'}]vearch_standalone.add_texts(vearch_info,vearch_source)print("*****************after is cluster res********************")vearch_cluster.add_texts(vearch_info,vearch_source) Human: 你知道vearch是什么吗? 
ChatGLM:是的,我知道 Vearch。Vearch 是一种用于计算机械系统极化子的工具,它可以用于模拟和优化电路的性能。它是一个基于Matlab的电路仿真软件,可以用于设计和分析各种类型的电路,包括交流电路和直流电路。 docids ['eee5e7468434427eb49829374c1e8220', '2776754da8fc4bb58d3e482006010716', '9223acd6d89d4c2c84ff42677ac0d47c'] *****************after is cluster res******************** docids ['-4311783201092343475', '-2899734009733762895', '1342026762029067927'] ['-4311783201092343475', '-2899734009733762895', '1342026762029067927']query3 = "你知道vearch是什么吗?"res1 = vearch_standalone.similarity_search(query3, 3)for idx,tmp in enumerate(res1): print(f"{'#'*20}第{idx+1}段相关文档{'#'*20}\n\n{tmp.page_content}\n")context1 = "".join([tmp.page_content for tmp in res1])new_query1 = f"基于以下信息,尽可能准确的来回答用户的问题。背景信息:\n {context1} \n 回答用户这个问题:{query3}\n\n"response, history = model.chat(tokenizer, new_query1, history=[])print(f"***************ChatGLM:{response}\n")print("***************after is cluster res******************")query3_c = "你知道vearch是什么吗?"res1_c = vearch_cluster.similarity_search(query3_c, 3)for idx,tmp in enumerate(res1_c): print(f"{'#'*20}第{idx+1}段相关文档{'#'*20}\n\n{tmp.page_content}\n")context1_C = "".join([tmp.page_content for tmp in res1_c])new_query1_c = f"基于以下信息,尽可能准确的来回答用户的问题。背景信息:\n {context1_C} \n 回答用户这个问题:{query3_c}\n\n"response_c, history_c = model.chat(tokenizer, new_query1_c, history=[])print(f"***************ChatGLM:{response_c}\n") ####################第1段相关文档#################### Vearch 是一款存储大语言模型数据的向量数据库,用于存储和快速搜索模型embedding后的向量,可用于基于个人知识库的大模型应用 ####################第2段相关文档#################### Vearch 支持OpenAI, Llama, ChatGLM等模型,以及LangChain库 ####################第3段相关文档#################### vearch 是基于C语言,go语言开发的,并提供python接口,可以直接通过pip安装 ***************ChatGLM:是的,Varch是一个向量数据库,旨在存储和快速搜索模型embedding后的向量。它支持OpenAI、ChatGLM等模型,并可直接通过pip安装。 ***************after is cluster res****************** ####################第1段相关文档#################### Vearch 是一款存储大语言模型数据的向量数据库,用于存储和快速搜索模型embedding后的向量,可用于基于个人知识库的大模型应用 ####################第2段相关文档#################### Vearch 支持OpenAI, Llama, ChatGLM等模型,以及LangChain库 ####################第3段相关文档#################### vearch 是基于C语言,go语言开发的,并提供python接口,可以直接通过pip安装 ***************ChatGLM:是的,Varch是一个向量数据库,旨在存储和快速搜索模型embedding后的向量。它支持OpenAI,ChatGLM等模型,并可用于基于个人知识库的大模型应用。Varch基于C语言和Go语言开发,并提供Python接口,可以通过pip安装。 # delete and get functions need to maintain docids # your docids res_d=vearch_standalone.delete(['eee5e7468434427eb49829374c1e8220', '2776754da8fc4bb58d3e482006010716', '9223acd6d89d4c2c84ff42677ac0d47c'])print("delete vearch standalone docid",res_d)query = "你知道vearch是什么吗?"response, history = model.chat(tokenizer, query, history=[])print(f"Human: {query}\nChatGLM:{response}\n")res_cluster=vearch_cluster.delete(['-4311783201092343475', '-2899734009733762895', '1342026762029067927'])print("delete vearch cluster docid",res_cluster)query_c = "你知道vearch是什么吗?"response_c, history = model.chat(tokenizer, query_c, history=[])print(f"Human: {query_c}\nChatGLM:{response_c}\n")get_delet_doc=vearch_standalone.get(['eee5e7468434427eb49829374c1e8220', '2776754da8fc4bb58d3e482006010716', '9223acd6d89d4c2c84ff42677ac0d47c'])print("after delete docid to query again:",get_delet_doc)get_id_doc=vearch_standalone.get(['18ce6747dca04a2c833e60e8dfd83c04', 'aafacb0e46574b378a9f433877ab06a8', '9776bccfdd8643a8b219ccee0596f370','9223acd6d89d4c2c84ff42677ac0d47c'])print("get existed docid",get_id_doc)get_delet_doc=vearch_cluster.get(['-4311783201092343475', '-2899734009733762895', '1342026762029067927'])print("after delete docid to query 
again:",get_delet_doc)get_id_doc=vearch_cluster.get(['1841638988191686991', '-4519586577642625749', '5028230008472292907','1342026762029067927'])print("get existed docid",get_id_doc) delete vearch standalone docid True Human: 你知道vearch是什么吗? ChatGLM:Vearch是一种用于处理向量的库,可以轻松地将向量转换为矩阵,并提供许多有用的函数和算法,以操作向量。 Vearch支持许多常见的向量操作,例如加法、减法、乘法、除法、矩阵乘法、求和、统计和归一化等。 Vearch还提供了一些高级功能,例如L2正则化、协方差矩阵、稀疏矩阵和奇异值分解等。 delete vearch cluster docid True Human: 你知道vearch是什么吗? ChatGLM:Vearch是一种用于处理向量数据的函数,可以应用于多种不同的编程语言和数据结构中。 Vearch最初是作为Java中一个名为“vearch”的包而出现的,它的目的是提供一种高效的向量数据结构。它支持向量的多态性,可以轻松地实现不同类型的向量之间的转换,同时还支持向量的压缩和反向操作等操作。 后来,Vearch被广泛应用于其他编程语言中,如Python、Ruby、JavaScript等。在Python中,它被称为“vectorize”,在Ruby中,它被称为“Vector”。 Vearch的主要优点是它的向量操作具有多态性,可以应用于不同类型的向量数据,同时还支持高效的向量操作和反向操作,因此可以提高程序的性能。 after delete docid to query again: {} get existed docid {'18ce6747dca04a2c833e60e8dfd83c04': Document(page_content='《天龙八部》第二回 玉壁月华明\n\n再展帛卷,长卷上源源皆是裸女画像,或立或卧,或现前胸,或见后背。人像的面容都是一般,但或喜或愁,或含情凝眸,或轻嗔薄怒,神情各异。一共有三十六幅图像,每幅像上均有颜色细线,注明穴道部位及练功法诀。\n\n帛卷尽处题着“凌波微步”四字,其后绘的是无数足印,注明“妇妹”、“无妄”等等字样,尽是《易经》中的方位。段誉前几日还正全心全意地钻研《易经》,一见到这些名称,登时精神大振,便似遇到故交良友一般。只见足印密密麻麻,不知有几千百个,自一个足印至另一个足印均有绿线贯串,线上绘有箭头,最后写着一行字道:“步法神妙,保身避敌,待积内力,再取敌命。”\n\n段誉心道:“神仙姊姊所遗的步法,必定精妙之极,遇到强敌时脱身逃走,那就很好,‘再取敌命’也就不必了。”\n卷好帛卷,对之作了两个揖,珍而重之地揣入怀中,转身对那玉像道:“神仙姊姊,你吩咐我朝午晚三次练功,段誉不敢有违。今后我对人加倍客气,别人不会来打我,我自然也不会去吸他内力。你这套‘凌波微步’我更要用心练熟,眼见不对,立刻溜之大吉,就吸不到他内力了。”至于“杀尽我逍遥派弟子”一节,却想也不敢去想。', metadata={'source': '/data/zhx/zhx/langchain-ChatGLM_new/knowledge_base/天龙八部/lingboweibu.txt'}), 'aafacb0e46574b378a9f433877ab06a8': Document(page_content='《天龙八部》第五回 微步縠纹生\n\n卷轴中此外诸种经脉修习之法甚多,皆是取人内力的法门,段誉虽自语宽解,总觉习之有违本性,单是贪多务得,便非好事,当下暂不理会。\n\n卷到卷轴末端,又见到了“凌波微步”那四字,登时便想起《洛神赋》中那些句子来:“凌波微步,罗袜生尘……转眄流精,光润玉颜。含辞未吐,气若幽兰。华容婀娜,令我忘餐。”曹子建那些千古名句,在脑海中缓缓流过:“秾纤得衷,修短合度,肩若削成,腰如约素。延颈秀项,皓质呈露。芳泽无加,铅华弗御。云髻峨峨,修眉连娟。丹唇外朗,皓齿内鲜。明眸善睐,靥辅承权。瑰姿艳逸,仪静体闲。柔情绰态,媚于语言……”这些句子用在木婉清身上,“这话倒也有理”;但如用之于神仙姊姊,只怕更为适合。想到神仙姊姊的姿容体态,“皎若太阳升朝霞,灼若芙蓉出绿波”,但觉依她吩咐行事,实为人生至乐,心想:“我先来练这‘凌波微步’,此乃逃命之妙法,非害人之手段也,练之有百利而无一害。”', metadata={'source': '/data/zhx/zhx/langchain-ChatGLM_new/knowledge_base/天龙八部/lingboweibu.txt'}), '9776bccfdd8643a8b219ccee0596f370': Document(page_content='午饭过后,段誉又练“凌波微步”,走一步,吸一口气,走第二步时将气呼出,六十四卦走完,四肢全无麻痹之感,料想呼吸顺畅,便无害处。第二次再走时连走两步吸一口气,再走两步始行呼出。这“凌波微步”是以动功修习内功,脚步踏遍六十四卦一个周天,内息自然而然地也转了一个周天。因此他每走一遍,内力便有一分进益。\n\n这般练了几天,“凌波微步”已走得颇为纯熟,不须再数呼吸,纵然疾行,气息也已无所窒滞。心意既畅,跨步时渐渐想到《洛神赋》中那些与“凌波微步”有关的句子:“仿佛兮若轻云之蔽月,飘飘兮若流风之回雪”,“竦轻躯以鹤立,若将飞而未翔”,“体迅飞凫,飘忽若神”,“动无常则,若危若安。进止难期,若往若还”。\n\n\n\n百度简介\n\n凌波微步是「逍遥派」独门轻功身法,精妙异常。\n\n凌波微步乃是一门极上乘的轻功,所以列于卷轴之末,以易经八八六十四卦为基础,使用者按特定顺序踏着卦象方位行进,从第一步到最后一步正好行走一个大圈。此步法精妙异常,原是要待人练成「北冥神功」,吸人内力,自身内力已【颇为深厚】之后再练。', metadata={'source': '/data/zhx/zhx/langchain-ChatGLM_new/knowledge_base/天龙八部/lingboweibu.txt'})} after delete docid to query again: {} get existed docid {'1841638988191686991': Document(page_content='《天龙八部》第二回 玉壁月华明\n\n再展帛卷,长卷上源源皆是裸女画像,或立或卧,或现前胸,或见后背。人像的面容都是一般,但或喜或愁,或含情凝眸,或轻嗔薄怒,神情各异。一共有三十六幅图像,每幅像上均有颜色细线,注明穴道部位及练功法诀。\n\n帛卷尽处题着“凌波微步”四字,其后绘的是无数足印,注明“妇妹”、“无妄”等等字样,尽是《易经》中的方位。段誉前几日还正全心全意地钻研《易经》,一见到这些名称,登时精神大振,便似遇到故交良友一般。只见足印密密麻麻,不知有几千百个,自一个足印至另一个足印均有绿线贯串,线上绘有箭头,最后写着一行字道:“步法神妙,保身避敌,待积内力,再取敌命。”\n\n段誉心道:“神仙姊姊所遗的步法,必定精妙之极,遇到强敌时脱身逃走,那就很好,‘再取敌命’也就不必了。”\n卷好帛卷,对之作了两个揖,珍而重之地揣入怀中,转身对那玉像道:“神仙姊姊,你吩咐我朝午晚三次练功,段誉不敢有违。今后我对人加倍客气,别人不会来打我,我自然也不会去吸他内力。你这套‘凌波微步’我更要用心练熟,眼见不对,立刻溜之大吉,就吸不到他内力了。”至于“杀尽我逍遥派弟子”一节,却想也不敢去想。', metadata={'source': '/data/zhx/zhx/langchain-ChatGLM_new/knowledge_base/天龙八部/lingboweibu.txt'}), '-4519586577642625749': Document(page_content='《天龙八部》第五回 
微步縠纹生\n\n卷轴中此外诸种经脉修习之法甚多,皆是取人内力的法门,段誉虽自语宽解,总觉习之有违本性,单是贪多务得,便非好事,当下暂不理会。\n\n卷到卷轴末端,又见到了“凌波微步”那四字,登时便想起《洛神赋》中那些句子来:“凌波微步,罗袜生尘……转眄流精,光润玉颜。含辞未吐,气若幽兰。华容婀娜,令我忘餐。”曹子建那些千古名句,在脑海中缓缓流过:“秾纤得衷,修短合度,肩若削成,腰如约素。延颈秀项,皓质呈露。芳泽无加,铅华弗御。云髻峨峨,修眉连娟。丹唇外朗,皓齿内鲜。明眸善睐,靥辅承权。瑰姿艳逸,仪静体闲。柔情绰态,媚于语言……”这些句子用在木婉清身上,“这话倒也有理”;但如用之于神仙姊姊,只怕更为适合。想到神仙姊姊的姿容体态,“皎若太阳升朝霞,灼若芙蓉出绿波”,但觉依她吩咐行事,实为人生至乐,心想:“我先来练这‘凌波微步’,此乃逃命之妙法,非害人之手段也,练之有百利而无一害。”', metadata={'source': '/data/zhx/zhx/langchain-ChatGLM_new/knowledge_base/天龙八部/lingboweibu.txt'}), '5028230008472292907': Document(page_content='午饭过后,段誉又练“凌波微步”,走一步,吸一口气,走第二步时将气呼出,六十四卦走完,四肢全无麻痹之感,料想呼吸顺畅,便无害处。第二次再走时连走两步吸一口气,再走两步始行呼出。这“凌波微步”是以动功修习内功,脚步踏遍六十四卦一个周天,内息自然而然地也转了一个周天。因此他每走一遍,内力便有一分进益。\n\n这般练了几天,“凌波微步”已走得颇为纯熟,不须再数呼吸,纵然疾行,气息也已无所窒滞。心意既畅,跨步时渐渐想到《洛神赋》中那些与“凌波微步”有关的句子:“仿佛兮若轻云之蔽月,飘飘兮若流风之回雪”,“竦轻躯以鹤立,若将飞而未翔”,“体迅飞凫,飘忽若神”,“动无常则,若危若安。进止难期,若往若还”。\n\n\n\n百度简介\n\n凌波微步是「逍遥派」独门轻功身法,精妙异常。\n\n凌波微步乃是一门极上乘的轻功,所以列于卷轴之末,以易经八八六十四卦为基础,使用者按特定顺序踏着卦象方位行进,从第一步到最后一步正好行走一个大圈。此步法精妙异常,原是要待人练成「北冥神功」,吸人内力,自身内力已【颇为深厚】之后再练。', metadata={'source': '/data/zhx/zhx/langchain-ChatGLM_new/knowledge_base/天龙八部/lingboweibu.txt'})}PreviousValdNextVectara |
670 | https://python.langchain.com/docs/integrations/vectorstores/vectara | ComponentsVector storesVectaraOn this pageVectaraVectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy.
See the Vectara API documentation for more information on how to use the API.This notebook shows how to use functionality related to Vectara's integration with LangChain.
Note that unlike many other integrations in this category, Vectara provides an end-to-end managed service for Grounded Generation (aka retrieval augmented generation), which includes:A way to extract text from document files and chunk them into sentences.Its own embeddings model and vector store - each text segment is encoded into a vector embedding and stored in the Vectara internal vector storeA query service that automatically encodes the query into an embedding and retrieves the most relevant text segments (including support for Hybrid Search)All of these are supported in this LangChain integration.SetupYou will need a Vectara account to use Vectara with LangChain. To get started, use the following steps (see our quickstart guide):Sign up for a Vectara account if you don't already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the "Create Corpus" button. You then provide a name for your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.Next you'll need to create API keys to access the corpus. Click on the "Authorization" tab in the corpus view and then the "Create API Key" button. Give your key a name, and choose whether you want query only or query+index for your key. Click "Create" and you now have an active API key. Keep this key confidential. To use LangChain with Vectara, you'll need to have these three values: customer ID, corpus ID and api_key.
You can provide those to LangChain in two ways:Include in your environment these three variables: VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY.For example, you can set these variables using os.environ and getpass as follows:import osimport getpassos.environ["VECTARA_CUSTOMER_ID"] = getpass.getpass("Vectara Customer ID:")os.environ["VECTARA_CORPUS_ID"] = getpass.getpass("Vectara Corpus ID:")os.environ["VECTARA_API_KEY"] = getpass.getpass("Vectara API Key:")Provide them as arguments when creating the Vectara vectorstore object:vectorstore = Vectara( vectara_customer_id=vectara_customer_id, vectara_corpus_id=vectara_corpus_id, vectara_api_key=vectara_api_key )Connecting to Vectara from LangChainIn this example, we assume that you've created an account and a corpus, and added your VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY (created with permissions for both indexing and query) as environment variables.The corpus has 3 fields defined as metadata for filtering:url: a string field containing the source URL of the document (where relevant)speech: a string field containing the name of the speechauthor: the name of the authorLet's start by ingesting 3 documents into the corpus:The State of the Union speech from 2022, available in the LangChain repository as a text fileThe "I have a dream" speech by Dr. KingThe "We Shall Fight on the Beaches" speech by Winston Churchillfrom langchain.embeddings import FakeEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Vectarafrom langchain.document_loaders import TextLoaderfrom langchain.llms import OpenAIfrom langchain.chains import ConversationalRetrievalChainfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfoloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)vectara = Vectara.from_documents( docs, embedding=FakeEmbeddings(size=768), doc_metadata={"speech": "state-of-the-union", "author": "Biden"},)Vectara's indexing API provides a file upload API where the file is handled directly by Vectara - pre-processed, chunked optimally and added to the Vectara vector store.
To use this, we added the add_files() method (as well as from_files()). Let's see this in action. We pick two PDF documents to upload: The "I have a dream" speech by Dr. KingChurchill's "We Shall Fight on the Beaches" speechimport tempfileimport urllib.requesturls = [ [ "https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf", "I-have-a-dream", "Dr. King" ], [ "https://www.parkwayschools.net/cms/lib/MO01931486/Centricity/Domain/1578/Churchill_Beaches_Speech.pdf", "we shall fight on the beaches", "Churchill" ],]files_list = []for url, _, _ in urls: name = tempfile.NamedTemporaryFile().name urllib.request.urlretrieve(url, name) files_list.append(name)docsearch: Vectara = Vectara.from_files( files=files_list, embedding=FakeEmbeddings(size=768), metadatas=[{"url": url, "speech": title, "author": author} for url, title, author in urls],)Similarity searchThe simplest scenario for using Vectara is to perform a similarity search. query = "What did the president say about Ketanji Brown Jackson"found_docs = vectara.similarity_search( query, n_sentence_context=0, filter="doc.speech = 'state-of-the-union'")print(found_docs[0].page_content) And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson.Similarity search with scoreSometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is.query = "What did the president say about Ketanji Brown Jackson"found_docs = vectara.similarity_search_with_score( query, filter="doc.speech = 'state-of-the-union'", score_threshold=0.2,)document, score = found_docs[0]print(document.page_content)print(f"\nScore: {score}") Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice. Score: 0.8299499Now let's do a similar search for content in the files we uploaded:query = "We must forever conduct our struggle"min_score = 1.2found_docs = vectara.similarity_search_with_score( query, filter="doc.speech = 'I-have-a-dream'", score_threshold=min_score,)print(f"With this threshold of {min_score} we have {len(found_docs)} documents") With this threshold of 1.2 we have 0 documentsquery = "We must forever conduct our struggle"min_score = 0.2found_docs = vectara.similarity_search_with_score( query, filter="doc.speech = 'I-have-a-dream'", score_threshold=min_score,)print(f"With this threshold of {min_score} we have {len(found_docs)} documents") With this threshold of 0.2 we have 5 documentsVectara as a RetrieverVectara, like all other LangChain vector stores, is most often used as a LangChain Retriever:retriever = vectara.as_retriever()retriever VectaraRetriever(tags=['Vectara'], metadata=None, vectorstore=<langchain.vectorstores.vectara.Vectara object at 0x13b15e9b0>, search_type='similarity', search_kwargs={'lambda_val': 0.025, 'k': 5, 'filter': '', 'n_sentence_context': '2'})query = "What did the president say about Ketanji Brown Jackson"retriever.get_relevant_documents(query)[0] Document(page_content='Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. 
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '596', 'len': '97', 'speech': 'state-of-the-union', 'author': 'Biden'})Using Vectara as a SelfQuery Retrievermetadata_field_info = [ AttributeInfo( name="speech", description="the name of the speech", type="string or list[string]", ), AttributeInfo( name="author", description="author of the speech", type="string or list[string]", ),]document_content_description = "the text of the speech"vectordb = Vectara()llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm(llm, vectara, document_content_description, metadata_field_info, verbose=True)retriever.get_relevant_documents("what did Biden say about the freedom?") /Users/ofer/dev/langchain/libs/langchain/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. warnings.warn( query='freedom' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='author', value='Biden') limit=None [Document(page_content='Well I know this nation. We will meet the test. To protect freedom and liberty, to expand fairness and opportunity. We will save democracy. As hard as these times have been, I am more optimistic about America today than I have been my whole life.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '346', 'len': '67', 'speech': 'state-of-the-union', 'author': 'Biden'}), Document(page_content='To our fellow Ukrainian Americans who forge a deep bond that connects our two nations we stand with you. Putin may circle Kyiv with tanks, but he will never gain the hearts and souls of the Ukrainian people. He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '740', 'len': '47', 'speech': 'state-of-the-union', 'author': 'Biden'}), Document(page_content='But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '413', 'len': '77', 'speech': 'state-of-the-union', 'author': 'Biden'}), Document(page_content='We can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. In this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. We have fought for freedom, expanded liberty, defeated totalitarianism and terror. And built the strongest, freest, and most prosperous nation the world has ever known. Now is the hour. \n\nOur moment of responsibility.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '906', 'len': '82', 'speech': 'state-of-the-union', 'author': 'Biden'}), Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. 
I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '0', 'len': '63', 'speech': 'state-of-the-union', 'author': 'Biden'})]retriever.get_relevant_documents("what did Dr. King say about the freedom?") query='freedom' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='author', value='Dr. King') limit=None [Document(page_content='And if America is to be a great nation, this must become true. So\nlet freedom ring from the prodigious hilltops of New Hampshire. Let freedom ring from the mighty\nmountains of New York. Let freedom ring from the heightening Alleghenies of Pennsylvania. Let\nfreedom ring from the snowcapped Rockies of Colorado.', metadata={'lang': 'eng', 'section': '3', 'offset': '1534', 'len': '55', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='And if America is to be a great nation, this must become true. So\nlet freedom ring from the prodigious hilltops of New Hampshire. Let freedom ring from the mighty\nmountains of New York. Let freedom ring from the heightening Alleghenies of Pennsylvania. Let\nfreedom ring from the snowcapped Rockies of Colorado.', metadata={'lang': 'eng', 'section': '3', 'offset': '1534', 'len': '55', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='Let freedom ring from the curvaceous slopes of\nCalifornia. But not only that. Let freedom ring from Stone Mountain of Georgia. Let freedom ring from Lookout\nMountain of Tennessee. Let freedom ring from every hill and molehill of Mississippi, from every\nmountain side. Let freedom ring . . .\nWhen we allow freedom to ring—when we let it ring from every city and every hamlet, from every state\nand every city, we will be able to speed up that day when all of God’s children, black men and white\nmen, Jews and Gentiles, Protestants and Catholics, will be able to join hands and sing in the words of the\nold Negro spiritual, “Free at last, Free at last, Great God a-mighty, We are free at last.”', metadata={'lang': 'eng', 'section': '3', 'offset': '1842', 'len': '52', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='Let freedom ring from the curvaceous slopes of\nCalifornia. But not only that. Let freedom ring from Stone Mountain of Georgia. Let freedom ring from Lookout\nMountain of Tennessee. 
Let freedom ring from every hill and molehill of Mississippi, from every\nmountain side. Let freedom ring . . .\nWhen we allow freedom to ring—when we let it ring from every city and every hamlet, from every state\nand every city, we will be able to speed up that day when all of God’s children, black men and white\nmen, Jews and Gentiles, Protestants and Catholics, will be able to join hands and sing in the words of the\nold Negro spiritual, “Free at last, Free at last, Great God a-mighty, We are free at last.”', metadata={'lang': 'eng', 'section': '3', 'offset': '1842', 'len': '52', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='Let freedom ring from the mighty\nmountains of New York. Let freedom ring from the heightening Alleghenies of Pennsylvania. Let\nfreedom ring from the snowcapped Rockies of Colorado. Let freedom ring from the curvaceous slopes of\nCalifornia. But not only that. Let freedom ring from Stone Mountain of Georgia.', metadata={'lang': 'eng', 'section': '3', 'offset': '1657', 'len': '57', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'})]PreviousvearchNextVespaConnecting to Vectara from LangChainSimilarity searchSimilarity search with scoreVectara as a RetrieverUsing Vectara as a SelfQuery Retriever |
671 | https://python.langchain.com/docs/integrations/vectorstores/vespa | ComponentsVector storesVespaOn this pageVespaVespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.This notebook shows how to use Vespa.ai as a LangChain vector store.In order to create the vector store, we use
pyvespa to create a
connection to a Vespa service.#!pip install pyvespaUsing the pyvespa package, you can either connect to a
Vespa Cloud instance
or a local
Docker instance.
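If you already have a Vespa application running somewhere, you can also connect to it directly with pyvespa rather than deploying a new one. A minimal sketch, where the URL and port are placeholder assumptions for a locally reachable instance:

from vespa.application import Vespa

# Connect to an existing Vespa endpoint; adjust url/port to your deployment.
vespa_app = Vespa(url="http://localhost", port=8080)
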
Here, we will create a new Vespa application and deploy that using Docker.Creating a Vespa applicationFirst, we need to create an application package:from vespa.package import ApplicationPackage, Field, RankProfileapp_package = ApplicationPackage(name="testapp")app_package.schema.add_fields( Field(name="text", type="string", indexing=["index", "summary"], index="enable-bm25"), Field(name="embedding", type="tensor<float>(x[384])", indexing=["attribute", "summary"], attribute=[f"distance-metric: angular"]),)app_package.schema.add_rank_profile( RankProfile(name="default", first_phase="closeness(field, embedding)", inputs=[("query(query_embedding)", "tensor<float>(x[384])")] ))This sets up a Vespa application with a schema for each document that contains
two fields: text for holding the document text and embedding for holding
the embedding vector. The text field is set up to use a BM25 index for
efficient text retrieval, and we'll see how to use this and hybrid search a
bit later.The embedding field is set up with a vector of length 384 to hold the
embedding representation of the text. See
Vespa's Tensor Guide
for more on tensors in Vespa.Lastly, we add a rank profile to
instruct Vespa how to order documents. Here we set this up with a
nearest neighbor search.Now we can deploy this application locally:from vespa.deployment import VespaDockervespa_docker = VespaDocker()vespa_app = vespa_docker.deploy(application_package=app_package)This deploys and creates a connection to a Vespa service. In case you
already have a Vespa application running, for instance in the cloud,
please refer to the PyVespa documentation for how to connect.Creating a Vespa vector storeNow, let's load some documents:from langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddingsembedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")Here, we also set up a local sentence embedder to transform the text to embedding
vectors. One could also use OpenAI embeddings, but the vector length needs to
be updated to 1536 to reflect the larger size of that embedding.To feed these to Vespa, we need to configure how the vector store should map to
fields in the Vespa application. Then we create the vector store directly from
this set of documents:vespa_config = dict( page_content_field="text", embedding_field="embedding", input_field="query_embedding")from langchain.vectorstores import VespaStoredb = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)This creates a Vespa vector store and feeds that set of documents to Vespa.
The vector store takes care of calling the embedding function for each document
and inserts them into the database.We can now query the vector store:query = "What did the president say about Ketanji Brown Jackson"results = db.similarity_search(query)print(results[0].page_content)This will use the embedding function given above to create a representation
for the query and use that to search Vespa. Note that this will use the
default ranking function, which we set up in the application package
above. You can use the ranking argument to similarity_search to
specify which ranking function to use.Please refer to the pyvespa documentation
for more information.This covers the basic usage of the Vespa store in LangChain.
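As a small illustration of the ranking argument mentioned above, combined with the standard k argument for the number of hits, a sketch along these lines should work (the "default" rank profile is the one defined earlier; the argument values are illustrative):

# Retrieve the top 2 hits, ordered by the "default" rank profile.
results = db.similarity_search(query, k=2, ranking="default")
for doc in results:
    print(doc.page_content)
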
Now you can return the results and continue using these in LangChain.Updating documentsAs an alternative to calling from_documents, you can create the vector
store directly and call add_texts from that. This can also be used to update
documents:query = "What did the president say about Ketanji Brown Jackson"results = db.similarity_search(query)result = results[0]result.page_content = "UPDATED: " + result.page_contentdb.add_texts([result.page_content], [result.metadata], result.metadata["id"])results = db.similarity_search(query)print(results[0].page_content)However, the pyvespa library contains methods to manipulate
content on Vespa, which you can use directly.Deleting documentsYou can delete documents using the delete function:result = db.similarity_search(query)# result[0].metadata["id"] == "id:testapp:testapp::32"db.delete(["32"])result = db.similarity_search(query)# result[0].metadata["id"] != "id:testapp:testapp::32"Again, the pyvespa connection contains methods to delete documents as well.Returning with scoresThe similarity_search method only returns the documents in order of
relevancy. To retrieve the actual scores:results = db.similarity_search_with_score(query)result = results[0]# result[1] ~= 0.463This is a result of using the "all-MiniLM-L6-v2" embedding model using the
cosine distance function (as given by the argument angular in the
application package).Different embedding functions need different distance functions, and Vespa
needs to know which distance function to use when ordering documents.
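For example, if your embedding model is trained for inner-product similarity rather than cosine similarity, the field could be declared with a different metric. A sketch, assuming Vespa's dotproduct distance metric:

from vespa.package import Field

# Declare the embedding field with an alternative distance metric;
# pick the metric that matches how your embeddings were trained.
app_package.schema.add_fields(
    Field(name="embedding", type="tensor<float>(x[384])",
          indexing=["attribute", "summary"],
          attribute=["distance-metric: dotproduct"]),
)
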
Please refer to the
documentation on distance functions
for more information.As retrieverTo use this vector store as a
LangChain retriever
simply call the as_retriever function, which is a standard vector store
method:db = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)retriever = db.as_retriever()query = "What did the president say about Ketanji Brown Jackson"results = retriever.get_relevant_documents(query)# results[0].metadata["id"] == "id:testapp:testapp::32"This allows for more general, unstructured, retrieval from the vector store.MetadataIn the example so far, we've only used the text and the embedding for that
text. Documents usually contain additional information, which in LangChain
is referred to as metadata.Vespa can contain many fields with different types by adding them to the application
package:app_package.schema.add_fields( # ... Field(name="date", type="string", indexing=["attribute", "summary"]), Field(name="rating", type="int", indexing=["attribute", "summary"]), Field(name="author", type="string", indexing=["attribute", "summary"]), # ...)vespa_app = vespa_docker.deploy(application_package=app_package)We can add some metadata fields in the documents:# Add metadatafor i, doc in enumerate(docs): doc.metadata["date"] = f"2023-{(i % 12)+1}-{(i % 28)+1}" doc.metadata["rating"] = range(1, 6)[i % 5] doc.metadata["author"] = ["Joe Biden", "Unknown"][min(i, 1)]And let the Vespa vector store know about these fields:vespa_config.update(dict(metadata_fields=["date", "rating", "author"]))Now, when searching for these documents, these fields will be returned.
Also, these fields can be filtered on:db = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)query = "What did the president say about Ketanji Brown Jackson"results = db.similarity_search(query, filter="rating > 3")# results[0].metadata["id"] == "id:testapp:testapp::34"# results[0].metadata["author"] == "Unknown"Custom queryIf the default behavior of the similarity search does not fit your
requirements, you can always provide your own query. Thus, you don't
need to provide all of the configuration to the vector store, but
rather just write this yourself.First, let's add a BM25 ranking function to our application:from vespa.package import FieldSetapp_package.schema.add_field_set(FieldSet(name="default", fields=["text"]))app_package.schema.add_rank_profile(RankProfile(name="bm25", first_phase="bm25(text)"))vespa_app = vespa_docker.deploy(application_package=app_package)db = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)Then, to perform a regular text search based on BM25:query = "What did the president say about Ketanji Brown Jackson"custom_query = { "yql": f"select * from sources * where userQuery()", "query": query, "type": "weakAnd", "ranking": "bm25", "hits": 4}results = db.similarity_search_with_score(query, custom_query=custom_query)# results[0][0].metadata["id"] == "id:testapp:testapp::32"# results[0][1] ~= 14.384All of the powerful search and query capabilities of Vespa can be used
by using a custom query. Please refer to the Vespa documentation on its
Query API for more details.Hybrid searchHybrid search means using both a classic term-based search such as
BM25 and a vector search and combining the results. We need to create
a new rank profile for hybrid search on Vespa:app_package.schema.add_rank_profile( RankProfile(name="hybrid", first_phase="log(bm25(text)) + 0.5 * closeness(field, embedding)", inputs=[("query(query_embedding)", "tensor<float>(x[384])")] ))vespa_app = vespa_docker.deploy(application_package=app_package)db = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)Here, we score each document as a combination of its BM25 score and its
distance score. We can query using a custom query:query = "What did the president say about Ketanji Brown Jackson"query_embedding = embedding_function.embed_query(query)nearest_neighbor_expression = "{targetHits: 4}nearestNeighbor(embedding, query_embedding)"custom_query = { "yql": f"select * from sources * where {nearest_neighbor_expression} and userQuery()", "query": query, "type": "weakAnd", "input.query(query_embedding)": query_embedding, "ranking": "hybrid", "hits": 4}results = db.similarity_search_with_score(query, custom_query=custom_query)# results[0][0].metadata["id"] == "id:testapp:testapp::32"# results[0][1] ~= 2.897Native embedders in VespaUp until this point we've used an embedding function in Python to provide
embeddings for the texts. Vespa supports embedding functions natively, so
you can defer this calculation to Vespa. One benefit is the ability to use
GPUs when embedding documents if you have a large collection.Please refer to Vespa embeddings
for more information.First, we need to modify our application package:from vespa.package import Component, Parameterapp_package.components = [ Component(id="hf-embedder", type="hugging-face-embedder", parameters=[ Parameter("transformer-model", {"path": "..."}), Parameter("tokenizer-model", {"url": "..."}), ] )]app_package.schema.add_fields( Field(name="hfembedding", type="tensor<float>(x[384])", is_document_field=False, indexing=["input text", "embed hf-embedder", "attribute", "summary"], attribute=["distance-metric: angular"], ))app_package.schema.add_rank_profile( RankProfile(name="hf_similarity", first_phase="closeness(field, hfembedding)", inputs=[("query(query_embedding)", "tensor<float>(x[384])")] ))Please refer to the embeddings documentation on adding embedder models
and tokenizers to the application. Note that the hfembedding field
includes instructions for embedding using the hf-embedder.Now we can query with a custom query:query = "What did the president say about Ketanji Brown Jackson"nearest_neighbor_expression = "{targetHits: 4}nearestNeighbor(hfembedding, query_embedding)"custom_query = { "yql": f"select * from sources * where {nearest_neighbor_expression}", "input.query(query_embedding)": f"embed(hf-embedder, \"{query}\")", "ranking": "hf_similarity", "hits": 4}results = db.similarity_search_with_score(query, custom_query=custom_query)# results[0][0].metadata["id"] == "id:testapp:testapp::32"# results[0][1] ~= 0.630Note that the query here includes an embed instruction to embed the query
using the same model as for the documents.Approximate nearest neighborIn all of the above examples, we've used exact nearest neighbor to
find results. However, for large collections of documents this is
not feasible as one has to scan through all documents to find the
best matches. To avoid this, we can use
approximate nearest neighbors.First, we can change the embedding field to create an HNSW index:from vespa.package import HNSWapp_package.schema.add_fields( Field(name="embedding", type="tensor<float>(x[384])", indexing=["attribute", "summary", "index"], ann=HNSW(distance_metric="angular", max_links_per_node=16, neighbors_to_explore_at_insert=200) ))This creates an HNSW index on the embedding data, which allows for efficient
searching. With this set, we can easily search using ANN by setting
the approximate argument to True:query = "What did the president say about Ketanji Brown Jackson"results = db.similarity_search(query, approximate=True)# results[0].metadata["id"] == "id:testapp:testapp::32"This covers most of the functionality in the Vespa vector store in LangChain.PreviousVectaraNextWeaviateReturning with scoresAs retrieverMetadataCustom queryHybrid searchNative embedders in VespaApproximate nearest neighbor |
672 | https://python.langchain.com/docs/integrations/vectorstores/weaviate | ComponentsVector storesWeaviateOn this pageWeaviateWeaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML-models, and scale seamlessly into billions of data objects.This notebook shows how to use functionality related to the Weaviatevector database.See the Weaviate installation instructions.pip install weaviate-client Requirement already satisfied: weaviate-client in /workspaces/langchain/.venv/lib/python3.9/site-packages (3.19.1) Requirement already satisfied: requests<2.29.0,>=2.28.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (2.28.2) Requirement already satisfied: validators<=0.21.0,>=0.18.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (0.20.0) Requirement already satisfied: tqdm<5.0.0,>=4.59.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (4.65.0) Requirement already satisfied: authlib>=1.1.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (1.2.0) Requirement already satisfied: cryptography>=3.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from authlib>=1.1.0->weaviate-client) (40.0.2) Requirement already satisfied: charset-normalizer<4,>=2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.1.0) Requirement already satisfied: idna<4,>=2.5 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.4) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (1.26.15) Requirement already satisfied: certifi>=2017.4.17 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (2023.5.7) Requirement already satisfied: decorator>=3.4.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from validators<=0.21.0,>=0.18.2->weaviate-client) (5.1.1) Requirement already satisfied: cffi>=1.12 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cryptography>=3.2->authlib>=1.1.0->weaviate-client) (1.15.1) Requirement already satisfied: pycparser in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cffi>=1.12->cryptography>=3.2->authlib>=1.1.0->weaviate-client) (2.21)We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")WEAVIATE_URL = getpass.getpass("WEAVIATE_URL:")os.environ["WEAVIATE_API_KEY"] = getpass.getpass("WEAVIATE_API_KEY:")WEAVIATE_API_KEY = os.environ["WEAVIATE_API_KEY"]from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Weaviatefrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = Weaviate.from_documents(docs, embeddings, weaviate_url=WEAVIATE_URL, by_text=False)query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. 
I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.AuthenticationWeaviate instances have authentication enabled by default. You can use either a username/password combination or an API key. import weaviateclient = weaviate.Client(url=WEAVIATE_URL, auth_client_secret=weaviate.AuthApiKey(WEAVIATE_API_KEY))# client = weaviate.Client(# url=WEAVIATE_URL,# auth_client_secret=weaviate.AuthClientPassword(# username = "WCS_USERNAME", # Replace w/ your WCS username# password = "WCS_PASSWORD", # Replace w/ your WCS password# ),# )vectorstore = Weaviate.from_documents(documents, embeddings, client=client, by_text=False) <langchain.vectorstores.weaviate.Weaviate object at 0x107f46550>Similarity search with scoreSometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is.
The returned distance score is cosine distance. Therefore, a lower score is better.docs = db.similarity_search_with_score(query, by_text=False)docs[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'_additional': {'vector': [-0.015289668, -0.011418287, -0.018540842, 0.00274522, 0.008310737, 0.014179829, 0.0080104275, -0.0010217049, -0.022327352, -0.0055002323, 0.018958665, 0.0020548347, -0.0044393567, -0.021609223, -0.013709779, -0.004543812, 0.025722157, 0.01821442, 0.031728342, -0.031388864, -0.01051083, -0.029978717, 0.011555385, 0.0009751897, 0.014675993, -0.02102166, 0.0301354, -0.031754456, 0.013526983, -0.03392191, 0.002800712, -0.0027778621, -0.024259781, -0.006202043, -0.019950991, 0.0176138, -0.0001134321, 0.008343379, 0.034209162, -0.027654583, 0.03149332, -0.0008389079, 0.0053696632, -0.0024644958, -0.016582303, 0.0066720927, -0.005036711, -0.035514854, 0.002942706, 0.02958701, 0.032825127, 0.015694432, -0.019846536, -0.024520919, -0.021974817, -0.0063293483, -0.01081114, -0.0084282495, 0.003025944, -0.010210521, 0.008780787, 0.014793505, -0.006486031, 0.011966679, 0.01774437, -0.006985459, -0.015459408, 0.01625588, -0.016007798, 0.01706541, 0.035567082, 0.0029900377, 0.021543937, -0.0068483613, 0.040868197, -0.010909067, -0.03339963, 0.010954766, -0.014689049, -0.021596165, 0.0025607906, -0.01599474, -0.017757427, -0.0041651614, 0.010752384, 0.0053598704, -0.00019248774, 0.008480477, -0.010517359, -0.005017126, 0.0020434097, 0.011699011, 0.0051379027, 0.021687564, -0.010830725, 0.020734407, -0.006606808, 0.029769806, 0.02817686, -0.047318324, 0.024338122, -0.001150642, -0.026231378, -0.012325744, -0.0318328, -0.0094989175, -0.00897664, 0.004736402, 0.0046482678, 0.0023241339, -0.005826656, 0.0072531262, 0.015498579, -0.0077819317, -0.011953622, -0.028934162, -0.033974137, -0.01574666, 0.0086306315, -0.029299757, 0.030213742, -0.0033148287, 0.013448641, -0.013474754, 0.015851116, 0.0076578907, -0.037421167, -0.015185213, 0.010719741, -0.014636821, 0.0001918757, 0.011783881, 0.0036330915, -0.02132197, 0.0031010215, 0.0024334856, -0.0033229894, 0.050086394, 0.0031973163, -0.01115062, 0.004837593, 0.01298512, -0.018645298, -0.02992649, 0.004837593, 0.0067634913, 0.02992649, 0.0145062525, 0.00566018, -0.0017055618, -0.0056667086, 0.012697867, 0.0150677, -0.007559964, -0.01991182, -0.005268472, -0.008650217, -0.008702445, 0.027550127, 0.0018296026, 0.0018589807, -0.033295177, 0.0036265631, -0.0060290387, 0.014349569, 0.019898765, 0.00023339267, 0.0034568228, -0.018958665, 0.012031963, 0.005186866, 0.020747464, -0.03817847, 0.028202975, -0.01340947, 0.00091643346, 0.014884903, -0.02314994, -0.024468692, 0.0004859627, 0.018828096, 0.012906778, 0.027941836, 0.027550127, -0.015028529, 0.018606128, 0.03449641, 
-0.017757427, -0.016020855, -0.012142947, 0.025304336, 0.00821281, -0.0025461016, -0.01902395, -0.635507, -0.030083172, 0.0177052, -0.0104912445, 0.012502013, -0.0010747487, 0.00465806, 0.020825805, -0.006887532, 0.013892576, -0.019977106, 0.029952602, 0.0012004217, -0.015211326, -0.008708973, -0.017809656, 0.008578404, -0.01612531, 0.022614606, -0.022327352, -0.032616217, 0.0050693536, -0.020629952, -0.01357921, 0.011477043, 0.0013938275, -0.0052390937, 0.0142581705, -0.013200559, 0.013252786, -0.033582427, 0.030579336, -0.011568441, 0.0038387382, 0.049564116, 0.016791213, -0.01991182, 0.010889481, -0.0028251936, 0.035932675, -0.02183119, -0.008611047, 0.025121538, 0.008349908, 0.00035641342, 0.009028868, 0.007631777, -0.01298512, -0.0015350056, 0.009982024, -0.024207553, -0.003332782, 0.006283649, 0.01868447, -0.010732798, -0.00876773, -0.0075273216, -0.016530076, 0.018175248, 0.016020855, -0.00067284, 0.013461698, -0.0065904865, -0.017809656, -0.014741276, 0.016582303, -0.0088526, 0.0046482678, 0.037473395, -0.02237958, 0.010112594, 0.022549322, 9.680491e-05, -0.0059082615, 0.020747464, -0.026923396, 0.01162067, -0.0074816225, 0.00024277734, 0.011842638, 0.016921783, -0.019285088, 0.005565517, 0.0046907025, 0.018109964, 0.0028676286, -0.015080757, -0.01536801, 0.0024726565, 0.020943318, 0.02187036, 0.0037767177, 0.018997835, -0.026766712, 0.005026919, 0.015942514, 0.0097469995, -0.0067830766, 0.023828901, -0.01523744, -0.0121494755, 0.00744898, 0.010445545, -0.011006993, -0.0032789223, 0.020394927, -0.017796598, -0.0029116957, 0.02318911, -0.031754456, -0.018188305, -0.031441092, -0.030579336, 0.0011832844, 0.0065023527, -0.027053965, 0.009198609, 0.022079272, -0.027785152, 0.005846241, 0.013500868, 0.016699815, 0.010445545, -0.025265165, -0.004396922, 0.0076774764, 0.014597651, -0.009851455, -0.03637661, 0.0004745379, -0.010112594, -0.009205136, 0.01578583, 0.015211326, -0.0011653311, -0.0015847852, 0.01489796, -0.01625588, -0.0029067993, -0.011411758, 0.0046286825, 0.0036330915, -0.0034143878, 0.011894866, -0.03658552, 0.007266183, -0.015172156, -0.02038187, -0.033739112, 0.0018948873, -0.011379116, -0.0020923733, -0.014075373, 0.01970291, 0.0020352493, -0.0075273216, -0.02136114, 0.0027974476, -0.009577259, -0.023815846, 0.024847344, 0.014675993, -0.019454828, -0.013670608, 0.011059221, -0.005438212, 0.0406854, 0.0006218364, -0.024494806, -0.041259903, 0.022013986, -0.0040019494, -0.0052097156, 0.015798887, 0.016190596, 0.0003794671, -0.017444061, 0.012325744, 0.024769, 0.029482553, -0.0046547963, -0.015955571, -0.018397218, -0.0102431625, 0.020577725, 0.016190596, -0.02038187, 0.030030945, -0.01115062, 0.0032560725, -0.014819618, 0.005647123, -0.0032560725, 0.0038909658, 0.013311543, 0.024285894, -0.0045699263, -0.010112594, 0.009237779, 0.008728559, 0.0423828, 0.010909067, 0.04225223, -0.031806685, -0.013696723, -0.025787441, 0.00838255, -0.008715502, 0.006776548, 0.01825359, -0.014480138, -0.014427911, -0.017600743, -0.030004831, 0.0145845935, 0.013762007, -0.013226673, 0.004168425, 0.0047951583, -0.026923396, 0.014675993, 0.0055851024, 0.015616091, -0.012306159, 0.007670948, 0.038439605, -0.015759716, 0.00016178355, 0.01076544, -0.008232395, -0.009942854, 0.018801982, -0.0025314125, 0.030709906, -0.001442791, -0.042617824, -0.007409809, -0.013109161, 0.031101612, 0.016229765, 0.006162872, 0.017901054, -0.0063619902, -0.0054577976, 0.01872364, -0.0032430156, 0.02966535, 0.006495824, 0.0011008625, -0.00024318536, -0.007011573, -0.002746852, -0.004298995, 0.007710119, 0.03407859, 
-0.008898299, -0.008565348, 0.030527107, -0.0003027576, 0.025082368, 0.0405026, 0.03867463, 0.0014117807, -0.024076983, 0.003933401, -0.009812284, 0.00829768, -0.0074293944, 0.0061530797, -0.016647588, -0.008147526, -0.015629148, 0.02055161, 0.000504324, 0.03157166, 0.010112594, -0.009009283, 0.026557801, -0.013997031, -0.0071878415, 0.009414048, -0.03480978, 0.006626393, 0.013827291, -0.011444401, -0.011823053, -0.0042957305, -0.016229765, -0.014192886, 0.026531687, -0.012534656, -0.0056569157, -0.0010331298, 0.007977786, 0.0033654245, -0.017352663, 0.034626983, -0.011803466, 0.009035396, 0.0005288057, 0.020421041, 0.013115689, -0.0152504975, -0.0111114485, 0.032355078, 0.0025542623, -0.0030226798, -0.00074261305, 0.030892702, -0.026218321, 0.0062803845, -0.018031623, -0.021504767, -0.012834964, 0.009009283, -0.0029198565, -0.014349569, -0.020434098, 0.009838398, -0.005993132, -0.013618381, -0.031597774, -0.019206747, 0.00086583785, 0.15835446, 0.033765227, 0.00893747, 0.015119928, -0.019128405, 0.0079582, -0.026270548, -0.015877228, 0.014153715, -0.011960151, 0.007853745, 0.006972402, -0.014101488, 0.02456009, 0.015119928, -0.0018850947, 0.019010892, -0.0046188897, -0.0050954674, -0.03548874, -0.01608614, -0.00324628, 0.009466276, 0.031911142, 7.033402e-05, -0.025095424, 0.020225188, 0.014832675, 0.023228282, -0.011829581, -0.011300774, -0.004073763, 0.0032544404, -0.0025983294, -0.020943318, 0.019650683, -0.0074424515, -0.0030977572, 0.0073379963, -0.00012455089, 0.010230106, -0.0007254758, -0.0025052987, -0.009681715, 0.03439196, -0.035123147, -0.0028806855, 0.012828437, 0.00018646932, 0.0066133365, 0.025539361, -0.00055736775, -0.025356563, -0.004537284, -0.007031158, 0.015825002, -0.013076518, 0.00736411, -0.00075689406, 0.0076578907, -0.019337315, -0.0024187965, -0.0110331075, -0.01187528, 0.0013048771, 0.0009711094, -0.027863493, -0.020616895, -0.0024481746, -0.0040802914, 0.014571536, -0.012306159, -0.037630077, 0.012652168, 0.009068039, -0.0018263385, 0.0371078, -0.0026831995, 0.011333417, -0.011548856, -0.0059049972, -0.025186824, 0.0069789304, -0.010993936, -0.0009066408, 0.0002619547, 0.01727432, -0.008082241, -0.018645298, 0.024507863, 0.0030895968, -0.0014656406, 0.011137563, -0.025513247, -0.022967143, -0.002033617, 0.006887532, 0.016621474, -0.019337315, -0.0030618508, 0.0014697209, -0.011679426, -0.003597185, -0.0049844836, -0.012332273, 0.009068039, 0.009407519, 0.027080078, -0.011215905, -0.0062542707, -0.0013114056, -0.031911142, 0.011209376, 0.009903682, -0.007351053, 0.021335026, -0.005510025, 0.0062053073, -0.010869896, -0.0045601334, 0.017561574, -0.024847344, 0.04115545, -0.00036457402, -0.0061400225, 0.013037347, -0.005480647, 0.005947433, 0.020799693, 0.014702106, 0.03272067, 0.026701428, -0.015550806, -0.036193814, -0.021126116, -0.005412098, -0.013076518, 0.027080078, 0.012900249, -0.0073379963, -0.015119928, -0.019781252, 0.0062346854, -0.03266844, 0.025278222, -0.022797402, -0.0028415148, 0.021452539, -0.023162996, 0.005170545, -0.022314297, 0.011215905, -0.009838398, -0.00033233972, 0.0019650683, 0.0026326037, 0.009753528, -0.0029639236, 0.021126116, 0.01944177, -0.00044883206, -0.00961643, 0.008846072, -0.0035775995, 0.02352859, -0.0020956376, 0.0053468137, 0.013305014, 0.0006418298, 0.023802789, 0.013122218, -0.0031548813, -0.027471786, 0.005046504, 0.008545762, 0.011261604, -0.01357921, -0.01110492, -0.014845733, -0.035384286, -0.02550019, 0.008154054, -0.0058331843, -0.008702445, -0.007311882, -0.006525202, 0.03817847, 0.00372449, 0.022914914, 
[... embedding vector values truncated ...]}, 'source': '../../../state_of_the_union.txt'}), 0.8154189703772676)PersistenceAnything uploaded to Weaviate is automatically persisted to the database. You do not need to call any specific method or pass any parameters for this to happen.Retriever optionsThis section goes over different options for how to use Weaviate as a retriever.MMRIn addition to using similarity search in the retriever object, you can also use mmr.retriever = db.as_retriever(search_type="mmr")retriever.get_relevant_documents(query)[0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})Question Answering with SourcesThis section goes over how to do question-answering with sources over an Index. It does this by using the RetrievalQAWithSourcesChain, which does the lookup of the documents from an Index. from langchain.chains import RetrievalQAWithSourcesChainfrom langchain.llms import OpenAIwith open("../../../state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)docsearch = Weaviate.from_texts( texts, embeddings, weaviate_url=WEAVIATE_URL, by_text=False, metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))],)chain = RetrievalQAWithSourcesChain.from_chain_type( OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())chain( {"question": "What did the president say about Justice Breyer"}, return_only_outputs=True,) {'answer': " The president honored Justice Breyer for his service and mentioned his legacy of excellence. He also nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to continue Justice Breyer's legacy.\n", 'sources': '31-pl, 34-pl'}PreviousVespaNextXataAuthenticationSimilarity search with scoreRetriever optionsMMRQuestion Answering with Sources |
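Since Weaviate persists everything automatically, you can also reconnect to a previously created index in a later session without re-ingesting. A minimal sketch, not from the original page: the index name "MyIndex" is hypothetical, and WEAVIATE_URL is assumed to be set as earlier on this page.

import weaviate
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Weaviate

# Reconnect to an existing, already-populated Weaviate index
client = weaviate.Client(url=WEAVIATE_URL)
db = Weaviate(
    client=client,
    index_name="MyIndex",          # hypothetical: the name of your existing index
    text_key="text",               # the property that stores the document text
    embedding=OpenAIEmbeddings(),  # needed for vector queries with by_text=False
    by_text=False,
)
docs = db.similarity_search("What did the president say about Justice Breyer", k=4)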
673 | https://python.langchain.com/docs/integrations/vectorstores/xata | ComponentsVector storesXataOn this pageXataXata is a serverless data platform, based on PostgreSQL. It provides a Python SDK for interacting with your database, and a UI for managing your data.
Xata has a native vector type, which can be added to any table, and supports similarity search. LangChain inserts vectors directly into Xata, and queries it for the nearest neighbors of a given vector, so that you can use all the LangChain Embeddings integrations with Xata.This notebook guides you through using Xata as a VectorStore.SetupCreate a database to use as a vector storeIn the Xata UI create a new database. You can name it whatever you want, in this notebook we'll use langchain.
Create a table; again, you can name it anything, but we will use vectors. Add the following columns via the UI:content of type "Text". This is used to store the Document.pageContent values.embedding of type "Vector". Use the dimension used by the model you plan to use. In this notebook we use OpenAI embeddings, which have 1536 dimensions.search of type "Text". This is used as a metadata column by this example.any other columns you want to use as metadata. They are populated from the Document.metadata object. For example, if in the Document.metadata object you have a title property, you can create a title column in the table and it will be populated (a short sketch follows at the end of this page).Let's first install our dependencies:pip install xata openai tiktoken langchainLet's load the OpenAI key into the environment. If you don't have one you can create an OpenAI account and create a key on this page.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Similarly, we need to get the environment variables for Xata. You can create a new API key by visiting your account settings. To find the database URL, go to the Settings page of the database that you have created. The database URL should look something like this: https://demo-uni3q8.eu-west-1.xata.sh/db/langchain.api_key = getpass.getpass("Xata API key: ")db_url = input("Xata database URL (copy it from your DB settings):")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.document_loaders import TextLoaderfrom langchain.vectorstores.xata import XataVectorStoreCreate the Xata vector storeLet's import our test dataset:loader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Now create the actual vector store, backed by the Xata table.vector_store = XataVectorStore.from_documents(docs, embeddings, api_key=api_key, db_url=db_url, table_name="vectors")After running the above command, if you go to the Xata UI, you should see the documents loaded together with their embeddings.Similarity Searchquery = "What did the president say about Ketanji Brown Jackson"found_docs = vector_store.similarity_search(query)print(found_docs)Similarity Search with score (vector distance)query = "What did the president say about Ketanji Brown Jackson"result = vector_store.similarity_search_with_score(query)for doc, score in result: print(f"document={doc}, score={score}")PreviousWeaviateNextZepSetupCreate a database to use as a vector storeCreate the Xata vector storeSimilarity SearchSimilarity Search with score (vector distance) |
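To illustrate the metadata-column mapping described above, here is a minimal sketch, assuming you added a title column of type "Text" to the vectors table in the Xata UI; the document texts below are made up. Each Document.metadata key that matches a column name is written to that column automatically. api_key, db_url, and embeddings are as defined earlier in this notebook.

from langchain.schema import Document
from langchain.vectorstores.xata import XataVectorStore

docs_with_meta = [
    Document(page_content="Xata is a serverless data platform.", metadata={"title": "About Xata"}),
    Document(page_content="LangChain integrates many vector stores.", metadata={"title": "About LangChain"}),
]
# The "title" metadata key populates the hypothetical "title" column in the table
vector_store = XataVectorStore.from_documents(
    docs_with_meta, embeddings, api_key=api_key, db_url=db_url, table_name="vectors"
)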
674 | https://python.langchain.com/docs/integrations/vectorstores/zep | ComponentsVector storesZepOn this pageZepZep is an open source long-term memory store for LLM applications. Zep makes it easy to add relevant documents,
chat history memory & rich user data to your LLM app's prompts.Note: The ZepVectorStore works with Documents and is intended to be used as a Retriever.
It offers functionality separate from Zep's ZepMemory class, which is designed for persisting, enriching
and searching your user's chat history.Why Zep's VectorStore? 🤖🚀Zep automatically embeds documents added to the Zep Vector Store using low-latency models local to the Zep server.
The Zep client also offers async interfaces for all document operations. These two together with Zep's chat memory
functionality make Zep ideal for building conversational LLM apps where latency and performance are important.InstallationFollow the Zep Quickstart Guide to install and get started with Zep.UsageYou'll need your Zep API URL and optionally an API key to use the Zep VectorStore.
See the Zep docs for more information.In the examples below, we're using Zep's auto-embedding feature, which automatically embeds documents on the Zep server
using low-latency embedding models.NoteThese examples use Zep's async interfaces. Call sync interfaces by removing the a prefix from the method names (a short synchronous sketch follows at the end of this page).If you pass in an Embeddings instance, Zep will use it to embed documents rather than auto-embed them.
You must also set your document collection to isAutoEmbedded === false. If you set your collection to isAutoEmbedded === false, you must pass in an Embeddings instance.Load or create a Collection from documentsfrom uuid import uuid4from langchain.document_loaders import WebBaseLoaderfrom langchain.text_splitter import RecursiveCharacterTextSplitterfrom langchain.vectorstores import ZepVectorStorefrom langchain.vectorstores.zep import CollectionConfigZEP_API_URL = "http://localhost:8000" # this is the API url of your Zep instanceZEP_API_KEY = "<optional_key>" # optional API Key for your Zep instancecollection_name = f"babbage{uuid4().hex}" # a unique collection name. alphanum only# Collection config is needed if we're creating a new Zep Collectionconfig = CollectionConfig( name=collection_name, description="<optional description>", metadata={"optional_metadata": "associated with the collection"}, is_auto_embedded=True, # we'll have Zep embed our documents using its low-latency embedder embedding_dimensions=1536 # this should match the model you've configured Zep to use.)# load the documentarticle_url = "https://www.gutenberg.org/cache/epub/71292/pg71292.txt"loader = WebBaseLoader(article_url)documents = loader.load()# split it into chunkstext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)docs = text_splitter.split_documents(documents)# Instantiate the VectorStore. Since the collection does not already exist in Zep,# it will be created and populated with the documents we pass in.vs = ZepVectorStore.from_documents(docs, collection_name=collection_name, config=config, api_url=ZEP_API_URL, api_key=ZEP_API_KEY )# wait for the collection embedding to completeasync def wait_for_ready(collection_name: str) -> None: from zep_python import ZepClient import time client = ZepClient(ZEP_API_URL, ZEP_API_KEY) while True: c = await client.document.aget_collection(collection_name) print( "Embedding status: " f"{c.document_embedded_count}/{c.document_count} documents embedded" ) time.sleep(1) if c.status == "ready": breakawait wait_for_ready(collection_name) Embedding status: 0/402 documents embedded Embedding status: 0/402 documents embedded Embedding status: 402/402 documents embeddedSimilarity Search Query over the Collection# query itquery = "what is the structure of our solar system?"docs_scores = await vs.asimilarity_search_with_relevance_scores(query, k=3)# print resultsfor d, s in docs_scores: print(d.page_content, " -> ", s, "\n====\n") Tables necessary to determine the places of the planets are not less necessary than those for the sun, moon, and stars. Some notion of the number and complexity of these tables may be formed, when we state that the positions of the two principal planets, (and these are the most necessary for the navigator,) Jupiter and Saturn, require each not less than one hundred and sixteen tables. Yet it is not only necessary to predict the position of these bodies, but it is likewise expedient to -> 0.8998482592744614 ==== tabulate the motions of the four satellites of Jupiter, to predict the exact times at which they enter his shadow, and at which their shadows cross his disc, as well as the times at which they are interposed between him and the Earth, and he between them and the Earth. 
Among the extensive classes of tables here enumerated, there are several which are in their nature permanent and unalterable, and would never require to be recomputed, if they could once be computed with perfect -> 0.8976143854195493 ==== the scheme of notation thus applied, immediately suggested the advantages which must attend it as an instrument for expressing the structure, operation, and circulation of the animal system; and we entertain no doubt of its adequacy for that purpose. Not only the mechanical connexion of the solid members of the bodies of men and animals, but likewise the structure and operation of the softer parts, including the muscles, integuments, membranes, &c. the nature, motion, -> 0.889982614061763 ====Search over Collection Re-ranked by MMRquery = "what is the structure of our solar system?"docs = await vs.asearch(query, search_type="mmr", k=3)for d in docs: print(d.page_content, "\n====\n") Tables necessary to determine the places of the planets are not less necessary than those for the sun, moon, and stars. Some notion of the number and complexity of these tables may be formed, when we state that the positions of the two principal planets, (and these the most necessary for the navigator,) Jupiter and Saturn, require each not less than one hundred and sixteen tables. Yet it is not only necessary to predict the position of these bodies, but it is likewise expedient to ==== the scheme of notation thus applied, immediately suggested the advantages which must attend it as an instrument for expressing the structure, operation, and circulation of the animal system; and we entertain no doubt of its adequacy for that purpose. Not only the mechanical connexion of the solid members of the bodies of men and animals, but likewise the structure and operation of the softer parts, including the muscles, integuments, membranes, &c. the nature, motion, ==== tabulate the motions of the four satellites of Jupiter, to predict the exact times at which they enter his shadow, and at which their shadows cross his disc, as well as the times at which they are interposed between him and the Earth, and he between them and the Earth. Among the extensive classes of tables here enumerated, there are several which are in their nature permanent and unalterable, and would never require to be recomputed, if they could once be computed with perfect ====Filter by MetadataUse a metadata filter to narrow down results. First, load another book: "Adventures of Sherlock Holmes"# Let's add more content to the existing Collectionarticle_url = "https://www.gutenberg.org/files/48320/48320-0.txt"loader = WebBaseLoader(article_url)documents = loader.load()# split it into chunkstext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)docs = text_splitter.split_documents(documents)await vs.aadd_documents(docs)await wait_for_ready(collection_name) Embedding status: 402/1692 documents embedded Embedding status: 402/1692 documents embedded Embedding status: 552/1692 documents embedded Embedding status: 702/1692 documents embedded Embedding status: 1002/1692 documents embedded Embedding status: 1002/1692 documents embedded Embedding status: 1152/1692 documents embedded Embedding status: 1302/1692 documents embedded Embedding status: 1452/1692 documents embedded Embedding status: 1602/1692 documents embedded Embedding status: 1692/1692 documents embeddedWe see results from both books. 
Note the source metadataquery = "Was he interested in astronomy?"docs = await vs.asearch(query, search_type="similarity", k=3)for d in docs: print(d.page_content, " -> ", d.metadata, "\n====\n") by that body to Mr Babbage:--'In no department of science, or of the arts, does this discovery promise to be so eminently useful as in that of astronomy, and its kindred sciences, with the various arts dependent on them. In none are computations more operose than those which astronomy in particular requires;--in none are preparatory facilities more needful;--in none is error more detrimental. The practical astronomer is interrupted in his pursuit, and diverted from his task of -> {'source': 'https://www.gutenberg.org/cache/epub/71292/pg71292.txt'} ==== possess all knowledge which is likely to be useful to him in his work, and this I have endeavored in my case to do. If I remember rightly, you on one occasion, in the early days of our friendship, defined my limits in a very precise fashion.” “Yes,” I answered, laughing. “It was a singular document. Philosophy, astronomy, and politics were marked at zero, I remember. Botany variable, geology profound as regards the mud-stains from any region -> {'source': 'https://www.gutenberg.org/files/48320/48320-0.txt'} ==== in all its relations; but above all, with Astronomy and Navigation. So important have they been considered, that in many instances large sums have been appropriated by the most enlightened nations in the production of them; and yet so numerous and insurmountable have been the difficulties attending the attainment of this end, that after all, even navigators, putting aside every other department of art and science, have, until very recently, been scantily and imperfectly supplied with -> {'source': 'https://www.gutenberg.org/cache/epub/71292/pg71292.txt'} ====Let's try again using a filter for only the Sherlock Holmes document.filter = { "where": {"jsonpath": "$[*] ? (@.source == 'https://www.gutenberg.org/files/48320/48320-0.txt')"},}docs = await vs.asearch(query, search_type="similarity", metadata=filter, k=3)for d in docs: print(d.page_content, " -> ", d.metadata, "\n====\n") possess all knowledge which is likely to be useful to him in his work, and this I have endeavored in my case to do. If I remember rightly, you on one occasion, in the early days of our friendship, defined my limits in a very precise fashion.” “Yes,” I answered, laughing. “It was a singular document. Philosophy, astronomy, and politics were marked at zero, I remember. Botany variable, geology profound as regards the mud-stains from any region -> {'source': 'https://www.gutenberg.org/files/48320/48320-0.txt'} ==== the light shining upon his strong-set aquiline features. So he sat as I dropped off to sleep, and so he sat when a sudden ejaculation caused me to wake up, and I found the summer sun shining into the apartment. The pipe was still between his lips, the smoke still curled upward, and the room was full of a dense tobacco haze, but nothing remained of the heap of shag which I had seen upon the previous night. “Awake, Watson?” he asked. “Yes.” “Game for a morning drive?” -> {'source': 'https://www.gutenberg.org/files/48320/48320-0.txt'} ==== “I glanced at the books upon the table, and in spite of my ignorance of German I could see that two of them were treatises on science, the others being volumes of poetry. Then I walked across to the window, hoping that I might catch some glimpse of the country-side, but an oak shutter, heavily barred, was folded across it. 
It was a wonderfully silent house. There was an old clock ticking loudly somewhere in the passage, but otherwise everything was deadly still. A vague feeling of -> {'source': 'https://www.gutenberg.org/files/48320/48320-0.txt'} ====PreviousXataNextZillizWhy Zep's VectorStore? 🤖🚀InstallationUsageNoteLoad or create a Collection from documentsSimilarity Search Query over the CollectionSearch over Collection Re-ranked by MMRWe see results from both books. Note the source metadataLet's try again using a filter for only the Sherlock Holmes document. |
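As noted above, the synchronous interfaces are the same calls without the a prefix. A minimal sketch of the same three queries in synchronous form, reusing vs, query, and filter as defined on this page:

# Synchronous equivalents of the async calls shown above
docs_scores = vs.similarity_search_with_relevance_scores(query, k=3)
docs_mmr = vs.search(query, search_type="mmr", k=3)
docs_filtered = vs.search(query, search_type="similarity", metadata=filter, k=3)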
675 | https://python.langchain.com/docs/integrations/vectorstores/zilliz | ComponentsVector storesZillizZillizZilliz Cloud is a fully managed cloud service for LF AI Milvus®. This notebook shows how to use functionality related to the Zilliz Cloud managed vector database.To run, you should have a Zilliz Cloud instance up and running. Here are the installation instructionspip install pymilvusWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key:········# replaceZILLIZ_CLOUD_URI = "" # example: "https://in01-17f69c292d4a5sa.aws-us-west-2.vectordb.zillizcloud.com:19536"ZILLIZ_CLOUD_USERNAME = "" # example: "username"ZILLIZ_CLOUD_PASSWORD = "" # example: "*********"ZILLIZ_CLOUD_API_KEY = "" # example: "*********" (for serverless clusters which can be used as replacements for user and password)from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Milvusfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()vector_db = Milvus.from_documents( docs, embeddings, connection_args={ "uri": ZILLIZ_CLOUD_URI, "user": ZILLIZ_CLOUD_USERNAME, "password": ZILLIZ_CLOUD_PASSWORD, # "token": ZILLIZ_CLOUD_API_KEY, # API key, for serverless clusters which can be used as replacements for user and password "secure": True, },)query = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)docs[0].page_content 'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'PreviousZepNextRetrievers |
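For serverless Zilliz Cloud clusters, the commented-out token line in the connection args above replaces the username/password pair. A minimal sketch of that variant, reusing docs and embeddings from this page:

from langchain.vectorstores import Milvus

vector_db = Milvus.from_documents(
    docs,
    embeddings,
    connection_args={
        "uri": ZILLIZ_CLOUD_URI,
        "token": ZILLIZ_CLOUD_API_KEY,  # API key replaces user/password on serverless clusters
        "secure": True,
    },
)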
676 | https://python.langchain.com/docs/integrations/retrievers | ComponentsRetrieversRetrievers📄️ Amazon KendraAmazon Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.📄️ Arcee RetrieverThis notebook demonstrates how to use the ArceeRetriever class to retrieve relevant document(s) for Arcee's Domain Adapted Language Models (DALMs).📄️ ArxivarXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.📄️ Azure Cognitive SearchAzure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.📄️ BM25BM25, also known as the Okapi BM25, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query.📄️ ChaindeskThe Chaindesk platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources).📄️ ChatGPT PluginOpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions.📄️ Cohere RerankerCohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.📄️ DocArrayDocArray is a versatile, open-source tool for managing your multi-modal data. It lets you shape your data however you want, and offers the flexibility to store and search it using various document index backends. Plus, it gets even better - you can utilize your DocArray document index to create a DocArrayRetriever, and build awesome Langchain apps!📄️ ElasticSearch BM25Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.📄️ Google Cloud Enterprise SearchEnterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.📄️ Google DriveThis notebook covers how to retrieve documents from Google Drive.📄️ Google Vertex AI SearchVertex AI Search (formerly known as Enterprise Search on Generative AI App Builder) is a part of the Vertex AI machine learning platform offered by Google Cloud.📄️ Kay.aiData API built for RAG 🕵️ We are curating the world's largest datasets as high-quality embeddings so your AI agents can retrieve context on the fly. Latest models, fast retrieval, and zero infra.📄️ kNNIn statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. 
It is used for classification and regression.📄️ LOTR (Merger Retriever)Lord of the Retrievers, also known as MergerRetriever, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list (see the sketch after this list). The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers.📄️ MetalMetal is a managed service for ML Embeddings.📄️ Pinecone Hybrid SearchPinecone is a vector database with broad functionality.📄️ PubMedPubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.📄️ RePhraseQueryRePhraseQuery is a simple retriever that applies an LLM between the user input and the query passed by the retriever.📄️ SEC filingThe SEC filing is a financial statement or other formal document submitted to the U.S. Securities and Exchange Commission (SEC). Public companies, certain insiders, and broker-dealers are required to make regular SEC filings. Investors and financial professionals rely on these filings for information about companies they are evaluating for investment purposes.📄️ SVMSupport vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.📄️ Tavily Search APITavily's Search API is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.📄️ TF-IDFTF-IDF means term-frequency times inverse document-frequency.📄️ VespaVespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.📄️ Weaviate Hybrid SearchWeaviate is an open source vector database.📄️ WikipediaWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.📄️ you-retrieverUsing the You.com Retriever📄️ ZepRetriever Example for Zep - A long-term memory store for LLM applications.PreviousZillizNextAmazon Kendra |
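As promised in the LOTR entry above, here is a minimal sketch of MergerRetriever; db1 and db2 are hypothetical, pre-existing vector stores.

from langchain.retrievers.merger_retriever import MergerRetriever

# Merge results from two retrievers into a single list of relevant documents
lotr = MergerRetriever(retrievers=[db1.as_retriever(), db2.as_retriever()])
merged_docs = lotr.get_relevant_documents("my query")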
677 | https://python.langchain.com/docs/integrations/retrievers/amazon_kendra_retriever | ComponentsRetrieversAmazon KendraOn this pageAmazon KendraAmazon Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.With Kendra, users can search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and contextual meanings to provide highly relevant search results.Using the Amazon Kendra Index Retriever%pip install boto3import boto3from langchain.retrievers import AmazonKendraRetrieverCreate New Retrieverretriever = AmazonKendraRetriever(index_id="c0806df7-e76b-4bce-9b5c-d5582f6b1a03")Now you can retrieve documents from the Kendra indexretriever.get_relevant_documents("what is langchain")PreviousRetrieversNextArcee RetrieverUsing the Amazon Kendra Index Retriever |
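A common next step, not shown on the page itself, is to plug the Kendra retriever into a question-answering chain. A minimal sketch using RetrievalQA with the retriever created above:

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",   # stuff the retrieved documents directly into the prompt
    retriever=retriever,
)
qa.run("what is langchain")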
678 | https://python.langchain.com/docs/integrations/retrievers/arcee | ComponentsRetrieversArcee RetrieverOn this pageArcee RetrieverThis notebook demonstrates how to use the ArceeRetriever class to retrieve relevant document(s) for Arcee's Domain Adapted Language Models (DALMs).SetupBefore using ArceeRetriever, make sure the Arcee API key is set as the ARCEE_API_KEY environment variable. You can also pass the API key as a named parameter.from langchain.retrievers import ArceeRetrieverretriever = ArceeRetriever( model="DALM-PubMed", # arcee_api_key="ARCEE-API-KEY" # if not already set in the environment)Additional ConfigurationYou can also configure ArceeRetriever's parameters such as arcee_api_url, arcee_app_url, and model_kwargs as needed.
Setting model_kwargs at object initialization makes the given filters and size the defaults for all subsequent retrievals.retriever = ArceeRetriever( model="DALM-PubMed", # arcee_api_key="ARCEE-API-KEY", # if not already set in the environment arcee_api_url="https://custom-api.arcee.ai", # default is https://api.arcee.ai arcee_app_url="https://custom-app.arcee.ai", # default is https://app.arcee.ai model_kwargs={ "size": 5, "filters": [ { "field_name": "document", "filter_type": "fuzzy_search", "value": "Einstein" } ] })Retrieving documentsYou can retrieve relevant documents from uploaded contexts by providing a query. Here's an example:query = "Can AI-driven music therapy contribute to the rehabilitation of patients with disorders of consciousness?"documents = retriever.get_relevant_documents(query=query)Additional parametersArcee allows you to apply filters and set the size (in terms of count) of retrieved document(s). Filters help narrow down the results. Here's how to use these parameters:# Define filtersfilters = [ { "field_name": "document", "filter_type": "fuzzy_search", "value": "Music" }, { "field_name": "year", "filter_type": "strict_search", "value": "1905" }]# Retrieve documents with filters and size paramsdocuments = retriever.get_relevant_documents(query=query, size=5, filters=filters)PreviousAmazon KendraNextArxivSetupAdditional ConfigurationRetrieving documentsAdditional parameters |
679 | https://python.langchain.com/docs/integrations/retrievers/arxiv | ComponentsRetrieversArxivOn this pageArxivarXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.This notebook shows how to retrieve scientific articles from Arxiv.org into the Document format that is used downstream.InstallationFirst, you need to install the arxiv python package.#!pip install arxivArxivRetriever has these arguments:optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when document was published/last updated), Title, Authors, Summary. If True, other fields are also downloaded (a short sketch follows at the end of this page).get_relevant_documents() has one argument, query: free text which is used to find documents on Arxiv.orgExamplesRunning retrieverfrom langchain.retrievers import ArxivRetrieverretriever = ArxivRetriever(load_max_docs=2)docs = retriever.get_relevant_documents(query="1605.08386")docs[0].metadata # meta-information of the Document {'Published': '2016-05-26', 'Title': 'Heat-bath random walks with Markov bases', 'Authors': 'Caprice Stanley, Tobias Windisch', 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}docs[0].page_content[:400] # a content of the Document 'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'Question Answering on facts# get a token: https://platform.openai.com/account/api-keysfrom getpass import getpassOPENAI_API_KEY = getpass() ········import osos.environ["OPENAI_API_KEY"] = OPENAI_API_KEYfrom langchain.chat_models import ChatOpenAIfrom langchain.chains import ConversationalRetrievalChainmodel = ChatOpenAI(model_name="gpt-3.5-turbo") # switch to 'gpt-4'qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)questions = [ "What are Heat-bath random walks with Markov base?", "What is the ImageBind model?", "How does Compositional Reasoning with Large Language Models works?",]chat_history = []for question in questions: result = qa({"question": question, "chat_history": chat_history}) chat_history.append((question, result["answer"])) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n") -> **Question**: What are Heat-bath random walks with Markov base? **Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term "Heat-bath random walks with Markov base" is not mentioned in the given text. 
Could you provide more information or context about where you encountered this term? -> **Question**: What is the ImageBind model? **Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio, depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks. -> **Question**: How does Compositional Reasoning with Large Language Models works? **Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones. In the context of the paper "Does CLIP Bind Concepts? Probing Compositionality in Large Image Models", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed. The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts. questions = [ "What are Heat-bath random walks with Markov base? Include references to answer.",]chat_history = []for question in questions: result = qa({"question": question, "chat_history": chat_history}) chat_history.append((question, result["answer"])) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n") -> **Question**: What are Heat-bath random walks with Markov base? Include references to answer. **Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings. The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties. References: Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). 
A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18. Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media. PreviousArcee RetrieverNextAzure Cognitive SearchInstallationExamplesRunning retrieverQuestion Answering on facts |
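To see the extra fields mentioned under load_all_available_meta, a minimal sketch that re-runs the earlier lookup with the flag enabled:

from langchain.retrievers import ArxivRetriever

# Retrieve with all available metadata, not just Published/Title/Authors/Summary
retriever = ArxivRetriever(load_max_docs=1, load_all_available_meta=True)
docs = retriever.get_relevant_documents(query="1605.08386")
print(docs[0].metadata.keys())  # now includes the additional fields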
680 | https://python.langchain.com/docs/integrations/retrievers/azure_cognitive_search | ComponentsRetrieversAzure Cognitive SearchOn this pageAzure Cognitive SearchAzure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:A search engine for full text search over a search index containing user-owned contentRich indexing, with lexical analysis and optional AI enrichment for content extraction and transformationRich query syntax for text search, fuzzy search, autocomplete, geo-search and moreProgrammability through REST APIs and client libraries in Azure SDKsAzure integration at the data layer, machine learning layer, and AI (Cognitive Services)This notebook shows how to use Azure Cognitive Search (ACS) within LangChain.Set up Azure Cognitive SearchTo set up ACS, please follow the instructions here.Please note the name of your ACS service, the name of your ACS index, and your API key.Your API key can be either an Admin or Query key, but as we only read data it is recommended to use a Query key.Using the Azure Cognitive Search Retrieverimport osfrom langchain.retrievers import AzureCognitiveSearchRetrieverSet Service Name, Index Name and API key as environment variables (alternatively, you can pass them as arguments to AzureCognitiveSearchRetriever; a short sketch follows at the end of this page).os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<YOUR_ACS_SERVICE_NAME>"os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = "<YOUR_ACS_INDEX_NAME>"os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<YOUR_API_KEY>"Create the Retrieverretriever = AzureCognitiveSearchRetriever(content_key="content", top_k=10)Now you can retrieve documents from Azure Cognitive Searchretriever.get_relevant_documents("what is langchain")You can change the number of results returned with the top_k parameter. The default value is None, which returns all results.PreviousArxivNextBM25Set up Azure Cognitive SearchUsing the Azure Cognitive Search Retriever |
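As an alternative to environment variables, the page notes that the settings can be passed as arguments to AzureCognitiveSearchRetriever. A minimal sketch; the keyword names below are an assumption, chosen to mirror the environment variable names above:

from langchain.retrievers import AzureCognitiveSearchRetriever

retriever = AzureCognitiveSearchRetriever(
    service_name="<YOUR_ACS_SERVICE_NAME>",  # assumed keyword, mirrors AZURE_COGNITIVE_SEARCH_SERVICE_NAME
    index_name="<YOUR_ACS_INDEX_NAME>",      # assumed keyword, mirrors AZURE_COGNITIVE_SEARCH_INDEX_NAME
    api_key="<YOUR_API_KEY>",                # assumed keyword, mirrors AZURE_COGNITIVE_SEARCH_API_KEY
    content_key="content",
    top_k=10,
)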
681 | https://python.langchain.com/docs/integrations/retrievers/bm25 | ComponentsRetrieversBM25On this pageBM25BM25, also known as the Okapi BM25, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query.This notebook goes over how to use a retriever that uses BM25 under the hood, via the rank_bm25 package.# !pip install rank_bm25from langchain.retrievers import BM25RetrieverCreate New Retriever with Textsretriever = BM25Retriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])Create a New Retriever with DocumentsYou can now create a new retriever with the documents you created.from langchain.schema import Documentretriever = BM25Retriever.from_documents( [ Document(page_content="foo"), Document(page_content="bar"), Document(page_content="world"), Document(page_content="hello"), Document(page_content="foo bar"), ])Use RetrieverWe can now use the retriever!result = retriever.get_relevant_documents("foo")result [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={}), Document(page_content='hello', metadata={}), Document(page_content='world', metadata={})]PreviousAzure Cognitive SearchNextChaindeskCreate New Retriever with TextsCreate a New Retriever with DocumentsUse Retriever |
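To make the ranking-function description concrete, here is a minimal sketch of the rank_bm25 package scoring a query directly against a tokenized corpus; this is the machinery that BM25Retriever wraps.

from rank_bm25 import BM25Okapi

corpus = ["foo", "bar", "world", "hello", "foo bar"]
tokenized_corpus = [doc.split(" ") for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)
scores = bm25.get_scores("foo".split(" "))  # one BM25 score per document
print(scores)  # documents containing "foo" score highest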
682 | https://python.langchain.com/docs/integrations/retrievers/chaindesk | ComponentsRetrieversChaindeskOn this pageChaindeskThe Chaindesk platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources).
Then your Datastores can be connected to ChatGPT via Plugins or any other Large Language Model (LLM) via the Chaindesk API.This notebook shows how to use Chaindesk's retriever.First, you will need to sign up for Chaindesk, create a datastore, add some data and get your datastore API endpoint URL. You need the API Key.QueryNow that our index is set up, we can set up a retriever and start querying it.from langchain.retrievers import ChaindeskRetrieverretriever = ChaindeskRetriever( datastore_url="https://clg1xg2h80000l708dymr0fxc.chaindesk.ai/query", # api_key="CHAINDESK_API_KEY", # optional if datastore is public # top_k=10 # optional)retriever.get_relevant_documents("What is Daftpage?") [Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}), Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}), Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]PreviousBM25NextChatGPT PluginQuery |
683 | https://python.langchain.com/docs/integrations/retrievers/chatgpt-plugin | ComponentsRetrieversChatGPT PluginOn this pageChatGPT PluginOpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions.Plugins can allow ChatGPT to do things like:Retrieve real-time information; e.g., sports scores, stock prices, the latest news, etc.Retrieve knowledge-base information; e.g., company docs, personal notes, etc.Perform actions on behalf of the user; e.g., booking a flight, ordering food, etc.This notebook shows how to use the ChatGPT Retriever Plugin within LangChain.# STEP 1: Load# Load documents using LangChain's DocumentLoaders# This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.htmlfrom langchain.document_loaders.csv_loader import CSVLoaderloader = CSVLoader( file_path="../../document_loaders/examples/example_data/mlb_teams_2012.csv")data = loader.load()# STEP 2: Convert# Convert Document to format expected by https://github.com/openai/chatgpt-retrieval-pluginfrom typing import Listfrom langchain.docstore.document import Documentimport jsondef write_json(path: str, documents: List[Document]) -> None: results = [{"text": doc.page_content} for doc in documents] with open(path, "w") as f: json.dump(results, f, indent=2)write_json("foo.json", data)# STEP 3: Use# Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_jsonUsing the ChatGPT Retriever PluginOkay, so we've created the ChatGPT Retriever Plugin, but how do we actually use it?The below code walks through how to do that.We want to use ChatGPTPluginRetriever so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ········from langchain.retrievers import ChatGPTPluginRetrieverretriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo")retriever.get_relevant_documents("alice's phone number") [Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0), Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0), Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)]PreviousChaindeskNextCohere RerankerUsing the ChatGPT Retriever Plugin |
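The write_json helper above keeps only the text of each Document. If you also want metadata (source, author, etc.) available in the plugin, a hedged variant follows; it assumes the plugin's upsert schema accepts a metadata object per document, as the retriever output above suggests. It reuses List, Document, json, and data from the page's code.

def write_json_with_metadata(path: str, documents: List[Document]) -> None:
    # Assumption: the plugin accepts {"text": ..., "metadata": ...} per document
    results = [
        {"text": doc.page_content, "metadata": doc.metadata} for doc in documents
    ]
    with open(path, "w") as f:
        json.dump(results, f, indent=2)

write_json_with_metadata("foo_with_metadata.json", data)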
684 | https://python.langchain.com/docs/integrations/retrievers/cohere-reranker | ComponentsRetrieversCohere RerankerOn this pageCohere RerankerCohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.This notebook shows how to use Cohere's rerank endpoint in a retriever. This builds on top of ideas in the ContextualCompressionRetriever.#!pip install cohere#!pip install faiss# OR (depending on Python version)#!pip install faiss-cpu# get a new token: https://dashboard.cohere.ai/import osimport getpassos.environ["COHERE_API_KEY"] = getpass.getpass("Cohere API Key:")os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")# Helper function for printing docsdef pretty_print_docs(docs): print( f"\n{'-' * 100}\n".join( [f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)] ) )Set up the base vector store retrieverLet's start by initializing a simple vector store retriever and storing the 2022 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.from langchain.text_splitter import RecursiveCharacterTextSplitterfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.document_loaders import TextLoaderfrom langchain.vectorstores import FAISSdocuments = TextLoader("../../../state_of_the_union.txt").load()text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)texts = text_splitter.split_documents(documents)retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever( search_kwargs={"k": 20})query = "What did the president say about Ketanji Brown Jackson"docs = retriever.get_relevant_documents(query)pretty_print_docs(docs) Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ---------------------------------------------------------------------------------------------------- Document 2: As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. ---------------------------------------------------------------------------------------------------- Document 3: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. ---------------------------------------------------------------------------------------------------- Document 4: He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. 
Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. ---------------------------------------------------------------------------------------------------- Document 5: I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice. ---------------------------------------------------------------------------------------------------- Document 6: Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up and the middle out, not from the top down. Because we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. America used to have the best roads, bridges, and airports on Earth. Now our infrastructure is ranked 13th in the world. ---------------------------------------------------------------------------------------------------- Document 7: And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. By the end of this year, the deficit will be down to less than half what it was before I took office. The only president ever to cut the deficit by more than one trillion dollars in a single year. Lowering your costs also means demanding more competition. I’m a capitalist, but capitalism without competition isn’t capitalism. It’s exploitation—and it drives up prices. ---------------------------------------------------------------------------------------------------- Document 8: For the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. Vice President Harris and I ran for office with a new economic vision for America. ---------------------------------------------------------------------------------------------------- Document 9: All told, we created 369,000 new manufacturing jobs in America just last year. Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight. As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. ---------------------------------------------------------------------------------------------------- Document 10: I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. And fourth, let’s end cancer as we know it. This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease. 
---------------------------------------------------------------------------------------------------- Document 11: He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. I understand. ---------------------------------------------------------------------------------------------------- Document 12: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. ---------------------------------------------------------------------------------------------------- Document 13: I know. One of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can. Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. ---------------------------------------------------------------------------------------------------- Document 14: And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery. ---------------------------------------------------------------------------------------------------- Document 15: Third, support our veterans. Veterans are the best of us. I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. Our troops in Iraq and Afghanistan faced many dangers. ---------------------------------------------------------------------------------------------------- Document 16: When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America. For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. And I know you’re tired, frustrated, and exhausted. But I also know this. ---------------------------------------------------------------------------------------------------- Document 17: Now is the hour. Our moment of responsibility. Our test of resolve and conscience, of history itself. It is in this moment that our character is formed. Our purpose is found. Our future is forged. Well I know this nation. We will meet the test. 
To protect freedom and liberty, to expand fairness and opportunity. We will save democracy. As hard as these times have been, I am more optimistic about America today than I have been my whole life. ---------------------------------------------------------------------------------------------------- Document 18: He didn’t know how to stop fighting, and neither did she. Through her pain she found purpose to demand we do better. Tonight, Danielle—we are. The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers. ---------------------------------------------------------------------------------------------------- Document 19: I understand. I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. That’s why one of the first things I did as President was fight to pass the American Rescue Plan. Because people were hurting. We needed to act, and we did. Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis. ---------------------------------------------------------------------------------------------------- Document 20: So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.Doing reranking with CohereRerankNow let's wrap our base retriever with a ContextualCompressionRetriever. We'll add a CohereRerank, which uses the Cohere rerank endpoint to rerank the returned results.from langchain.llms import OpenAIfrom langchain.retrievers import ContextualCompressionRetrieverfrom langchain.retrievers.document_compressors import CohereRerankllm = OpenAI(temperature=0)compressor = CohereRerank()compression_retriever = ContextualCompressionRetriever( base_compressor=compressor, base_retriever=retriever)compressed_docs = compression_retriever.get_relevant_documents( "What did the president say about Ketanji Brown Jackson")pretty_print_docs(compressed_docs) Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ---------------------------------------------------------------------------------------------------- Document 2: I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice. ---------------------------------------------------------------------------------------------------- Document 3: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder.
Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.You can of course use this retriever within a QA pipelinefrom langchain.chains import RetrievalQAchain = RetrievalQA.from_chain_type( llm=OpenAI(temperature=0), retriever=compression_retriever)chain({"query": query}) {'query': 'What did the president say about Ketanji Brown Jackson', 'result': " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she is a consensus builder who has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."}PreviousChatGPT PluginNextDocArraySet up the base vector store retrieverDoing reranking with CohereRerank |
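A condensed, end-to-end version of the pipeline above can serve as a copy-paste starting point. This is a minimal sketch, assuming COHERE_API_KEY and OPENAI_API_KEY are set in the environment and state_of_the_union.txt is available locally; the top_n parameter and the relevance_score metadata key reflect the langchain CohereRerank implementation at the time of writing, so verify both against your installed version.

# Minimal end-to-end sketch of the Cohere rerank pipeline described above.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank

docs = TextLoader("state_of_the_union.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100).split_documents(docs)
# Over-fetch 20 candidates so the reranker has material to work with.
base = FAISS.from_documents(chunks, OpenAIEmbeddings()).as_retriever(search_kwargs={"k": 20})
# top_n keeps only the best-ranked documents after the rerank call
# (assumption: parameter name per the current langchain source).
compressor = CohereRerank(top_n=3)
retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=base)
for d in retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson"):
    # Assumption: CohereRerank stores its score under metadata["relevance_score"].
    print(d.metadata.get("relevance_score"), "-", d.page_content[:80])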
685 | https://python.langchain.com/docs/integrations/retrievers/docarray_retriever | ComponentsRetrieversDocArrayOn this pageDocArrayDocArray is a versatile, open-source tool for managing your multi-modal data. It lets you shape your data however you want, and offers the flexibility to store and search it using various document index backends. Plus, it gets even better - you can utilize your DocArray document index to create a DocArrayRetriever, and build awesome LangChain apps!This notebook is split into two sections. The first section offers an introduction to all five supported document index backends. It provides guidance on setting up and indexing each backend and also instructs you on how to build a DocArrayRetriever for finding relevant documents.
In the second section, we'll select one of these backends and illustrate how to use it through a basic example.Document Index Backendsfrom langchain.retrievers import DocArrayRetrieverfrom docarray import BaseDocfrom docarray.typing import NdArrayimport numpy as npfrom langchain.embeddings import FakeEmbeddingsimport randomembeddings = FakeEmbeddings(size=32)Before you start building the index, it's important to define your document schema. This determines what fields your documents will have and what type of data each field will hold.For this demonstration, we'll create a somewhat random schema containing 'title' (str), 'title_embedding' (numpy array), 'year' (int), and 'color' (str)class MyDoc(BaseDoc): title: str title_embedding: NdArray[32] year: int color: strInMemoryExactNNIndexInMemoryExactNNIndex stores all Documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.Learn more here: https://docs.docarray.org/user_guide/storing/index_in_memory/from docarray.index import InMemoryExactNNIndex# initialize the indexdb = InMemoryExactNNIndex[MyDoc]()# index datadb.index( [ MyDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ])# optionally, you can create a filter queryfilter_query = {"year": {"$lte": 90}}# create a retrieverretriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="title_embedding", content_field="title", filters=filter_query,)# find the relevant documentdoc = retriever.get_relevant_documents("some query")print(doc) [Document(page_content='My document 56', metadata={'id': '1f33e58b6468ab722f3786b96b20afe6', 'year': 56, 'color': 'red'})]HnswDocumentIndexHnswDocumentIndex is a lightweight Document Index implementation that runs fully locally and is best suited for small- to medium-sized datasets. 
It stores vectors on disk in hnswlib, and stores all other data in SQLite.Learn more here: https://docs.docarray.org/user_guide/storing/index_hnswlib/from docarray.index import HnswDocumentIndex# initialize the indexdb = HnswDocumentIndex[MyDoc](work_dir="hnsw_index")# index datadb.index( [ MyDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ])# optionally, you can create a filter queryfilter_query = {"year": {"$lte": 90}}# create a retrieverretriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="title_embedding", content_field="title", filters=filter_query,)# find the relevant documentdoc = retriever.get_relevant_documents("some query")print(doc) [Document(page_content='My document 28', metadata={'id': 'ca9f3f4268eec7c97a7d6e77f541cb82', 'year': 28, 'color': 'red'})]WeaviateDocumentIndexWeaviateDocumentIndex is a document index that is built upon Weaviate vector database.Learn more here: https://docs.docarray.org/user_guide/storing/index_weaviate/# There's a small difference with the Weaviate backend compared to the others.# Here, you need to 'mark' the field used for vector search with 'is_embedding=True'.# So, let's create a new schema for Weaviate that takes care of this requirement.from pydantic import Fieldclass WeaviateDoc(BaseDoc): title: str title_embedding: NdArray[32] = Field(is_embedding=True) year: int color: strfrom docarray.index import WeaviateDocumentIndex# initialize the indexdbconfig = WeaviateDocumentIndex.DBConfig(host="http://localhost:8080")db = WeaviateDocumentIndex[WeaviateDoc](db_config=dbconfig)# index datadb.index( [ WeaviateDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ])# optionally, you can create a filter queryfilter_query = {"path": ["year"], "operator": "LessThanEqual", "valueInt": "90"}# create a retrieverretriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="title_embedding", content_field="title", filters=filter_query,)# find the relevant documentdoc = retriever.get_relevant_documents("some query")print(doc) [Document(page_content='My document 17', metadata={'id': '3a5b76e85f0d0a01785dc8f9d965ce40', 'year': 17, 'color': 'red'})]ElasticDocIndexElasticDocIndex is a document index that is built upon ElasticSearch.Learn more herefrom docarray.index import ElasticDocIndex# initialize the indexdb = ElasticDocIndex[MyDoc]( hosts="http://localhost:9200", index_name="docarray_retriever")# index datadb.index( [ MyDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ])# optionally, you can create a filter queryfilter_query = {"range": {"year": {"lte": 90}}}# create a retrieverretriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="title_embedding", content_field="title", filters=filter_query,)# find the relevant documentdoc = retriever.get_relevant_documents("some query")print(doc) [Document(page_content='My document 46', metadata={'id': 'edbc721bac1c2ad323414ad1301528a4', 'year': 46, 'color': 'green'})]QdrantDocumentIndexQdrantDocumentIndex is a document index that is built upon Qdrant vector database.Learn more herefrom docarray.index import QdrantDocumentIndexfrom qdrant_client.http import models as rest# initialize the indexqdrant_config =
QdrantDocumentIndex.DBConfig(path=":memory:")db = QdrantDocumentIndex[MyDoc](qdrant_config)# index datadb.index( [ MyDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ])# optionally, you can create a filter queryfilter_query = rest.Filter( must=[ rest.FieldCondition( key="year", range=rest.Range( gte=10, lt=90, ), ) ]) WARNING:root:Payload indexes have no effect in the local Qdrant. Please use server Qdrant if you need payload indexes.# create a retrieverretriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="title_embedding", content_field="title", filters=filter_query,)# find the relevant documentdoc = retriever.get_relevant_documents("some query")print(doc) [Document(page_content='My document 80', metadata={'id': '97465f98d0810f1f330e4ecc29b13d20', 'year': 80, 'color': 'blue'})]Movie Retrieval using HnswDocumentIndexmovies = [ { "title": "Inception", "description": "A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.", "director": "Christopher Nolan", "rating": 8.8, }, { "title": "The Dark Knight", "description": "When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.", "director": "Christopher Nolan", "rating": 9.0, }, { "title": "Interstellar", "description": "Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.", "director": "Christopher Nolan", "rating": 8.6, }, { "title": "Pulp Fiction", "description": "The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.", "director": "Quentin Tarantino", "rating": 8.9, }, { "title": "Reservoir Dogs", "description": "When a simple jewelry heist goes horribly wrong, the surviving criminals begin to suspect that one of them is a police informant.", "director": "Quentin Tarantino", "rating": 8.3, }, { "title": "The Godfather", "description": "An aging patriarch of an organized crime dynasty transfers control of his empire to his reluctant son.", "director": "Francis Ford Coppola", "rating": 9.2, },]import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ········from docarray import BaseDoc, DocListfrom docarray.typing import NdArrayfrom langchain.embeddings.openai import OpenAIEmbeddings# define schema for your movie documentsclass MyDoc(BaseDoc): title: str description: str description_embedding: NdArray[1536] rating: float director: strembeddings = OpenAIEmbeddings()# get "description" embeddings, and create documentsdocs = DocList[MyDoc]( [ MyDoc( description_embedding=embeddings.embed_query(movie["description"]), **movie ) for movie in movies ])from docarray.index import HnswDocumentIndex# initialize the indexdb = HnswDocumentIndex[MyDoc](work_dir="movie_search")# add datadb.index(docs)Normal Retrieverfrom langchain.retrievers import DocArrayRetriever# create a retrieverretriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="description_embedding", content_field="description",)# find the relevant documentdoc = 
retriever.get_relevant_documents("movie about dreams")print(doc) [Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})]Retriever with Filtersfrom langchain.retrievers import DocArrayRetriever# create a retrieverretriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="description_embedding", content_field="description", filters={"director": {"$eq": "Christopher Nolan"}}, top_k=2,)# find relevant documentsdocs = retriever.get_relevant_documents("space travel")print(docs) [Document(page_content='Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.', metadata={'id': 'ab704cc7ae8573dc617f9a5e25df022a', 'title': 'Interstellar', 'rating': 8.6, 'director': 'Christopher Nolan'}), Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})]Retriever with MMR searchfrom langchain.retrievers import DocArrayRetriever# create a retrieverretriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="description_embedding", content_field="description", filters={"rating": {"$gte": 8.7}}, search_type="mmr", top_k=3,)# find relevant documentsdocs = retriever.get_relevant_documents("action movies")print(docs) [Document(page_content="The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.", metadata={'id': 'e6aa313bbde514e23fbc80ab34511afd', 'title': 'Pulp Fiction', 'rating': 8.9, 'director': 'Quentin Tarantino'}), Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'}), Document(page_content='When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.', metadata={'id': '91dec17d4272041b669fd113333a65f7', 'title': 'The Dark Knight', 'rating': 9.0, 'director': 'Christopher Nolan'})]PreviousCohere RerankerNextElasticSearch BM25Document Index BackendsInMemoryExactNNIndexHnswDocumentIndexWeaviateDocumentIndexElasticDocIndexQdrantDocumentIndexMovie Retrieval using HnswDocumentIndexNormal RetrieverRetriever with FiltersRetriever with MMR search |
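Any of the DocArrayRetriever instances above can be dropped into a chain like any other retriever. A minimal sketch using the movie retriever from the last example, assuming an OPENAI_API_KEY in the environment (the model name is illustrative):

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# `retriever` is the DocArrayRetriever built on the HnswDocumentIndex movie store above.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=retriever,
)
print(qa({"query": "Which Christopher Nolan movie is about dreams?"})["result"])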
686 | https://python.langchain.com/docs/integrations/retrievers/elastic_search_bm25 | ComponentsRetrieversElasticSearch BM25On this pageElasticSearch BM25Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others.The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London's City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.This notebook shows how to use a retriever that uses ElasticSearch and BM25.For more information on the details of BM25 see this blog post.#!pip install elasticsearchfrom langchain.retrievers import ElasticSearchBM25RetrieverCreate New Retrieverelasticsearch_url = "http://localhost:9200"retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4")# Alternatively, you can load an existing index# import elasticsearch# elasticsearch_url="http://localhost:9200"# retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index")Add texts (if necessary)We can optionally add texts to the retriever (if they aren't already in there)retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"]) ['cbd4cb47-8d9f-4f34-b80e-ea871bc49856', 'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365', '8631bfc8-7c12-48ee-ab56-8ad5f373676e', '8be8374c-3253-4d87-928d-d73550a2ecf0', 'd79f457b-2842-4eab-ae10-77aa420b53d7']Use RetrieverWe can now use the retriever!result = retriever.get_relevant_documents("foo")result [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={})]PreviousDocArrayNextGoogle Cloud Enterprise SearchCreate New RetrieverAdd texts (if necessary)Use Retriever |
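BM25's behavior can also be tuned at index-creation time. In the current langchain source, create() accepts the classic BM25 parameters k1 (term-frequency saturation) and b (document-length normalization) with defaults of 2.0 and 0.75; treat those names and defaults as assumptions to check against your installed version. A sketch:

from langchain.retrievers import ElasticSearchBM25Retriever

retriever = ElasticSearchBM25Retriever.create(
    "http://localhost:9200",
    "langchain-index-tuned",
    k1=1.2,  # lower k1 -> term frequency saturates faster
    b=0.5,   # lower b -> less penalty for longer documents
)
retriever.add_texts(["foo", "bar", "foo foo foo bar"])
print(retriever.get_relevant_documents("foo"))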
687 | https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search | ComponentsRetrieversGoogle Cloud Enterprise SearchOn this pageGoogle Cloud Enterprise SearchEnterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.Gen AI App Builder lets developers, even those with limited machine learning skills, quickly and easily tap into the power of Google’s foundation models, search expertise, and conversational AI technologies to create enterprise-grade generative AI applications. Enterprise Search lets organizations quickly build generative AI powered search engines for customers and employees.Enterprise Search is underpinned by a variety of Google Search technologies, including semantic search, which helps deliver more relevant results than traditional keyword-based search techniques by using natural language processing and machine learning techniques to infer relationships within the content and intent from the user’s query input. Enterprise Search also benefits from Google’s expertise in understanding how users search and factors in content relevance to order displayed results. Google Cloud offers Enterprise Search via Gen App Builder in Google Cloud Console and via an API for enterprise workflow integration. This notebook demonstrates how to configure Enterprise Search and use the Enterprise Search retriever. The Enterprise Search retriever encapsulates the Generative AI App Builder Python client library and uses it to access the Enterprise Search Search Service API.Install pre-requisitesYou need to install the google-cloud-discoveryengine package to use the Enterprise Search retriever.pip install google-cloud-discoveryengineConfigure access to Google Cloud and Google Cloud Enterprise SearchEnterprise Search is generally available on an allowlist basis (which means customers need to be approved for access) as of June 6, 2023. Contact your Google Cloud sales team for access and pricing details. We are previewing additional features that are coming soon to the generally available offering as part of our Trusted Tester program. Sign up for Trusted Tester and contact your Google Cloud sales team for an expedited trial.Before you can run this notebook you need to:Set or create a Google Cloud project and turn on Gen App BuilderCreate and populate an unstructured data storeSet credentials to access Enterprise Search APISet or create a Google Cloud project and turn on Gen App BuilderFollow the instructions in the Enterprise Search Getting Started guide to set/create a GCP project and enable Gen App Builder.Create and populate an unstructured data storeUse Google Cloud Console to create an unstructured data store and populate it with the example PDF documents from the gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs Cloud Storage folder. Make sure to use the Cloud Storage (without metadata) option.Set credentials to access Enterprise Search APIThe Gen App Builder client libraries used by the Enterprise Search retriever provide high-level language support for authenticating to Gen App Builder programmatically. Client libraries support Application Default Credentials (ADC); the libraries look for credentials in a set of defined locations and use those credentials to authenticate requests to the API.
With ADC, you can make credentials available to your application in a variety of environments, such as local development or production, without needing to modify your application code.If running in Google Colab, authenticate with google.colab.google.auth; otherwise, follow one of the supported methods to make sure that your Application Default Credentials are properly set.import sysif "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user()Configure and use the Enterprise Search retrieverThe Enterprise Search retriever is implemented in the langchain.retrievers.GoogleCloudEnterpriseSearchRetriever class. The get_relevant_documents method returns a list of langchain.schema.Document documents where the page_content field of each document is populated with the document content.
Depending on the data type used in Enterprise Search (structured or unstructured) the page_content field is populated as follows:Unstructured data source: either an extractive segment or an extractive answer that matches a query. The metadata field is populated with metadata (if any) of the document from which the segments or answers were extracted.Structured data source: a JSON string containing all the fields returned from the structured data source. The metadata field is populated with metadata (if any) of the document Only for Unstructured data sources:An extractive answer is verbatim text that is returned with each search result. It is extracted directly from the original document. Extractive answers are typically displayed near the top of web pages to provide an end user with a brief answer that is contextually relevant to their query. Extractive answers are available for website and unstructured search.An extractive segment is verbatim text that is returned with each search result. An extractive segment is usually more verbose than an extractive answer. Extractive segments can be displayed as an answer to a query, and can be used to perform post-processing tasks and as input for large language models to generate answers or new text. Extractive segments are available for unstructured search.For more information about extractive segments and extractive answers refer to product documentation.When creating an instance of the retriever you can specify a number of parameters that control which Enterprise data store to access and how a natural language query is processed, including configurations for extractive answers and segments.The mandatory parameters are:project_id - Your Google Cloud PROJECT_IDsearch_engine_id - The ID of the data store you want to use. The project_id and search_engine_id parameters can be provided explicitly in the retriever's constructor or through the environment variables - PROJECT_ID and SEARCH_ENGINE_ID.You can also configure a number of optional parameters, including:max_documents - The maximum number of documents used to provide extractive segments or extractive answersget_extractive_answers - By default, the retriever is configured to return extractive segments. Set this field to True to return extractive answers. This is used only when engine_data_type is set to 0 (unstructured) max_extractive_answer_count - The maximum number of extractive answers returned in each search result.
At most 5 answers will be returned. This is used only when engine_data_type is set to 0 (unstructured) max_extractive_segment_count - The maximum number of extractive segments returned in each search result.
Currently one segment will be returned. This is used only when engine_data_type is set to 0 (unstructured) filter - The filter expression that allows you to filter the search results based on the metadata associated with the documents in the searched data store. query_expansion_condition - Specification to determine under which conditions query expansion should occur.
0 - Unspecified query expansion condition. In this case, server behavior defaults to disabled.
1 - Disabled query expansion. Only the exact search query is used, even if SearchResponse.total_size is zero.
2 - Automatic query expansion built by the Search API.engine_data_type - Defines the enterprise search data type
0 - Unstructured data
1 - Structured dataConfigure and use the retriever for unstructured data with extractive segmentsfrom langchain.retrievers import GoogleCloudEnterpriseSearchRetrieverPROJECT_ID = "<YOUR PROJECT ID>" # Set to your Project IDSEARCH_ENGINE_ID = "<YOUR SEARCH ENGINE ID>" # Set to your data store IDretriever = GoogleCloudEnterpriseSearchRetriever( project_id=PROJECT_ID, search_engine_id=SEARCH_ENGINE_ID, max_documents=3,)query = "What are Alphabet's Other Bets?"result = retriever.get_relevant_documents(query)for doc in result: print(doc)Configure and use the retriever for unstructured data with extractive answersretriever = GoogleCloudEnterpriseSearchRetriever( project_id=PROJECT_ID, search_engine_id=SEARCH_ENGINE_ID, max_documents=3, max_extractive_answer_count=3, get_extractive_answers=True,)query = "What are Alphabet's Other Bets?"result = retriever.get_relevant_documents(query)for doc in result: print(doc)Configure and use the retriever for structured data with extractive answersretriever = GoogleCloudEnterpriseSearchRetriever( project_id=PROJECT_ID, search_engine_id=SEARCH_ENGINE_ID, max_documents=3, engine_data_type=1)result = retriever.get_relevant_documents(query)for doc in result: print(doc)PreviousElasticSearch BM25NextGoogle DriveInstall pre-requisitesConfigure access to Google Cloud and Google Cloud Enterprise SearchSet or create a Google Cloud project and turn on Gen App BuilderCreate and populate an unstructured data storeSet credentials to access Enterprise Search APIConfigure and use the Enterprise Search retrieverOnly for Unstructured data sources:The mandatory parameters are:Configure and use the retriever for unstructured data with extractive segmentsConfigure and use the retriever for unstructured data with extractive answersConfigure and use the retriever for structured data with extractive answers |
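The filter parameter described above can narrow results to documents whose metadata matches an Enterprise Search filter expression. A sketch; the placeholder expression must be replaced with a filter written against your own data store's metadata schema:

from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever

PROJECT_ID = "<YOUR PROJECT ID>"
SEARCH_ENGINE_ID = "<YOUR SEARCH ENGINE ID>"

retriever = GoogleCloudEnterpriseSearchRetriever(
    project_id=PROJECT_ID,
    search_engine_id=SEARCH_ENGINE_ID,
    max_documents=3,
    filter="<YOUR FILTER EXPRESSION>",  # written against your data store's metadata
)
for doc in retriever.get_relevant_documents("What are Alphabet's Other Bets?"):
    print(doc.metadata)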
688 | https://python.langchain.com/docs/integrations/retrievers/google_drive | ComponentsRetrieversGoogle DriveOn this pageGoogle DriveThis notebook covers how to retrieve documents from Google Drive.PrerequisitesCreate a Google Cloud project or use an existing projectEnable the Google Drive APIAuthorize credentials for desktop apppip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlibRetrieve the Google DocsBy default, the GoogleDriveRetriever expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the GOOGLE_ACCOUNT_FILE environment variable.
The location of token.json uses the same directory (or use the parameter token_path). Note that token.json will be created automatically the first time you use the retriever.GoogleDriveRetriever can retrieve a selection of files with some requests. By default, if you use a folder_id, all the files inside this folder can be retrieved as Document objects.You can obtain your folder and document id from the URL:Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is "1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5"Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is "1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"The special value root is for your personal home.from langchain_googledrive.retrievers import GoogleDriveRetrieverfolder_id="root"#folder_id='1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5'retriever = GoogleDriveRetriever( num_results=2,)By default, all files with these mime-types can be converted to Document.text/texttext/plaintext/htmltext/csvtext/markdownimage/pngimage/jpegapplication/epub+zipapplication/pdfapplication/rtfapplication/vnd.google-apps.document (GDoc)application/vnd.google-apps.presentation (GSlide)application/vnd.google-apps.spreadsheet (GSheet)application/vnd.google.colaboratory (Notebook colab)application/vnd.openxmlformats-officedocument.presentationml.presentation (PPTX)application/vnd.openxmlformats-officedocument.wordprocessingml.document (DOCX)It's possible to update or customize this. See the documentation of GDriveRetriever.But the corresponding packages must be installed.#!pip install unstructuredretriever.get_relevant_documents("machine learning")You can customize the criteria to select the files. A set of predefined filters is provided:
| template | description |
| -------------------------------------- | --------------------------------------------------------------------- |
| gdrive-all-in-folder | Return all compatible files from a folder_id |
| gdrive-query | Search query in all drives |
| gdrive-by-name | Search file with name query |
| gdrive-query-in-folder | Search query in folder_id (and sub-folders if recursive=true) |
| gdrive-mime-type | Search a specific mime_type |
| gdrive-mime-type-in-folder | Search a specific mime_type in folder_id |
| gdrive-query-with-mime-type | Search query with a specific mime_type |
| gdrive-query-with-mime-type-and-folder | Search query with a specific mime_type and in folder_id |retriever = GoogleDriveRetriever( template="gdrive-query", # Search everywhere num_results=2, # But take only 2 documents)for doc in retriever.get_relevant_documents("machine learning"): print("---") print(doc.page_content.strip()[:60]+"...")Otherwise, you can customize the prompt with a specialized PromptTemplatefrom langchain.prompts import PromptTemplateretriever = GoogleDriveRetriever( template=PromptTemplate(input_variables=['query'], # See https://developers.google.com/drive/api/guides/search-files template="(fullText contains '{query}') " "and mimeType='application/vnd.google-apps.document' " "and modifiedTime > '2000-01-01T00:00:00' " "and trashed=false"), num_results=2, # See https://developers.google.com/drive/api/v3/reference/files/list includeItemsFromAllDrives=False, supportsAllDrives=False,)for doc in retriever.get_relevant_documents("machine learning"): print(f"{doc.metadata['name']}:") print("---") print(doc.page_content.strip()[:60]+"...")Use Google Drive 'description' metadataEach Google Drive file has a description field in its metadata (see the details of a file).
Use the snippets mode to return the description of selected files.retriever = GoogleDriveRetriever( template='gdrive-mime-type-in-folder', folder_id=folder_id, mime_type='application/vnd.google-apps.document', # Only Google Docs num_results=2, mode='snippets', includeItemsFromAllDrives=False, supportsAllDrives=False,)retriever.get_relevant_documents("machine learning")PreviousGoogle Cloud Enterprise SearchNextGoogle Vertex AI SearchPrerequisitesRetrieve the Google DocsUse Google Drive 'description' metadata |
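Combining a predefined template with a folder scope is a common pattern. A sketch, assuming the retriever supports a recursive flag for sub-folder search as suggested by the gdrive-query-in-folder entry in the table above; verify that flag name against your installed langchain-googledrive version:

from langchain_googledrive.retrievers import GoogleDriveRetriever

retriever = GoogleDriveRetriever(
    template="gdrive-query-in-folder",  # predefined template from the table above
    folder_id="root",                   # or a folder id taken from the Drive URL
    recursive=True,                     # assumption: include sub-folders in the search
    num_results=3,
)
for doc in retriever.get_relevant_documents("quarterly report"):
    print(doc.metadata.get("name"), "->", doc.page_content.strip()[:60], "...")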
689 | https://python.langchain.com/docs/integrations/retrievers/google_vertex_ai_search | ComponentsRetrieversGoogle Vertex AI SearchOn this pageGoogle Vertex AI SearchVertex AI Search (formerly known as Enterprise Search on Generative AI App Builder) is a part of the Vertex AI machine learning platform offered by Google Cloud.Vertex AI Search lets organizations quickly build generative AI powered search engines for customers and employees. It's underpinned by a variety of Google Search technologies, including semantic search, which helps deliver more relevant results than traditional keyword-based search techniques by using natural language processing and machine learning techniques to infer relationships within the content and intent from the user’s query input. Vertex AI Search also benefits from Google’s expertise in understanding how users search and factors in content relevance to order displayed results.Vertex AI Search is available in the Google Cloud Console and via an API for enterprise workflow integration.This notebook demonstrates how to configure Vertex AI Search and use the Vertex AI Search retriever. The Vertex AI Search retriever encapsulates the Python client library and uses it to access the Search Service API.Install pre-requisitesYou need to install the google-cloud-discoveryengine package to use the Vertex AI Search retriever.pip install google-cloud-discoveryengineConfigure access to Google Cloud and Vertex AI SearchVertex AI Search is generally available without allowlist as of August 2023.Before you can use the retriever, you need to complete the following steps:Create a search engine and populate an unstructured data storeFollow the instructions in the Vertex AI Search Getting Started guide to set up a Google Cloud project and Vertex AI Search.Use the Google Cloud Console to create an unstructured data storePopulate it with the example PDF documents from the gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs Cloud Storage folder.Make sure to use the Cloud Storage (without metadata) option.Set credentials to access Vertex AI Search APIThe Vertex AI Search client libraries used by the Vertex AI Search retriever provide high-level language support for authenticating to Google Cloud programmatically.
Client libraries support Application Default Credentials (ADC); the libraries look for credentials in a set of defined locations and use those credentials to authenticate requests to the API.
With ADC, you can make credentials available to your application in a variety of environments, such as local development or production, without needing to modify your application code.If running in Google Colab, authenticate with google.colab.google.auth; otherwise, follow one of the supported methods to make sure that your Application Default Credentials are properly set.import sysif "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user()Configure and use the Vertex AI Search retrieverThe Vertex AI Search retriever is implemented in the langchain.retrievers.GoogleVertexAISearchRetriever class. The get_relevant_documents method returns a list of langchain.schema.Document documents where the page_content field of each document is populated with the document content.
Depending on the data type used in Vertex AI Search (structured or unstructured) the page_content field is populated as follows:Unstructured data source: either an extractive segment or an extractive answer that matches a query. The metadata field is populated with metadata (if any) of the document from which the segments or answers were extracted.Structured data source: a JSON string containing all the fields returned from the structured data source. The metadata field is populated with metadata (if any) of the documentOnly for Unstructured data sources:An extractive answer is verbatim text that is returned with each search result. It is extracted directly from the original document. Extractive answers are typically displayed near the top of web pages to provide an end user with a brief answer that is contextually relevant to their query. Extractive answers are available for website and unstructured search.An extractive segment is verbatim text that is returned with each search result. An extractive segment is usually more verbose than an extractive answer. Extractive segments can be displayed as an answer to a query, and can be used to perform post-processing tasks and as input for large language models to generate answers or new text. Extractive segments are available for unstructured search.For more information about extractive segments and extractive answers refer to product documentation.NOTE: Extractive segments require the Enterprise edition features to be enabled.When creating an instance of the retriever you can specify a number of parameters that control which data store to access and how a natural language query is processed, including configurations for extractive answers and segments.The mandatory parameters are:project_id - Your Google Cloud Project ID.location_id - The location of the data store: global (default), us, or eu.data_store_id - The ID of the data store you want to use.Note: This was called search_engine_id in previous versions of the retriever.The project_id and data_store_id parameters can be provided explicitly in the retriever's constructor or through the environment variables - PROJECT_ID and DATA_STORE_ID.You can also configure a number of optional parameters, including:max_documents - The maximum number of documents used to provide extractive segments or extractive answersget_extractive_answers - By default, the retriever is configured to return extractive segments.Set this field to True to return extractive answers. This is used only when engine_data_type is set to 0 (unstructured)max_extractive_answer_count - The maximum number of extractive answers returned in each search result.At most 5 answers will be returned. This is used only when engine_data_type is set to 0 (unstructured).max_extractive_segment_count - The maximum number of extractive segments returned in each search result.Currently one segment will be returned. This is used only when engine_data_type is set to 0 (unstructured).filter - The filter expression for the search results based on the metadata associated with the documents in the data store.query_expansion_condition - Specification to determine under which conditions query expansion should occur.0 - Unspecified query expansion condition. In this case, server behavior defaults to disabled.1 - Disabled query expansion.
Only the exact search query is used, even if SearchResponse.total_size is zero.2 - Automatic query expansion built by the Search API.engine_data_type - Defines the Vertex AI Search data type0 - Unstructured data1 - Structured dataMigration guide for GoogleCloudEnterpriseSearchRetrieverIn previous versions, this retriever was called GoogleCloudEnterpriseSearchRetriever. Some backwards-incompatible changes had to be made to the retriever after the General Availability launch due to changes in the product behavior.To update to the new retriever, make the following changes:Change the import from: from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever -> from langchain.retrievers import GoogleVertexAISearchRetriever.Change all class references from GoogleCloudEnterpriseSearchRetriever -> GoogleVertexAISearchRetriever.Upon class initialization, change the search_engine_id parameter name to data_store_id.Configure and use the retriever for unstructured data with extractive segmentsfrom langchain.retrievers import GoogleVertexAISearchRetrieverPROJECT_ID = "<YOUR PROJECT ID>" # Set to your Project IDLOCATION_ID = "<YOUR LOCATION>" # Set to your data store locationDATA_STORE_ID = "<YOUR DATA STORE ID>" # Set to your data store IDretriever = GoogleVertexAISearchRetriever( project_id=PROJECT_ID, location_id=LOCATION_ID, data_store_id=DATA_STORE_ID, max_documents=3,)query = "What are Alphabet's Other Bets?"result = retriever.get_relevant_documents(query)for doc in result: print(doc)Configure and use the retriever for unstructured data with extractive answersretriever = GoogleVertexAISearchRetriever( project_id=PROJECT_ID, location_id=LOCATION_ID, data_store_id=DATA_STORE_ID, max_documents=3, max_extractive_answer_count=3, get_extractive_answers=True,)result = retriever.get_relevant_documents(query)for doc in result: print(doc)Configure and use the retriever for structured dataretriever = GoogleVertexAISearchRetriever( project_id=PROJECT_ID, location_id=LOCATION_ID, data_store_id=DATA_STORE_ID, max_documents=3, engine_data_type=1,)result = retriever.get_relevant_documents(query)for doc in result: print(doc)PreviousGoogle DriveNextKay.aiInstall pre-requisitesConfigure access to Google Cloud and Vertex AI SearchCreate a search engine and populate an unstructured data storeSet credentials to access Vertex AI Search APIConfigure and use the Vertex AI Search retrieverOnly for Unstructured data sources:The mandatory parameters are:Migration guide for GoogleCloudEnterpriseSearchRetrieverConfigure and use the retriever for unstructured data with extractive segmentsConfigure and use the retriever for unstructured data with extractive answersConfigure and use the retriever for structured data |
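As noted in the parameter list above, project_id and data_store_id can also be supplied through the PROJECT_ID and DATA_STORE_ID environment variables rather than the constructor. A sketch with hypothetical values:

import os
from langchain.retrievers import GoogleVertexAISearchRetriever

os.environ["PROJECT_ID"] = "my-gcp-project"       # hypothetical project id
os.environ["DATA_STORE_ID"] = "my-data-store-id"  # hypothetical data store id

retriever = GoogleVertexAISearchRetriever(max_documents=3)
for doc in retriever.get_relevant_documents("What are Alphabet's Other Bets?"):
    print(doc)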
690 | https://python.langchain.com/docs/integrations/retrievers/kay | ComponentsRetrieversKay.aiOn this pageKay.aiData API built for RAG 🕵️ We are curating the world's largest datasets as high-quality embeddings so your AI agents can retrieve context on the fly. Latest models, fast retrieval, and zero infra.This notebook shows you how to retrieve datasets supported by Kay. You can currently search SEC Filings and Press Releases of US companies. Visit kay.ai for the latest data drops. For any questions, join our Discord or tweet at us.InstallationFirst you will need to install the kay package. You will also need an API key: you can get one for free at https://kay.ai. Once you have an API key, you must set it as an environment variable KAY_API_KEY.KayAiRetriever has a static .create() factory method that takes the following arguments:dataset_id: string required -- A Kay dataset id. This is a collection of data about a particular entity such as companies, people, or places. For example, try "company".data_types: List[string] optional -- This is a category within a dataset based on its origin or format, such as ‘SEC Filings’, ‘Press Releases’, or ‘Reports’ within the “company” dataset. For example, try ["10-K", "10-Q", "PressRelease"] under the “company” dataset. If left empty, Kay will retrieve the most relevant context across all types.num_contexts: int optional, defaults to 6 -- The number of document chunks to retrieve on each call to get_relevant_documents()ExamplesBasic Retriever Usage# Setup API keyfrom getpass import getpassKAY_API_KEY = getpass() ········from langchain.retrievers import KayAiRetrieverimport osfrom kay.rag.retrievers import KayRetrieveros.environ["KAY_API_KEY"] = KAY_API_KEYretriever = KayAiRetriever.create(dataset_id="company", data_types=["10-K", "10-Q", "PressRelease"], num_contexts=3)docs = retriever.get_relevant_documents("What were the biggest strategy changes and partnerships made by Roku in 2023??")docs [Document(page_content='Company Name: ROKU INC\nCompany Industry: CABLE & OTHER PAY TELEVISION SERVICES\nArticle Title: Roku and FreeWheel Announce Strategic Partnership to Bring Roku’s Leading Ad Tech to FreeWheel Customers\nText: Additionally, eMarketer Link: https://cts.businesswire.com/ct/CT?id=smartlink&url=https%3A%2F%2Fwww.insiderintelligence.com%2Finsights%2Favod-more-than-50-percent-of-us-digital-video-viewers%2F&esheet=53451144&newsitemid=20230712907788&lan=en-US&anchor=eMarketer&index=4&md5=b64dea72bcf6b6379474462602781d83 projects 57% of U.S. digital video users will stream an advertising-based video on demand (AVOD) service this year.\nHaving solutions aimed at driving greater interoperability and automation will help accelerate this growth.\nKey highlights of this collaboration include:\nStreamlined Integration: Roku has now integrated its demand application programming interface (dAPI) with FreeWheel s TV platform. Roku s demand API gives publishers direct, automatic and real-time access to more advertiser demand. This enhanced integration allows for streamlined ad operation workflows and better inventory quality control, both of which will improve publisher yield and revenue.\nSeamless Data Targeting: Publishers can now use Roku platform signals to enable advertisers to target audiences and measure campaign performance without relying on cookies.
Additionally, FreeWheel and Roku will rely on data clean room technology to enable the activation of additional data sets providing better measurement and monetization to publishers and agencies.', metadata={'_additional': {'id': '962b79e0-f9d1-43ae-9f7a-8a9b42bc7a9a'}, 'chunk_type': 'text', 'chunk_years_mentioned': [], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': 'PressRelease', 'data_source_link': 'https://www.nasdaq.com/press-release/roku-and-freewheel-announce-strategic-partnership-to-bring-rokus-leading-ad-tech-to', 'data_source_publish_date': '2023-07-12T00:00:00Z', 'data_source_uid': 'a46f309c-705d-3946-96db-87aa4e73261f', 'title': 'ROKU INC | Roku and FreeWheel Announce Strategic Partnership to Bring Roku’s Leading Ad Tech to FreeWheel Customers'}), Document(page_content='Company Name: ROKU INC \n Company Industry: CABLE & OTHER PAY TELEVISION SERVICES \n Form Title: 10-K 2022-FY \n Form Section: Risk Factors \n Text: nd the Note Regarding Forward Looking Statements.This section of this Annual Report generally discusses fiscal years 2022 and 2021 and year to year comparisons between those years.Discussions of fiscal year 2020 and year to year comparisons between fiscal years 2021 and 2020 that are not included in this Annual Report can be found in Management\'s Discussion and Analysis of Financial Condition and Results of Operations in Part II, Item 7 of our Annual Report for the fiscal year ended December 31, 2021 filed with the SEC on February 18, 2022.Overview Effective as of the fourth quarter of fiscal 2022, we reorganized our reportable segments to better align with management\'s reporting of information reviewed by the Chief Operating Decision Maker ("CODM") for each segment.We renamed our "player" segment to "devices" which now includes our licensing arrangements with service operators and licensed Roku TV partners in addition to sales of our streaming players, audio products, smart home products and Roku branded TVs that will be designed, made, and sold by us in 2023.Our historical segment information is recast to conform to our new presentation in our financial statements and accompanying notes included in Item 8 of this Annual Report.Our two reportable segments are the platform segment and the devices segment.', metadata={'_additional': {'id': 'a76c5fed-5d63-45a7-b63a-2c30e05140fc'}, 'chunk_type': 'text', 'chunk_years_mentioned': [2020, 2021, 2022, 2023], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': '10-K', 'data_source_link': 'https://www.sec.gov/Archives/edgar/data/1428439/000142843923000007', 'data_source_publish_date': '2022-01-01T00:00:00Z', 'data_source_uid': '0001428439-23-000007', 'title': 'ROKU INC | 10-K 2022-FY '}), Document(page_content='Company Name: ROKU INC \n Company Industry: CABLE & OTHER PAY TELEVISION SERVICES \n Form Title: 10-Q 2023-Q1 \n Form Section: Risk Factors \n Text: Our current and potential partners include TV brands, cable and satellite companies, and telecommunication providers.Under these license arrangements, we generally have limited or no control over the amount and timing of resources these entities dedicate to the relationship.In the past, our licensed Roku TV partners have failed to meet their forecasts and anticipated market launch dates for distributing Roku TV models, and they may fail to meet their forecasts or such launches in the future.If our licensed Roku TV partners or service 
operator partners fail to meet their forecasts or such launches for distributing licensed streaming devices or choose to deploy competing streaming solutions within their product lines, our business may be harmed.We depend on a small number of content publishers for a majority of our streaming hours, and if we fail to maintain these relationships, our business could be harmed.*Historically, a small number of content publishers have accounted for a significant portion of the hours streamed on our platform.In the three months ended March 31, 2023, the top three streaming services represented over 50% of all hours streamed in the period.If, for any reason, we cease distributing channels that have historically streamed a large percentage of the aggregate streaming hours on our platform, our streaming hours, our active accounts, or Roku streaming device sales may be adversely affected, and our business may be harmed.', metadata={'_additional': {'id': '2a92b2bb-02a0-4e15-8b64-d7e04078a205'}, 'chunk_type': 'text', 'chunk_years_mentioned': [2023], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': '10-Q', 'data_source_link': 'https://www.sec.gov/Archives/edgar/data/1428439/000142843923000017', 'data_source_publish_date': '2023-01-01T00:00:00Z', 'data_source_uid': '0001428439-23-000017', 'title': 'ROKU INC | 10-Q 2023-Q1 '})]Usage in a chainOPENAI_API_KEY = getpass() ········os.environ["OPENAI_API_KEY"] = OPENAI_API_KEYfrom langchain.chat_models import ChatOpenAIfrom langchain.chains import ConversationalRetrievalChainmodel = ChatOpenAI(model_name="gpt-3.5-turbo")qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)questions = [ "What were the biggest strategy changes and partnerships made by Roku in 2023?" # "Where is Wex making the most money in 2023?",]chat_history = []for question in questions: result = qa({"question": question, "chat_history": chat_history}) chat_history.append((question, result["answer"])) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n") -> **Question**: What were the biggest strategy changes and partnerships made by Roku in 2023? **Answer**: In 2023, Roku made a strategic partnership with FreeWheel to bring Roku's leading ad tech to FreeWheel customers. This partnership aimed to drive greater interoperability and automation in the advertising-based video on demand (AVOD) space. Key highlights of this collaboration include streamlined integration of Roku's demand application programming interface (dAPI) with FreeWheel's TV platform, allowing for better inventory quality control and improved publisher yield and revenue. Additionally, publishers can now use Roku platform signals to enable advertisers to target audiences and measure campaign performance without relying on cookies. This partnership also involves the use of data clean room technology to enable the activation of additional data sets for better measurement and monetization for publishers and agencies. These partnerships and strategies aim to support Roku's growth in the AVOD market. PreviousGoogle Vertex AI SearchNextkNNBasic Retriever UsageUsage in a chain |
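Per the data_types argument described above, retrieval can also be narrowed to a single category. A sketch, assuming KAY_API_KEY is set in the environment and that the "PressRelease" type exists in Kay's "company" dataset catalog:

from langchain.retrievers import KayAiRetriever

retriever = KayAiRetriever.create(
    dataset_id="company",
    data_types=["PressRelease"],  # press releases only
    num_contexts=6,               # the documented default
)
for d in retriever.get_relevant_documents("Roku partnerships in 2023"):
    print(d.metadata.get("data_source"), "-", d.metadata.get("title"))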
691 | https://python.langchain.com/docs/integrations/retrievers/knn | ComponentsRetrieverskNNOn this pagekNNIn statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression.This notebook goes over how to use a retriever that under the hood uses kNN.Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.htmlfrom langchain.retrievers import KNNRetrieverfrom langchain.embeddings import OpenAIEmbeddingsCreate New Retriever with Textsretriever = KNNRetriever.from_texts( ["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings())Use RetrieverWe can now use the retriever!result = retriever.get_relevant_documents("foo")result [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={}), Document(page_content='hello', metadata={}), Document(page_content='bar', metadata={})]PreviousKay.aiNextLOTR (Merger Retriever)Create New Retriever with TextsUse Retriever |
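In the current langchain source, KNNRetriever also exposes a k field (number of neighbors, default 4) and a relevancy_threshold that drops weakly similar results; treat both field names as assumptions to verify against your installed version. A sketch:

from langchain.retrievers import KNNRetriever
from langchain.embeddings import OpenAIEmbeddings

retriever = KNNRetriever.from_texts(
    ["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings()
)
retriever.k = 2                      # return at most 2 nearest neighbors
retriever.relevancy_threshold = 0.8  # assumption: filters out low-similarity matches
print(retriever.get_relevant_documents("foo"))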
692 | https://python.langchain.com/docs/integrations/retrievers/merger_retriever | ComponentsRetrieversLOTR (Merger Retriever)On this pageLOTR (Merger Retriever)Lord of the Retrievers, also known as MergerRetriever, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list. The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers.The MergerRetriever class can be used to improve the accuracy of document retrieval in a number of ways. First, it can combine the results of multiple retrievers, which can help to reduce the risk of bias in the results. Second, it can rank the results of the different retrievers, which can help to ensure that the most relevant documents are returned first.import osimport chromadbfrom langchain.retrievers.merger_retriever import MergerRetrieverfrom langchain.vectorstores import Chromafrom langchain.embeddings import HuggingFaceEmbeddingsfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.document_transformers import ( EmbeddingsRedundantFilter, EmbeddingsClusteringFilter,)from langchain.retrievers.document_compressors import DocumentCompressorPipelinefrom langchain.retrievers import ContextualCompressionRetriever# Get 3 diff embeddings.all_mini = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")multi_qa_mini = HuggingFaceEmbeddings(model_name="multi-qa-MiniLM-L6-dot-v1")filter_embeddings = OpenAIEmbeddings()ABS_PATH = os.path.dirname(os.path.abspath(__file__))DB_DIR = os.path.join(ABS_PATH, "db")# Instantiate 2 diff Chroma indexes, each one with a diff embedding.client_settings = chromadb.config.Settings( is_persistent=True, persist_directory=DB_DIR, anonymized_telemetry=False,)db_all = Chroma( collection_name="project_store_all", persist_directory=DB_DIR, client_settings=client_settings, embedding_function=all_mini,)db_multi_qa = Chroma( collection_name="project_store_multi", persist_directory=DB_DIR, client_settings=client_settings, embedding_function=multi_qa_mini,)# Define 2 diff retrievers with 2 diff embeddings and diff search type.retriever_all = db_all.as_retriever( search_type="similarity", search_kwargs={"k": 5, "include_metadata": True})retriever_multi_qa = db_multi_qa.as_retriever( search_type="mmr", search_kwargs={"k": 5, "include_metadata": True})# The Lord of the Retrievers will hold the output of both retrievers and can be used as any other# retriever on different types of chains.lotr = MergerRetriever(retrievers=[retriever_all, retriever_multi_qa])Remove redundant results from the merged retrievers.# We can remove redundant results from both retrievers using yet another embedding.# Using multiple embeddings in diff steps could help reduce biases.filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)pipeline = DocumentCompressorPipeline(transformers=[filter])compression_retriever = ContextualCompressionRetriever( base_compressor=pipeline, base_retriever=lotr)Pick a representative sample of documents from the merged retrievers.# This filter will divide the document vectors into clusters or "centers" of meaning.# Then it will pick the closest document to that center for the final results.# By default the resulting documents will be ordered/grouped by clusters.filter_ordered_cluster = EmbeddingsClusteringFilter( embeddings=filter_embeddings, num_clusters=10, num_closest=1,)# If you want the final documents to be ordered by the original retriever scores# you need to add the "sorted" 
parameter.filter_ordered_by_retriever = EmbeddingsClusteringFilter( embeddings=filter_embeddings, num_clusters=10, num_closest=1, sorted=True,)pipeline = DocumentCompressorPipeline(transformers=[filter_ordered_by_retriever])compression_retriever = ContextualCompressionRetriever( base_compressor=pipeline, base_retriever=lotr)Re-order results to avoid performance degradation.No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents.
In brief: when models must access relevant information in the middle of long contexts, they tend to ignore the provided documents.
See: https://arxiv.org/abs/2307.03172# You can use an additional document transformer to reorder documents after removing redundancy.from langchain.document_transformers import LongContextReorderfilter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)reordering = LongContextReorder()pipeline = DocumentCompressorPipeline(transformers=[filter, reordering])compression_retriever_reordered = ContextualCompressionRetriever( base_compressor=pipeline, base_retriever=lotr)PreviouskNNNextMetalRemove redundant results from the merged retrievers.Pick a representative sample of documents from the merged retrievers.Re-order results to avoid performance degradation. |
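Any of the compression retrievers built on this page behaves like an ordinary retriever, so it can be dropped straight into a chain. A minimal sketch, assuming an OPENAI_API_KEY is configured and the Chroma collections above have been populated (the question is illustrative):

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=compression_retriever_reordered,  # merged, deduplicated, reordered
)
print(qa.run("What does the project documentation say about deployment?"))
```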
693 | https://python.langchain.com/docs/integrations/retrievers/metal | ComponentsRetrieversMetalOn this pageMetalMetal is a managed service for ML Embeddings.This notebook shows how to use Metal's retriever.First, you will need to sign up for Metal and get an API key. You can do so here# !pip install metal_sdkfrom metal_sdk.metal import MetalAPI_KEY = ""CLIENT_ID = ""INDEX_ID = ""metal = Metal(API_KEY, CLIENT_ID, INDEX_ID);Ingest DocumentsYou only need to do this if you haven't already set up an indexmetal.index({"text": "foo1"})metal.index({"text": "foo"}) {'data': {'id': '642739aa7559b026b4430e42', 'text': 'foo', 'createdAt': '2023-03-31T19:51:06.748Z'}}QueryNow that our index is set up, we can set up a retriever and start querying it.from langchain.retrievers import MetalRetrieverretriever = MetalRetriever(metal, params={"limit": 2})retriever.get_relevant_documents("foo1") [Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}), Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]PreviousLOTR (Merger Retriever)NextPinecone Hybrid SearchIngest DocumentsQuery |
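As with other retrievers, the MetalRetriever can be handed to a chain unchanged. A minimal sketch, assuming an OPENAI_API_KEY is set and the toy index above has been populated (the question is illustrative):

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=retriever)
print(qa.run("What is foo1?"))
```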
694 | https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search | ComponentsRetrieversPinecone Hybrid SearchOn this pagePinecone Hybrid SearchPinecone is a vector database with broad functionality.This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.The logic of this retriever is taken from this documentationTo use Pinecone, you must have an API key and an Environment.
Here are the installation instructions.#!pip install pinecone-client pinecone-textimport osimport getpassos.environ["PINECONE_API_KEY"] = getpass.getpass("Pinecone API Key:")from langchain.retrievers import PineconeHybridSearchRetrieveros.environ["PINECONE_ENVIRONMENT"] = getpass.getpass("Pinecone Environment:")We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Setup PineconeYou should only have to do this part once.Note: it's important to make sure that the "context" field that holds the document text in the metadata is not indexed. Currently you need to specify explicitly the fields you do want to index. For more information check out Pinecone's docs.import osimport pineconeapi_key = os.getenv("PINECONE_API_KEY") or "PINECONE_API_KEY"# find environment next to your API key in the Pinecone consoleenv = os.getenv("PINECONE_ENVIRONMENT") or "PINECONE_ENVIRONMENT"index_name = "langchain-pinecone-hybrid-search"pinecone.init(api_key=api_key, environment=env)pinecone.whoami() WhoAmIResponse(username='load', user_label='label', projectname='load-test')# create the indexpinecone.create_index( name=index_name, dimension=1536, # dimensionality of dense model metric="dotproduct", # sparse values supported only for dotproduct pod_type="s1", metadata_config={"indexed": []}, # see explanation above)Now that it's created, we can use itindex = pinecone.Index(index_name)Get embeddings and sparse encodersEmbeddings are used for the dense vectors, the tokenizer is used for the sparse vectorfrom langchain.embeddings import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()To encode the text to sparse values you can either choose SPLADE or BM25. For out-of-domain tasks we recommend using BM25.For more information about the sparse encoders you can check out the pinecone-text library docs.from pinecone_text.sparse import BM25Encoder# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE# use default tf-idf valuesbm25_encoder = BM25Encoder().default()The above code is using default tf-idf values. It's highly recommended to fit the tf-idf values to your own corpus. You can do it as follows:corpus = ["foo", "bar", "world", "hello"]# fit tf-idf values on your corpusbm25_encoder.fit(corpus)# store the values to a json filebm25_encoder.dump("bm25_values.json")# load to your BM25Encoder objectbm25_encoder = BM25Encoder().load("bm25_values.json")Load RetrieverWe can now construct the retriever!retriever = PineconeHybridSearchRetriever( embeddings=embeddings, sparse_encoder=bm25_encoder, index=index)Add texts (if necessary)We can optionally add texts to the retriever (if they aren't already in there)retriever.add_texts(["foo", "bar", "world", "hello"]) 100%|██████████| 1/1 [00:02<00:00, 2.27s/it]Use RetrieverWe can now use the retriever!result = retriever.get_relevant_documents("foo")result[0] Document(page_content='foo', metadata={})PreviousMetalNextPubMedSetup PineconeGet embeddings and sparse encodersLoad RetrieverAdd texts (if necessary)Use Retriever |
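Conceptually, hybrid search ranks each document by a weighted combination of its dense (embedding) similarity and its sparse (BM25/SPLADE) similarity. A minimal NumPy sketch of that convex weighting; the alpha weight, its name, and its default here are illustrative assumptions rather than values taken from the page above:

```python
import numpy as np

def hybrid_scores(dense_sims: np.ndarray, sparse_sims: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # Convex combination: alpha weights the dense score, (1 - alpha) the sparse one.
    assert 0.0 <= alpha <= 1.0
    return alpha * dense_sims + (1.0 - alpha) * sparse_sims

dense = np.array([0.9, 0.2, 0.5])   # e.g. dot products of dense embeddings
sparse = np.array([0.1, 0.8, 0.5])  # e.g. BM25 scores, assumed pre-normalized
print(hybrid_scores(dense, sparse, alpha=0.7))  # ranking leans toward the dense scores
```

With alpha=1.0 this degenerates to pure dense retrieval and with alpha=0.0 to pure sparse retrieval, which is why fitting the BM25 encoder to your own corpus matters for the sparse half of the score.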