langchain.tools.render¶
Different methods for rendering Tools to be passed to LLMs.
Depending on the LLM and the prompting strategy you are using,
you may want Tools to be rendered in a different way.
This module contains various ways to render tools.
Functions¶
tools.render.format_tool_to_openai_function(tool)
Format tool into the OpenAI function API.
tools.render.format_tool_to_openai_tool(tool)
Format tool into the OpenAI tool API.
tools.render.render_text_description(tools)
Render the tool name and description in plain text.
tools.render.render_text_description_and_args(tools)
Render the tool name, description, and args in plain text.
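A minimal sketch of how these renderers are typically used, assuming a simple custom tool built with the @tool decorator (the multiply tool below is a made-up example, not part of LangChain):
from langchain.tools import tool
from langchain.tools.render import render_text_description, render_text_description_and_args

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Renders one "name: description" line per tool.
print(render_text_description([multiply]))
# Same, but with each tool's argument schema appended.
print(render_text_description_and_args([multiply]))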
langchain.utilities¶
Utilities are integrations with third-party systems and packages.
Other LangChain classes use Utilities to interact with third-party systems
and packages.
Classes¶
utilities.alpha_vantage.AlphaVantageAPIWrapper
Wrapper for AlphaVantage API for Currency Exchange Rate.
utilities.apify.ApifyWrapper
Wrapper around Apify.
utilities.arcee.ArceeDocument
Arcee document.
utilities.arcee.ArceeDocumentAdapter()
Adapter for Arcee documents.
utilities.arcee.ArceeDocumentSource
Source of an Arcee document.
utilities.arcee.ArceeRoute(value[, names, ...])
Routes available for the Arcee API as enumerator.
utilities.arcee.ArceeWrapper(arcee_api_key, ...)
Wrapper for Arcee API.
utilities.arcee.DALMFilter
Filters available for a DALM retrieval and generation.
utilities.arcee.DALMFilterType(value[, ...])
Filter types available for a DALM retrieval as enumerator.
utilities.arxiv.ArxivAPIWrapper
Wrapper around ArxivAPI.
utilities.awslambda.LambdaWrapper
Wrapper for AWS Lambda SDK.
utilities.bibtex.BibtexparserWrapper
Wrapper around bibtexparser.
utilities.bing_search.BingSearchAPIWrapper
Wrapper for Bing Search API.
utilities.brave_search.BraveSearchWrapper
Wrapper around the Brave search engine.
utilities.clickup.CUList(folder_id, name[, ...])
Component class for a list.
utilities.clickup.ClickupAPIWrapper
Wrapper for Clickup API.
utilities.clickup.Component()
Base class for all components.
utilities.clickup.Member(id, username, ...)
Component class for a member.
utilities.clickup.Space(id, name, private, ...)
Component class for a space.
utilities.clickup.Task(id, name, ...)
Class for a task.
utilities.clickup.Team(id, name, members)
Component class for a team.
utilities.dalle_image_generator.DallEAPIWrapper
Wrapper for OpenAI's DALL-E Image Generator.
utilities.dataforseo_api_search.DataForSeoAPIWrapper
Wrapper around the DataForSeo API.
utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper
Wrapper for DuckDuckGo Search API.
utilities.github.GitHubAPIWrapper
Wrapper for GitHub API.
utilities.gitlab.GitLabAPIWrapper
Wrapper for GitLab API.
utilities.golden_query.GoldenQueryAPIWrapper
Wrapper for Golden.
utilities.google_places_api.GooglePlacesAPIWrapper
Wrapper around Google Places API.
utilities.google_scholar.GoogleScholarAPIWrapper
Wrapper for Google Scholar API.
utilities.google_search.GoogleSearchAPIWrapper
Wrapper for Google Search API.
utilities.google_serper.GoogleSerperAPIWrapper
Wrapper around the Serper.dev Google Search API.
utilities.graphql.GraphQLAPIWrapper
Wrapper around GraphQL API.
utilities.jira.JiraAPIWrapper
Wrapper for Jira API.
utilities.max_compute.MaxComputeAPIWrapper(client)
Interface for querying Alibaba Cloud MaxCompute tables.
utilities.metaphor_search.MetaphorSearchAPIWrapper
Wrapper for Metaphor Search API.
utilities.openapi.HTTPVerb(value[, names, ...])
Enumerator of the HTTP verbs.
utilities.openapi.OpenAPISpec()
OpenAPI Model that removes mis-formatted parts of the spec.
utilities.openweathermap.OpenWeatherMapAPIWrapper
Wrapper for OpenWeatherMap API using PyOWM.
utilities.portkey.Portkey()
Portkey configuration.
utilities.powerbi.PowerBIDataset
Create PowerBI engine from dataset ID and credential or token.
utilities.pubmed.PubMedAPIWrapper
Wrapper around PubMed API.
utilities.python.PythonREPL
Simulates a standalone Python REPL.
utilities.redis.TokenEscaper([escape_chars_re])
Escape punctuation within an input string.
utilities.requests.Requests
Wrapper around requests to handle auth and async.
utilities.requests.RequestsWrapper
alias of TextRequestsWrapper
utilities.requests.TextRequestsWrapper
Lightweight wrapper around requests library.
utilities.scenexplain.SceneXplainAPIWrapper
Wrapper for SceneXplain API.
utilities.searchapi.SearchApiAPIWrapper
Wrapper around SearchApi API.
utilities.searx_search.SearxResults(data)
Dict-like wrapper around search API results.
utilities.searx_search.SearxSearchWrapper
Wrapper for Searx API.
utilities.serpapi.HiddenPrints()
Context manager to hide prints.
utilities.serpapi.SerpAPIWrapper
Wrapper around SerpAPI.
utilities.spark_sql.SparkSQL([...])
SparkSQL is a utility class for interacting with Spark SQL.
utilities.sql_database.SQLDatabase(engine[, ...])
SQLAlchemy wrapper around a database.
utilities.tavily_search.TavilySearchAPIWrapper
Wrapper for Tavily Search API.
utilities.tensorflow_datasets.TensorflowDatasets
Access to the TensorFlow Datasets.
utilities.twilio.TwilioAPIWrapper
Messaging Client using Twilio.
utilities.wikipedia.WikipediaAPIWrapper
Wrapper around WikipediaAPI.
utilities.wolfram_alpha.WolframAlphaAPIWrapper
Wrapper for Wolfram Alpha.
utilities.zapier.ZapierNLAWrapper
Wrapper for Zapier NLA.
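Most of these wrappers follow the same pattern: instantiate with configuration, then call run() with a query. A minimal sketch using WikipediaAPIWrapper (assumes the wikipedia package is installed; other wrappers differ in their required credentials):
from langchain.utilities.wikipedia import WikipediaAPIWrapper

wikipedia = WikipediaAPIWrapper(top_k_results=2)
# Returns a plain-text summary of the top matching pages.
print(wikipedia.run("LangChain"))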
Functions¶
utilities.anthropic.get_num_tokens_anthropic(text)
Get the number of tokens in a string of text.
utilities.anthropic.get_token_ids_anthropic(text)
Get the token ids for a string of text.
utilities.clickup.extract_dict_elements_from_component_fields(...)
Extract elements from a dictionary.
utilities.clickup.fetch_data(url, access_token)
Fetch data from a URL.
utilities.clickup.fetch_first_id(data, key)
Fetch the first id from a dictionary.
utilities.clickup.fetch_folder_id(space_id, ...)
Fetch the folder id.
utilities.clickup.fetch_list_id(space_id, ...)
Fetch the list id.
utilities.clickup.fetch_space_id(team_id, ...)
Fetch the space id.
utilities.clickup.fetch_team_id(access_token)
Fetch the team id.
utilities.clickup.load_query(query[, ...])
Attempts to parse a JSON string and return the parsed object.
utilities.clickup.parse_dict_through_component(...)
Parse a dictionary by creating a component and then turning it back into a dictionary.
utilities.opaqueprompts.desanitize(...)
Restore the original sensitive data from the sanitized text.
utilities.opaqueprompts.sanitize(input)
Sanitize input string or dict of strings by replacing sensitive data with placeholders.
utilities.powerbi.fix_table_name(table)
Add single quotes around table names that contain spaces.
utilities.powerbi.json_to_md(json_contents)
Converts a JSON object to a markdown table.
utilities.redis.check_redis_module_exist(...)
Check if the correct Redis modules are installed.
utilities.redis.get_client(redis_url, **kwargs)
Get a redis client from the connection url given.
utilities.sql_database.truncate_word(...[, ...])
Truncate a string to a certain number of words, based on the max string length.
utilities.vertexai.get_client_info([module])
Returns a custom user agent header.
utilities.vertexai.init_vertexai([project, ...])
Init vertexai.
utilities.vertexai.raise_vertex_import_error([...])
Raise ImportError related to Vertex SDK being not available.
langchain.utils¶
Utility functions for LangChain.
These functions do not depend on any other LangChain module.
Classes¶
utils.aiter.NoLock()
Dummy lock that provides the proper interface but no protection
utils.aiter.Tee(iterable[, n, lock])
Create n separate asynchronous iterators over iterable
utils.aiter.atee
alias of Tee
utils.formatting.StrictFormatter()
A subclass of formatter that checks for extra keys.
utils.iter.NoLock()
Dummy lock that provides the proper interface but no protection
utils.iter.Tee(iterable[, n, lock])
Create n separate synchronous iterators over iterable
utils.iter.safetee
alias of Tee
utils.openai_functions.FunctionDescription
Representation of a callable function to the OpenAI API.
utils.openai_functions.ToolDescription
Representation of a callable function to the OpenAI API.
Functions¶
utils.aiter.py_anext(iterator[, default])
Pure-Python implementation of anext() for testing purposes.
utils.aiter.tee_peer(iterator, buffer, ...)
An individual iterator of a tee()
utils.env.get_from_dict_or_env(data, key, ...)
Get a value from a dictionary or an environment variable.
utils.env.get_from_env(key, env_key[, default])
Get a value from a dictionary or an environment variable.
utils.html.extract_sub_links(raw_html, url, *)
Extract all links from a raw html string and convert into absolute paths.
utils.html.find_all_links(raw_html, *[, pattern])
Extract all links from a raw html string.
utils.input.get_bolded_text(text)
Get bolded text.
utils.input.get_color_mapping(items[, ...])
Get a mapping of items to a supported color.
utils.input.get_colored_text(text, color)
Get colored text.
utils.input.print_text(text[, color, end, file])
Print text with highlighting and no end characters.
utils.iter.batch_iterate(size, iterable)
Utility batching function.
utils.iter.tee_peer(iterator, buffer, peers, ...)
An individual iterator of a tee()
utils.json_schema.dereference_refs(schema_obj, *)
Try to substitute $refs in JSON Schema.
utils.loading.try_load_from_hub(path, ...)
Load configuration from hub.
utils.math.cosine_similarity(X, Y)
Row-wise cosine similarity between two equal-width matrices.
utils.math.cosine_similarity_top_k(X, Y[, ...])
Row-wise cosine similarity with optional top-k and score threshold filtering.
utils.openai.is_openai_v1()
utils.openai_functions.convert_pydantic_to_openai_function(...)
Converts a Pydantic model to a function description for the OpenAI API.
utils.openai_functions.convert_pydantic_to_openai_tool(...)
Converts a Pydantic model to a function description for the OpenAI API.
utils.pydantic.get_pydantic_major_version()
Get the major version of Pydantic.
utils.strings.comma_list(items)
Convert a list to a comma-separated string.
utils.strings.stringify_dict(data)
Stringify a dictionary.
utils.strings.stringify_value(val)
Stringify a value.
utils.utils.build_extra_kwargs(extra_kwargs, ...)
Build extra kwargs from values and extra_kwargs.
utils.utils.check_package_version(package[, ...])
Check the version of a package.
utils.utils.convert_to_secret_str(value)
Convert a string to a SecretStr if needed.
utils.utils.get_pydantic_field_names(...)
Get field names, including aliases, for a pydantic class.
utils.utils.guard_import(module_name, *[, ...])
Dynamically imports a module and raises a helpful exception if the module is not installed.
utils.utils.mock_now(dt_value)
Context manager for mocking out datetime.now() in unit tests.
utils.utils.raise_for_status_with_text(response)
Raise an error with the response text.
utils.utils.xor_args(*arg_groups)
Validate specified keyword args are mutually exclusive.
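A minimal sketch of get_from_dict_or_env, which many of the wrappers above use to resolve credentials; the key names here are hypothetical:
import os
from langchain.utils.env import get_from_dict_or_env

os.environ["MY_API_KEY"] = "from-env"

# The dict takes precedence over the environment variable.
print(get_from_dict_or_env({"my_api_key": "from-kwargs"}, "my_api_key", "MY_API_KEY"))
# Falls back to the environment variable when the key is absent.
print(get_from_dict_or_env({}, "my_api_key", "MY_API_KEY"))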
langchain.vectorstores¶
Vector store stores embedded data and performs vector search.
One of the most common ways to store and search over unstructured data is to
embed it and store the resulting embedding vectors, and then query the store
and retrieve the data that are ‘most similar’ to the embedded query.
Class hierarchy:
VectorStore --> <name> # Examples: Annoy, FAISS, Milvus
BaseRetriever --> VectorStoreRetriever --> <name>Retriever # Example: VespaRetriever
Main helpers:
Embeddings, Document
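A minimal sketch of the embed-store-query workflow described above, using FAISS as an example (assumes faiss-cpu is installed and OPENAI_API_KEY is set); the other vector store classes listed below follow the same pattern:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

store = FAISS.from_texts(
    ["LangChain integrates many vector stores.", "FAISS is one of them."],
    embedding=OpenAIEmbeddings(),
)
docs = store.similarity_search("Which vector stores exist?", k=1)
print(docs[0].page_content)

# Any vector store can also be exposed as a retriever.
retriever = store.as_retriever()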
Classes¶
vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch(...)
Alibaba Cloud OpenSearch vector store.
vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings(...)
Alibaba Cloud OpenSearch client configuration.
vectorstores.analyticdb.AnalyticDB(...[, ...])
AnalyticDB (distributed PostgreSQL) vector store.
vectorstores.annoy.Annoy(embedding_function, ...)
Annoy vector store.
vectorstores.astradb.AstraDB(*, embedding, ...)
Wrapper around DataStax Astra DB for vector-store workloads.
vectorstores.atlas.AtlasDB(name[, ...])
Atlas vector store.
vectorstores.awadb.AwaDB([table_name, ...])
AwaDB vector store.
vectorstores.azure_cosmos_db.AzureCosmosDBVectorSearch(...)
Azure Cosmos DB for MongoDB vCore vector store.
vectorstores.azure_cosmos_db.CosmosDBSimilarityType(value)
Cosmos DB Similarity Type as enumerator.
vectorstores.azuresearch.AzureSearch(...[, ...])
Azure Cognitive Search vector store.
vectorstores.azuresearch.AzureSearchVectorStoreRetriever
Retriever that uses Azure Cognitive Search.
vectorstores.bageldb.Bagel([cluster_name, ...])
BagelDB.ai vector store.
vectorstores.baiducloud_vector_search.BESVectorStore(...)
Baidu Elasticsearch vector store.
vectorstores.cassandra.Cassandra(embedding, ...)
Wrapper around Apache Cassandra(R) for vector-store workloads.
vectorstores.chroma.Chroma([...])
ChromaDB vector store.
vectorstores.clarifai.Clarifai([user_id, ...])
Clarifai AI vector store.
vectorstores.clickhouse.Clickhouse(embedding)
ClickHouse VectorSearch vector store.
vectorstores.clickhouse.ClickhouseSettings
ClickHouse client configuration.
vectorstores.dashvector.DashVector(...)
DashVector vector store.
vectorstores.deeplake.DeepLake([...])
Activeloop Deep Lake vector store.
vectorstores.dingo.Dingo(embedding, text_key, *)
Dingo vector store.
vectorstores.docarray.base.DocArrayIndex(...)
Base class for DocArray based vector stores.
vectorstores.docarray.hnsw.DocArrayHnswSearch(...)
HnswLib storage using DocArray package.
vectorstores.docarray.in_memory.DocArrayInMemorySearch(...)
In-memory DocArray storage for exact search.
vectorstores.elastic_vector_search.ElasticKnnSearch(...)
[Deprecated] Elasticsearch with k-nearest neighbor search (k-NN) vector store.
vectorstores.elastic_vector_search.ElasticVectorSearch(...)
ElasticVectorSearch uses the brute force method of searching on vectors.
vectorstores.elasticsearch.ApproxRetrievalStrategy([...])
Approximate retrieval strategy using the HNSW algorithm.
vectorstores.elasticsearch.BaseRetrievalStrategy()
Base class for Elasticsearch retrieval strategies.
vectorstores.elasticsearch.ElasticsearchStore(...)
Elasticsearch vector store.
vectorstores.elasticsearch.ExactRetrievalStrategy()
Exact retrieval strategy using the script_score query.
vectorstores.elasticsearch.SparseRetrievalStrategy([...])
Sparse retrieval strategy using the text_expansion processor.
vectorstores.epsilla.Epsilla(client, embeddings)
Wrapper around Epsilla vector database.
vectorstores.faiss.FAISS(embedding_function, ...)
Meta Faiss vector store.
vectorstores.hippo.Hippo(embedding_function)
Hippo vector store.
vectorstores.hologres.Hologres(...[, ndims, ...])
Hologres API vector store.
vectorstores.hologres.HologresWrapper(...)
Hologres API wrapper.
vectorstores.lancedb.LanceDB(connection, ...)
LanceDB vector store.
vectorstores.llm_rails.LLMRails([...])
Implementation of Vector Store using LLMRails.
vectorstores.llm_rails.LLMRailsRetriever
Retriever for LLMRails.
vectorstores.marqo.Marqo(client, index_name)
Marqo vector store.
vectorstores.matching_engine.MatchingEngine(...)
Google Vertex AI Matching Engine vector store.
vectorstores.meilisearch.Meilisearch(embedding)
Meilisearch vector store.
vectorstores.milvus.Milvus(embedding_function)
Milvus vector store.
vectorstores.momento_vector_index.MomentoVectorIndex(...)
Momento Vector Index (MVI) vector store.
vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch(...)
MongoDB Atlas Vector Search vector store.
vectorstores.myscale.MyScale(embedding[, config])
MyScale vector store.
vectorstores.myscale.MyScaleSettings
MyScale client configuration.
vectorstores.myscale.MyScaleWithoutJSON(...)
MyScale vector store without a metadata column.
vectorstores.neo4j_vector.Neo4jVector(...[, ...])
Neo4j vector index.
vectorstores.neo4j_vector.SearchType(value)
Enumerator of the search types.
vectorstores.nucliadb.NucliaDB(...[, ...])
NucliaDB vector store.
vectorstores.opensearch_vector_search.OpenSearchVectorSearch(...)
Amazon OpenSearch Vector Engine vector store.
vectorstores.pgembedding.BaseModel(**kwargs)
Base model for all SQL stores.
vectorstores.pgembedding.CollectionStore(...)
Collection store.
vectorstores.pgembedding.EmbeddingStore(**kwargs)
Embedding store.
vectorstores.pgembedding.PGEmbedding(...[, ...])
Postgres with the pg_embedding extension as a vector store.
vectorstores.pgembedding.QueryResult()
Result from a query.
vectorstores.pgvecto_rs.PGVecto_rs(...[, ...])
vectorstores.pgvector.BaseModel(**kwargs)
Base model for the SQL stores.
vectorstores.pgvector.DistanceStrategy(value)
Enumerator of the Distance strategies.
vectorstores.pgvector.PGVector(...[, ...])
Postgres/PGVector vector store.
vectorstores.pinecone.Pinecone(index, ...[, ...])
Pinecone vector store.
vectorstores.qdrant.Qdrant(client, ...[, ...])
Qdrant vector store.
vectorstores.qdrant.QdrantException
Qdrant related exceptions.
vectorstores.redis.base.Redis(redis_url, ...)
Redis vector database.
vectorstores.redis.base.RedisVectorStoreRetriever
Retriever for Redis VectorStore.
vectorstores.redis.filters.RedisFilter()
Collection of RedisFilterFields.
vectorstores.redis.filters.RedisFilterExpression([...])
A logical expression of RedisFilterFields.
vectorstores.redis.filters.RedisFilterField(field)
Base class for RedisFilterFields.
vectorstores.redis.filters.RedisFilterOperator(value)
RedisFilterOperator enumerator is used to create RedisFilterExpressions.
vectorstores.redis.filters.RedisNum(field)
A RedisFilterField representing a numeric field in a Redis index.
vectorstores.redis.filters.RedisTag(field)
A RedisFilterField representing a tag in a Redis index.
vectorstores.redis.filters.RedisText(field)
A RedisFilterField representing a text field in a Redis index.
vectorstores.redis.schema.FlatVectorField
Schema for flat vector fields in Redis.
vectorstores.redis.schema.HNSWVectorField
Schema for HNSW vector fields in Redis.
vectorstores.redis.schema.NumericFieldSchema
Schema for numeric fields in Redis.
vectorstores.redis.schema.RedisDistanceMetric(value)
Distance metrics for Redis vector fields.
vectorstores.redis.schema.RedisField
Base class for Redis fields.
vectorstores.redis.schema.RedisModel
Schema for Redis index.
vectorstores.redis.schema.RedisVectorField
Base class for Redis vector fields.
vectorstores.redis.schema.TagFieldSchema
Schema for tag fields in Redis.
vectorstores.redis.schema.TextFieldSchema
Schema for text fields in Redis.
vectorstores.rocksetdb.Rockset(client, ...)
Rockset vector store.
vectorstores.scann.ScaNN(embedding, index, ...)
ScaNN vector store.
vectorstores.semadb.SemaDB(collection_name, ...)
SemaDB vector store.
vectorstores.singlestoredb.SingleStoreDB(...)
SingleStore DB vector store.
vectorstores.sklearn.BaseSerializer(persist_path)
Base class for serializing data.
vectorstores.sklearn.BsonSerializer(persist_path)
Serializes data in binary json using the bson python package.
vectorstores.sklearn.JsonSerializer(persist_path)
Serializes data in json using the json package from python standard library.
vectorstores.sklearn.ParquetSerializer(...)
Serializes data in Apache Parquet format using the pyarrow package.
vectorstores.sklearn.SKLearnVectorStore(...)
Simple in-memory vector store based on the scikit-learn library NearestNeighbors implementation.
vectorstores.sklearn.SKLearnVectorStoreException
Exception raised by SKLearnVectorStore.
vectorstores.sqlitevss.SQLiteVSS(table, ...)
Wrapper around SQLite with vss extension as a vector database.
vectorstores.starrocks.StarRocks(embedding)
StarRocks vector store.
vectorstores.starrocks.StarRocksSettings
StarRocks client configuration.
vectorstores.supabase.SupabaseVectorStore(...)
Supabase Postgres vector store.
vectorstores.tair.Tair(embedding_function, ...)
Tair vector store.
vectorstores.tencentvectordb.ConnectionParams(...)
Tencent vector DB Connection params.
vectorstores.tencentvectordb.IndexParams(...)
Tencent vector DB Index params.
vectorstores.tencentvectordb.TencentVectorDB(...)
Wrapper around the Tencent vector database.
vectorstores.tigris.Tigris(client, ...)
Tigris vector store.
vectorstores.tiledb.TileDB(embedding, ...[, ...])
Wrapper around TileDB vector database.
vectorstores.timescalevector.TimescaleVector(...)
VectorStore implementation using the timescale vector client to store vectors in Postgres.
vectorstores.typesense.Typesense(...[, ...])
Typesense vector store.
vectorstores.usearch.USearch(embedding, ...)
USearch vector store.
vectorstores.utils.DistanceStrategy(value[, ...])
Enumerator of the Distance strategies for calculating distances between vectors.
vectorstores.vald.Vald(embedding[, host, ...])
Wrapper around Vald vector database.
vectorstores.vearch.Vearch(embedding_function)
Initialize Vearch vector store (flag 1 for cluster, 0 for standalone).
vectorstores.vectara.Vectara([...])
Vectara API vector store.
vectorstores.vectara.VectaraRetriever
Retriever class for Vectara.
vectorstores.vespa.VespaStore(app[, ...])
Vespa vector store.
vectorstores.weaviate.Weaviate(client, ...)
Weaviate vector store.
vectorstores.xata.XataVectorStore(api_key, ...)
Xata vector store.
vectorstores.zep.CollectionConfig(name, ...)
Configuration for a Zep Collection.
vectorstores.zep.ZepVectorStore(...[, ...])
Zep vector store.
vectorstores.zilliz.Zilliz(embedding_function)
Zilliz vector store.
Functions¶
vectorstores.alibabacloud_opensearch.create_metadata(fields)
Create metadata from fields.
vectorstores.annoy.dependable_annoy_import()
Import annoy if available, otherwise raise error.
vectorstores.clickhouse.has_mul_sub_str(s, *args)
Check if a string contains multiple substrings.
vectorstores.faiss.dependable_faiss_import([...])
Import faiss if available, otherwise raise error.
vectorstores.myscale.has_mul_sub_str(s, *args)
Check if a string contains multiple substrings.
vectorstores.neo4j_vector.check_if_not_null(...)
Check that the values are not None or an empty string.
vectorstores.neo4j_vector.sort_by_index_name(...)
Sort the first element to match index_name, if it exists.
vectorstores.qdrant.sync_call_fallback(method)
Decorator to call the synchronous method of the class if the async method is not implemented.
vectorstores.redis.base.check_index_exists(...)
Check if Redis index exists.
vectorstores.redis.filters.check_operator_misuse(func)
Decorator to check for misuse of equality operators.
vectorstores.redis.schema.read_schema(...)
Reads in the index schema from a dict or yaml file.
vectorstores.scann.dependable_scann_import()
Import scann if available, otherwise raise error.
vectorstores.scann.normalize(x)
Normalize vectors to unit length.
vectorstores.starrocks.debug_output(s)
Print a debug message if DEBUG is True.
vectorstores.starrocks.get_named_result(...)
Get a named result from a query.
vectorstores.starrocks.has_mul_sub_str(s, *args)
Check if a string has multiple substrings.
vectorstores.tiledb.dependable_tiledb_import()
Import tiledb-vector-search if available, otherwise raise error.
vectorstores.tiledb.get_documents_array_uri(uri)
vectorstores.tiledb.get_documents_array_uri_from_group(group)
vectorstores.tiledb.get_vector_index_uri(uri)
vectorstores.tiledb.get_vector_index_uri_from_group(group)
vectorstores.usearch.dependable_usearch_import()
Import usearch if available, otherwise raise error.
vectorstores.utils.filter_complex_metadata(...)
Filter out metadata types that are not supported for a vector store.
vectorstores.utils.maximal_marginal_relevance(...)
Calculate maximal marginal relevance.
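A minimal sketch of maximal_marginal_relevance, assuming a signature of maximal_marginal_relevance(query_embedding, embedding_list, lambda_mult=0.5, k=4) that returns the indices of the selected embeddings (exact parameters may differ by version):
import numpy as np
from langchain.vectorstores.utils import maximal_marginal_relevance

query = np.array([1.0, 0.0])
candidates = [np.array([1.0, 0.0]), np.array([0.99, 0.1]), np.array([0.0, 1.0])]

# Picks embeddings that trade off relevance to the query against
# diversity among the picks.
print(maximal_marginal_relevance(query, candidates, k=2, lambda_mult=0.5))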
langchain_experimental API Reference¶
langchain_experimental.agents¶
Functions¶
agents.agent_toolkits.csv.base.create_csv_agent(...)
Create a CSV agent by loading the file into a dataframe and using the pandas agent.
agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent(llm, df)
Construct a pandas agent from an LLM and dataframe.
agents.agent_toolkits.python.base.create_python_agent(...)
Construct a python agent from an LLM and tool.
agents.agent_toolkits.spark.base.create_spark_dataframe_agent(llm, df)
Construct a Spark agent from an LLM and dataframe.
agents.agent_toolkits.xorbits.base.create_xorbits_agent(...)
Construct a xorbits agent from an LLM and dataframe.
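A minimal sketch of create_pandas_dataframe_agent, assuming an OpenAI key is configured; the agent answers questions by generating and executing pandas code against the dataframe:
import pandas as pd
from langchain.llms import OpenAI
from langchain_experimental.agents.agent_toolkits.pandas.base import (
    create_pandas_dataframe_agent,
)

df = pd.DataFrame({"country": ["DE", "FR"], "population_m": [83, 68]})
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("Which country has the larger population?")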
langchain_experimental.autonomous_agents¶
Classes¶
autonomous_agents.autogpt.agent.AutoGPT(...)
Agent class for interacting with Auto-GPT.
autonomous_agents.autogpt.memory.AutoGPTMemory
Memory for AutoGPT.
autonomous_agents.autogpt.output_parser.AutoGPTAction(...)
Action returned by AutoGPTOutputParser.
autonomous_agents.autogpt.output_parser.AutoGPTOutputParser
Output parser for AutoGPT.
autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser
Base Output parser for AutoGPT.
autonomous_agents.autogpt.prompt.AutoGPTPrompt
Prompt for AutoGPT.
autonomous_agents.autogpt.prompt_generator.PromptGenerator()
A class for generating custom prompt strings.
autonomous_agents.baby_agi.baby_agi.BabyAGI
Controller model for the BabyAGI agent.
autonomous_agents.baby_agi.task_creation.TaskCreationChain
Chain generating tasks.
autonomous_agents.baby_agi.task_execution.TaskExecutionChain
Chain to execute tasks.
autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain
Chain to prioritize tasks.
autonomous_agents.hugginggpt.hugginggpt.HuggingGPT(...)
autonomous_agents.hugginggpt.repsonse_generator.ResponseGenerationChain
Chain to execute tasks.
autonomous_agents.hugginggpt.repsonse_generator.ResponseGenerator(...)
autonomous_agents.hugginggpt.task_executor.Task(...)
autonomous_agents.hugginggpt.task_executor.TaskExecutor(plan)
Load tools to execute tasks.
autonomous_agents.hugginggpt.task_planner.BasePlanner
Create a new model by parsing and validating input data from keyword arguments.
autonomous_agents.hugginggpt.task_planner.Plan(steps)
autonomous_agents.hugginggpt.task_planner.PlanningOutputParser
Create a new model by parsing and validating input data from keyword arguments.
autonomous_agents.hugginggpt.task_planner.Step(...)
autonomous_agents.hugginggpt.task_planner.TaskPlaningChain
Chain to execute tasks.
autonomous_agents.hugginggpt.task_planner.TaskPlanner
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
autonomous_agents.autogpt.output_parser.preprocess_json_input(...)
Preprocesses a string to be parsed as json.
autonomous_agents.autogpt.prompt_generator.get_prompt(tools)
Generates a prompt string.
autonomous_agents.hugginggpt.repsonse_generator.load_response_generator(llm)
autonomous_agents.hugginggpt.task_planner.load_chat_planner(llm)
langchain_experimental.chat_models¶
Chat Models are a variation on language models.
While Chat Models use language models under the hood, the interface they expose
is a bit different. Rather than expose a “text in, text out” API, they expose
an interface where “chat messages” are the inputs and outputs.
Class hierarchy:
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
AIMessage, BaseMessage, HumanMessage
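A minimal sketch of the chat-message interface described above, using the experimental Llama2Chat wrapper around a plain LLM; the inference server URL is a placeholder assumption:
from langchain.llms import HuggingFaceTextGenInference
from langchain.schema import HumanMessage, SystemMessage
from langchain_experimental.chat_models import Llama2Chat

llm = HuggingFaceTextGenInference(inference_server_url="http://localhost:8080/")
chat = Llama2Chat(llm=llm)

messages = [
    SystemMessage(content="You are a concise assistant."),
    HumanMessage(content="What is LangChain?"),
]
# Chat messages in, an AIMessage out.
print(chat(messages).content)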
Classes¶
chat_models.llm_wrapper.ChatWrapper
Create a new model by parsing and validating input data from keyword arguments.
chat_models.llm_wrapper.Llama2Chat
Create a new model by parsing and validating input data from keyword arguments.
chat_models.llm_wrapper.Orca
Create a new model by parsing and validating input data from keyword arguments.
chat_models.llm_wrapper.Vicuna
Create a new model by parsing and validating input data from keyword arguments.
langchain_experimental.comprehend_moderation¶
Classes¶
comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain
A subclass of Chain, designed to apply moderation to LLMs.
comprehend_moderation.base_moderation.BaseModeration(client)
comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler()
comprehend_moderation.base_moderation_config.BaseModerationConfig
Create a new model by parsing and validating input data from keyword arguments.
comprehend_moderation.base_moderation_config.ModerationPiiConfig
Create a new model by parsing and validating input data from keyword arguments.
comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig
Create a new model by parsing and validating input data from keyword arguments.
comprehend_moderation.base_moderation_config.ModerationToxicityConfig
Create a new model by parsing and validating input data from keyword arguments.
comprehend_moderation.base_moderation_exceptions.ModerationPiiError([...])
Exception raised if PII entities are detected.
comprehend_moderation.base_moderation_exceptions.ModerationPromptSafetyError([...])
Exception raised if Intention entities are detected.
comprehend_moderation.base_moderation_exceptions.ModerationToxicityError([...])
Exception raised if Toxic entities are detected.
comprehend_moderation.pii.ComprehendPII(client)
comprehend_moderation.prompt_safety.ComprehendPromptSafety(client)
comprehend_moderation.toxicity.ComprehendToxicity(client)
langchain_experimental.cpal¶
Classes¶
cpal.base.CPALChain
Causal program-aided language (CPAL) chain implementation.
cpal.base.CausalChain
Translate the causal narrative into a stack of operations.
cpal.base.InterventionChain
Set the hypothetical conditions for the causal model.
cpal.base.NarrativeChain
Decompose the narrative into its story elements.
cpal.base.QueryChain
Query the outcome table using SQL.
cpal.constants.Constant(value[, names, ...])
Enum for constants used in the CPAL.
cpal.models.CausalModel
Create a new model by parsing and validating input data from keyword arguments.
cpal.models.EntityModel
Create a new model by parsing and validating input data from keyword arguments.
cpal.models.EntitySettingModel
Initial conditions for an entity
cpal.models.InterventionModel
Also known as initial conditions.
cpal.models.NarrativeModel
Represent the narrative input as three story elements.
cpal.models.QueryModel
Translate a question about the story outcome into a programmatic expression.
cpal.models.ResultModel
Create a new model by parsing and validating input data from keyword arguments.
cpal.models.StoryModel
Create a new model by parsing and validating input data from keyword arguments.
cpal.models.SystemSettingModel
Initial global conditions for the system.
langchain_experimental.fallacy_removal¶
The chain runs a self-review of logical fallacies, as categorized and defined in https://arxiv.org/pdf/2212.07425.pdf. It is modeled after Constitutional AI and uses the same format, but applies logical fallacies as generalized rules to remove from the output.
Classes¶
fallacy_removal.base.FallacyChain
Chain for applying logical fallacy evaluations, modeled after Constitutional AI and in the same format, but applying logical fallacies as generalized rules to remove from the output.
fallacy_removal.models.LogicalFallacy
Class for a logical fallacy.
langchain_experimental.generative_agents¶
Generative Agents primitives.
Classes¶
generative_agents.generative_agent.GenerativeAgent
An Agent as a character with memory and innate characteristics.
generative_agents.memory.GenerativeAgentMemory
Memory for the generative agent.
langchain_experimental.graph_transformers¶
Classes¶
graph_transformers.diffbot.DiffbotGraphTransformer([...])
Transforms documents into graph documents using Diffbot's NLP API.
graph_transformers.diffbot.NodesList()
Manages a list of nodes with associated properties.
graph_transformers.diffbot.SimplifiedSchema()
Provides functionality for working with a simplified schema mapping.
Functions¶
graph_transformers.diffbot.format_property_key(s)
langchain_experimental.llm_bash¶
Chain that interprets a prompt and executes bash code to perform bash operations.
Classes¶
llm_bash.base.LLMBashChain
Chain that interprets a prompt and executes bash operations.
llm_bash.bash.BashProcess([strip_newlines, ...])
Wrapper class for starting subprocesses.
llm_bash.prompt.BashOutputParser
Parser for bash output.
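A minimal sketch of LLMBashChain, which asks the LLM for a bash command and then executes it via BashProcess; run it only in a sandboxed environment, and with an OpenAI key configured:
from langchain.llms import OpenAI
from langchain_experimental.llm_bash.base import LLMBashChain

chain = LLMBashChain.from_llm(OpenAI(temperature=0), verbose=True)
chain.run("Print the current working directory.")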
langchain_experimental.llm_symbolic_math¶
Chain that interprets a prompt and executes python code to do math.
Heavily borrowed from llm_math; a wrapper for SymPy.
Classes¶
llm_symbolic_math.base.LLMSymbolicMathChain
Chain that interprets a prompt and executes python code to do symbolic math.
langchain_experimental.llms¶
Experimental LLM wrappers.
Classes¶
llms.anthropic_functions.AnthropicFunctions
Create a new model by parsing and validating input data from keyword arguments.
llms.anthropic_functions.TagParser()
A heavy-handed solution, but it's fast for prototyping.
llms.jsonformer_decoder.JsonFormer
Jsonformer wrapped LLM using HuggingFace Pipeline API.
llms.llamaapi.ChatLlamaAPI
Create a new model by parsing and validating input data from keyword arguments.
llms.lmformatenforcer_decoder.LMFormatEnforcer
LMFormatEnforcer wrapped LLM using HuggingFace Pipeline API.
llms.rellm_decoder.RELLM
RELLM wrapped LLM using HuggingFace Pipeline API.
Functions¶
llms.jsonformer_decoder.import_jsonformer()
Lazily import jsonformer.
llms.lmformatenforcer_decoder.import_lmformatenforcer()
Lazily import lmformatenforcer.
llms.rellm_decoder.import_rellm()
Lazily import rellm.
langchain_experimental.open_clip¶
Classes¶
open_clip.open_clip.OpenCLIPEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
langchain_experimental.pal_chain¶
Implements Program-Aided Language Models.
As in https://arxiv.org/pdf/2211.10435.pdf.
This is vulnerable to arbitrary code execution:
https://github.com/langchain-ai/langchain/issues/5872
Classes¶
pal_chain.base.PALChain
Implements Program-Aided Language Models (PAL).
pal_chain.base.PALValidation([...])
Initialize a PALValidation instance.
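A minimal sketch of PALChain using its math prompt (note the arbitrary-code-execution warning above; use only with trusted inputs and an OpenAI key configured):
from langchain.llms import OpenAI
from langchain_experimental.pal_chain.base import PALChain

chain = PALChain.from_math_prompt(OpenAI(temperature=0), verbose=True)
# The LLM writes a small Python program; PALChain executes it for the answer.
chain.run("If I have 3 apples and buy 2 more, how many apples do I have?")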
langchain_experimental.plan_and_execute¶
Classes¶
plan_and_execute.agent_executor.PlanAndExecute
Plan and execute a chain of steps.
plan_and_execute.executors.base.BaseExecutor
Base executor.
plan_and_execute.executors.base.ChainExecutor
Chain executor.
plan_and_execute.planners.base.BasePlanner
Base planner.
plan_and_execute.planners.base.LLMPlanner
LLM planner.
plan_and_execute.planners.chat_planner.PlanningOutputParser
Planning output parser.
plan_and_execute.schema.BaseStepContainer
Base step container.
plan_and_execute.schema.ListStepContainer
List step container.
plan_and_execute.schema.Plan
Plan.
plan_and_execute.schema.PlanOutputParser
Plan output parser.
plan_and_execute.schema.Step
Step.
plan_and_execute.schema.StepResponse
Step response.
Functions¶
plan_and_execute.executors.agent_executor.load_agent_executor(...)
Load an agent executor.
plan_and_execute.planners.chat_planner.load_chat_planner(llm)
Load a chat planner.
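A minimal sketch wiring the planner and executor helpers above into a PlanAndExecute agent; the word_count tool is a made-up example, and an OpenAI key is assumed:
from langchain.chat_models import ChatOpenAI
from langchain.tools import tool
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

llm = ChatOpenAI(temperature=0)
agent = PlanAndExecute(
    planner=load_chat_planner(llm),
    executor=load_agent_executor(llm, [word_count], verbose=True),
)
agent.run("How many words are in the sentence 'plan and execute'?")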
langchain_experimental.prompt_injection_identifier¶
HuggingFace Security toolkit.
Classes¶
prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier
Tool that uses deberta-v3-base-injection to detect prompt injection attacks.
langchain_experimental.retrievers¶
Classes¶
retrievers.vector_sql_database.VectorSQLDatabaseChainRetriever
Retriever that uses a SQLDatabase as its retriever.
langchain_experimental.rl_chain¶
Classes¶
rl_chain.base.AutoSelectionScorer
Create a new model by parsing and validating input data from keyword arguments.
rl_chain.base.Embedder(*args, **kwargs)
rl_chain.base.Event(inputs[, selected])
rl_chain.base.Policy(**kwargs)
rl_chain.base.RLChain
The RLChain class leverages the Vowpal Wabbit (VW) model as a learned policy for reinforcement learning.
rl_chain.base.Selected()
rl_chain.base.SelectionScorer
Abstract class for grading the chosen selection or the response of the LLM.
rl_chain.base.VwPolicy(model_repo, vw_cmd, ...)
rl_chain.metrics.MetricsTrackerAverage(step)
rl_chain.metrics.MetricsTrackerRollingWindow(...)
rl_chain.model_repository.ModelRepository(folder)
rl_chain.pick_best_chain.PickBest
PickBest is a class designed to leverage the Vowpal Wabbit (VW) model for reinforcement learning with a context, with the goal of modifying the prompt before the LLM call.
rl_chain.pick_best_chain.PickBestEvent(...)
rl_chain.pick_best_chain.PickBestFeatureEmbedder(...)
Text Embedder class that embeds the BasedOn and ToSelectFrom inputs into a format that can be used by the learning policy.
rl_chain.pick_best_chain.PickBestRandomPolicy(...)
rl_chain.pick_best_chain.PickBestSelected([...])
rl_chain.vw_logger.VwLogger(path)
Functions¶
rl_chain.base.BasedOn(anything)
rl_chain.base.Embed(anything[, keep])
rl_chain.base.EmbedAndKeep(anything)
rl_chain.base.ToSelectFrom(anything)
rl_chain.base.embed(to_embed, model[, namespace])
Embeds the actions or context using the SentenceTransformer model (or a model that has an encode function).
rl_chain.base.embed_dict_type(item, model)
Helper function to embed a dictionary item.
rl_chain.base.embed_list_type(item, model[, ...])
rl_chain.base.embed_string_type(item, model)
Helper function to embed a string or an _Embed object.
rl_chain.base.get_based_on_and_to_select_from(inputs)
rl_chain.base.is_stringtype_instance(item)
Helper function to check if an item is a string.
rl_chain.base.parse_lines(parser, input_str)
rl_chain.base.prepare_inputs_for_autoembed(inputs)
Go over all the inputs; if something is wrapped in _ToSelectFrom or _BasedOn and its inner values are not already _Embed, wrap them in EmbedAndKeep while retaining their _ToSelectFrom or _BasedOn status.
rl_chain.base.stringify_embedding(embedding)
langchain_experimental.smart_llm¶
Generalized implementation of SmartGPT (origin: https://youtu.be/wVzuvf9D9BU)
Classes¶
smart_llm.base.SmartLLMChain
Generalized implementation of SmartGPT (origin: https://youtu.be/wVzuvf9D9BU)
langchain_experimental.sql¶
Chain for interacting with SQL Database.
Classes¶
sql.base.SQLDatabaseChain
Chain for interacting with SQL Database.
sql.base.SQLDatabaseSequentialChain
Chain for querying SQL database that is a sequential chain.
sql.vector_sql.VectorSQLDatabaseChain
Chain for interacting with Vector SQL Database.
sql.vector_sql.VectorSQLOutputParser
Output parser for Vector SQL.
sql.vector_sql.VectorSQLRetrieveAllOutputParser
Based on VectorSQLOutputParser; it also modifies the SQL to retrieve all columns.
Functions¶
sql.vector_sql.get_result_from_sqldb(db, cmd)
langchain_experimental.tabular_synthetic_data¶
Classes¶
tabular_synthetic_data.base.SyntheticDataGenerator
Generates synthetic data using the given LLM and few-shot template.
Functions¶
tabular_synthetic_data.openai.create_openai_data_generator(...)
Create an instance of SyntheticDataGenerator tailored for OpenAI models.
langchain_experimental.tools¶
Classes¶
tools.python.tool.PythonAstREPLTool
A tool for running python code in a REPL.
tools.python.tool.PythonInputs
Create a new model by parsing and validating input data from keyword arguments.
tools.python.tool.PythonREPLTool
A tool for running python code in a REPL.
Functions¶
tools.python.tool.sanitize_input(query)
Sanitize input to the python REPL.
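A minimal sketch of PythonAstREPLTool, which executes Python code in-process (so treat inputs as trusted); sanitize_input is used to strip surrounding backticks and leading "python" markers before execution:
from langchain_experimental.tools.python.tool import PythonAstREPLTool

repl = PythonAstREPLTool()
print(repl.run("sum(range(10))"))  # -> 45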
langchain_experimental.tot¶
Classes¶
tot.base.ToTChain
A Chain implementing the Tree of Thought (ToT).
tot.checker.ToTChecker
Tree of Thought (ToT) checker.
tot.controller.ToTController([c])
Tree of Thought (ToT) controller.
tot.memory.ToTDFSMemory([stack])
Memory for the Tree of Thought (ToT) chain.
tot.prompts.CheckerOutputParser
tot.prompts.JSONListOutputParser
Class to parse the output of a PROPOSE_PROMPT response.
tot.thought.Thought
Create a new model by parsing and validating input data from keyword arguments.
tot.thought.ThoughtValidity(value[, names, ...])
tot.thought_generation.BaseThoughtGenerationStrategy
Base class for a thought generation strategy. | lang/api.python.langchain.com/en/latest/experimental_api_reference.html |
a509b383164c-10 | tot.thought_generation.BaseThoughtGenerationStrategy
Base class for a thought generation strategy.
tot.thought_generation.ProposePromptStrategy
Propose thoughts sequentially using a "propose prompt".
tot.thought_generation.SampleCoTStrategy
Sample thoughts from a Chain-of-Thought (CoT) prompt.
langchain_experimental.utilities¶
Classes¶
utilities.python.PythonREPL
Simulates a standalone Python REPL.
langchain.utilities.dataforseo_api_search.DataForSeoAPIWrapper¶
class langchain.utilities.dataforseo_api_search.DataForSeoAPIWrapper[source]¶
Bases: BaseModel
Wrapper around the DataForSeo API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aiosession: Optional[aiohttp.client.ClientSession] = None¶
The aiohttp session to use for the DataForSEO SERP API.
param api_login: Optional[str] = None¶
The API login to use for the DataForSEO SERP API.
param api_password: Optional[str] = None¶
The API password to use for the DataForSEO SERP API.
param default_params: dict = {'depth': 10, 'language_code': 'en', 'location_name': 'United States', 'se_name': 'google', 'se_type': 'organic'}¶
Default parameters to use for the DataForSEO SERP API.
param json_result_fields: Optional[list] = None¶
The JSON result fields.
param json_result_types: Optional[list] = None¶
The JSON result types.
param params: dict = {}¶
Additional parameters to pass to the DataForSEO SERP API.
param top_count: Optional[int] = None¶
The number of top results to return.
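A minimal sketch of configuring and running the wrapper; the credentials below are placeholders, and (as an assumption to verify for your version) they can alternatively be supplied via the DATAFORSEO_LOGIN and DATAFORSEO_PASSWORD environment variables:
from langchain.utilities.dataforseo_api_search import DataForSeoAPIWrapper

wrapper = DataForSeoAPIWrapper(
    api_login="you@example.com",   # placeholder
    api_password="your-password",  # placeholder
    top_count=3,
)
print(wrapper.run("langchain"))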
async aresults(url: str) → list[source]¶
async arun(url: str) → str[source]¶
Run a request to the DataForSEO SERP API and parse the result asynchronously.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
results(url: str) → list[source]¶
run(url: str) → str[source]¶
Run a request to the DataForSEO SERP API and parse the result.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using DataForSeoAPIWrapper¶
DataForSeo
DataForSEO
langchain.utilities.portkey.Portkey¶
class langchain.utilities.portkey.Portkey[source]¶
Portkey configuration.
base¶
The base URL for the Portkey API.
Default: “https://api.portkey.ai/v1/proxy”
Attributes
base
Methods
Config(api_key[, trace_id, environment, ...])
__init__()
static Config(api_key: str, trace_id: Optional[str] = None, environment: Optional[str] = None, user: Optional[str] = None, organisation: Optional[str] = None, prompt: Optional[str] = None, retry_count: Optional[int] = None, cache: Optional[str] = None, cache_force_refresh: Optional[str] = None, cache_age: Optional[int] = None) → Dict[str, str][source]¶
__init__()¶
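A minimal sketch of Portkey.Config following the signature above; the API key is a placeholder, and the returned dict is intended to be sent as request headers to the Portkey proxy at Portkey.base:
from langchain.utilities.portkey import Portkey

headers = Portkey.Config(
    api_key="<PORTKEY_API_KEY>",  # placeholder
    trace_id="demo-trace",
)
print(headers)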
Examples using Portkey¶
Log, Trace, and Monitor
Portkey
langchain.utilities.opaqueprompts.sanitize¶
langchain.utilities.opaqueprompts.sanitize(input: Union[str, Dict[str, str]]) → Dict[str, Union[str, Dict[str, str]]][source]¶
Sanitize input string or dict of strings by replacing sensitive data with
placeholders.
It returns the sanitized input string or dict of strings and the secure
context as a dict following the format:
{
“sanitized_input”: <sanitized input string or dict of strings>,
“secure_context”: <secure context>
}
The secure context is a bytes object that is needed to de-sanitize the response
from the LLM.
Parameters
input – Input string or dict of strings.
Returns
Sanitized input string or dict of strings and the secure context
as a dict following the format:
{
”sanitized_input”: <sanitized input string or dict of strings>,
“secure_context”: <secure context>
}
The secure_context needs to be passed to the desanitize function.
Raises
ValueError – If the input is not a string or dict of strings.
ImportError – If the opaqueprompts Python package is not installed.
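A minimal sketch of the sanitize/desanitize round trip described above (assumes the opaqueprompts package is installed and configured; the e-mail address is a placeholder):
from langchain.utilities.opaqueprompts import desanitize, sanitize

result = sanitize("My email is jane@example.com")
print(result["sanitized_input"])           # sensitive data replaced with placeholders
secure_context = result["secure_context"]

# ...send the sanitized input to an LLM, then restore its response:
restored = desanitize(result["sanitized_input"], secure_context)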
langchain.utilities.scenexplain.SceneXplainAPIWrapper¶
class langchain.utilities.scenexplain.SceneXplainAPIWrapper[source]¶
Bases: BaseSettings, BaseModel
Wrapper for SceneXplain API.
In order to set this up, you need an API key for the SceneXplain API.
You can obtain a key by following the steps below.
- Sign up for a free account at https://scenex.jina.ai/.
- Navigate to the API Access page (https://scenex.jina.ai/api) and create a new API key.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param scenex_api_key: str [Required]¶
param scenex_api_url: str = 'https://api.scenex.jina.ai/v1/describe'¶
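A minimal sketch of describing an image (assumes the API key is available, e.g. via a SCENEX_API_KEY environment variable picked up by the BaseSettings machinery; the image URL is a placeholder):
from langchain.utilities.scenexplain import SceneXplainAPIWrapper

scenex = SceneXplainAPIWrapper()
print(scenex.run("https://example.com/some-image.jpg"))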
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(image: str) → str[source]¶
Run SceneXplain image explainer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain.utilities.serpapi.HiddenPrints¶
class langchain.utilities.serpapi.HiddenPrints[source]¶
Context manager to hide prints.
Methods
__init__()
__init__()¶
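A minimal sketch of the context manager in use; anything printed inside the block is suppressed:
from langchain.utilities.serpapi import HiddenPrints

with HiddenPrints():
    print("this is swallowed")
print("this is visible")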
langchain.utilities.arcee.ArceeDocumentSource¶
class langchain.utilities.arcee.ArceeDocumentSource[source]¶
Bases: BaseModel
Source of an Arcee document.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param document: str [Required]¶
param id: str [Required]¶
param name: str [Required]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.arcee.ArceeDocumentSource.html |
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.arcee.ArceeDocumentSource.html |
langchain.utilities.openapi.HTTPVerb¶
class langchain.utilities.openapi.HTTPVerb(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Enumerator of the HTTP verbs.
GET = 'get'¶
PUT = 'put'¶
POST = 'post'¶
DELETE = 'delete'¶
OPTIONS = 'options'¶
HEAD = 'head'¶
PATCH = 'patch'¶
TRACE = 'trace'¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.HTTPVerb.html |
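A minimal sketch of standard Enum behavior with these members:
from langchain.utilities.openapi import HTTPVerb
assert HTTPVerb.GET.value == "get"
assert HTTPVerb("post") is HTTPVerb.POST  # members can be looked up by their value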
langchain.utilities.google_places_api.GooglePlacesAPIWrapper¶
class langchain.utilities.google_places_api.GooglePlacesAPIWrapper[source]¶
Bases: BaseModel
Wrapper around Google Places API.
To use, you should have the googlemaps python package installed,
an API key for the Google Maps platform,
and the environment variable 'GPLACES_API_KEY'
set with your API key, or pass 'gplaces_api_key'
as a named parameter to the constructor.
By default, this will return all the results on the input query. You can use the top_k_results argument to limit the number of results.
Example
from langchain.utilities import GooglePlacesAPIWrapper
gplaceapi = GooglePlacesAPIWrapper()
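A usage sketch continuing the example above (the query string is illustrative):
places = gplaceapi.run("restaurants near Times Square")  # formatted place details as a string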
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param gplaces_api_key: Optional[str] = None¶
param top_k_results: Optional[int] = None¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.google_places_api.GooglePlacesAPIWrapper.html |
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
fetch_place_details(place_id: str) → Optional[str][source]¶
format_place_details(place_details: Dict[str, Any]) → Optional[str][source]¶
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.google_places_api.GooglePlacesAPIWrapper.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(query: str) → str[source]¶
Run Places search and return up to k places that match the query.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.google_places_api.GooglePlacesAPIWrapper.html |
langchain.utilities.requests.RequestsWrapper¶
langchain.utilities.requests.RequestsWrapper¶
alias of TextRequestsWrapper
Examples using RequestsWrapper¶
OpenAPI | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.requests.RequestsWrapper.html |
langchain.utilities.arcee.DALMFilter¶
class langchain.utilities.arcee.DALMFilter[source]¶
Bases: BaseModel
Filters available for a DALM retrieval and generation.
Parameters
field_name – The field to filter on. Can be ‘document’ or ‘name’ to filter
on your document’s raw text or title. Any other field will be presumed
to be a metadata field you included when uploading your context data
filter_type – Currently ‘fuzzy_search’ and ‘strict_search’ are supported.
‘fuzzy_search’ means a fuzzy search on the provided field is performed.
The exact string doesn’t need to exist in the document
for this to find a match.
Very useful for scanning a document for some keyword terms.
‘strict_search’ means that the exact string must appear
in the provided field.
This is NOT an exact equality filter; i.e., a document with content
“the happy dog crossed the street” will match on a strict_search of
“dog” but won’t match on “the dog”.
Python equivalent of return search_string in full_string.
value – The actual value to search for in the context data/metadata
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param field_name: str [Required]¶
param filter_type: langchain.utilities.arcee.DALMFilterType [Required]¶
param value: str [Required]¶
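For illustration, a filter that fuzzy-matches the term "dog" in a document's raw text (a minimal sketch; pydantic is assumed to coerce the filter_type string into the DALMFilterType enum):
from langchain.utilities.arcee import DALMFilter
dog_filter = DALMFilter(
    field_name="document",  # filter on the document's raw text
    filter_type="fuzzy_search",
    value="dog",
)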
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.arcee.DALMFilter.html |
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.arcee.DALMFilter.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.arcee.DALMFilter.html |
langchain.utilities.clickup.load_query¶
langchain.utilities.clickup.load_query(query: str, fault_tolerant: bool = False) → Tuple[Optional[Dict], Optional[str]][source]¶
Attempts to parse a JSON string and return the parsed object.
If parsing fails, returns an error message.
Parameters
query – The JSON string to parse.
Returns
A tuple containing the parsed object or None and an error message or None. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.clickup.load_query.html |
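A minimal usage sketch (the JSON payload is illustrative):
from langchain.utilities.clickup import load_query
parsed, error = load_query('{"folder_id": "123"}', fault_tolerant=True)
# On success, parsed holds the dict and error is None; on failure with
# fault_tolerant=True, parsed is None and error holds the message.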
langchain.utilities.awslambda.LambdaWrapper¶
class langchain.utilities.awslambda.LambdaWrapper[source]¶
Bases: BaseModel
Wrapper for AWS Lambda SDK.
To use, you should have the boto3 package installed
and a lambda function built from the AWS Console or
CLI. Set up your AWS credentials with aws configure.
Example
pip install boto3
aws configure
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param awslambda_tool_description: Optional[str] = None¶
If passing to an agent as a tool, the description
param awslambda_tool_name: Optional[str] = None¶
If passing to an agent as a tool, the tool name
param function_name: Optional[str] = None¶
The name of your lambda function
param lambda_client: Any = None¶
The configured boto3 client
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.awslambda.LambdaWrapper.html |
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.awslambda.LambdaWrapper.html |
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(query: str) → str[source]¶
Invokes the lambda function and returns the result.
Parameters
query – an input passed to the lambda function as the body of a JSON object.
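A minimal usage sketch (the function name and tool metadata are hypothetical):
from langchain.utilities.awslambda import LambdaWrapper
lambda_wrapper = LambdaWrapper(
    function_name="my-function",  # hypothetical Lambda function name
    awslambda_tool_name="email-sender",  # hypothetical tool name
    awslambda_tool_description="Sends an email with the provided body",  # hypothetical
)
result = lambda_wrapper.run("Hello from LangChain!")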
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.awslambda.LambdaWrapper.html |
langchain.utilities.arxiv.ArxivAPIWrapper¶
class langchain.utilities.arxiv.ArxivAPIWrapper[source]¶
Bases: BaseModel
Wrapper around ArxivAPI.
To use, you should have the arxiv python package installed.
https://lukasschwab.me/arxiv.py/index.html
This wrapper will use the Arxiv API to conduct searches and
fetch document summaries. By default, it will return the document summaries
of the top-k results.
If the query is in the form of arxiv identifier
(see https://info.arxiv.org/help/find/index.html), it will return the paper
corresponding to the arxiv identifier.
It limits the Document content by doc_content_chars_max.
Set doc_content_chars_max=None if you don’t want to limit the content size.
top_k_results¶
number of the top-scored document used for the arxiv tool
ARXIV_MAX_QUERY_LENGTH¶
the cut limit on the query used for the arxiv tool.
load_max_docs¶
a limit to the number of loaded documents
load_all_available_meta¶
if True: the metadata of the loaded Documents contains all available
meta info (see https://lukasschwab.me/arxiv.py/index.html#Result),
if False: the metadata contains only the published date, title,
authors and summary.
doc_content_chars_max¶
an optional cut limit for the length of a document’s
content
Example
from langchain.utilities.arxiv import ArxivAPIWrapper
arxiv = ArxivAPIWrapper(
top_k_results = 3,
ARXIV_MAX_QUERY_LENGTH = 300,
load_max_docs = 3,
load_all_available_meta = False,
doc_content_chars_max = 40000
)
arxiv.run("tree of thought llm")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ARXIV_MAX_QUERY_LENGTH: int = 300¶
param arxiv_exceptions: Any = None¶
param doc_content_chars_max: Optional[int] = 4000¶
param load_all_available_meta: bool = False¶
param load_max_docs: int = 100¶
param top_k_results: int = 3¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.arxiv.ArxivAPIWrapper.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_summaries_as_docs(query: str) → List[Document][source]¶
Performs an arxiv search and returns a list of
documents, with summaries as the content.
If an error occurs or no documents are found, error text
is returned instead. Wrapper for
https://lukasschwab.me/arxiv.py/index.html#Search
Parameters
query – a plaintext search query
is_arxiv_identifier(query: str) → bool[source]¶
Check if a query is an arxiv identifier.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.arxiv.ArxivAPIWrapper.html |
load(query: str) → List[Document][source]¶
Run Arxiv search and get the article texts plus the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
Returns: a list of documents with the document.page_content in text format
Performs an arxiv search, downloads the top k results as PDFs, loads
them as Documents, and returns them in a List.
Parameters
query – a plaintext search query
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(query: str) → str[source]¶
Performs an arxiv search and returns a single string
with the publish date, title, authors, and summary
for each article, separated by two newlines.
If an error occurs or no documents are found, error text
is returned instead. Wrapper for
https://lukasschwab.me/arxiv.py/index.html#Search
Parameters
query – a plaintext search query
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using ArxivAPIWrapper¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.arxiv.ArxivAPIWrapper.html |
ArXiv | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.arxiv.ArxivAPIWrapper.html |
langchain.utilities.google_scholar.GoogleScholarAPIWrapper¶
class langchain.utilities.google_scholar.GoogleScholarAPIWrapper[source]¶
Bases: BaseModel
Wrapper for Google Scholar API.
You can create a SerpApi key by signing up at: https://serpapi.com/users/sign_up.
The wrapper uses the serpapi python package:
https://serpapi.com/integrations/python#search-google-scholar
To use, you should have the environment variable SERP_API_KEY
set with your API key, or pass serp_api_key as a named parameter
to the constructor.
top_k_results¶
number of results to return from google-scholar query search.
By default it returns the top 10 results.
hl¶
attribute defines the language to use for the Google Scholar search.
It’s a two-letter language code.
(e.g., en for English, es for Spanish, or fr for French). Head to the
Google languages page for a full list of supported Google languages:
https://serpapi.com/google-languages
lr¶
attribute defines one or multiple languages to limit the search to.
It uses lang_{two-letter language code} to specify languages
and | as a delimiter. (e.g., lang_fr|lang_de will only search French
and German pages). Head to the Google lr languages for a full
list of supported languages: https://serpapi.com/google-lr-languages
Example:
from langchain.utilities import GoogleScholarAPIWrapper
google_scholar = GoogleScholarAPIWrapper()
google_scholar.run("langchain")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param hl: str = 'en'¶
param lr: str = 'lang_en'¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.google_scholar.GoogleScholarAPIWrapper.html |
param serp_api_key: Optional[str] = None¶
param top_k_results: int = 10¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.google_scholar.GoogleScholarAPIWrapper.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(query: str) → str[source]¶
Run query through Google Scholar and parse the result.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.google_scholar.GoogleScholarAPIWrapper.html |
langchain.utilities.clickup.fetch_team_id¶
langchain.utilities.clickup.fetch_team_id(access_token: str) → Optional[int][source]¶
Fetch the team id. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.clickup.fetch_team_id.html |
langchain.utilities.clickup.ClickupAPIWrapper¶
class langchain.utilities.clickup.ClickupAPIWrapper[source]¶
Bases: BaseModel
Wrapper for Clickup API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: Optional[str] = None¶
param folder_id: Optional[str] = None¶
param list_id: Optional[str] = None¶
param space_id: Optional[str] = None¶
param team_id: Optional[str] = None¶
attempt_parse_teams(input_dict: dict) → Dict[str, List[dict]][source]¶
Parse appropriate content from the list of teams.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_folder(query: str) → Dict[source]¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.clickup.ClickupAPIWrapper.html |
Creates a new folder.
create_list(query: str) → Dict[source]¶
Creates a new list.
create_task(query: str) → Dict[source]¶
Creates a new task.
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_access_code_url(oauth_client_id: str, redirect_uri: str = 'https://google.com') → str[source]¶
Get the URL to get an access code.
classmethod get_access_token(oauth_client_id: str, oauth_client_secret: str, code: str) → Optional[str][source]¶
Get the access token.
get_authorized_teams() → Dict[Any, Any][source]¶
Get all teams for the user.
get_default_params() → Dict[source]¶
get_folders() → Dict[source]¶
Get all the folders for the team.
get_headers() → Mapping[str, Union[str, bytes]][source]¶
Get the headers for the request.
get_lists() → Dict[source]¶
Get all available lists.
get_spaces() → Dict[source]¶
Get all spaces for the team.
get_task(query: str, fault_tolerant: bool = True) → Dict[source]¶
Retrieve a specific task.
get_task_attribute(query: str) → Dict[source]¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.clickup.ClickupAPIWrapper.html |
Retrieve an attribute of a specified task.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
query_tasks(query: str) → Dict[source]¶
Query tasks that match certain fields
run(mode: str, query: str) → str[source]¶
Run the API.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.clickup.ClickupAPIWrapper.html |
Try to update ForwardRefs on fields based on this Model, globalns and localns.
update_task(query: str) → Dict[source]¶
Update an attribute of a specified task.
update_task_assignees(query: str) → Dict[source]¶
Add or remove assignees of a specified task.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.clickup.ClickupAPIWrapper.html |
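A minimal usage sketch (the OAuth access token is hypothetical; the wrapper is assumed to resolve team/space IDs from the token at construction time):
from langchain.utilities.clickup import ClickupAPIWrapper
clickup = ClickupAPIWrapper(access_token="my-oauth-token")  # hypothetical token
teams = clickup.get_authorized_teams()  # all teams for the authenticated user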
langchain.utilities.sql_database.SQLDatabase¶
class langchain.utilities.sql_database.SQLDatabase(engine: Engine, schema: Optional[str] = None, metadata: Optional[MetaData] = None, ignore_tables: Optional[List[str]] = None, include_tables: Optional[List[str]] = None, sample_rows_in_table_info: int = 3, indexes_in_table_info: bool = False, custom_table_info: Optional[dict] = None, view_support: bool = False, max_string_length: int = 300)[source]¶
SQLAlchemy wrapper around a database.
Create engine from database URI.
Attributes
dialect
Return string representation of dialect to use.
table_info
Information about all tables in the database.
Methods
__init__(engine[, schema, metadata, ...])
Create engine from database URI.
from_cnosdb([url, user, password, tenant, ...])
Class method to create an SQLDatabase instance from a CnosDB connection.
from_databricks(catalog, schema[, host, ...])
Class method to create an SQLDatabase instance from a Databricks connection.
from_uri(database_uri[, engine_args])
Construct a SQLAlchemy engine from URI.
get_table_info([table_names])
Get information about specified tables.
get_table_info_no_throw([table_names])
Get information about specified tables.
get_table_names()
Get names of tables available.
get_usable_table_names()
Get names of tables available.
run(command[, fetch])
Execute a SQL command and return a string representing the results.
run_no_throw(command[, fetch])
Execute a SQL command and return a string representing the results. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.sql_database.SQLDatabase.html |
__init__(engine: Engine, schema: Optional[str] = None, metadata: Optional[MetaData] = None, ignore_tables: Optional[List[str]] = None, include_tables: Optional[List[str]] = None, sample_rows_in_table_info: int = 3, indexes_in_table_info: bool = False, custom_table_info: Optional[dict] = None, view_support: bool = False, max_string_length: int = 300)[source]¶
Create engine from database URI.
classmethod from_cnosdb(url: str = '127.0.0.1:8902', user: str = 'root', password: str = '', tenant: str = 'cnosdb', database: str = 'public') → SQLDatabase[source]¶
Class method to create an SQLDatabase instance from a CnosDB connection.
This method requires the ‘cnos-connector’ package. If not installed, it
can be added using pip install cnos-connector.
Parameters
url (str) – The HTTP connection host name and port number of the CnosDB
service, excluding “http://” or “https://”, with a default value
of “127.0.0.1:8902”.
user (str) – The username used to connect to the CnosDB service, with a
default value of “root”.
password (str) – The password of the user connecting to the CnosDB service,
with a default value of “”.
tenant (str) – The name of the tenant used to connect to the CnosDB service,
with a default value of “cnosdb”.
database (str) – The name of the database in the CnosDB tenant.
Returns
An instance of SQLDatabase configured with the provided
CnosDB connection details.
Return type
SQLDatabase | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.sql_database.SQLDatabase.html |
classmethod from_databricks(catalog: str, schema: str, host: Optional[str] = None, api_token: Optional[str] = None, warehouse_id: Optional[str] = None, cluster_id: Optional[str] = None, engine_args: Optional[dict] = None, **kwargs: Any) → SQLDatabase[source]¶
Class method to create an SQLDatabase instance from a Databricks connection.
This method requires the ‘databricks-sql-connector’ package. If not installed,
it can be added using pip install databricks-sql-connector.
Parameters
catalog (str) – The catalog name in the Databricks database.
schema (str) – The schema name in the catalog.
host (Optional[str]) – The Databricks workspace hostname, excluding
‘https://’ part. If not provided, it attempts to fetch from the
environment variable ‘DATABRICKS_HOST’. If still unavailable and if
running in a Databricks notebook, it defaults to the current workspace
hostname. Defaults to None.
api_token (Optional[str]) – The Databricks personal access token for
accessing the Databricks SQL warehouse or the cluster. If not provided,
it attempts to fetch from ‘DATABRICKS_TOKEN’. If still unavailable
and running in a Databricks notebook, a temporary token for the current
user is generated. Defaults to None.
warehouse_id (Optional[str]) – The warehouse ID in the Databricks SQL. If
provided, the method configures the connection to use this warehouse.
Cannot be used with ‘cluster_id’. Defaults to None.
cluster_id (Optional[str]) – The cluster ID in the Databricks Runtime. If
provided, the method configures the connection to use this cluster. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.sql_database.SQLDatabase.html |
Cannot be used with ‘warehouse_id’. If running in a Databricks notebook
and both ‘warehouse_id’ and ‘cluster_id’ are None, it uses the ID of the
cluster the notebook is attached to. Defaults to None.
engine_args (Optional[dict]) – The arguments to be used when connecting
Databricks. Defaults to None.
**kwargs (Any) – Additional keyword arguments for the from_uri method.
Returns
An instance of SQLDatabase configured with the providedDatabricks connection details.
Return type
SQLDatabase
Raises
ValueError – If ‘databricks-sql-connector’ is not found, or if both
‘warehouse_id’ and ‘cluster_id’ are provided, or if neither
‘warehouse_id’ nor ‘cluster_id’ are provided and it’s not executing
inside a Databricks notebook.
classmethod from_uri(database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any) → SQLDatabase[source]¶
Construct a SQLAlchemy engine from URI.
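A minimal usage sketch (the SQLite file path is hypothetical):
from langchain.utilities.sql_database import SQLDatabase
db = SQLDatabase.from_uri("sqlite:///example.db")  # hypothetical database file
print(db.dialect)  # e.g. "sqlite"
print(db.run("SELECT 1"))  # string representation of the result rows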
get_table_info(table_names: Optional[List[str]] = None) → str[source]¶
Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If sample_rows_in_table_info, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
get_table_info_no_throw(table_names: Optional[List[str]] = None) → str[source]¶
Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498) | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.sql_database.SQLDatabase.html |
If sample_rows_in_table_info, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
get_table_names() → Iterable[str][source]¶
Get names of tables available.
get_usable_table_names() → Iterable[str][source]¶
Get names of tables available.
run(command: str, fetch: Union[Literal['all'], Literal['one']] = 'all') → str[source]¶
Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
run_no_throw(command: str, fetch: Union[Literal['all'], Literal['one']] = 'all') → str[source]¶
Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
Examples using SQLDatabase¶
Rebuff
SQL Database
Multiple Retrieval Sources
Set env var OPENAI_API_KEY or load from a .env file
Vector SQL Retriever with MyScale
SQL
sql_db.md | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.sql_database.SQLDatabase.html |
langchain.utilities.wikipedia.WikipediaAPIWrapper¶
class langchain.utilities.wikipedia.WikipediaAPIWrapper[source]¶
Bases: BaseModel
Wrapper around WikipediaAPI.
To use, you should have the wikipedia python package installed.
This wrapper will use the Wikipedia API to conduct searches and
fetch page summaries. By default, it will return the page summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
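A minimal usage sketch (the query is illustrative):
from langchain.utilities.wikipedia import WikipediaAPIWrapper
wikipedia = WikipediaAPIWrapper(top_k_results=2)
print(wikipedia.run("Alan Turing"))  # page summaries of the top results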
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param doc_content_chars_max: int = 4000¶
param lang: str = 'en'¶
param load_all_available_meta: bool = False¶
param top_k_results: int = 3¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.wikipedia.WikipediaAPIWrapper.html |
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load(query: str) → List[Document][source]¶
Run Wikipedia search and get the article text plus the meta information.
Returns: a list of documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.wikipedia.WikipediaAPIWrapper.html |
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(query: str) → str[source]¶
Run Wikipedia search and get page summaries.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using WikipediaAPIWrapper¶
Wikipedia
Zep Memory | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.wikipedia.WikipediaAPIWrapper.html |
langchain.utilities.clickup.parse_dict_through_component¶
langchain.utilities.clickup.parse_dict_through_component(data: dict, component: Type[Component], fault_tolerant: bool = False) → Dict[source]¶
Parse a dictionary by creating
a component and then turning it back into a dictionary.
This helps with two things:
1. Extract and format data from a dictionary according to schema
2. Provide a central place to do this in a fault-tolerant way | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.clickup.parse_dict_through_component.html |
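A minimal usage sketch (assuming Team is one of the Component classes in this module; the payload is illustrative):
from langchain.utilities.clickup import Team, parse_dict_through_component
raw_team = {"id": 1, "name": "engineering", "members": []}  # illustrative payload
parsed = parse_dict_through_component(raw_team, Team, fault_tolerant=True)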
langchain.utilities.vertexai.get_client_info¶
langchain.utilities.vertexai.get_client_info(module: Optional[str] = None) → ClientInfo[source]¶
Returns a custom user agent header.
Parameters
module (Optional[str]) – Optional. The module for a custom user agent header.
Returns
google.api_core.gapic_v1.client_info.ClientInfo | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.vertexai.get_client_info.html |
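A minimal usage sketch:
from langchain.utilities.vertexai import get_client_info
client_info = get_client_info(module="vertexai")  # ClientInfo carrying a custom user agent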
langchain.utilities.golden_query.GoldenQueryAPIWrapper¶
class langchain.utilities.golden_query.GoldenQueryAPIWrapper[source]¶
Bases: BaseModel
Wrapper for Golden.
Docs for using:
Go to https://golden.com and sign up for an account
Get your API Key from https://golden.com/settings/api
Save your API Key into GOLDEN_API_KEY env variable
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param golden_api_key: Optional[str] = None¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.golden_query.GoldenQueryAPIWrapper.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(query: str) → str[source]¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.golden_query.GoldenQueryAPIWrapper.html |
Run query through Golden Query API and return the JSON raw result.
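A minimal usage sketch (assumes GOLDEN_API_KEY is set in the environment; the query is illustrative):
from langchain.utilities.golden_query import GoldenQueryAPIWrapper
golden = GoldenQueryAPIWrapper()
json_result = golden.run("companies in nanotech")  # raw JSON result as a string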
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using GoldenQueryAPIWrapper¶
Golden Query
Golden | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.golden_query.GoldenQueryAPIWrapper.html |
langchain.utilities.clickup.fetch_space_id¶
langchain.utilities.clickup.fetch_space_id(team_id: int, access_token: str) → Optional[int][source]¶
Fetch the space id. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.clickup.fetch_space_id.html |
langchain.utilities.twilio.TwilioAPIWrapper¶
class langchain.utilities.twilio.TwilioAPIWrapper[source]¶
Bases: BaseModel
Messaging Client using Twilio.
To use, you should have the twilio python package installed,
and the environment variables TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, and
TWILIO_FROM_NUMBER, or pass account_sid, auth_token, and from_number as
named parameters to the constructor.
Example
from langchain.utilities.twilio import TwilioAPIWrapper
twilio = TwilioAPIWrapper(
account_sid="ACxxx",
auth_token="xxx",
from_number="+10123456789"
)
twilio.run('test', '+12484345508')
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param account_sid: Optional[str] = None¶
Twilio account string identifier.
param auth_token: Optional[str] = None¶
Twilio auth token.
param from_number: Optional[str] = None¶
A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164)
format, an
[alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id),
or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses)
that is enabled for the type of message you want to send. Phone numbers or
[short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from
Twilio also work here. You cannot, for example, spoof messages from a private
cell phone number. If you are using messaging_service_sid, this parameter
must be empty. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.twilio.TwilioAPIWrapper.html |
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.twilio.TwilioAPIWrapper.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(body: str, to: str) → str[source]¶
Run body through Twilio and respond with message sid.
Parameters
body – The text of the message you want to send. Can be up to 1,600
characters in length.
to – The destination phone number in
[E.164](https://www.twilio.com/docs/glossary/what-e164) format for
SMS/MMS or
[Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)
for other 3rd-party channels. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.twilio.TwilioAPIWrapper.html |
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using TwilioAPIWrapper¶
Twilio | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.twilio.TwilioAPIWrapper.html |
langchain.utilities.powerbi.PowerBIDataset¶
class langchain.utilities.powerbi.PowerBIDataset[source]¶
Bases: BaseModel
Create PowerBI engine from dataset ID and credential or token.
Use either the credential or a supplied token to authenticate.
If both are supplied the credential is used to generate a token.
The impersonated_user_name is the UPN of a user to be impersonated.
If the model is not RLS enabled, this will be ignored.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aiosession: Optional[aiohttp.ClientSession] = None¶
param credential: Optional[TokenCredential] = None¶
param dataset_id: str [Required]¶
param group_id: Optional[str] = None¶
param impersonated_user_name: Optional[str] = None¶
param sample_rows_in_table_info: int = 1¶
Constraints
exclusiveMinimum = 0
maximum = 10
param schemas: Dict[str, str] [Optional]¶
param table_names: List[str] [Required]¶
param token: Optional[str] = None¶
async aget_table_info(table_names: Optional[Union[List[str], str]] = None) → str[source]¶
Get information about specified tables.
async arun(command: str) → Any[source]¶
Execute a DAX command and return the result asynchronously.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.powerbi.PowerBIDataset.html |
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_schemas() → str[source]¶
Get the available schemas.
get_table_info(table_names: Optional[Union[List[str], str]] = None) → str[source]¶
Get information about specified tables.
get_table_names() → Iterable[str][source]¶
Get names of tables available. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.powerbi.PowerBIDataset.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(command: str) → Any[source]¶
Execute a DAX command and return JSON representing the results.
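Example (an illustrative sketch: the dataset ID, table name, and token below are placeholders; run() expects a DAX query string):
from langchain.utilities.powerbi import PowerBIDataset

dataset = PowerBIDataset(
    dataset_id="<dataset-id>",
    table_names=["Sales"],
    token="<bearer-token>",
)
# EVALUATE is the standard DAX entry point; TOPN limits the rows returned.
result = dataset.run("EVALUATE TOPN(3, Sales)")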
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.powerbi.PowerBIDataset.html |
property headers: Dict[str, str]¶
Get the token.
property request_url: str¶
Get the request url.
property table_info: str¶
Information about all tables in the database.
Examples using PowerBIDataset¶
PowerBI Dataset | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.powerbi.PowerBIDataset.html |
langchain.utilities.redis.TokenEscaper¶
class langchain.utilities.redis.TokenEscaper(escape_chars_re: Optional[Pattern] = None)[source]¶
Escape punctuation within an input string.
Attributes
DEFAULT_ESCAPED_CHARS
Methods
__init__([escape_chars_re])
escape(value)
__init__(escape_chars_re: Optional[Pattern] = None)[source]¶
escape(value: str) → str[source]¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.redis.TokenEscaper.html |
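Example (a minimal sketch; the exact set of characters escaped depends on DEFAULT_ESCAPED_CHARS):
from langchain.utilities.redis import TokenEscaper

escaper = TokenEscaper()
# Punctuation that is significant in Redis search syntax is
# backslash-escaped, e.g. 'user:123' becomes 'user\:123'.
safe = escaper.escape("user:123")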
langchain.utilities.anthropic.get_token_ids_anthropic¶
langchain.utilities.anthropic.get_token_ids_anthropic(text: str) → List[int][source]¶
Get the token ids for a string of text. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.anthropic.get_token_ids_anthropic.html |
langchain.utilities.anthropic.get_num_tokens_anthropic¶
langchain.utilities.anthropic.get_num_tokens_anthropic(text: str) → int[source]¶
Get the number of tokens in a string of text. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.anthropic.get_num_tokens_anthropic.html |
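Example (a sketch of both helpers; they rely on Anthropic's tokenizer, so the anthropic package must be installed and exact counts may vary by version):
from langchain.utilities.anthropic import (
    get_num_tokens_anthropic,
    get_token_ids_anthropic,
)

text = "Tokenization differs between model families."
num = get_num_tokens_anthropic(text)  # an int, version-dependent
ids = get_token_ids_anthropic(text)   # the token ids; len(ids) == num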
langchain.utilities.clickup.fetch_data¶
langchain.utilities.clickup.fetch_data(url: str, access_token: str, query: Optional[dict] = None) → dict[source]¶
Fetch data from a URL. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.clickup.fetch_data.html |
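Example (a hedged sketch: the URL is ClickUp's public teams endpoint and the token is a placeholder; fetch_data is assumed to issue an authorized GET and return the parsed JSON body):
from langchain.utilities.clickup import fetch_data

teams = fetch_data(
    "https://api.clickup.com/api/v2/team",
    access_token="<your-clickup-token>",
)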
langchain.utilities.serpapi.SerpAPIWrapper¶
class langchain.utilities.serpapi.SerpAPIWrapper[source]¶
Bases: BaseModel
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain.utilities import SerpAPIWrapper
serpapi = SerpAPIWrapper()
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aiosession: Optional[aiohttp.client.ClientSession] = None¶
param params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}¶
param serpapi_api_key: Optional[str] = None¶
async aresults(query: str) → dict[source]¶
Use aiohttp to run query through SerpAPI and return the results async.
async arun(query: str, **kwargs: Any) → str[source]¶
Run query through SerpAPI and parse result async.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.serpapi.SerpAPIWrapper.html |
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_params(query: str) → Dict[str, str][source]¶
Get parameters for SerpAPI. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.serpapi.SerpAPIWrapper.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
results(query: str) → dict[source]¶
Run query through SerpAPI and return the raw result.
run(query: str, **kwargs: Any) → str[source]¶
Run query through SerpAPI and parse result.
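Example (continuing the constructor example above; assumes SERPAPI_API_KEY is set and the query is arbitrary):
from langchain.utilities import SerpAPIWrapper

serpapi = SerpAPIWrapper()  # reads SERPAPI_API_KEY from the environment
answer = serpapi.run("current weather in Berlin")   # parsed answer string
raw = serpapi.results("current weather in Berlin")  # raw JSON dict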
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns. | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.serpapi.SerpAPIWrapper.html |
classmethod validate(value: Any) → Model¶
Examples using SerpAPIWrapper¶
SerpAPI
Bittensor
AutoGPT | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.serpapi.SerpAPIWrapper.html |
langchain.utilities.arcee.DALMFilterType¶
class langchain.utilities.arcee.DALMFilterType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Filter types available for a DALM retrieval as enumerator.
fuzzy_search = 'fuzzy_search'¶
strict_search = 'strict_search'¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.arcee.DALMFilterType.html |
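Example (a tiny usage sketch of the enum values above):
from langchain.utilities.arcee import DALMFilterType

filter_type = DALMFilterType("fuzzy_search")
assert filter_type is DALMFilterType.fuzzy_search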
langchain.utilities.clickup.Space¶
class langchain.utilities.clickup.Space(id: int, name: str, private: bool, enabled_features: Dict[str, Any])[source]¶
Component class for a space.
Attributes
id
name
private
enabled_features
Methods
__init__(id, name, private, enabled_features)
from_data(data)
__init__(id: int, name: str, private: bool, enabled_features: Dict[str, Any]) → None¶
classmethod from_data(data: Dict[str, Any]) → Space[source]¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.clickup.Space.html |
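Example (an illustrative sketch built directly from __init__; from_data instead maps a raw ClickUp API response, whose exact payload shape is not shown here):
from langchain.utilities.clickup import Space

space = Space(
    id=123,
    name="Engineering",
    private=False,
    enabled_features={"due_dates": True},
)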
langchain.utilities.jira.JiraAPIWrapper¶
class langchain.utilities.jira.JiraAPIWrapper[source]¶
Bases: BaseModel
Wrapper for Jira API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param confluence: Any = None¶
param jira_api_token: Optional[str] = None¶
param jira_instance_url: Optional[str] = None¶
param jira_username: Optional[str] = None¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
issue_create(query: str) → str[source]¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
other(query: str) → str[source]¶
page_create(query: str) → str[source]¶
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
parse_issues(issues: Dict) → List[dict][source]¶
classmethod parse_obj(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html |
parse_projects(projects: List[dict]) → List[dict][source]¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
project() → str[source]¶
run(mode: str, query: str) → str[source]¶
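Example (a hedged sketch: assumes JIRA_API_TOKEN, JIRA_USERNAME, and JIRA_INSTANCE_URL are set, and that the mode string "jql" routes to search() as in the Jira toolkit; the JQL query is arbitrary):
from langchain.utilities.jira import JiraAPIWrapper

jira = JiraAPIWrapper()
issues = jira.run("jql", "project = TEST AND status = 'In Progress'")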
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
search(query: str) → str[source]¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using JiraAPIWrapper¶
Jira | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html |
langchain.utilities.apify.ApifyWrapper¶
class langchain.utilities.apify.ApifyWrapper[source]¶
Bases: BaseModel
Wrapper around Apify.
To use, you should have the apify-client python package installed,
and the environment variable APIFY_API_TOKEN set with your API key, or pass
apify_api_token as a named parameter to the constructor.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param apify_client: Any = None¶
param apify_client_async: Any = None¶
async acall_actor(actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) → ApifyDatasetLoader[source]¶
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) – The ID or name of the Actor on the Apify platform.
run_input (Dict) – The input object of the Actor that you’re trying to run.
dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class.
build (str, optional) – Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) – Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) – Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run’s default dataset.
Return type
ApifyDatasetLoader | lang/api.python.langchain.com/en/latest/utilities/langchain.utilities.apify.ApifyWrapper.html |
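Example (a hedged async sketch: the actor id and run_input mirror Apify's public website-content-crawler, APIFY_API_TOKEN must be set, and the mapping function shows the expected dataset-item-to-Document conversion):
import asyncio

from langchain.schema import Document
from langchain.utilities import ApifyWrapper

async def main() -> None:
    apify = ApifyWrapper()
    loader = await apify.acall_actor(
        actor_id="apify/website-content-crawler",
        run_input={"startUrls": [{"url": "https://python.langchain.com/"}]},
        dataset_mapping_function=lambda item: Document(
            page_content=item.get("text") or "",
            metadata={"source": item.get("url", "")},
        ),
    )
    docs = loader.load()  # records from the Actor run's default dataset
    print(len(docs))

asyncio.run(main())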