Dataset Preview
The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'Unnamed: 0'})

This happened while the csv dataset builder was generating data using

hf://datasets/towardsai-buster/ai-tutor-rag-system-data/advanced_rag_course.csv (at revision dd024716ca6aa7b94ea7150f5a3a9501f95f90ff)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              Unnamed: 0: int64
              title: string
              content: string
              source: string
              url: string
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 820
              to
              {'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'content': Value(dtype='string', id=None), 'source': Value(dtype='string', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              


Columns: title (string), url (string), content (string), source (string)
Deep Lake as a Vector Store for LLM Applications
https://docs.activeloop.ai/#deep-lake-as-a-vector-store-for-llm-applications
Store and search embeddings and their metadata, including text, jsons, images, audio, video, and more. Save the data locally, in your cloud, or on Deep Lake storage. Build LLM apps using our integrations with LangChain and LlamaIndex. Run computations locally or on our Managed Tensor Database.
activeloop
Deep Lake as a Data Lake For Deep Learning
https://docs.activeloop.ai/#deep-lake-as-a-data-lake-for-deep-learning
Store images, audio, video, text, and their metadata (i.e., annotations) in a data format optimized for deep learning. Save the data locally, in your cloud, or on Activeloop storage. Rapidly train PyTorch and TensorFlow models while streaming data with no boilerplate code. Run version control, dataset queries, and distributed workloads using a simple Python API.
Deep Lake Architecture for Inference and Model Development Applications.
activeloop
To start using Deep Lake ASAP, check out our Vector Store Quickstart, Deep Learning Quickstart, Getting Started Guides, Tutorials, and Playbooks.
https://docs.activeloop.ai/
Please check out Deep Lake's GitHub repository and give us a ⭐ if you like the project. Join our Slack Community if you need help or have suggestions for improving documentation!
activeloop
Deep Lake Docs Overview
https://docs.activeloop.ai/#deep-lake-docs-overview
Vector Store Quickstart, Deep Learning Quickstart, Storage & Credentials, Getting Started, Tutorials (w/ Colab), Playbooks, Dataset Visualization, Best Practices, Low-Level API Summary
activeloop
Deep Lake API Reference
https://docs.deeplake.ai/en/latest/#deep-lake-api-reference
Deep Lake is an open-source database for AI.
Getting Started: Installation, Key Concepts
Datasets: Creating Datasets, Loading Datasets, Deleting and Renaming Datasets, Copying Datasets, Dataset Operations, Dataset Visualization, Dataset Credentials, Dataset Properties, Dataset Version Control, Dataset Views
Vector Store: Creating a Deep Lake Vector Store, Vector Store Operations, Vector Store Properties
VectorStore.DeepMemory: Creating a Deep Memory, Deep Memory Operations, Deep Memory Properties
Tensors: Creating Tensors, Deleting and Renaming Tensors, Adding and deleting samples, Retrieving samples, Tensor Properties, Info, Video features
Htypes: Image Htype, Video Htype, Audio Htype, Class Label Htype, Bounding Box Htype, 3D Bounding Box Htype, Intrinsics Htype, Segmentation Mask Htype, Binary Mask Htype, COCO Keypoints Htype, Point Htype, Polygon Htype, Nifti Htype, Point Cloud Htype, Mesh Htype, Embedding Htype, Sequence htype, Link htype
Compressions: Sample Compression, Chunk Compression
PyTorch and Tensorflow Support
Utility Functions: General Functions, Making Deep Lake Samples, Parallelism
Integrations: Weights and Biases (Logging Dataset Creation, Logging Dataset Read), MMDetection
High-Performance Features: Dataloader, Sampler, Tensor Query Language, Random Split, Deep Memory
API Reference: deeplake, deeplake.VectorStore, deeplake.core, deeplake.core.dataset, deeplake.core.tensor, deeplake.api, deeplake.auto, deeplake.util, deeplake.client.log, deeplake.core.transform, deeplake.core.vectorstore.deep_memory, deeplake.random.seed
activeloop
Indices and tables
https://docs.deeplake.ai/en/latest/#indices-and-tables
Index, Module Index, Search Page
activeloop
How to Get Started with Vector Search in Deep Lake in Under 5 Minutes
https://docs.activeloop.ai/quickstart#how-to-get-started-with-vector-search-in-deep-lake-in-under-5-minutes
If you prefer to use Deep Lake with LangChain, check out this tutorial. This quickstart focuses on vector storage and search rather than end-to-end LLM apps, and it offers more customization and search options compared to the LangChain integration.
activeloop
Installing Deep Lake
https://docs.activeloop.ai/quickstart#installing-deep-lake
Deep Lake can be installed using pip. By default, Deep Lake does not install dependencies for the compute engine, google-cloud, and other features. Details on all installation options are available here. This quickstart also requires OpenAI.

!pip3 install deeplake
!pip3 install openai
activeloop
Performing Vector Search
https://docs.activeloop.ai/quickstart#performing-vector-search
Deep Lake offers highly-flexible vector search and hybrid search options, discussed in detail in these tutorials. In this quickstart, we show a simple example of vector search using default options, which performs cosine similarity search in Python on the client.

prompt = 'What are the first programs he tried writing?'

search_results = vector_store.search(embedding_data=prompt, embedding_function=embedding_function)

search_results is a dictionary with keys for the text, score, id, and metadata, with data ordered by score. If we examine the first returned text using search_results['text'][0], it appears to contain the answer to the prompt:

What I Worked On

February 2021

Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep.

The first programs I tried writing were on the IBM 1401 that our school district used for what was then called 'data processing.' This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights.

The language we used was an early version of Fortran. You had to type programs on punch cards, then stack them in
activeloop
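As a quick check of the result's shape, here is a minimal sketch (not from the original docs) that prints the top three matches. It assumes the local store and the embedding_function defined in the "Creating Your First Vector Store" row below.

from deeplake.core.vectorstore import VectorStore

# Loads the store built in the 'Creating Your First Vector Store' row;
# embedding_function is assumed to be the one defined there.
vector_store = VectorStore(path = 'pg_essay_deeplake')

results = vector_store.search(embedding_data = 'What are the first programs he tried writing?',
                              embedding_function = embedding_function)

# 'text', 'score', 'id', and 'metadata' are the documented result keys
for text, score in zip(results['text'][:3], results['score'][:3]):
    print(round(score, 3), text[:80])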
Creating Your First Vector Store
https://docs.activeloop.ai/quickstart#creating-your-first-vector-store
Let's embed and store one of Paul Graham's essays in a Deep Lake Vector Store stored locally. First, we download the data. Next, let's import the required modules and set the OpenAI environment variable for embeddings:

from deeplake.core.vectorstore import VectorStore
import openai
import os

os.environ['OPENAI_API_KEY'] = <OPENAI_API_KEY>

Next, let's specify paths for the source text and the Deep Lake Vector Store. Though we store the Vector Store locally, Deep Lake Vector Stores can also be created in memory, in the Deep Lake Managed Tensor Database, or in your cloud. Further details on storage options are available here. Let's also read and chunk the essay text based on a constant number of characters.

source_text = 'paul_graham_essay.txt'
vector_store_path = 'pg_essay_deeplake'

with open(source_text, 'r') as f:
    text = f.read()

CHUNK_SIZE = 1000
chunked_text = [text[i:i+CHUNK_SIZE] for i in range(0, len(text), CHUNK_SIZE)]

Next, let's define an embedding function using OpenAI. It must work for both a single string and a list of strings, so that it can be used to embed a prompt as well as a batch of texts.

def embedding_function(texts, model='text-embedding-ada-002'):
    if isinstance(texts, str):
        texts = [texts]
    texts = [t.replace('\n', ' ') for t in texts]
    return [data.embedding for data in openai.embeddings.create(input=texts, model=model).data]

Finally, let's create the Deep Lake Vector Store and populate it with data. We use a default tensor configuration, which creates tensors for text (str), metadata (json), id (str, auto-populated), and embedding (float32). Learn more about tensor customizability here.

vector_store = VectorStore(
    path = vector_store_path,
)

vector_store.add(text = chunked_text,
                 embedding_function = embedding_function,
                 embedding_data = chunked_text,
                 metadata = [{'source': source_text}]*len(chunked_text))

The path parameter is bi-directional:
- When a new path is specified, a new Vector Store is created
- When an existing path is specified, the existing Vector Store is loaded

The Vector Store's data structure can be summarized using vector_store.summary(), which shows 4 tensors with 76 samples:

tensor      htype      shape       dtype    compression
-------     -------    -------     -------  -------
embedding   embedding  (76, 1536)  float32  None
id          text       (76, 1)     str      None
metadata    json       (76, 1)     str      None
text        text       (76, 1)     str      None

To create a vector store using pre-computed embeddings instead of embedding_data and embedding_function, you may run:

# vector_store.add(text = chunked_text,
#                  embedding = <list_of_embeddings>,
#                  metadata = [{'source': source_text}]*len(chunked_text))
activeloop
Authentication
https://docs.activeloop.ai/quickstart#authentication
To use Deep Lake features that require authentication (Deep Lake storage, Tensor Database storage, connecting your cloud dataset to the Deep Lake UI, etc.) you should register in the Deep Lake App and authenticate on the client using the methods in the link below: User Authentication
activeloop
Creating Vector Stores in the Deep Lake Managed Tensor Database
https://docs.activeloop.ai/quickstart#creating-vector-stores-in-the-deep-lake-managed-tensor-database
Deep Lake provides a Managed Tensor Database that stores and runs queries on Deep Lake infrastructure, instead of the client. To use this service, specify runtime = {'tensor_db': True} when creating the Vector Store.

# vector_store = VectorStore(
#     path = vector_store_path,
#     runtime = {'tensor_db': True}
# )
#
# vector_store.add(text = chunked_text,
#                  embedding_function = embedding_function,
#                  embedding_data = chunked_text,
#                  metadata = [{'source': source_text}]*len(chunked_text))
#
# search_results = vector_store.search(embedding_data = prompt,
#                                      embedding_function = embedding_function)
activeloop
Next Steps
https://docs.activeloop.ai/quickstart#next-steps
Check out our Getting Started Guide for a comprehensive walk-through of Deep Lake Vector Stores. For scaling Deep Lake to production-level applications, check out our Managed Tensor Database and Support for Concurrent Writes.

Congratulations, you've created a Vector Store and performed vector search using Deep Lake 🤓
activeloop
Visualizing your Vector Store
https://docs.activeloop.ai/quickstart#visualizing-your-vector-store
Visualization is available for Vector Stores stored in or connected to Deep Lake. The vector store above is stored locally, so it cannot be visualized, but here's an example of visualization for a representative Vector Store.
activeloop
Installing Deep Lake
https://docs.activeloop.ai/quickstart-dl#installing-deep-lake
Deep Lake can be installed using pip. By default, Deep Lake does not install dependencies for video, google-cloud, compute engine, and other features. Details on all installation options are available here.

!pip3 install deeplake
activeloop
Reading Samples From a Deep Lake Dataset
https://docs.activeloop.ai/quickstart-dl#reading-samples-from-a-deep-lake-dataset
Data is not immediately read into memory because Deep Lake operates lazily. You can fetch data by calling the .numpy() or .data() methods:

# Indexing
image = ds.images[0].numpy()  # Fetch the first image and return a numpy array
labels = ds.labels[0].data()  # Fetch the labels in the first image

# Slicing
img_list = ds.labels[0:100].numpy(aslist=True)  # Fetch 100 labels and store them as a list of numpy arrays

Other metadata, such as the mapping between numerical labels and their text counterparts, can be accessed using:

labels_list = ds.labels.info['class_names']
activeloop
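To make the lazy-loading behavior concrete, a small illustrative sketch (my own, not from the docs) that reuses the visdrone dataset loaded in the next row; nothing is downloaded until .numpy() or .data() is called:

import deeplake

ds = deeplake.load('hub://activeloop/visdrone-det-train')  # lazy: no data fetched yet

# Each .numpy()/.data() call fetches only the requested samples
for i in range(3):
    image = ds.images[i].numpy()
    labels = ds.labels[i].data()
    print(image.shape, labels['text'][:3])  # 'text' key per the api-basics row later in this preview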
Fetching Your First Deep Lake Dataset
https://docs.activeloop.ai/quickstart-dl#fetching-your-first-deep-lake-dataset
Let's load the Visdrone dataset, a rich dataset with many object detections per image. Datasets hosted by Activeloop are identified by the host organization id followed by the dataset name: activeloop/visdrone-det-train.

import deeplake

dataset_path = 'hub://activeloop/visdrone-det-train'
ds = deeplake.load(dataset_path)  # Returns a Deep Lake Dataset but does not download data locally
activeloop
Visualizing a Deep Lake Dataset
https://docs.activeloop.ai/quickstart-dl#visualizing-a-deep-lake-dataset
Deep Lake enables users to visualize and interpret large datasets. The tensor layout for a dataset can be inspected using:

ds.summary()

The dataset can be visualized in the Deep Lake UI, or using an iframe in a Jupyter notebook:

ds.visualize()

Visualizing datasets in the Deep Lake UI will unlock more features and faster performance compared to visualization in Jupyter notebooks.
activeloop
Creating Your Own Deep Lake Datasets
https://docs.activeloop.ai/quickstart-dl#creating-your-own-deep-lake-datasets
You can access all of the features above and more with your own datasets! If your source data conforms to one of the formats below, you can ingest them directly with 1 line of code. The ingestion functions support source data from the cloud, as well as creation of Deep Lake datasets in the cloud.
- YOLO
- COCO
- Classifications

For example, a COCO format dataset can be ingested using:

dataset_path = 's3://bucket_name_deeplake/dataset_name'  # Destination for the Deep Lake dataset

images_folder = 's3://bucket_name_source/images_folder'
annotations_files = ['s3://bucket_name_source/annotations.json']  # Can be a list of COCO jsons.

ds = deeplake.ingest_coco(images_folder, annotations_files, dataset_path, src_creds = {...}, dest_creds = {...})

For creating datasets that do not conform to one of the formats above, you can use our methods for manually creating datasets, tensors, and populating them with data.
activeloop
Authentication
https://docs.activeloop.ai/quickstart-dl#authentication
To use Deep Lake features that require authentication (Activeloop storage, Tensor Database storage, connecting your cloud dataset to the Deep Lake UI, etc.) you should register in the Deep Lake App and authenticate on the client using the methods in the link below: User Authentication
activeloop
Next Steps
https://docs.activeloop.ai/quickstart-dl#next-steps
Check out our Getting Started Guide for a comprehensive walk-through of Deep Lake. Also check out tutorials on Running Queries, Training Models, and Creating Datasets, as well as Playbooks about powerful use-cases that are enabled by Deep Lake.

Congratulations, you've got Deep Lake working on your local machine 🤓
activeloop
Storing Datasets in Your Own Cloud
https://docs.activeloop.ai/storage-and-credentials#storing-datasets-in-your-own-cloud
Deep Lake can be used as a pure OSS package without any registration or relationship with Activeloop. However, registering with Activeloop offers several benefits:
- Storage provided by Activeloop
- Access to the Tensor Database for performant vector search
- Access to Deep Lake App, which provides dataset visualization, querying, version control UI, dataset analytics, and other powerful features
- Managed credentials for Deep Lake datasets stored outside of Activeloop

When connecting data from your cloud using Managed Credentials, the data is never stored or cached in Deep Lake. All Deep Lake user interfaces (browser, python, etc.) fetch data directly from long-term storage.

Authentication Using Managed Credentials, Storage Options, Storing Deep Lake Data in Your Own Cloud
activeloop
Compute Engine offers high-performance implementations of compute-heavy Deep Lake features, such as distributed dataloading, large queries, and indexing. The engine is built in C++ and the user-interface is in Python.
https://docs.activeloop.ai/performance-features/introduction#compute-engine-offers-high-performance-implementations-of-compute-heavy-deep-lake-features-such-as-d
The Deep Lake Compute Engine is only accessible to registered and authenticated users, and it applies usage restrictions based on your Deep Lake Plan.
activeloop
Features Optimized in the Compute Engine:
https://docs.activeloop.ai/performance-features/introduction#features-optimized-in-the-compute-engine
Performant Dataloader, Tensor Query Language (TQL), Index for ANN Search, Managed Tensor Database
activeloop
How to use Deep Lake's performant Dataloader built and optimized in C++
https://docs.activeloop.ai/performance-features/performant-dataloader#how-to-use-deep-lakes-performant-dataloader-built-and-optimized-in-c++
Deep Lake offers an optimized dataloader implementation built in C++, which is 1.5-3X faster than the pure-python implementation, and it supports distributed training. The C++ and Python dataloaders can be used interchangeably, and their syntax varies as shown below.
activeloop
Pure-Python Dataloader
https://docs.activeloop.ai/performance-features/performant-dataloader#pure-python-dataloader
train_loader = ds_train.pytorch(num_workers = 8, transform = transform, batch_size = 32, tensors=['images', 'labels'], shuffle = True)
activeloop
C++ Dataloader
https://docs.activeloop.ai/performance-features/performant-dataloader#c++-dataloader
The C++ dataloader is only accessible to registered and authenticated users, and it applies usage restrictions based on your Deep Lake Plan.
activeloop
TensorFlow
https://docs.activeloop.ai/performance-features/performant-dataloader#tensorflow
train_loader = ds.dataloader()\
    .transform(transform)\
    .batch(32)\
    .shuffle(True)\
    .offset(10000)\
    .tensorflow(tensors=['images', 'labels'], num_workers = 8)
activeloop
Further Information
https://docs.activeloop.ai/performance-features/performant-dataloader#further-information
Training Models, Training Reproducibility Using Deep Lake and Weights & Biases
activeloop
PyTorch (returns PyTorch Dataloader)
https://docs.activeloop.ai/performance-features/performant-dataloader#pytorch-returns-pytorch-dataloader
train_loader = ds.dataloader()\
    .transform(transform)\
    .batch(32)\
    .shuffle(True)\
    .offset(10000)\
    .pytorch(tensors=['images', 'labels'], num_workers = 8)
activeloop
How to query datasets using the Deep Lake Tensor Query Language (TQL)
https://docs.activeloop.ai/performance-features/querying-datasets#how-to-query-datasets-using-the-deep-lake-tensor-query-language-tql
Querying datasets is a critical aspect of data science workflows that enables users to filter datasets and focus their work on the most relevant data. Deep Lake offers a highly-performant query engine built in C++ and optimized for the Deep Lake data format. The Deep Lake query engine is only accessible to registered and authenticated users, and it applies usage restrictions based on your Deep Lake Plan.
activeloop
Querying in the Vector Store Python API
https://docs.activeloop.ai/performance-features/querying-datasets#querying-in-the-vector-store-python-api
view = vector_store.search(query = <query_string>, exec_option = 'compute_engine')
activeloop
Query Syntax
https://docs.activeloop.ai/performance-features/querying-datasets#query-syntax
TQL Syntax
activeloop
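The row above only names the TQL Syntax page. For a concrete flavor, a minimal sketch (my own) built from the documented ds.query pattern shown in the api-basics rows later in this preview; the label value is illustrative, and running a query requires the registered-user query engine noted above:

import deeplake

ds = deeplake.load('hub://activeloop/visdrone-det-train')
view = ds.query("select * where contains(labels, 'pedestrian')")  # illustrative label value
print(len(view))  # number of samples that satisfied the query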
Querying in the low-level Python API
https://docs.activeloop.ai/performance-features/querying-datasets#querying-in-the-low-level-python-api
Queries can also be performed in the Python API using:

view = ds.query(<query_string>)
activeloop
Saving and utilizing dataset query results in the low-level Python API
https://docs.activeloop.ai/performance-features/querying-datasets#saving-and-utilizing-dataset-query-results-in-the-low-level-python-api
The query results (Dataset Views) can be saved in the UI as shown above, or, if the view is generated in Python, it can be saved using the Python API below. Full details are available here.

ds_view.save_view(message = 'Samples with monarchs')

In order to maintain data lineage, Dataset Views are immutable and are connected to specific commits. Therefore, views can only be saved if the dataset has a commit and there are no uncommitted changes in the HEAD. You can check for this using ds.has_head_changes.

Saved views can be loaded in the Python API and passed to ML frameworks just like regular datasets:

ds_view = ds.load_view(view_id, optimize = True, num_workers = 2)

for data in ds_view.pytorch():
    # Training loop here

The optimize parameter in ds.load_view(..., optimize = True) materializes the Dataset View into a new sub-dataset that is optimized for streaming. If the original dataset uses linked tensors, the data will be copied to Deep Lake format. Optimizing the Dataset View is critical for achieving rapid streaming.

If the saved Dataset View is no longer needed, it can be deleted using:

ds.delete_view(view_id)
activeloop
How to Use Deep Memory to Improve Retrieval Accuracy in Your LLM Apps
https://docs.activeloop.ai/performance-features/deep-memory#how-to-use-deep-memory-to-improve-retrieval-accuracy-in-your-llm-apps
Deep Memory is a suite of tools that enables you to optimize your Vector Store for your use-case and achieve higher accuracy in your LLM apps.
activeloop
Embedding Transformation
https://docs.activeloop.ai/performance-features/deep-memory#embedding-transformation
Deep Memory computes a transformation that converts your embeddings into an embedding space that is tailored for your use case. This increases the accuracy of your Vector Search by up to 22%, which significantly impacts the user experience of your LLM applications. Furthermore, Deep Memory can also be used to decrease costs by reducing the amount of context (k) that must be injected into the LLM prompt to achieve a given accuracy, thereby reducing token usage.
activeloop
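The preview lists VectorStore.DeepMemory operations in the API reference row but includes no code. Purely as an orientation sketch, with method and argument names that are assumptions rather than the documented signature (consult the Deep Memory docs for the real API):

# Hypothetical sketch; the names below are assumptions, not the documented API.
# questions: example user queries for your corpus
# relevance: for each query, ids of the Vector Store rows that should be retrieved
job = vector_store.deep_memory.train(
    queries = questions,
    relevance = relevance,
    embedding_function = embedding_function,
)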
How Deep Lake Implements an Index for ANN Search
https://docs.activeloop.ai/performance-features/index-for-ann-search#how-deep-lake-implements-an-index-for-ann-search
Deep Lake implements the Hierarchical Navigable Small World (HNSW) index for Approximate Nearest Neighbor (ANN) search. The index is based on the OSS hnswlib package with added optimizations. The implementation enables users to run queries on >35M embeddings in less than 1 second.
activeloop
Unique aspects of Deep Lake's HNSW implementation
https://docs.activeloop.ai/performance-features/index-for-ann-search#unique-aspects-of-deep-lakes-hsnw-implementation
- Rapid index creation with multi-threading optimized for Deep Lake
- Efficient memory management that reduces RAM usage
activeloop
Memory Management in Deep Lake
https://docs.activeloop.ai/performance-features/index-for-ann-search#memory-management-in-deep-lake
RAM Cost >> On-disk Cost >> Object Storage Cost

Minimizing RAM usage and maximizing object storage usage significantly reduces the cost of running a Vector Database. Deep Lake has a unique implementation of memory allocation that minimizes the RAM requirement without any performance penalty:

Memory Architecture for the Deep Lake Vector Store
activeloop
Limitations
https://docs.activeloop.ai/performance-features/index-for-ann-search#limitations
The following limitations of the index are being addressed in upcoming releases:
- The index does not support incremental updates. If any update is made to the dataset, the index is re-created.
- If the search is performed using a combination of attribute and vector search, the index is not used and linear search is applied instead.
activeloop
Using the Index
https://docs.activeloop.ai/performance-features/index-for-ann-search#using-the-index
By default, the index is turned off in Deep Lake. To enable the index, during Vector Store initialization or loading, specify the Vector Store length threshold above which the index will be applied:

vectorstore = VectorStore(path, index_params = {'threshold': <threshold_int>})
activeloop
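For example, a hedged usage sketch; the 10,000-row threshold is an arbitrary illustrative choice, not a recommendation from the docs:

from deeplake.core.vectorstore import VectorStore

# Index is built once the Vector Store holds more than 10,000 rows (illustrative value)
vectorstore = VectorStore('pg_essay_deeplake', index_params = {'threshold': 10000})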
LangChain and LlamaIndex
https://docs.activeloop.ai/performance-features/managed-database#langchain-and-llamaindex
To use the Managed Vector Database in LangChain or LlamaIndex, specify dataset_path = 'hub://org_id/dataset_name' and runtime = {'tensor_db': True} during Vector Store creation.
activeloop
Overview of Deep Lake's Managed Tensor Database
https://docs.activeloop.ai/performance-features/managed-database#overview-of-deep-lakes-managed-tensor-database
Deep Lake offers a serverless Managed Tensor Database that eliminates the complexity of self-hosting and substantially lowers costs. Currently, it only supports dataset queries, including vector search, but additional features for creating and modifying data are being added in December 2023.

Comparison of Deep Lake as a Managed Database vs. Embedded Database
activeloop
REST API
https://docs.activeloop.ai/performance-features/managed-database#rest-api
A standalone REST API is available for interacting with the Managed Database: REST API
activeloop
Further Information:
https://docs.activeloop.ai/performance-features/managed-database#further-information
Migrating Datasets to the Tensor Database
activeloop
Vector Store use-cases are implemented using an API that balances simplicity and customizability
https://docs.activeloop.ai/getting-started#vector-store-use-cases-are-implemented-using-an-api-that-balances-simplicity-and-customizability
Vector Store
activeloop
Deep Learning use-cases are implemented using the low-level API that offers maximum customizability
https://docs.activeloop.ai/getting-started#deep-learning-use-cases-are-implemented-using-the-low-level-api-that-offers-maximum-customizability
Deep Learning
activeloop
Deep Lake Tutorials Based on Use-Case:
https://docs.activeloop.ai/tutorials#deep-lake-tutorials-based-on-use-case
Vector Store Tutorials, Deep Learning Tutorials
activeloop
Playbooks are comprehensive examples of end-to-end workflows using Activeloop products
https://docs.activeloop.ai/playbooks#playbooks-are-comprehensive-examples-of-end-to-end-workflows-using-activeloop-products
- Querying, Training and Editing Datasets with Data Lineage
- Evaluating Model Performance
- Training Reproducibility Using Deep Lake and Weights & Biases
- Working with Videos
activeloop
Import and Installation
https://docs.activeloop.ai/api-basics#import-and-installation
By default, Deep Lake does not install dependencies for audio, video, google-cloud, and other features. Details on installation options are available here.

!pip3 install deeplake

import deeplake
activeloop
Loading Deep Lake Datasets
https://docs.activeloop.ai/api-basics#loading-deep-lake-datasets
Deep Lake datasets can be stored at a variety of storage locations using the appropriate dataset_path parameter below. We support S3, GCS, Activeloop storage, and are constantly adding to the list.

# Load a Deep Lake Dataset
ds = deeplake.load('dataset_path', creds = {'optional'}, token = 'optional')
activeloop
Deleting Datasets
https://docs.activeloop.ai/api-basics#deleting-datasets
ds.delete()

deeplake.delete('dataset_path', creds = {'optional'}, token = 'optional')

API deletions of Deep Lake Cloud datasets are immediate, whereas UI-initiated deletions are postponed by 5 minutes. Once deleted, dataset names can't be reused in the Deep Lake Cloud.
activeloop
Creating Deep Lake Datasets
https://docs.activeloop.ai/api-basics#creating-deep-lake-datasets
# Create an empty Deep Lake dataset
ds = deeplake.empty('dataset_path', creds = {'optional'}, token = 'optional')

# Create a Deep Lake Dataset with the same tensors as another dataset
ds = deeplake.like(ds_object or 'dataset_path', creds = {'optional'}, token = 'optional')

# Automatically create a Deep Lake Dataset from another data source
ds = deeplake.ingest(source_folder, deeplake_dataset_path, ... 'see API reference for details')
ds = deeplake.ingest_coco(images_folder, 'annotations.json', deeplake_dataset_path, ... 'see API reference for details')
ds = deeplake.ingest_yolo(data_directory, deeplake_dataset_path, class_names_file, ... 'see API reference for details')
activeloop
Visualizing and Inspecting Datasets
https://docs.activeloop.ai/api-basics#visualizing-and-inspecting-datasets
ds.visualize()

ds.summary()
activeloop
Appending Data to Datasets
https://docs.activeloop.ai/api-basics#appending-data-to-datasets
ds.append({'tensor_1': np.ones((1,4)), 'tensor_2': deeplake.read('image.jpg')})
ds.my_group.append({'tensor_1': np.ones((1,4)), 'tensor_2': deeplake.read('image.jpg')})
activeloop
Appending/Updating Data in Individual Tensors
https://docs.activeloop.ai/api-basics#appending-updating-data-in-individual-tensors
# Append a single sample
ds.my_tensor.append(np.ones((1,4)))
ds.my_tensor.append(deeplake.read('image.jpg'))

# Append multiple samples. The first axis in the
# numpy array is assumed to be the sample axis for the tensor
ds.my_tensor.extend(np.ones((5,1,4)))

# Editing or adding data at a specific index
ds.my_tensor[i] = deeplake.read('image.jpg')
activeloop
Deleting data
https://docs.activeloop.ai/api-basics#deleting-data
# Removing samples by index
ds.pop(i)

# Delete all data in a tensor
ds.<tensor_name>.clear()

# Delete tensor and all of its data
ds.delete_tensor(<tensor_name>)
activeloop
Creating Tensors
https://docs.activeloop.ai/api-basics#creating-tensors
# Specifying htype is recommended for maximizing performance.
ds.create_tensor('my_tensor', htype = 'bbox')

# Specifying the correct compression is critical for images, videos, audio and
# other rich data types.
ds.create_tensor('songs', htype = 'audio', sample_compression = 'mp3')
activeloop
Appending Empty Samples or Skipping Samples
https://docs.activeloop.ai/api-basics#appending-empty-samples-or-skipping-samples
# Data appended as None will be returned as an empty array
ds.append({'tensor_1': deeplake.read(...), 'tensor_2': None})
ds.my_tensor.append(None)

# Empty arrays can be explicitly appended if the length of the shape
# of the empty array matches that of the other samples
ds.boxes.append(np.zeros((0,4)))
activeloop
Connecting Deep Lake Datasets to ML Frameworks
https://docs.activeloop.ai/api-basics#connecting-deep-lake-datasets-to-ml-frameworks
# PyTorch Dataloader
dataloader = ds.pytorch(batch_size = 16, transform = {'images': torchvision_tform, 'labels': None}, num_workers = 2, scheduler = 'threaded')

# TensorFlow Dataset
ds_tensorflow = ds.tensorflow()

# Enterprise Dataloader
dataloader = ds.dataloader().batch(batch_size = 64).pytorch(num_workers = 8)
activeloop
Accessing Tensor Data
https://docs.activeloop.ai/api-basics#accessing-tensor-data
# Read the i-th tensor sample
np_array = ds.my_tensor[i].numpy()
text = ds.my_text_tensor[i].data()  # More comprehensive view of the data
bytes = ds.my_tensor[i].tobytes()   # Raw bytes of the sample

# Read the i-th dataset sample as a numpy array
image = ds[i].images.numpy()

# Read the i-th labels as a numpy array or list of strings
labels_array = ds.labels[i].numpy()
labels_array = ds.labels[i].data()['value']  # same as .numpy()
labels_string_list = ds.labels[i].data()['text']

# Read a tensor sample from a hierarchical group
np_array = ds.my_group.my_tensor_1[i].numpy()
np_array = ds.my_group.my_tensor_2[i].numpy()

# Read multiple tensor samples into a numpy array
np_array = ds.my_tensor[0:10].numpy()

# Read multiple tensor samples into a list of numpy arrays
np_array_list = ds.my_tensor[0:10].numpy(aslist=True)
activeloop
Creating Tensor Hierarchies
https://docs.activeloop.ai/api-basics#creating-tensor-hierarchies
ds.create_group('my_group')
ds.my_group.create_tensor('my_tensor')
ds.create_tensor('my_group/my_tensor')  # Automatically creates the group 'my_group'
activeloop
Querying Datasets and Saving Dataset Views
https://docs.activeloop.ai/api-basics#querying-datasets-and-saving-dataset-views
A full list of supported queries is shown here.

view = ds.query("select * where contains(labels, 'giraffe')")

view.save_view(optimize = True)

view = ds.load_view(id = 'query_id')

# Return the original dataset indices that satisfied the query condition
indices = list(view.sample_indices)
activeloop
Adding Tensor and Dataset-Level Metadata
https://docs.activeloop.ai/api-basics#adding-tensor-and-dataset-level-metadata
# Add or update dataset metadata
ds.info.update(key1 = 'text', key2 = number)
# Can also be run as: ds.info.update({'key1': 'value1', 'key2': num_value})

# Add or update tensor metadata
ds.my_tensor.info.update(key1 = 'text', key2 = number)

# Delete metadata
ds.info.delete()        # Delete all metadata
ds.info.delete('key1')  # Delete 1 key in metadata
activeloop
Copying datasets
https://docs.activeloop.ai/api-basics#copying-datasets
# Fastest option - copies everything including version history
ds = deeplake.deepcopy('src_dataset_path', 'dest_dataset_path', src_creds, dest_creds, token)

# Slower option - copies only data on the last commit
ds = deeplake.copy('src_dataset_path', 'dest_dataset_path', src_creds, dest_creds, token)
activeloop
Advanced
https://docs.activeloop.ai/api-basics#advanced
# Load a Deep Lake Dataset if it already exists (same as deeplake.load), or initialize
# a new Deep Lake Dataset if it does not already exist (same as deeplake.empty)
ds = deeplake.dataset('dataset_path', creds = {'optional'}, token = 'optional')

# Append multiple samples using a list
ds.my_tensor.extend([np.ones((1,4)), np.ones((3,4)), np.ones((2,4))])

# Fetch adjacent data in the chunk -> Increases speed when loading
# sequentially or if a tensor's data fits in the cache
numeric_label = ds.labels[i].numpy(fetch_chunks = True)
activeloop
Versioning Datasets
https://docs.activeloop.ai/api-basics#versioning-datasets
# Commit data
commit_id = ds.commit('Added 100 images of trucks')

# Print the commit log
log = ds.log()

# Checkout a branch or commit
ds.checkout('branch_name' or commit_id)

# Create a new branch
ds.checkout('new_branch', create = True)

# Examine differences between commits
ds.diff()

# Delete all changes since the previous commit
ds.reset()

# Delete a branch and its commits - only allowed for branches that have not been merged
ds.delete_branch('branch_name')
activeloop
Maximizing performance
https://docs.activeloop.ai/api-basics#maximizing-performance
Make sure to use the with context manager when making any updates to datasets.

with ds:
    ds.create_tensor('my_tensor')
    for i in range(10):
        ds.my_tensor.append(i)
activeloop
How to use Deep Lake at Scale with best practices
https://docs.activeloop.ai/technical-details/best-practices#how-to-use-deep-lake-at-scale-with-best-practices
activeloop
Tensors
https://docs.activeloop.ai/technical-details/data-layout#tensors
Deep Lake uses a columnar storage architecture, and the columns in Deep Lake are referred to as tensors. Data in the tensors can be added or modified, and the data in different tensors are independent of each other.
activeloop
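To make the column analogy concrete, a small hypothetical sketch (tensor and dataset names are illustrative) using the tensor-creation and append calls from the api-basics rows:

import deeplake

ds = deeplake.empty('columnar_demo')  # hypothetical local dataset
ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg')
ds.create_tensor('captions', htype = 'text')

# Each tensor is an independent column; appending to one leaves the other untouched
ds.captions.append('a dog on a beach')
print(len(ds.captions), len(ds.images))  # expected: 1 0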
Hidden Tensors
https://docs.activeloop.ai/technical-details/data-layout#hidden-tensors
When data is appended to Deep Lake, certain important information is broken up and duplicated in a separate tensor, so that the information can be accessed and queried without loading all of the data. Examples include the shape of a sample (i.e. width, height, and number of channels for an image), or the metadata from file headers that were passed to deeplake.read('filename').
activeloop
Indexing and Samples
https://docs.activeloop.ai/technical-details/data-layout#indexing-and-samples
Deep Lake datasets and their tensors are indexed, and data at a given index that spans multiple tensors are referred to as samples. Data at the same index are assumed to be related. For example, data in a bbox tensor at index 100 is assumed to be related to data in the tensor image at index 100.
activeloop
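A tiny sketch of that convention, using the access patterns from the api-basics rows; the tensor names follow the images/boxes style used elsewhere in this preview and may differ in a real dataset:

import deeplake

ds = deeplake.load('hub://activeloop/visdrone-det-train')

i = 100
image = ds.images[i].numpy()  # sample i of the images tensor
boxes = ds.boxes[i].numpy()   # assumed to annotate the image at the same index
print(image.shape, boxes.shape)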
Chunking
https://docs.activeloop.ai/technical-details/data-layout#chunking
Most data in Deep Lake format is stored in chunks, which are blobs of data of a pre-defined size. The purpose of chunking is to accelerate the streaming of data across networks by increasing the amount of data that is transferred per network request.

Each tensor has its own chunks, and the default chunk size is 8MB. A single chunk consists of data from multiple indices when the individual data points (image, label, annotation, etc.) are smaller than the chunk size. Conversely, when individual data points are larger than the chunk size, the data is split among multiple chunks (tiling).

An exception to this chunking logic is video data. Videos that are larger than the specified chunk size are not broken into smaller pieces, because Deep Lake uses efficient libraries to stream and access subsets of videos, thus making it unnecessary to split them apart.
activeloop
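A back-of-envelope check of how many samples land in one chunk, assuming a hypothetical 100 KB average compressed image (the 8 MB default comes from the row above):

CHUNK_SIZE_KB = 8 * 1024   # default 8MB chunk
SAMPLE_KB = 100            # hypothetical average compressed image size

print(CHUNK_SIZE_KB // SAMPLE_KB)  # ~81 images stored together per chunk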
Groups
https://docs.activeloop.ai/technical-details/data-layout#groups
Multiple tensors can be combined into groups. Groups do not fundamentally change the way data is stored, but they are useful for helping the Activeloop Platform understand how different tensors are related.
activeloop
Length of a Dataset
https://docs.activeloop.ai/technical-details/data-layout#length-of-a-dataset
Deep Lake allows for ragged tensors (tensors of different length), so it is important to understand the terminology around dataset length:
- length (ds.len or len(ds)) - the length of the shortest tensor, as determined by its last index
- minimum length (ds.min_len) - same as length
- maximum length (ds.max_len) - the length of the longest tensor, as determined by its last index

By default, Deep Lake throws an error if a tensor is accessed at an index at which data (empty or non-empty) has not been added. In the example below, ds.bbox[3].numpy() would throw an error. To pad the unspecified data and create a virtual view where the missing samples are treated as empty data, use ds.max_view(). In the example below, the length of this virtual view would be 6.
activeloop
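To ground those terms, a hypothetical sketch matching the kind of example the row refers to: an images tensor with 6 samples and a bbox tensor with only 3 (dataset and tensor names are illustrative):

import numpy as np
import deeplake

ds = deeplake.empty('ragged_demo')  # hypothetical local dataset
ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg')
ds.create_tensor('bbox', htype = 'bbox')

with ds:
    ds.images.extend(np.zeros((6, 8, 8, 3), dtype = np.uint8))  # 6 image samples
    ds.bbox.extend(np.zeros((3, 1, 4), dtype = np.float32))     # only 3 bbox samples

print(len(ds), ds.min_len, ds.max_len)  # expected: 3 3 6
# ds.bbox[3].numpy() would raise; ds.max_view() pads the missing samples as empty
print(len(ds.max_view()))  # expected: 6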
Understanding the Interaction Between Deep Lake's Versions, Queries, and Dataset Views.
https://docs.activeloop.ai/technical-details/version-control-and-querying#understanding-the-interaction-between-deep-lakes-versions-queries-and-dataset-views.
Version control is the core of the Deep Lake data format, and it interacts with queries and views as follows:
- Datasets have commits and branches, and they can be traversed or merged using Deep Lake's Python API.
- Queries are applied on top of commits, and in order to save a query result as a view, the dataset cannot be in an uncommitted state (no changes were performed since the prior commit).
- Each saved view is associated with a particular commit, and the view itself contains information on which dataset indices satisfied the query condition.

This logical approach was chosen in order to preserve data lineage. Otherwise, it would be possible to change data on which a query was executed, thereby potentially invalidating the saved view, since the indices that satisfied the query condition may no longer be correct after the dataset was changed.

Please check out our Getting Started Guide to learn how to use the Python API to version your data, run queries, and save views. An example workflow using version control and queries is shown below.
activeloop
Version Control HEAD Commit
https://docs.activeloop.ai/technical-details/version-control-and-querying#version-control-head-commit
Unlike Git, Deep Lake's dataset version control does not have a local staging area because all dataset updates are immediately synced with the permanent storage location (cloud or local). Therefore, any changes to a dataset are automatically stored in a HEAD commit on the current branch. This means that uncommitted changes do not appear on other branches, and they are visible to all users.
activeloop
How to visualize machine learning datasets
https://docs.activeloop.ai/technical-details/dataset-visualization#how-to-visualize-machine-learning-datasets
Deep Lake has a web interface for visualizing, versioning, and querying machine learning datasets. It utilizes the Deep Lake format under-the-hood, and it can be connected to datasets stored in all Deep Lake storage locations.
activeloop
Visualization can be performed in 3 ways:
https://docs.activeloop.ai/technical-details/dataset-visualization#visualization-can-be-performed-in-3-ways
1. In the Deep Lake UI (most feature-rich and performant option)
2. In the Python API using ds.visualize()
3. In your own application using our integration options.
activeloop
Requirements for correctly visualizing your own datasets
https://docs.activeloop.ai/technical-details/dataset-visualization#requirements-for-correctly-visualizing-your-own-datasets
Deep Lake makes assumptions about underlying data types and relationships between tensors in order to display the data correctly. Understanding the following concepts is necessary in order to use the visualizer:
1. Data Types (htypes)
2. Relationships between tensors
activeloop
Downsampling Data for Faster Visualization
https://docs.activeloop.ai/technical-details/dataset-visualization#downsampling-data-for-faster-visualization
For faster visualization of images and masks, tensors can be downsampled during dataset creation. The downsampled data are stored in the dataset and are automatically rendered by the visualizer depending on the zoom level. To add downsampling to your tensors, specify the downsampling factor and the number of downsampling layers during tensor creation:

# 3X downsampling per layer, 2 layers
ds.create_tensor('images', htype = 'image', downsampling = (3,2))

Note: since downsampling requires decompression and recompression of data, it will slow down dataset ingestion.
activeloop
Indexing
https://docs.activeloop.ai/technical-details/tensor-relationships#indexing
Hub datasets and their tensors are indexed like ds[index] or ds.tensor_name[index], and data at the same index are assumed to be related. For example, a bounding_box at index 100 is assumed to apply to the image at index 100.
activeloop
Relationships Between Tensors
https://docs.activeloop.ai/technical-details/tensor-relationships#relationships-between-tensors
For datasets with multiple tensors, it is important to follow the conventions below in order for the visualizer to correctly infer how tensors are related.

By default, in the absence of groups, the visualizer assumes that all tensors are related to each other. This works well for simple use cases. For example, it is correct to assume that the images, labels, and boxes tensors are related in the dataset below:

ds
-> images (htype = image)
-> labels (htype = class_label)
-> boxes (htype = bbox)

However, if datasets are highly complex, assuming that all tensors are related may lead to visualization errors, because every tensor may not be related to every other tensor:

ds
-> images (htype = image)
-> vehicle_labels (htype = class_label)
-> vehicle_boxes (htype = bbox)
-> people_labels (htype = class_label)
-> people_masks (htype = binary_mask)

In the example above, only some of the annotation tensors are related to each other:
- vehicle_labels -> vehicle_boxes: boxes and labels describing cars, trucks, etc.
- people_labels -> people_masks: binary masks and labels describing adults, toddlers, etc.

The best method for disambiguating the relationships between tensors is to place them in groups, because the visualizer assumes that annotation tensors in different groups are not related. In the example above, the following groups could be used to disambiguate the annotations:

ds
-> images (htype = image)
-> vehicles (group)
   -> vehicle_labels (htype = class_label)
   -> vehicle_boxes (htype = bbox)
-> people (group)
   -> people_labels (htype = class_label)
   -> people_masks (htype = binary_mask)
activeloop
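A small sketch of how that grouped layout could be created with the low-level calls from the api-basics rows; the dataset path is hypothetical, and tensor and group names mirror the example above:

import deeplake

ds = deeplake.empty('grouped_demo')  # hypothetical local dataset

ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg')

ds.create_group('vehicles')
ds.vehicles.create_tensor('vehicle_labels', htype = 'class_label')
ds.vehicles.create_tensor('vehicle_boxes', htype = 'bbox')

ds.create_group('people')
ds.people.create_tensor('people_labels', htype = 'class_label')
ds.people.create_tensor('people_masks', htype = 'binary_mask')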
HTML iframe (Alpha)
https://docs.activeloop.ai/technical-details/visualizer-integration#html-iframe-alpha
To embed into your html page, you can use our iframe integration:

<iframe src='https://app.activeloop.ai/visualizer/iframe?url=hub://activeloop/imagenet-train' width='800px' height='600px'></iframe>

iframe URL params:
- url - the url of the dataset
- vs - visualizer state, which can be obtained from the platform url
- token - user token, for private datasets. If the value is ask, then the UI will be populated for entering the token
- checkpoint - dataset checkpoint
- query - query string to apply on the dataset
activeloop
How to embed the Activeloop visualizer into your own web applications
https://docs.activeloop.ai/technical-details/visualizer-integration#how-to-embed-the-activeloop-visualizer-into-your-own-web-applications
The visualization engine allows the user to visualize, explore, and interact with Deep Lake datasets. In addition to being used through the Activeloop UI or in Python, the Activeloop visualizer can also be embedded into your application.
activeloop
Javascript API (Alpha)
https://docs.activeloop.ai/technical-details/visualizer-integration#javascript-api-alpha
To have more fine-grained control, you can embed the visualizer using Javascript:

<div id='container'></div>
<script src='https://app.activeloop.ai/visualizer/vis.js'></script>
<script>
  let container = document.getElementById('container')
  window.vis.visualize('hub://activeloop/imagenet-train', null, null, container, null)
</script>

or, to visualize private datasets with authentication:

<div id='container'></div>
<script src='https://app.activeloop.ai/visualizer/vis.js'></script>
<script>
  let container = document.getElementById('container')
  window.vis.visualize('hub://org/private', null, null, container, { requireSignin: true })
</script>

Interface

Below you can find definitions of the arguments.

/// ds - Dataset url
/// commit - optional commit id
/// state - optional initial state of the visualizer
/// container - HTML element serving as container for visualizer elements
/// options - optional Visualization options
static visualize(
  ds: string,
  commit: string | null = null,
  state: string | null = null,
  container: HTMLElement,
  options: VisOptions | null
): Promise<Vis>;

/// backlink - Show backlink to platform button
/// singleSampleView - Enable single sample view through enter key
/// requireSignin - Requires signin to get access token
/// token - Token id
/// gridMode - Canvas vs Grid
/// queryString - Query to apply on the iframe
export type VisOptions = {
  backlink?: Boolean
  singleSampleView?: Boolean
  requireSignin?: Boolean
  token: string | null
  gridMode?: 'canvas' | 'grid'
  queryString?: string
}

The visualize method returns Promise<Vis>, which can be used to dynamically change the visualizer state. Vis supports only query functions for now:

class Vis {
  /// Asynchronously runs a query and resolves the promise when the query completes.
  /// In case of an error in the query, rejects the promise.
  query(queryString: string): Promise<void>
}
activeloop
How Shuffling Works in Deep Lake's PyTorch DataLoader
https://docs.activeloop.ai/technical-details/shuffling-in-dataloaders#how-shuffling-works-in-deep-lakes-pytorch-dataloader
The Deep Lake shuffling algorithm is based upon a shuffle buffer that preloads a specified amount of data (in MB) determined by the buffer_size parameter in ds.pytorch(buffer_size = 2048). First, the dataloader randomly selects chunks from the applicable tensors until the shuffle buffer is full. Next, the indices in the shuffle buffer are randomly sampled to construct the batches that are returned by the dataloader. As the data in the shuffle buffer is consumed, new chunks are randomly selected and added to the buffer.

In the OSS dataloader, the shuffle buffer contains the decompressed, decoded, and transformed samples. When using the PyTorch dataloaders, this corresponds to torch tensors. In the Performant dataloader, the shuffle buffer contains the non-decompressed data in the format it is stored in. For images, this typically corresponds to compressed bytes in jpeg, png, or other compressions. Since compressed data is stored more efficiently than uncompressed data, there are typically more distinct samples of data in the Performant dataloader shuffle buffer compared to the OSS shuffle buffer.

If many chunks in the buffer contain data from the same class, which may occur if data was uploaded in non-random order, the shuffle buffer may contain fewer unique classes than if the samples were chosen fully randomly based on index. The most extreme case of reduced randomness occurs when datasets are much larger than the shuffle buffer, when they have many classes, and when those classes occur in sequence within the dataset indices. One example dataset is Unshuffled ImageNet, which has 1000 classes, 1.2M images, 140GB of data, and approximately 140 images per 16MB chunk. When the images are uploaded in sequence, the plot below shows how many unique classes are returned by the loader vs the number of images that have been returned in total. It is evident that fully random sampling returns more unique values than the Deep Lake dataloader.

If reduced randomness has an impact on model performance in your workflows, the recommended countermeasures are:
- Store the dataset in a shuffled fashion such that the data does not appear in order by class. This completely mitigates the randomness concerns at the output of the data loader.
- Store the dataset with a smaller chunk size. This increases randomness because the shuffle buffer selects more discrete chunks before filling up. The current default chunk size is 8MB, and reducing it to 4MB significantly increases randomness (see plot above) with only a modest slowdown in data transfer speed.
- Increase the size of the shuffle buffer. This mitigates the randomness concerns but may not completely alleviate them.
activeloop
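As a concrete handle on the buffer_size parameter documented above, a minimal sketch; the 4096 MB value is an illustrative choice, not a recommendation:

import deeplake

ds = deeplake.load('hub://activeloop/visdrone-det-train')

# Larger shuffle buffer (in MB) -> more distinct samples available when batches are drawn
train_loader = ds.pytorch(num_workers = 8, batch_size = 32, shuffle = True, buffer_size = 4096)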
Providing Feedback
https://docs.activeloop.ai/technical-details/how-to-contribute#providing-feedback
We love feedback! Please join our Slack Community or open an issue on GitHub.
activeloop
Getting Started With Development
https://docs.activeloop.ai/technical-details/how-to-contribute#getting-started-with-development
Clone the repository:

git clone https://github.com/activeloopai/deeplake
cd deeplake

If you are using Linux, install environment dependencies:

apt-get -y update
apt-get -y install git wget build-essential python-setuptools python3-dev libjpeg-dev libpng-dev zlib1g-dev
apt install build-essential

If you are planning to work on videos, install codecs:

apt-get install -y ffmpeg libavcodec-dev libavformat-dev libswscale-dev

Install the package locally with plugins and development dependencies:

pip install -r deeplake/requirements/plugins.txt
pip install -r deeplake/requirements/tests.txt
pip install -e .

Run local tests to ensure everything is correct:

pytest -x --local .
activeloop
Using Docker (optional)
https://docs.activeloop.ai/technical-details/how-to-contribute#using-docker-optional
You can use docker-compose for running tests:

docker-compose -f ./bin/docker-compose.yaml up --build local

You can even work inside the container by building the image and opening a bash shell in it:

docker build -t activeloop-deeplake:latest -f ./bin/Dockerfile.dev .
docker run -it -v $(pwd):/app activeloop-deeplake:latest bash
$ python3 -c 'import deeplake'

Now changes made to your local files will be directly reflected in the package running inside the container.
activeloop
Linting
https://docs.activeloop.ai/technical-details/how-to-contribute#linting
Deep Lake uses the black Python code formatter. You can auto-format your code by running pip install black, then running black . inside the directory you want to format.
activeloop
Docstrings
https://docs.activeloop.ai/technical-details/how-to-contribute#docstrings
Deep Lake uses Google Docstrings. Please refer to this example to learn more.
activeloop
Typing
https://docs.activeloop.ai/technical-details/how-to-contribute#typing
Deep Lake uses static typing for function arguments and variables for better code readability. Deep Lake has a GitHub action that runs mypy . (similarly to pytest .) to check for valid static typing. You can refer to the mypy documentation for more information.
activeloop
Prerequisites
https://docs.activeloop.ai/technical-details/how-to-contribute#prerequisites
- Understand how to write pytest tests.
- Understand what a pytest fixture is.
- Understand what pytest parametrizations are.
activeloop
Testing
https://docs.activeloop.ai/technical-details/how-to-contribute#testing
Deep Lake uses pytest for tests. In order to make it easier to contribute, Deep Lake also has a set of custom options defined here.
activeloop
Options
https://docs.activeloop.ai/technical-details/how-to-contribute#options
To see a list of Deep Lake's custom pytest options, run this command: pytest -h | sed -En '/custom options:/,/\[pytest\] ini\-options/p'.
activeloop
Fixtures
https://docs.activeloop.ai/technical-details/how-to-contribute#fixtures
You can find more information on pytest fixtures here.
- memory_storage: If --memory-skip is provided, tests with this fixture will be skipped. Otherwise, the test will run with only a MemoryProvider.
- local_storage: If --local is not provided, tests with this fixture will be skipped. Otherwise, the test will run with only a LocalProvider.
- s3_storage: If --s3 is not provided, tests with this fixture will be skipped. Otherwise, the test will run with only an S3Provider.
- storage: All tests that use the storage fixture will be parametrized with the enabled StorageProviders (enabled via the options defined below). If --cache-chains is provided, storage may also be a cache chain. Cache chains have the same interface as a StorageProvider, but instead of a single provider, they are multiple providers chained in a sequence, where the last provider in the chain is considered the actual storage.
- ds: The same as the storage fixture, but the storages that are parametrized are wrapped with a Dataset.

Each StorageProvider/Dataset that is created for a test via a fixture will automatically have a root created, and it will be destroyed after the test. If you want to keep this data after the test run, you can use the --keep-storage option.
activeloop
Fixture Examples
https://docs.activeloop.ai/technical-details/how-to-contribute#fixture-examples
Single storage provider fixtures:

def test_memory(memory_storage):
    # Test will skip if `--memory-skip` is provided
    memory_storage['key'] = b'1234'  # This data will only be stored in memory

def test_local(local_storage):
    # Test will skip if `--local` is not provided
    local_storage['key'] = b'1234'  # This data will only be stored locally

def test_s3(s3_storage):
    # Test will skip if `--s3` is not provided
    # Test will fail if credentials are not provided
    s3_storage['key'] = b'1234'  # This data will only be stored in s3

Multiple storage providers/cache chains:

from deeplake.core.tests.common import parametrize_all_storages, parametrize_all_caches, parametrize_all_storages_and_caches

@parametrize_all_storages
def test_storage(storage):
    # Storage will be parametrized with all enabled `StorageProvider`s
    pass

@parametrize_all_caches
def test_caches(storage):
    # Storage will be parametrized with all common caches containing enabled `StorageProvider`s
    pass

@parametrize_all_storages_and_caches
def test_storages_and_caches(storage):
    # Storage will be parametrized with all enabled `StorageProvider`s and common caches containing enabled `StorageProvider`s
    pass

Dataset storage providers/cache chains:

from deeplake.core.tests.common import parametrize_all_dataset_storages, parametrize_all_dataset_storages_and_caches

@parametrize_all_dataset_storages
def test_dataset(ds):
    # `ds` will be parametrized with 1 `Dataset` object per enabled `StorageProvider`
    pass

@parametrize_all_dataset_storages_and_caches
def test_dataset(ds):
    # `ds` will be parametrized with 1 `Dataset` object per enabled `StorageProvider` and all cache chains containing enabled `StorageProvider`s
    pass
activeloop
Benchmarks
https://docs.activeloop.ai/technical-details/how-to-contribute#benchmarks
Deep Lake uses pytest-benchmark for benchmarking, which is a plugin for pytest.
activeloop
End of preview.

Downloads last month: 0