## How to use the data sets

### Use the already preprocessed data

Load the dataset using

```
from datasets import load_dataset
dataset = load_dataset("jglaser/binding_affinity")
```

(A short inspection sketch follows at the end of this section.)

**Loading the data manually**

The file `data/all.parquet` contains the preprocessed data. To extract it, you need to download and install [git LFS support](https://git-lfs.github.com/). (A pandas sketch for reading the file directly follows at the end of this section.)

### Pre-process yourself

To manually perform the preprocessing, download the data sets from

1. BindingDB

   From the BindingDB website, download the database as tab-separated values (Download > BindingDB_All_2021m4.tsv.zip) and extract the zip archive into `bindingdb/data`.

   Run the steps in `bindingdb.ipynb`.

2. PDBBind-cn

   Register for an account on the PDBBind-cn website, confirm the validation email, then log in and download

   - the Index files (1)
   - the general protein-ligand complexes (2)
   - the refined protein-ligand complexes (3)

   Extract those files into `pdbbind/data`.

   Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster (e.g., `mpirun -n 64 pdbbind.py`); a generic MPI partitioning sketch follows at the end of this section.

   Perform the steps in the notebook `pdbbind.ipynb`.

3. Binding MOAD

   Go to the Binding MOAD website and download the files `every.csv` (All of Binding MOAD, Binding Data) and the non-redundant biounits (`nr_bind.zip`). Place and extract those files into `binding_moad`.

   Run the script `moad.py` in a compute job on an MPI-enabled cluster (e.g., `mpirun -n 64 moad.py`).

   Perform the steps in the notebook `moad.ipynb`.

4. BioLiP

   From the BioLiP download page, download the files

   - `receptor_nr1.tar.bz2` (Receptor 1, non-redundant set)
   - `ligand_nr.tar.bz2` (Ligands)
   - `BioLiP_nr.tar.bz2` (Annotations)

   and extract them into `biolip/data`.

   Run the script `biolip.py` in a compute job on an MPI-enabled cluster (e.g., `mpirun -n 64 biolip.py`).

   Perform the steps in the notebook `biolip.ipynb`.

5. Final concatenation and filtering

   Run the steps in the notebook `combine_dbs.ipynb`.
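
As a quick sanity check after `load_dataset`, you can inspect the splits, columns, and a sample record. This is a minimal sketch that assumes nothing about the column names beyond what `datasets` itself reports:

```
from datasets import load_dataset

dataset = load_dataset("jglaser/binding_affinity")

# load_dataset returns a DatasetDict; print it to see the available splits.
print(dataset)

# Column names of the first available split.
split = list(dataset.keys())[0]
print(dataset[split].column_names)

# Look at one record to see what each column holds.
print(dataset[split][0])
```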
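
If you have cloned the repository and pulled the LFS objects (`git lfs pull`), `data/all.parquet` can also be read directly with pandas. A sketch, assuming a parquet engine such as `pyarrow` is installed:

```
import pandas as pd

# Requires pyarrow or fastparquet; the file itself is fetched via Git LFS.
df = pd.read_parquet("data/all.parquet")

print(df.shape)
print(df.columns.tolist())
print(df.head())
```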
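
The per-database scripts (`pdbbind.py`, `moad.py`, `biolip.py`) are launched under MPI so that the many structure files can be processed in parallel. The sketch below only illustrates the generic rank-based work partitioning such a script can use; the glob pattern and the `process_structure` helper are hypothetical and do not reproduce the actual scripts' logic:

```
from mpi4py import MPI
import glob

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Hypothetical input list; the real scripts read the extracted data sets.
files = sorted(glob.glob("pdbbind/data/**/*.pdb", recursive=True))

def process_structure(path):
    # Placeholder for the per-structure work done by the real scripts.
    return path

# Each rank takes every size-th file (round-robin partitioning).
local_results = [process_structure(f) for f in files[rank::size]]

# Collect the partial results on rank 0.
results = comm.gather(local_results, root=0)
if rank == 0:
    flat = [r for part in results for r in part]
    print(f"processed {len(flat)} structures")
```

Run it the same way as the scripts above, e.g. `mpirun -n 64 python sketch.py`.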