jglaser committed
Commit 1b1c7d1
1 parent: 0b73ad0

tweak script

Files changed (3)
  1. README.md +8 -5
  2. binding_affinity.py +2 -1
  3. requirements.txt +2 -0
README.md CHANGED
@@ -2,13 +2,16 @@
 
 ### Use the already preprocessed data
 
-The file `data/all.parquet` contains the preprocessed data. Load the dataset using
+Load the dataset using
 
 ```
 from datasets import load_dataset
 dataset = load_dataset("jglaser/binding_affinity")
 ```
 
+The file `data/all.parquet` contains the preprocessed data. To extract it,
+you need to download and install [git LFS support](https://git-lfs.github.com/).
+
 ### Pre-process yourself
 
 To manually perform the preprocessing, download the data sets from
@@ -16,14 +19,14 @@ To manually perform the preprocessing, download the data sets from
 1. BindingDB
 
 In `bindingdb`, download the database as tab separated values
-[https://bindingdb.org] > Download > BindingDB_All_2021m4.tsv.zip
+<https://bindingdb.org> > Download > BindingDB_All_2021m4.tsv.zip
 and extract the zip archive into `bindingdb/data`
 
 Run the steps in `bindingdb.ipynb`
 
 2. PDBBind-cn
 
-Register for an account at [https://www.pdbbind.org.cn/], confirm the validation
+Register for an account at <https://www.pdbbind.org.cn/>, confirm the validation
 email, then login and download
 
 - the Index files (1)
@@ -39,7 +42,7 @@ Perform the steps in the notebook `pdbbind.ipynb`
 
 3. BindingMOAD
 
-Go to [https://bindingmoad.org] and download the files `every.csv`
+Go to <https://bindingmoad.org> and download the files `every.csv`
 (All of Binding MOAD, Binding Data) and the non-redundant biounits
 (`nr_bind.zip`). Place and extract those files into `binding_moad`.
 
@@ -50,7 +53,7 @@ Perform the steps in the notebook `moad.ipynb`
 
 4. BioLIP
 
-Download from [https://zhanglab.ccmb.med.umich.edu/BioLiP/] the files
+Download from <https://zhanglab.ccmb.med.umich.edu/BioLiP/> the files
 - receptor_nr1.tar.bz2 (Receptor1, Non-redundant set)
 - ligand_nr.tar.bz2 (Ligands)
 - BioLiP_nr.tar.bz2 (Annotations)
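Taken together, the updated quick-start amounts to the following. This is a minimal sketch; the `"train"` split name and the inspection calls are assumptions on top of the README snippet, not something the commit specifies.

```
from datasets import load_dataset

# Load the preprocessed parquet data directly from the Hub.
dataset = load_dataset("jglaser/binding_affinity")

# Inspect what came back (assumes a "train" split; not specified in the commit).
print(dataset)
print(dataset["train"][0])
```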
binding_affinity.py CHANGED
@@ -120,7 +120,8 @@ class BindingAffinity(datasets.ArrowBasedBuilder):
         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
         my_urls = _URLs[self.config.name]
-        data_dir = dl_manager.download_and_extract(my_urls)
+        files = dl_manager.download_and_extract(my_urls)
+        data_dir = os.path.dirname(files[0])+'/'
         return [
             datasets.SplitGenerator(
                 name=datasets.Split.TRAIN,
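The fix matters because `download_and_extract` mirrors the structure of its input, as the comments above note: a list of URLs comes back as a list of local paths, not a single directory. A toy illustration, with a made-up cache path standing in for whatever `dl_manager` actually returns:

```
import os

# Hypothetical return value of dl_manager.download_and_extract(my_urls)
# when my_urls is a list; the path below is invented for illustration.
files = ["/cache/extracted/abc123/all.parquet"]

# The committed fix recovers the shared directory from the first entry.
data_dir = os.path.dirname(files[0]) + '/'
print(data_dir)  # /cache/extracted/abc123/
```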
requirements.txt CHANGED
@@ -2,3 +2,5 @@ mpi4py
 rdkit
 openbabel
 pyarrow
+huggingface_hub
+datasets
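To confirm the two new entries resolve in an environment, a quick import check is enough; only the package names come from the diff.

```
# Sanity-check the newly added requirements.
import datasets
import huggingface_hub

print("datasets:", datasets.__version__)
print("huggingface_hub:", huggingface_hub.__version__)
```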