maom committed
Commit 2d7280d • 1 Parent(s): 6964b82

Update README.md

Files changed (1)
  1. README.md +19 -79
README.md CHANGED
@@ -246,6 +246,8 @@ them functionally on a per-residue basis.

## Quickstart Usage

### Install the HuggingFace Datasets package

Each subset can be loaded into Python using the HuggingFace [datasets](https://huggingface.co/docs/datasets/index) library.
First, from the command line, install the `datasets` library
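
For example, with the standard `pip` command:

$ pip install datasets
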
@@ -259,70 +261,54 @@ Optionally set the cache directory, e.g.
then, from within Python, load the datasets library

>>> import datasets

### Load model datasets

To load one of the `MIP` model datasets, use `datasets.load_dataset(...)`:

>>> dataset_tag = "rosetta_high_quality"
>>> dataset_models = datasets.load_dataset(
        path = "RosettaCommons/MIP",
        name = f"{dataset_tag}_models",
        data_dir = f"{dataset_tag}_models")['train']
Resolving data files: 100%|█████████████████████████████████████████| 54/54 [00:00<00:00, 441.70it/s]
Downloading data: 100%|███████████████████████████████████████████| 54/54 [01:34<00:00, 1.74s/files]
Generating train split: 100%|███████████████████████| 211069/211069 [01:41<00:00, 2085.54 examples/s]
Loading dataset shards: 100%|███████████████████████████████████████| 48/48 [00:00<00:00, 211.74it/s]

and the dataset is loaded as a `datasets.arrow_dataset.Dataset`

>>> dataset_models
Dataset({
    features: ['id', 'pdb', 'Filter_Stage2_aBefore', 'Filter_Stage2_bQuarter', 'Filter_Stage2_cHalf', 'Filter_Stage2_dEnd', 'clashes_bb', 'clashes_total', 'score', 'silent_score', 'time'],
    num_rows: 211069
})
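
Because per-model quality metrics such as `score`, `clashes_bb`, and `clashes_total` are stored as features, models can also be filtered before downstream use; a minimal sketch (the clash threshold here is illustrative, not a recommended cutoff):

>>> # keep only models below an illustrative clash threshold
>>> good_models = dataset_models.filter(lambda row: row['clashes_total'] < 100)
>>> good_models.num_rows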
The loaded `Dataset` is backed by a column-oriented Arrow table that can be accessed directly, converted into a `pandas.DataFrame`, or written to `parquet` format, e.g.

>>> dataset_models.data.column('pdb')
>>> dataset_models.to_pandas()
>>> dataset_models.to_parquet("dataset.parquet")
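
Many structure-based pipelines expect a `.pdb` file as input; assuming the `pdb` feature stores each model as PDB-formatted text (as the feature name suggests), a single model can be written to disk, e.g.

>>> import pathlib
>>> record = dataset_models[0]  # a single example as a dict
>>> pathlib.Path(f"{record['id']}.pdb").write_text(record['pdb'])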
### Load Function Predictions

Function predictions are generated using `DeepFRI` across the GO and EC ontologies.

>>> dataset_function_prediction = datasets.load_dataset(
        path = "RosettaCommons/MIP",
        name = f"{dataset_tag}_function_predictions",
        data_dir = f"{dataset_tag}_function_predictions")['train']
Downloading readme: 100%|████████████████████████████████████████| 15.4k/15.4k [00:00<00:00, 264kB/s]
Resolving data files: 100%|██████████████████████████████████████| 219/219 [00:00<00:00, 1375.51it/s]
Downloading data: 100%|█████████████████████████████████████████████| 219/219 [13:04<00:00, 3.58s/files]
Generating train split: 100%|████████████| 1332900735/1332900735 [13:11<00:00, 1684288.89 examples/s]
Loading dataset shards: 100%|██████████████████████████████████████| 219/219 [01:22<00:00, 2.66it/s]

this loads the >1.3B function predictions, covering all 211,069 targets and the GO and EC ontology terms.
The predictions are stored in long format, but can be easily converted to a wide format using pandas:

>>> import pandas
>>> dataset_function_prediction_wide = pandas.pivot(
        dataset_function_prediction.data.select(['id', 'term_id', 'Y_hat']).to_pandas(),
        columns = "term_id",
        index = "id",
        values = "Y_hat")
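
Note that `.to_pandas()` above materializes all >1.3B rows, so this step needs substantial memory. Once built, the wide matrix is indexed by target `id` with one column per `term_id`, making per-target queries straightforward; an illustrative sketch (term ranking only, with no significance thresholding):

>>> # rank all GO/EC terms for the first target by predicted score
>>> protein_id = dataset_function_prediction_wide.index[0]
>>> dataset_function_prediction_wide.loc[protein_id].sort_values(ascending=False).head(10)
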
@@ -380,28 +366,13 @@ protein structure databases such as the EBI AlphaFold database because it consists of
proteins from Archaea and Bacteria, whose protein sequences are generally shorter
than Eukaryotic ones.

  ### Out-of-Scope Use
390
  While this dataset has been curated for quality, in some cases the predicted structures
391
  may not represent physically realistic conformations. Thus caution much be used when using
392
  it as training data for protein structure prediction and design.
393
 
394
- ## Dataset Structure
395
- microbiome_immunity_project_dataset
396
- dataset
397
- dmpfold_high_quality_function_predictions
398
- DeepFRI_MIP_<chunk-index>_<gene-ontology-prefix>_pred_scores.json.gz
399
- dmpfold_high_quality_models
400
- MIP_<MIP-ID>.pdb.gz.pdb.gz
401
-
402
 
403
  ### Source Data
404
-
405
  Sequences were obtained from the Genomic Encyclopedia of Bacteria and Archaea
406
  ([GEBA1003](https://genome.jgi.doe.gov/portal/geba1003/geba1003.info.html)) reference
407
  genome database across the microbial tree of life:
@@ -419,35 +390,6 @@ genome database across the microbial tree of life:
> sequence space is still far from saturated, and future endeavors in this direction will continue to be a
> valuable resource for scientific discovery.

## Citation

@article{KoehlerLeman2023,
@@ -464,7 +406,5 @@ genome database across the microbial tree of life:
  month = apr
}

## Dataset Card Authors
Matthew O'Meara ([email protected])