DeepChem
Bharath Ramsundar
Democratizing Deep Learning for Sciences
www.deepchem.io | www.deepforestsci.com
Bharath Ramsundar and the DeepChem Team

The DeepChem Book

The DeepChem Book is a step-by-step tutorial series for the deep life sciences. The author, Bharath Ramsundar, and the DeepChem team cover the essential tools and techniques for mastering deep learning in the life sciences. Tailored for beginners in both machine learning and the life sciences, the book builds a repertoire of tools required to perform meaningful work in this dynamic field. Going beyond machine learning, the tutorials cover the critical aspects of data handling necessary for constructing systems within the deep life sciences. Executed on Google Colab, these tutorials prioritize accessibility and convenience, providing an open avenue for exploration. -Bharath Ramsundar

"The DeepChem project aims to make high quality open source software for scientific machine learning more accessible to scientists and developers worldwide. We have a particular focus on molecular machine learning and drug discovery, but also support a broad range of applications in bioinformatics, materials science, and computational physics. I started DeepChem while doing my Ph.D. at Stanford, but today DeepChem operates as a global distributed community of researchers spread across many academic and industrial institutions. We hope that you will join our community and help us build!"

Deep Forest Publishing
The DeepChem Book
Democratizing Deep Learning for Drug Discovery, Quantum Chemistry, Materials Science and Biology
Bharath Ramsundar and the DeepChem Community
Acknowledgement
We acknowledge the DeepChem community for their contributions and support.
Contents

1. Introduction To DeepChem
   1. The Basic Tools of the Deep Life Sciences
   2. Working With Datasets
   3. An Introduction To MoleculeNet
   4. Molecular Fingerprints
   5. Creating Models with TensorFlow and PyTorch
   6. Introduction to Graph Convolutions
   7. Going Deeper on Molecular Featurizations
   8. Working With Splitters
   9. Advanced Model Training
   10. Creating a high fidelity model from experimental data
   11. Putting Multitask Learning to Work
   12. Modeling Protein Ligand Interactions
   13. Modeling Protein Ligand Interactions With Atomic Convolutions
   14. Conditional Generative Adversarial Networks
   15. Training a Generative Adversarial Network on MNIST
   16. Advanced model training using hyperopt
   17. Introduction to Gaussian Processes
   18. PyTorch Lightning Integration
2. Molecular Machine Learning
   1. Molecular Fingerprints
   2. Going Deeper on Molecular Featurizations
   3. Learning Unsupervised Embeddings for Molecules
   4. Atomic Contributions for Molecules
   5. Interactive Model Evaluation with Trident Chemwidgets
   6. Transfer Learning With ChemBERTa Transformers
   7. Training a Normalizing Flow on QM9
   8. Large Scale Chemical Screens
   9. Introduction to Molecular Attention Transformer
   10. Generating molecules with MolGAN
   11. Introduction to GROVER
3. Modeling Proteins
   1. Protein Deep Learning
4. Protein Ligand Modeling
   1. Modeling Protein Ligand Interactions
   2. Modeling Protein Ligand Interactions With Atomic Convolutions
   3. DeepChemXAlphafold
5. Quantum Chemistry
   1. Exploring Quantum Chemistry with GDB1k
   2. DeepQMC tutorial
   3. Training an Exchange Correlation Functional using DeepChem
6. Bioinformatics
   1. Introduction to Bioinformatics
   2. Multisequence Alignments
   3. Deep probabilistic analysis of single-cell omics data
7. Material Sciences
   1. Introduction To Material Science
8. Machine Learning Methods
   1. Using Reinforcement Learning to Play Pong
   2. Introduction to Model Interpretability
   3. Uncertainty In Deep Learning
9. Deep Differential Equations
   1. Physics Informed Neural Networks
   2. Introducing JaxModel and PINNModel
   3. About Neural ODE: Using Torchdiffeq with DeepChem
10. Equivariance
   1. Introduction to Equivariance
   2. Modeling Protein Ligand Interactions With Atomic Convolutions
   3. DeepChemXAlphafold
11. Olfaction
   1. Predict Multi Label Odor Descriptors using OpenPOM
The Basic Tools of the Deep Life Sciences

Welcome to DeepChem's introductory tutorial for the deep life sciences. This series of notebooks is a step-by-step guide for you to get to know the new tools and techniques needed to do deep learning for the life sciences. We'll start from the basics, assuming that you're new to machine learning and the life sciences, and build up a repertoire of tools and techniques that you can use to do meaningful work in the life sciences.

Scope: This tutorial will encompass both the machine learning and data handling needed to build systems for the deep life sciences.

Colab

This tutorial and the rest in the sequence are designed to be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link. Open in Colab

Why do the DeepChem Tutorial?

1) Career Advancement: Applying AI in the life sciences is a booming industry at present. There are a host of newly funded startups and initiatives at large pharmaceutical and biotech companies centered around AI. Learning and mastering DeepChem will bring you to the forefront of this field and will prepare you to enter a career in it.

2) Humanitarian Considerations: Disease is the oldest cause of human suffering. From the dawn of human civilization, humans have suffered from pathogens, cancers, and neurological conditions. One of the greatest achievements of the last few centuries has been the development of effective treatments for many diseases. By mastering the skills in this tutorial, you will be able to stand on the shoulders of the giants of the past to help develop new medicines.

3) Lowering the Cost of Medicine: The art of developing new medicine is currently an elite skill that can only be practiced by a small core of expert practitioners. By enabling the growth of open source tools for drug discovery, you can help democratize these skills and open up drug discovery to more competition. Increased competition can help drive down the cost of medicine.

Getting Extra Credit

If you're excited about DeepChem and want to get more involved, there are some things that you can do right now:

Star DeepChem on GitHub! - https://github.com/deepchem/deepchem
Join the DeepChem forums and introduce yourself! - https://forum.deepchem.io
Say hi on the DeepChem Gitter - https://gitter.im/deepchem/Lobby
Make a YouTube video teaching the contents of this notebook.

Prerequisites

This tutorial sequence will assume some basic familiarity with the Python data science ecosystem. We will assume that you have familiarity with libraries such as NumPy, Pandas, and TensorFlow. We'll provide some brief refreshers on the basics throughout the tutorial, so don't worry if you're not an expert.

Setup

The first step is to get DeepChem up and running. We recommend using Google Colab to work through this tutorial series. You'll also need to run the following command to get DeepChem installed on your Colab notebook. We are going to use a TensorFlow-based model, so we've added [tensorflow] to the pip install command to ensure the necessary dependencies are also installed.

!pip install --pre deepchem[tensorflow]

You can of course run this tutorial locally if you prefer. In that case, don't run the above cell since it would install DeepChem again on your local machine. In either case, we can now import the deepchem package to play with.

import deepchem as dc
dc.__version__
'2. 5. 0. dev' Training a Model with Deep Chem: A First Example Deep learning can be used to solve many sorts of problems, but the basic workflow is usually the same. Here are the typical steps you follow. 1. Select the data set you will train your model on (or create a new data set if there isn't an existing suitable one). 2. Create the model. 3. Train the model on the data. 4. Evaluate the model on an independent test set to see how well it works. 5. Use the model to make predictions about new data. With Deep Chem, each of these steps can be as little as one or two lines of Python code. In this tutorial we will walk through a basic example showing the complete workflow to solve a real world scientific problem. The problem we will solve is predicting the solubility of small molecules given their chemical formulas. This is a very important property in drug development: if a proposed drug isn't soluble enough, you probably won't be able to get enough into the patient's bloodstream to have a therapeutic effect. The first thing we need is a data set of measured solubilities for real molecules. One of the core components of Deep Chem is Molecule Net, a diverse collection of chemical and molecular data sets. For this tutorial, we can use the Delaney solubility data set. The property of solubility in this data set is reported in log(solubility) where solubility is measured in moles/liter. tasks, datasets, transformers = dc. molnet. load_delaney ( featurizer = 'Graph Conv' ) train_dataset, valid_dataset, test_dataset = datasets I won't say too much about this code right now. We will see many similar examples in later tutorials. There are two details I do want to draw your attention to. First, notice the featurizer argument passed to the load_delaney() function. Molecules can be represented in many ways. We therefore tell it which representation we want to use, or in more technical language, how to "featurize" the data. Second, notice that we actually get three different data sets: a training set, a validation set, and a test set. Each of these serves a different function in the standard deep learning workflow. Now that we have our data, the next step is to create a model. We will use a particular kind of model called a "graph convolutional network", or "graphconv" for short. model = dc. models. Graph Conv Model ( n_tasks = 1, mode = 'regression', dropout = 0. 2 ) Here again I will not say much about the code. Later tutorials will give lots more information about Graph Conv Model, as well as other types of models provided by Deep Chem. We now need to train the model on the data set. We simply give it the data set and tell it how many epochs of training to perform (that is, how many complete passes through the data to make). model. fit ( train_dataset, nb_epoch = 100 ) If everything has gone well, we should now have a fully trained model! But do we? To find out, we must evaluate the model on the test set. We do that by selecting an evaluation metric and calling evaluate() on the model. For this example, let's use the Pearson correlation, also known as r 2, as our metric. We can evaluate it on both the training set and test set. metric = dc. metrics. Metric ( dc. metrics. pearson_r2_score ) print ( "Training set score:", model. evaluate ( train_dataset, [ metric ], transformers )) print ( "Test set score:", model. evaluate ( test_dataset, [ metric ], transformers )) Training set score: {'pearson_r2_score': 0. 9323622956442351} Test set score: {'pearson_r2_score': 0. 
6898768897014962} Notice that it has a higher score on the training set than the test set. Models usually perform better on the particular data they were trained on than they do on similar but independent data. This is called "overfitting", and it is the reason it is essential to evaluate your model on an independent test set. Our model still has quite respectable performance on the test set. For comparison, a model that produced totally random outputs would have a correlation of 0, while one that made perfect predictions would have a correlation of 1. Our model does quite well, so now we can use it to make predictions about other molecules we care about. Since this is just a tutorial and we don't have any other molecules we specifically want to predict, let's just use the first ten molecules from the test set. For each one we print out the chemical structure (represented as a SMILES string) and the predicted log(solubility). To put these predictions in context, we print out the log(solubility) values from the test set | deepchem.pdf |
as well. solubilities = model. predict_on_batch ( test_dataset. X [: 10 ]) for molecule, solubility, test_solubility in zip ( test_dataset. ids, solubilities, test_dataset. y ): print ( solubility, test_solubility, molecule ) [-1. 8629359] [-1. 60114461] c1cc2ccc3cccc4ccc(c1)c2c34 [0. 6617248] [0. 20848251] Cc1cc(=O)[n H]c(=S)[n H]1 [-0. 5705674] [-0. 01602738] Oc1ccc(cc1)C2(OC(=O)c3ccccc23)c4ccc(O)cc4 [-2. 0929456] [-2. 82191713] c1ccc2c(c1)cc3ccc4cccc5ccc2c3c45 [-1. 4962314] [-0. 52891635] C1=Cc2cccc3cccc1c23 [1. 8620405] [1. 10168349] CC1CO1 [-0. 5858227] [-0. 88987406] CCN2c1ccccc1N(C)C(=S)c3cccnc23 [-0. 9799993] [-0. 52649706] CC12CCC3C(CCc4cc(O)ccc34)C2CCC1=O [-1. 0176951] [-0. 76358725] Cn2cc(c1ccccc1)c(=O)c(c2)c3cccc(c3)C(F)(F)F [0. 05622783] [-0. 64020358] Cl C(Cl)(Cl)C(NC=O)N1C=CN(C=C1)C(NC=O)C(Cl)(Cl)Cl Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Gitter The Deep Chem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! Citing This Tutorial If you found this tutorial useful please consider citing it using the provided Bib Te X. @manual { Intro1, title = { The Basic Tools of the Deep Life Sciences }, organization = { Deep Chem }, author = { Ramsundar, Bharath }, howpublished = { \ url { https : // github. com / deepchem / deepchem / blob / master / examples / tutorials / The_Basic_Tools_of_the_Deep_Life_Sciences year = { 2021 }, } | deepchem.pdf |
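If you later want predictions for molecules that aren't in MoleculeNet, you can featurize their SMILES strings yourself and pass the result to the trained model. The following is a minimal sketch, not part of the original tutorial: the SMILES strings are arbitrary examples, and it assumes the graph convolution model trained above is still in scope and that RDKit is available for parsing SMILES.

import numpy as np
import deepchem as dc

# The 'GraphConv' featurizer used by load_delaney corresponds to ConvMolFeaturizer
featurizer = dc.feat.ConvMolFeaturizer()
new_smiles = ["CCO", "c1ccccc1O"]  # hypothetical example molecules
features = featurizer.featurize(new_smiles)

# predict_on_batch accepts the featurized array, just like test_dataset.X above
predicted_solubility = model.predict_on_batch(np.array(features))
print(predicted_solubility)

Note that the outputs are in the same transformed units as the dataset labels, so they can be compared directly with the values reported above.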
Working With Datasets Data is central to machine learning. This tutorial introduces the Dataset class that Deep Chem uses to store and manage data. It provides simple but powerful tools for efficiently working with large amounts of data. It also is designed to easily interact with other popular Python frameworks such as Num Py, Pandas, Tensor Flow, and Py Torch. Colab This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. O p e n i n C o l a b O p e n i n C o l a b ! pip install --pre deepchem We can now import the deepchem package to play with. import deepchem as dc dc. __version__ '2. 4. 0-rc1. dev' Anatomy of a Dataset In the last tutorial we loaded the Delaney dataset of molecular solubilities. Let's load it again. tasks, datasets, transformers = dc. molnet. load_delaney ( featurizer = 'Graph Conv' ) train_dataset, valid_dataset, test_dataset = datasets We now have three Dataset objects: the training, validation, and test sets. What information does each of them contain? We can start to get an idea by printing out the string representation of one of them. print ( test_dataset ) <Disk Dataset X. shape: (113,), y. shape: (113, 1), w. shape: (113, 1), ids: ['C1c2ccccc2c3ccc4ccccc4c13' 'COc1ccccc 1Cl' 'COP(=S)(OC)Oc1cc(Cl)c(Br)cc1Cl' ... 'CCSCCSP(=S)(OC)OC' 'CCC(C)C' 'COP(=O)(OC)OC(=CCl)c1cc(Cl)c(Cl)cc1Cl'], task_names: ['measured log solubility in mols per litre']> There's a lot of information there, so let's start at the beginning. It begins with the label "Disk Dataset". Dataset is an abstract class. It has a few subclasses that correspond to different ways of storing data. Disk Dataset is a dataset that has been saved to disk. The data is stored in a way that can be efficiently accessed, even if the total amount of data is far larger than your computer's memory. Numpy Dataset is an in-memory dataset that holds all the data in Num Py arrays. It is a useful tool when manipulating small to medium sized datasets that can fit entirely in memory. Image Dataset is a more specialized class that stores some or all of the data in image files on disk. It is useful when working with models that have images as their inputs or outputs. Now let's consider the contents of the Dataset. Every Dataset stores a list of samples. Very roughly speaking, a sample is a single data point. In this case, each sample is a molecule. In other datasets a sample might correspond to an experimental assay, a cell line, an image, or many other things. For every sample the dataset stores the following information. The features, referred to as X. This is the input that should be fed into a model to represent the sample. The labels, referred to as y. This is the desired output from the model. During training, it tries to make the model's output for each sample as close as possible to y. The weights, referred to as w. This can be used to indicate that some data values are more important than others. In later tutorials we will see examples of how this is useful. An ID, which is a unique identifier for the sample. This can be anything as long as it is unique. Sometimes it is just an integer index, but in this dataset the ID is a SMILES string describing the molecule. Notice that X, y, and w all have 113 as the size of their first dimension. That means this dataset contains 113 samples. The final piece of information listed in the output is task_names. Some datasets contain multiple pieces of information for each sample. 
For example, if a sample represents a molecule, the dataset might record the results of several
different experiments on that molecule. This dataset has only a single task: "measured log solubility in mols per litre". Also notice that y and w each have shape (113, 1). The second dimension of these arrays usually matches the number of tasks. Accessing Data from a Dataset There are many ways to access the data contained in a dataset. The simplest is just to directly access the X, y, w, and ids properties. Each of these returns the corresponding information as a Num Py array. test_dataset. y array([[-1. 7065408738415053], [0. 2911162036252904], [-1. 4272475857596547], [-0. 9254664241210759], [-1. 9526976701170347], [1. 3514839414275706], [-0. 8591934405084332], [-0. 6509069205829855], [-0. 32900957160729316], [0. 6082797680572224], [1. 8295961803473488], [1. 6213096604219008], [1. 3751528641463715], [0. 45632528420252055], [1. 0532555151706793], [-1. 1053502367839627], [-0. 2011973889257683], [0. 3479216181504126], [-0. 9870056231899582], [-0. 8161160011602158], [0. 8402352107014712], [0. 22815686919328], [0. 06247441016167367], [1. 040947675356903], [-0. 5197810887208284], [0. 8023649343513898], [-0. 41895147793873655], [-2. 5964923680684198], [1. 7443880585596654], [0. 45206487811313645], [0. 233837410645792], [-1. 7917489956291888], [0. 7739622270888287], [1. 0011838851893173], [-0. 05445006806920272], [1. 1043803882432892], [0. 7597608734575482], [-0. 7001382798380905], [0. 8213000725264304], [-1. 3136367567094103], [0. 4567986626568967], [-0. 5732728540653187], [0. 4094608172192949], [-0. 3242757870635329], [-0. 049716283525442634], [-0. 39054877067617544], [-0. 08095926151425996], [-0. 2627365879946506], [-0. 5467636606202616], [1. 997172153196459], [-0. 03551492989416198], [1. 4508934168465344], [-0. 8639272250521937], [0. 23904457364392848], [0. 5278054308132993], [-0. 48475108309700315], [0. 2248432200126478], [0. 3431878336066523], [1. 5029650468278963], [-0. 4946920306388995], [0. 3479216181504126], [0. 7928973652638694], [0. 5609419226196206], [-0. 13965818985688602], [-0. 13965818985688602], [0. 15857023640000523], [1. 6071083067906202], [1. 9006029485037514], | deepchem.pdf |
[-0. 7171799041956278], [-0. 8165893796145915], [-0. 13019062076936566], [-0. 24380144981960986], [-0. 14912575894440638], [0. 9538460397517154], [-0. 07811899078800374], [-0. 18226225075072758], [0. 2532459272752089], [0. 6887541053011454], [0. 044012650441008896], [-0. 5514974451640217], [-0. 2580028034508905], [-0. 021313576262881533], [-2. 4128215277705247], [0. 07336211461232214], [0. 9017744097703536], [1. 9384732248538328], [0. 8402352107014712], [-0. 10652169805056463], [1. 07692443788948], [-0. 403803367398704], [1. 2662758196398873], [-0. 2532690189071302], [0. 29064282517091444], [0. 9443784706641951], [-0. 41563782875810434], [-0. 7370617992794205], [-1. 0012069768212388], [0. 46626623174441706], [0. 3758509469585975], [-0. 46628932337633816], [1. 2662758196398873], [-1. 4968342185529295], [-0. 17800184466134344], [0. 8828392715953128], [-0. 6083028596891439], [-2. 170451759130003], [0. 32898647997537184], [0. 3005837727128107], [0. 6461500444073038], [1. 5058053175541524], [-0. 007585601085977053], [-0. 049716283525442634], [-0. 6849901692980588]], dtype=object) This is a very easy way to access data, but you should be very careful about using it. This requires the data for all samples to be loaded into memory at once. That's fine for small datasets like this one, but for large datasets it could easily take more memory than you have. A better approach is to iterate over the dataset. That lets it load just a little data at a time, process it, then free the memory before loading the next bit. You can use the itersamples() method to iterate over samples one at a time. for X, y, w, id in test_dataset. itersamples (): print ( y, id ) [-1. 70654087] C1c2ccccc2c3ccc4ccccc4c13 [0. 2911162] COc1ccccc1Cl [-1. 42724759] COP(=S)(OC)Oc1cc(Cl)c(Br)cc1Cl [-0. 92546642] Cl C(Cl)CC(=O)NC2=C(Cl)C(=O)c1ccccc1C2=O [-1. 95269767] Cl C(Cl)C(c1ccc(Cl)cc1)c2ccc(Cl)cc2 [1. 35148394] COC(=O)C=C [-0. 85919344] CN(C)C(=O)Nc2ccc(Oc1ccc(Cl)cc1)cc2 [-0. 65090692] N(=Nc1ccccc1)c2ccccc2 [-0. 32900957] CC(C)c1ccc(C)cc1 [0. 60827977] Oc1c(Cl)cccc1Cl [1. 82959618] OCC2OC(OC1(CO)OC(CO)C(O)C1O)C(O)C(O)C2O [1. 62130966] OC1C(O)C(O)C(O)C(O)C1O [1. 37515286] Cn2c(=O)n(C)c1ncn(CC(O)CO)c1c2=O [0. 45632528] OCC(NC(=O)C(Cl)Cl)C(O)c1ccc(cc1)N(=O)=O [1. 05325552] CCC(O)(CC)CC [-1. 10535024] CC45CCC2C(CCC3CC1SC1CC23C)C4CCC5O [-0. 20119739] Brc1ccccc1Br [0. 34792162] Oc1c(Cl)cc(Cl)cc1Cl [-0. 98700562] CCCN(CCC)c1c(cc(cc1N(=O)=O)S(N)(=O)=O)N(=O)=O [-0. 816116] C2c1ccccc1N(CCF)C(=O)c3ccccc23 [0. 84023521] CC(C)C(=O)C(C)C [0. 22815687] O=C1NC(=O)NC(=O)C1(C(C)C)CC=C(C)C [0. 06247441] c1c(O)C2C(=O)C3cc(O)cc C3OC2cc1(OC) [1. 04094768] Cn1cnc2n(C)c(=O)n(C)c(=O)c12 [-0. 51978109] CC(=O)SC4CC1=CC(=O)CCC1(C)C5CCC2(C)C(CCC23CCC(=O)O3)C45 [0. 80236493] Cc1ccc(O)cc1C [-0. 41895148] O(c1ccccc1)c2ccccc2 | deepchem.pdf |
[-2. 59649237] Clc1cc(Cl)c(cc1Cl)c2cc(Cl)c(Cl)cc2Cl [1. 74438806] NC(=O)c1cccnc1 [0. 45206488] Sc1ccccc1 [0. 23383741] CNC(=O)Oc1cc(C)cc(C)c1 [-1. 791749] Cl C1CC2C(C1Cl)C3(Cl)C(=C(Cl)C2(Cl)C3(Cl)Cl)Cl [0. 77396223] CSSC [1. 00118389] NC(=O)c1ccccc1 [-0. 05445007] Clc1ccccc1Br [1. 10438039] COC(=O)c1ccccc1OC2OC(COC3OCC(O)C(O)C3O)C(O)C(O)C2O [0. 75976087] CCCCC(O)CC [-0. 70013828] CCN2c1nc(C)cc(C)c1NC(=O)c3cccnc23 [0. 82130007] Oc1cc(Cl)cc(Cl)c1 [-1. 31363676] Cc1cccc2c1ccc3ccccc32 [0. 45679866] CCCCC(CC)CO [-0. 57327285] CC(C)N(C(C)C)C(=O)SCC(=CCl)Cl [0. 40946082] Cc1ccccc1 [-0. 32427579] Clc1cccc(n1)C(Cl)(Cl)Cl [-0. 04971628] C1CCC=CCC1 [-0. 39054877] CN(C)C(=S)SSC(=S)N(C)C [-0. 08095926] COC1=CC(=O)CC(C)C13Oc2c(Cl)c(OC)cc(OC)c2C3=O [-0. 26273659] CCCCCCCCCCO [-0. 54676366] CCC(C)(C)CC [1. 99717215] CNC(=O)C(C)SCCSP(=O)(OC)(OC) [-0. 03551493] Oc1cc(Cl)c(Cl)c(Cl)c1Cl [1. 45089342] CCCC=O [-0. 86392723] CC4CC3C2CCC1=CC(=O)C=CC1(C)C2(F)C(O)CC3(C)C4(O)C(=O)COC(C)=O [0. 23904457] CCCC [0. 52780543] COc1ccccc1O [-0. 48475108] CC1CC2C3CCC(O)(C(=O)C)C3(C)CC(O)C2(F)C4(C)C=CC(=O)C=C14 [0. 22484322] Cl C(Cl)C(Cl)(Cl)Cl [0. 34318783] CCOC(=O)c1ccccc1C(=O)OCC [1. 50296505] CC(C)CO [-0. 49469203] CC(C)Cc1ccccc1 [0. 34792162] ICI [0. 79289737] CCCC(O)CCC [0. 56094192] CCCCCOC(=O)C [-0. 13965819] Oc1c(Cl)c(Cl)cc(Cl)c1Cl [-0. 13965819] CCCc1ccccc1 [0. 15857024] FC(F)(Cl)C(F)(F)Cl [1. 60710831] CC=CC=O [1. 90060295] CN(C)C(=O)N(C)C [-0. 7171799] Cc1cc(C)c(C)cc1C [-0. 81658938] CC(=O)OC3(CCC4C2CCC1=CC(=O)CCC1C2CCC34C)C#C [-0. 13019062] CCOP(=S)(OCC)N2C(=O)c1ccccc1C2=O [-0. 24380145] c1ccccc1NC(=O)c2c(O)cccc2 [-0. 14912576] CCN(CC)C(=S)SCC(Cl)=C [0. 95384604] Cl CC [-0. 07811899] CC(=O)Nc1cc(NS(=O)(=O)C(F)(F)F)c(C)cc1C [-0. 18226225] O=C(C=CC=Cc2ccc1OCOc1c2)N3CCCCC3 [0. 25324593] CC/C=C\C [0. 68875411] CNC(=O)ON=C(CSC)C(C)(C)C [0. 04401265] O=C2NC(=O)C1(CCCCCCC1)C(=O)N2 [-0. 55149745] c1(C(C)(C)C)cc(C(C)(C)C)cc(OC(=O)NC)c1 [-0. 2580028] Oc2cc(O)c1C(=O)CC(Oc1c2)c3ccc(O)c(O)c3 [-0. 02131358] O=C(c1ccccc1)c2ccccc2 [-2. 41282153] CCCCCCCCCCCCCCCCCCCC [0. 07336211] N(Nc1ccccc1)c2ccccc2 [0. 90177441] CCC(CC)CO [1. 93847322] Oc1ccncc1 [0. 84023521] Cl\C=C/Cl [-0. 1065217] CC1CCCC1 [1. 07692444] CC(C)CC(C)O [-0. 40380337] O2c1ccc(N)cc1N(C)C(=O)c3cc(C)ccc23 [1. 26627582] CC(C)(C)CO [-0. 25326902] CC(C)(C)C(=O)C(Oc1ccc(Cl)cc1)n2cncn2 [0. 29064283] Cc1cc(no1)C(=O)NNCc2ccccc2 [0. 94437847] CC=C [-0. 41563783] Oc1ccc(Cl)cc1Cc2cc(Cl)ccc2O [-0. 7370618] CCOC(=O)Nc2cccc(OC(=O)Nc1ccccc1)c2 [-1. 00120698] O=C1c2ccccc2C(=O)c3ccccc13 [0. 46626623] CCCCCCC(C)O [0. 37585095] CC1=C(C(=O)Nc2ccccc2)S(=O)(=O)CCO1 [-0. 46628932] CCCCc1ccccc1 [1. 26627582] O=C1NC(=O)C(=O)N1 [-1. 49683422] COP(=S)(OC)Oc1ccc(Sc2ccc(OP(=S)(OC)OC)cc2)cc1 [-0. 17800184] NS(=O)(=O)c1cc(ccc1Cl)C2(O)NC(=O)c3ccccc23 [0. 88283927] CC(C)COC(=O)C [-0. 60830286] CC(C)C(C)(C)C [-2. 17045176] Clc1ccc(c(Cl)c1Cl)c2c(Cl)cc(Cl)c(Cl)c2Cl [0. 32898648] N#Cc1ccccc1C#N [0. 30058377] Cc1cccc(c1)N(=O)=O [0. 64615004] FC(F)(F)C(Cl)Br [1. 50580532] CNC(=O)ON=C(SC)C(=O)N(C)C | deepchem.pdf |
[-0. 0075856] CCSCCSP(=S)(OC)OC [-0. 04971628] CCC(C)C [-0. 68499017] COP(=O)(OC)OC(=CCl)c1cc(Cl)c(Cl)cc1Cl Most deep learning models can process a batch of multiple samples all at once. You can use iterbatches() to iterate over batches of samples. for X, y, w, ids in test_dataset. iterbatches ( batch_size = 50 ): print ( y. shape ) (50, 1) (50, 1) (13, 1) iterbatches() has other features that are useful when training models. For example, iterbatches(batch_size=100, epochs=10, deterministic=False) will iterate over the complete dataset ten times, each time with the samples in a different random order. Datasets can also expose data using the standard interfaces for Tensor Flow and Py Torch. To get a tensorflow. data. Dataset, call make_tf_dataset(). To get a torch. utils. data. Iterable Dataset, call make_pytorch_dataset(). See the API documentation for more details. The final way of accessing data is to_dataframe(). This copies the data into a Pandas Data Frame. This requires storing all the data in memory at once, so you should only use it with small datasets. test_dataset. to_dataframe () X y w ids 0 <deepchem. feat. mol_graphs. Conv Mol object at 0x...-1. 706541 1. 0 C1c2ccccc2c3ccc4ccccc4c13 1 <deepchem. feat. mol_graphs. Conv Mol object at 0x... 0. 291116 1. 0 COc1ccccc1Cl 2 <deepchem. feat. mol_graphs. Conv Mol object at 0x...-1. 427248 1. 0 COP(=S)(OC)Oc1cc(Cl)c(Br)cc1Cl 3 <deepchem. feat. mol_graphs. Conv Mol object at 0x...-0. 925466 1. 0 Cl C(Cl)CC(=O)NC2=C(Cl)C(=O)c1ccccc1C2=O 4 <deepchem. feat. mol_graphs. Conv Mol object at 0x...-1. 952698 1. 0 Cl C(Cl)C(c1ccc(Cl)cc1)c2ccc(Cl)cc2............... 108 <deepchem. feat. mol_graphs. Conv Mol object at 0x... 0. 646150 1. 0 FC(F)(F)C(Cl)Br 109 <deepchem. feat. mol_graphs. Conv Mol object at 0x... 1. 505805 1. 0 CNC(=O)ON=C(SC)C(=O)N(C)C 110 <deepchem. feat. mol_graphs. Conv Mol object at 0x...-0. 007586 1. 0 CCSCCSP(=S)(OC)OC 111 <deepchem. feat. mol_graphs. Conv Mol object at 0x...-0. 049716 1. 0 CCC(C)C 112 <deepchem. feat. mol_graphs. Conv Mol object at 0x...-0. 684990 1. 0 COP(=O)(OC)OC(=CCl)c1cc(Cl)c(Cl)cc1Cl 113 rows Γ 4 columns Creating Datasets Now let's talk about how you can create your own datasets. Creating a Numpy Dataset is very simple: just pass the arrays containing the data to the constructor. Let's create some random arrays, then wrap them in a Numpy Dataset. import numpy as np X = np. random. random (( 10, 5 )) y = np. random. random (( 10, 2 )) dataset = dc. data. Numpy Dataset ( X = X, y = y ) print ( dataset ) <Numpy Dataset X. shape: (10, 5), y. shape: (10, 2), w. shape: (10, 1), ids: [0 1 2 3 4 5 6 7 8 9], task_names: [0 1 ]> Notice that we did not specify weights or IDs. These are optional, as is y for that matter. Only X is required. Since we left them out, it automatically built w and ids arrays for us, setting all weights to 1 and setting the IDs to integer indices. dataset. to_dataframe () | deepchem.pdf |
X1 X2 X3 X4 X5 y1 y2 w ids 0 0. 547330 0. 919941 0. 289138 0. 431806 0. 776672 0. 532579 0. 443258 1. 0 0 1 0. 980867 0. 642487 0. 460640 0. 500153 0. 014848 0. 678259 0. 274029 1. 0 1 2 0. 953254 0. 704446 0. 857458 0. 378372 0. 705789 0. 704786 0. 901080 1. 0 2 3 0. 904970 0. 729710 0. 304247 0. 861546 0. 917029 0. 121747 0. 758845 1. 0 3 4 0. 464144 0. 059168 0. 600405 0. 880529 0. 688043 0. 595495 0. 719861 1. 0 4 5 0. 820482 0. 139002 0. 627421 0. 129399 0. 920024 0. 634030 0. 464525 1. 0 5 6 0. 113727 0. 551801 0. 536189 0. 066091 0. 311320 0. 699331 0. 171532 1. 0 6 7 0. 516131 0. 918903 0. 429036 0. 844973 0. 639367 0. 464089 0. 337989 1. 0 7 8 0. 809393 0. 201450 0. 821420 0. 841390 0. 100026 0. 230462 0. 376151 1. 0 8 9 0. 076750 0. 389277 0. 350371 0. 291806 0. 127522 0. 544606 0. 306578 1. 0 9 What about creating a Disk Dataset? If you have the data in Num Py arrays, you can call Disk Dataset. from_numpy() to save it to disk. Since this is just a tutorial, we will save it to a temporary directory. import tempfile with tempfile. Temporary Directory () as data_dir : disk_dataset = dc. data. Disk Dataset. from_numpy ( X = X, y = y, data_dir = data_dir ) print ( disk_dataset ) <Disk Dataset X. shape: (10, 5), y. shape: (10, 2), w. shape: (10, 1), ids: [0 1 2 3 4 5 6 7 8 9], task_names: [0 1] > What about larger datasets that can't fit in memory? What if you have some huge files on disk containing data on hundreds of millions of molecules? The process for creating a Disk Dataset from them is slightly more involved. Fortunately, Deep Chem's Data Loader framework can automate most of the work for you. That is a larger subject, so we will return to it in a later tutorial. Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Gitter The Deep Chem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! | deepchem.pdf |
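To make the framework integration mentioned earlier in this tutorial concrete, here is a small sketch of consuming a DeepChem Dataset through the TensorFlow interface. This is an illustration rather than part of the original text; the keyword arguments shown are example values, and the exact element structure may differ slightly between versions (see the API documentation).

import tensorflow as tf  # only needed to consume the tf.data.Dataset

# make_tf_dataset() yields one (X, y, w) tuple of tensors per batch
tf_dataset = test_dataset.make_tf_dataset(batch_size=50, epochs=1)
for X, y, w in tf_dataset:
    print(y.shape)

make_pytorch_dataset() can be used in the same spirit to feed a torch.utils.data.DataLoader.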
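The DataLoader framework mentioned above is covered in a later tutorial, but as a quick preview, creating a dataset from a CSV file of SMILES strings and measurements can look roughly like the sketch below. The file name and column names are hypothetical, and the argument names reflect recent DeepChem releases (older versions used a smiles_field argument instead of feature_field).

featurizer = dc.feat.CircularFingerprint(size=1024)
loader = dc.data.CSVLoader(tasks=["my_measurement"],    # hypothetical label column
                           feature_field="smiles",      # column holding SMILES strings
                           featurizer=featurizer)
dataset = loader.create_dataset("my_molecules.csv")      # hypothetical input file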
An Introduction To MoleculeNet

By Bharath Ramsundar | Twitter

One of the most powerful features of DeepChem is that it comes "batteries included" with datasets to use. The DeepChem developer community maintains the MoleculeNet [1] suite, which contains a large collection of different scientific datasets for use in machine learning applications. The original MoleculeNet suite had 17 datasets mostly focused on molecular properties. Over the last several years, MoleculeNet has evolved into a broader collection of scientific datasets to facilitate the broad use and development of scientific machine learning tools.

These datasets are integrated with the rest of the DeepChem suite so you can conveniently access them through functions in the dc.molnet submodule. You've already seen a few examples of these loaders as you've worked through the tutorial series. The full documentation for the MoleculeNet suite is available in our docs [2].

[1] Wu, Zhenqin, et al. "MoleculeNet: a benchmark for molecular machine learning." Chemical Science 9.2 (2018): 513-530.

[2] https://deepchem.readthedocs.io/en/latest/moleculenet.html

Colab

This tutorial and the rest in this sequence can be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link. Open in Colab

Setup

To run DeepChem within Colab, you'll need to run the following installation commands. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install DeepChem again on your local machine.

!pip install --pre deepchem

We can now import the deepchem package to play with.

import deepchem as dc
dc.__version__

'2.4.0-rc1.dev'

MoleculeNet Overview

In the last two tutorials we loaded the Delaney dataset of molecular solubilities. Let's load it one more time.

tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='GraphConv', splitter='random')

Notice that the loader function we invoke, dc.molnet.load_delaney, lives in the dc.molnet submodule of MoleculeNet loaders. Let's take a look at the full collection of loaders available for us.

[method for method in dir(dc.molnet) if "load_" in method]
['load_bace_classification', 'load_bace_regression', 'load_bandgap', 'load_bbbc001', 'load_bbbc002', 'load_bbbp', 'load_cell_counting', 'load_chembl', 'load_chembl25', 'load_clearance', 'load_clintox', 'load_delaney', 'load_factors', 'load_function', 'load_hiv', 'load_hopv', 'load_hppb', 'load_kaggle', 'load_kinase', 'load_lipo', 'load_mp_formation_energy', 'load_mp_metallicity', 'load_muv', 'load_nci', 'load_pcba', 'load_pcba_146', 'load_pcba_2475', 'load_pdbbind', 'load_pdbbind_from_dir', 'load_pdbbind_grid', 'load_perovskite', 'load_ppb', 'load_qm7', 'load_qm7_from_mat', 'load_qm7b_from_mat', 'load_qm8', 'load_qm9', 'load_sampl', 'load_sider', 'load_sweet', 'load_thermosol', 'load_tox21', 'load_toxcast', 'load_uspto', 'load_uv', 'load_zinc15'] The set of Molecule Net loaders is actively maintained by the Deep Chem community and we work on adding new datasets to the collection. Let's see how many datasets there are in Molecule Net today len ([ method for method in dir ( dc. molnet ) if "load_" in method ]) 46 Molecule Net Dataset Categories There's a lot of different datasets in Molecule Net. Let's do a quick overview of the different types of datasets available. We'll break datasets into different categories and list loaders which belong to those categories. More details on each of these datasets can be found at https://deepchem. readthedocs. io/en/latest/moleculenet. html. The original Molecule Net paper [1] provides details about a subset of these papers. We've marked these datasets as "V1" below. All remaining dataset are "V2" and not documented in the older paper. Quantum Mechanical Datasets Molecule Net's quantum mechanical datasets contain various quantum mechanical property prediction tasks. The current set of quantum mechanical datasets includes QM7, QM7b, QM8, QM9. The associated loaders are dc. molnet. load_qm7 : V1 dc. molnet. load_qm7b_from_mat : V1 dc. molnet. load_qm8 : V1 dc. molnet. load_qm9 : V1 Physical Chemistry Datasets The physical chemistry dataset collection contain a variety of tasks for predicting various physical properties of | deepchem.pdf |
molecules.

dc.molnet.load_delaney: V1. This dataset is also referred to as ESOL in the original paper.
dc.molnet.load_sampl: V1. This dataset is also referred to as FreeSolv in the original paper.
dc.molnet.load_lipo: V1. This dataset is also referred to as Lipophilicity in the original paper.
dc.molnet.load_thermosol: V2.
dc.molnet.load_hppb: V2.
dc.molnet.load_hopv: V2. This dataset is drawn from a recent publication [3].

Chemical Reaction Datasets

These datasets hold chemical reaction data for use in computational retrosynthesis / forward synthesis.

dc.molnet.load_uspto

Biochemical/Biophysical Datasets

These datasets are drawn from various biochemical/biophysical assays that measure things like the binding affinity of compounds to proteins.

dc.molnet.load_pcba: V1
dc.molnet.load_nci: V2.
dc.molnet.load_muv: V1
dc.molnet.load_hiv: V1
dc.molnet.load_ppb: V2.
dc.molnet.load_bace_classification: V1. This loader loads the classification task for the BACE dataset from the original MoleculeNet paper.
dc.molnet.load_bace_regression: V1. This loader loads the regression task for the BACE dataset from the original MoleculeNet paper.
dc.molnet.load_kaggle: V2. This dataset is from Merck's drug discovery Kaggle contest and is described in [4].
dc.molnet.load_factors: V2. This dataset is from [4].
dc.molnet.load_uv: V2. This dataset is from [4].
dc.molnet.load_kinase: V2. This dataset is from [4].

Molecular Catalog Datasets

These datasets provide collections of molecules which have no associated properties beyond the raw SMILES formula or structure. These types of datasets are useful for generative modeling tasks.

dc.molnet.load_zinc15: V2
dc.molnet.load_chembl: V2
dc.molnet.load_chembl25: V2

Physiology Datasets

These datasets measure physiological properties of how molecules interact with human patients.

dc.molnet.load_bbbp: V1
dc.molnet.load_tox21: V1
dc.molnet.load_toxcast: V1
dc.molnet.load_sider: V1
dc.molnet.load_clintox: V1
dc.molnet.load_clearance: V2.

Structural Biology Datasets

These datasets contain 3D structures of macromolecules along with associated properties.

dc.molnet.load_pdbbind: V1

Microscopy Datasets

These datasets contain microscopy image data, typically of cell lines. These datasets were not in the original MoleculeNet paper.
dc. molnet. load_bbbc001 : V2 dc. molnet. load_bbbc002 : V2 dc. molnet. load_cell_counting : V2 Materials Properties Datasets These datasets compute properties of various materials. dc. molnet. load_bandgap : V2 dc. molnet. load_perovskite : V2 dc. molnet. load_mp_formation_energy : V2 dc. molnet. load_mp_metallicity : V2 [3] Lopez, Steven A., et al. "The Harvard organic photovoltaic dataset. " Scientific data 3. 1 (2016): 1-7. [4] Ramsundar, Bharath, et al. "Is multitask deep learning practical for pharma?. " Journal of chemical information and modeling 57. 8 (2017): 2068-2076. Molecule Net Loaders Explained All Molecule Net loader functions take the form dc. molnet. load_X. Loader functions return a tuple of arguments (tasks, datasets, transformers). Let's walk through each of these return values and explain what we get: 1. tasks : This is a list of task-names. Many datasets in Molecule Net are "multitask". That is, a given datapoint has multiple labels associated with it. These correspond to different measurements or values associated with this datapoint. 2. datasets : This field is a tuple of three dc. data. Dataset objects (train, valid, test). These correspond to the training, validation, and test set for this Molecule Net dataset. 3. transformers : This field is a list of dc. trans. Transformer objects which were applied to this dataset during processing. This is abstract so let's take a look at each of these fields for the dc. molnet. load_delaney function we invoked above. Let's start with tasks. tasks ['measured log solubility in mols per litre'] We have one task in this dataset which corresponds to the measured log solubility in mol/L. Let's now take a look at datasets : datasets (<Disk Dataset X. shape: (902,), y. shape: (902, 1), w. shape: (902, 1), ids: ['CCC(C)Cl' 'O=C1NC(=O)NC(=O)C1(C(C)C )CC=C' 'Oc1ccccn1' ... 'CCCCCCCC(=O)OCC' 'O=Cc1ccccc1' 'CCCC=C(CC)C=O'], task_names: ['measured log solubility in mols per litre']>, <Disk Dataset X. shape: (113,), y. shape: (113, 1), w. shape: (113, 1), ids: ['CSc1nc(nc(n1)N(C)C)N(C)C' 'CC#N' 'C CCCCCCC#C' ... 'Cl CCBr' 'CCN(CC)C(=O)CSc1ccc(Cl)nn1' 'CC(=O)OC3CCC4C2CCC1=CC(=O)CCC1(C)C2CCC34C '], task_names: ['measured log solubi lity in mols per litre']>, <Disk Dataset X. shape: (113,), y. shape: (113, 1), w. shape: (113, 1), ids: ['CCCCc1c(C)nc(nc1O)N(C)C ' 'Cc3cc2nc1c(=O)[n H]c(=O)nc1n(CC(O)C(O)C(O)CO)c2cc3C' 'CSc1nc(NC(C)C)nc(NC(C)C)n1' ... 'O=c1[n H]cnc2[n H]ncc12 ' 'CC(=C)C1CC=C(C)C(=O)C1' 'OC(C(=O)c1ccccc1)c2ccccc2'], task_names: ['measured log solubility in mols per litr e']>) As we mentioned previously, we see that datasets is a tuple of 3 datasets. Let's split them out. train, valid, test = datasets train <Disk Dataset X. shape: (902,), y. shape: (902, 1), w. shape: (902, 1), ids: ['CCC(C)Cl' 'O=C1NC(=O)NC(=O)C1(C(C)C) CC=C' 'Oc1ccccn1' ... 'CCCCCCCC(=O)OCC' 'O=Cc1ccccc1' 'CCCC=C(CC)C=O'], task_names: ['measured log solubility in mols per litre']> valid <Disk Dataset X. shape: (113,), y. shape: (113, 1), w. shape: (113, 1), ids: ['CSc1nc(nc(n1)N(C)C)N(C)C' 'CC#N' 'CC CCCCCC#C' ... 'Cl CCBr' 'CCN(CC)C(=O)CSc1ccc(Cl)nn1' 'CC(=O)OC3CCC4C2CCC1=CC(=O)CCC1(C)C2CCC34C '], task_names: ['measured log solubil ity in mols per litre']> | deepchem.pdf |
test <Disk Dataset X. shape: (113,), y. shape: (113, 1), w. shape: (113, 1), ids: ['CCCCc1c(C)nc(nc1O)N(C)C ' 'Cc3cc2nc1c(=O)[n H]c(=O)nc1n(CC(O)C(O)C(O)CO)c2cc3C' 'CSc1nc(NC(C)C)nc(NC(C)C)n1' ... 'O=c1[n H]cnc2[n H]ncc12 ' 'CC(=C)C1CC=C(C)C(=O)C1' 'OC(C(=O)c1ccccc1)c2ccccc2'], task_names: ['measured log solubility in mols per litre ']> Let's peek into one of the datapoints in the train dataset. train. X [ 0 ] <deepchem. feat. mol_graphs. Conv Mol at 0x7fe1ef601438> Note that this is a dc. feat. mol_graphs. Conv Mol object produced by dc. feat. Conv Mol Featurizer. We'll say more about how to control choice of featurization shortly. Finally let's take a look at the transformers field: transformers [<deepchem. trans. transformers. Normalization Transformer at 0x7fe2029bdfd0>] So we see that one transformer was applied, the dc. trans. Normalization Transformer. After reading through this description so far, you may be wondering what choices are made under the hood. As we've briefly mentioned previously, datasets can be processed with different choices of "featurizers". Can we control the choice of featurization here? In addition, how was the source dataset split into train/valid/test as three different datasets? You can use the 'featurizer' and 'splitter' keyword arguments and pass in different strings. Common possible choices for 'featurizer' are 'ECFP', 'Graph Conv', 'Weave' and 'smiles2img' corresponding to the dc. feat. Circular Fingerprint, dc. feat. Conv Mol Featurizer, dc. feat. Weave Featurizer and dc. feat. Smiles To Image featurizers. Common possible choices for 'splitter' are None, 'index', 'random', 'scaffold' and 'stratified' corresponding to no split, dc. splits. Index Splitter, dc. splits. Random Splitter, dc. splits. Singletask Stratified Splitter. We haven't talked much about splitters yet, but intuitively they're a way to partition a dataset based on different criteria. We'll say more in a future tutorial. Instead of a string, you also can pass in any Featurizer or Splitter object. This is very useful when, for example, a Featurizer has constructor arguments you can use to customize its behavior. tasks, datasets, transformers = dc. molnet. load_delaney ( featurizer = "ECFP", splitter = "scaffold" ) ( train, valid, test ) = datasets train <Disk Dataset X. shape: (902, 1024), y. shape: (902, 1), w. shape: (902, 1), ids: ['CC(C)=CCCC(C)=CC(=O)' 'CCCC=C' 'CCCCCCCCCCCCCC' ... 'Nc2cccc3nc1ccccc1cc23 ' 'C1CCCCCC1' 'OC1CCCCCC1'], task_names: ['measured log solubility in mols per litre']> train. X [ 0 ] array([0., 0., 0., ..., 0., 0., 0. ]) Note that unlike the earlier invocation we have numpy arrays produced by dc. feat. Circular Fingerprint instead of Conv Mol objects produced by dc. feat. Conv Mol Featurizer. Give it a try for yourself. Try invoking Molecule Net to load some other datasets and experiment with different featurizer/split options and see what happens! Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Discord | deepchem.pdf |
The DeepChem Discord hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
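As a concrete illustration of passing a Featurizer object instead of a string, as described earlier in this tutorial, the Delaney loader could be called roughly as follows. This is a sketch rather than part of the original text; the constructor arguments are just examples of options you might customize.

# Use a larger circular fingerprint and a scaffold split
featurizer = dc.feat.CircularFingerprint(size=2048)
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer=featurizer, splitter='scaffold')
train, valid, test = datasets
print(train.X.shape)   # expect (n_train, 2048) with this featurizer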
Molecular Fingerprints

Molecules can be represented in many ways. This tutorial introduces a type of representation called a "molecular fingerprint". It is a very simple representation that often works well for small drug-like molecules.

Colab

This tutorial and the rest in this sequence can be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link. Open in Colab

!pip install --pre deepchem

We can now import the deepchem package to play with.

import deepchem as dc
dc.__version__

'2.4.0-rc1.dev'

What is a Fingerprint?

Deep learning models almost always take arrays of numbers as their inputs. If we want to process molecules with them, we somehow need to represent each molecule as one or more arrays of numbers.

Many (but not all) types of models require their inputs to have a fixed size. This can be a challenge for molecules, since different molecules have different numbers of atoms. If we want to use these types of models, we somehow need to represent variable sized molecules with fixed sized arrays.

Fingerprints are designed to address these problems. A fingerprint is a fixed length array, where different elements indicate the presence of different features in the molecule. If two molecules have similar fingerprints, that indicates they contain many of the same features, and therefore will likely have similar chemistry.

DeepChem supports a particular type of fingerprint called an "Extended Connectivity Fingerprint", or "ECFP" for short. They also are sometimes called "circular fingerprints". The ECFP algorithm begins by classifying atoms based only on their direct properties and bonds. Each unique pattern is a feature. For example, "carbon atom bonded to two hydrogens and two heavy atoms" would be a feature, and a particular element of the fingerprint is set to 1 for any molecule that contains that feature. It then iteratively identifies new features by looking at larger circular neighborhoods. One specific feature bonded to two other specific features becomes a higher level feature, and the corresponding element is set for any molecule that contains it. This continues for a fixed number of iterations, most often two.

Let's take a look at a dataset that has been featurized with ECFP.

tasks, datasets, transformers = dc.molnet.load_tox21(featurizer='ECFP')
train_dataset, valid_dataset, test_dataset = datasets
print(train_dataset)

<DiskDataset X.shape: (6264, 1024), y.shape: (6264, 12), w.shape: (6264, 12), task_names: ['NR-AR' 'NR-AR-LBD' 'NR-AhR' ... 'SR-HSE' 'SR-MMP' 'SR-p53']>

The feature array X has shape (6264, 1024). That means there are 6264 samples in the training set. Each one is represented by a fingerprint of length 1024. Also notice that the label array y has shape (6264, 12): this is a multitask dataset. Tox21 contains information about the toxicity of molecules. 12 different assays were used to look for signs of toxicity. The dataset records the results of all 12 assays, each as a different task.

Let's also take a look at the weights array.

train_dataset.w
array([[1. 0433141624730409, 1. 0369942196531792, 8. 53921568627451, ..., 1. 060388945752303, 1. 1895710249165168, 1. 0700990099009902], [1. 0433141624730409, 1. 0369942196531792, 1. 1326397919375812, ..., 0. 0, 1. 1895710249165168, 1. 0700990099009902], [0. 0, 0. 0, 0. 0, ..., 1. 060388945752303, 0. 0, 0. 0], ..., [0. 0, 0. 0, 0. 0, ..., 0. 0, 0. 0, 0. 0], [1. 0433141624730409, 1. 0369942196531792, 8. 53921568627451, ..., 1. 060388945752303, 0. 0, 0. 0], [1. 0433141624730409, 1. 0369942196531792, 1. 1326397919375812, ..., 1. 060388945752303, 1. 1895710249165168, 1. 0700990099009902]], dtype=object) Notice that some elements are 0. The weights are being used to indicate missing data. Not all assays were actually performed on every molecule. Setting the weight for a sample or sample/task pair to 0 causes it to be ignored during fitting and evaluation. It will have no effect on the loss function or other metrics. Most of the other weights are close to 1, but not exactly 1. This is done to balance the overall weight of positive and negative samples on each task. When training the model, we want each of the 12 tasks to contribute equally, and on each task we want to put equal weight on positive and negative samples. Otherwise, the model might just learn that most of the training samples are non-toxic, and therefore become biased toward identifying other molecules as non-toxic. Training a Model on Fingerprints Let's train a model. In earlier tutorials we use Graph Conv Model, which is a fairly complicated architecture that takes a complex set of inputs. Because fingerprints are so simple, just a single fixed length array, we can use a much simpler type of model. model = dc. models. Multitask Classifier ( n_tasks = 12, n_features = 1024, layer_sizes = [ 1000 ]) Multitask Classifier is a simple stack of fully connected layers. In this example we tell it to use a single hidden layer of width 1000. We also tell it that each input will have 1024 features, and that it should produce predictions for 12 different tasks. Why not train a separate model for each task? We could do that, but it turns out that training a single model for multiple tasks often works better. We will see an example of that in a later tutorial. Let's train and evaluate the model. import numpy as np model. fit ( train_dataset, nb_epoch = 10 ) metric = dc. metrics. Metric ( dc. metrics. roc_auc_score ) print ( 'training set score:', model. evaluate ( train_dataset, [ metric ], transformers )) print ( 'test set score:', model. evaluate ( test_dataset, [ metric ], transformers )) training set score: {'roc_auc_score': 0. 9550063590563469} test set score: {'roc_auc_score': 0. 7781819573695475} Not bad performance for such a simple model and featurization. More sophisticated models do slightly better on this dataset, but not enormously better. Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Gitter The Deep Chem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! | deepchem.pdf |
Citing This Tutorial

If you found this tutorial useful please consider citing it using the provided BibTeX.

@manual{Intro4,
 title={Molecular Fingerprints},
 organization={DeepChem},
 author={Ramsundar, Bharath},
 howpublished={\url{https://github.com/deepchem/deepchem/blob/master/examples/tutorials/Molecular_Fingerprints.ipynb}},
 year={2021},
}
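To tie the ECFP discussion above back to code, you can also compute circular fingerprints directly for your own molecules, outside of MoleculeNet. The following is a minimal sketch, not part of the original tutorial: the SMILES strings are arbitrary examples, and radius and size are the usual ECFP parameters (RDKit must be available to parse the SMILES).

import deepchem as dc

featurizer = dc.feat.CircularFingerprint(radius=2, size=1024)
fingerprints = featurizer.featurize(["CCO", "CC(=O)Oc1ccccc1C(=O)O"])
print(fingerprints.shape)   # (2, 1024): one fixed-length fingerprint per molecule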
Creating Models with Tensor Flow and Py Torch In the tutorials so far, we have used standard models provided by Deep Chem. This is fine for many applications, but sooner or later you will want to create an entirely new model with an architecture you define yourself. Deep Chem provides integration with both Tensor Flow (Keras) and Py Torch, so you can use it with models from either of these frameworks. Colab This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. O p e n i n C o l a b O p e n i n C o l a b ! pip install --pre deepchem There are actually two different approaches you can take to using Tensor Flow or Py Torch models with Deep Chem. It depends on whether you want to use Tensor Flow/Py Torch APIs or Deep Chem APIs for training and evaluating your model. For the former case, Deep Chem's Dataset class has methods for easily adapting it to use with other frameworks. make_tf_dataset() returns a tensorflow. data. Dataset object that iterates over the data. make_pytorch_dataset() returns a torch. utils. data. Iterable Dataset that iterates over the data. This lets you use Deep Chem's datasets, loaders, featurizers, transformers, splitters, etc. and easily integrate them into your existing Tensor Flow or Py Torch code. But Deep Chem also provides many other useful features. The other approach, which lets you use those features, is to wrap your model in a Deep Chem Model object. Let's look at how to do that. Keras Model Keras Model is a subclass of Deep Chem's Model class. It acts as a wrapper around a tensorflow. keras. Model. Let's see an example of using it. For this example, we create a simple sequential model consisting of two dense layers. import deepchem as dc import tensorflow as tf keras_model = tf. keras. Sequential ([ tf. keras. layers. Dense ( 1000, activation = 'relu' ), tf. keras. layers. Dropout ( rate = 0. 5 ), tf. keras. layers. Dense ( 1 ) ]) model = dc. models. Keras Model ( keras_model, dc. models. losses. L2Loss ()) For this example, we used the Keras Sequential class. Our model consists of a dense layer with Re LU activation, 50% dropout to provide regularization, and a final layer that produces a scalar output. We also need to specify the loss function to use when training the model, in this case L 2 loss. We can now train and evaluate the model exactly as we would with any other Deep Chem model. For example, let's load the Delaney solubility dataset. How does our model do at predicting the solubilities of molecules based on their extended-connectivity fingerprints (ECFPs)? tasks, datasets, transformers = dc. molnet. load_delaney ( featurizer = 'ECFP', splitter = 'random' ) train_dataset, valid_dataset, test_dataset = datasets model. fit ( train_dataset, nb_epoch = 50 ) metric = dc. metrics. Metric ( dc. metrics. pearson_r2_score ) print ( 'training set score:', model. evaluate ( train_dataset, [ metric ])) print ( 'test set score:', model. evaluate ( test_dataset, [ metric ])) training set score: {'pearson_r2_score': 0. 9766804253639305} test set score: {'pearson_r2_score': 0. 7048814451615332} Torch Model Torch Model works just like Keras Model, except it wraps a torch. nn. Module. Let's use Py Torch to create another model just like the previous one and train it on the same data. import torch pytorch_model = torch. nn. Sequential ( torch. nn. Linear ( 1024, 1000 ), torch. nn. Re LU (), | deepchem.pdf |
torch. nn. Dropout ( 0. 5 ), torch. nn. Linear ( 1000, 1 ) ) model = dc. models. Torch Model ( pytorch_model, dc. models. losses. L2Loss ()) model. fit ( train_dataset, nb_epoch = 50 ) print ( 'training set score:', model. evaluate ( train_dataset, [ metric ])) print ( 'test set score:', model. evaluate ( test_dataset, [ metric ])) training set score: {'pearson_r2_score': 0. 9760781898204121} test set score: {'pearson_r2_score': 0. 6981331812360332} Computing Losses Now let's see a more advanced example. In the above models, the loss was computed directly from the model's output. Often that is fine, but not always. Consider a classification model that outputs a probability distribution. While it is possible to compute the loss from the probabilities, it is more numerically stable to compute it from the logits. To do this, we create a model that returns multiple outputs, both probabilities and logits. Keras Model and Torch Model let you specify a list of "output types". If a particular output has type 'prediction', that means it is a normal output that should be returned when you call predict(). If it has type 'loss', that means it should be passed to the loss function in place of the normal outputs. Sequential models do not allow multiple outputs, so instead we use a subclassing style model. class Classification Model ( tf. keras. Model ): def __init__ ( self ): super ( Classification Model, self ). __init__ () self. dense1 = tf. keras. layers. Dense ( 1000, activation = 'relu' ) self. dense2 = tf. keras. layers. Dense ( 1 ) def call ( self, inputs, training = False ): y = self. dense1 ( inputs ) if training : y = tf. nn. dropout ( y, 0. 5 ) logits = self. dense2 ( y ) output = tf. nn. sigmoid ( logits ) return output, logits keras_model = Classification Model () output_types = [ 'prediction', 'loss' ] model = dc. models. Keras Model ( keras_model, dc. models. losses. Sigmoid Cross Entropy (), output_types = output_types ) We can train our model on the BACE dataset. This is a binary classification task that tries to predict whether a molecule will inhibit the enzyme BACE-1. tasks, datasets, transformers = dc. molnet. load_bace_classification ( feturizer = 'ECFP', splitter = 'scaffold' ) train_dataset, valid_dataset, test_dataset = datasets model. fit ( train_dataset, nb_epoch = 100 ) metric = dc. metrics. Metric ( dc. metrics. roc_auc_score ) print ( 'training set score:', model. evaluate ( train_dataset, [ metric ])) print ( 'test set score:', model. evaluate ( test_dataset, [ metric ])) training set score: {'roc_auc_score': 0. 9995809177900399} test set score: {'roc_auc_score': 0. 7629528985507246} Similarly, we will create a custom Classifier Model class to be used with Torch Model. Using similar reasoning to the above Keras Model, a custom model allows for easy capturing of the unscaled output (logits in Tensorflow) of the second dense layer. The custom class allows definition of how forward pass is done; enabling capture of the logits right before the final sigmoid is applied to produce the prediction. Finally, an instance of Classification Model is coupled with a loss function that requires both the prediction and logits to produce an instance of Torch Model to train. class Classification Model ( torch. nn. Module ): def __init__ ( self ): super ( Classification Model, self ). __init__ () self. dense1 = torch. nn. Linear ( 1024, 1000 ) self. dense2 = torch. nn. Linear ( 1000, 1 ) def forward ( self, inputs ): y = torch. nn. functional. relu ( self. dense1 ( inputs ) ) y = torch. nn. 
functional. dropout ( y, p = 0. 5, training = self. training ) logits = self. dense2 ( y ) output = torch. sigmoid ( logits ) return output, logits | deepchem.pdf |
torch_model = Classification Model () output_types = [ 'prediction', 'loss' ] model = dc. models. Torch Model ( torch_model, dc. models. losses. Sigmoid Cross Entropy (), output_types = output_types ) We will use the same BACE dataset. As before, this is a binary classification task that tries to predict whether a molecule will inhibit the enzyme BACE-1. tasks, datasets, transformers = dc. molnet. load_bace_classification ( featurizer = 'ECFP', splitter = 'scaffold' ) train_dataset, valid_dataset, test_dataset = datasets model. fit ( train_dataset, nb_epoch = 100 ) metric = dc. metrics. Metric ( dc. metrics. roc_auc_score ) print ( 'training set score:', model. evaluate ( train_dataset, [ metric ])) print ( 'test set score:', model. evaluate ( test_dataset, [ metric ])) training set score: {'roc_auc_score': 0. 9996340015366347} test set score: {'roc_auc_score': 0. 7615036231884058} Other Features Keras Model and Torch Model have lots of other features. Here are some of the more important ones. Automatically saving checkpoints during training. Logging progress to the console, to Tensor Board, or to Weights & Biases. Custom loss functions that you define with a function of the form f(outputs, labels, weights). Early stopping using the Validation Callback class. Loading parameters from pre-trained models. Estimating uncertainty in model outputs. Identifying important features through saliency mapping. By wrapping your own models in a Keras Model or Torch Model, you get immediate access to all these features. See the API documentation for full details on them. Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Discord The Deep Chem Discord hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! Citing This Tutorial If you found this tutorial useful please consider citing it using the provided Bib Te X. @manual { Intro1, title = { Creating Models with Tensor Flow and Py Torch }, organization = { Deep Chem }, author = { Ramsundar, Bharath and Rebel, Alles }, howpublished = { \ url { https : // github. com / deepchem / deepchem / blob / master / examples / tutorials / Creating_Models_with_Tensor Flow_and_Py Torch year = { 2021 }, }
Introduction to Graph Convolutions In this tutorial we will learn more about "graph convolutions. " These are one of the most powerful deep learning tools for working with molecular data. The reason for this is that molecules can be naturally viewed as graphs. Note how standard chemical diagrams of the sort we're used to from high school lend themselves naturally to visualizing molecules as graphs. In the remainder of this tutorial, we'll dig into this relationship in significantly more detail. This will let us get a deeper understanding of how these systems work. Colab This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. Open in Colab ! pip install --pre deepchem What are Graph Convolutions? Consider a standard convolutional neural network (CNN) of the sort commonly used to process images. The input is a grid of pixels. There is a vector of data values for each pixel, for example the red, green, and blue color channels. The data passes through a series of convolutional layers. Each layer combines the data from a pixel and its neighbors to produce a new data vector for the pixel. Early layers detect small scale local patterns, while later layers detect larger, more abstract patterns. Often the convolutional layers alternate with pooling layers that perform some operation such as max or min over local regions. Graph convolutions are similar, but they operate on a graph. They begin with a data vector for each node of the graph (for example, the chemical properties of the atom that node represents). Convolutional and pooling layers combine information from connected nodes (for example, atoms that are bonded to each other) to produce a new data vector for each node. Training a Graph Conv Model Let's use the Molecule Net suite to load the Tox21 dataset. To featurize the data in a way that graph convolutional networks can use, we set the featurizer option to 'Graph Conv'. The Molecule Net call returns a training set, a validation set, and a test set for us to use. It also returns tasks, a list of the task names, and transformers, a list of data transformations that were applied to preprocess the dataset. (Most deep networks are quite finicky and require a set of data transformations to ensure that training proceeds stably. ) import deepchem as dc
tasks, datasets, transformers = dc. molnet. load_tox21 ( featurizer = 'Graph Conv' ) train_dataset, valid_dataset, test_dataset = datasets Let's now train a graph convolutional network on this dataset. Deep Chem has the class Graph Conv Model that wraps a standard graph convolutional architecture underneath the hood for user convenience. Let's instantiate an object of this class and train it on our dataset. n_tasks = len ( tasks ) model = dc. models. Graph Conv Model ( n_tasks, mode = 'classification' ) model. fit ( train_dataset, nb_epoch = 50 ) 0. 28185401916503905 Let's try to evaluate the performance of the model we've trained. For this, we need to define a metric, a measure of model performance. dc. metrics holds a collection of metrics already. For this dataset, it is standard to use the ROC-AUC score, the area under the receiver operating characteristic curve (which plots the true positive rate against the false positive rate). Luckily, the ROC-AUC score is already available in Deep Chem. To measure the performance of the model under this metric, we can use the convenience function model. evaluate(). metric = dc. metrics. Metric ( dc. metrics. roc_auc_score ) print ( 'Training set score:', model. evaluate ( train_dataset, [ metric ], transformers )) print ( 'Test set score:', model. evaluate ( test_dataset, [ metric ], transformers )) Training set score: {'roc_auc_score': 0. 96959686893055} Test set score: {'roc_auc_score': 0. 795793783300876} The results are pretty good, and Graph Conv Model is very easy to use. But what's going on under the hood? Could we build Graph Conv Model ourselves? Of course! Deep Chem provides Keras layers for all the calculations involved in a graph convolution. We are going to apply the following layers from Deep Chem. Graph Conv layer: This layer implements the graph convolution. The graph convolution combines per-node feature vectors in a nonlinear fashion with the feature vectors for neighboring nodes. This "blends" information in local neighborhoods of a graph. Graph Pool layer: This layer does a max-pooling over the feature vectors of atoms in a neighborhood. You can think of this layer as analogous to a max-pooling layer for 2D convolutions but which operates on graphs instead. Graph Gather : Many graph convolutional networks manipulate feature vectors per graph-node. For a molecule, for example, each node might represent an atom, and the network would manipulate atomic feature vectors that summarize the local chemistry of the atom. However, at the end of the application, we will likely want to work with a molecule level feature representation. This layer creates a graph level feature vector by combining all the node-level feature vectors. Apart from this we are going to apply standard neural network layers such as Dense, Batch Normalization and Softmax layers. from deepchem. models. layers import Graph Conv, Graph Pool, Graph Gather import tensorflow as tf import tensorflow. keras. layers as layers batch_size = 100 class My Graph Conv Model ( tf. keras. Model ): def __init__ ( self ): super ( My Graph Conv Model, self ). __init__ () self. gc1 = Graph Conv ( 128, activation_fn = tf. nn. tanh ) self. batch_norm1 = layers. Batch Normalization () self. gp1 = Graph Pool () self. gc2 = Graph Conv ( 128, activation_fn = tf. nn. tanh ) self. batch_norm2 = layers. Batch Normalization () self. gp2 = Graph Pool () self. dense1 = layers. Dense ( 256, activation = tf. nn. tanh ) self. batch_norm3 = layers. Batch Normalization () self.
readout = Graph Gather ( batch_size = batch_size, activation_fn = tf. nn. tanh ) self. dense2 = layers. Dense ( n_tasks * 2 ) self. logits = layers. Reshape (( n_tasks, 2 )) self. softmax = layers. Softmax () def call ( self, inputs ): gc1_output = self. gc1 ( inputs ) batch_norm1_output = self. batch_norm1 ( gc1_output ) gp1_output = self. gp1 ([ batch_norm1_output ] + inputs [ 1 :]) | deepchem.pdf |
gc2_output = self. gc2 ([ gp1_output ] + inputs [ 1 :]) batch_norm2_output = self. batch_norm2 ( gc2_output ) gp2_output = self. gp2 ([ batch_norm2_output ] + inputs [ 1 :]) dense1_output = self. dense1 ( gp2_output ) batch_norm3_output = self. batch_norm3 ( dense1_output ) readout_output = self. readout ([ batch_norm3_output ] + inputs [ 1 :]) logits_output = self. logits ( self. dense2 ( readout_output )) return self. softmax ( logits_output ) We can now see more clearly what is happening. There are two convolutional blocks, each consisting of a Graph Conv, followed by batch normalization, followed by a Graph Pool to do max pooling. We finish up with a dense layer, another batch normalization, a Graph Gather to combine the data from all the different nodes, and a final dense layer to produce the global output. Let's now create the Deep Chem model which will be a wrapper around the Keras model that we just created. We will also specify the loss function so the model knows the objective to minimize. model = dc. models. Keras Model ( My Graph Conv Model (), loss = dc. models. losses. Categorical Cross Entropy ()) What are the inputs to this model? A graph convolution requires a complete description of each molecule, including the list of nodes (atoms) and a description of which ones are bonded to each other. In fact, if we inspect the dataset we see that the feature array contains Python objects of type Conv Mol. test_dataset. X [ 0 ] <deepchem. feat. mol_graphs. Conv Mol at 0x14d0b1650> Models expect arrays of numbers as their inputs, not Python objects. We must convert the Conv Mol objects into the particular set of arrays expected by the Graph Conv, Graph Pool, and Graph Gather layers. Fortunately, the Conv Mol class includes the code to do this, as well as to combine all the molecules in a batch to create a single set of arrays. The following code creates a Python generator that, given a batch of data, generates the lists of inputs, labels, and weights whose values are Numpy arrays. atom_features holds a feature vector of length 75 for each atom. The other inputs are required to support minibatching in Tensor Flow. degree_slice is an indexing convenience that makes it easy to locate atoms from all molecules with a given degree. membership determines the membership of atoms in molecules (atom i belongs to molecule membership[i] ). deg_adjs is a list that contains adjacency lists grouped by atom degree. For more details, check out the code. from deepchem. metrics import to_one_hot from deepchem. feat. mol_graphs import Conv Mol import numpy as np def data_generator ( dataset, epochs = 1 ): for ind, ( X_b, y_b, w_b, ids_b ) in enumerate ( dataset. iterbatches ( batch_size, epochs, deterministic = False, pad_batches = True )): multi Conv Mol = Conv Mol. agglomerate_mols ( X_b ) inputs = [ multi Conv Mol. get_atom_features (), multi Conv Mol. deg_slice, np. array ( multi Conv Mol. membership )] for i in range ( 1, len ( multi Conv Mol. get_deg_adjacency_lists ())): inputs. append ( multi Conv Mol. get_deg_adjacency_lists ()[ i ]) labels = [ to_one_hot ( y_b. flatten (), 2 ). reshape (-1, n_tasks, 2 )] weights = [ w_b ] yield ( inputs, labels, weights ) Now, we can train the model using fit_generator(generator) which will use the generator we've defined to train the model. model. fit_generator ( data_generator ( train_dataset, epochs = 50 )) 0. 21941944122314452 Now that we have trained our graph convolutional method, let's evaluate its performance.
We again have to use our defined generator to evaluate model performance. print ( 'Training set score:', model. evaluate_generator ( data_generator ( train_dataset ), [ metric ], transformers )) print ( 'Test set score:', model. evaluate_generator ( data_generator ( test_dataset ), [ metric ], transformers )) Training set score: {'roc_auc_score': 0. 8425638289185731} Test set score: {'roc_auc_score': 0. 7378436684114341} Success! The model we've constructed behaves nearly identically to Graph Conv Model. If you're looking to build your own custom models, you can follow the example we've provided here to do so. We hope to see exciting constructions from your end soon! | deepchem.pdf |
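As a quick follow-up, the same generator can also be reused for inference. The lines below are a minimal sketch (not part of the original tutorial) using the model, data_generator, and test_dataset defined above; because the generator shuffles and pads batches, getting predictions that line up exactly with test_dataset.y would additionally require a deterministic generator and trimming the padded rows.

# Minimal sketch: reuse the generator to get class probabilities from the custom model.
predictions = model.predict_on_generator(data_generator(test_dataset))
# Each row holds softmax probabilities of shape (n_tasks, 2); padded samples are included.
print(predictions.shape)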
Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Gitter The Deep Chem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! | deepchem.pdf |
Going Deeper On Molecular Featurizations One of the most important steps of doing machine learning on molecular data is transforming the data into a form amenable to the application of learning algorithms. This process is broadly called "featurization" and involves turning a molecule into a vector or tensor of some sort. There are a number of different ways of doing that, and the choice of featurization is often dependent on the problem at hand. We have already seen two such methods: molecular fingerprints, and Conv Mol objects for use with graph convolutions. In this tutorial we will look at some of the others. Colab This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. Open in Colab ! pip install --pre deepchem import deepchem deepchem. __version__ Featurizers In Deep Chem, a method of featurizing a molecule (or any other sort of input) is defined by a Featurizer object. There are three different ways of using featurizers. 1. When using the Molecule Net loader functions, you simply pass the name of the featurization method to use. We have seen examples of this in earlier tutorials, such as featurizer='ECFP' or featurizer='Graph Conv'. 2. You also can create a Featurizer and directly apply it to molecules. For example: import deepchem as dc featurizer = dc. feat. Circular Fingerprint () print ( featurizer ([ 'CC', 'CCC', 'CCO' ])) [[0. 0. 0. ... 0. 0. 0. ] [0. 0. 0. ... 0. 0. 0. ] [0. 0. 0. ... 0. 0. 0. ]] 3. When creating a new dataset with the Data Loader framework, you can specify a Featurizer to use for processing the data. We will see this in a future tutorial. We use propane (CH3CH2CH3, represented by the SMILES string 'CCC' ) as a running example throughout this tutorial. Many of the featurization methods use conformers of the molecules. A conformer can be generated using the Conformer Generator class in deepchem. utils. conformers. RDKit Descriptors RDKit Descriptors featurizes a molecule by using RDKit to compute values for a list of descriptors. These are basic physical and chemical properties: molecular weight, polar surface area, numbers of hydrogen bond donors and acceptors, etc. This is most useful for predicting things that depend on these high level properties rather than on detailed molecular structure. Intrinsic to the featurizer is a set of allowed descriptors, which can be accessed using RDKit Descriptors. allowed Descriptors. The featurizer uses the descriptors in rdkit. Chem. Descriptors. desc List, checks if they are in the list of allowed descriptors, and computes the descriptor value for the molecule. Let's print the values of the first ten descriptors for propane. rdkit_featurizer = dc. feat. RDKit Descriptors () features = rdkit_featurizer ([ 'CCC' ])[ 0 ] for feature, descriptor in zip ( features [: 10 ], rdkit_featurizer. descriptors ): print ( descriptor, feature )
Max EState Index 2. 125 Min EState Index 1. 25 Max Abs EState Index 2. 125 Min Abs EState Index 1. 25 qed 0. 3854706587740357 Mol Wt 44. 097 Heavy Atom Mol Wt 36. 033 Exact Mol Wt 44. 062600255999996 Num Valence Electrons 20. 0 Num Radical Electrons 0. 0 Of course, there are many more descriptors than this. print ( 'The number of descriptors present is: ', len ( features )) The number of descriptors present is: 200 Weave Featurizer and Mol Graph Conv Featurizer We previously looked at graph convolutions, which use Conv Mol Featurizer to convert molecules into Conv Mol objects. Graph convolutions are a special case of a large class of architectures that represent molecules as graphs. They work in similar ways but vary in the details. For example, they may associate data vectors with the atoms, the bonds connecting them, or both. They may use a variety of techniques to calculate new data vectors from those in the previous layer, and a variety of techniques to compute molecule level properties at the end. Deep Chem supports lots of different graph based models. Some of them require molecules to be featurized in slightly different ways. Because of this, there are two other featurizers called Weave Featurizer and Mol Graph Conv Featurizer. They each convert molecules into a different type of Python object that is used by particular models. When using any graph based model, just check the documentation to see what featurizer you need to use with it. Coulomb Matrix All the models we have looked at so far consider only the intrinsic properties of a molecule: the list of atoms that compose it and the bonds connecting them. When working with flexible molecules, you may also want to consider the different conformations the molecule can take on. For example, when a drug molecule binds to a protein, the strength of the binding depends on specific interactions between pairs of atoms. To predict binding strength, you probably want to consider a variety of possible conformations and use a model that takes them into account when making predictions. The Coulomb matrix is one popular featurization for molecular conformations. Recall that the electrostatic Coulomb interaction between two charges is proportional to q1q2/r, where q1 and q2 are the charges and r is the distance between them. For a molecule with N atoms, the Coulomb matrix is an N × N matrix where each element gives the strength of the electrostatic interaction between two atoms. It contains information both about the charges on the atoms and the distances between them. More information on the functional forms used can be found here. To apply this featurizer, we first need a set of conformations for the molecule. We can use the Conformer Generator class to do this. It takes an RDKit molecule, generates a set of energy minimized conformers, and prunes the set to only include ones that are significantly different from each other. Let's try running it for propane. from rdkit import Chem generator = dc. utils. Conformer Generator ( max_conformers = 5 ) propane_mol = generator. generate_conformers ( Chem. Mol From Smiles ( 'CCC' )) print ( "Number of available conformers for propane: ", len ( propane_mol. Get Conformers ())) Number of available conformers for propane: 1 It only found a single conformer. This shouldn't be surprising, since propane is a very small molecule with hardly any flexibility. Let's try adding another carbon. butane_mol = generator. generate_conformers ( Chem.
Mol From Smiles ( 'CCCC' )) print ( "Number of available conformers for butane: ", len ( butane_mol. Get Conformers ())) | deepchem.pdf |
Number of available conformers for butane: 3 Now we can create a Coulomb matrix for our molecule. coulomb_mat = dc. feat. Coulomb Matrix ( max_atoms = 20 ) features = coulomb_mat ( propane_mol ) print ( features ) | deepchem.pdf |
[[[36. 8581052 12. 48684429 7. 5619687 2. 85945193 2. 85804514 2. 85804556 1. 4674015 1. 46740144 0. 91279491 1. 14239698 1. 14239675 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [12. 48684429 36. 8581052 12. 48684388 1. 46551218 1. 45850736 1. 45850732 2. 85689525 2. 85689538 1. 4655122 1. 4585072 1. 4585072 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 7. 5619687 12. 48684388 36. 8581052 0. 9127949 1. 14239695 1. 14239692 1. 46740146 1. 46740145 2. 85945178 2. 85804504 2. 85804493 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 2. 85945193 1. 46551218 0. 9127949 0. 5 0. 29325367 0. 29325369 0. 21256978 0. 21256978 0. 12268391 0. 13960187 0. 13960185 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 2. 85804514 1. 45850736 1. 14239695 0. 29325367 0. 5 0. 29200271 0. 17113413 0. 21092513 0. 13960186 0. 1680002 0. 20540029 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 2. 85804556 1. 45850732 1. 14239692 0. 29325369 0. 29200271 0. 5 0. 21092513 0. 17113413 0. 13960187 0. 20540032 0. 16800016 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 1. 4674015 2. 85689525 1. 46740146 0. 21256978 0. 17113413 0. 21092513 0. 5 0. 29351308 0. 21256981 0. 2109251 0. 17113412 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 1. 46740144 2. 85689538 1. 46740145 0. 21256978 0. 21092513 0. 17113413 0. 29351308 0. 5 0. 21256977 0. 17113412 0. 21092513 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 91279491 1. 4655122 2. 85945178 0. 12268391 0. 13960186 0. 13960187 0. 21256981 0. 21256977 0. 5 0. 29325366 0. 29325365 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 1. 14239698 1. 4585072 2. 85804504 0. 13960187 0. 1680002 0. 20540032 0. 2109251 0. 17113412 0. 29325366 0. 5 0. 29200266 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 1. 14239675 1. 4585072 2. 85804493 0. 13960185 0. 20540029 0. 16800016 0. 17113412 0. 21092513 0. 29325365 0. 29200266 0. 5 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ]]] /Users/peastman/workspace/deepchem/deepchem/feat/molecule_featurizers/coulomb_matrices. py:141: Runtime Warning: d ivide by zero encountered in true_divide m = np. outer(z, z) / d | deepchem.pdf |
Notice that many elements are 0. To combine multiple molecules in a batch we need all the Coulomb matrices to be the same size, even if the molecules have different numbers of atoms. We specified max_atoms=20, so the returned matrix has size (20, 20). The molecule only has 11 atoms, so only an 11 by 11 submatrix is nonzero. Coulomb Matrix Eig An important feature of Coulomb matrices is that they are invariant to molecular rotation and translation, since the interatomic distances and atomic numbers do not change. Respecting symmetries like this makes learning easier. Rotating a molecule does not change its physical properties. If the featurization does change, then the model is forced to learn that rotations are not important, but if the featurization is invariant then the model gets this property automatically. Coulomb matrices are not invariant under another important symmetry: permutations of the atoms' indices. A molecule's physical properties do not depend on which atom we call "atom 1", but the Coulomb matrix does. To deal with this, the Coulomb Matrix Eig featurizer was introduced, which uses the eigenvalue spectrum of the Coulomb matrix and is invariant to random permutations of the atoms' indices. The disadvantage of this featurization is that it contains much less information (N eigenvalues instead of an N × N matrix), so models will be more limited in what they can learn. Coulomb Matrix Eig inherits from Coulomb Matrix and featurizes a molecule by first computing the Coulomb matrices for different conformers of the molecule and then computing the eigenvalues for each Coulomb matrix. These eigenvalues are then padded to account for variation in number of atoms across molecules. coulomb_mat_eig = dc. feat. Coulomb Matrix Eig ( max_atoms = 20 ) features = coulomb_mat_eig ( propane_mol ) print ( features ) [[60. 07620303 29. 62963149 22. 75497781 0. 5713786 0. 28781332 0. 28548338 0. 27558187 0. 18163794 0. 17460999 0. 17059719 0. 16640098 0. 0. 0. 0. 0. 0. 0. 0. 0. ]] Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Gitter The Deep Chem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! Citing This Tutorial If you found this tutorial useful please consider citing it using the provided Bib Te X. @manual { Intro7, title = { Going Deeper on Molecular Featurizations }, organization = { Deep Chem }, author = { Ramsundar, Bharath }, howpublished = { \ url { https : // github. com / deepchem / deepchem / blob / master / examples / tutorials / Going_Deeper_on_Molecular_Featurizations year = { 2021 }, }
Working With Splitters When using machine learning, you typically divide your data into training, validation, and test sets. The Molecule Net loaders do this automatically. But how should you divide up the data? This question seems simple at first, but it turns out to be quite complicated. There are many ways of splitting up data, and which one you choose can have a big impact on the reliability of your results. This tutorial introduces some of the splitting methods provided by Deep Chem. Colab This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. Open in Colab ! pip install --pre deepchem import deepchem deepchem. __version__ '2. 7. 1' Splitters In Deep Chem, a method of splitting samples into multiple datasets is defined by a Splitter object. Choosing an appropriate method for your data is very important. Otherwise, your trained model may seem to work much better than it really does. Consider a typical drug development pipeline. You might begin by screening many thousands of molecules to see if they bind to your target of interest. Once you find one that seems to work, you try to optimize it by testing thousands of minor variations on it, looking for one that binds more strongly. Then perhaps you test it in animals and find it has unacceptable toxicity, so you try more variations to fix the problems. This has an important consequence for chemical datasets: they often include lots of molecules that are very similar to each other. If you split the data into training and test sets in a naive way, the training set will include many molecules that are very similar to the ones in the test set, even if they are not exactly identical. As a result, the model may do very well on the test set, but then fail badly when you try to use it on other data that is less similar to the training data. Let's take a look at a few of the splitters found in Deep Chem.
General Splitters
• Random Splitter
• Random Group Splitter
• Random Stratified Splitter
• Singletask Stratified Splitter
• Index Splitter
• Specified Splitter
• Task Splitter
Molecular Splitters
• Scaffold Splitter
• Molecular Weight Splitter
• Max Min Splitter
• Butina Splitter
• Fingerprint Splitter
Let's take a look at how different splitters work. Random Splitter This is one of the simplest splitters. It just selects samples for the training, validation, and test sets in a completely random way. Didn't we just say that's a bad idea? Well, it depends on your data. If every sample is truly independent of every other, then this is just as good a way as any to split the data. There is no universally best choice of splitter. It all depends on your particular dataset, and for some datasets this is a fine choice.
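To make this concrete, here is a minimal sketch (not part of the original tutorial) of applying Random Splitter directly to a dataset instead of going through a Molecule Net loader. The small Numpy Dataset built from random numbers is just an illustrative stand-in; any dc.data.Dataset can be split the same way.

import numpy as np
import deepchem as dc

# A stand-in dataset with 10 samples and 5 features.
X = np.random.rand(10, 5)
y = np.random.rand(10, 1)
dataset = dc.data.NumpyDataset(X, y)

# Split 80/10/10 at random.
splitter = dc.splits.RandomSplitter()
train, valid, test = splitter.train_valid_test_split(dataset, frac_train=0.8, frac_valid=0.1, frac_test=0.1)
print(len(train), len(valid), len(test))  # 8 1 1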
Random Stratified Splitter Some datasets are very unbalanced: only a tiny fraction of all samples are positive. In that case, random splitting may sometimes lead to the validation or test set having few or even no positive samples for some tasks. That makes it impossible to evaluate performance on those tasks. Random Stratified Splitter addresses this by dividing up the positive and negative samples evenly. If you ask for an 80/10/10 split, the validation and test sets will contain not just 10% of samples, but also 10% of the positive samples for each task. Scaffold Splitter This splitter tries to address the problem discussed above where many molecules are very similar to each other. It identifies the scaffold that forms the core of each molecule, and ensures that all molecules with the same scaffold are put into the same dataset. This is still not a perfect solution, since two molecules may have different scaffolds but be very similar in other ways, but it usually is a large improvement over random splitting. Butina Splitter This is another splitter that tries to address the problem of similar molecules. It clusters them based on their molecular fingerprints, so that ones with similar fingerprints will tend to be in the same dataset. The time required by this splitting algorithm scales as the square of the number of molecules, so it is mainly useful for small to medium sized datasets. Specified Splitter This splitter leaves everything up to the user. You tell it exactly which samples to put in each dataset. This is useful when you know in advance that a particular splitting is appropriate for your data. An example is temporal splitting. Consider a research project where you are continually generating and testing new molecules. As you gain more data, you periodically retrain your model on the steadily growing dataset, then use it to predict results for other not yet tested molecules. A good way of validating whether this works is to pick a particular cutoff date, train the model on all data you had at that time, and see how well it predicts other data that was generated later. Task Splitter This splitter provides a simple interface for splitting datasets task-wise. For some learning problems, the training and test datasets should have different tasks entirely. This is a different paradigm from the usual Splitter, which ensures that split datasets have different data points, not different tasks. This is useful in multitask learning and problem decomposition settings. Singletask Stratified Splitter Another option, particularly for classification tasks with imbalanced class distributions, is the single-task stratified splitter. It maintains the class distribution of the original dataset across the training, validation, and test sets. This is crucial when working with imbalanced datasets where some classes may be under-represented. Fingerprint Splitter This splitter divides data based on the Tanimoto similarity (a measure of the overlap between two sets) between ECFP4 fingerprints (which encode molecular substructures for efficient comparison). It tries to split the data such that the molecules in each dataset are as different as possible from the ones in the other datasets. This makes it a very stringent test of models. Predicting the test and validation sets may require extrapolating far outside the training data.
It splits molecular datasets using Tanimoto similarity scores calculated from ECFP4 fingerprints. ECFP4, based on Morgan fingerprints, encodes molecular substructures. Molecular Weight Splitter This splitter performs data splits based on molecular weight. Effect of Using Different Splitters Let's look at an example. We will load the Tox21 toxicity dataset using random, fingerprint, scaffold, and Butina splitting. For each one we train a model and evaluate it on the training and test sets.
import deepchem as dc splitters = [ 'random', 'scaffold', 'butina', 'fingerprint' ] metric = dc. metrics. Metric ( dc. metrics. roc_auc_score ) for splitter in splitters : tasks, datasets, transformers = dc. molnet. load_tox21 ( featurizer = 'ECFP', splitter = splitter ) train_dataset, valid_dataset, test_dataset = datasets model = dc. models. Multitask Classifier ( n_tasks = len ( tasks ), n_features = 1024, layer_sizes = [ 1000 ]) model. fit ( train_dataset, nb_epoch = 10 ) print ( 'splitter:', splitter ) print ( 'training set score:', model. evaluate ( train_dataset, [ metric ], transformers )) print ( 'test set score:', model. evaluate ( test_dataset, [ metric ], transformers )) print () splitter: random training set score: {'roc_auc_score': 0. 9554904185889012} test set score: {'roc_auc_score': 0. 7854105497196335} splitter: scaffold training set score: {'roc_auc_score': 0. 958752269558084} test set score: {'roc_auc_score': 0. 6849149319233084} splitter: butina training set score: {'roc_auc_score': 0. 9584914471889929} test set score: {'roc_auc_score': 0. 6061155305251504} splitter: fingerprint training set score: {'roc_auc_score': 0. 954193849465875} test set score: {'roc_auc_score': 0. 6235667313881933} All of them produce very similar performance on the training set, but the random splitter has much higher performance on the test set. Scaffold splitting has a lower test set score, and Butina splitting is even lower. Does that mean random splitting is better? No! It means random splitting doesn't give you an accurate measure of how well your model works. Because the test set contains lots of molecules that are very similar to ones in the training set, it isn't truly independent. It makes the model appear to work better than it really does. Scaffold splitting and Butina splitting give a better indication of what you can expect on independent data in the future. Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Discord The Deep Chem discord hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! Citing This Tutorial If you found this tutorial useful please consider citing it using the provided Bib Te X. @manual { Intro8, title = { Working With Splitters }, organization = { Deep Chem }, author = { Eastman, Peter, Mohapatra, Bibhusundar and Ramsundar, Bharath }, howpublished = { \ url { https : // github. com / deepchem / deepchem / blob / master / examples / tutorials / Working_With_Splitters. ipynb year = { 2021 }, } | deepchem.pdf |
Advanced Model Training In the tutorials so far we have followed a simple procedure for training models: load a dataset, create a model, call fit(), evaluate it, and call ourselves done. That's fine for an example, but in real machine learning projects the process is usually more complicated. In this tutorial we will look at a more realistic workflow for training a model. Colab This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. Open in Colab Setup To run Deep Chem within Colab, you'll need to run the following installation commands. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install Deep Chem on your local machine again. ! pip install --pre deepchem import deepchem deepchem. __version__ Hyperparameter Optimization Let's start by loading the HIV dataset. It classifies over 40,000 molecules based on whether they inhibit HIV replication. import deepchem as dc tasks, datasets, transformers = dc. molnet. load_hiv ( featurizer = 'ECFP', splitter = 'scaffold' ) train_dataset, valid_dataset, test_dataset = datasets Now let's train a model on it. We will use a Multitask Classifier, which is just a stack of dense layers. But that still leaves a lot of options. How many layers should there be, and how wide should each one be? What dropout rate should we use? What learning rate? These are called hyperparameters. The standard way to select them is to try lots of values, train each model on the training set, and evaluate it on the validation set. This lets us see which ones work best. You could do that by hand, but usually it's easier to let the computer do it for you. Deep Chem provides a selection of hyperparameter optimization algorithms, which are found in the dc. hyper package. For this example we'll use Grid Hyperparam Opt, which is the most basic method. We just give it a list of options for each hyperparameter and it exhaustively tries all combinations of them. The lists of options are defined by a dict that we provide. For each of the model's arguments, we provide a list of values to try. In this example we consider three possible sets of hidden layers: a single layer of width 500, a single layer of width 1000, or two layers each of width 1000. We also consider two dropout rates (20% and 50%) and two learning rates (0. 001 and 0. 0001). params_dict = { 'n_tasks' : [ len ( tasks )], 'n_features' : [ 1024 ], 'layer_sizes' : [[ 500 ], [ 1000 ], [ 1000, 1000 ]], 'dropouts' : [ 0. 2, 0. 5 ], 'learning_rate' : [ 0. 001, 0. 0001 ] } optimizer = dc. hyper. Grid Hyperparam Opt ( dc. models. Multitask Classifier ) metric = dc. metrics. Metric ( dc. metrics. roc_auc_score ) best_model, best_hyperparams, all_results = optimizer. hyperparam_search ( params_dict, train_dataset, valid_dataset, metric, transformers ) hyperparam_search() returns three values: the best model it found, the hyperparameters for that model, and a full listing of the validation score for every model. Let's take a look at the last one. all_results
{'_dropouts_0. 200000_layer_sizes[500]_learning_rate_0. 001000_n_features_1024_n_tasks_1': 0. 759624393738977, '_dropouts_0. 200000_layer_sizes[500]_learning_rate_0. 000100_n_features_1024_n_tasks_1': 0. 7680791323731138, '_dropouts_0. 500000_layer_sizes[500]_learning_rate_0. 001000_n_features_1024_n_tasks_1': 0. 7623870149911817, '_dropouts_0. 500000_layer_sizes[500]_learning_rate_0. 000100_n_features_1024_n_tasks_1': 0. 7552282358416618, '_dropouts_0. 200000_layer_sizes[1000]_learning_rate_0. 001000_n_features_1024_n_tasks_1': 0. 7689915858318636, '_dropouts_0. 200000_layer_sizes[1000]_learning_rate_0. 000100_n_features_1024_n_tasks_1': 0. 7619292572996277, '_dropouts_0. 500000_layer_sizes[1000]_learning_rate_0. 001000_n_features_1024_n_tasks_1': 0. 7641491524593376, '_dropouts_0. 500000_layer_sizes[1000]_learning_rate_0. 000100_n_features_1024_n_tasks_1': 0. 7609877155594749, '_dropouts_0. 200000_layer_sizes[1000, 1000]_learning_rate_0. 001000_n_features_1024_n_tasks_1': 0. 7707169802077 21, '_dropouts_0. 200000_layer_sizes[1000, 1000]_learning_rate_0. 000100_n_features_1024_n_tasks_1': 0. 7750327625906 329, '_dropouts_0. 500000_layer_sizes[1000, 1000]_learning_rate_0. 001000_n_features_1024_n_tasks_1': 0. 7259723140799 53, '_dropouts_0. 500000_layer_sizes[1000, 1000]_learning_rate_0. 000100_n_features_1024_n_tasks_1': 0. 7546280986674 505} We can see a few general patterns. Using two layers with the larger learning rate doesn't work very well. It seems the deeper model requires a smaller learning rate. We also see that 20% dropout usually works better than 50%. Once we narrow down the list of models based on these observations, all the validation scores are very close to each other, probably close enough that the remaining variation is mainly noise. It doesn't seem to make much difference which of the remaining hyperparameter sets we use, so let's arbitrarily pick a single layer of width 1000 and learning rate of 0. 0001. Early Stopping There is one other important hyperparameter we haven't considered yet: how long we train the model for. Grid Hyperparam Opt trains each for a fixed, fairly small number of epochs. That isn't necessarily the best number. You might expect that the longer you train, the better your model will get, but that isn't usually true. If you train too long, the model will usually start overfitting to irrelevant details of the training set. You can tell when this happens because the validation set score stops increasing and may even decrease, while the score on the training set continues to improve. Fortunately, we don't need to train lots of different models for different numbers of steps to identify the optimal number. We just train it once, monitor the validation score, and keep whichever parameters maximize it. This is called "early stopping". Deep Chem's Validation Callback class can do this for us automatically. In the example below, we have it compute the validation set's ROC AUC every 1000 training steps. If you add the save_dir argument, it will also save a copy of the best model parameters to disk. model = dc. models. Multitask Classifier ( n_tasks = len ( tasks ), n_features = 1024, layer_sizes = [ 1000 ], dropouts = 0. 2, learning_rate = 0. 0001 ) callback = dc. models. Validation Callback ( valid_dataset, 1000, metric ) model. fit ( train_dataset, nb_epoch = 50, callbacks = callback ) Step 1000 validation: roc_auc_score=0. 759757 Step 2000 validation: roc_auc_score=0. 770685 Step 3000 validation: roc_auc_score=0. 
771588 Step 4000 validation: roc_auc_score=0. 777862 Step 5000 validation: roc_auc_score=0. 773894 Step 6000 validation: roc_auc_score=0. 763762 Step 7000 validation: roc_auc_score=0. 766361 Step 8000 validation: roc_auc_score=0. 767026 Step 9000 validation: roc_auc_score=0. 761239 Step 10000 validation: roc_auc_score=0. 761279 Step 11000 validation: roc_auc_score=0. 765363 Step 12000 validation: roc_auc_score=0. 769481 Step 13000 validation: roc_auc_score=0. 768523 Step 14000 validation: roc_auc_score=0. 761306 Step 15000 validation: roc_auc_score=0. 77397 Step 16000 validation: roc_auc_score=0. 764848 0. 8040038299560547 Learning Rate Schedules In the examples above we use a fixed learning rate throughout training. In some cases it works better to vary the learning rate during training. To do this in Deep Chem, we simply specify a Learning Rate Schedule object instead of a number for the learning_rate argument. In the following example we use a learning rate that decreases exponentially. It starts at 0. 0002, then gets multiplied by 0. 9 after every 1000 steps. learning_rate = dc. models. optimizers. Exponential Decay ( 0. 0002, 0. 9, 1000 ) | deepchem.pdf |
model = dc. models. Multitask Classifier ( n_tasks = len ( tasks ), n_features = 1024, layer_sizes = [ 1000 ], dropouts = 0. 2, learning_rate = learning_rate ) model. fit ( train_dataset, nb_epoch = 50, callbacks = callback ) Step 1000 validation: roc_auc_score=0. 736547 Step 2000 validation: roc_auc_score=0. 758979 Step 3000 validation: roc_auc_score=0. 768361 Step 4000 validation: roc_auc_score=0. 764898 Step 5000 validation: roc_auc_score=0. 775253 Step 6000 validation: roc_auc_score=0. 779898 Step 7000 validation: roc_auc_score=0. 76991 Step 8000 validation: roc_auc_score=0. 771515 Step 9000 validation: roc_auc_score=0. 773796 Step 10000 validation: roc_auc_score=0. 776977 Step 11000 validation: roc_auc_score=0. 778866 Step 12000 validation: roc_auc_score=0. 777066 Step 13000 validation: roc_auc_score=0. 77616 Step 14000 validation: roc_auc_score=0. 775646 Step 15000 validation: roc_auc_score=0. 772785 Step 16000 validation: roc_auc_score=0. 769975 0. 22854619979858398 Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Discord The Deep Chem Discord hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! Citing This Tutorial If you found this tutorial useful please consider citing it using the provided Bib Te X. @manual { Intro9, title = { Advanced Model Training }, organization = { Deep Chem }, author = { Eastman, Peter and Ramsundar, Bharath }, howpublished = { \ url { https : // github. com / deepchem / deepchem / blob / master / examples / tutorials / Advanced_Model_Training. ipynb year = { 2021 }, } | deepchem.pdf |
Creating a High Fidelity Dataset from Experimental Data In this tutorial, we will look at what is involved in creating a new Dataset from experimental data. As we will see, the mechanics of creating the Dataset object is only a small part of the process. Most real datasets need significant cleanup and QA before they are suitable for training models. Colab This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. Open in Colab ! pip install --pre deepchem import deepchem deepchem. __version__ Working With Data Files Suppose you were given data collected by an experimental collaborator. You would like to use this data to construct a machine learning model. How do you transform this data into a dataset capable of creating a useful model? Building models from novel data can present several challenges. Perhaps the data was not recorded in a convenient manner. Additionally, perhaps the data contains noise. This is a common occurrence with, for example, biological assays due to the large number of external variables and the difficulty and cost associated with collecting multiple samples. This is a problem because you do not want your model to fit to this noise. Hence, there are two primary challenges: Parsing data De-noising data In this tutorial, we will walk through an example of curating a dataset from an Excel spreadsheet of experimental drug measurements. Before we dive into this example though, let's do a brief review of Deep Chem's input file handling and featurization capabilities. Input Formats Deep Chem supports a whole range of input files. For example, accepted input formats include . csv, . sdf, . fasta, . png, . tif and other file formats. The loading for a particular file format is governed by the Loader class associated with that format. For example, to load a . csv file we use the CSVLoader class. Here's an example of a . csv file that fits the requirements of CSVLoader. 1. A column containing SMILES strings. 2. A column containing an experimental measurement. 3. (Optional) A column containing a unique compound identifier. Here's an example of a potential input file.
Compound ID | measured log solubility in mols per litre | smiles
benzothiazole | -1.5 | c2ccc1scnc1c2
Here the "smiles" column contains the SMILES string, the "measured log solubility in mols per litre" contains the experimental measurement, and "Compound ID" contains the unique compound identifier. Data Featurization Most machine learning algorithms require that input data form vectors. However, input data for drug-discovery datasets routinely come in the form of lists of molecules and associated experimental readouts. To load the data, we use a subclass of dc. data. Data Loader such as dc. data. CSVLoader or dc. data. SDFLoader. Users can subclass dc. data. Data Loader to load arbitrary file formats. All loaders must be passed a dc. feat. Featurizer object, which specifies how to transform molecules into vectors. Deep Chem provides a number of different subclasses of dc. feat. Featurizer.
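To make the pieces above concrete, here is a minimal sketch (not part of the original tutorial) of how a CSV file laid out like the example table could be loaded. The file name solubility.csv is hypothetical; the column names match the example.

import deepchem as dc

# Featurizer that turns each SMILES string into a fixed-length fingerprint vector.
featurizer = dc.feat.CircularFingerprint(size=1024)

# "solubility.csv" is a hypothetical file with the three columns shown above.
loader = dc.data.CSVLoader(tasks=["measured log solubility in mols per litre"],
                           feature_field="smiles",
                           id_field="Compound ID",
                           featurizer=featurizer)
dataset = loader.create_dataset("solubility.csv")
print(dataset)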
Parsing data In order to read in the data, we will use the pandas data analysis library. In order to convert the drug names into smiles strings, we will use pubchempy. This isn't a standard Deep Chem dependency, but you can install this library with conda install pubchempy. ! conda install pubchempy import os import pandas as pd from pubchempy import get_cids, get_compounds Pandas is magic but it doesn't automatically know where to find your data of interest. You likely will have to look at it first using a GUI. We will now look at a screenshot of this dataset as rendered by Libre Office. To do this, we will import Image and os. import os from IPython. display import Image, display current_dir = os. path. dirname ( os. path. realpath ( '__file__' )) data_screenshot = os. path. join ( current_dir, 'assets/dataset_preparation_gui. png' ) display ( Image ( filename = data_screenshot )) We see the data of interest is on the second sheet, and contained in columns "TA ID", "N #1 (%)", and "N #2 (%)". Additionally, it appears much of this spreadsheet was formatted for human readability (multicolumn headers, column labels with spaces and symbols, etc. ). This makes the creation of a neat dataframe object harder. For this reason we will cut everything that is unnecessary or inconvenient. import deepchem as dc dc. utils. download_url ( 'https://github. com/deepchem/deepchem/raw/master/datasets/Positive%20Modulators%20Summary_%20918. TUC%20_%20v1. xlsx', current_dir, 'Positive Modulators Summary_ 918. TUC _ v1. xlsx' ) raw_data_file = os. path. join ( current_dir, 'Positive Modulators Summary_ 918. TUC _ v1. xlsx' ) raw_data_excel = pd. Excel File ( raw_data_file ) # second sheet only raw_data = raw_data_excel. parse ( raw_data_excel. sheet_names [ 1 ]) # preview 5 rows of raw dataframe raw_data. loc [ raw_data. index [: 5 ]]
Unnamed: 0 Unnamed: 1 Unnamed: 2 Metric #1 (-120 m V Peak) Unnamed: 4 Unnamed: 5 Unnamed: 6 Unnamed: 7 0 Na N Na N Na N Vehicle Na N 4 Replications Na N 1 TA ## Position TA ID Mean SD Threshold (%) = Mean + 4x SD N #1 (%) N #2 (%) 2 1 1-A02 Penicillin V Potassium-12. 8689 6. 74705 14. 1193-10. 404-18. 1929 3 2 1-A03 Mycophenolate Mofetil-12. 8689 6. 74705 14. 1193-12. 4453-11. 7175 4 3 1-A04 Metaxalone-12. 8689 6. 74705 14. 1193-8. 65572-17. 7753 Note that the actual row headers are stored in row 1 and not 0 above. # remove column labels (rows 0 and 1), as we will replace them # only take data given in columns "TA ID" "N #1 (%)" (3) and "N #2 (%)" (4) raw_data = raw_data. iloc [ 2 :, [ 2, 6, 7 ]] # reset the index so we keep the label but number from 0 again raw_data. reset_index ( inplace = True ) ## rename columns raw_data. columns = [ 'label', 'drug', 'n1', 'n2' ] # preview cleaner dataframe raw_data. loc [ raw_data. index [: 5 ]] label drug n1 n2 0 2 Penicillin V Potassium-10. 404-18. 1929 1 3 Mycophenolate Mofetil-12. 4453-11. 7175 2 4 Metaxalone-8. 65572-17. 7753 3 5 Terazosin·HCl-11. 5048 16. 0825 4 6 Fluvastatin·Na-11. 1354-14. 553 This formatting is closer to what we need. Now, let's take the drug names and get smiles strings for them (format needed for Deep Chem). drugs = raw_data [ 'drug' ]. values For many of these, we can retrieve the smiles string via the canonical_smiles attribute of the get_compounds object (using pubchempy) get_compounds ( drugs [ 1 ], 'name' ) [Compound(5281078)] get_compounds ( drugs [ 1 ], 'name' )[ 0 ]. canonical_smiles 'CC1=C2COC(=O)C2=C(C(=C1OC)CC=C(C)CCC(=O)OCCN3CCOCC3)O' However, some of these drug names have variable spacing and symbols (·, (±), etc. ), and some names may not be readable by pubchempy. For this task, we will do a bit of hacking via regular expressions. Also, we notice that all ions are written in a shortened form that will need to be expanded. For this reason we use a dictionary, mapping the shortened ion names to versions recognizable to pubchempy. Unfortunately you may have several corner cases that will require more hacking. import re ion_replacements = { 'HBr' : ' hydrobromide', '2Br' : ' dibromide', 'Br' : ' bromide', 'HCl' : ' hydrochloride', '2H2O' : ' dihydrate', 'H20' : ' hydrate', 'Na' : ' sodium'
} ion_keys = [ 'H20', 'HBr', 'HCl', '2Br', '2H2O', 'Br', 'Na' ] def compound_to_smiles ( cmpd ): # remove spaces and irregular characters compound = re. sub ( r '([^\s\w]|_)+', '', cmpd ) # replace ion names if needed for ion in ion_keys : if ion in compound : compound = compound. replace ( ion, ion_replacements [ ion ]) # query for cid first in order to avoid timeouterror cid = get_cids ( compound, 'name' )[ 0 ] smiles = get_compounds ( cid )[ 0 ]. canonical_smiles return smiles Now let's actually convert all these compounds to smiles. This conversion will take a few minutes so might not be a bad spot to go grab a coffee or tea and take a break while this is running! Note that this conversion will sometimes fail so we've added some error handling to catch these cases below. smiles_map = {} for i, compound in enumerate ( drugs ): try : smiles_map [ compound ] = compound_to_smiles ( compound ) except : print ( "Errored on %s " % i ) continue Errored on 162 Errored on 303 smiles_data = raw_data # map drug name to smiles string smiles_data [ 'drug' ] = smiles_data [ 'drug' ]. apply ( lambda x : smiles_map [ x ] if x in smiles_map else None ) # preview smiles data smiles_data. loc [ smiles_data. index [: 5 ]] label drug n1 n2 0 2 CC1(C(N2C(S1)C(C2=O)NC(=O)COC3=CC=CC=C3)C(=O)[...-10. 404-18. 1929 1 3 CC1=C2COC(=O)C2=C(C(=C1OC)CC=C(C)CCC(=O)OCCN3C...-12. 4453-11. 7175 2 4 CC1=CC(=CC(=C1)OCC2CNC(=O)O2)C-8. 65572-17. 7753 3 5 COC1=C(C=C2C(=C1)C(=NC(=N2)N3CCN(CC3)C(=O)C4CC...-11. 5048 16. 0825 4 6 CC(C)N1C2=CC=CC=C2C(=C1C=CC(CC(CC(=O)[O-])O)O)...-11. 1354-14. 553 Hooray, we have mapped each drug name to its corresponding smiles code. Now, we need to look at the data and remove as much noise as possible. De-noising data In machine learning, we know that there is no free lunch. You will need to spend time analyzing and understanding your data in order to frame your problem and determine the appropriate model framework. Treatment of your data will depend on the conclusions you gather from this process. Questions to ask yourself: What are you trying to accomplish? What is your assay? What is the structure of the data? Does the data make sense? What has been tried previously? For this project (respectively): I would like to build a model capable of predicting the affinity of an arbitrary small molecule drug to a particular ion channel protein For an input drug, data describing channel inhibition A few hundred drugs, with n=2 | deepchem.pdf |
Will need to look more closely at the dataset* Nothing on this particular protein *This will involve plotting, so we will import matplotlib and seaborn. We will also need to look at molecular structures, so we will import rdkit. We will also use the seaborn library which you can install with conda install seaborn. import matplotlib. pyplot as plt % matplotlib inline import seaborn as sns sns. set_style ( 'white' ) from rdkit import Chem from rdkit. Chem import All Chem from rdkit. Chem import Draw, Py Mol, rd FMCS from rdkit. Chem. Draw import IPython Console from rdkit import rd Base import numpy as np Our goal is to build a small molecule model, so let's make sure our molecules are all small. This can be approximated by the length of each smiles string. smiles_data [ 'len' ] = [ len ( i ) if i is not None else 0 for i in smiles_data [ 'drug' ]] smiles_lens = [ len ( i ) if i is not None else 0 for i in smiles_data [ 'drug' ]] sns. histplot ( smiles_lens ) plt. xlabel ( 'len(smiles)' ) plt. ylabel ( 'probability' ) Text(0, 0. 5, 'probability') Some of these look rather large, len(smiles) > 150. Let's see what they look like. # indices of large looking molecules suspiciously_large = np. where ( np. array ( smiles_lens ) > 150 )[ 0 ] # corresponding smiles string long_smiles = smiles_data. loc [ smiles_data. index [ suspiciously_large ]][ 'drug' ]. values # look Draw. _Mols To Grid Image ([ Chem. Mol From Smiles ( i ) for i in long_smiles ], mols Per Row = 6 ) As suspected, these are not small molecules, so we will remove them from the dataset. The argument here is that these molecules could register as inhibitors simply because they are large. They are more likely to sterically block the channel, rather than diffuse inside and bind (which is what we are interested in). The lesson here is to remove data that does not fit your use case. # drop large molecules smiles_data = smiles_data [ ~ smiles_data [ 'drug' ]. isin ( long_smiles )] Now, let's look at the numerical structure of the dataset. First, check for Na Ns. | deepchem.pdf |
nan_rows = smiles_data [ smiles_data. isnull (). T. any (). T ] nan_rows [[ 'n1', 'n2' ]] n1 n2 62 Na N-7. 8266 162-12. 8456-11. 4627 175 Na N-6. 61225 187 Na N-8. 23326 233-8. 21781 Na N 262 Na N-12. 8788 288 Na N-2. 34264 300 Na N-8. 19936 301 Na N-10. 4633 303-5. 61374 8. 42267 311 Na N-8. 78722 I don't trust n=1, so I will throw these out. Then, let's examine the distribution of n1 and n2. df = smiles_data. dropna ( axis = 0, how = 'any' ) # seaborn jointplot will allow us to compare n1 and n2, and plot each marginal sns. jointplot ( x = 'n1', y = 'n2', data = smiles_data ) <seaborn. axisgrid. Joint Grid at 0x14c4e37d0> We see that most of the data is contained in the gaussian-ish blob centered a bit below zero. We see that there are a few clearly active datapoints located in the bottom left, and one on the top right. These are all distinguished from the majority of the data. How do we handle the data in the blob? Because n1 and n2 represent the same measurement, ideally they would be of the same value. This plot should be tightly aligned to the diagonal, and the pearson correlation coefficient should be 1. We see this is not the case. This helps gives us an idea of the error of our assay. Let's look at the error more closely, plotting in the distribution of (n1-n2). diff_df = df [ 'n1' ] - df [ 'n2' ] sns. histplot ( diff_df ) plt. xlabel ( 'difference in n' ) plt. ylabel ( 'probability' ) Text(0, 0. 5, 'probability') | deepchem.pdf |
This looks pretty gaussian, let's get the 95% confidence interval by fitting a gaussian via scipy, and taking 2*the standard deviation from scipy import stats mean, std = stats. norm. fit ( np. asarray ( diff_df, dtype = np. float32 )) ci_95 = std * 2 ci_95 17. 75387954711914 Now, I don't trust the data outside of the confidence interval, and will therefore drop these datapoints from df. For example, in the plot above, at least one datapoint has n1-n2 > 60. This is disconcerting. noisy = diff_df [ abs ( diff_df ) > ci_95 ] df = df. drop ( noisy. index ) sns. jointplot ( x = 'n1', y = 'n2', data = df ) <seaborn. axisgrid. Joint Grid at 0x15a363c10> Now that data looks much better! So, let's average n1 and n2, and take the error bar to be ci_95. avg_df = df [[ 'label', 'drug' ]]. copy () n_avg = df [[ 'n1', 'n2' ]]. mean ( axis = 1 ) avg_df [ 'n' ] = n_avg avg_df. sort_values ( 'n', inplace = True ) Now, let's look at the sorted data with error bars. plt. errorbar ( np. arange ( avg_df. shape [ 0 ]), avg_df [ 'n' ], yerr = ci_95, fmt = 'o' ) plt. xlabel ( 'drug, sorted' ) plt. ylabel ( 'activity' ) Text(0, 0. 5, 'activity') | deepchem.pdf |
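Before moving on to pick out actives, it is worth quantifying what the de-noising step actually bought us. The short check below is optional and not part of the original workflow; it simply recomputes the agreement between the two replicate measurements before and after dropping the points outside the confidence interval.
from scipy import stats

# replicate pairs prior to the confidence-interval filter
before = smiles_data.dropna(axis=0, how='any')
r_before, _ = stats.pearsonr(np.asarray(before['n1'], dtype=float), np.asarray(before['n2'], dtype=float))
r_after, _ = stats.pearsonr(np.asarray(df['n1'], dtype=float), np.asarray(df['n2'], dtype=float))
print('Pearson r between n1 and n2 before filtering: %.3f' % r_before)
print('Pearson r between n1 and n2 after filtering: %.3f' % r_after)
print('dropped %d of %d replicate pairs as noisy' % (len(noisy), len(before)))
If the filter is doing its job, the correlation between replicates should move noticeably closer to 1 while only a small fraction of the data is discarded.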
Now, let's identify our active compounds. In my case, this required domain knowledge. Having worked in this area, and having consulted with professors specializing on this channel, I am interested in compounds where the absolute value of the activity is greater than 25. This relates to the desired drug potency we would like to model. If you are not certain how to draw the line between active and inactive, this cutoff could potentially be treated as a hyperparameter. actives = avg_df [ abs ( avg_df [ 'n' ])-ci_95 > 25 ][ 'n' ] plt. errorbar ( np. arange ( actives. shape [ 0 ]), actives, yerr = ci_95, fmt = 'o' ) <Errorbar Container object of 3 artists> # summary print ( raw_data. shape, avg_df. shape, len ( actives. index )) (430, 5) (392, 3) 6 In summary, we have: Removed data that did not address the question we hope to answer (small molecules only) Dropped Na Ns Determined the noise of our measurements Removed exceptionally noisy datapoints Identified actives (using domain knowledge to determine a threshold) Determine model type, final form of dataset, and sanity load Now, what model framework should we use? Given that we have 392 datapoints and 6 actives, this data will be used to build a low data one-shot classifier (10. 1021/acscentsci. 6b00367). If there were datasets of similar character, transfer learning could potentially be used, but this is not the case at the moment. Let's apply logic to our dataframe in order to cast it into a binary format, suitable for classification. # 1 if condition for active is met, 0 otherwise avg_df. loc [:, 'active' ] = ( abs ( avg_df [ 'n' ])-ci_95 > 25 ). astype ( int ) Now, save this to file. avg_df. to_csv ( 'modulators. csv', index = False ) Now, we will convert this dataframe to a Deep Chem dataset. | deepchem.pdf |
dataset_file = 'modulators. csv' task = [ 'active' ] featurizer_func = dc. feat. Conv Mol Featurizer () loader = dc. data. CSVLoader ( tasks = task, feature_field = 'drug', featurizer = featurizer_func ) dataset = loader. create_dataset ( dataset_file ) Lastly, it is often advantageous to numerically transform the data in some way. For example, sometimes it is useful to normalize the data, or to zero the mean. This depends in the task at hand. Built into Deep Chem are many useful transformers, located in the deepchem. transformers. transformers base class. Because this is a classification model, and the number of actives is low, I will apply a balancing transformer. I treated this transformer as a hyperparameter when I began training models. It proved to unambiguously improve model performance. transformer = dc. trans. Balancing Transformer ( dataset = dataset ) dataset = transformer. transform ( dataset ) Now let's save the balanced dataset object to disk, and then reload it as a sanity check. dc. utils. save_to_disk ( dataset, 'balanced_dataset. joblib' ) balanced_dataset = dc. utils. load_from_disk ( 'balanced_dataset. joblib' ) Tutorial written by Keri Mc Kiernan (github. com/kmckiern) on September 8, 2016 Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Discord The Deep Chem Discord hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! Bibliography [2] Anderson, Eric, Gilman D. Veith, and David Weininger. "SMILES, a line notation and computerized interpreter for chemical structures. " US Environmental Protection Agency, Environmental Research Laboratory, 1987. Citing This Tutorial If you found this tutorial useful please consider citing it using the provided Bib Te X. @manual { Intro10, title = { Creating a high fidelity model from experimental data }, organization = { Deep Chem }, author = { Eastman, Peter and Ramsundar, Bharath }, howpublished = { \ url { https : // github. com / deepchem / deepchem / tree / master / examples / tutorials }}, year = { 2021 }, } | deepchem.pdf |
Putting Multitask Learning to Work This notebook walks through the creation of multitask models on MUV [1]. The goal is to demonstrate how multitask methods can provide improved performance in situations with little or very unbalanced data. Colab This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. O p e n i n C o l a b O p e n i n C o l a b ! pip install --pre deepchem import deepchem deepchem. __version__ The MUV dataset is a challenging benchmark in molecular design that consists of 17 different "targets" where there are only a few "active" compounds per target. There are 93,087 compounds in total, yet no task has more than 30 active compounds, and many have even less. Training a model with such a small number of positive examples is very challenging. Multitask models address this by training a single model that predicts all the different targets at once. If a feature is useful for predicting one task, it often is useful for predicting several other tasks as well. Each added task makes it easier to learn important features, which improves performance on other tasks [2]. To get started, let's load the MUV dataset. The Molecule Net loader function automatically splits it into training, validation, and test sets. Because there are so few positive examples, we use stratified splitting to ensure the test set has enough of them to evaluate. import deepchem as dc import numpy as np tasks, datasets, transformers = dc. molnet. load_muv ( split = 'stratified' ) train_dataset, valid_dataset, test_dataset = datasets Now let's train a model on it. We'll use a Multitask Classifier, which is a simple stack of fully connected layers. n_tasks = len ( tasks ) n_features = train_dataset. get_data_shape ()[ 0 ] model = dc. models. Multitask Classifier ( n_tasks, n_features ) model. fit ( train_dataset ) 0. 0004961589723825455 Let's see how well it does on the test set. We loop over the 17 tasks and compute the ROC AUC for each one. y_true = test_dataset. y y_pred = model. predict ( test_dataset ) metric = dc. metrics. roc_auc_score for i in range ( n_tasks ): score = metric ( dc. metrics. to_one_hot ( y_true [:, i ]), y_pred [:, i ]) print ( tasks [ i ], score ) MUV-466 0. 9207684040838259 MUV-548 0. 7480655561526062 MUV-600 0. 9927995701235895 MUV-644 0. 9974207415368082 MUV-652 0. 7823481998925309 MUV-689 0. 6636843990686011 MUV-692 0. 6319093677234462 MUV-712 0. 7787838079885365 MUV-713 0. 7910711087229088 MUV-733 0. 4401307540748701 MUV-737 0. 34679383843811573 MUV-810 0. 9564571019165323 MUV-832 0. 9991044241447251 MUV-846 0. 7519881783987103 MUV-852 0. 8516747268493642 MUV-858 0. 5906591438294824 MUV-859 0. 5962954008166774 Not bad! Recall that random guessing would produce a ROC AUC score of 0. 5, and a perfect predictor would score 1. 0. Most of the tasks did much better than random guessing, and many of them are above 0. 9. | deepchem.pdf |
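If you would rather have a single summary number than the per-task loop above, you can wrap the metric in dc.metrics.Metric with a task averager and let model.evaluate compute the mean across all 17 tasks. This short check should agree with the average of the per-task scores printed above.
mean_metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
print('mean test ROC AUC:', model.evaluate(test_dataset, [mean_metric], transformers))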
Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Gitter The Deep Chem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! Bibliography [1] https://pubs. acs. org/doi/10. 1021/ci8002649 [2] https://pubs. acs. org/doi/abs/10. 1021/acs. jcim. 7b00146 | deepchem.pdf |
Tutorial Part 13: Modeling Protein-Ligand Interactions By Nathan C. Frey | Twitter and Bharath Ramsundar | Twitter In this tutorial, we'll walk you through the use of machine learning and molecular docking methods to predict the binding energy of a protein-ligand complex. Recall that a ligand is some small molecule which interacts (usually non-covalently) with a protein. Molecular docking performs geometric calculations to find a βbinding poseβ with a small molecule interacting with a protein in a suitable binding pocket (that is, a region on the protein which has a groove in which the small molecule can rest). The structure of proteins can be determined experimentally with techniques like Cryo-EM or X-ray crystallography. This can be a powerful tool for structure-based drug discovery. For more info on docking, read the Auto Dock Vina paper and the deepchem. dock documentation. There are many graphical user and command line interfaces (like Auto Dock) for performing molecular docking. Here, we show how docking can be performed programmatically with Deep Chem, which enables automation and easy integration with machine learning pipelines. As you work through the tutorial, you'll trace an arc including 1. Loading a protein-ligand complex dataset ( PDBbind ) 2. Performing programmatic molecular docking 3. Featurizing protein-ligand complexes with interaction fingerprints 4. Fitting a random forest model and predicting binding affinities To start the tutorial, we'll use a simple pre-processed dataset file that comes in the form of a gzipped file. Each row is a molecular system, and each column represents a different piece of information about that system. For instance, in this example, every row reflects a protein-ligand complex, and the following columns are present: a unique complex identifier; the SMILES string of the ligand; the binding affinity (Ki) of the ligand to the protein in the complex; a Python list of all lines in a PDB file for the protein alone; and a Python list of all lines in a ligand file for the ligand alone. Colab This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. O p e n i n C o l a b O p e n i n C o l a b Setup To run Deep Chem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. ! pip install -q condacolab import condacolab condacolab. install () WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the syst em package manager. It is recommended to use a virtual environment instead: https://pip. pypa. io/warnings/venv β¨β¨ Everything looks OK! ! conda install -c conda-forge openmm Collecting package metadata (current_repodata. 
json): done
Solving environment: done
# All requested packages already installed.
! pip install deepchem
Looking in indexes: https://pypi. org/simple, https://us-python. pkg. dev/colab-wheels/public/simple/
Requirement already satisfied: deepchem in /usr/local/lib/python3. 10/site-packages (2. 7. 1)
Requirement already satisfied: numpy>=1. 21 in /usr/local/lib/python3. 10/site-packages (from deepchem) (1. 24. 3)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3. 10/site-packages (from deepchem) (1. 2. 2)
Requirement already satisfied: joblib in /usr/local/lib/python3. 10/site-packages (from deepchem) (1. 2. 0)
Requirement already satisfied: scipy<1. 9 in /usr/local/lib/python3. 10/site-packages (from deepchem) (1. 8. 1)
Requirement already satisfied: pandas in /usr/local/lib/python3. 10/site-packages (from deepchem) (2. 0. 1)
Requirement already satisfied: rdkit in /usr/local/lib/python3. 10/site-packages (from deepchem) (2023. 3. 1)
Requirement already satisfied: python-dateutil>=2. 8. 2 in /usr/local/lib/python3. 10/site-packages (from pandas->deepchem) (2. 8. 2)
Requirement already satisfied: pytz>=2020. 1 in /usr/local/lib/python3. 10/site-packages (from pandas->deepchem) (2023. 3)
Requirement already satisfied: tzdata>=2022. 1 in /usr/local/lib/python3. 10/site-packages (from pandas->deepchem) (2023. 3)
Requirement already satisfied: Pillow in /usr/local/lib/python3. 10/site-packages (from rdkit->deepchem) (9. 5. 0)
Requirement already satisfied: threadpoolctl>=2. 0. 0 in /usr/local/lib/python3. 10/site-packages (from scikit-learn->deepchem) (3. 1. 0)
Requirement already satisfied: six>=1. 5 in /usr/local/lib/python3. 10/site-packages (from python-dateutil>=2. 8. 2->pandas->deepchem) (1. 16. 0)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip. pypa. io/warnings/venv
! conda install -c conda-forge pdbfixer
Collecting package metadata (current_repodata. json): done
Solving environment: done
# All requested packages already installed.
! conda install -c conda-forge vina
Collecting package metadata (current_repodata. json): done
Solving environment: done
# All requested packages already installed.
Protein-ligand complex data
It is really helpful to visualize proteins and ligands when doing docking. Unfortunately, Google Colab doesn't currently support the Jupyter widgets we need to do that visualization. Install MDTraj and nglview on your local machine to view the protein-ligand complexes we're working with.
! pip install -q mdtraj nglview
# !jupyter-nbextension enable nglview --py --sys-prefix # for jupyter notebook
# !jupyter labextension install nglview-js-widgets # for jupyter lab
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip. pypa. io/warnings/venv
import os
import numpy as np
import pandas as pd
import tempfile
from rdkit import Chem
from rdkit. Chem import All Chem
import deepchem as dc
from deepchem. utils import download_url, load_from_disk
Skipped loading modules with pytorch-geometric dependency, missing a dependency. No module named 'torch_geometric'
Skipped loading modules with pytorch-geometric dependency, missing a dependency. cannot import name 'DMPNN' from 'deepchem. models. torch_models' (/usr/local/lib/python3. 10/site-packages/deepchem/models/torch_models/__init__. py)
Skipped loading modules with pytorch-lightning dependency, missing a dependency. No module named 'pytorch_lightning'
Skipped loading some Jax models, missing a dependency. No module named 'haiku'
To illustrate the docking procedure, here we'll use a csv that contains SMILES strings of ligands as well as PDB files for the ligand and protein targets from PDBbind. Later, we'll use the labels to train a model to predict binding affinities. We'll also show how to download and featurize PDBbind to train a model from scratch.
data_dir = dc. utils. get_data_dir () dataset_file = os. path. join ( data_dir, "pdbbind_core_df. csv. gz" ) if not os. path. exists ( dataset_file ): print ( 'File does not exist. Downloading file... ' ) download_url ( "https://s3-us-west-1. amazonaws. com/deepchem. io/datasets/pdbbind_core_df. csv. gz" ) print ( 'File downloaded... ' ) raw_dataset = load_from_disk ( dataset_file ) raw_dataset = raw_dataset [[ 'pdb_id', 'smiles', 'label' ]] Let's see what raw_dataset looks like: raw_dataset. head ( 2 ) pdb_id smiles label 0 2d3u CC1CCCCC1S(O)(O)NC1CC(C2CCC(CN)CC2)SC1C(O)O 6. 92 1 3cyx CC(C)(C)NC(O)C1CC2CCCCC2C[NH+]1CC(O)C(CC1CCCCC... 8. 00 Fixing PDB files Next, let's get some PDB protein files for visualization and docking. We'll use the PDB IDs from our raw_dataset and download the pdb files directly from the Protein Data Bank using pdbfixer. We'll also sanitize the structures with RDKit. This ensures that any problems with the protein and ligand files (non-standard residues, chemical validity, etc. ) are corrected. Feel free to modify these cells and pdbids to consider new protein-ligand complexes. We note here that PDB files are complex and human judgement is required to prepare protein structures for docking. Deep Chem includes a number of docking utilites to assist you with preparing protein files, but results should be inspected before docking is attempted. from openmm. app import PDBFile from pdbfixer import PDBFixer from deepchem. utils. vina_utils import prepare_inputs # consider one protein-ligand complex for visualization pdbid = raw_dataset [ 'pdb_id' ]. iloc [ 1 ] ligand = raw_dataset [ 'smiles' ]. iloc [ 1 ] %%time fixer = PDBFixer ( pdbid = pdbid ) PDBFile. write File ( fixer. topology, fixer. positions, open ( ' %s. pdb' % ( pdbid ), 'w' )) p, m = None, None # fix protein, optimize ligand geometry, and sanitize molecules try : p, m = prepare_inputs ( ' %s. pdb' % ( pdbid ), ligand ) except : print ( ' %s failed PDB fixing' % ( pdbid )) if p and m : # protein and molecule are readable by RDKit print ( pdbid, p. Get Num Atoms ()) Chem. rdmolfiles. Mol To PDBFile ( p, ' %s. pdb' % ( pdbid )) Chem. rdmolfiles. Mol To PDBFile ( m, 'ligand_ %s. pdb' % ( pdbid )) <timed exec>:7: Deprecation Warning: Call to deprecated function prepare_inputs. Please use the corresponding fun ction in deepchem. utils. docking_utils. Warning: importing 'simtk. openmm' is deprecated. Import 'openmm' instead. 3cyx 1510 CPU times: user 2. 04 s, sys: 157 ms, total: 2. 2 s Wall time: 4. 32 s Visualization If you're outside of Colab, you can expand these cells and use MDTraj and nglview to visualize proteins and ligands. import mdtraj as md import nglview from IPython. display import display, Image Let's take a look at the first protein ligand pair in our dataset: protein_mdtraj = md. load_pdb ( '3cyx. pdb' ) ligand_mdtraj = md. load_pdb ( 'ligand_3cyx. pdb' ) | deepchem.pdf |
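Before rendering anything, it does not hurt to confirm that both structures loaded as expected. A quick optional check with MDTraj:
print('protein: %d atoms, %d residues' % (protein_mdtraj.n_atoms, protein_mdtraj.n_residues))
print('ligand: %d atoms' % ligand_mdtraj.n_atoms)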
We'll use the convenience function nglview. show_mdtraj in order to view our proteins and ligands. Note that this will only work if you uncommented the above cell, installed nglview, and enabled the necessary notebook extensions. v = nglview. show_mdtraj ( ligand_mdtraj ) display ( v ) # interactive view outside Colab NGLWidget() Now that we have an idea of what the ligand looks like, let's take a look at our protein: view = nglview. show_mdtraj ( protein_mdtraj ) display ( view ) # interactive view outside Colab NGLWidget() Molecular Docking Ok, now that we've got our data and basic visualization tools up and running, let's see if we can use molecular docking to estimate the binding affinities between our protein ligand systems. There are three steps to setting up a docking job, and you should experiment with different settings. The three things we need to specify are 1) how to identify binding pockets in the target protein; 2) how to generate poses (geometric configurations) of a ligand in a binding pocket; and 3) how to "score" a pose. Remember, our goal is to identify candidate ligands that strongly interact with a target protein, which is reflected by the score. Deep Chem has a simple built-in method for identifying binding pockets in proteins. It is based on the convex hull method. The method works by creating a 3D polyhedron (convex hull) around a protein structure and identifying the surface atoms of the protein as the ones closest to the convex hull. Some biochemical properties are considered, so the method is not purely geometrical. It has the advantage of having a low computational cost and is good enough for our purposes. finder = dc. dock. binding_pocket. Convex Hull Pocket Finder () pockets = finder. find_pockets ( '3cyx. pdb' ) len ( pockets ) # number of identified pockets 36 Pose generation is quite complex. Luckily, using Deep Chem's pose generator will install the Auto Dock Vina engine under | deepchem.pdf |
the hood, allowing us to get up and running generating poses quickly. vpg = dc. dock. pose_generation. Vina Pose Generator () We could specify a pose scoring function from deepchem. dock. pose_scoring, which includes things like repulsive and hydrophobic interactions and hydrogen bonding. Vina will take care of this, so instead we'll allow Vina to compute scores for poses. ! mkdir -p vina_test %%time complexes, scores = vpg. generate_poses ( molecular_complex = ( '3cyx. pdb', 'ligand_3cyx. pdb' ), # protein-ligand files for docking, out_dir = 'vina_test', generate_scores = True ) CPU times: user 41min 4s, sys: 21. 9 s, total: 41min 26s Wall time: 28min 32s /usr/local/lib/python3. 10/site-packages/vina/vina. py:260: Deprecation Warning: `np. int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np. int`, you may wish to use e. g. `np. int64` or `np. int32` to specify the precision. If yo u wish to review your current use, check the release note link for additional information. Deprecated in Num Py 1. 20; for more details and guidance: https://numpy. org/devdocs/release/1. 20. 0-notes. html#dep recations self. _voxels = np. ceil(np. array(box_size) / self. _spacing). astype(np. int) We used the default value for num_modes when generating poses, so Vina will return the 9 lowest energy poses it found in units of kcal/mol. scores [-9. 484, -9. 405, -9. 195, -9. 151, -8. 9, -8. 696, -8. 687, -8. 633, -8. 557] Can we view the complex with both protein and ligand? Yes, but we'll need to combine the molecules into a single RDkit molecule. complex_mol = Chem. Combine Mols ( complexes [ 0 ][ 0 ], complexes [ 0 ][ 1 ]) Let's now visualize our complex. We can see that the ligand slots into a pocket of the protein. v = nglview. show_rdkit ( complex_mol ) display ( v ) NGLWidget() Now that we understand each piece of the process, we can put it all together using Deep Chem's Docker class. Docker creates a generator that yields tuples of posed complexes and docking scores. docker = dc. dock. docking. Docker ( pose_generator = vpg ) posed_complex, score = next ( docker. dock ( molecular_complex = ( '3cyx. pdb', 'ligand_3cyx. pdb' ), use_pose_generator_scores = True )) Modeling Binding Affinity Docking is a useful, albeit coarse-grained tool for predicting protein-ligand binding affinities. However, it takes some time, especially for large-scale virtual screenings where we might be considering different protein targets and thousands of potential ligands. We might naturally ask then, can we train a machine learning model to predict docking scores? Let's try and find out! | deepchem.pdf |
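Because Docker.dock returns a generator of (posed complex, score) tuples, one natural way to connect docking to machine learning is to batch it over many prepared complexes and collect the scores as labels. The sketch below is illustrative only: the list of file pairs is hypothetical (extend it with your own prepared structures), each call re-runs Vina and is therefore slow, and a real virtual screen would also need error handling.
# hypothetical list of prepared (protein, ligand) PDB file pairs
complex_files = [('3cyx.pdb', 'ligand_3cyx.pdb')]

docking_scores = {}
for protein_file, ligand_file in complex_files:
    # keep the top-scoring pose for each complex
    _, score = next(docker.dock(molecular_complex=(protein_file, ligand_file),
                                use_pose_generator_scores=True))
    docking_scores[(protein_file, ligand_file)] = score
print(docking_scores)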
We'll show how to download the PDBbind dataset. We can use the loader in Molecule Net to get the 4852 protein-ligand complexes from the "refined" set or the entire "general" set in PDBbind. For simplicity, we'll stick with the ~100 complexes we've already processed to train our models. Next, we'll need a way to transform our protein-ligand complexes into representations which can be used by learning algorithms. Ideally, we'd have neural protein-ligand complex fingerprints, but Deep Chem doesn't yet have a good learned fingerprint of this sort. We do however have well-tuned manual featurizers that can help us with our challenge here. We'll make use of two types of fingerprints in the rest of the tutorial, the Circular Fingerprint and Contact Circular Fingerprint. Deep Chem also has voxelizers and grid descriptors that convert a 3D volume containing an arragment of atoms into a fingerprint. These featurizers are really useful for understanding protein-ligand complexes since they allow us to translate complexes into vectors that can be passed into a simple machine learning algorithm. First, we'll create circular fingerprints. These convert small molecules into a vector of fragments. pdbids = raw_dataset [ 'pdb_id' ]. values ligand_smiles = raw_dataset [ 'smiles' ]. values %%time for ( pdbid, ligand ) in zip ( pdbids, ligand_smiles ): fixer = PDBFixer ( url = 'https://files. rcsb. org/download/ %s. pdb' % ( pdbid )) PDBFile. write File ( fixer. topology, fixer. positions, open ( ' %s. pdb' % ( pdbid ), 'w' )) p, m = None, None # skip pdb fixing for speed try : p, m = prepare_inputs ( ' %s. pdb' % ( pdbid ), ligand, replace_nonstandard_residues = False, remove_heterogens = False, remove_water = False, add_hydrogens = False ) except : print ( ' %s failed sanitization' % ( pdbid )) if p and m : # protein and molecule are readable by RDKit Chem. rdmolfiles. Mol To PDBFile ( p, ' %s. pdb' % ( pdbid )) Chem. rdmolfiles. Mol To PDBFile ( m, 'ligand_ %s. pdb' % ( pdbid )) <timed exec>:8: Deprecation Warning: Call to deprecated function prepare_inputs. Please use the corresponding fun ction in deepchem. utils. docking_utils. [15:11:45] UFFTYPER: Unrecognized atom type: S_5+4 (7) <timed exec>:8: Deprecation Warning: Call to deprecated function prepare_inputs. Please use the corresponding fun ction in deepchem. utils. docking_utils. 3cyx failed sanitization <timed exec>:8: Deprecation Warning: Call to deprecated function prepare_inputs. Please use the corresponding fun ction in deepchem. utils. docking_utils. <timed exec>:8: Deprecation Warning: Call to deprecated function prepare_inputs. Please use the corresponding fun ction in deepchem. utils. docking_utils. <timed exec>:8: Deprecation Warning: Call to deprecated function prepare_inputs. Please use the corresponding fun ction in deepchem. utils. docking_utils. <timed exec>:8: Deprecation Warning: Call to deprecated function prepare_inputs. Please use the corresponding fun ction in deepchem. utils. docking_utils. <timed exec>:8: Deprecation Warning: Call to deprecated function prepare_inputs. Please use the corresponding fun ction in deepchem. utils. docking_utils. [15:12:02] UFFTYPER: Warning: hybridization set to SP3 for atom 17 <timed exec>:8: Deprecation Warning: Call to deprecated function prepare_inputs. Please use the corresponding fun ction in deepchem. utils. docking_utils. [15:12:04] UFFTYPER: Warning: hybridization set to SP3 for atom 6 <timed exec>:8: Deprecation Warning: Call to deprecated function prepare_inputs. 
Please use the corresponding function in deepchem. utils. docking_utils.
[... the same Deprecation Warning repeats for each remaining complex, interleaved with occasional RDKit UFFTYPER messages such as "Warning: hybridization set to SP3" and "Unrecognized atom type: S_5+4"; the repeated output is omitted here ...]
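To round out the workflow promised at the start of this tutorial (featurize the complexes, then fit a random forest and predict binding affinities), here is a minimal sketch of what the final steps might look like. It is an illustrative sketch rather than the tutorial's own code: it assumes all of the ligand SMILES parse cleanly with RDKit, and it uses ligand-only circular fingerprints as a stand-in for features that also encode the protein.
from sklearn.ensemble import RandomForestRegressor

# 2048-bit circular (ECFP-like) fingerprints of the ligands
featurizer = dc.feat.CircularFingerprint(size=2048)
features = featurizer.featurize(ligand_smiles)
labels = raw_dataset['label'].values

dataset = dc.data.NumpyDataset(X=features, y=labels, ids=pdbids)
train, test = dc.splits.RandomSplitter().train_test_split(dataset, frac_train=0.8)

# wrap a scikit-learn random forest in DeepChem's SklearnModel
model = dc.models.SklearnModel(RandomForestRegressor(n_estimators=100))
model.fit(train)

metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
print('train R^2:', model.evaluate(train, [metric]))
print('test R^2:', model.evaluate(test, [metric]))
A ContactCircularFingerprint featurization of the prepared protein-ligand PDB pairs could be swapped in for the ligand-only features to bring the protein structure into the model.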