SeqRecord(seq=Seq('TGTAACGAACGGTGCAATAGTGATCCACACCCAACGCCTGAAATCAGATCCAGG...CTG'), id='NC_005816.1', name='NC_005816', description='Yersinia pestis biovar Microtus str. 91001 plasmid pPCP1, complete sequence', dbxrefs=['Project:58037'])

SeqIO Objects

.count() Method

The .count() method of Biopython's Seq object behaves like the .count() method of Python strings: it returns the number of non-overlapping occurrences of a specific subsequence within the sequence.

from Bio.Seq import Seq

my_seq = Seq("AGTACACATTG")
count_a = my_seq.count('A')
count_tg = my_seq.count('TG')
print(count_a)   # 4
print(count_tg)  # 1

4
1

Mutable Seq objects

Just like a normal Python string, the Seq object is "read only", or in Python terminology, immutable. Apart from making the Seq object act like a string, this is also a useful default, since in many biological applications you want to ensure you are not changing your sequence data. If you do need to edit a sequence, however, you can convert it into a mutable sequence (a MutableSeq object) and do pretty much anything you want with it:

from Bio.Seq import MutableSeq

mutable_seq = MutableSeq("GCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA")
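Once you have a MutableSeq, the sequence supports in-place editing. The following is a minimal sketch; the specific edits below are illustrative additions, not from the original text:

from Bio.Seq import MutableSeq

mutable_seq = MutableSeq("GCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA")
mutable_seq[5] = "C"      # item assignment works on a MutableSeq
mutable_seq.remove("T")   # remove the first occurrence of T
mutable_seq.reverse()     # reverse the sequence in place
print(mutable_seq)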
Multisequence Alignment (MSA)

Proteins are made up of sequences of amino acids chained together. Their amino acid sequence determines their structure and function. Finding proteins with similar sequences, or homologous proteins, is very useful for identifying the structures and functions of newly discovered proteins, as well as for identifying their ancestry. Reference [2] shows an example of what a protein amino acid multisequence alignment may look like.

Colab

This tutorial and the rest in this sequence can be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link.

[Open in Colab]

HH-suite

This tutorial will show you the basics of how to use hh-suite. hh-suite is an open-source package for searching protein sequence alignments for homologous proteins. It is the current state of the art for building highly accurate multisequence alignments (MSAs) from a single sequence or from existing MSAs.

References:

[1] Steinegger M, Meier M, Mirdita M, Vöhringer H, Haunsberger S J, and Söding J (2019) HH-suite3 for fast remote homology detection and deep protein annotation. BMC Bioinformatics 20, 473. doi:10.1186/s12859-019-3019-7

[2] Kunzmann P, Mayer B E, and Hamacher K (2020) Substitution matrix based color schemes for sequence alignment visualization. BMC Bioinformatics 21, 209. https://doi.org/10.1186/s12859-020-3526-6

Setup

Let's start by importing the deepchem sequence_utils module and downloading a database to compare our query sequence to. hh-suite provides a set of HMM databases that will work with the software, which you can find here: http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs. dbCAN is a good one for this tutorial because it is a relatively small download.

from deepchem.utils import sequence_utils

%%bash
mkdir hh
cd hh
mkdir databases ; cd databases
wget http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/dbCAN-fam-V9.tar.gz
tar xzvf dbCAN-fam-V9.tar.gz

dbCAN-fam-V9_a3m.ffdata
dbCAN-fam-V9_a3m.ffindex
dbCAN-fam-V9_hhm.ffdata
dbCAN-fam-V9_hhm.ffindex
dbCAN-fam-V9_cs219.ffdata
dbCAN-fam-V9_cs219.ffindex
dbCAN-fam-V9.md5sum

--2022-02-11 12:47:57-- http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/dbCAN-fam-V9.tar.gz
Resolving wwwuser.gwdg.de (wwwuser.gwdg.de)... 134.76.10.111
Connecting to wwwuser.gwdg.de (wwwuser.gwdg.de)|134.76.10.111|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 25882327 (25M) [application/x-gzip]
Saving to: 'dbCAN-fam-V9.tar.gz'
[... download progress omitted ...]
2022-02-11 12:48:12 (1.72 MB/s) - 'dbCAN-fam-V9.tar.gz' saved [25882327/25882327]

Using hhsearch

hhblits and hhsearch are the main functions in hh-suite that identify homologous proteins. They do this by calculating a profile hidden Markov model (HMM) from a given alignment and searching a reference database of HMMs using the Viterbi algorithm. The most similar HMMs are then realigned and output to the user. To learn more, check out the original paper in the references above.

Run a function from hh-suite with no parameters to read its documentation.

! hhsearch
HHsearch 3.3.0
Search a database of HMMs with a query alignment or query HMM
(c) The HH-suite development team
Steinegger M, Meier M, Mirdita M, Vöhringer H, Haunsberger S J, and Söding J (2019)
HH-suite3 for fast remote homology detection and deep protein annotation.
BMC Bioinformatics, doi:10.1186/s12859-019-3019-7

Usage: hhsearch -i query -d database [options]
 -i <file>        input/query multiple sequence alignment (a2m, a3m, FASTA) or HMM

Options:
 -d <name>        database name (e.g. uniprot20_29Feb2012)
                  Multiple databases may be specified with '-d <db1> -d <db2> ...'
 -e [0,1]         E-value cutoff for inclusion in result alignment (def=0.001)

Input alignment format:
 -M a2m           use A2M/A3M (default): upper case = Match; lower case = Insert;
                  '-' = Delete; '.' = gaps aligned to inserts (may be omitted)
 -M first         use FASTA: columns with residue in 1st sequence are match states
 -M [0,100]       use FASTA: columns with fewer than X% gaps are match states
 -tags/-notags    do NOT / do neutralize His-, C-myc-, FLAG-tags, and trypsin
                  recognition sequence to background distribution (def=-notags)

Output options:
 -o <file>        write results in standard format to file (default=<infile.hhr>)
 -oa3m <file>     write result MSA with significant matches in a3m format
 -blasttab <name> write result in tabular BLAST format (compatible to -m 8 or -outfmt 6 output)
                  1     2      3           4      5         6        7      8    9      10   11   12
                  query target #match/tLen alnLen #mismatch #gapOpen qstart qend tstart tend eval score
 -add_cons        generate consensus sequence as master sequence of query MSA (default=don't)
 -hide_cons       don't show consensus sequence in alignments (default=show)
 -hide_pred       don't show predicted 2ndary structure in alignments (default=show)
 -hide_dssp       don't show DSSP 2ndary structure in alignments (default=show)
 -show_ssconf     show confidences for predicted 2ndary structure in alignments

Filter options applied to query MSA, database MSAs, and result MSA:
 -all             show all sequences in result MSA; do not filter result MSA
 -id [0,100]      maximum pairwise sequence identity (def=90)
 -diff [0,inf[    filter MSAs by selecting most diverse set of sequences, keeping
                  at least this many seqs in each MSA block of length 50
                  Zero and non-numerical values turn off the filtering. (def=100)
 -cov [0,100]     minimum coverage with master sequence (%) (def=0)
 -qid [0,100]     minimum sequence identity with master sequence (%) (def=0)
 -qsc [0,100]     minimum score per column with master sequence (default=-20.0)
 -neff [1,inf]    target diversity of multiple sequence alignment (default=off)
 -mark            do not filter out sequences marked by ">@" in their name line

HMM-HMM alignment options:
 -norealign       do NOT realign displayed hits with MAC algorithm (def=realign)
 -ovlp <int>      banded alignment: forbid <ovlp> largest diagonals |i-j| of DP matrix (def=0)
 -mact [0,1[      posterior prob threshold for MAC realignment controlling greedi-
                  ness at alignment ends: 0:global >0.1:local (default=0.35)
 -glob/-loc       use global/local alignment mode for searching/ranking (def=local)

Other options:
 -v <int>         verbose mode: 0:no screen output  1:only warnings  2:verbose (def=2)
 -cpu <int>       number of CPUs to use (for shared memory SMPs) (default=2)

An extended list of options can be obtained by calling 'hhblits -h all'

Example: hhsearch -i a.1.1.1.a3m -d scop70_1.71

Download databases from <http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/>.

- 12:48:13.127 ERROR: Database is missing (see -d)!

Let's do an example. Say we have a protein which we want to compare to an MSA in order to identify any homologous regions.
For this we can use hhsearch. Let's take a protein sequence and search through the dbCAN database to see if we can find any potentially homologous regions. First we will specify the sequence and save it as a FASTA file or a3m file so that it can be read by hhsearch. I pulled this sequence from the example query.a3m in the hh-suite data directory.

with open('protein.fasta', 'w') as f:
    f.write("""
>Uncharacterized bovine protein (Fragment)--PAGGQCtgi
WHLLTRPLRP--QGRLPGLRVKYVFLVWLGVFAGSWMAYTHYSSYAELCRGHICQVVICDQFRKGIISGSICQDLCHLHQVEWRTCLSSVPGQQVYSGLWQGKEVTIKCGIEESLNSKAGSDGAPRRELVLFDKPSRGTSIKEFREMTLSFLKANLGDLPSLPALVGRVLLMADFNKDNRVSLAEAKSVWALLQRNEFLLLLSLQEKEHASRLLGYCGDLYVTEGVPLSSWPGATLPPLLRPLLPPALHGALQQWLGPAWPWRAKIAMGLLEFVEDLFHGAYGNFYMCETTLANVGYTAKYDFRMADLQQVAPEAAVRRFLRGRRCEHSADCTYGRDCRAPCDTLMRQCKGDLVQPNLAKVCELLRDYLLPGAPAALRPELGKQLRTCTTLSGLASQVEAHHSLVLSHLKSLLWKEISDSRYT
""")

Then we can call hhsearch, specifying the query sequence with the -i flag, the database to search through with -d, and the output with -o.
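The deepchem wrapper below assembles this hhsearch command for us. For reference, the equivalent direct call, reconstructed from the Command line recorded in the .hhr output further down, looks roughly like the following (a sketch; your paths, CPU count, and E-value cutoff may differ):

! hhsearch -i protein.fasta -d hh/databases/dbCAN-fam-V9 -oa3m results.a3m -cpu 4 -e 0.001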
from deepchem.utils import sequence_utils

dataset_path = 'protein.fasta'
data_dir = 'hh/databases'
results = sequence_utils.hhsearch(dataset_path, database='dbCAN-fam-V9', data_dir=data_dir)

- 12:48:13.301 INFO: Search results will be written to /home/tony/github/deepchem/examples/tutorials/protein.hhr
- 12:48:13.331 INFO: /home/tony/github/deepchem/examples/tutorials/protein.fasta is in A2M, A3M or FASTA format
- 12:48:13.331 WARNING: Input alignment /home/tony/github/deepchem/examples/tutorials/protein.fasta looks like aligned FASTA instead of A2M/A3M format. Consider using '-M first' or '-M 50'
- 12:48:13.331 INFO: NOTE: Use the '-add_cons' option to calculate a consensus sequence as first sequence of the alignment with hhconsensus or hhmake.
- 12:48:13.331 INFO: Searching 683 database HHMs without prefiltering
- 12:48:13.332 INFO: Iteration 1
- 12:48:13.420 INFO: Scoring 683 HMMs using HMM-HMM Viterbi alignment
- 12:48:13.460 INFO: Alternative alignment: 0
- 12:48:13.611 INFO: 683 alignments done
- 12:48:13.612 INFO: Alternative alignment: 1
- 12:48:13.625 INFO: 38 alignments done
- 12:48:13.625 INFO: Alternative alignment: 2
- 12:48:13.629 INFO: 3 alignments done
- 12:48:13.629 INFO: Alternative alignment: 3
- 12:48:13.655 INFO: Premerge done
- 12:48:13.656 INFO: Realigning 10 HMM-HMM alignments using Maximum Accuracy algorithm
- 12:48:13.692 INFO: 0 sequences belonging to 0 database HMMs found with an E-value < 0.001
- 12:48:13.692 INFO: Number of effective sequences of resulting query HMM: Neff = 1

#open the results and print them
f = open("protein.hhr", "r")
print(f.read())

Query         Uncharacterized bovine protein (Fragment)
Match_columns 431
No_of_seqs    1 out of 1
Neff          1
Searched_HMMs 683
Date          Fri Feb 11 12:48:13 2022
Command       hhsearch -i /home/tony/github/deepchem/examples/tutorials/protein.fasta -d hh/databases/dbCAN-fam-V9 -oa3m /home/tony/github/deepchem/examples/tutorials/results.a3m -cpu 4 -e 0.001

 No Hit                             Prob E-value P-value  Score    SS Cols Query HMM  Template HMM
  1 ABJ15796.1|231-344|9.6e-33       8.2     2.9  0.0042   25.2   0.0   13  224-236     40-52  (116)
  2 lcl|consensus                    5.1     5.2  0.0076   17.1   0.0   14  182-195      1-14  (21)
  3 ABW08129.1|GT4|GT97||563-891     4.8     5.7  0.0084   26.6   0.0   46  104-150     93-140 (329)
  4 AEO62162.1|AA13||19-250          4.6     6    0.0087   25.5   0.0   18  330-347    139-156 (232)
  5 BAF49076.1|GH5_26.hmm|8.3e-11|   2.4    13    0.02     21.9   0.0   12  287-298     45-56  (141)
  6 BBD44721.1 Hypothetical protei   2.3    14    0.02     25.7   0.0   81  110-221    326-406 (552)
  7 AAU92474.1|CBM2|2-82|1.9e-23     2.3    14    0.02     19.1   0.0   19  222-240     13-33  (104)
  8 BAX82587.1 hypothetical protei   2.3    14    0.021    25.7   0.0   25  104-128    466-490 (656)
  9 AHE46274.1|GH13_13.hmm|1.6e-20   2.0    16    0.024    24.1   0.0   45  143-199     99-143 (393)
 10 ACF55060.1|GH13_13.hmm|2.5e-47   1.9    17    0.025    23.2   0.0   22  144-165     74-95  (330)

No 1
>ABJ15796.1|231-344|9.6e-33
Probab=8.16  E-value=2.9  Score=25.22  Aligned_cols=13  Identities=46%  Similarity=0.795  Sum_probs=10.2  Template_Neff=3.400

Q Uncharacterize  223 YCGDLYVTEGVPL  235 (430)
Q Consensus       224 ycgdlyvtegvpl  236 (431)
                      --||||.||||--
T Consensus        40 I~Gnlyi~eGVG~   52 (116)
T ABJ15796.1|231   40 INGNLYIAEGVGE   52 (116)
Confidence            3599999999853

No 2
>lcl|consensus
Probab=5.13  E-value=5.2  Score=17.13  Aligned_cols=14  Identities=29%  Similarity=0.437  Sum_probs=10.2  Template_Neff=4.300
Q Uncharacterize  181 DFNKDNRVSLAEAK  194 (430)
Q Consensus       182 dfnkdnrvslaeak  195 (431)
                      |.|.|++|+-.++-
T Consensus         1 DvN~DG~Vna~D~~   14 (21)
T lcl|consensus_     1 DVNGDGKVNALDLA   14 (21)
Confidence            67888888766553

No 3
>ABW08129.1|GT4|GT97||563-891
Probab=4.78  E-value=5.7  Score=26.58  Aligned_cols=46  Identities=20%  Similarity=0.367  Sum_probs=28.5  Template_Neff=1.500

Q Uncharacterize  103 YSGLWQGKEVTIKCGIEESLN--SKAGSDGAPRRELVLFDKPSRGTSIK  149 (430)
Q Consensus       104 ysglwqgkevtikcgieesln--skagsdgaprrelvlfdkpsrgtsik  150 (431)
                      ..|+|.|...+.-.|.-... .+..-|.+|..|-++||-|.||-+-+
T Consensus        93 ~~G~W~-~~~~~~~~i~~~~DheG~r~m~~~~~~~T~i~e~~Rk~~~~~  140 (329)
T ABW08129.1|GT4   93 FTGKWE-KHFQTSPKIDYRFDHEGKRSMDDVFSEETFIMEFPRKNGIDK  140 (329)
Confidence            457774 33333333433333 45556778888889999988887654

No 4
>AEO62162.1|AA13||19-250
Probab=4.61  E-value=6  Score=25.50  Aligned_cols=18  Identities=39%  Similarity=0.936  Sum_probs=14.8  Template_Neff=1.600

Q Uncharacterize  329 RGRRCEHSADCTYGRDCR  346 (430)
Q Consensus       330 rgrrcehsadctygrdcr  347 (431)
                      .|..|..|+||+-|..|-
T Consensus       139 ~Gq~C~y~pDC~~gq~C~  156 (232)
T AEO62162.1|AA1  139 SGQTCGYSPDCSPGQPCW  156 (232)
Confidence            467899999999998774

No 5
>BAF49076.1|GH5_26.hmm|8.3e-11|182-335
Probab=2.39  E-value=13  Score=21.92  Aligned_cols=12  Identities=33%  Similarity=0.720  Sum_probs=9.5  Template_Neff=1.900

Q Uncharacterize  286 HGAYGNFYMCET  297 (430)
Q Consensus       287 hgaygnfymcet  298 (431)
                      .|+|++|||-..
T Consensus        45 ~G~yn~~Y~l~s   56 (141)
T BAF49076.1|GH5   45 QGTYNGNYMLTS   56 (141)
Confidence            478999998654

No 6
>BBD44721.1 Hypothetical protein PEIBARAKI_4714 [Petrimonas sp. IBARAKI]
Probab=2.34  E-value=14  Score=25.75  Aligned_cols=81  Identities=23%  Similarity=0.240  Sum_probs=46.6  Template_Neff=3.400

Q Uncharacterize  109 GKEVTIKCGIEESLNSKAGSDGAPRRELVLFDKPSRGTSIKEFREMTLSFLKANLGDLPSLPALVGRVLLMADFNKDNRV  188 (430)
Q Consensus       110 gkevtikcgieeslnskagsdgaprrelvlfdkpsrgtsikefremtlsflkanlgdlpslpalvgrvllmadfnkdnrv  189 (431)
                      |..|+|-.|+|+..+-+.+++-.-.|+.-+|-+.+++|.+--..|-+ ||-.+-+
T Consensus       326 g~~V~Iya~l~~~~~~~~-------------~~~~~~~S~~~~Rg~Aa~~L~rGAd----------GIyl---FN~f~~~  379 (552)
T BBD44721.1_con  326 GTGVKIYAGLEDARAPDP-------------STRRETNSLEAYRGRAANALSRGAD----------GIYL---FNYFYPP  379 (552)
Confidence            777888888888744332              4455566888888888888765322          2333    4443332

Q Uncharacterize  189 SLAEAKSVWALLQRNEFLLLLSLQEKEHASRL  220 (430)
Q Consensus       190 slaeaksvwallqrnefllllslqekehasrl  221 (431)
                      ..     --.|||.-.=+-.|.-|+|.|+-..
T Consensus       380 ~~-----~~~llrelgd~~~L~~~~K~y~~s~  406 (552)
T BBD44721.1_con  380 QM-----RSPLLRELGDLETLATQEKLYALSI  406 (552)
Confidence            11     1233443333445566788876543

No 7
>AAU92474.1|CBM2|2-82|1.9e-23
Probab=2.33  E-value=14  Score=19.07  Aligned_cols=19  Identities=26%  Similarity=0.562  Sum_probs=14.3  Template_Neff=6.600

Q Uncharacterize  221 LGYCGDLYVT--EGVPLSSWP  239 (430)
Q Consensus       222 lgycgdlyvt--egvplsswp  240 (431)
                      -||++++-|+  ...+++.|-
T Consensus        13 ~Gf~~~v~vtN~~~~~i~~W~   33 (104)
T AAU92474.1|CBM   13 GGFQANVTVTNTGSSAISGWT   33 (104)
Confidence            3789998888  567777774

No 8
>BAX82587.1 hypothetical protein ALGA_4297 [Marinifilaceae bacterium SPP2]
Probab=2.28  E-value=14  Score=25.65  Aligned_cols=25  Identities=40%  Similarity=0.314  Sum_probs=21.4  Template_Neff=1.500

Q Uncharacterize  103 YSGLWQGKEVTIKCGIEESLNSKAG  127 (430)
Q Consensus       104 ysglwqgkevtikcgieeslnskag  128 (431)
                      -+.+||+|.+.||..||.|-|-+--
T Consensus       466 ~kd~~~tk~~sik~kietSenFtl~  490 (656)
T BAX82587.1_con  466 NKDLNQTKQVSIKTKIETSENFTLS  490 (656)
Confidence            5789999999999999998876543

No 9
>AHE46274.1|GH13_13.hmm|1.6e-201|415-835
Probab=2.04  E-value=16  Score=24.14  Aligned_cols=45  Identities=29%  Similarity=0.332  Sum_probs=25.7  Template_Neff=3.900

Q Uncharacterize  142 PSRGTSIKEFREMTLSFLKANLGDLPSLPALVGRVLLMADFNKDNRVSLAEAKSVWA  198 (430)
Q Consensus       143 psrgtsikefremtlsflkanlgdlpslpalvgrvllmadfnkdnrvslaeaksvwa  199 (431)
                      |.-.+-|+|||+|.-+.           -+.=-||.+=.=||--+-.-+ .++||..
T Consensus        99 ~~g~~Ri~EfR~MV~al-----------h~~GlrVv~DVVyNHT~~sg~-~~~SVlD  143 (393)
T AHE46274.1|GH1   99 PDGVARIKEFRAMVQAL-----------HAMGLRVVMDVVYNHTAASGQ-YDNSVLD  143 (393)
Confidence            34445589999998653            222235655555676655544  3345543

No 10
>ACF55060.1|GH13_13.hmm|2.5e-47|336-542
Probab=1.94  E-value=17  Score=23.24  Aligned_cols=22  Identities=23%  Similarity=0.371  Sum_probs=17.6  Template_Neff=4.500

Q Uncharacterize  143 SRGTSIKEFREMTLSFLKANLG  164 (430)
Q Consensus       144 srgtsikefremtlsflkanlg  165 (431)
                      +.-+.|+||++|...+=++.++
T Consensus        74 dp~~RI~E~K~mI~~lH~~GI~   95 (330)
T ACF55060.1|GH1   74 DPYGRIREFKQMIQALHDAGIR   95 (330)
Confidence            4456799999999988887765

Two files are output and saved to the dataset directory: the .hhr results file (protein.hhr here), which is a summary of the results, and results.a3m, which is the actual MSA file. In the hhr file, the 'Prob' column gives the estimated probability that the query sequence is at least partially homologous to the template. Probabilities of 95% or more are nearly certain, and probabilities of 30% or more call for closer consideration. The E-value tells you how many random matches with a better score would be expected if the searched database were unrelated to the query sequence. These results show that none of the database sequences align well with our query protein, which is to be expected because the query sequence was chosen arbitrarily.
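If you want to work with these numbers programmatically rather than reading the report by eye, the summary table at the top of a .hhr file can be pulled apart with a few lines of Python. This is a minimal sketch, not part of deepchem; it assumes the standard hhr layout in which the last nine whitespace-separated fields of each hit row are Prob, E-value, P-value, Score, SS, Cols, Query HMM, Template HMM, and the template length in parentheses:

# Minimal sketch: extract hit names and Prob/E-value from an hhr summary table.
# Assumes the standard hhr layout described above; not a deepchem function.
def parse_hhr_summary(path):
    hits = []
    with open(path) as fh:
        lines = fh.read().splitlines()
    # the summary table starts right after the header line beginning with 'No Hit'
    start = next(i for i, line in enumerate(lines) if line.lstrip().startswith('No Hit'))
    for line in lines[start + 1:]:
        if not line.strip():           # a blank line ends the table
            break
        tokens = line.split()
        name = ' '.join(tokens[1:-9])  # everything between the index and the numeric columns
        prob, e_value = float(tokens[-9]), float(tokens[-8])
        hits.append((name, prob, e_value))
    return hits

for name, prob, e_value in parse_hhr_summary('protein.hhr'):
    print(f'{name:35s} Prob={prob:5.1f} E-value={e_value}')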
Now let's check the results if we use a sequence that we know will align with something in the dbCAN database. I pulled this protein from the dockerin.faa file in dbCAN.

with open('protein2.fasta', 'w') as f:
    f.write(""">dockerin,22,NCBI-Bacteria,gi|125972715|ref|YP_001036625.1|,162-245,0.033
SCADLNGDGKITSSDYNLLKRYILHLIDKFPIGNDETDEGINDGFNDETDEDINDSFIEANSKFAFDIFKQISKDEQGKNVFIS
""")

dataset_path = 'protein2.fasta'
sequence_utils.hhsearch(dataset_path, database='dbCAN-fam-V9', data_dir=data_dir)

#open the results and print them
f = open("protein2.hhr", "r")
print(f.read())

- 12:48:13.823 INFO: Search results will be written to /home/tony/github/deepchem/examples/tutorials/protein2.hhr
- 12:48:13.851 INFO: /home/tony/github/deepchem/examples/tutorials/protein2.fasta is in A2M, A3M or FASTA format
- 12:48:13.852 INFO: Searching 683 database HHMs without prefiltering
- 12:48:13.852 INFO: Iteration 1
- 12:48:13.873 INFO: Scoring 683 HMMs using HMM-HMM Viterbi alignment
- 12:48:13.913 INFO: Alternative alignment: 0
- 12:48:13.979 INFO: 683 alignments done
- 12:48:13.979 INFO: Alternative alignment: 1
- 12:48:13.982 INFO: 10 alignments done
- 12:48:13.982 INFO: Alternative alignment: 2
- 12:48:13.984 INFO: 3 alignments done
- 12:48:13.984 INFO: Alternative alignment: 3
- 12:48:13.986 INFO: 3 alignments done

Query         dockerin,22,NCBI-Bacteria,gi|125972715|ref|YP_001036625.1|,162-245,0.033
Match_columns 84
No_of_seqs    1 out of 1
Neff          1
Searched_HMMs 683
Date          Fri Feb 11 12:48:14 2022
Command       hhsearch -i /home/tony/github/deepchem/examples/tutorials/protein2.fasta -d hh/databases/dbCAN-fam-V9 -oa3m /home/tony/github/deepchem/examples/tutorials/results.a3m -cpu 4 -e 0.001

 No Hit                             Prob E-value P-value  Score    SS Cols Query HMM  Template HMM
  1 lcl|consensus                   97.0 5.9E-08 8.7E-11   43.5   0.0   21    4-24        1-21  (21)
  2 ABN51673.1|GH124|2-334|2.6e-21  92.5 0.00033 4.8E-07   45.5   0.0   68    1-75       21-88  (318)
  3 AAK20911.1|PL11|47-657|0        15.7     1.1  0.0017   27.6   0.0   14    1-14      329-342 (606)
  4 AGE62576.1|PL11_1.hmm|0|1-596   10.2     2.1  0.0031   26.0   0.0   13    1-13      118-130 (602)
  5 AAZ21803.1|GH103|26-328|1.7e-8   9.3     2.4  0.0035   22.4   0.0   10    4-13      175-184 (293)
  6 AGE62576.1|PL11_1.hmm|0|1-596    5.5     4.8  0.007    23.9   0.0   12    1-12      329-340 (602)
  7 AAK20911.1|PL11|47-657|0         5.5     4.8  0.007    23.8   0.0   13    1-13      118-130 (606)
  8 APU21542.1|PL11_2.hmm|1.4e-162   4.9     5.6  0.0082   23.5   0.0   14    2-15      318-331 (579)
  9 AAK20911.1|PL11|47-657|0         4.7     5.8  0.0084   23.4   0.0   10    3-12      184-193 (606)
 10 AGE62576.1|PL11_1.hmm|0|1-596    4.6     6    0.0088   23.3   0.0    7    4-10      185-191 (602)

No 1
>lcl|consensus
Probab=97.03  E-value=5.9e-08  Score=43.48  Aligned_cols=21  Identities=57%  Similarity=1.061  Sum_probs=20.1  Template_Neff=4.300

Q dockerin,22,NC    4 DLNGDGKITSSDYNLLKRYIL   24 (84)
Q Consensus         4 dlngdgkitssdynllkryil   24 (84)
                      |+|+||+|+..|+.++|||+|
T Consensus         1 DvN~DG~Vna~D~~~l~~~l~   21 (21)
T lcl|consensus_     1 DVNGDGKVNALDLALLKKYLL   21 (21)
Confidence            899999999999999999986

No 2
>ABN51673.1|GH124|2-334|2.6e-219
Probab=92.52  E-value=0.00033  Score=45.54  Aligned_cols=68  Identities=31%  Similarity=0.523  Sum_probs=51.6  Template_Neff=1.400

Q dockerin,22,NC    1 SCADLNGDGKITSSDYNLLKRYILHLIDKFPIGNDETDEGINDGFNDETDEDINDSFIEANSKFAFDIFKQISKD   75 (84)
Q Consensus         1 scadlngdgkitssdynllkryilhlidkfpigndetdegindgfndetdedindsfieanskfafdifkqiskd   75 (84)
                      ++||+||||+|+||||+|||| ||++|++||+++|||+|++|..      .-|+|.--.+-.++.....+.+.|.
T Consensus        21 v~GD~n~dgvv~isd~vl~k~-~l~~~a~~~a~~d~w~g~vN~d------d~I~D~d~~~~kryll~mir~~pk~   88 (318)
T ABN51673.1|GH1   21 VIGDVNADGVVNISDYVLMKR-ILRIIADFPADDDMWVGDVNGD------DVINDIDCNYLKRYLLHMIREFPKN   88 (318)
Confidence            489999999999999999999 9999999999999999999854      3333333333344444444444443

No 3
>AAK20911.1|PL11|47-657|0
Probab=15.69  E-value=1.1  Score=27.56  Aligned_cols=14  Identities=50%  Similarity=0.641  Sum_probs=10.4  Template_Neff=3.500

Q dockerin,22,NC    1 SCADLNGDGKITSS   14 (84)
Q Consensus         1 scadlngdgkitss   14 (84)
                      |++|+|+||+=.|-
T Consensus       329 svaDVDgDGkDEIi  342 (606)
T AAK20911. 1|PL1 329 SVADVDGDGKDEII 342 (606) Confidence 57889998886553 No 4 >AGE62576. 1|PL11_1. hmm|0|1-596 Probab=10. 22 E-value=2. 1 Score=26. 01 Aligned_cols=13 Identities=46% Similarity=0. 772 Sum_probs=10. 8 Templ ate_Neff=3. 300 Q dockerin,22,NC 1 SCADLNGDGKITS 13 (84) Q Consensus 1 scadlngdgkits 13 (84) |+||||+||... | T Consensus 118 SVGDLDGDG~YEi 130 (602) T AGE62576. 1|PL1 118 SVGDLDGDGEYEI 130 (602) Confidence 6899999998654 No 5 >AAZ21803. 1|GH103|26-328|1. 7e-83 Probab=9. 26 E-value=2. 4 Score=22. 41 Aligned_cols=10 Identities=40% Similarity=0. 833 Sum_probs=9. 2 Templat e_Neff=5. 600 Q dockerin,22,NC 4 DLNGDGKITS 13 (84) Q Consensus 4 dlngdgkits 13 (84) |. |+||++++ T Consensus 175 D~Dg DG~~Dl 184 (293) T AAZ21803. 1|GH1 175 DFDGDGRRDL 184 (293) Confidence 8899999997 No 6 >AGE62576. 1|PL11_1. hmm|0|1-596 Probab=5. 50 E-value=4. 8 Score=23. 90 Aligned_cols=12 Identities=58% Similarity=0. 847 Sum_probs=7. 5 Templat e_Neff=3. 300 Q dockerin,22,NC 1 SCADLNGDGKIT 12 (84) Q Consensus 1 scadlngdgkit 12 (84) |+||+|+||+=. T Consensus 329 sva DVDg DG~DE 340 (602) T AGE62576. 1|PL1 329 SVADVDGDGKDE 340 (602) Confidence 467777777633 No 7 >AAK20911. 1|PL11|47-657|0 Probab=5. 47 E-value=4. 8 Score=23. 84 Aligned_cols=13 Identities=46% Similarity=0. 772 Sum_probs=10. 6 Templa te_Neff=3. 500 Q dockerin,22,NC 1 SCADLNGDGKITS 13 (84) Q Consensus 1 scadlngdgkits 13 (84) |+||||+||... + T Consensus 118 SVGDLDGDG~y Ei 130 (606) T AAK20911. 1|PL1 118 SVGDLDGDGEYEI 130 (606) Confidence 6899999998654 No 8 >APU21542. 1|PL11_2. hmm|1. 4e-162|44-417 Probab=4. 86 E-value=5. 6 Score=23. 51 Aligned_cols=14 Identities=50% Similarity=0. 715 Sum_probs=9. 4 Templat e_Neff=2. 600 Q dockerin,22,NC 2 CADLNGDGKITSSD 15 (84) Q Consensus 2 cadlngdgkitssd 15 (84) +. |+|+||+=. +++ T Consensus 318 ~~Dv D~DG~DEi~~ 331 (579) T APU21542. 1|PL1 318 IVDVDGDGKDEISD 331 (579) Confidence 45777777766655 No 9 >AAK20911. 1|PL11|47-657|0 Probab=4. 74 E-value=5. 8 Score=23. 38 Aligned_cols=10 Identities=50% Similarity=0. 896 Sum_probs=5. 6 Templat e_Neff=3. 500 Q dockerin,22,NC 3 ADLNGDGKIT 12 (84) Q Consensus 3 adlngdgkit 12 (84) -|+|+|||-. T Consensus 184 y D~DGDGk AE 193 (606) T AAK20911. 1|PL1 184 YDFDGDGKAE 193 (606) Confidence 3666666543 No 10
>AGE62576. 1|PL11_1. hmm|0|1-596 Probab=4. 58 E-value=6 Score=23. 30 Aligned_cols=7 Identities=71% Similarity=1. 426 Sum_probs=0. 0 Template_N eff=3. 300 Q dockerin,22,NC 4 DLNGDGK 10 (84) Q Consensus 4 dlngdgk 10 (84) ||||||| T Consensus 185 D~DGDGk 191 (602) T AGE62576. 1|PL1 185 DFDGDGK 191 (602)- 12:48:14. 063 INFO: Premerge done- 12:48:14. 063 INFO: Realigning 10 HMM-HMM alignments using Maximum Accuracy algorithm- 12:48:14. 084 INFO: 4 sequences belonging to 4 database HMMs found with an E-value < 0. 001- 12:48:14. 084 INFO: Number of effective sequences of resulting query HMM: Neff = 1. 39047 As you can see, there are 2 sequences which are a match for our query sequence. Using hhblits hhblits works in much the same way as hhsearch, but it is much faster and slightly less sensitive. This would be more suited to searching very large databases, or producing a MSA with multiple sequences instead of just one. Let's make use of that by using our query sequence to create an MSA. We could then use that MSA, with its family of proteins, to search a larger database for potential matches. This will be much more effective than searching a large database with a single sequence. We will use the same db CAN database. I will pull a glycoside hydrolase protein from Unip Prot, so it will likely be related to some proteins in db CAN, which has carbohydrate-active enzymes. The option -oa3m will tell hhblits to output an MSA as an a3m file. The -n option specifies the number of iterations. This is recommended to keep between 1 and 4, we will try 2. ! wget -O protein3. fasta https://www. uniprot. org/uniprot/G8M3C3. fasta dataset_path = 'protein3. fasta' sequence_utils. hhblits ( dataset_path, database = 'db CAN-fam-V9', data_dir = data_dir ) #open the results and print them f = open ( "protein3. hhr", "r" ) print ( f. read ())--2022-02-11 12:48:14-- https://www. uniprot. org/uniprot/G8M3C3. fasta Resolving www. uniprot. org (www. uniprot. org)... 193. 62. 193. 81 Connecting to www. uniprot. org (www. uniprot. org)|193. 62. 193. 81|:443... connected. HTTP request sent, awaiting response... 200 Length: 897 [text/plain] Saving to: 'protein3. fasta' protein3. fasta 100%[===================>] 897 --.-KB/s in 0s 2022-02-11 12:48:15 (1. 70 GB/s) - 'protein3. fasta' saved [897/897]
- 12:48:15. 242 WARNING: Ignoring unknown option -n- 12:48:15. 242 WARNING: Ignoring unknown option 2- 12:48:15. 242 INFO: Search results will be written to /home/tony/github/deepchem/examples/tutorials/protein3. hh r- 12:48:15. 270 INFO: /home/tony/github/deepchem/examples/tutorials/protein3. fasta is in A2M, A3M or FASTA format- 12:48:15. 271 INFO: Searching 683 database HHMs without prefiltering- 12:48:15. 271 INFO: Iteration 1- 12:48:15. 424 INFO: Scoring 683 HMMs using HMM-HMM Viterbi alignment- 12:48:15. 465 INFO: Alternative alignment: 0- 12:48:15. 658 INFO: 683 alignments done- 12:48:15. 659 INFO: Alternative alignment: 1- 12:48:15. 697 INFO: 92 alignments done- 12:48:15. 697 INFO: Alternative alignment: 2- 12:48:15. 710 INFO: 7 alignments done- 12:48:15. 710 INFO: Alternative alignment: 3 Query tr|G8M3C3|G8M3C3_HUNCD Dockerin-like protein OS=Hungateiclostridium clariflavum (strain DSM 19732 / NBRC 101661 / EBR45) OX=720554 GN=Clocl_4007 PE=4 SV=1 Match_columns 728 No_of_seqs 1 out of 1 Neff 1 Searched_HMMs 683 Date Fri Feb 11 12:48:16 2022 Command hhsearch -i /home/tony/github/deepchem/examples/tutorials/protein3. fasta -d hh/databases/db CAN-fam-V9 -oa3m /home/tony/github/deepchem/examples/tutorials/results. a3m -cpu 4 -n 2 -e 0. 001 No Hit Prob E-value P-value Score SS Cols Query HMM Template HMM 1 AAA91086. 1|GH48|150-238|4. 7e-1 100. 0 7E-195 1E-197 1475. 1 0. 0 608 31-644 1-619 (620) 2 lcl|consensus 91. 8 0. 00051 7. 4E-07 37. 5 0. 0 20 668-687 1-20 (21) 3 ABN51673. 1|GH124|2-334|2. 6e-21 52. 5 0. 096 0. 00014 40. 1 0. 0 66 663-728 19-85 (318) 4 CAR68154. 1|GH88|62-388|4. 9e-13 10. 5 2 0. 003 30. 7 0. 0 43 421-463 181-223 (329) 5 ACY49347. 1|GH105|46-385|1. 1e-1 6. 4 4 0. 0058 28. 2 0. 0 60 324-383 169-228 (329) 6 QGI59602. 1|GH16_22|78-291 5. 4 4. 9 0. 0072 27. 6 0. 0 10 391-400 33-42 (224) 7 QGI59602. 1|GH16_22|78-291 5. 3 5 0. 0073 27. 5 0. 0 18 581-598 204-221 (224) 8 AQA16748. 1|GH5_51. hmm|7. 4e-189 4. 9 5. 5 0. 0081 28. 6 0. 0 37 644-680 253-291 (351) 9 CCF60459. 1|GH5_12. hmm|1. 2e-238 3. 3 9. 1 0. 013 28. 3 0. 0 27 357-383 298-324 (541) 10 ACI55886. 1|GH25|58-236|2. 7e-60 3. 0 10 0. 015 22. 2 0. 0 41 594-634 18-61 (174) No 1 >AAA91086. 1|GH48|150-238|4. 7e-10 Probab=100. 00 E-value=6. 7e-195 Score=1475. 15 Aligned_cols=608 Identities=60% Similarity=1. 105 Sum_probs=60 4. 0 Template_Neff=2. 700 Q tr|G8M3C3|G8M3 31 FKDRFNYMYNKIHDPANGYFDSEGIPYHSVETLCVEAPDYGHESTSEAASYYAWLEAVNGKLNGKWSGLTEAWNVVEKYF 110 (728 ) Q Consensus 31 fkdrfnymynkihdpangyfdsegipyhsvetlcveapdyghestseaasyyawleavngklngkwsglteawnvvekyf 110 (728 ) |. ||||+||+|||||+|||||++||||||||||||||||||||||||||||++|||||||+|||||++|++||++||+|| T Consensus 1 Y~~r Fl~l Y~k I~dp~n GYFS~~Gi PYHsv ETliv EAPDy GHe TTSEA~SY~~WLe Amyg~itgd~s~~~~AW~~m E~y~ 80 (620 ) T AAA91086. 1|GH4 1 YKQRFLELYNKIHDPANGYFSPEGIPYHSVETLIVEAPDYGHETTSEAYSYYVWLEAMYGKITGDWSGFNKAWDTMEKYI 80 (620 ) Confidence 78999999999999999999999999999999999999999999999999999999999999999999999999999999 Q tr|G8M3C3|G8M3 111 IPSESIQKGMNRYNPSSPAGYADEFPLPDDYPAQIQSNVTVGQDPIHQELVSAYNTYAMYGMHWLVDVDNWYGYGT---- 186 (728 ) Q Consensus 111 ipsesiqkgmnrynpsspagyadefplpddypaqiqsnvtvgqdpihqelvsayntyamygmhwlvdvdnwygygt---- 186 (728 ) ||++++||+|+. |||++|||||||+++|++||++|+++++||+|||++||+++||+++||+||||||||||||||+ T Consensus 81 IP~~~~Qp~~~~Ynp~~p Atya~E~~~P~~YPs~l~~~~~v G~DPi~~e L~sa Ygt~~i Y~MHWLl DVDN~YGf G~~g~~ 160 (620 ) T AAA91086. 
1|GH4 81 IPSHQDQPTMSSYNPSSPATYAPEYDTPSQYPSQLDFNVPVGQDPIANELKSAYGTDDIYGMHWLLDVDNWYGFGNLGDG 160 (620 ) Confidence 9999999999999999999999999999999999999999999999999999999999999999999999999999 Q tr|G8M3C3|G8M3 187 GTNCTFINTYQRGEQESVFETVPHPSIEEFKYGGRQGFSDLFTAG-ETQPKWAFTIASDADGRLIQVQYWANKWAKEQGQ 265 (728 )
Q Consensus 187 gtnctfintyqrgeqesvfetvphpsieefkyggrqgfsdlftag-etqpkwaftiasdadgrliqvqywankwakeqgq 265 (728 ) +++|+||||||||+||||||||||||||+|||||+||||+||++| ++++|||||||||||||||||+|||++||+|||+ T Consensus 161 ~~~psy INTf QRG~q ESv We Tvp~P~~d~fk~Gg~n Gfldl Ft~d~~ya~Qwk YTn Ap DADARav Qa~Yw A~~Wa~e~G~ 240 (620 ) T AAA91086. 1|GH4 161 TSGPSYINTFQRGEQESVWETVPHPSCEEFKYGGPNGFLDLFTKDSSYAKQWRYTNAPDADARAVQAAYWANQWAKEQGK 240 (620 ) Confidence 899999999999999999999999999999999999999999999 9999999999999999999999999999999999 Q tr|G8M3C3|G8M3 266 --NLSTLNAKAAKLGDYLRYSMFDKYFMKIG--AQGKTPASGYDSCHYLLAWYYAWGGAIAG-DWSWKIGCSHVHWGYQA 340 (728 ) Q Consensus 266 --nlstlnakaaklgdylrysmfdkyfmkig--aqgktpasgydschyllawyyawggaiag-dwswkigcshvhwgyqa 340 (728 ) +|+++++||+||||||||+|||||||||| +. +|++|+||||||||||||++|||++++ +|+|||||||+|||||| T Consensus 241 ~~~is~~~~KAa Km GDy LRY~mf DKYfkki G~~~~s~~ag~Gkd Sa HYLls WY~a WGG~~~~~~Wa Wr IG~Sh~H~GYQN 320 (620 ) T AAA91086. 1|GH4 241 ESEISSTVAKAAKMGDYLRYAMFDKYFKKIGVGPSSCPAGTGKDSAHYLLSWYYAWGGALDGSGWAWRIGSSHAHFGYQN 320 (620 ) Confidence 99999999999999999999999999999 89999999999999999999999999999 99999999999999999 Q tr|G8M3C3|G8M3 341 PLAAYALANDPDLKPKSANGAKDWNSSFKRQVELYAWLQSAEGAIAGGVTNSVGGQYKSY-NGASTFYDMAYTYAPVYAD 419 (728 ) Q Consensus 341 plaayalandpdlkpksangakdwnssfkrqvelyawlqsaegaiaggvtnsvggqyksy-ngastfydmaytyapvyad 419 (728 ) ||||||||++++|||||+|+++||++||+||||||+||||+||+|||||||||+|+|++| +|++|||||+|+++||||| T Consensus 321 P~AAya Ls~~~~lk Pks~ta~~DW~~SL~RQl Efy~w LQS~e G~i AGGa TNSW~G~Y~~~Psg~~TFyg M~Yd~~PVY~D 400 (620 ) T AAA91086. 1|GH4 321 PLAAYALSNDSDLKPKSPTAASDWAKSLDRQLEFYQWLQSAEGAIAGGATNSWNGRYETYPSGTSTFYGMAYDEHPVYHD 400 (620 ) Confidence 999999999999999999999999999999999999999999999999999999999999 9999999999999999999 Q tr|G8M3C3|G8M3 420 PPSNNWFGMQAWSMQRMCEVYYETGDSLAKEICDKWVAWAESVCEADIEAGTWKIPATLEWSGQPDTWRGTKPSNNNLHC 499 (728 ) Q Consensus 420 ppsnnwfgmqawsmqrmcevyyetgdslakeicdkwvawaesvceadieagtwkipatlewsgqpdtwrgtkpsnnnlhc 499 (728 ) ||||+|||||+|+|||||||||+|||++||+||||||+|++++|+|+ ++|+|+||++|+|+|||||||+++++|+|||| T Consensus 401 Pp SN~Wf G~Q~Wsm~Rv Aey YY~t GD~~ak~ild KWv~W~~~~~~~~-~dg~~~i Ps~L~Ws Gq PDt W~gs~~~N~~lhv 479 (620 ) T AAA91086. 1|GH4 401 PPSNRWFGMQAWSMQRVAEYYYVTGDARAKAILDKWVAWVKSNTTVN-SDGTFQIPSTLEWSGQPDTWNGSYTGNPNLHV 479 (620 ) Confidence 99999999999999999999999999999999999999999999999 88999999999999999999999999999999 Q tr|G8M3C3|G8M3 500 KVVNYGNDIGITGSLANAFLFYDQATQRWNGNTTLGKKAADKALAMLQVVWDTCRDQYGVGVKETNESLNRIFTQEVFIP 579 (728 ) Q Consensus 500 kvvnygndigitgslanaflfydqatqrwngnttlgkkaadkalamlqvvwdtcrdqygvgvketneslnriftqevfip 579 (728 ) +|+++|+|||||+||||||+||||++|+++ | ++||++||+|||+||++|||++||+++|+|+||+||++++|||| T Consensus 480 ~V~~yg~dv Gva~s~A~t L~y YAa~sg~~~-d----~~ak~~Ak~LLD~~w~~~~d~~Gvs~~E~r~dy~Rf~d~~VYi P 554 (620 ) T AAA91086. 1|GH4 480 TVTDYGQDVGVAASLAKTLMYYAAASGKYG-D----TAAKNLAKQLLDAMWKNYRDDKGVSTPETRGDYKRFFDQEVYIP 554 (620 ) Confidence 999999999999999999999999999888 7 88999999999999999999999999999999999999999999 Q tr|G8M3C3|G8M3 580 AGWTGKMPNGDVIQQGVKFIDIRSKYKDDPWYEGLKKQAEQGIPFEYTLHRFWHQVDYAVALGIA 644 (728) Q Consensus 580 agwtgkmpngdviqqgvkfidirskykddpwyeglkkqaeqgipfeytlhrfwhqvdyavalgia 644 (728) +||+|+|||||+|++|+|||||||+||+||+|++||++|++|++|+|+|||||+|+|||||+|+. T Consensus 555 ~gwt G~m Pn GD~I~~g~t Fl~IRs~Yk~Dp~w~kvq~~l~g G~~p~f~YHRFWa Q~di A~A~g~y 619 (620) T AAA91086. 
1|GH4 555 SGWTGTMPNGDVIKSGATFLDIRSKYKQDPDWPKVEAYLNGGAAPEFTYHRFWAQADIAMANGTY 619 (620) Confidence 99999999999999999999999999999999999999999999999999999999999999863 No 2 >lcl|consensus Probab=91. 79 E-value=0. 00051 Score=37. 45 Aligned_cols=20 Identities=55% Similarity=0. 811 Sum_probs=11. 9 T emplate_Neff=4. 300 Q tr|G8M3C3|G8M3 668 DINFDGDINSIDYALLKAHL 687 (728) Q Consensus 668 dinfdgdinsidyallkahl 687 (728) |+|-||. +|++|++++|. ++ T Consensus 1 Dv N~DG~Vna~D~~~l~~~l 20 (21) T lcl|consensus_ 1 DVNGDGKVNALDLALLKKYL 20 (21) Confidence 45566666666666665554 No 3 >ABN51673. 1|GH124|2-334|2. 6e-219 Probab=52. 47 E-value=0. 096 Score=40. 07 Aligned_cols=66 Identities=35% Similarity=0. 533 Sum_probs=48. 5 Tem plate_Neff=1. 400
Q tr|G8M3C3|G8M3 663 DIKLGDINFDGDINSIDYALLKAHLLGINKLSGDAL-KAADVDQNGDVNSIDYAKMKSYLLGISKDF 728 (728) Q Consensus 663 diklgdinfdgdinsidyallkahllginklsgdal-kaadvdqngdvnsidyakmksyllgiskdf 728 (728) . +..||. |-||-+|--||. |+|..|.-|. +... +.- -..+++....++. +|-.-+|. |||. +-++| T Consensus 19 kav~GD~n~dgvv~isd~vl~k~~l~~~a~~~a~~d~w~g~v N~dd~I~D~d~~~~kryll~mir~~ 85 (318) T ABN51673. 1|GH1 19 KAVIGDVNADGVVNISDYVLMKRILRIIADFPADDDMWVGDVNGDDVINDIDCNYLKRYLLHMIREF 85 (318) Confidence 567899999999999999999997766666543321 123444445577888888999999876553 No 4 >CAR68154. 1|GH88|62-388|4. 9e-137 Probab=10. 51 E-value=2 Score=30. 74 Aligned_cols=43 Identities=19% Similarity=0. 234 Sum_probs=34. 3 Templat e_Neff=5. 400 Q tr|G8M3C3|G8M3 421 PSNNWFGMQAWSMQRMCEVYYETGDSLAKEICDKWVAWAESVC 463 (728) Q Consensus 421 psnnwfgmqawsmqrmcevyyetgdslakeicdkwvawaesvc 463 (728) . +..|=-=|+|. |==. |..|..|||++--++-. +=+. +++++. T Consensus 181 d~S~Ws RGQAWai YG~a~~yr~t~d~~y L~~A~~~a~yfl~~l 223 (329) T CAR68154. 1|GH8 181 DDSAWARGQAWAIYGFALAYRYTKDPEYLDTAKKVADYFLNRL 223 (329) Confidence 3678999999999999999999999995555555555555555 No 5 >ACY49347. 1|GH105|46-385|1. 1e-131 Probab=6. 37 E-value=4 Score=28. 22 Aligned_cols=60 Identities=25% Similarity=0. 330 Sum_probs=50. 1 Template _Neff=6. 200 Q tr|G8M3C3|G8M3 324 DWSWKIGCSHVHWGYQAPLAAYALANDPDLKPKSANGAKDWNSSFKRQVELYAWLQSAEG 383 (728) Q Consensus 324 dwswkigcshvhwgyqaplaayalandpdlkpksangakdwnssfkrqvelyawlqsaeg 383 (728) . |+=.-|. |..+. |==|=-. +. ||...-++-|+......... +. |++|++=..=+|+. +| T Consensus 169 ~wa~~t~~s~~f W~Rgn GW~~~a L~~~L~~l P~~~p~r~~l~~~~~~~~~al~~~Qd~~G 228 (329) T ACY49347. 1|GH1 169 NWADPTGGSPAFWGRGNGWVAMALVDVLELLPEDHPDRRFLIDILKEQAAALAKYQDESG 228 (329) Confidence 444445667777777788889999999999998888889999999999998888999776 No 6 >QGI59602. 1|GH16_22|78-291 Probab=5. 38 E-value=4. 9 Score=27. 58 Aligned_cols=10 Identities=30% Similarity=0. 245 Sum_probs=6. 0 Templat e_Neff=3. 000 Q tr|G8M3C3|G8M3 391 NSVGGQYKSY 400 (728) Q Consensus 391 nsvggqyksy 400 (728) ||-+..|-.. T Consensus 33 NS~n Nvyie~ 42 (224) T QGI59602. 1|GH1 33 NSPNNVYIEK 42 (224) Confidence 6666666555 No 7 >QGI59602. 1|GH16_22|78-291 Probab=5. 33 E-value=5 Score=27. 55 Aligned_cols=18 Identities=28% Similarity=0. 730 Sum_probs=9. 8 Template_ Neff=3. 000 Q tr|G8M3C3|G8M3 581 GWTGKMPNGDVIQQGVKF 598 (728) Q Consensus 581 gwtgkmpngdviqqgvkf 598 (728) . |+|. |.-|+...-++.. T Consensus 204 ~Ws Gn M~vg~sa~lq Iq W 221 (224) T QGI59602. 1|GH1 204 SWSGNMSVGDSAYLQIQW 221 (224) Confidence 366666666654444333 No 8 >AQA16748. 1|GH5_51. hmm|7. 4e-189|58-409 Probab=4. 89 E-value=5. 5 Score=28. 57 Aligned_cols=37 Identities=32% Similarity=0. 524 Sum_probs=28. 0 Templa te_Neff=2. 200 Q tr|G8M3C3|G8M3 644 AEIFGYKPPK--GGSGGGETGDIKLGDINFDGDINSIDY 680 (728) Q Consensus 644 aeifgykppk--ggsgggetgdiklgdinfdgdinsidy 680 (728) |..+||.-|+ |. +|-|||. |. +..|+.-..-. +. ++-T Consensus 253 a Hf Yg YTGP~ht Gatg~get~dp RY~Dl~~~~l~~~l~~ 291 (351) T AQA16748. 1|GH5 253 AHFYGYTGPNHTGATGIGETHDPRYRDLSPAELAAVLDD 291 (351) Confidence 5678998775 677789999999999877655555443 No 9 >CCF60459. 1|GH5_12. hmm|1. 2e-238|14-567 Probab=3. 29 E-value=9. 1 Score=28. 28 Aligned_cols=27 Identities=26% Similarity=0. 540 Sum_probs=17. 1 Templa te_Neff=4. 100 Q tr|G8M3C3|G8M3 357 SANGAKDWNSSFKRQVELYAWLQSAEG 383 (728) Q Consensus 357 sangakdwnssfkrqvelyawlqsaeg 383 (728) . |. +++-|. +.-+|+=.-|-|-. +++-T Consensus 298 np~G~sa Wl~~~~~~d~~ygw~r~~~w 324 (541)
T CCF60459. 1|GH5 298 NPKGVSAWLSGEERDDKKYGWKRDPEW 324 (541) Confidence 345667777777777666777655443 No 10 >ACI55886. 1|GH25|58-236|2. 7e-60 Probab=3. 03 E-value=10 Score=22. 20 Aligned_cols=41 Identities=20% Similarity=0. 114 Sum_probs=30. 1 Templat e_Neff=7. 700 Q tr|G8M3C3|G8M3 594 QGVKFIDIRSKY---KDDPWYEGLKKQAEQGIPFEYTLHRFWHQ 634 (728) Q Consensus 594 qgvkfidirsky---kddpwyeglkkqaeqgipfeytlhrfwhq 634 (728) +|..|.-||.-+ -. ||. |..=-+....-..|. =. ||-+. +. T Consensus 18 ~gi~Fv~ikate G~~~~D~~f~~n~~~a~~a Gl~~G~Yhf~~~~ 61 (174) T ACI55886. 1|GH2 18 SGVDFVIIKATEGTSYVDPYFASNWAGARAAGLPVGAYHFARPC 61 (174) Confidence 389999999765 36888876555555556788889988854- 12:48:16. 066 INFO: Premerge done- 12:48:16. 066 INFO: Realigning 10 HMM-HMM alignments using Maximum Accuracy algorithm- 12:48:16. 115 INFO: 4 sequences belonging to 4 database HMMs found with an E-value < 0. 001- 12:48:16. 115 INFO: Number of effective sequences of resulting query HMM: Neff = 2. 41642 We can see that the exact protein was found in db CAN in hit 1, but also some highly related proteins were found in hits 1-5. This query. a3m MSA can then be useful if we want to search a larger database like Uni Prot or Uniclust because it includes this more diverse selection of related protein sequences. Other hh-suite functions hhsuite contains other functions which may be useful if you are working with MSA or HMMs. For more detailed information, see the documentation at https://github. com/soedinglab/hh-suite/wiki hhmake: Build an HMM from an input MSA hhfilter: Filter an MSA by max sequence identity, coverage, and other criteria hhalign: Calculate pairwise alignments etc. for two HMMs/MSAs hhconsensus: Calculate the consensus sequence for an A3M/FASTA input file reformat. pl: Reformat one or many MSAs addss. pl: Add PSIPRED predicted secondary structure to an MSA or HHM file hhmakemodel. pl: Generate MSAs or coarse 3D models from HHsearch or HHblits results hhmakemodel. py: Generates coarse 3D models from HHsearch or HHblits results and modifies cif files such that they are compatible with MODELLER hhsuitedb. py: Build HHsuite database with prefiltering, packed MSA/HMM, and index files splitfasta. pl: Split a multiple-sequence FASTA file into multiple single-sequence files renumberpdb. pl: Generate PDB file with indices renumbered to match input sequence indices HHPaths. pm: Configuration file with paths to the PDB, BLAST, PSIPRED etc. mergeali. pl: Merge MSAs in A3M format according to an MSA of their seed sequences pdb2fasta. pl: Generate FASTA sequence file from SEQRES records of globbed pdb files cif2fasta. py: Generate a FASTA sequence from the pdbx_seq_one_letter_code entry of the entity_poly of globbed cif files pdbfilter. pl: Generate representative set of PDB/SCOP sequences from pdb2fasta. pl output pdbfilter. py: Generate representative set of PDB/SCOP sequences from cif2fasta. py output Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways:
Star DeepChem on GitHub
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
Join the DeepChem Gitter
The DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
Scan Py Scan Py is a scalable toolkit for analyzing single-cell gene expression data. It includes methods for preprocessing, visualization, clustering, pseudotime and trajectory inference, differential expression testing, and simulation of gene regulatory networks. There are many advantage of using a Python-based platform to process sc RNA-seq data including increased processing efficiency and running speed as well as seamless integration with machine learning frameworks. ANNDATA was presented alongside Scan Py as a generic class for handling annotated data matrices that can deal with the sparsity inherent in gene expression data. This tutorial is largely adapted from the original tutorials which can be found in Scanpy's read the docs and from this notebook. Colab This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. O p e n i n C o l a b O p e n i n C o l a b Make sure you've installed Scanpy: % conda install -c conda-forge scanpy python-igraph leidenalg Collecting package metadata (current_repodata. json): done Solving environment: done # All requested packages already installed. Note: you may need to restart the kernel to use updated packages. # import necessary packages import numpy as np import pandas as pd import scanpy as sc sc. settings. verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3) # see what package versions you have installed sc. logging. print_versions () # customize resolution and color of your figures sc. settings. set_figure_params ( dpi = 80, facecolor = 'white' ) # download the test data for this tutorial ! mkdir data ! wget http://cf. 10xgenomics. com/samples/cell-exp/1. 1. 0/pbmc3k/pbmc3k_filtered_gene_bc_matrices. tar. gz -O data/pbmc3k_filtered_gene_bc_matrices. tar. gz ! cd data ; tar -xzf pbmc3k_filtered_gene_bc_matrices. tar. gz ! mkdir write results_file = 'write/pbmc3k. h5ad' # the file that will store the analysis results adata = sc. read_10x_mtx ( 'data/filtered_gene_bc_matrices/hg19/', # the directory with the `. mtx` file var_names = 'gene_symbols', # use gene symbols for the variable names (variables-axis index) cache = True ) # write a cache file for faster subsequent reading... reading from cache file cache/data-filtered_gene_bc_matrices-hg19-matrix. h5ad See [anndata-tutorials/getting-started] for a more comprehensive introduction to Ann Data. adata. var_names_make_unique () # this is unnecessary if using `var_names='gene_ids'` in `sc. read_10x_mtx` # look at what the Ann Data object contains adata Ann Data object with n_obs × n_vars = 2700 × 32738 var: 'gene_ids' Pre-processing
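Before starting the preprocessing steps below, it can help to look at the basic anatomy of the AnnData object we just loaded. The following lines are only an illustrative sketch added here for clarity, not part of the original tutorial:
print(adata.shape)       # (n_cells, n_genes), here (2700, 32738)
print(adata.X[:5, :5])   # the underlying (sparse) count matrix
print(adata.obs.head())  # per-cell annotations (still empty at this point)
print(adata.var.head())  # per-gene annotations (currently just 'gene_ids')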
Check for highly expressed genes Show genes that yield the highest fraction of counts in each single cell, across all cells. The sc. pl. highest_expr_genes command normalizes counts per cell, and plots the genes that are most abundant in each cell. sc. pl. highest_expr_genes ( adata, n_top = 20, ) normalizing counts per cell finished (0:00:00) Note that MALAT1, a non-coding RNA that is known to be extremely abundant in many cells, ranks at the top. Basic filtering: remove cells and genes with low expression or missing values. sc. pp. filter_cells ( adata, min_genes = 200 ) sc. pp. filter_genes ( adata, min_cells = 3 ) filtered out 19024 genes that are detected in less than 3 cells Check mitochondrial genes for Quality Control Let's assemble some information about mitochondrial genes, which are important for quality control. Citing from “Simple Single Cell” workflows (Lun, Mc Carthy & Marioni, 2017): High proportions are indicative of poor-quality cells (Islam et al. 2014; Ilicic et al. 2016), possibly because of loss of cytoplasmic RNA from perforated cells. The reasoning is that mitochondria are larger than individual transcript molecules and less likely to escape through tears in the cell membrane. With pp. calculate_qc_metrics, we can compute many metrics very efficiently. adata. var [ 'mt' ] = adata. var_names. str. startswith ( 'MT-' ) # annotate the group of mitochondrial genes as 'mt' sc. pp. calculate_qc_metrics ( adata, qc_vars = [ 'mt' ], percent_top = None, log1p = False, inplace = True ) A violin plot of some of the computed quality measures: the number of genes expressed in the count matrix the total counts per cell the percentage of counts in mitochondrial genes sc. pl. violin ( adata, [ 'n_genes_by_counts', 'total_counts', 'pct_counts_mt' ], jitter = 0. 4, multi_panel = True )
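As a sanity check, the per-cell QC values produced by sc.pp.calculate_qc_metrics can be reproduced by hand from the raw count matrix. This is an illustrative sketch added for clarity and is not part of the original tutorial:
import numpy as np
total_counts = np.asarray(adata.X.sum(axis=1)).ravel()                   # total counts per cell
mt_counts = np.asarray(adata[:, adata.var['mt']].X.sum(axis=1)).ravel()  # counts in mitochondrial genes per cell
pct_mt = 100 * mt_counts / total_counts
print(np.allclose(pct_mt, adata.obs['pct_counts_mt']))  # should print True if the metrics match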
Remove cells that have too many mitochondrial genes expressed or too many total counts. High proportions of mitochondrial genes indicate poor-quality cells, potentially because of loss of cytoplasmic RNA from perforated cells. sc. pl. scatter ( adata, x = 'total_counts', y = 'pct_counts_mt' ) sc. pl. scatter ( adata, x = 'total_counts', y = 'n_genes_by_counts' ) Filter data based on QC Check current datset and filter it by slicing the Ann Data object. print ( adata ) Ann Data object with n_obs × n_vars = 2700 × 13714 obs: 'n_genes', 'n_genes_by_counts', 'total_counts', 'total_counts_mt', 'pct_counts_mt' var: 'gene_ids', 'n_cells', 'mt', 'n_cells_by_counts', 'mean_counts', 'pct_dropout_by_counts', 'total_counts ' # slice the adata object so you only keep genes and cells that pass the QC adata = adata [ adata. obs. n_genes_by_counts < 2500, :] adata = adata [ adata. obs. pct_counts_mt < 5, :] print ( adata ) View of Ann Data object with n_obs × n_vars = 2638 × 13714 obs: 'n_genes', 'n_genes_by_counts', 'total_counts', 'total_counts_mt', 'pct_counts_mt' var: 'gene_ids', 'n_cells', 'mt', 'n_cells_by_counts', 'mean_counts', 'pct_dropout_by_counts', 'total_counts '
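The two slicing steps above can also be written as one combined boolean mask. This equivalent sketch is not from the original tutorial; note the .copy(), which turns the view returned by slicing into an independent AnnData and avoids the 'Received a view of an AnnData' warning seen in the next step:
keep = (adata.obs.n_genes_by_counts < 2500) & (adata.obs.pct_counts_mt < 5)
adata = adata[keep, :].copy()  # same cells as above, but as a full copy rather than a view
print(adata.shape)             # (2638, 13714), matching the filtered object printed above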
Data normalization To correct differences in library sizes across cells, normalize the total read count of the data matrix to 10,000 reads per cell so that counts become comparable among cells. sc. pp. normalize_total ( adata, target_sum = 1e4 ) normalizing counts per cell finished (0:00:00) /Users/paulinampaiz/opt/anaconda3/envs/deepchem/lib/python3. 10/site-packages/scanpy/preprocessing/_normalization. py:170: User Warning: Received a view of an Ann Data. Making a copy. view_to_actual(adata) Log transform the data for later use in differential gene expression as well as in visualizations. The natural logarithm is used, and log1p means that an extra read is added to cells of the count matrix as a pseudo-read. See here for more information on why log scale makes more sense for genomic data. sc. pp. log1p ( adata ) Identify highly-variable genes. The function sc. pp. highly_variable_genes can detect marker genes that can help us identify cells based on a few manually set parameters, including mininum mean expression, maximum mean expression, and minimum dispersion. We will focus our analysis on such genes. sc. pp. highly_variable_genes ( adata, min_mean = 0. 0125, max_mean = 3, min_disp = 0. 5 ) extracting highly variable genes finished (0:00:00)--> added 'highly_variable', boolean vector (adata. var) 'means', float vector (adata. var) 'dispersions', float vector (adata. var) 'dispersions_norm', float vector (adata. var) # visualize the highly variable genes with a plot sc. pl. highly_variable_genes ( adata ) Set the . raw attribute of the Ann Data object to the normalized and logarithmized raw gene expression for later use in differential testing and visualizations of gene expression. This simply freezes the state of the Ann Data object. adata. raw = adata Filter the adata object so that only genes that are highly variable are kept. adata = adata [:, adata. var. highly_variable ] Correct for the effects of counts per cell and mitochondrial gene expression Regress out effects of total counts per cell and the percentage of mitochondrial genes expressed. This can consume some memory and take some time because the input data is sparse. sc. pp. regress_out ( adata, [ 'n_genes_by_counts', 'pct_counts_mt' ])
regressing out ['n_genes_by_counts', 'pct_counts_mt'] sparse input is densified and may lead to high memory use finished (0:00:11) Center the data to zero and scale to unit variance Use sc. pp. scale to center the average expression per gene to zero. Here we are also clipping scaled values that exceed standard deviation of 10. sc. pp. scale ( adata, max_value = 10 ) Dimension reduction with PCA We first use principal component analysis (PCA), a linear dimention-reduction technique, to reveal the main axes of variation and denoise the data. sc. tl. pca ( adata, svd_solver = 'arpack' ) computing PCA on highly variable genes with n_comps=50 finished (0:00:00) We can make a scatter plot in the PCA coordinates, but we will not use that later on. sc. pl. pca ( adata, color = "CST3" ) The variance ratio plot lists contributions of individual principal components (PC) to the total variance in the data. This piece of information helps us to choose an appropriate number of PCs in order to compute the neighborhood relationships between the cells, for instance, using the clustering method Louvain sc. tl. louvain() or the embedding method t SNE sc. tl. tsne() for dimension-reduction. According to the authors of Scanpy, a rough estimate of the number of PCs does fine. sc. pl. pca_variance_ratio ( adata, log = True ) Save the result up to PCA analysis. ! mkdir -p write adata. write ( results_file )
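To make the "rough estimate" of the number of PCs a little more concrete, here is a small sketch (not part of the original tutorial) that reads the variance ratios stored by sc.tl.pca in adata.uns['pca'] and prints the cumulative variance explained:
import numpy as np
var_ratio = adata.uns['pca']['variance_ratio']
cumulative = np.cumsum(var_ratio)
for n in (10, 20, 30, 40):
    print(f"{n} PCs explain {100 * cumulative[n - 1]:.1f}% of the total variance")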
Note that our adata object has following elements: observations annotation (obs), variables (var), unstructured annotation (uns), multi-dimensional observations annotation (obsm), and multi-dimensional variables annotation (varm). The meanings of these parameters are documented in the anndata package, available at anndata documentation. adata Ann Data object with n_obs × n_vars = 2638 × 1838 obs: 'n_genes', 'n_genes_by_counts', 'total_counts', 'total_counts_mt', 'pct_counts_mt' var: 'gene_ids', 'n_cells', 'mt', 'n_cells_by_counts', 'mean_counts', 'pct_dropout_by_counts', 'total_count s', 'highly_variable', 'means', 'dispersions', 'dispersions_norm', 'mean', 'std' uns: 'log1p', 'hvg', 'pca' obsm: 'X_pca' varm: 'PCs' Computing and Embedding the neighborhood graph Use the PCA representation of the data matrix to compute the neighborhood graph of cells. sc. pp. neighbors ( adata, n_neighbors = 10, n_pcs = 40 ) The auhours of Scanpy suggest embedding the graph in two dimensions using UMAP ( Mc Innes et al., 2018 ). UMAP is potentially more faithful to the global connectivity of the manifold than t SNE, i. e., it better preserves trajectories. sc. tl. umap ( adata ) sc. pl. umap ( adata, color = [ 'CST3', 'NKG7', 'PPBP' ]) As we set the . raw attribute of adata, the previous plots showed the “raw” (normalized, logarithmized, but uncorrected) gene expression. You can also plot the scaled and corrected gene expression by explicitly stating that you don't want to use . raw. sc. pl. umap ( adata, color = [ 'CST3', 'NKG7', 'PPBP' ], use_raw = False ) In some ocassions, you might still observe disconnected clusters and similar connectivity violations. They can usually be remedied by running: tl. paga ( adata ) pl. paga ( adata, plot = False ) # remove `plot=False` if you want to see the coarse-grained graph tl. umap ( adata, init_pos = 'paga' ) Clustering the neighborhood graph As with Seurat and many other frameworks, we recommend the Leiden graph-clustering method (community detection based on optimizing modularity) by Traag et al. (2018). Note that Leiden clustering directly clusters the neighborhood graph of cells, which we already computed in the previous section. Compared with the Louvain algorithm, the Leiden algorithm yields communities that are guaranteed to be connected. When applied iteratively, the Leiden algorithm converges to a partition in which all subsets of all communicities are locally optimally assigned. Last but not least, it runs faster. sc. tl. leiden ( adata ) Plot the clusters using sc. pl. umap. Note that the color parameter accepts both individual genes and the clustering method (leiden in this case). sc. pl. umap ( adata, color = [ 'leiden', 'CST3', 'NKG7' ]) We save the result again. adata. write ( results_file ) Identifying marker genes Compute a ranking for the highly differential genes in each cluster. For this, by default, the . raw attribute of Ann Data is used in case it has been initialized before. The simplest and fastest method to do so is the t-test. Other methods include Wilcoxon rank-sum (Mann-Whitney-U) test, MAST, limma, DESeq2, and diffxpy by the Theis lab. The authours of Scanpy reccomend using the Wilcoxon rank-sum test in publications.
The Wilcoxon's test For simplicity, we start with the Mann-Whitney-U test. The null hypothesis is that the rank of a gene in a cluster is the same as its rank in all cells. The alternative hypothesis is that the rank of a gene in a cluster is much higher than its rank in all cells (one-sided). The function sc. tl. rank_genes_groups performs the test, and sc. pl. rank_genes_groups plots the top genes. sc. tl. rank_genes_groups ( adata, groupby = 'leiden', method = 'wilcoxon' ) sc. pl. rank_genes_groups ( adata, n_genes = 15, sharey = False ) sc. settings. verbosity = 2 # reduce the verbosity adata. write ( results_file ) # write the output to the results file The Student's t-test An alternative to the non-parametric Wilcoxon test is the t-test. sc. tl. rank_genes_groups ( adata, 'leiden', method = 't-test' ) sc. pl. rank_genes_groups ( adata, n_genes = 15, sharey = False ) As an alternative, let us rank genes using logistic regression. For instance, this has been suggested by Natranos et al. (2018). The essential difference is that here, we use a multi-variate appraoch whereas conventional differential tests are uni-variate. Clark et al. (2014) has more details. sc. tl. rank_genes_groups ( adata, 'leiden', method = "logreg" ) sc. pl. rank_genes_groups ( adata, n_genes = 15, sharey = False ) Let us also define a list of marker genes for later reference. marker_genes = [ 'IL7R', 'CD79A', 'MS4A1', 'CD8A', 'CD8B', 'LYZ', 'CD14', 'LGALS3', 'S100A8', 'GNLY', 'NKG7', 'KLRB1', 'FCGR3A', 'MS4A7', 'FCER1A', 'CST3', 'PPBP' ] Listing signatures using the results of the Wilcoxon's test We use the results of the Wilcoxon's test for downstream analysis. Reload the object that has been save with the Wilcoxon Rank-Sum test result. adata = sc. read ( results_file ) Show the 10 top ranked genes per cluster 0, 1, ..., 7 in a dataframe. pd. Data Frame ( adata. uns [ 'rank_genes_groups' ][ 'names' ]). head ( 5 ) Get a table with the scores and groups. result = adata. uns [ 'rank_genes_groups' ] groups = result [ 'names' ]. dtype. names pd. Data Frame ( { group + '_' + key [: 1 ]: result [ key ][ group ] for group in groups for key in [ 'names', 'pvals' ]}). head ( 5 ) Compare to a single cluster: sc. tl. rank_genes_groups ( adata, 'leiden', groups = [ '0' ], reference = '1', method = 'wilcoxon' ) sc. pl. rank_genes_groups ( adata, groups = [ '0' ], n_genes = 20 ) If we want a more detailed view for a certain group, use sc. pl. rank_genes_groups_violin. sc. pl. rank_genes_groups_violin ( adata, groups = '0', n_genes = 8 ) If you want to compare a certain gene across groups, use the following. sc. pl. violin ( adata, [ 'CST3', 'NKG7', 'PPBP' ], groupby = 'leiden' ) Actually mark the cell types. new_cluster_names = [ 'CD4 T', 'CD14 Monocytes', 'B', 'CD8 T',
'NK', 'FCGR3A Monocytes', 'Dendritic', 'Megakaryocytes' ] adata. rename_categories ( 'leiden', new_cluster_names ) # plot the UMAP sc. pl. umap ( adata, color = 'leiden', legend_loc = 'on data', title = '', frameon = False, save = '. pdf' ) Now that we annotated the cell types, let us visualize the marker genes. sc. pl. dotplot ( adata, marker_genes, groupby = 'leiden' ); There is also a very compact violin plot. sc. pl. stacked_violin ( adata, marker_genes, groupby = 'leiden', rotation = 90 ); During the course of this analysis, the Ann Data accumulated the following annotations. adata If you want to save your results: adata. write ( results_file, compression = 'gzip' ) # `compression='gzip'` saves disk space, but slows down writing and subsequent reading Get a rough overview of the file using h5ls, which has many options - for more details see here. The file format might still be subject to further optimization in the future. All reading functions will remain backwards-compatible, though. If you want to share this file with people who merely want to use it for visualization, a simple way to reduce the file size is by removing the dense scaled and corrected data matrix. The file still contains the raw data used in the visualizations in adata. raw. adata. raw. to_adata (). write ( '. /write/pbmc3k_without X. h5ad' ) If you want to export to “csv”, you have the following options: # Export single fields of the annotation of observations # adata. obs[['n_counts', 'louvain_groups']]. to_csv( # '. /write/pbmc3k_corrected_louvain_groups. csv') # Export single columns of the multidimensional annotation # adata. obsm. to_df()[['X_pca1', 'X_pca2']]. to_csv( # '. /write/pbmc3k_corrected_X_pca. csv') # Or export everything except the data using `. write_csvs`. # Set `skip_data=False` if you also want to export the data. # adata. write_csvs(results_file[:-5], )
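If what you mainly want to share is the marker-gene table rather than the full matrix, the ranking stored in adata.uns['rank_genes_groups'] can also be exported directly. A small sketch, assuming a recent Scanpy version that provides the sc.get.rank_genes_groups_df helper:
de_table = sc.get.rank_genes_groups_df(adata, group=None)  # all groups in one dataframe
de_table.to_csv('./write/pbmc3k_rank_genes_groups.csv', index=False)
print(de_table.head())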
Deep Probabilistic Analysis of Single-Cell Omics Data Recordings at single-cell resolution can give us a better understanding of the biological differences in our sample. As sequencing technologies and instruments have become better and cheaper, generating single-cell data is becoming more popular. In order to derive meaningful biological insights, it is important to select reliable analysis tools such as the one we will cover in this tutorial. scvi-tools (single-cell variational inference tools) is a package for probabilistic modeling and analysis of single-cell omics data, built on top of Py Torch and Ann Data that aims to address some of the limitations that arise when developing and implementing probabilistic models. scvi-tools is used in tandem with Scanpy for which Deepchem also offers a tutorial. In the broader analysis pipeline, sc VI sits downstream of initial quality control (QC)-driven preprocessing and generates outputs that may be further interpreted via general single-cell analysis tools. In this introductory tutorial, we go through the different steps of an scvi-tools workflow. While we focus on sc VI in this tutorial, the API is consistent across all models. Please note that this tutorial was largely adapted from the one provided by scvi-tools and you can head to their page to find more information. Colab This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. O p e n i n C o l a b O p e n i n C o l a b # install necessary packages ! pip install --quiet scvi-colab from scvi_colab import install install () import scvi import scanpy as sc import matplotlib. pyplot as plt # set preferences for figures and plots sc. set_figure_params ( figsize = ( 4, 4 )) # for white background of figures (only for docs rendering) % config Inline Backend. print_figure_kwargs={'facecolor' : "w"} % config Inline Backend. figure_format='retina' Global seed set to 0 Loading and preparing data Let us first load a subsampled version of the heart cell atlas dataset described in Litviňuková et al. (2020). scvi-tools has many "built-in" datasets as well as support for loading arbitrary . csv, . loom, and . h5ad (Ann Data) files. Please see our tutorial on data loading for more examples. Litviňuková, M., Talavera-López, C., Maatz, H., Reichart, D., Worth, C. L., Lindberg, E. L., ... & Teichmann, S. A. (2020). Cells of the adult human heart. Nature, 588(7838), 466-472. Important All scvi-tools models require Ann Data objects as input. adata = scvi. data. heart_cell_atlas_subsampled () INFO File data/hca_subsampled_20k. h5ad already downloaded Now we preprocess the data to remove, for example, genes that are very lowly expressed and other outliers. For these tasks we prefer the Scanpy preprocessing module. sc. pp. filter_genes ( adata, min_counts = 3 ) In sc RNA-seq analysis, it's popular to normalize the data. These values are not used by scvi-tools, but given their popularity in other tasks as well as for visualization, we store them in the anndata object separately (via the . raw attribute).
Important Unless otherwise specified, scvi-tools models require the raw counts (not log library size normalized). adata. layers [ "counts" ] = adata. X. copy () # preserve counts sc. pp. normalize_total ( adata, target_sum = 1e4 ) sc. pp. log1p ( adata ) adata. raw = adata # freeze the state in `. raw` Finally, we perform feature selection, to reduce the number of features (genes in this case) used as input to the scvi-tools model. For best practices of how/when to perform feature selection, please refer to the model-specific tutorial. For sc VI, we recommend anywhere from 1,000 to 10,000 HVGs, but it will be context-dependent. sc. pp. highly_variable_genes ( adata, n_top_genes = 1200, subset = True, layer = "counts", flavor = "seurat_v3", batch_key = "cell_source" ) Now it's time to run setup_anndata(), which alerts scvi-tools to the locations of various matrices inside the anndata. It's important to run this function with the correct arguments so scvi-tools is notified that your dataset has batches, annotations, etc. For example, if batches are registered with scvi-tools, the subsequent model will correct for batch effects. See the full documentation for details. In this dataset, there is a "cell_source" categorical covariate, and within each "cell_source", multiple "donors", "gender" and "age_group". There are also two continuous covariates we'd like to correct for: "percent_mito" and "percent_ribo". These covariates can be registered using the categorical_covariate_keys argument. If you only have one categorical covariate, you can also use the batch_key argument instead. scvi. model. SCVI. setup_anndata ( adata, layer = "counts", categorical_covariate_keys = [ "cell_source", "donor" ], continuous_covariate_keys = [ "percent_mito", "percent_ribo" ] ) Warning If the adata is modified after running setup_anndata, please run setup_anndata again, before creating an instance of a model. Creating and training a model While we highlight the sc VI model here, the API is consistent across all scvi-tools models and is inspired by that of scikit-learn. For a full list of options, see the scvi documentation. model = scvi. model. SCVI ( adata ) We can see an overview of the model by printing it. model SCVI Model with the following params: n_hidden: 128, n_latent: 10, n_layers: 1, dropout_rate: 0. 1, dispersion: gene, gene_likelihood: zinb, latent_distribution: normal Training status: Not Trained Important All scvi-tools models run faster when using a GPU. By default, scvi-tools will use a GPU if one is found to be available. Please see the installation page for more information about installing scvi-tools when a GPU is available. model. train ()
GPU available: True, used: True TPU available: False, using: 0 TPU cores IPU available: False, using: 0 IPUs LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0] Epoch 400/400: 100%|██████████| 400/400 [05:43<00:00, 1. 16it/s, loss=284, v_num=1] Saving and loading Saving consists of saving the model neural network weights, as well as parameters used to initialize the model. # model. save("my_model/") # model = scvi. model. SCVI. load("my_model/", adata=adata, use_gpu=True) Obtaining model outputs latent = model. get_latent_representation () It's often useful to store the outputs of scvi-tools back into the original anndata, as it permits interoperability with Scanpy. adata. obsm [ "X_sc VI" ] = latent The model. get... () functions default to using the anndata that was used to initialize the model. It's possible to also query a subset of the anndata, or even use a completely independent anndata object as long as the anndata is organized in an equivalent fashion. adata_subset = adata [ adata. obs. cell_type == "Fibroblast" ] latent_subset = model. get_latent_representation ( adata_subset ) INFO Received view of anndata, making copy. INFO:scvi. model. base. _base_model:Received view of anndata, making copy. INFO Input Ann Data not setup with scvi-tools. attempting to transfer Ann Data setup INFO:scvi. model. base. _base_model:Input Ann Data not setup with scvi-tools. attempting to transfer Ann Data setup denoised = model. get_normalized_expression ( adata_subset, library_size = 1e4 ) denoised. iloc [: 5, : 5 ] INFO Received view of anndata, making copy. INFO:scvi. model. base. _base_model:Received view of anndata, making copy. INFO Input Ann Data not setup with scvi-tools. attempting to transfer Ann Data setup INFO:scvi. model. base. _base_model:Input Ann Data not setup with scvi-tools. attempting to transfer Ann Data setup ISG15 TNFRSF18 VWA1 HES5 SPSB1 GTCAAGTCATGCCACG-1-HCAHeart7702879 0. 343808 0. 118372 1. 945633 0. 062702 4. 456603 GAGTCATTCTCCGTGT-1-HCAHeart8287128 1. 552080 0. 275485 1. 457585 0. 013442 14. 617260 CCTCTGATCGTGACAT-1-HCAHeart7702881 5. 157080 0. 295140 1. 195748 0. 143792 2. 908867 CGCCATTCATCATCTT-1-H0035_apex 0. 352172 0. 019281 0. 570386 0. 105633 6. 325405 TCGTAGAGTAGGACTG-1-H0015_septum 0. 290155 0. 040910 0. 400771 0. 723409 8. 142258 Let's store the normalized values back in the anndata. adata. layers [ "scvi_normalized" ] = model. get_normalized_expression ( library_size = 10e4 ) Interoperability with Scanpy Scanpy is a powerful python library for visualization and downstream analysis of sc RNA-seq data. We show here how to feed the objects produced by scvi-tools into a scanpy workflow. Visualization without batch correction Warning We use UMAP to qualitatively assess our low-dimension embeddings of cells. We do not advise using UMAP or any similar approach quantitatively. We do recommend using the embeddings produced by sc VI as a plug-in replacement of what you would get from PCA, as we show below.
First, we demonstrate the presence of nuisance variation with respect to nuclei/whole cell, age group, and donor by plotting the UMAP results of the top 30 PCA components for the raw count data. # run PCA then generate UMAP plots sc. tl. pca ( adata ) sc. pp. neighbors ( adata, n_pcs = 30, n_neighbors = 20 ) sc. tl. umap ( adata, min_dist = 0. 3 ) sc. pl. umap ( adata, color = [ "cell_type" ], frameon = False, ) sc. pl. umap ( adata, color = [ "donor", "cell_source" ], ncols = 2, frameon = False, ) We see that while the cell types are generally well separated, nuisance variation plays a large part in the variation of the data. Visualization with batch correction (sc VI) Now, let us try using the sc VI latent space to generate the same UMAP plots to see if sc VI successfully accounts for batch effects in the data. # use sc VI latent space for UMAP generation sc. pp. neighbors ( adata, use_rep = "X_sc VI" ) sc. tl. umap ( adata, min_dist = 0. 3 ) sc. pl. umap ( adata, color = [ "cell_type" ], frameon = False, ) sc. pl. umap ( adata, color = [ "donor", "cell_source" ], ncols = 2, frameon = False, )
We can see that sc VI was able to correct for nuisance variation due to nuclei/whole cell, age group, and donor, while maintaining separation of cell types. Clustering on the sc VI latent space The user will note that we imported curated labels from the original publication. Our interface with scanpy makes it easy to cluster the data with scanpy from sc VI's latent space and then reinject them into sc VI (e. g., for differential expression). # neighbors were already computed using sc VI sc. tl. leiden ( adata, key_added = "leiden_sc VI", resolution = 0. 5 ) sc. pl. umap ( adata, color = [ "leiden_sc VI" ], frameon = False, ) Differential expression We can also use many scvi-tools models for differential expression. For further details on the methods underlying these functions as well as additional options, please see the API docs. adata. obs. cell_type. head ()
AACTCCCCACGAGAGT-1-HCAHeart7844001 Myeloid ATAACGCAGAGCTGGT-1-HCAHeart7829979 Ventricular_Cardiomyocyte GTCAAGTCATGCCACG-1-HCAHeart7702879 Fibroblast GGTGATTCAAATGAGT-1-HCAHeart8102858 Endothelial AGAGAATTCTTAGCAG-1-HCAHeart8102863 Endothelial Name: cell_type, dtype: category Categories (11, object): ['Adipocytes', 'Atrial_Cardiomyocyte', 'Endothelial', 'Fibroblast', ..., 'Neuronal', ' Pericytes', 'Smooth_muscle_cells', 'Ventricular_Cardiomyocyte'] For example, a 1-vs-1 DE test is as simple as: de_df = model. differential_expression ( groupby = "cell_type", group1 = "Endothelial", group2 = "Fibroblast" ) de_df. head () DE... : 100%|██████████| 1/1 [00:00<00:00, 3. 07it/s] proba_de proba_not_de bayes_factor scale1 scale2 pseudocounts delta lfc_mean lfc_median lfc_std SOX17 0. 9998 0. 0002 8. 516943 0. 001615 0. 000029 0. 0 0. 25 6. 222365 6. 216846 1. 967564 SLC9A3R2 0. 9996 0. 0004 7. 823621 0. 010660 0. 000171 0. 0 0. 25 5. 977907 6. 049340 1. 672150 ABCA10 0. 9990 0. 0010 6. 906745 0. 000081 0. 006355 0. 0 0. 25-8. 468659-9. 058912 2. 959383 EGFL7 0. 9986 0. 0014 6. 569875 0. 008471 0. 000392 0. 0 0. 25 4. 751251 4. 730982 1. 546327 VWF 0. 9984 0. 0016 6. 436144 0. 014278 0. 000553 0. 0 0. 25 5. 013347 5. 029471 1. 758744 5 rows × 22 columns We can also do a 1-vs-all DE test, which compares each cell type with the rest of the dataset: de_df = model. differential_expression ( groupby = "cell_type", ) de_df. head () DE... : 100%|██████████| 11/11 [00:03<00:00, 3. 08it/s] proba_de proba_not_de bayes_factor scale1 scale2 pseudocounts delta lfc_mean lfc_median lfc_std CIDEC 0. 9988 0. 0012 6. 724225 0. 002336 0. 000031 0. 0 0. 25 7. 082959 7. 075700 2. 681833 ADIPOQ 0. 9988 0. 0012 6. 724225 0. 003627 0. 000052 0. 0 0. 25 7. 722131 7. 461277 3. 332577 GPAM 0. 9986 0. 0014 6. 569875 0. 025417 0. 000202 0. 0 0. 25 7. 365266 7. 381156 2. 562121 PLIN1 0. 9984 0. 0016 6. 436144 0. 004482 0. 000048 0. 0 0. 25 7. 818194 7. 579515 2. 977385 GPD1 0. 9974 0. 0026 5. 949637 0. 002172 0. 000044 0. 0 0. 25 6. 543847 6. 023436 2. 865962 5 rows × 22 columns We now extract top markers for each cluster using the DE results. markers = {} cats = adata. obs. cell_type. cat. categories for i, c in enumerate ( cats ): cid = " {} vs Rest". format ( c ) cell_type_df = de_df. loc [ de_df. comparison == cid ] cell_type_df = cell_type_df [ cell_type_df. lfc_mean > 0 ] cell_type_df = cell_type_df [ cell_type_df [ "bayes_factor" ] > 3 ] cell_type_df = cell_type_df [ cell_type_df [ "non_zeros_proportion1" ] > 0. 1 ] markers [ c ] = cell_type_df. index. tolist ()[: 3 ]
sc. tl. dendrogram ( adata, groupby = "cell_type", use_rep = "X_sc VI" ) sc. pl. dotplot ( adata, markers, groupby = 'cell_type', dendrogram = True, color_map = "Blues", swap_axes = True, use_raw = True, standard_scale = "var", ) We can also visualize the sc VI normalized gene expression values with the layer option. sc. pl. heatmap ( adata, markers, groupby = 'cell_type', layer = "scvi_normalized", standard_scale = "var", dendrogram = True,
figsize = ( 8, 12 ) ) Logging information Verbosity varies in the following way: logger. set Level(logging. WARNING) will show a progress bar. logger. set Level(logging. INFO) will show global logs including the number of jobs done. logger. set Level(logging. DEBUG) will show detailed logs for each training (e. g the parameters tested). This function's behaviour can be customized, please refer to its documentation for information about the different parameters available. In general, you can use scvi. settings. verbosity to set the verbosity of the scvi package. Note that verbosity corresponds to the logging levels of the standard python logging module. By default, that verbosity level is set to INFO (=20). As a reminder the logging levels are: Level Numeric value
CRITICAL 50 ERROR 40 WARNING 30 INFO 20 DEBUG 10 NOTSET 0 Reference If you use scvi-tools in your research, please consider citing @article { Gayoso2022, author = { Gayoso, Adam and Lopez, Romain and Xing, Galen and Boyeau, Pierre and Valiollah Pour Amiri, Valeh title = { A Python library for probabilistic analysis of single-cell omics data }, journal = { Nature Biotechnology }, year = { 2022 }, month = { Feb }, day = { 07 }, issn = { 1546-1696 }, doi = { 10. 1038 / s41587-021-01206-w }, url = { https : // doi. org / 10. 1038 / s41587-021-01206-w } } along with the publicaton describing the model used. This tutorial was contributed to Deepchem by: @manual { Bioinformatics, title = { Deep Probabilistic Analysis of Single-Cell Omics Data }, organization = { Deep Chem }, author = { Paiz, Paulina }, howpublished = { \ url { https : // github. com / deepchem / deepchem / blob / master / examples / tutorials / Deep_probabilistic_analysis_of_single year = { 2022 }, }
Cell Counting Cell counting is a fundamental task found in many biological research and medical diagnostic processes. It underlies decisions in cell culture, drug development, and disease analysis. However, traditional manual cell counting methods are often time-consuming and prone to human error. This variability can hinder research progress and lead to inconsistencies across studies. Although cell counting machines exist, they are expensive and may not be readily available to all researchers. Automating cell counting using machine learning offers a powerful solution to this problem. ML-powered cell counters can quickly and accurately analyze large volumes of cell samples, freeing up researchers' time and minimizing inconsistencies. Ready to build your own cell counter and revolutionize your research efficiency? This tutorial equips you with the knowledge and skills to create a customized tool that streamlines your cell counting needs. Colab This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. O p e n i n C o l a b O p e n i n C o l a b Setup To run Deep Chem within Colab, you'll need to run the following installation commands. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install Deep Chem in your local machine again. ! pip install --pre deepchem import deepchem as dc dc. __version__ Requirement already satisfied: deepchem in /usr/local/lib/python3. 10/dist-packages (2. 7. 2. dev20240221173509) Requirement already satisfied: joblib in /usr/local/lib/python3. 10/dist-packages (from deepchem) (1. 3. 2) Requirement already satisfied: numpy>=1. 21 in /usr/local/lib/python3. 10/dist-packages (from deepchem) (1. 25. 2) Requirement already satisfied: pandas in /usr/local/lib/python3. 10/dist-packages (from deepchem) (1. 5. 3) Requirement already satisfied: scikit-learn in /usr/local/lib/python3. 10/dist-packages (from deepchem) (1. 2. 2) Requirement already satisfied: sympy in /usr/local/lib/python3. 10/dist-packages (from deepchem) (1. 12) Requirement already satisfied: scipy>=1. 10. 1 in /usr/local/lib/python3. 10/dist-packages (from deepchem) (1. 11. 4) Requirement already satisfied: rdkit in /usr/local/lib/python3. 10/dist-packages (from deepchem) (2023. 9. 5) Requirement already satisfied: python-dateutil>=2. 8. 1 in /usr/local/lib/python3. 10/dist-packages (from pandas->d eepchem) (2. 8. 2) Requirement already satisfied: pytz>=2020. 1 in /usr/local/lib/python3. 10/dist-packages (from pandas->deepchem) ( 2023. 4) Requirement already satisfied: Pillow in /usr/local/lib/python3. 10/dist-packages (from rdkit->deepchem) (9. 4. 0) Requirement already satisfied: threadpoolctl>=2. 0. 0 in /usr/local/lib/python3. 10/dist-packages (from scikit-lear n->deepchem) (3. 3. 0) Requirement already satisfied: mpmath>=0. 19 in /usr/local/lib/python3. 10/dist-packages (from sympy->deepchem) (1. 3. 0) Requirement already satisfied: six>=1. 5 in /usr/local/lib/python3. 10/dist-packages (from python-dateutil>=2. 8. 1->pandas->deepchem) (1. 16. 0) '2. 7. 2. dev' Now we will import all the necessary packages and functions import numpy as np import matplotlib. pyplot as plt from deepchem. data import Numpy Dataset from deepchem. models. 
torch_models import CNN BBBC Datasets We used the image set BBBC002v1 [ Carpenter et al., Genome Biology, 2006 ] from the Broad Bioimage Benchmark Collection [ Ljosa et al., Nature Methods, 2012 ] for this tutorial. The Broad Bioimage Benchmark Collection Dataset 002 (BBBC002) contains images of Drosophila Kc167 cells. The ground truth labels consist of cell counts. Full details about this dataset are present at
deepchem.pdf
https://bbbc. broadinstitute. org/BBBC002. For counting cells, our dataset needs to have images as inputs and the corresponding cell counts as the ground truth labels. We have several BBBC datasets that can be loaded using the deepchem package. These datasets are an extension to Molecule Net and can be accessed through dc. molnet. The BBBC002 dataset consists of 60 images, each 512x512 pixels in size, which are split into train, validation and test sets in a 80/10/10 split by default. We also use splitter='random' in order to ensure that these images are randomly split into the train, validation and test sets in the above mention ratios. bbbc2_dataset = dc. molnet. load_bbbc002 ( splitter = 'random' ) tasks, dataset, transforms = bbbc2_dataset train, val, test = dataset train_x, train_y, train_w, train_ids = train. X, train. y, train. w, train. ids val_x, val_y, val_w, val_ids = val. X, val. y, val. w, val. ids test_x, test_y, test_w, test_ids = test. X, test. y, test. w, test. ids Now that we've loaded the dataset and randomly split it, let's take a look at the data. print ( f "Shape of train data: { train_x. shape } " ) print ( f "Shape of train labels: { train_y. shape } " ) Shape of train data: (40, 512, 512) Shape of train labels: (40,) We can confirm that a sample from our dataset is in the form of a 512x512 image. Let's visualize this sample: i = 2 plt. figure ( figsize = ( 5, 5 )) plt. imshow ( train_x [ i ]) plt. title ( f "Cell Count: { train_y [ i ] } " ) plt. show () Now let's prepare the data for the model. Py Torch based CNN Models require that images be in the shape of (C, H, W), wherein 'C' is the number of input channels, 'H' is the height of the image and 'W' is the width of the image. So we will reshape the data. train_x = np. array ( train_x. reshape (-1, 512, 512, 1 ), dtype = np. float32 ) train_y = np. array ( train_y. reshape (-1 ), dtype = np. float32 ) val_x = np. array ( val_x. reshape (-1, 512, 512, 1 ), dtype = np. float32 ) val_y = np. array ( val_y. reshape (-1 ), dtype = np. float32 ) test_x = np. array ( test_x. reshape (-1, 512, 512, 1 ), dtype = np. float32 ) test_y = np. array ( test_y. reshape (-1 ), dtype = np. float32 ) train_data = Numpy Dataset ( train_x, train_y ) val_data = Numpy Dataset ( val_x, val_y )
test_data = Numpy Dataset ( test_x, test_y ) Creating and training our model We will use the rms_score metric for our Validation Callback in order to monitor the performance of the model during training. For more information on how to use callbacks, refer to this tutorial on Advanced Model Training We will use the CNN model from the deepchem package. Since cell counting is a relational problem, we will use the regression mode. We will use a 2D CNN model, with 6 hidden layers of the following sizes [32, 64, 128, 128, 64, 32] and a kernel size of 3 across all the filters, you can modify both the kernel size and the number of filters per layer. We have also used average pooling made residual connections and added dropout layers between subsequent layers in order to improve performance. Feel free to experiment with various models. regression_metric = dc. metrics. Metric ( dc. metrics. rms_score ) model = CNN ( n_tasks = 1, n_features = 1, dims = 2, layer_filters = [ 32, 64, 128, 128, 64, 32 ], kernel_size = 3, learning_rate mode = 'regression', padding = 'same', batch_size = 4, residual = True, dropouts = 0. 1, pool_type = 'average' ) callback = dc. models. Validation Callback ( val_data, 10, [ regression_metric ]) avg_loss = model. fit ( train_data, nb_epoch = 20, callbacks = callback ) /usr/local/lib/python3. 10/dist-packages/torch/nn/modules/lazy. py:180: User Warning: Lazy modules are a new featur e under heavy development so changes to the API or functionality can happen at any moment. warnings. warn('Lazy modules are a new feature under heavy development ' Step 10 validation: rms_score=43. 5554 Step 20 validation: rms_score=61. 2546 Step 30 validation: rms_score=39. 8772 Step 40 validation: rms_score=52. 2475 Step 50 validation: rms_score=42. 5802 Step 60 validation: rms_score=38. 1957 Step 70 validation: rms_score=72. 0341 Step 80 validation: rms_score=35. 5798 Step 90 validation: rms_score=46. 5774 Step 100 validation: rms_score=31. 5153 Step 110 validation: rms_score=38. 8215 Step 120 validation: rms_score=35. 6907 Step 130 validation: rms_score=29. 9797 Step 140 validation: rms_score=28. 0428 Step 150 validation: rms_score=44. 3926 Step 160 validation: rms_score=37. 7657 Step 170 validation: rms_score=34. 5076 Step 180 validation: rms_score=26. 8319 Step 190 validation: rms_score=26. 6618 Step 200 validation: rms_score=26. 4627 Evaluating the performance of our model Now let's use mean_absolute_error as our test metric and print out the results of our model. We have also created a graph of True vs Predicted values in order to visualize our model's performance We can see that the model performs fairly well with a test loss of about 14. 6. This means that on average, the predicted number of cells for a sample image is off by 14. 6 cells when compared to the ground truth. Although this seems like a very high value for test loss, we will see that a difference of about 15 cells is actually not bad for this particular task. test_metric = dc. metrics. Metric ( dc. metrics. mean_absolute_error ) preds = np. array ( model. predict ( train_data ), dtype = np. uint32 ) print ( "Train loss: ", test_metric. compute_metric ( train_y, preds )) preds = np. array ( model. predict ( val_data ), dtype = np. uint32 ) print ( "Val Loss: ", test_metric. compute_metric ( val_y, preds )) preds = np. array ( model. predict ( test_data ), dtype = np. uint32 ) print ( "Test Loss: ", test_metric. compute_metric ( test_y, preds )) plt. figure ( figsize = ( 4, 4 )) plt. title ( "True vs. Predicted" ) plt. 
plot ( test_y, color = 'red', label = 'true' ) plt. plot ( preds, color = 'blue', label = 'preds' ) plt. legend () plt. show ()
Train loss: 19. 05 Val Loss: 22. 2 Test Loss: 14. 6 Let us print out the mean cell count of our predictions and compare them with the ground truth. We will also print out the maximum difference between the ground truth and the prediction from the test set. print ( f "Mean of True Values: { np. mean ( test_y ) :. 2f } " ) print ( f "Mean of Predictions: { np. mean ( preds ) :. 2f } " ) diff = [] for i in range ( len ( test_y )): diff. append ( abs ( test_y [ i ] - preds [ i ])) print ( f "Max of Difference: { np. max ( diff ) } " ) Mean of True Values: 87. 60 Mean of Predictions: 87. 80 Max of Difference: 31. 0 We can observe that the averages of our predictions and the ground truth are very close with a difference of just 0. 20. Although we see a maximum difference of 31 cells between the prediction and true value, when we take into account the Test Loss, the close proximity of the means of predictions and the true labels, and the small size of our test set, we can say that our model performs fairly well. Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Discord The Deep Chem Discord hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! Citing This Tutorial If you found this tutorial useful please consider citing it using the provided Bib Te X. @manual { Bioinformatics, title = { Cell Counting Tutorial }, organization = { Deep Chem }, author = { Menezes, Aaron }, howpublished = { \ url { https : // github. com / deepchem / deepchem / blob / master / examples / tutorials / Cell_Counting_Tutorial. ipynb year = { 2024 },
}
Introduction To Material Science Table of Contents: Introduction Setup Featurizers Crystal Featurizers Compound Featurizers Datasets Predicting structural properties of a crystal Further Reading Introduction One of the most exciting recent applications of machine learning is its application to the material science domain. Deep Chem helps in the development and application of machine learning to solid-state systems. As a starting point for applying machine learning to material science, Deep Chem provides material science datasets as part of the Molecule Net suite of datasets, along with data featurizers and implementations of popular machine learning algorithms specific to the material science domain. This tutorial serves as an introduction to using Deep Chem for machine learning tasks in material science. Traditionally, experimental research was used to find and characterize new materials, but traditional methods are strongly limited by the resources and equipment they require. Material science is one of the booming areas where machine learning is making new inroads. The discovery of new material properties holds the key to many problems, such as climate change and the development of new semiconducting materials. Deep Chem acts as a toolbox for using machine learning in material science. This tutorial can also be used in Google Colab. If you'd like to open this notebook in Colab, you can use the following link. This notebook is made to run without any GPU support. O p e n i n C o l a b ! pip install --pre deepchem Deep Chem for material science will also require the additional libraries pymatgen and matminer. These two libraries assist machine learning in material science. For the graph neural network models which will be used in the backend, Deep Chem requires the dgl library. All of these can be installed using pip. Note that when running locally, you should install a higher version of the jupyter notebook (>6. 5. 5, here on colab). ! pip install -q pymatgen == 2023. 12. 18 ! pip install -q matminer == 0. 9. 0 ! pip install -q dgl ! pip install -q tqdm import deepchem as dc dc. __version__ from tqdm import tqdm import matplotlib. pyplot as plt import pymatgen as mg from pymatgen import core as core import os os. environ [ 'DEEPCHEM_DATA_DIR' ] = os. getcwd () Featurizers Material Structure Featurizers Crystals are geometric structures which have to be featurized for use in machine learning algorithms. The following featurizers provided by Deep Chem help in featurizing crystals: The Sine Coulomb Matrix featurizer featurizes a crystal by calculating the sine Coulomb matrix for the crystal. It can be called using the dc. feat. Sine Coulomb Matrix function. [1]
The CGCNNFeaturizer calculates structure graph features of crystals. It can be called using dc. featurizers. CGCNNFeaturizer function. [2] The LCNNFeaturizer calculates the 2-D Surface graph features in 6 different permutations. It can be used using the utility dc. feat. LCNNFeaturizer. [3] [1] Faber et al. “Crystal Structure Representations for Machine Learning Models of Formation Energies”, Inter. J. Quantum Chem. 115, 16, 2015. https://arxiv. org/abs/1503. 07406 [2] T. Xie and J. C. Grossman, “Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties”, Phys. Rev. Lett. 120, 2018, https://arxiv. org/abs/1710. 10324 [3] Jonathan Lym, Geun Ho Gu, Yousung Jung, and Dionisios G. Vlachos, Lattice Convolutional Neural Network Modeling of Adsorbate Coverage Effects, J. Phys. Chem. C 2019 https://pubs. acs. org/doi/10. 1021/acs. jpcc. 9b03370 Example: Featurizing a crystal In this part, we will be using pymatgen for representing the crystal structure of Caesium Chloride and calculate structure graph features using CGCNNFeaturizer. The Cs Cl crystal is a cubic lattice with the chloride atoms lying upon the lattice points at the edges of the cube, while the caesium atoms lie in the holes in the center of the cubes. The green colored atoms are the caesium atoms in this crystal structure and chloride atoms are the grey ones. Source: Wikipedia # the lattice paramter of a cubic cell a = 4. 2 lattice = core. Lattice. cubic ( a ) # Atoms in a crystal atomic_species = [ "Cs", "Cl" ] # Coordinates of atoms in a crystal cs_coords = [ 0, 0, 0 ] cl_coords = [ 0. 5, 0. 5, 0. 5 ] structure = mg. core. Structure ( lattice, atomic_species, [ cs_coords, cl_coords ]) structure Structure Summary Lattice abc : 4. 2 4. 2 4. 2 angles : 90. 0 90. 0 90. 0 volume : 74. 08800000000001 A : 4. 2 0. 0 0. 0 B : 0. 0 4. 2 0. 0 C : 0. 0 0. 0 4. 2 pbc : True True True Periodic Site: Cs (0. 0, 0. 0, 0. 0) [0. 0, 0. 0, 0. 0] Periodic Site: Cl (2. 1, 2. 1, 2. 1) [0. 5, 0. 5, 0. 5] In above code sample, we first defined a cubic lattice using the cubic lattice parameter a. Then, we created a structure
with atoms in the crystal and their coordinates as features. A nice introduction to crystallographic coordinates can be found here. Once a structure is defined, it can be featurized using CGCNN Featurizer. Featurization of a crystal using CGCNNFeaturizer returns a Deep Chem Graph Data object which can be used for machine learning tasks. featurizer = dc. feat. CGCNNFeaturizer () features = featurizer. featurize ([ structure ]) features [ 0 ] Graph Data(node_features=[2, 92], edge_index=[2, 24], edge_features=[24, 41]) Material Composition Featurizers The above part discussed about using Deep Chem for featurizing crystal structures. Here, we will be seeing about featurizing material compositions. Deep Chem supports the following material composition featurizers: The Element Property Fingerprint can be used to find fingerprint of elements based on elemental stoichiometry. It can be used using a call to dc. featurizers. Element Property Fingerprint. [4] The Elem Net Featurizer returns a vector containing fractional compositions of each element in the compound. It can be used using a call to dc. feat. Elem Net Featurizer. [5] [4] Ward, L., Agrawal, A., Choudhary, A. et al. A general-purpose machine learning framework for predicting properties of inorganic materials. npj Comput Mater 2, 16028 (2016). https://doi. org/10. 1038/npjcompumats. 2016. 28 [5] Jha, D., Ward, L., Paul, A. et al. "Elem Net: Deep Learning the Chemistry of Materials From Only Elemental Composition", Sci Rep 8, 17593 (2018). https://doi. org/10. 1038/s41598-018-35934-y Example: Featurizing a compund In the below example, we featurize Ferric Oxide (Fe2O3) using Element Property Fingerprint featurizer . The featurizer returns the compounds elemental stoichoimetry properties as features. comp = core. Composition ( "Fe2O3" ) featurizer = dc. feat. Element Property Fingerprint () features = featurizer. featurize ([ comp ]) features [ 0 ] /usr/local/lib/python3. 10/dist-packages/pymatgen/core/periodic_table. py:186: User Warning: No data available for electrical_resistivity for O warnings. warn(f"No data available for {item} for {self. symbol}") /usr/local/lib/python3. 10/dist-packages/pymatgen/core/periodic_table. py:186: User Warning: No data available for bulk_modulus for O warnings. warn(f"No data available for {item} for {self. symbol}") /usr/local/lib/python3. 10/dist-packages/pymatgen/core/periodic_table. py:186: User Warning: No data available for coefficient_of_linear_thermal_expansion for O warnings. warn(f"No data available for {item} for {self. symbol}") array([1. 83000000e+00, 3. 44000000e+00, 1. 61000000e+00, 2. 79600000e+00, 1. 13844192e+00, 2. 00000000e+00, 4. 00000000e+00, 2. 00000000e+00, 2. 80000000e+00, 1. 41421356e+00, 8. 00000000e+00, 1. 60000000e+01, 8. 00000000e+00, 1. 28000000e+01, 5. 65685425e+00, 2. 00000000e+00, 3. 00000000e+00, 1. 00000000e+00, 2. 40000000e+00, 7. 07106781e-01, 1. 59994000e+01, 5. 58450000e+01, 3. 98456000e+01, 3. 19376400e+01, 2. 81750940e+01, 6. 00000000e-01, 1. 40000000e+00, 8. 00000000e-01, 9. 20000000e-01, 5. 65685425e-01, 6. 10000000e+01, 1. 01000000e+02, 4. 00000000e+01, 8. 50000000e+01, 2. 82842712e+01, 0. 00000000e+00, 0. 00000000e+00, 0. 00000000e+00, 0. 00000000e+00, 0. 00000000e+00, 3. 17500000e+02, 4. 91000000e+03, 4. 59250000e+03, 2. 15450000e+03, 3. 24738789e+03, 2. 65800000e-02, 8. 00000000e+01, 7. 99734200e+01, 3. 20159480e+01, 5. 65497476e+01, 5. 48000000e+01, 1. 81100000e+03, 1. 75620000e+03, 7. 57280000e+02, 1. 24182093e+03, 0. 00000000e+00, 0. 00000000e+00, 0. 
00000000e+00, 0. 00000000e+00, 0. 00000000e+00, 0. 00000000e+00, 0. 00000000e+00, 0. 00000000e+00, 0. 00000000e+00, 0. 00000000e+00]) Datasets Deep Chem has the following material properties dataset as part of Molecule Net suite of datasets. These datasets can be used for a variety of tasks in material science like predicting structure formation energy, metallicity of a compound etc. The Band Gap dataset contains 4604 experimentally measured band gaps for inorganic crystal structure compositions. The dataset can be loaded using dc. molnet. load_bandgap utility. The Perovskite dataset contains 18928 perovskite structures and their formation energies. It can be loaded using a call to dc. molnet. load_perovskite. The Formation Energy dataset contains 132752 calculated formation energies and inorganic crystal structures from the Materials Project database. It can be loaded using a call to dc. molnet. load_mp_formation_energy.
The Metallicity dataset contains 106113 inorganic crystal structures from the Materials Project database labeled as metals or nonmetals. It can be loaded using the dc. molnet. load_mp_metallicity utility. In the below example, we will demonstrate loading the perovskite dataset and using it to predict the formation energy of new crystals. Perovskite structures are structures adopted by many oxides. Ideally it is a cubic structure, but non-cubic variants also exist. Each datapoint in the perovskite dataset contains the lattice structure as a pymatgen. core. Structure object and the formation energy of the corresponding structure. It can be loaded for machine learning tasks by calling the dc. molnet. load_perovskite utility. The utility takes care of loading, featurizing and splitting the dataset for machine learning tasks. dataset_config = { "reload" : True, "featurizer" : dc. feat. CGCNNFeaturizer (), "transformers" : []} tasks, datasets, transformers = dc. molnet. load_perovskite ( ** dataset_config ) train_dataset, valid_dataset, test_dataset = datasets train_dataset. get_data_shape deepchem. data. datasets. Disk Dataset. get_data_shape def get_data_shape() -> Shape Gets array shape of datapoints in this dataset. Predicting Formation Energy Along with the datasets and featurizers, Deep Chem also provides implementations of various machine learning algorithms which can be used on the fly for material science applications. For predicting formation energy, we use CGCNNModel as described in the paper [1]. model = dc. models. CGCNNModel ( mode = 'regression', batch_size = 256, learning_rate = 0. 0008 ) losses = [] for _ in tqdm ( range ( 10 ), desc = "Training" ): loss = model. fit ( train_dataset, nb_epoch = 1 ) losses. append ( loss ) plt. plot ( losses ) Training: 100%|██████████| 10/10 [11:26<00:00, 68. 65s/it] [<matplotlib. lines. Line2D at 0x7967a9a126e0>] Once the model is fitted, we evaluate its performance using the mean absolute error metric, since this is a regression task. To select a metric, the dc. metrics. mean_absolute_error function can be used, and we evaluate the model by calling model. evaluate. metric = dc. metrics. Metric ( dc. metrics. mean_absolute_error ) print ( "Training set score:", model. evaluate ( train_dataset, [ metric ], transformers )) print ( "Test set score:", model. evaluate ( test_dataset, [ metric ], transformers )) Training set score: {'mean_absolute_error': 0. 1310973839991837} Test set score: {'mean_absolute_error': 0. 13470105945654667}
The original paper achieved a MAE of 0. 130 e V/atom on the same dataset (with a 60:20:20 split, instead of the 80:10:10 being used in this tutorial). Further Reading For further reading on getting started on using machine learning for material science, here are two great resources: Getting Started in Material Informatics A Collection of Open Source Material Informatics Resources Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Gitter The Deep Chem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
Using Reinforcement Learning to Play Pong This tutorial demonstrates using reinforcement learning to train an agent to play Pong. This task isn't directly related to chemistry, but video games make an excellent demonstration of reinforcement learning techniques. Colab This tutorial and the rest in this sequence can be done in Google Colab (although the visualization at the end doesn't work correctly on Colab, so you might prefer to run this tutorial locally). If you'd like to open this notebook in colab, you can use the following link. O p e n i n C o l a b O p e n i n C o l a b ! pip install --pre deepchem import deepchem deepchem. __version__ Requirement already satisfied: deepchem in c:\users\hp\deepchem_2 (2. 8. 1. dev20240501183346) Requirement already satisfied: joblib in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from deepchem) (1. 3. 2) Requirement already satisfied: numpy>=1. 21 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from deepchem) (1. 26. 4) Requirement already satisfied: pandas in c:\users\hp\anaconda3\envs\deep\lib\site-packages\pandas-2. 2. 1-py3. 10-w in-amd64. egg (from deepchem) (2. 2. 1) Requirement already satisfied: scikit-learn in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from deepchem) (1. 4. 1. post1) Requirement already satisfied: sympy in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from deepchem) (1. 12) Requirement already satisfied: scipy>=1. 10. 1 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from deepchem ) (1. 12. 0) Requirement already satisfied: rdkit in c:\users\hp\anaconda3\envs\deep\lib\site-packages\rdkit-2023. 9. 5-py3. 10-win-amd64. egg (from deepchem) (2023. 9. 5) Requirement already satisfied: python-dateutil>=2. 8. 2 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from pandas->deepchem) (2. 8. 2) Requirement already satisfied: pytz>=2020. 1 in c:\users\hp\anaconda3\envs\deep\lib\site-packages\pytz-2024. 1-py3. 10. egg (from pandas->deepchem) (2024. 1) Requirement already satisfied: tzdata>=2022. 7 in c:\users\hp\anaconda3\envs\deep\lib\site-packages\tzdata-2024. 1-py3. 10. egg (from pandas->deepchem) (2024. 1) Requirement already satisfied: Pillow in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from rdkit->deepchem ) (10. 2. 0) Requirement already satisfied: threadpoolctl>=2. 0. 0 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from s cikit-learn->deepchem) (3. 3. 0) Requirement already satisfied: mpmath>=0. 19 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from sympy->de epchem) (1. 3. 0) Requirement already satisfied: six>=1. 5 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from python-dateut il>=2. 8. 2->pandas->deepchem) (1. 16. 0) No normalization for SPS. Feature removed! No normalization for Avg Ipc. Feature removed! WARNING:tensorflow:From c:\Users\HP\anaconda3\envs\deep\lib\site-packages\keras\src\losses. py:2976: The name tf. losses. sparse_softmax_cross_entropy is deprecated. Please use tf. compat. v1. losses. sparse_softmax_cross_entropy i nstead. WARNING:tensorflow:From c:\Users\HP\anaconda3\envs\deep\lib\site-packages\tensorflow\python\util\deprecation. py: 588: calling function (from tensorflow. python. eager. polymorphic_function. polymorphic_function) with experimental _relax_shapes is deprecated and will be removed in a future version. Instructions for updating: experimental_relax_shapes is deprecated, use reduce_retracing instead Skipped loading modules with pytorch-geometric dependency, missing a dependency. 
No module named 'dgl' Skipped loading modules with transformers dependency. No module named 'transformers' cannot import name 'Hugging Face Model' from 'deepchem. models. torch_models' (c:\users\hp\deepchem_2\deepchem\model s\torch_models\__init__. py) Skipped loading modules with pytorch-lightning dependency, missing a dependency. No module named 'lightning' Skipped loading some Jax models, missing a dependency. No module named 'jax' '2. 8. 1. dev' ! pip install "gym[atari,accept-rom-license]"
Requirement already satisfied: gym[accept-rom-license,atari] in c:\users\hp\anaconda3\envs\deep\lib\site-package s (0. 26. 2) Requirement already satisfied: numpy>=1. 18. 0 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from gym[acce pt-rom-license,atari]) (1. 26. 4) Requirement already satisfied: cloudpickle>=1. 2. 0 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from gym [accept-rom-license,atari]) (3. 0. 0) Requirement already satisfied: gym-notices>=0. 0. 4 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from gym [accept-rom-license,atari]) (0. 0. 8) Requirement already satisfied: ale-py~=0. 8. 0 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from gym[acce pt-rom-license,atari]) (0. 8. 1) Requirement already satisfied: autorom~=0. 4. 2 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from autorom [accept-rom-license]~=0. 4. 2; extra == "accept-rom-license"->gym[accept-rom-license,atari]) (0. 4. 2) Requirement already satisfied: importlib-resources in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from al e-py~=0. 8. 0->gym[accept-rom-license,atari]) (6. 4. 0) Requirement already satisfied: typing-extensions in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from ale-py~=0. 8. 0->gym[accept-rom-license,atari]) (4. 9. 0) Requirement already satisfied: click in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from autorom~=0. 4. 2-> autorom[accept-rom-license]~=0. 4. 2; extra == "accept-rom-license"->gym[accept-rom-license,atari]) (8. 1. 7) Requirement already satisfied: requests in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from autorom~=0. 4. 2->autorom[accept-rom-license]~=0. 4. 2; extra == "accept-rom-license"->gym[accept-rom-license,atari]) (2. 31. 0) Requirement already satisfied: tqdm in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from autorom~=0. 4. 2->a utorom[accept-rom-license]~=0. 4. 2; extra == "accept-rom-license"->gym[accept-rom-license,atari]) (4. 66. 2) Requirement already satisfied: Auto ROM. accept-rom-license in c:\users\hp\anaconda3\envs\deep\lib\site-packages ( from autorom[accept-rom-license]~=0. 4. 2; extra == "accept-rom-license"->gym[accept-rom-license,atari]) (0. 6. 1) Requirement already satisfied: colorama in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from click->autoro m~=0. 4. 2->autorom[accept-rom-license]~=0. 4. 2; extra == "accept-rom-license"->gym[accept-rom-license,atari]) (0. 4. 6) Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (fr om requests->autorom~=0. 4. 2->autorom[accept-rom-license]~=0. 4. 2; extra == "accept-rom-license"->gym[accept-rom-l icense,atari]) (3. 3. 2) Requirement already satisfied: idna<4,>=2. 5 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from requests->autorom~=0. 4. 2->autorom[accept-rom-license]~=0. 4. 2; extra == "accept-rom-license"->gym[accept-rom-license,atari ]) (3. 6) Requirement already satisfied: urllib3<3,>=1. 21. 1 in c:\users\hp\anaconda3\envs\deep\lib\site-packages (from req uests->autorom~=0. 4. 2->autorom[accept-rom-license]~=0. 4. 2; extra == "accept-rom-license"->gym[accept-rom-license,atari]) (2. 2. 1) Requirement already satisfied: certifi>=2017. 4. 17 in c:\users\hp\appdata\roaming\python\python310\site-packages (from requests->autorom~=0. 4. 2->autorom[accept-rom-license]~=0. 4. 2; extra == "accept-rom-license"->gym[accept-ro m-license,atari]) (2022. 5. 18. 1) Reinforcement Learning Reinforcement learning involves an agent that interacts with an environment. 
In this case, the environment is the video game and the agent is the player. By trial and error, the agent learns a policy that it follows to perform some task (winning the game). As it plays, it receives rewards that give it feedback on how well it is doing. In this case, it receives a positive reward every time it scores a point and a negative reward every time the other player scores a point. The first step is to create an Environment that implements this task. Fortunately, Open AI Gym already provides an implementation of Pong (and many other tasks appropriate for reinforcement learning). Deep Chem's Gym Environment class provides an easy way to use environments from Open AI Gym. We could just use it directly, but in this case we subclass it and preprocess the screen image a little bit to make learning easier. import deepchem as dc import numpy as np class Pong Env ( dc. rl. Gym Environment ): def __init__ ( self ): super ( Pong Env, self ). __init__ ( 'Pong-v4' ) self. _state_shape = ( 80, 80 ) @property def state ( self ): # Crop everything outside the play area, reduce the image size, # and convert it to black and white. state_array = self. _state cropped = state_array [ 34 : 194, :, :] reduced = cropped [ 0 :-1 : 2, 0 :-1 : 2 ] grayscale = np. sum ( reduced, axis = 2 ) bw = np. zeros ( grayscale. shape, dtype = np. float32 ) bw [ grayscale != 233 ] = 1 return bw def __deepcopy__ ( self, memo ): return Pong Env () env = Pong Env () Next we create a model to implement our policy. This model receives the current state of the environment (the pixels
being displayed on the screen at this moment) as its input. Given that input, it decides what action to perform. In Pong there are three possible actions at any moment: move the paddle up, move it down, or leave it where it is. The policy model produces a probability distribution over these actions. It also produces a value output, which is interpreted as an estimate of how good the current state is. This turns out to be important for efficient learning. The model begins with two convolutional layers to process the image. That is followed by a dense (fully connected) layer to provide plenty of capacity for game logic. We also add a small Gated Recurrent Unit (GRU). That gives the network a little bit of memory, so it can keep track of which way the ball is moving. Just from the screen image, you cannot tell whether the ball is moving to the left or to the right, so having memory is important. We concatenate the dense and GRU outputs together, and use them as inputs to two final layers that serve as the network's outputs. One computes the action probabilities, and the other computes an estimate of the state value function. We also provide an input for the initial state of the GRU, and return its final state at the end. This is required by the learning algorithm. import torch import torch. nn as nn import torch. nn. functional as F class Pong Policy ( dc. rl. Policy ): def __init__ ( self ): super ( Pong Policy, self ). __init__ ([ 'action_prob', 'value', 'rnn_state' ], [ np. zeros ( 16, dtype = np. float32 )]) def create_model ( self, ** kwargs ): class Test Model ( nn. Module ): def __init__ ( self ): super ( Test Model, self ). __init__ () # Convolutional layers self. conv1 = nn. Conv2d ( 1, 16, kernel_size = 8, stride = 4 ) self. conv2 = nn. Conv2d ( 16, 32, kernel_size = 4, stride = 2 ) self. fc1 = nn. Linear ( 2048, 256 ) self. gru = nn. GRU ( 256, 16, batch_first = True ) self. action_prob = nn. Linear ( 272, env. n_actions ) self. value = nn. Linear ( 272, 1 ) def forward ( self, inputs ): state = ( torch. from_numpy (( inputs [ 0 ]))) rnn_state = ( torch. from_numpy ( inputs [ 1 ])) reshaped = state. view (-1, 1, 80, 80 ) conv1 = F. relu ( self. conv1 ( reshaped )) conv2 = F. relu ( self. conv2 ( conv1 )) conv2 = conv2. view ( conv2. size ( 0 ), -1 ) x = F. relu ( self. fc1 ( conv2 )) reshaped_x = x. view ( 1, -1, 256 ) #x = torch. flatten(x, 1) gru_out, rnn_final_state = self. gru ( reshaped_x, rnn_state. unsqueeze ( 0 )) rnn_final_state = rnn_final_state. view (-1, 16 ) gru_out = gru_out. view (-1, 16 ) concat = torch. cat (( x, gru_out ), dim = 1 ) #concat = concat. view(-1, 272) action_prob = F. softmax ( self. action_prob ( concat ), dim =-1 ) value = self. value ( concat ) return action_prob, value, rnn_final_state return Test Model () policy = Pong Policy () We will optimize the policy using the Advantage Actor Critic (A2C) algorithm. There are lots of hyperparameters we could specify at this point, but the default values for most of them work well on this problem. The only one we need to customize is the learning rate. import torch. nn. functional as F from deepchem. rl. torch_rl. torch_a2c import A2C from deepchem. models. optimizers import Adam a2c = A2C ( env, policy, model_dir = 'model', optimizer = Adam ( learning_rate = 0. 0002 )) Optimize for as long as you have patience to. By 1 million steps you should see clear signs of learning. Around 3 million steps it should start to occasionally beat the game's built in AI. 
By 7 million steps it should be winning almost every time. Running on my laptop, training takes about 20 minutes for every million steps. # Change this to train as many steps as you have patience for. a2c. fit ( 1000 ) c:\Users\HP\anaconda3\envs\deep\lib\site-packages\gym\utils\passive_env_checker. py:233: Deprecation Warning: `np. bool8` is a deprecated alias for `np. bool_`. (Deprecated Num Py 1. 24) if not isinstance(terminated, (bool, np. bool8)):
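If you want a quantitative check before watching it play, one option is to run a few episodes without rendering and total up the rewards. This is only a sketch: it assumes env.step() returns the per-step reward (as DeepChem's Environment API does) and reuses the env and a2c objects defined above.

# Sketch: estimate the reward per episode without rendering.
n_episodes = 3
for episode in range(n_episodes):
    env.reset()
    total_reward = 0.0
    while not env.terminated:
        # select_action samples an action from the current policy.
        total_reward += env.step(a2c.select_action(env.state))
    print('Episode %d reward: %.1f' % (episode, total_reward))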
Let's watch it play and see how it does! # This code doesn't work well on Colab env. reset () while not env. terminated : env. env. render () env. step ( a2c. select_action ( env. state )) c:\Users\HP\anaconda3\envs\deep\lib\site-packages\gym\utils\passive_env_checker. py:289: User Warning: WARN: No re nder fps was declared in the environment (env. metadata['render_fps'] is None or not defined), rendering may occu r at inconsistent fps. logger. warn( Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Gitter The Deep Chem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
Uncertainty in Deep Learning A common criticism of deep learning models is that they tend to act as black boxes. A model produces outputs, but doesn't given enough context to interpret them properly. How reliable are the model's predictions? Are some predictions more reliable than others? If a model predicts a value of 5. 372 for some quantity, should you assume the true value is between 5. 371 and 5. 373? Or that it's between 2 and 8? In some fields this situation might be good enough, but not in science. For every value predicted by a model, we also want an estimate of the uncertainty in that value so we can know what conclusions to draw based on it. Deep Chem makes it very easy to estimate the uncertainty of predicted outputs (at least for the models that support it— not all of them do). Let's start by seeing an example of how to generate uncertainty estimates. We load a dataset, create a model, train it on the training set, predict the output on the test set, and then derive some uncertainty estimates. Colab This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. O p e n i n C o l a b O p e n i n C o l a b ! pip install --pre deepchem import deepchem deepchem. __version__ We'll use the Delaney dataset from the Molecule Net suite to run our experiments in this tutorial. Let's load up our dataset for our experiments, and then make some uncertainty predictions. import deepchem as dc import numpy as np import matplotlib. pyplot as plot tasks, datasets, transformers = dc. molnet. load_delaney () train_dataset, valid_dataset, test_dataset = datasets model = dc. models. Multitask Regressor ( len ( tasks ), 1024, uncertainty = True ) model. fit ( train_dataset, nb_epoch = 20 ) y_pred, y_std = model. predict_uncertainty ( test_dataset ) All of this looks exactly like any other example, with just two differences. First, we add the option uncertainty=True when creating the model. This instructs it to add features to the model that are needed for estimating uncertainty. Second, we call predict_uncertainty() instead of predict() to produce the output. y_pred is the predicted outputs. y_std is another array of the same shape, where each element is an estimate of the uncertainty (standard deviation) of the corresponding element in y_pred. And that's all there is to it! Simple, right? Of course, it isn't really that simple at all. Deep Chem is doing a lot of work to come up with those uncertainties. So now let's pull back the curtain and see what is really happening. (For the full mathematical details of calculating uncertainty, see https://arxiv. org/abs/1703. 04977 ) To begin with, what does "uncertainty" mean? Intuitively, it is a measure of how much we can trust the predictions. More formally, we expect that the true value of whatever we are trying to predict should usually be within a few standard deviations of the predicted value. But uncertainty comes from many sources, ranging from noisy training data to bad modelling choices, and different sources behave in different ways. It turns out there are two fundamental types of uncertainty we need to take into account. Aleatoric Uncertainty Consider the following graph. It shows the best fit linear regression to a set of ten data points. # Generate some fake data and plot a regression line. x = np. linspace ( 0, 5, 10 ) y = 0. 15 * x + np. random. random ( 10 ) plot. scatter ( x, y ) fit = np. polyfit ( x, y, 1 ) line_x = np. 
linspace (-1, 6, 2 ) plot. plot ( line_x, np. poly1d ( fit )( line_x )) plot. show ()
The line clearly does not do a great job of fitting the data. There are many possible reasons for this. Perhaps the measuring device used to capture the data was not very accurate. Perhaps y depends on some other factor in addition to x, and if we knew the value of that factor for each data point we could predict y more accurately. Maybe the relationship between x and y simply isn't linear, and we need a more complicated model to capture it. Regardless of the cause, the model clearly does a poor job of predicting the training data, and we need to keep that in mind. We cannot expect it to be any more accurate on test data than on training data. This is known as aleatoric uncertainty. How can we estimate the size of this uncertainty? By training a model to do it, of course! At the same time it is learning to predict the outputs, it is also learning to predict how accurately each output matches the training data. For every output of the model, we add a second output that produces the corresponding uncertainty. Then we modify the loss function to make it learn both outputs at the same time. Epistemic Uncertainty Now consider these three curves. They are fit to the same data points as before, but this time we are using 10th degree polynomials. plot. figure ( figsize = ( 12, 3 )) line_x = np. linspace ( 0, 5, 50 ) for i in range ( 3 ): plot. subplot ( 1, 3, i + 1 ) plot. scatter ( x, y ) fit = np. polyfit ( np. concatenate ([ x, [ 3 ]]), np. concatenate ([ y, [ i ]]), 10 ) plot. plot ( line_x, np. poly1d ( fit )( line_x )) plot. show () Each of them perfectly interpolates the data points, yet they clearly are different models. (In fact, there are infinitely many 10th degree polynomials that exactly interpolate any ten data points. ) They make identical predictions for the data we fit them to, but for any other value of x they produce different predictions. This is called epistemic uncertainty. It means the data does not fully constrain the model. Given the training data, there are many different models we could have found, and those models make different predictions. The ideal way to measure epistemic uncertainty is to train many different models, each time using a different random seed and possibly varying hyperparameters. Then use all of them for each input and see how much the predictions vary. This is very expensive to do, since it involves repeating the whole training process many times. Fortunately, we can approximate the same effect in a less expensive way: by using dropout. Recall that when you train a model with dropout, you are effectively training a huge ensemble of different models all at once. Each training sample is evaluated with a different dropout mask, corresponding to a different random subset of the connections in the full model. Usually we only perform dropout during training and use a single averaged mask for prediction. But instead, let's use dropout for prediction too. We can compute the output for lots of different dropout masks, then see how much the predictions vary. This turns out to give a reasonable estimate of the epistemic uncertainty in the outputs. Uncertain Uncertainty? Now we can combine the two types of uncertainty to compute an overall estimate of the error in each output:
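Assuming the two sources of uncertainty are independent (the usual assumption), they combine in quadrature:

$$\sigma_{\text{total}} = \sqrt{\sigma_{\text{aleatoric}}^{2} + \sigma_{\text{epistemic}}^{2}}$$

so the reported standard deviation grows if either contribution grows.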
This is the value Deep Chem reports. But how much can you trust it? Remember how I started this tutorial: deep learning models should not be used as black boxes. We want to know how reliable the outputs are. Adding uncertainty estimates does not completely eliminate the problem; it just adds a layer of indirection. Now we have estimates of how reliable the outputs are, but no guarantees that those estimates are themselves reliable. Let's go back to the example we started with. We trained a model on the SAMPL training set, then generated predictions and uncertainties for the test set. Since we know the correct outputs for all the test samples, we can evaluate how well we did. Here is a plot of the absolute error in the predicted output versus the predicted uncertainty. abs_error = np. abs ( y_pred. flatten ()-test_dataset. y. flatten ()) plot. scatter ( y_std. flatten (), abs_error ) plot. xlabel ( 'Standard Deviation' ) plot. ylabel ( 'Absolute Error' ) plot. show () The first thing we notice is that the axes have similar ranges. The model clearly has learned the overall magnitude of errors in the predictions. There also is clearly a correlation between the axes. Values with larger uncertainties tend on average to have larger errors. (Strictly speaking, we expect the absolute error to be less than the predicted uncertainty. Even a very uncertain number could still happen to be close to the correct value by chance. If the model is working well, there should be more points below the diagonal than above it. ) Now let's see how well the values satisfy the expected distribution. If the standard deviations are correct, and if the errors are normally distributed (which is certainly not guaranteed to be true!), we expect 95% of the values to be within two standard deviations, and 99% to be within three standard deviations. Here is a histogram of errors as measured in standard deviations. plot. hist ( abs_error / y_std. flatten (), 20 ) plot. show () All the values are in the expected range, and the distribution looks roughly Gaussian although not exactly. Perhaps this indicates the errors are not normally distributed, but it may also reflect inaccuracies in the uncertainties. This is an important reminder: the uncertainties are just estimates, not rigorous measurements. Most of them are pretty good, but you should not put too much confidence in any single value. Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways:
Star Deep Chem on Git Hub Starring Deep Chem on Git Hub helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Gitter The Deep Chem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
Open in Colab ! pip install --pre deepchem [ jax ] Collecting deepchem[jax] Downloading deepchem-2. 6. 0. dev20210924223259-py3-none-any. whl (609 k B) |████████████████████████████████| 609 k B 11. 0 MB/s Requirement already satisfied: scipy in /usr/local/lib/python3. 7/dist-packages (from deepchem[jax]) (1. 4. 1) Requirement already satisfied: scikit-learn in /usr/local/lib/python3. 7/dist-packages (from deepchem[jax]) (0. 22. 2.
post1) Requirement already satisfied: numpy in /usr/local/lib/python3. 7/dist-packages (from deepchem[jax]) (1. 19. 5) Requirement already satisfied: pandas in /usr/local/lib/python3. 7/dist-packages (from deepchem[jax]) (1. 1. 5) Requirement already satisfied: joblib in /usr/local/lib/python3. 7/dist-packages (from deepchem[jax]) (1. 0. 1) Collecting optax Downloading optax-0. 0. 9-py3-none-any. whl (118 k B) |████████████████████████████████| 118 k B 52. 6 MB/s Requirement already satisfied: jaxlib in /usr/local/lib/python3. 7/dist-packages (from deepchem[jax]) (0. 1. 71+cud a111) Collecting dm-haiku Downloading dm_haiku-0. 0. 5. dev0-py3-none-any. whl (284 k B) |████████████████████████████████| 284 k B 30. 1 MB/s
Requirement already satisfied: jax in /usr/local/lib/python3. 7/dist-packages (from deepchem[jax]) (0. 2. 21) Requirement already satisfied: tabulate>=0. 8. 9 in /usr/local/lib/python3. 7/dist-packages (from dm-haiku->deepche m[jax]) (0. 8. 9) Requirement already satisfied: typing-extensions in /usr/local/lib/python3. 7/dist-packages (from dm-haiku->deepc hem[jax]) (3. 7. 4. 3) Requirement already satisfied: absl-py>=0. 7. 1 in /usr/local/lib/python3. 7/dist-packages (from dm-haiku->deepchem [jax]) (0. 12. 0) Requirement already satisfied: six in /usr/local/lib/python3. 7/dist-packages (from absl-py>=0. 7. 1->dm-haiku->dee pchem[jax]) (1. 15. 0) Requirement already satisfied: opt-einsum in /usr/local/lib/python3. 7/dist-packages (from jax->deepchem[jax]) (3. 3. 0) Requirement already satisfied: flatbuffers<3. 0,>=1. 12 in /usr/local/lib/python3. 7/dist-packages (from jaxlib->de epchem[jax]) (1. 12) Collecting chex>=0. 0. 4 Downloading chex-0. 0. 8-py3-none-any. whl (57 k B) |████████████████████████████████| 57 k B 6. 0 MB/s Requirement already satisfied: dm-tree>=0. 1. 5 in /usr/local/lib/python3. 7/dist-packages (from chex>=0. 0. 4->optax->deepchem[jax]) (0. 1. 6) Requirement already satisfied: toolz>=0. 9. 0 in /usr/local/lib/python3. 7/dist-packages (from chex>=0. 0. 4->optax-> deepchem[jax]) (0. 11. 1) Requirement already satisfied: python-dateutil>=2. 7. 3 in /usr/local/lib/python3. 7/dist-packages (from pandas->de epchem[jax]) (2. 8. 2) Requirement already satisfied: pytz>=2017. 2 in /usr/local/lib/python3. 7/dist-packages (from pandas->deepchem[jax ]) (2018. 9) Installing collected packages: chex, optax, dm-haiku, deepchem Successfully installed chex-0. 0. 8 deepchem-2. 6. 0. dev20210924223259 dm-haiku-0. 0. 5. dev0 optax-0. 0. 9 import numpy as np import functools try : import jax import jax. numpy as jnp import haiku as hk import optax from deepchem. models import PINNModel, Jax Model from deepchem. data import Numpy Dataset from deepchem. models. optimizers import Adam from jax import jacrev has_haiku_and_optax = True except : has_haiku_and_optax = False Given Physical Data We have a 10 random points between and its corresponding value f(x) We know that data follows an underlying physical rule that import matplotlib. pyplot as plt give_size = 10 in_given = np. linspace (-2 * np. pi, 2 * np. pi, give_size ) out_given = np. cos ( in_given ) + 0. 1 * np. random. normal ( loc = 0. 0, scale = 1, size = give_size ) # red for numpy. sin() plt. figure ( figsize = ( 13, 7 )) plt. scatter ( in_given, out_given, color = 'green', marker = "o" ) plt. xlabel ( "x --> ", fontsize = 18 ) plt. ylabel ( "f (x) -->", fontsize = 18 ) plt. legend ([ "Supervised Data" ], prop = { 'size' : 16 }, loc = "lower right" ) plt. title ( "Data of our physical system", fontsize = 18 ) Text(0. 5, 1. 0, 'Data of our physical system')
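To state the setup explicitly (reconstructed from the code above and from the residual used later in this tutorial, so treat the exact form as an assumption): the ten supervised points lie in $[-2\pi, 2\pi]$, the observations are $f(x_i) = \cos(x_i)$ plus a small Gaussian noise term, and the underlying physical rule is the differential equation

$$\frac{df}{dx} = -\sin(x).$$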
From simple integration, we can easily solve the differential equation, and the solution is f(x) = cos(x). import matplotlib. pyplot as plt test = np. expand_dims ( np. linspace (-2. 5 * np. pi, 2. 5 * np. pi, 100 ), 1 ) out_array = np. cos ( test ) plt. figure ( figsize = ( 13, 7 )) plt. plot ( test, out_array, color = 'blue', alpha = 0. 5 ) plt. scatter ( in_given, out_given, color = 'green', marker = "o" ) plt. xlabel ( "x --> ", fontsize = 18 ) plt. ylabel ( "f (x) -->", fontsize = 18 ) plt. legend ([ "Actual data" , "Supervised Data" ], prop = { 'size' : 16 }, loc = "lower right" ) plt. title ( "Data of our physical system", fontsize = 18 ) Text(0. 5, 1. 0, 'Data of our physical system')
Building a Simple Neural Network Model -We define a simple Feed-forward Neural Network with 2 hidden layers of size 256 & 128 neurons. # defining the Haiku model # A neural network is defined as a function of its weights & operations. # NN(x) = F(x, W) # forward function defines the F which describes the mathematical operations like Matrix & dot products, Signmoid functions, etc # W is the init_params def f ( x ): net = hk. nets. MLP ( output_sizes = [ 256, 128, 1 ], activation = jax. nn. softplus ) val = net ( x ) return val init_params, forward_fn = hk. transform ( f ) rng = jax. random. PRNGKey ( 500 ) params = init_params ( rng, np. random. rand ( 1000, 1 )) /usr/local/lib/python3. 7/dist-packages/jax/_src/numpy/lax_numpy. py:3634: User Warning: Explicitly requested dtype float64 requested in zeros is not available, and will be truncated to dtype float32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github. com /google/jax#current-gotchas for more. lax. _check_user_dtype_supported(dtype, "zeros") Fitting a simple Neural Network solution to the Physical Data train_dataset = Numpy Dataset ( np. expand_dims ( in_given, axis = 1 ), np. expand_dims ( out_given, axis = 1 )) rms_loss = lambda pred, tar, w : jnp. mean ( optax. l2_loss ( pred, tar )) # Jax Model Working nn_model = Jax Model ( forward_fn, params, rms_loss, batch_size = 100, learning_rate = 0. 001, log_frequency = 2 ) nn_model. fit ( train_dataset, nb_epochs = 10000, deterministic = True ) /usr/local/lib/python3. 7/dist-packages/deepchem/models/jax_models/jax_model. py:160: User Warning: Jax Model is sti ll in active development and all features may not yet be implemented 'Jax Model is still in active development and all features may not yet be implemented' 2. 1729056921826473e-11 dataset_test = Numpy Dataset ( test ) nn_output = nn_model. predict ( dataset_test ) plt. figure ( figsize = ( 13, 7 )) plt. plot ( test, out_array, color = 'blue', alpha = 0. 5 ) plt. scatter ( in_given, out_given, color = 'green', marker = "o" ) plt. plot ( test, nn_output, color = 'red', marker = "o", alpha = 0. 7 ) plt. xlabel ( "x --> ", fontsize = 18 ) plt. ylabel ( "f (x) -->", fontsize = 18 ) plt. legend ([ "Actual data", "Vanilla NN", "Supervised Data" ], prop = { 'size' : 16 }, loc = "lower right" ) plt. title ( "Data of our physical system", fontsize = 18 ) Text(0. 5, 1. 0, 'Data of our physical system')
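To put a number on what the plot shows, here is a small check (a sketch; it assumes nn_output has the same 100-point shape as out_array, which np.squeeze handles either way) comparing the vanilla network to the analytic solution inside and outside the range covered by the supervised points:

import numpy as np

# Mask for test points that fall inside the supervised range [-2*pi, 2*pi].
inside = (test[:, 0] >= in_given.min()) & (test[:, 0] <= in_given.max())
abs_err = np.abs(np.squeeze(nn_output) - np.squeeze(out_array))
print("Mean abs error inside the data range: ", abs_err[inside].mean())
print("Mean abs error outside the data range:", abs_err[~inside].mean())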
Learning to fit the Data using the underlying Differential equation Let's ensure that the final output of the neural network satisfies the differential equation as a loss function - def create_eval_fn ( forward_fn, params ): """ Calls the function to evaluate the model """ @jax. jit def eval_model ( x, rng = None ): bu = forward_fn ( params, rng, x ) return jnp. squeeze ( bu ) return eval_model def gradient_fn ( forward_fn, loss_outputs, initial_data ): """ This function calls the gradient function, to implement the backpropagation """ boundary_data = initial_data [ 'X0' ] boundary_target = initial_data [ 'u0' ] @jax. jit def model_loss ( params, target, weights, rng, x_train ): @functools. partial ( jax. vmap, in_axes = ( None, 0 )) def periodic_loss ( params, x ): """ differential equation => grad(f(x)) = - sin(x) minimize f(x) := grad(f(x)) + sin(x) """ x = jnp. expand_dims ( x, 0 )
u_x = jacrev ( forward_fn, argnums = ( 2 ))( params, rng, x ) return u_x + jnp. sin ( x ) u_pred = forward_fn ( params, rng, boundary_data ) loss_u = jnp. mean (( u_pred - boundary_target ) ** 2 ) f_pred = periodic_loss ( params, x_train ) loss_f = jnp. mean (( f_pred ** 2 )) return loss_u + loss_f return model_loss initial_data = { 'X0' : jnp. expand_dims ( in_given, 1 ), 'u0' : jnp. expand_dims ( out_given, 1 ) } opt = Adam ( learning_rate = 1e-3 ) pinn_model = PINNModel ( forward_fn = forward_fn, params = params, initial_data = initial_data, batch_size = 1000, optimizer = opt, grad_fn = gradient_fn, eval_fn = create_eval_fn, deterministic = True, log_frequency = 1000 ) # defining our training data. We feed 100 points between [-2. 5pi, 2. 5pi] without the labels, # which will be used as the differential loss(regulariser) X_f = np. expand_dims ( np. linspace (-3 * np. pi, 3 * np. pi, 1000 ), 1 ) dataset = Numpy Dataset ( X_f ) pinn_model. fit ( dataset, nb_epochs = 3000 ) /usr/local/lib/python3. 7/dist-packages/deepchem/models/jax_models/pinns_model. py:157: User Warning: Pinn Model is still in active development and we could change the design of the API in the future. 'Pinn Model is still in active development and we could change the design of the API in the future. ' /usr/local/lib/python3. 7/dist-packages/deepchem/models/jax_models/jax_model. py:160: User Warning: Jax Model is sti ll in active development and all features may not yet be implemented 'Jax Model is still in active development and all features may not yet be implemented' 0. 026332732232287527 import matplotlib. pyplot as plt pinn_output = pinn_model. predict ( dataset_test ) plt. figure ( figsize = ( 13, 7 )) plt. plot ( test, out_array, color = 'blue', alpha = 0. 5 ) plt. scatter ( in_given, out_given, color = 'green', marker = "o" ) # plt. plot(test, nn_output, color = 'red', marker = "x", alpha = 0. 3) plt. scatter ( test, pinn_output, color = 'red', marker = "o", alpha = 0. 7 ) plt. xlabel ( "x --> ", fontsize = 18 ) plt. ylabel ( "f (x) -->", fontsize = 18 ) plt. legend ([ "Actual data" , "Supervised Data", "PINN" ], prop = { 'size' : 16 }, loc = "lower right" ) plt. title ( "Data of our physical system", fontsize = 18 ) Text(0. 5, 1. 0, 'Data of our physical system')
Comparing the results between PINN & Vanilla NN model plt. figure ( figsize = ( 13, 7 )) # plt. plot(test, out_array, color = 'blue', alpha = 0. 5) # plt. scatter(in_given, out_given, color = 'green', marker = "o") plt. scatter ( test, nn_output, color = 'blue', marker = "x", alpha = 0. 3 ) plt. scatter ( test, pinn_output, color = 'red', marker = "o", alpha = 0. 7 ) plt. xlabel ( "x --> ", fontsize = 18 ) plt. ylabel ( "f (x) -->", fontsize = 18 ) plt. legend ([ "Vanilla NN", "PINN" ], prop = { 'size' : 16 }, loc = "lower right" ) plt. title ( "Data of our physical system", fontsize = 18 ) Text(0. 5, 1. 0, 'Data of our physical system')
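The visual comparison can also be summarized numerically; this is a sketch that reuses the arrays computed above and assumes both models were evaluated on the same 100 test points:

import numpy as np

true_vals = np.squeeze(out_array)
nn_err = np.abs(np.squeeze(nn_output) - true_vals).mean()
pinn_err = np.abs(np.squeeze(pinn_output) - true_vals).mean()
print("Vanilla NN mean abs error:", nn_err)
print("PINN mean abs error:      ", pinn_err)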
About Neural ODE : Using Torchdiffeq with Deepchem
Author : Anshuman Mishra : Linkedin
Open in Colab
Before getting our hands dirty with code, let us first understand a little bit about what Neural ODEs are.
Neural ODEs and torchdiffeq
Neural ODE stands for "Neural Ordinary Differential Equation". You heard right. Let me guess. Your first impression of the word is: "Has it something to do with differential equations that we studied in school?" Spot on! Let's see the formal definition as stated by the original paper :
Neural ODEs are a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a blackbox differential equation solver. These are continuous-depth models that have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed.
In simple words, perceive Neural ODEs as yet another type of layer, like Linear, Conv2D, MHA...
In this tutorial we will be using torchdiffeq. This library provides ordinary differential equation (ODE) solvers implemented in the Py Torch framework. The library provides a clean API of ODE solvers for usage in deep learning applications. As the solvers are implemented in Py Torch, algorithms in this repository are fully supported to run on the GPU.
What will you learn after completing this tutorial ? 1. How to implement a Neural ODE in a Neural Network ? 2. Using torchdiffeq with deepchem.
Installing Libraries
! pip install torchdiffeq ! pip install --pre deepchem
Collecting torchdiffeq Downloading torchdiffeq-0. 2. 2-py3-none-any. whl (31 k B) Requirement already satisfied: torch>=1. 3. 0 in /usr/local/lib/python3. 7/dist-packages (from torchdiffeq) (1. 10. 0 +cu111) Requirement already satisfied: scipy>=1. 4. 0 in /usr/local/lib/python3. 7/dist-packages (from torchdiffeq) (1. 4. 1) Requirement already satisfied: numpy>=1. 13. 3 in /usr/local/lib/python3. 7/dist-packages (from scipy>=1. 4. 0->torch diffeq) (1. 21. 5) Requirement already satisfied: typing-extensions in /usr/local/lib/python3. 7/dist-packages (from torch>=1. 3. 0->t orchdiffeq) (3. 10. 0. 2) Installing collected packages: torchdiffeq Successfully installed torchdiffeq-0. 2. 2 Collecting deepchem Downloading deepchem-2. 6. 1-py3-none-any. whl (608 k B) |████████████████████████████████| 608 k B 8. 9 MB/s Requirement already satisfied: scipy in /usr/local/lib/python3. 7/dist-packages (from deepchem) (1. 4. 1) Requirement already satisfied: scikit-learn in /usr/local/lib/python3. 7/dist-packages (from deepchem) (1. 0. 2) Requirement already satisfied: pandas in /usr/local/lib/python3. 7/dist-packages (from deepchem) (1. 3. 5) Collecting rdkit-pypi Downloading rdkit_pypi-2021. 9. 4-cp37-cp37m-manylinux_2_17_x86_64. manylinux2014_x86_64. whl (20. 6 MB) |████████████████████████████████| 20. 6 MB 8. 2 MB/s Requirement already satisfied: joblib in /usr/local/lib/python3. 7/dist-packages (from deepchem) (1. 1. 0) Requirement already satisfied: numpy>=1. 21 in /usr/local/lib/python3. 7/dist-packages (from deepchem) (1. 21. 5) Requirement already satisfied: python-dateutil>=2. 7. 3 in /usr/local/lib/python3. 7/dist-packages (from pandas->de epchem) (2. 8. 2) Requirement already satisfied: pytz>=2017. 3 in /usr/local/lib/python3. 7/dist-packages (from pandas->deepchem) (2 018. 9) Requirement already satisfied: six>=1. 5 in /usr/local/lib/python3. 7/dist-packages (from python-dateutil>=2. 7. 3-> pandas->deepchem) (1. 15. 0) Requirement already satisfied: Pillow in /usr/local/lib/python3. 7/dist-packages (from rdkit-pypi->deepchem) (7. 1. 2) Requirement already satisfied: threadpoolctl>=2. 0. 0 in /usr/local/lib/python3. 7/dist-packages (from scikit-learn->deepchem) (3. 1. 0) Installing collected packages: rdkit-pypi, deepchem Successfully installed deepchem-2. 6. 1 rdkit-pypi-2021. 9. 4 Import Libraries import torch import torch. nn as nn from torchdiffeq import odeint import math import numpy as np import deepchem as dc import matplotlib. pyplot as plt Before diving into the core of this tutorial , let's first acquaint ourselves with usage of torchdiffeq. Let's solve following differential equation . when The process to do it by hand is : Let's solve it using ODE Solver called odeint from torchdiffeq def f ( t, z ): return t z0 = torch. Tensor ([ 0 ]) t = torch. linspace ( 0, 2, 100 ) out = odeint ( f, z0, t ); Let's plot our result . It should be a parabola (remember general equation of parabola as ) plt. plot ( t, out, 'go--' ) plt. axes (). set_aspect ( 'equal', 'datalim' ) plt. grid () plt. show ()
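Since dz/dt = t with z(0) = 0 has the closed-form solution z(t) = t^2 / 2, we can also verify the solver output directly. A minimal check, assuming out and t from the cell above are still in scope:

analytic = t ** 2 / 2                                   # closed-form solution z(t) = t^2 / 2
max_err = torch.max(torch.abs(out.squeeze() - analytic))
print("max absolute error vs z(t) = t^2/2:", max_err.item())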
/usr/local/lib/python3. 7/dist-packages/ipykernel_launcher. py:2: Matplotlib Deprecation Warning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
What is a Neural Differential Equation ?
A neural differential equation is a differential equation using a neural network to parameterize the vector field. The canonical example is a neural ordinary differential equation: y(0) = y_0, dy/dt (t) = f_θ(t, y(t)). Here θ represents some vector of learnt parameters, f_θ is any standard neural architecture and y is the solution. For many applications f_θ will just be a simple feedforward network. Here d is the dimension, i. e. f_θ maps R × R^d to R^d and y takes values in R^d. Reference
The central idea now is to use a differential equation solver as part of a learnt differentiable computation graph (the sort of computation graph ubiquitous to deep learning). As a simple example, suppose we observe some picture (RGB and 32x32 pixels), and wish to classify it as a picture of a cat or as a picture of a dog.
With torchdiffeq, we can solve even complex higher order differential equations too. Following is a real world example, a set of differential equations that models a spring - mass damper system with initial state x = 1 at t = 0.
The right hand side may be regarded as a particular differentiable computation graph. The parameters may be fitted by setting up a loss between the trajectories of the model and the observed trajectories in the data, backpropagating through the model, and applying stochastic gradient descent. class System Of Equations : def __init__ ( self, km, p, g, r ): self. mat = torch. Tensor ([[ 0, 1, 0 ],[-km, p, 0 ],[ 0, g,-r ]]) def solve ( self, t, x0, dx0, ddx0 ): y0 = torch. cat ([ x0, dx0, ddx0 ]) out = odeint ( self. func, y0, t ) return out def func ( self, t, y ): out = y @self. mat return out x0 = torch. Tensor ([ 1 ]) dx0 = torch. Tensor ([ 0 ]) ddx0 = torch. Tensor ([ 1 ]) t = torch. linspace ( 0, 4 * np. pi, 1000 ) solver = System Of Equations ( 1, 6, 3, 2 ) out = solver. solve ( t, x0, dx0, ddx0 ) plt. plot ( t, out, 'r' ) plt. axes () plt. grid () plt. show () /usr/local/lib/python3. 7/dist-packages/ipykernel_launcher. py:25: Matplotlib Deprecation Warning: Adding an axes us ing the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new ins tance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior e nsured, by passing a unique label to each axes instance. This is precisely the same procedure as the more general neural ODEs we introduced earlier. At first glance, the NDE approach of 'putting a neural network in a differential equation' may seem unusual, but it is actually in line with standard practice. All that has happened is to change the parameterisation of the vector field. Model Let us have a look at how to embed an ODEsolver in a neural network . from torchdiffeq import odeint_adjoint as odeadj class f ( nn. Module ): def __init__ ( self, dim ): super ( f, self ). __init__ () self. model = nn. Sequential ( nn. Linear ( dim, 124 ), nn. Re LU (), nn. Linear ( 124, 124 ), nn. Re LU (), nn. Linear ( 124, dim ), nn. Tanh () ) def forward ( self, t, x ): return self. model ( x )
The function f in the code cell above is wrapped in an nn. Module (see the code cell below), thus forming the dynamics f_θ embedded within a neural network. ODEBlock treats the received input x as the initial value of the differential equation. The integration interval of ODEBlock is fixed at [0, 1], and it returns the output of the layer at t = 1.
class ODEBlock ( nn. Module ): # This is ODEBlock. Think of it as a wrapper over ODE Solver , so as to easily connect it with our neurons ! def __init__ ( self, f ): super ( ODEBlock, self ). __init__ () self. f = f self. integration_time = torch. Tensor ([ 0, 1 ]). float () def forward ( self, x ): self. integration_time = self. integration_time. type_as ( x ) out = odeadj ( self. f, x, self. integration_time ) return out [ 1 ]
class ODENet ( nn. Module ): #This is our main neural network that uses ODEBlock within a sequential module def __init__ ( self, in_dim, mid_dim, out_dim ): super ( ODENet, self ). __init__ () fx = f ( dim = mid_dim ) self. fc1 = nn. Linear ( in_dim, mid_dim ) self. relu1 = nn. Re LU ( inplace = True ) self. norm1 = nn. Batch Norm1d ( mid_dim ) self. ode_block = ODEBlock ( fx ) self. dropout = nn. Dropout ( 0. 4 ) self. norm2 = nn. Batch Norm1d ( mid_dim ) self. fc2 = nn. Linear ( mid_dim, out_dim ) def forward ( self, x ): batch_size = x. shape [ 0 ] x = x. view ( batch_size, -1 ) out = self. fc1 ( x ) out = self. relu1 ( out ) out = self. norm1 ( out ) out = self. ode_block ( out ) out = self. norm2 ( out ) out = self. dropout ( out ) out = self. fc2 ( out ) return out
As mentioned before, Neural ODE networks act similarly to other neural networks (with some advantages), so we can solve any problem with them that existing models can. We are going to reuse the training process mentioned in this deepchem tutorial. So rather than demonstrating how to use a Neural ODE model with a normal dataset, we shall use the Delaney solubility dataset provided under deepchem. Our model will learn to predict the solubilities of molecules based on their extended-connectivity fingerprints (ECFPs). For performance metrics we use pearson_r2_score. Here the loss is computed directly from the model's output.
tasks, dataset, transformers = dc. molnet. load_delaney ( featurizer = 'ECFP', splitter = 'random' ) train_set, valid_set, test_set = dataset metric = dc. metrics. Metric ( dc. metrics. pearson_r2_score )
Time to Train We train our model for 50 epochs, with L2 as Loss Function.
# Like mentioned before one can use GPUs with Py Torch and torchdiffeq device = torch. device ( "cuda" if torch. cuda. is_available () else "cpu" )
model = ODENet ( in_dim = 1024, mid_dim = 1000, out_dim = 1 ). to ( device ) model = dc. models. Torch Model ( model, dc. models. losses. L2Loss ()) model. fit ( train_set, nb_epoch = 50 ) print ( 'Training set score : ', model. evaluate ( train_set,[ metric ])) print ( 'Test set score : ', model. evaluate ( test_set,[ metric ])) Training set score : {'pearson_r2_score': 0. 9708644701066554} Test set score : {'pearson_r2_score': 0. 7104556551957734} Neural ODEs are invertible neural nets Reference Invertible neural networks have been a significant thread of research in the ICML community for several years. Such transformations can offer a range of unique benefits: They preserve information, allowing perfect reconstruction (up to numerical limits) and obviating the need to store hidden activations in memory for backpropagation. They are often designed to track the changes in probability density that applying the transformation induces (as in normalizing flows). Like autoregressive models, normalizing flows can be powerful generative models which allow exact likelihood computations; with the right architecture, they can also allow for much cheaper sampling than autoregressive models. While many researchers are aware of these topics and intrigued by several high-profile papers, few are familiar enough with the technical details to easily follow new developments and contribute. Many may also be unaware of the wide range of applications of invertible neural networks, beyond generative modelling and variational inference. Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Discord The Deep Chem Discord hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
Differentiation Infrastructure in Deepchem
Author : Rakshit Kr. Singh : Website : Linked In : Git Hub
Scientific advancement in machine learning hinges on the effective resolution of complex optimization problems. From material property design to drug discovery, these problems often involve numerous variables and intricate relationships. Traditional optimization techniques often face hurdles when addressing such challenges, often resulting in slow convergence or solutions deemed unreliable. We introduce solutions that are differentiable and also seamlessly integrable into machine learning systems, offering a novel approach to resolving these complexities.
This tutorial introduces Deep Chem's comprehensive set of differentiable optimisation tools to empower researchers across the physical sciences. Deep Chem addresses limitations of conventional methods by offering a diverse set of optimization algorithms. These include established techniques like Broyden's first and second methods alongside cutting-edge advancements, allowing researchers to select the most effective approach for their specific problem.
Overview of Differentiation Utilities in Deepchem
Deep Chem provides a number of optimisation algorithms and utilities for implementing more algorithms. Some of the optimisation algorithms provided by Deep Chem are: Broyden's First Method Broyden's Second Method Anderson Acceleration Gradient Descent Adam
Along with these optimisation algorithms, Deep Chem also provides a number of utilities for implementing more algorithms.
What are Non Linear Equations? and why do they matter?
Nonlinear equations are mathematical expressions where the relationship between the variables is not linear. Unlike linear equations, which have a constant rate of change, nonlinear equations involve terms with higher powers or functions like exponentials, logarithms, trigonometric functions, etc. Nonlinear equations are essential across various disciplines, including physics, engineering, economics, biology, and finance. They describe complex relationships and phenomena that cannot be adequately modeled with linear equations. From gravitational interactions in celestial bodies to biochemical reactions in living organisms, non-linear equations play a vital role in understanding and predicting real-world systems, whether it's optimizing structures, analyzing market dynamics, or designing machine learning algorithms.
Some Simple Non Linear Equations: y = sin(x) is a trigonometric function defined for all real numbers. It represents the ratio of the length of the side opposite an angle in a right triangle to the length of the hypotenuse. y = cos(x) is another trigonometric function. It represents the ratio of the length of the adjacent side of a right triangle to the length of the hypotenuse when x is the measure of an acute angle. y = x^2 is a parabola, symmetric around the y-axis, with its vertex at the origin. It represents a mathematical model of quadratic growth or decay. In physical systems, it often describes phenomena where the rate of change is proportional to the square of the quantity involved.
import matplotlib. pyplot as plt import numpy as np x = np. linspace ( 0, 10, 100 ) y1 = np. sin ( x ) y2 = np. cos ( x ) y3 = x ** 2 fig, axs = plt. subplots ( 1, 3, figsize = ( 9, 3 )) axs [ 0 ]. plot ( x, y1, color = 'blue' )
axs [ 0 ]. set_title ( 'Sin' ) axs [ 1 ]. plot ( x, y2, color = 'red' ) axs [ 1 ]. set_title ( 'Cos' ) axs [ 2 ]. plot ( x, y3, color = 'green' ) axs [ 2 ]. set_title ( 'x^2' ) plt. tight_layout () plt. show () Root Finder Methods deepchem. utils. differentiation_utils. optimize. rootfinder provides a collection of algorithms for solving nonlinear equations. These methods are designed to find the roots of functions efficiently, making them indispensable for a wide range of applications in mathematics, physics, engineering, and other fields. At its core, rootfinding seeks to determine the solutions (roots) of equations, where a function equals zero. This operation plays a pivotal role in numerous real-world applications, making it indispensable in both theoretical and practical domains. Broyden's First Method: Broyden's First Method is an iterative numerical method used for solving systems of nonlinear equations. It's particularly useful when the Jacobian matrix (the matrix of partial derivatives of the equations) is difficult or expensive to compute. Broyden's Method is an extension of the Secant Method for systems of nonlinear equations. It iteratively updates an approximation to the Jacobian matrix using the information from previous iterations. The algorithm converges to the solution by updating the variables in the direction that minimizes the norm of the system of equations. Steps: 1. Initialize the approximation to the Jacobian matrix. 2. Initialize the variables. 3. Compute the function values. 4. Update the variables. 5. Compute the change in variables. 6. Compute the function values. 7. Update the approximation to the Jacobian matrix.
8. Repeat steps 4-7 until convergence criteria are met. References: [1] "A class of methods for solving nonlinear simultaneous equations" by Charles G. Broyden import torch from deepchem. utils. differentiation_utils import rootfinder def func1 ( y, A ): return torch. tanh ( A @ y + 0. 1 ) + y / 2. 0 A = torch. tensor ([[ 1. 1, 0. 4 ], [ 0. 3, 0. 8 ]]). requires_grad_ () y0 = torch. zeros (( 2, 1 )) # Broyden's First Method yroot = rootfinder ( func1, y0, params = ( A,), method = 'broyden1' ) print ( "Root By Broyden's First Method:" ) print ( yroot ) print ( "Function Value at Calculated Root:" ) print ( func1 ( yroot, A )) Root By Broyden's First Method: tensor([[-0. 0459], [-0. 0663]], grad_fn=<_Root Finder Backward>) Function Value at Calculated Root: tensor([[1. 1735e-07], [1. 7881e-07]], grad_fn=<Add Backward0>) from deepchem. utils. differentiation_utils. optimize. rootsolver import broyden1 def fcn ( x ): return x ** 2 - 4 + torch. tan ( x ) x0 = torch. tensor ( 0. 0, requires_grad = True ) x = broyden1 ( fcn, x0 ) x, fcn ( x ) (tensor(2. 2752, grad_fn=<View Backward0>), tensor(1. 7881e-06, grad_fn=<Add Backward0>)) Broyden's Second Method: Broyden's Second Method differs from the first method in how it updates the approximation to the Jacobian matrix. Instead of using the change in variables and function values, it uses the change in the residuals (the difference between the function values and the target values) to update the Jacobian matrix. This approach can be more stable and robust in certain situations. Steps: 1... 6 are same as Broyden's First Method. 7. Update the approximation to the Jacobian matrix. 8. Repeat steps 4-7 until convergence criteria are met. # Broyden's Second Method import torch from deepchem. utils. differentiation_utils import rootfinder def func1 ( y, A ): return torch. tanh ( A @ y + 0. 1 ) + y / 2. 0 A = torch. tensor ([[ 1. 1, 0. 4 ], [ 0. 3, 0. 8 ]]). requires_grad_ () y0 = torch. zeros (( 2, 1 )) yroot = rootfinder ( func1, y0, params = ( A,), method = 'broyden2' ) print ( " \n Root by Broyden's Second Method:" ) print ( yroot ) print ( "Function Value at Calculated Root:" ) print ( func1 ( yroot, A )) Root by Broyden's Second Method: tensor([[-0. 0459], [-0. 0663]], grad_fn=<_Root Finder Backward>) Function Value at Calculated Root: tensor([[ 1. 0300e-06], [-3. 2783e-07]], grad_fn=<Add Backward0>)
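Because yroot is returned with a grad_fn (note the _Root Finder Backward in the outputs above), the computed root is itself differentiable with respect to the parameters of the equation. The short sketch below, assuming yroot and A from the previous cell are still in scope, asks PyTorch for the gradient of the summed root with respect to A; this differentiability is what lets these solvers sit inside larger learnable computation graphs.

# Gradient of the root with respect to the matrix A that defines the equation
grad_A, = torch.autograd.grad(yroot.sum(), A, retain_graph=True)
print("d(sum of root)/dA:")
print(grad_A)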
Equilibrium Methods (Fixed Point Iteration)
deepchem. utils. differentiation_utils. optimize. equilibrium contains algorithms for solving equilibrium problems, where the goal is to find a fixed point of a function. While all the rootfinding methods can be used to solve equilibrium problems, these specialized algorithms are designed to exploit the structure of equilibrium problems for more efficient convergence. Equilibrium methods are essential in machine learning for optimizing models, ensuring stability and convergence, regularizing parameters, and analyzing strategic interactions in multi-agent systems. By leveraging equilibrium principles and techniques, machine learning practitioners can train more robust and generalizable models capable of addressing a wide range of real-world challenges.
The Fixed-Point Problem: Given a function g : R^n -> R^n, compute a fixed point x* such that x* = g(x*).
Classical Approach: Steps: 1. Initialize the variables. 2. Compute the function values. 3. Update the variables with x_{k+1} = g(x_k). 4. Repeat steps 2-3 until convergence criteria are met.
Anderson Acceleration Approach (Anderson Mixing): Anderson Acceleration is an iterative method for accelerating the convergence of fixed-point iterations. It combines information from previous iterations to construct a better approximation to the fixed-point. The algorithm uses a history of function values and updates to compute a new iterate that minimizes the residual norm. Steps: 1. Start from x_0 and the fixed-point mapping g, and set x_1 = g(x_0). 2. At iteration k, choose m_k (e. g., m_k = min(m, k) for some integer m). Select weights alpha_0, ..., alpha_{m_k} over the last m_k + 1 iterates, satisfying sum_j alpha_j = 1 and minimizing || sum_j alpha_j ( g(x_{k-m_k+j}) - x_{k-m_k+j} ) ||. 3. Set x_{k+1} = sum_j alpha_j g(x_{k-m_k+j}).
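For comparison with the Anderson-accelerated solver used below, here is a minimal sketch of the classical fixed-point iteration applied to the same Babylonian square-root mapping g(x) = (a/x + x)/2. This is plain Python/PyTorch for illustration, not a DeepChem API.

import torch

def g(x, a):
    # Fixed-point mapping whose fixed point is sqrt(a)
    return (a / x + x) / 2

a = 2.0
x = torch.tensor([1.0])
for k in range(16):
    x = g(x, a)          # classical update: x_{k+1} = g(x_k)
print("fixed point after 16 classical iterations:", x.item())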
import torch import matplotlib. pyplot as plt from deepchem. utils. differentiation_utils. optimize. equilibrium import anderson_acc x_value, f_value = [], [] def fcn ( x, a ): x_value. append ( x. item ()) f_value. append (( a / x + x ). item () / 2 ) return ( a / x + x ) / 2 a = 2. 0 x0 = torch. tensor ([ 1. 0 ], requires_grad = True ) x = anderson_acc ( fcn, x0, params = [ a ], maxiter = 16 ) print ( "Root by Anderson Acceleration:", x. item ()) print ( "Function Value at Calculated Root:", fcn ( x, a ). item ()) # Plotting the convergence of Anderson Acceleration plt. plot ( x_value, label = 'Input Value' ) plt. plot ( f_value, label = 'Func. Value by Anderson Acce. ' ) plt. legend ( loc = 'lower right' ) plt. xlabel ( 'Iteration' ) plt. ylabel ( 'Function Value' ) plt. title ( 'Convergence of Anderson Acceleration' ) plt. show () Root by Anderson Acceleration: 1. 4142135381698608 Function Value at Calculated Root: 1. 4142135381698608 Minimizer deepchem. utils. differentiation_utils. optimize. minimizer provides a collection of algorithms for minimizing functions. These methods are designed to find the minimum of a function efficiently, making them indispensable for a wide range of applications in mathematics, physics, engineering, and other fields. Minimization algorithms, including variants of gradient descent like ADAM, are fundamental tools in various fields of science, engineering, and optimization. Gradient Descent Gradient descent is a method for unconstrained mathematical optimization. It is a first-order iterative algorithm for finding a local minimum of a differentiable multivariate function. It is used to minimize the cost function in various machine learning and optimization problems. It iteratively updates the parameters in the direction of the negative gradient of the cost function. Steps: 1. Denote the parameter vector to be optimized -
call it θ - and let θ_0 represent the initial guess. 2. Calculate the gradient ∇J(θ_t) of the cost function with respect to each parameter. 3. Adjust the parameters in the opposite direction of the gradient to minimize the cost function, according to the learning rate α: θ_{t+1} = θ_t - α ∇J(θ_t). 4. Repeat steps 2 and 3 until the algorithm converges or stops.
import torch from deepchem. utils. differentiation_utils. optimize. minimizer import gd def fcn ( x ): return 2 * x + ( x - 2 ) ** 2, 2 * ( x - 2 ) + 2 x0 = torch. tensor ( 0. 0, requires_grad = True ) x = gd ( fcn, x0, []) print ( "Minimum by Gradient Descent:", x. item ()) print ( "Function Value at Calculated Minimum:", fcn ( x )[ 0 ]. item ())
Minimum by Gradient Descent: 0. 9973406791687012 Function Value at Calculated Minimum: (tensor(3. 0000), tensor(-0. 0053))
ADAM (Adaptive Moment Estimation)
ADAM is an optimization algorithm used for training deep learning models. It's an extension of the gradient descent optimization algorithm and combines the ideas of both momentum and RMSProp algorithms. Steps: 1. ADAM initializes two moving average variables: m (the first moment, similar to momentum) and v (the second moment, similar to RMSProp). 2. At each iteration of training, the gradients g_t of the parameters with respect to the loss function are computed. 3. The moving averages m and v are updated using exponential decay, with momentum and RMSProp components respectively: m_t = beta_1 * m_{t-1} + (1 - beta_1) * g_t and v_t = beta_2 * v_{t-1} + (1 - beta_2) * g_t^2. 4. Due to the initialization of the moving averages to zero vectors, there's a bias towards zero, especially during the initial iterations. To correct this bias, ADAM applies a bias correction step: m_hat_t = m_t / (1 - beta_1^t) and v_hat_t = v_t / (1 - beta_2^t).
5. Finally, the parameters (weights and biases) of the model are updated using the moving averages and the learning rate : import torch from deepchem. utils. differentiation_utils. optimize. minimizer import adam def fcn ( x ): return 2 * x + ( x - 2 ) ** 2, 2 * ( x - 2 ) + 2 x0 = torch. tensor ( 10. 0, requires_grad = True ) x = adam ( fcn, x0, [], maxiter = 20000 ) print ( "X at Minimum by Adam:", x. item ()) print ( "Function Value at Calculated Minimum:", fcn ( x )[ 0 ]. item ()) X at Minimum by Adam: 1. 0067708492279053 Function Value at Calculated Minimum: 3. 0000457763671875 Conclusion Differentiable optimization techniques are essential for many advanced computational experiments involving Environment Simulations like DFT, Physics Informed Neural Networks and as fundamental mathematical foundation for Molecular Simulation like Monte Carlo and Molecular Dynamics. By integrating deep learning into simulations, we optimize efficiency and accuracy by leveraging trainable neural networks to replace costly or less precise components. This advancement holds immense potential for expediting scientific advancements and addressing longstanding mysteries with greater efficacy. References [1] Raissi M, Perdikaris P, Karniadakis GE. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics (2019) [2] Muhammad F. Kasim, Sam M. Vinko. Learning the exchange-correlation functional from nature with fully differentiable density functional theory. 2021 American Physical Society [3] Nathan Argaman, Guy Makov. Density Functional Theory -- an introduction. American Journal of Physics 68 (2000), 69-79 [4] John Ingraham et al. Learning Protein Structure with a Differentiable Simulator. ICLR. 2019. Citing This Tutorial If you found this tutorial useful please consider citing it using the provided Bib Te X. @manual { Quantum Chemistry, title={Differentiation Infrastructure in Deepchem }, organization={Deep Chem}, author={Singh, Rakshit kr. }, howpublished = {\url{https://github. com/deepchem/deepchem/blob/master/examples/tutorials/Differentiation_Infrastructure_in_Deepchem. ipynb}}, year={2024}, } Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways:
Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Discord The Deep Chem Discord hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
Ordinary Differential Equation Solving
Author : Rakshit Kr. Singh : Website : Linked In : Git Hub
Ordinary Differential Equations (ODEs) are a cornerstone of mathematical modeling, essential for understanding dynamic systems in scientific and engineering fields. This tutorial aims to introduce you to ODE solving tools in Deep Chem.
What are Ordinary Differential Equations?
An ordinary differential equation (ODE) is a type of differential equation that depends on a single independent variable. In contrast to partial differential equations (PDEs), which involve multiple independent variables, ODEs focus on relationships where changes occur with respect to just one variable. The term "ordinary" distinguishes these equations from stochastic differential equations (SDEs), which incorporate random processes. ODEs consist of unknown functions and their derivatives, establishing relationships that describe how a quantity changes over time or space. These equations are fundamental in expressing the dynamics of systems.
General Form of an ODE - dy/dx = f(x, y). Here, dy/dx is the derivative of y with respect to x, and f(x, y) is a function of x and y.
Why we should Care About Ordinary Differential Equations
They are essential because they model how physical quantities change over time. ODEs are used in: Physics: To describe the motion of particles, the evolution of wave functions, and more. Engineering: To design control systems, signal processing, and electrical circuits. Biology: To model population dynamics, the spread of diseases, and biological processes. Economics: To analyze growth models, market equilibrium, and financial systems. Control Systems and Robotics: In control systems and robotics, ODEs are fundamental in describing the dynamics of systems.
Solving Ordinary Differential Equations Steps: Formulate the ODE: An ODE involves an unknown function and its derivatives. Find the general solution: The goal is to find the function that satisfies the ODE. This process often involves integration.
Apply initial or boundary condition: To find a specific solution, you often need additional information, such as the value of the function and its derivatives at certain points. These are called initial or boundary conditions. Methods for Solving ODEs in Deep Chem Deep Chem boasts a number of methods for solving ODEs. Some of them are: Euler's Method Mid Point Method 3/8 Method RK-4 Method Euler's Method (1st Order Runge-Kutta Method) It is the simplest Runge-Kutta method Explicit Runge-Kutta method with one stage Mid-Point Method (2nd Order Runge-Kutta Method) It is also called modified Euler's method second-order method with two stages from deepchem. utils. differentiation_utils. integrate. explicit_rk import mid_point_ivp, fwd_euler_ivp import matplotlib. pyplot as plt import torch # Simple ODE dy/dt = a*y def ode ( t, y, params ): a = params [ 0 ] return y * a t = torch. linspace ( 0, 20, 5 ) y_0 = torch. tensor ([ 1 ]) a = torch. tensor ([ 1 ]) sol_fwd_euler = fwd_euler_ivp ( ode, y_0, t, [ a ]) sol_mid_point = mid_point_ivp ( ode, y_0, t, [ a ]) plt. plot ( t, sol_fwd_euler, "-b", label = "Euler's method" ) plt. plot ( t, sol_mid_point, "-r", label = "Mid-point method" ) plt. legend ( loc = "upper left" ) plt. show () No normalization for SPS. Feature removed! No normalization for Avg Ipc. Feature removed! Skipped loading some Tensorflow models, missing a dependency. No module named 'tensorflow' Skipped loading modules with pytorch-geometric dependency, missing a dependency. No module named 'dgl' Skipped loading modules with transformers dependency. No module named 'transformers' cannot import name 'Hugging Face Model' from 'deepchem. models. torch_models' (/home/gigavolt/deepchem/deepchem/mode ls/torch_models/__init__. py) Skipped loading modules with pytorch-lightning dependency, missing a dependency. No module named 'lightning' Skipped loading some Jax models, missing a dependency. No module named 'jax' Skipped loading some Py Torch models, missing a dependency. No module named 'tensorflow'
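Because dy/dt = a*y with y(0) = 1 has the exact solution y(t) = exp(a*t), we can also compare the accuracy of the two solvers on a finer grid. A minimal sketch, assuming ode, y_0 and a from the cell above are in scope and that the solvers return one row per time point, as in the plot:

t_fine = torch.linspace(0, 2, 101)
sol_e = fwd_euler_ivp(ode, y_0, t_fine, [a])
sol_m = mid_point_ivp(ode, y_0, t_fine, [a])
exact = torch.exp(a * t_fine)                       # analytic solution y(t) = exp(a*t)

err_e = torch.max(torch.abs(torch.as_tensor(sol_e).squeeze() - exact))
err_m = torch.max(torch.abs(torch.as_tensor(sol_m).squeeze() - exact))
print("max Euler error    :", float(err_e))
print("max mid-point error:", float(err_m))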
RK4 Method
k1 = f(t_n, y_n)
k2 = f(t_n + h/2, y_n + h*k1/2)
k3 = f(t_n + h/2, y_n + h*k2/2)
k4 = f(t_n + h, y_n + h*k3)
y_{n+1} = y_n + (h/6) * (k1 + 2*k2 + 2*k3 + k4), for n = 0, 1, 2, 3, ...
Second Order Differential Equation Example: y'' + y' - 6y = 0, given y = 5 and dy/dt = -5 at t = 0.
Procedure: introduce z = dy/dt so the equation becomes the first-order system dy/dt = z, dz/dt = 6y - z, which can then be integrated with RK4 as in the code below.
from deepchem. utils. differentiation_utils. integrate. explicit_rk import rk4_ivp import matplotlib. pyplot as plt import torch
def sode ( variables, t, params ): y, z = variables a = params [ 0 ] dydt = z dzdt = a * y - z return torch. tensor ([ dydt, dzdt ])
params = torch. tensor ([ 6 ]) t = torch. linspace ( 0, 1, 100 ) y0 = torch. tensor ([ 5, -5 ]) sol = rk4_ivp ( sode, y0, t, params ) plt. plot ( t, sol [:, 0 ]) plt. show ()
Comparing with Particular Solution Particular Solution: y(t) = 2e^{2t} + 3e^{-3t}
yy = 2 * torch. exp ( 2 * t ) + 3 * torch. exp (-3 * t ) plt. plot ( t, yy, "-b", label = "Known solution" ) plt. plot ( t, sol [:, 0 ], "-r", label = "RK4 method" ) plt. legend ( loc = "upper left" ) plt. show ()
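To quantify the agreement shown in the plot, we can also print the largest deviation between the RK4 solution and the known particular solution. A small check, assuming sol and yy from the cells above are still in scope:

max_err = torch.max(torch.abs(torch.as_tensor(sol)[:, 0] - yy))
print("max absolute error of RK4 vs the particular solution:", float(max_err))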
Solving Lotka Volterra using Deepchem
The Lotka-Volterra equations, also known as the Lotka-Volterra predator-prey model, are a pair of first-order nonlinear differential equations, frequently used to describe the dynamics of biological systems in which two species interact, one as a predator and the other as prey. The populations change through time according to the pair of equations: dy1/dt = a*y1 - b*y1*y2 (prey) and dy2/dt = c*y1*y2 - d*y2 (predator), where y1 is the prey population, y2 is the predator population and a, b, c, d are positive parameters describing the interaction of the two species. The Lotka-Volterra system of equations is an example of a Kolmogorov model, which is a more general framework that can model the dynamics of ecological systems with predator-prey interactions, competition, disease, and mutualism.
from deepchem. utils. differentiation_utils. integrate. explicit_rk import rk38_ivp import matplotlib. pyplot as plt import torch
def lotka_volterra ( y, x, params ): y1, y2 = y a, b, c, d = params return torch. tensor ([( a * y1 - b * y1 * y2 ), ( c * y2 * y1 - d * y2 )])
t = torch. linspace ( 0, 50, 10000 ) solver_param = [ lotka_volterra, torch. tensor ([ 10., 1. ]), t, torch. tensor ([ 1. 1, 0. 4, 0. 1, 0. 4 ])] sol_rk38 = rk38_ivp ( * solver_param ) plt. plot ( t, sol_rk38 ) plt. show ()
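A classic way to visualize Lotka-Volterra dynamics is the phase plane, where the closed orbits show that the predator and prey populations cycle around each other. A short sketch, assuming sol_rk38 from the cell above has the prey population in column 0 and the predator population in column 1:

plt.plot(sol_rk38[:, 0], sol_rk38[:, 1])
plt.xlabel("Prey population")
plt.ylabel("Predator population")
plt.title("Lotka-Volterra phase plane")
plt.show()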
Lotka-Volterra (Parameter Estimation)
Parameter Estimation is used to estimate the values of the changeable parameters in the ODE. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
import pandas as pd dataset = pd. read_csv ( 'assets/population_data. csv' ) years = torch. tensor ( dataset [ 'year' ]) fish_pop = torch. tensor ( dataset [ 'fish_hundreds' ]) bears_pop = torch. tensor ( dataset [ 'bears_hundreds' ]) plt. plot ( fish_pop, "-b", label = "Fish population" ) plt. plot ( bears_pop, "-r", label = "Bear population" ) plt. legend ( loc = "upper left" ) plt. title ( "Population data" ) plt. show ()
from deepchem. utils. differentiation_utils. integrate. explicit_rk import rk4_ivp import torch import matplotlib. pyplot as plt
def lotka_volterra ( y, x, params ): y1, y2 = y a, b, c, d = params return torch. tensor ([ a * y1 - b * y1 * y2, c * y2 * y1 - d * y2 ])
def loss_function ( params, years, fish_pop, bears_pop ): y0 = torch. tensor ([ fish_pop [ 0 ], bears_pop [ 0 ]]) t = torch. linspace ( years [ 0 ], years [-1 ], len ( years )) output = rk4_ivp ( lotka_volterra, y0, t, params ) loss = 0 for i in range ( len ( years )): data_fish = fish_pop [ i ] model_fish = output [ i, 0 ] data_bears = bears_pop [ i ] model_bears = output [ i, 1 ] res = ( data_fish - model_fish ) ** 2 + ( data_bears - model_bears ) ** 2 loss += res return ( loss ) import scipy. optimize params0 = torch. tensor ([ 1. 1, . 4, . 1, . 4 ]) minimum = scipy. optimize. fmin ( loss_function, params0, args = ( years, fish_pop, bears_pop )) alpha_fit = minimum [ 0 ] beta_fit = minimum [ 1 ] delta_fit = minimum [ 2 ] gamma_fit = minimum [ 3 ] params = torch. tensor ([ alpha_fit, beta_fit, delta_fit, gamma_fit ]) y0 = torch. tensor ([ fish_pop [ 0 ], bears_pop [ 0 ]]) t = torch. linspace ( years [ 0 ], years [-1 ], 1000 ) output = rk4_ivp ( lotka_volterra, y0, t, params ) plt. plot ( t, output ) plt. show () Optimization terminated successfully. Current function value: 42. 135876 Iterations: 155 Function evaluations: 256 SIR Epidemiology The SIR model is one of the simplest compartmental models, and many models are derivatives of this basic form. The model consists of three compartments: S: The number of susceptible individuals. When a susceptible and an infectious individual come into "infectious
contact", the susceptible individual contracts the disease and transitions to the infectious compartment. I: The number of infectious individuals. These are individuals who have been infected and are capable of infecting susceptible individuals. R: The number of removed (and immune) or deceased individuals. These are individuals who have been infected and have either recovered from the disease and entered the removed compartment, or died. It is assumed that the number of deaths is negligible with respect to the total population. This compartment may also be called "recovered" or "resistant". import torch import matplotlib. pyplot as plt from deepchem. utils. differentiation_utils. integrate. explicit_rk import rk4_ivp def sim ( variables, t, params ): S, I, R = variables N = S + I + R beta, gamma = params d Sdt = - beta * I * S / N d Idt = beta * I * S / N - gamma * I d Rdt = gamma * I return torch. tensor ([ d Sdt, d Idt, d Rdt ]) t = torch. linspace ( 0, 500, 1000 ) beta = 0. 04 gamma = 0. 01 params = torch. tensor ([ beta, gamma ]) y0 = torch. tensor ([ 100, 1, 0 ]) y = rk4_ivp ( sim, y0, t, params ) plt. plot ( t, y ) plt. legend ([ "Susceptible", "Infectious", "Removed" ]) plt. show ()
SIS Model
Some infections, for example, those from the common cold and influenza, do not confer any long-lasting immunity. Such infections may give temporary resistance but do not give long-term immunity upon recovery from infection, and individuals become susceptible again.
Model: dS/dt = -beta*S*I/N + gamma*I, dI/dt = beta*S*I/N - gamma*I. Total Population: N = S + I.
import torch import matplotlib. pyplot as plt from deepchem. utils. differentiation_utils. integrate. explicit_rk import rk4_ivp
def sim ( variables, t, params ): S, I = variables N = S + I beta, gamma = params dSdt = - beta * I * S / N + gamma * I dIdt = beta * I * S / N - gamma * I return torch. tensor ([ dSdt, dIdt ])
t = torch. linspace ( 0, 500, 1000 ) beta = 0. 04 gamma = 0. 01 params = torch. tensor ([ beta, gamma ]) y0 = torch. tensor ([ 100, 1 ]) y = rk4_ivp ( sim, y0, t, params ) plt. plot ( t, y ) plt. legend ([ "Susceptible", "Infectious" ]) plt. show ()
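Unlike the SIR model, the SIS model settles into an endemic equilibrium: setting dI/dt = 0 gives I* = N * (1 - gamma/beta), i.e. I* = N * (1 - 1/R0). A small check against the simulation, assuming beta, gamma and y from the cell above are in scope (the columns of y are S, I) and the total population is N = 101:

N = 101.0
I_star = N * (1 - gamma / beta)                      # predicted endemic infectious level
print("predicted endemic infectious level I*:", I_star)   # 75.75 for these parameters
print("simulated infectious level at the end of the run:", float(y[-1, 1]))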
References 1. More Computational Biology and Python by Mike Saint-Antoine https://www. youtube. com/playlist? list=PLWVKUEZ25V97W2q S7fagg Hrv5gdh Pcgjq 2. Compartmental models in epidemiology. (2024, May 27). In Wikipedia. https://en. wikipedia. org/wiki/Compartmental_models_in_epidemiology 3. Runge-Kutta methods. (2024, June 1). In Wikipedia. https://en. wikipedia. org/wiki/Runge%E2%80%93Kutta_methods Citing This Tutorial If you found this tutorial useful please consider citing it using the provided Bib Te X. @manual { Differential Equation, title={Differentiation Infrastructure in Deepchem }, organization={Deep Chem}, author={Singh, Rakshit kr. and Ramsundar, Bharath}, howpublished = {\url{https://github. com/deepchem/deepchem/blob/master/examples/tutorials/ODE_Solving. ipynb}}, year={2024}, } Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with Deep Chem, we encourage you to finish the rest of the tutorials in this series. You can also help the Deep Chem community in the following ways: Star Deep Chem on Git Hub This helps build awareness of the Deep Chem project and the tools for open source drug discovery that we're trying to build. Join the Deep Chem Discord The Deep Chem Discord hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
Introduction To Equivariance and Equivariant Modeling Table of Contents: Introduction What is Equivariance Why Do We Need Equivariance Example References Introduction In the preceding sections of this tutorial series, we focused on training models using Deep Chem for various applications. However, we haven't yet addressed the important topic of equivariant modeling. Equivariant modeling ensures that the relationship between input and output remains consistent even when subjected to symmetry operations. By incorporating equivariant modeling techniques, we can effectively analyze and predict diverse properties by leveraging the inherent symmetries present in the data. This is particularly valuable in the fields of cheminformatics, bioinformatics, and material sciences, where understanding the interplay between symmetries and properties of molecules and materials is critical. This tutorial aims to explore the concept of equivariance and its significance within the domains of chemistry, biology, and material sciences. We will delve into the reasons why equivariant modeling is vital for accurately characterizing and predicting the properties of molecules and materials. By the end, you will have a solid understanding of the importance of equivariance and how it can significantly enhance our modeling capabilities in these areas. You can follow this tutorial using the Google Colab. If you'd like to open this notebook in colab, you can use the following link. O p e n i n C o l a b O p e n i n C o l a b What is Equivariance A key aspect of the structure in our data is the presence of certain symmetries. To effectively capture this structure, our model should incorporate our knowledge of these symmetries. Therefore, our model should retain the symmetries of the input data in its outputs. In other words, when we apply a symmetry operation (denoted by σ) to the input and pass it through the model, the result should be the same as applying σ to the output of the model. Mathematically, we can express this idea as an equation: f(σ(x)) = σ(f(x)) Here, f represents the function learned by our model. If this equation holds for every symmetry operation in a collection S, we say that f is equivariant with respect to S. While a precise definition of equivariance involves group theory and allows for differences between the applied symmetry operations on the input and output, we'll focus on the case where they are identical to keep things simpler. Group Equivariant Convolutional Networks exemplify this stricter definition of equivariance. Interestingly, equivariance shares a similarity with linearity. Just as linear functions are equivariant with respect to scalar multiplication, equivariant functions allow symmetry operations to be applied inside or outside the function. To gain a better understanding, let's consider Convolutional Neural Networks (CNNs). The image below demonstrates how CNNs exhibit equivariance with respect to translation: a shift in the input image directly corresponds to a shift in the output features.
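The same behaviour is easy to see numerically with a one-dimensional convolution: shifting the input signal shifts the filter response by the same amount. A tiny illustration in plain NumPy (exact away from the array boundaries):

import numpy as np

signal = np.zeros(12)
signal[3] = 1.0                       # an impulse at position 3
kernel = np.array([1.0, 2.0, 1.0])    # a small smoothing filter

out = np.convolve(signal, kernel, mode="same")
out_shifted_input = np.convolve(np.roll(signal, 4), kernel, mode="same")

# Translating the input by 4 and then filtering equals filtering and then translating by 4
print(np.allclose(np.roll(out, 4), out_shifted_input))   # True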
It is also useful to relate equivariance to the concept of invariance, which is more familiar. If a function f is invariant, its output remains unchanged when σ is applied to the input. In this case, the equation simplifies to: f(σ(x)) = f(x) An equivariant embedding in one layer can be transformed into an invariant embedding in a subsequent layer. The feasibility and meaningfulness of this transformation depend on the implementation of equivariance. Notably, networks with multiple convolutional layers followed by a global average pooling layer (GAP) achieve this conversion. In such cases, everything up to the GAP layer exhibits translation equivariance, while the output of the GAP layer (and the entire network) becomes invariant to translations of the input. Why Do We Need Equivariance Equivariance is a critical concept in modeling various types of data, particularly when dealing with structures and symmetries. It provides a powerful framework for capturing and leveraging the inherent properties and relationships present in the data. In this section, we will explore the reasons why equivariance is essential and how it is particularly advantageous when working with graph-structured data. 1. Preserving Structural Information Data often exhibits inherent structural properties and symmetries. Equivariant models preserve the structural information present in the data, allowing us to analyze and manipulate it while maintaining consistency under symmetry operations. By doing so, equivariant models capture the underlying relationships and patterns, leading to more accurate and meaningful insights. 2. Handling Symmetry and Invariance Symmetries and invariances are prevalent in many real-world systems. Equivariant models ensure that the learned representations and predictions are consistent under these symmetries and invariances. By explicitly modeling equivariance, we can effectively handle and exploit these properties, leading to robust and reliable models. 3. Improved Generalization Equivariant models have the advantage of generalizing well to unseen data. By incorporating the known symmetries and structures of the domain into the model architecture, equivariance ensures that the model can effectively capture and utilize these patterns even when presented with novel examples. This leads to improved generalization performance, making equivariant models valuable in scenarios where extrapolation or prediction on unseen instances is crucial.
4. Efficient Processing of Graph-Structured Data
Graph-structured data possess rich relational information and symmetries. Equivariant models specifically tailored for graph data offer a natural and efficient way to model and reason about these complex relationships. By considering the symmetries of the graph, equivariant models can effectively capture the local and global patterns, enabling tasks such as node classification, link prediction, and graph generation.
Example
Traditional machine learning (ML) algorithms face challenges when predicting molecular properties due to the representation of molecules. Typically, molecules are represented as 3D Cartesian arrays with a shape of (points, 3). However, neural networks (NN) cannot directly process such arrays because each position in the array lacks individual significance. For instance, a molecule can be represented by one Cartesian array centered at (0, 0, 0) and another centered at (15, 15, 15), both representing the same molecule but with distinct numerical values. This exemplifies translational variance. Similarly, rotational variance arises when the molecule is rotated instead of translated. In these examples, if the different arrays representing the same molecule are inputted into the NN, it would perceive them as distinct molecules, which is not the case. To address these issues of translational and rotational variance, considerable efforts have been devoted to devising alternative input representations for molecules.
Let's demonstrate with some code how to go about creating functions that obey a set of equivariances. We won't be training these models because training has no effect on equivariances.
To define the molecule, we represent it as a collection of coordinates (denoted as R_i) and corresponding features (denoted as X_i). The features are encoded as one-hot vectors, where [1, 0] indicates a carbon atom, and [0, 1] indicates a hydrogen atom. In this specific example, our focus is on predicting the energy associated with the molecule. It's important to note that we will not be training our models, meaning the predicted energy values will not be accurate.
Let's define a random molecule with 12 atoms as an example:
import numpy as np np. random. seed ( 42 ) # seed for reproducibility R_i = np. random. rand ( 12, 3 ) # 12 atoms with xyz coordinates N = R_i. shape [ 0 ] # number of atoms X_i = np. zeros (( N, 2 )) # feature vectors for the atoms with shape (N, 2) X_i [: 4, 0 ] = 1 X_i [ 4 :, 1 ] = 1
An example of a model that lacks equivariances is a one-hidden layer dense neural network. In this model, we concatenate the positions and features of our data into a single input tensor, which is then passed through a dense layer. The dense layer utilizes the hyperbolic tangent (tanh) activation function and has a hidden layer dimension of 16. The output layer, which performs regression to energy, does not have an activation function. The weights of the model are always initialized randomly.
Let's define our hidden model and initialize the weights:
def hidden_model ( r : np. ndarray, x : np. ndarray, w1 : np. ndarray, w2 : np. ndarray, b1 : np. ndarray, b2 : float ) -> np. ndarray
ndarray Weight matrix for the second layer. Shape: (hidden_size, output_size) b1 : np. ndarray Bias vector for the first layer. Shape: (hidden_size,) b2 : float Bias value for the second layer. Returns -------
float Predicted energy of the molecule """ i = np. concatenate (( r, x ), axis = 1 ). flatten () # Stack inputs into one large input v = np. tanh ( i @ w1 + b1 ) # Apply activation function to first layer v = v @ w2 + b2 # Multiply with weights and add bias for the second layer return v # Initialize weights for a network with hidden size is 16 w1 = np. random. normal ( size = ( N * 5, 16 )) # 3(#positions) + 2(#features) = 5 b1 = np. random. normal ( size = ( 16,)) w2 = np. random. normal ( size = ( 16,)) b2 = np. random. normal () Although our model is not trained, we are not concerned about. Since, we only want see if our model is affected by permutations, translations and rotations import scipy. spatial. transform as transform rotate = transform. Rotation. from_euler ( "x", 60, degrees = True ) # Rotate around x axis by 60 degrees permuted_R_i = np. copy ( R_i ) permuted_R_i [ 0 ], permuted_R_i [ 1 ] = R_i [ 1 ], R_i [ 0 ] # Swap the rows of R_i print ( "without change:", hidden_model ( R_i, X_i, w1, w2, b1, b2 )) print ( "after permutation:", hidden_model ( permuted_R_i, X_i, w1, w2, b1, b2 )) print ( "after translation:", hidden_model ( R_i + np. array ([ 3, 3, 3 ]), X_i, w1, w2, b1, b2 )) print ( "after rotation:", hidden_model ( rotate. apply ( R_i ), X_i, w1, w2, b1, b2 )) without change: 9. 945112980229641 after permutation: 9. 461406567572851 after translation: 5. 963826685170721 after rotation: 7. 191211524244547 As expected, our model is not invariant to any permutations, translations, or rotations. Let's fix them. Permutational Invariance In a molecular context, the arrangement or ordering of points in an input tensor holds no significance. Therefore, it is crucial to be cautious and avoid relying on this ordering. To ensure this, we adopt a strategy of solely performing atom-wise operations within the network to obtain atomic property predictions. When predicting molecular properties, we need to cumulatively combine these atomic predictions, such as using summation, to arrive at the desired result. This approach guarantees that the model does not depend on the arbitrary ordering of atoms within the input tensor. Let's fix permutation invariance problem exists in our hidden model. def hidden_model_perm ( r : np. ndarray, x : np. ndarray, w1 : np. ndarray, w2 : np. ndarray, b1 : np. ndarray, b2 : float ) -> np r """Computes the output of a 1-hidden layer neural network model with permutation invariance. Parameters ---------- r : np. ndarray Input array for position values. Shape: (num_atoms, num_positions) x : np. ndarray Input array for features. Shape: (num_atoms, num_features) w1 : np. ndarray Weight matrix for the first layer. Shape: (num_positions + num_features, hidden_size) w2 : np. ndarray Weight matrix for the second layer. Shape: (hidden_size, output_size) b1 : np. ndarray Bias vector for the first layer. Shape: (hidden_size,) b2 : float Bias value for the second layer. Returns ------- float Predicted energy of the molecule """ i = np. concatenate (( r, x ), axis = 1 ) # Stack inputs into one large input v = np. tanh ( i @ w1 + b1 ) # Apply activation function to first layer v = np. sum ( v, axis = 0 ) # Reduce the output by summing across the axis which gives permutational invariance v = v @ w2 + b2 # Multiply with weights and add bias for the second layer return v
# Initialize weights w1 = np. random. normal ( size = ( 5, 16 )) b1 = np. random. normal ( size = ( 16,)) w2 = np. random. normal ( size = ( 16,)) b2 = np. random. normal ()
In the original implementation, the model computes intermediate activations v for each input position separately and then concatenates them along the axis 0. By summing across axis 0 with (np. sum(v, axis=0)), the model effectively collapses all the intermediate activations into a single vector, regardless of the order of the input positions. This reduction operation allows the model to be permutation invariant because the final output is only dependent on the aggregated information from the intermediate activations and is not affected by the specific order of the input positions. Therefore, the model produces the same output for different permutations of the input positions, ensuring permutation invariance.
Now let's see if these changes affected our model's sensitivity to permutations.
print ( "without change:", hidden_model_perm ( R_i, X_i, w1, w2, b1, b2 )) print ( "after permutation:", hidden_model_perm ( permuted_R_i, X_i, w1, w2, b1, b2 )) print ( "after translation:", hidden_model_perm ( R_i + np. array ([ 3, 3, 3 ]), X_i, w1, w2, b1, b2 )) print ( "after rotation:", hidden_model_perm ( rotate. apply ( R_i ), X_i, w1, w2, b1, b2 ))
without change: -19. 370847873678944 after permutation: -19. 370847873678944 after translation: -67. 71502903638384 after rotation: 5. 311140035302996
Indeed! As anticipated, our model demonstrates invariance to permutations while remaining sensitive to translations or rotations.
Translational Invariance
To address the issue of translational variance in modeling molecules, one approach is to compute the distance matrix of the molecule. This distance matrix provides a representation that is invariant to translation. However, this approach introduces a challenge, as the distance features change from having three features per atom to N features per atom (one distance to every other atom). Consequently, we have introduced a dependency on the number of atoms in our distance features, making it easier to inadvertently break permutation invariance. To mitigate this issue, we can simply sum over the newly added axis, effectively collapsing the information into a single value. This summation ensures that the model remains invariant to permutations, restoring the desired permutation invariance property.
Let's fix the translation invariance problem that exists in our hidden model.
def hidden_model_permute_translate ( r : np. ndarray, x : np. ndarray, w1 : np. ndarray, w2 : np. ndarray, b1 : np. ndarray, b2 : float ) -> np. ndarray
r """Computes the output of a 1-hidden layer neural network model with permutation and translation invariance. Parameters ---------- r : np. ndarray Input array for position values. Shape: (num_atoms, num_positions) x : np. ndarray Input array for features. Shape: (num_atoms, num_features) w1 : np. ndarray Weight matrix for the first layer. Shape: (num_positions + num_features, hidden_size) w2 : np. ndarray Weight matrix for the second layer. Shape: (hidden_size, output_size) b1 : np. ndarray Bias vector for the first layer. Shape: (hidden_size,) b2 : float Bias value for the second layer. Returns ------- float Predicted energy of the molecule """ d = r - r [:, np. newaxis ] # Compute pairwise distances using broadcasting # Stack inputs into one large input of N x N x 5 # Concatenate doesn't broadcast, so we manually broadcast the Nx2 x matrix # into N x N x 2 i = np. concatenate (( d, np. broadcast_to ( x, ( d. shape [:-1 ] + x.
shape [-1 :]))), axis =-1 )
v = np. tanh ( i @ w1 + b1 ) # Apply activation function to first layer v = np. sum ( v, axis = ( 0, 1 )) # Reduce the output over both axes by summing v = v @ w2 + b2 # Multiply with weights and add bias for the second layer return v
To achieve translational invariance, the function calculates pairwise distances between the position values in the r array. This is done by subtracting r from r[:, np. newaxis], which broadcasts r along a new axis, enabling element-wise subtraction. The pairwise distance calculation is based on the fact that subtracting the positions r from each other effectively measures the distance or difference between them. By including the pairwise distances in the input, the model can learn and capture the relationship between the distances and the features. This allows the model to be invariant to translations, meaning that shifting the positions within each set while preserving their relative distances will result in the same output.
Now let's see if these changes affected our model's sensitivity to translations.
print ( "without change:", hidden_model_permute_translate ( R_i, X_i, w1, w2, b1, b2 )) print ( "after permutation:", hidden_model_permute_translate ( permuted_R_i, X_i, w1, w2, b1, b2 )) print ( "after translation:", hidden_model_permute_translate ( R_i + np. array ([ 3, 3, 3 ]), X_i, w1, w2, b1, b2 )) print ( "after rotation:", hidden_model_permute_translate ( rotate. apply ( R_i ), X_i, w1, w2, b1, b2 ))
without change: 193. 79734623037373 after permutation: 193. 79734623037385 after translation: 193. 79734623037368 after rotation: 188. 36773620787383
Yes! Our model is invariant to both permutations and translations but not to rotations.
Rotational Invariance
Atom-centered symmetry functions exhibit rotational invariance due to the invariance of the distance matrix. While this property is suitable for tasks where scalar values, such as energy, are predicted from molecules, it poses a challenge for problems that depend on directionality. In such cases, achieving rotational equivariance is desired, where the output of the network rotates in the same manner as the input. Examples of such problems include force prediction and molecular dynamics. To address this, we can convert the pairwise vectors into pairwise distances. To simplify the process, we utilize squared distances. This conversion allows us to incorporate directional information while maintaining simplicity. By considering the squared distances, we enable the network to capture and process the relevant geometric relationships between atoms, enabling rotational equivariance and facilitating accurate predictions for direction-dependent tasks.
def hidden_model_permute_trans_rotate ( r : np. ndarray, x : np. ndarray, w1 : np. ndarray, w2 : np. ndarray, b1 : np. ndarray, b2 : float ) -> np. ndarray
r """Computes the output of a 1-hidden layer neural network model with permutation, translation, and rotation invariance. Parameters ---------- r : np. ndarray Input array for position values. Shape: (num_atoms, num_positions) x : np. ndarray Input array for features. Shape: (num_atoms, num_features) w1 : np. ndarray Weight matrix for the first layer. Shape: (num_positions, hidden_size) w2 : np. ndarray Weight matrix for the second layer. Shape: (hidden_size, output_size) b1 : np. ndarray Bias vector for the first layer. Shape: (hidden_size,) b2 : float Bias value for the second layer. Returns ------- float Predicted energy of the molecule """ # Compute pairwise distances using broadcasting d = r - r [:, np.
Now let's see whether these changes affected our model's sensitivity to permutations, translations, and rotations.

print("without change:", hidden_model_permute_translate(R_i, X_i, w1, w2, b1, b2))
print("after permutation:", hidden_model_permute_translate(permuted_R_i, X_i, w1, w2, b1, b2))
print("after translation:", hidden_model_permute_translate(R_i + np.array([3, 3, 3]), X_i, w1, w2, b1, b2))
print("after rotation:", hidden_model_permute_translate(rotate.apply(R_i), X_i, w1, w2, b1, b2))

without change: 193.79734623037373
after permutation: 193.79734623037385
after translation: 193.79734623037368
after rotation: 188.36773620787383

Yes! Our model is now invariant to both permutations and translations, but not to rotations.

Rotational Invariance

Atom-centered symmetry functions are rotationally invariant because the interatomic distance matrix does not change when a molecule is rotated. This property is well suited to predicting scalar quantities such as energy, but it poses a challenge for problems that depend on directionality, such as force prediction and molecular dynamics; there, rotational equivariance is desired instead, meaning the output of the network rotates in the same way as the input. For our scalar energy model, rotational invariance is what we need, and we can obtain it by converting the pairwise difference vectors into pairwise distances. To keep the computation simple, we use squared distances. Because squared distances depend only on the relative geometry of the atoms and not on their orientation in space, the network can still capture the relevant geometric relationships between atoms while its output remains unchanged under rotations of the input.
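Before changing the model, here is a minimal sketch (the positions are made up; scipy's Rotation is used as earlier in this tutorial) showing that pairwise difference vectors change under a rotation while squared pairwise distances do not:

import numpy as np
from scipy.spatial.transform import Rotation

def pairwise(r):
    d = r - r[:, np.newaxis]        # (N, N, 3) pairwise difference vectors
    d2 = np.sum(d**2, axis=-1)      # (N, N) squared pairwise distances
    return d, d2

r_demo = np.random.normal(size=(4, 3))   # illustrative positions
rot = Rotation.random()                  # a random 3D rotation

d, d2 = pairwise(r_demo)
d_rot, d2_rot = pairwise(rot.apply(r_demo))

print(np.allclose(d, d_rot))    # False in general: difference vectors rotate with the molecule
print(np.allclose(d2, d2_rot))  # True: squared distances are unchanged by the rotation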
def hidden_model_permute_translate_rotate(r: np.ndarray, x: np.ndarray, w1: np.ndarray,
                                          w2: np.ndarray, b1: np.ndarray, b2: float) -> float:
    """Computes the output of a 1-hidden layer neural network model with permutation,
    translation, and rotation invariance.

    Parameters
    ----------
    r : np.ndarray
        Input array for position values. Shape: (num_atoms, num_positions)
    x : np.ndarray
        Input array for features. Shape: (num_atoms, num_features)
    w1 : np.ndarray
        Weight matrix for the first layer. Shape: (1 + num_features, hidden_size)
    w2 : np.ndarray
        Weight matrix for the second layer. Shape: (hidden_size, output_size)
    b1 : np.ndarray
        Bias vector for the first layer. Shape: (hidden_size,)
    b2 : float
        Bias value for the second layer.

    Returns
    -------
    float
        Predicted energy of the molecule
    """
    # Compute pairwise difference vectors using broadcasting
    d = r - r[:, np.newaxis]
    # Compute squared distances
    d2 = np.sum(d**2, axis=-1, keepdims=True)
    # Stack inputs into one large input of N x N x 3
    # Concatenate doesn't broadcast, so we manually broadcast the N x 2 feature matrix x
    # into N x N x 2
    i = np.concatenate((d2, np.broadcast_to(x, (d2.shape[:-1] + x.shape[-1:]))), axis=-1)
    v = np.tanh(i @ w1 + b1)  # Apply activation function to first layer
    # Reduce the output over both axes by summing
    v = np.sum(v, axis=(0, 1))
    v = v @ w2 + b2  # Multiply with weights and add bias for the second layer
    return v

# Initialize weights
w1 = np.random.normal(size=(3, 16))
b1 = np.random.normal(size=(16,))
w2 = np.random.normal(size=(16,))
b2 = np.random.normal()
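As a quick sanity check on the weight shapes, the sketch below (using made-up arrays with 4 atoms and 2 features per atom, matching this tutorial's setup) confirms that the concatenated input has a last dimension of 3, which is why w1 is initialized with shape (3, 16):

import numpy as np

r_demo = np.random.normal(size=(4, 3))   # 4 atoms, 3D positions (illustrative)
x_demo = np.random.normal(size=(4, 2))   # 2 features per atom (illustrative)

d2_demo = np.sum((r_demo - r_demo[:, np.newaxis])**2, axis=-1, keepdims=True)  # (4, 4, 1)
i_demo = np.concatenate(
    (d2_demo, np.broadcast_to(x_demo, d2_demo.shape[:-1] + x_demo.shape[-1:])), axis=-1
)

print(i_demo.shape)  # (4, 4, 3): one squared distance plus two features per atom pair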
The hidden_model_permute_translate_rotate function achieves rotational invariance by operating on the pairwise squared distances between atoms instead of the pairwise difference vectors themselves. Squared distances encode the relative geometry of the atoms, which is sufficient for predicting scalar properties, while keeping the calculation simple. Because they depend only on the magnitudes of the difference vectors and not on their directions, the network output remains unchanged when the input positions are rotated. The directional information discarded here is exactly what direction-dependent tasks such as force prediction and molecular dynamics require, which is why those problems call for rotationally equivariant models instead.

Now let's see whether these changes affected our model's sensitivity to rotations.

print("without change:", hidden_model_permute_translate_rotate(R_i, X_i, w1, w2, b1, b2))
print("after permutation:", hidden_model_permute_translate_rotate(permuted_R_i, X_i, w1, w2, b1, b2))
print("after translation:", hidden_model_permute_translate_rotate(R_i + np.array([3, 3, 3]), X_i, w1, w2, b1, b2))
print("after rotation:", hidden_model_permute_translate_rotate(rotate.apply(R_i), X_i, w1, w2, b1, b2))

without change: 585.1386319324105
after permutation: 585.1386319324106
after translation: 585.1386319324105
after rotation: 585.1386319324105

Yes! Our model is now invariant to permutations, translations, and rotations. With these changes, our model generalizes better while respecting the symmetries of the molecules.
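For convenience, the checks above can be wrapped into one helper. This is only an illustrative sketch (the helper name is not part of the tutorial's code); note that it applies the same permutation to the positions and to the per-atom features, which is the general statement of permutation invariance:

import numpy as np
from scipy.spatial.transform import Rotation

def check_invariances(model, r, x, w1, w2, b1, b2):
    """Illustrative helper: True if the model's output is unchanged by a consistent
    permutation, a uniform translation, and a random rotation of the positions."""
    ref = model(r, x, w1, w2, b1, b2)
    perm = np.random.permutation(len(r))
    ok_perm = np.allclose(ref, model(r[perm], x[perm], w1, w2, b1, b2))
    ok_trans = np.allclose(ref, model(r + np.array([3.0, 3.0, 3.0]), x, w1, w2, b1, b2))
    ok_rot = np.allclose(ref, model(Rotation.random().apply(r), x, w1, w2, b1, b2))
    return ok_perm and ok_trans and ok_rot

# Example usage (R_i and X_i are defined earlier in this tutorial):
# check_invariances(hidden_model_permute_translate_rotate, R_i, X_i, w1, w2, b1, b2)  # -> True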
References

Bronstein, M. M., Bruna, J., Cohen, T., & Veličković, P. (2021). Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges. arXiv, abs/2104.13478.

White, A. D. (2022). Deep learning for molecules and materials. Living Journal of Computational Molecular Science.

Geiger, M., & Smidt, T. E. (2022). e3nn: Euclidean Neural Networks. arXiv, abs/2207.09453.

Congratulations! Time to join the Community!

Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:

Star DeepChem on GitHub

This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.

Join the DeepChem Gitter

The DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!