Understanding Deep Learning
Simon J.D. Prince
December 24, 2023

The most recent version of this document can be found at http://udlbook.com.

Copyright in this work has been licensed exclusively to The MIT Press, https://mitpress.mit.edu, which will be releasing the final version to the public in 2024. All inquiries regarding rights should be addressed to The MIT Press, Rights and Permissions Department.

This work is subject to a Creative Commons CC-BY-NC-ND license.

I would really appreciate help improving this document. No detail too small! Please mail suggestions, factual inaccuracies, ambiguities, questions, and errata to udlbookmail@gmail.com.

This book is dedicated to Blair, Calvert, Coppola, Ellison, Faulkner, Kerpatenko, Morris, Robinson, Sträussler, Wallace, Waymon, Wojnarowicz, and all the others whose work is even more important and interesting than deep learning.

Contents

Preface  ix
Acknowledgements  xi

1 Introduction  1
  1.1 Supervised learning  1
  1.2 Unsupervised learning  7
  1.3 Reinforcement learning  11
  1.4 Ethics  12
  1.5 Structure of book  15
  1.6 Other books  15
  1.7 How to read this book  16

2 Supervised Learning  17
  2.1 Supervised learning overview  17
  2.2 Linear regression example  18
  2.3 Summary  22

3 Shallow Neural Networks  25
  3.1 Neural network example  25
  3.2 Universal approximation theorem  29
  3.3 Multivariate inputs and outputs  30
  3.4 Shallow neural networks: general case  33
  3.5 Terminology  35
  3.6 Summary  36

4 Deep Neural Networks  41
  4.1 Composing neural networks  41
  4.2 From composing networks to deep networks  43
  4.3 Deep neural networks  45
  4.4 Matrix notation  48
  4.5 Shallow vs. deep neural networks  49
  4.