text "considers unsupervised learning models. 1.2 unsupervised learning constructing a model from input data without corresponding output labels is termed unsupervised learning; theabsenceofoutputlabelsmeanstherecanbeno“supervision.” rather than learning a mapping from input to output, the goal is to describe or under- stand the structure of the data. as was the case for supervised learning, the data may have very different characteristics; it may be discrete or continuous, low-dimensional or high-dimensional, and of constant or variable length. 1.2.1 generative models this book focuses on generative unsupervised models, which learn to synthesize new data examples that are statistically indistinguishable from the training data. some generativemodelsexplicitlydescribetheprobabilitydistributionovertheinputdataand herenewexamplesaregeneratedbysamplingfromthisdistribution. othersmerelylearn a mechanism to generate new examples without explicitly describing their distribution. state-of-the-art generative models can synthesize examples that are extremely plau- sible but distinct from the training examples. they have been particularly successful at generating images (figure 1.5) and text (figure 1.6). they can also synthesize data under the constraint that some outputs are predetermined (termed conditional genera- tion). examples include image inpainting (figure 1.7) and text completion (figure 1.8). indeed, modern generative models for text are so powerful that they can appear intel- ligent. given a body of text followed by a question, the model can often “fill in” the missing answer by generating the most likely completion of the document. however, in reality, the model only knows about the statistics of language and does not understand the significance of its answers. draft: please send errata to udlbookmail@gmail.com.8 1 introduction figure 1.5 generative models for images. left: two images were generated from a model trained on pictures of cats. these are not real cats, but samples from a probabilitymodel. right: twoimagesgeneratedfromamodeltrainedonimages of buildings. adapted from karras et al. (2020b). themoonhadrisenbythetimeireachedtheedgeoftheforest,andthelightthatfilteredthroughthe treeswassilverandcold. ishivered, thoughiwasnotcold, andquickenedmypace. ihadneverbeen so far from the village before, and i was not sure what to expect. i had been walking for hours, and i was tired and hungry. i had left in such a hurry that i had not thought to pack any food, and i had notthoughttobringaweapon. iwasunarmedandaloneinastrangeplace,andididnotknowwhat iwasdoing. ihadbeenwalkingforsolongthatihadlostallsenseoftime,andihadnoideahowfarihadcome. i only knew that i had to keep going. i had to find her. i was getting close. i could feel it. she was nearby,andshewasintrouble. ihadtofindherandhelpher,beforeitwastoolate. figure 1.6 short story synthesized from a generative model of text data. the model describes a probability distribution that assigns a probability to every output string. sampling from the model creates strings that follow the statistics of the training data (here, short stories) but have never been seen before. figure 1.7 inpainting. in the original image (left), the boy is obscured by metal cables. theseundesirableregions(center)areremovedandthegenerativemodel synthesizes a new image (right) under the constraint that the remaining pixels must stay the same. adapted from saharia et al. (2022a). this work is subject to a creative commons cc-by-nc-nd license. 
I was a little nervous before my first lecture at the University of Bath. It seemed like there were hundreds of students, and they looked intimidating. I stepped up to the lectern and was about to speak when something bizarre happened.

Suddenly, the room was filled with a deafening noise, like a giant roar. It was so loud that I couldn’t hear anything else, and I had to cover my ears. I could see the students looking around, confused and frightened. Then, as quickly as it had started, the noise stopped and the
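The passage above is the text-completion example referenced as figure 1.8: the opening sentences are fixed, and the model synthesizes a plausible continuation. The sketch below is a deliberately tiny stand-in for such a model (the corpus, the function name, and the bigram assumption are all illustrative, not the book's model): a word-level bigram model counts which word follows which in the training text, and a completion is produced by repeatedly sampling the next word in proportion to those counts. Real models condition on far longer contexts, but like this one, they capture only the statistics of language.

    # Minimal word-level bigram language model: a toy stand-in for the
    # autoregressive text models discussed above. Pure Python, no dependencies.
    import random
    from collections import defaultdict

    # Illustrative training corpus in the style of figure 1.6.
    corpus = (
        "i had to find her . i had to keep going . "
        "i was tired and hungry . i was alone and i was not sure what to expect ."
    )

    # "Training": count how often each word follows each other word.
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

    def complete(prompt, n_words=10, seed=0):
        """Continue the prompt by sampling each next word in proportion
        to how often it followed the current word in the training text."""
        rng = random.Random(seed)
        out = prompt.split()
        for _ in range(n_words):
            followers = counts.get(out[-1])
            if not followers:  # no continuation of this word seen in training
                break
            choices, weights = zip(*followers.items())
            out.append(rng.choices(choices, weights=weights)[0])
        return " ".join(out)

    print(complete("i was"))

The model will happily extend any prompt whose last word appeared in the corpus, yet it has no notion of meaning; this is the same sense in which the large models described above know the statistics of language without understanding the significance of what they generate.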