text "search through the family of possible equations (possible cyan curves) relating input to output to find the one that describes the training data most accurately. itfollowsthatthemodelsinfigure1.2requirelabeledinput/outputpairsfortraining. for example, the music classification model would require a large number of audio clips where a human expert had identified the genre of each. these input/output pairs take theroleofateacherorsupervisorforthetrainingprocess,andthisgivesrisetotheterm supervised learning. 1.1.4 deep neural networks thisbookconcernsdeepneuralnetworks,whichareaparticularlyusefultypeofmachine learning model. they are equations that can represent an extremely broad family of relationships between input and output, and where it is particularly easy to search through this family to find the relationship that describes the training data. deep neural networks can process inputs that are very large, of variable length, and contain various kinds of internal structures. they can output single real numbers (regression),multiplenumbers(multivariateregression),orprobabilitiesovertwoormore classes (binary and multiclass classification, respectively). as we shall see in the next section, their outputs may also be very large, of variable length, and contain internal structure. itisprobablyhardtoimagineequationswiththeseproperties,andthereader should endeavor to suspend disbelief for now. 1.1.5 structured outputs figure1.4adepictsamultivariatebinaryclassificationmodelforsemanticsegmentation. here, every pixel of an input image is assigned a binary label that indicates whether it belongs to a cow or the background. figure 1.4b shows a multivariate regression model where the input is an image of a street scene and the output is the depth at each pixel. in both cases, the output is high-dimensional and structured. however, this structure is closely tied to the input, and this can be exploited; if a pixel is labeled as “cow,” then a neighbor with a similar rgb value probably has the same label. figures 1.4c–e depict three models where the output has a complex structure that is not so closely tied to the input. figure 1.4c shows a model where the input is an audio file and the output is the transcribed words from that file. figure 1.4d is a translation draft: please send errata to udlbookmail@gmail.com.6 1 introduction figure 1.4 supervised learning tasks with structured outputs. a) this semantic segmentation model maps an rgb image to a binary image indicating whether each pixel belongs to the background or a cow (adapted from noh et al., 2015). b) this monocular depth estimation model maps an rgb image to an output image where each pixel represents the depth (adapted from cordts et al., 2016). c) this audio transcription model maps an audio sample to a transcription of the spoken words in the audio. d) this translation model maps an english text stringtoitsfrenchtranslation. e)thisimagesynthesismodelmapsacaptionto animage(examplefromhttps://openai.com/dall-e-2/). ineachcase,theoutput has a complex internal structure or grammar. in some cases, many outputs are compatible with the input. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.1.2 unsupervised learning 7 modelinwhichtheinputisabodyoftextinenglish,andtheoutputcontainsthefrench translation. figure 1.4e depicts a very challenging task in which the input is descriptive text, and the model must produce an image that matches this description. 
In principle, the latter three tasks can be tackled in the standard supervised learning framework, but they are more difficult for two reasons. First, the output may genuinely be ambiguous; there are multiple valid translations from an English sentence to a French one and multiple images that are compatible with any caption. Second, the output contains considerable structure; not all strings of words make valid English and French sentences, and not all collections of RGB values make plausible images. In addition to learning the mapping, we also have to respect the “grammar” of the output.

Fortunately, this “grammar” can be learned without the need for output labels. For example, we can learn how to form valid English sentences by learning the statistics of a large corpus of text data. This provides a connection with the next section of the book, which discusses unsupervised learning.
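As a minimal illustration of learning such statistics without output labels (this tiny sketch and its toy corpus are not from the book), one can simply count which words follow which in raw text; these bigram counts already capture a crude version of the output “grammar”:

    from collections import Counter, defaultdict

    # A tiny unlabeled "corpus"; illustrative sentences only.
    corpus = [
        "the cow eats the grass",
        "the cow stands in the field",
        "the grass grows in the field",
    ]

    # Count how often each word follows another. No output labels are
    # needed; the structure comes from co-occurrence in the raw text.
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1

    # Probability of the next word given the previous one.
    def next_word_probs(prev):
        total = sum(counts[prev].values())
        return {word: c / total for word, c in counts[prev].items()}

    print(next_word_probs("cow"))  # {'eats': 0.5, 'stands': 0.5}

Real systems use far richer models of text statistics than bigram counts, but the principle is the same: the regularities of valid sentences can be extracted from unlabeled data alone.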