dl_dataset_1 / dataset_chunk_100.csv
". in practice, this takes the form of one sgd-like update within another. keskar this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 159 et al. (2017) showed that sgd finds wider minima as the batch size is reduced. this may be because of the batch variance term that results from implicit regularization by sgd. ishida et al. (2020) use a technique named flooding, in which they intentionally prevent the traininglossfrombecomingzero. thisencouragesthesolutiontoperformarandomwalkover the loss landscape and drift into a flatter area with better generalization. bayesian approaches: for some models, including the simplified neural network model in figure 9.11, the bayesian predictive distribution can be computed in closed form (see bishop, 2006; prince, 2012). for neural networks, the posterior distribution over the parameters can- not be represented in closed form and must be approximated. the two main approaches are variationalbayes(hinton&vancamp,1993;mackay,1995;barber&bishop,1997;blundell et al., 2015), in which the posterior is approximated by a simpler tractable distribution, and markovchainmontecarlo(mcmc)methods,whichapproximatethedistributionbydrawing a set of samples (neal, 1995; welling & teh, 2011; chen et al., 2014; ma et al., 2015; li et al., 2016a). the generation of samples can be integrated into sgd, and this is known as stochas- tic gradient mcmc (see ma et al., 2015). it has recently been discovered that “cooling” the posteriordistributionovertheparameters(makingitsharper)improvespredictionsfromthese models(wenzeletal.,2020a),butthisisnotcurrentlyfullyunderstood(seenocietal.,2021). transfer learning: transfer learning for visual tasks works extremely well (sharif razavian etal.,2014)andhassupportedrapidprogressincomputervision,includingtheoriginalalexnet results(krizhevskyetal.,2012). transferlearninghasalsoimpactednaturallanguageprocess- ing(nlp),wheremanymodelsarebasedonpre-trainedfeaturesfromthebertmodel(devlin et al., 2019). more information can be found in zhuang et al. (2020) and yang et al. (2020b). self-supervised learning: self-supervised learning techniques for images have included in- paintingmaskedimageregions(pathaketal.,2016),predictingtherelativepositionofpatches in an image (doersch et al., 2015), re-arranging permuted image tiles back into their original configuration (noroozi & favaro, 2016), colorizing grayscale images (zhang et al., 2016b), and transforming rotated images back to their original orientation (gidaris et al., 2018). in sim- clr (chen et al., 2020c), a network is learned that maps versions of the same image that have been photometrically and geometrically transformed to the same representation while re- pelling versions of different images, with the goal of becoming indifferent to irrelevant image transformations. jing & tian (2020) present a survey of self-supervised learning in images. self-supervised learning in nlp can be based on predicting masked words(devlin et al., 2019), predicting the next word in a sentence (radford et al., 2019; brown et al., 2020), or predicting whethertwosentencesfollowoneanother(devlinetal.,2019). inautomaticspeechrecognition, the wav2vec model (schneider et al., 2019) aims to distinguish an original audio sample from one where 10ms of audio has been swapped out from elsewhere in the clip. self-supervision has also been applied to graph neural networks (chapter 13). 
Transfer learning: Transfer learning for visual tasks works extremely well (Sharif Razavian et al., 2014) and has supported rapid progress in computer vision, including the original AlexNet results (Krizhevsky et al., 2012). Transfer learning has also impacted natural language processing (NLP), where many models are based on pre-trained features from the BERT model (Devlin et al., 2019). More information can be found in Zhuang et al. (2020) and Yang et al. (2020b).

Self-supervised learning: Self-supervised learning techniques for images have included inpainting masked image regions (Pathak et al., 2016), predicting the relative position of patches in an image (Doersch et al., 2015), re-arranging permuted image tiles back into their original configuration (Noroozi & Favaro, 2016), colorizing grayscale images (Zhang et al., 2016b), and transforming rotated images back to their original orientation (Gidaris et al., 2018). In SimCLR (Chen et al., 2020c), a network is learned that maps photometrically and geometrically transformed versions of the same image to the same representation while repelling versions of different images, with the goal of becoming indifferent to irrelevant image transformations. Jing & Tian (2020) present a survey of self-supervised learning in images. Self-supervised learning in NLP can be based on predicting masked words (Devlin et al., 2019), predicting the next word in a sentence (Radford et al., 2019; Brown et al., 2020), or predicting whether two sentences follow one another (Devlin et al., 2019). In automatic speech recognition, the wav2vec model (Schneider et al., 2019) aims to distinguish an original audio sample from one where 10 ms of audio has been swapped out from elsewhere in the clip. Self-supervision has also been applied to graph neural networks (chapter 13). Tasks include recovering masked features (You et al., 2020) and recovering the adjacency structure of the graph (Kipf & Welling, 2016). Liu et al. (2023a) review self-supervised learning for graph models.

Data augmentation: Data augmentation for images dates back to at least LeCun et al. (1998) and contributed to the success of AlexNet (Krizhevsky et al., 2012), in which the dataset was increased by a factor of 2048. Image augmentation approaches include geometric transformations, [...]
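The sentence above is cut off at this chunk boundary, but the geometric transformations it mentions are typically implemented as random crops, flips, rotations, and rescalings applied on the fly to each training image. A sketch using torchvision's transforms; the specific transforms and parameter values here are illustrative choices, not taken from the text:

```python
import torchvision.transforms as T

# Illustrative geometric augmentation pipeline applied on the fly during training.
train_transform = T.Compose([
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),  # random crop, then rescale to 224x224
    T.RandomHorizontalFlip(p=0.5),               # mirror the image half the time
    T.RandomRotation(degrees=10),                # small random rotation
    T.ToTensor(),
])

# e.g. torchvision.datasets.ImageFolder("path/to/train", transform=train_transform)
```

Each epoch then sees a different randomly transformed version of every image, which is how a dataset can be effectively enlarged by a large factor without storing extra images.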