% top-1 error rate. At the time, this was an enormous leap forward in performance at a task considered far beyond the capabilities of contemporary methods. This result revealed the potential of deep learning and kick-started the modern era of AI research.

The VGG network was also targeted at classification in the ImageNet task and achieved a considerably better performance of a 6.8% top-5 error rate and a 23.7% top-1 error rate. This network is similarly composed of a series of interspersed convolutional and max pooling layers, where the spatial size of the representation gradually decreases, but the number of channels increases. These are followed by three fully connected layers (figure 10.17). The VGG network was also trained using data augmentation, weight decay, and dropout.

Although there were various minor differences in the training regime, the most important change between AlexNet and VGG was the depth of the network. The latter used 19 hidden layers and 144 million parameters (problem 10.18). The networks in figures 10.16 and 10.17 are depicted at the same scale for comparison. For several years, performance on this task improved as the depth of the networks increased, and this is evidence that depth is important in neural networks.

This work is subject to a Creative Commons CC-BY-NC-ND license. (c) MIT Press.

Figure 10.17 VGG network (Simonyan & Zisserman, 2014) depicted at the same scale as AlexNet (see figure 10.16). This network consists of a series of convolutional layers and max pooling operations, in which the spatial scale of the representation gradually decreases, but the number of channels gradually increases. The hidden layer after the last convolutional operation is resized to a 1D vector, and three fully connected layers follow. The network outputs 1000 activations corresponding to the class labels, which are passed through a softmax function to create class probabilities.
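The pattern described above — spatial size shrinking while channel count grows — can be sketched as a small calculation. This is an illustration only: the stage widths (64, 128, 256, 512, 512) follow the standard VGG-16 configuration, which is an assumption not stated in this text, and the helper function name is invented for the example.

```python
# Sketch of how spatial size and channel count evolve through a
# VGG-style network: 3x3 convolutions with padding 1 preserve the
# spatial size, while each 2x2 max pool with stride 2 halves it.
# Stage channel counts here are the VGG-16 values (an assumption).

def vgg_style_shapes(input_size=224, stage_channels=(64, 128, 256, 512, 512)):
    """Return (spatial_size, channels) after each conv/pool stage."""
    shapes = []
    size = input_size
    for channels in stage_channels:
        size //= 2  # convolutions keep the size; pooling halves it
        shapes.append((size, channels))
    return shapes

print(vgg_style_shapes())
# spatial size shrinks 224 -> 112 -> 56 -> 28 -> 14 -> 7
# while channels grow 64 -> 128 -> 256 -> 512 -> 512
```

The final stage leaves a 7×7 representation with 512 channels, which is what gets flattened into the 1D vector feeding the fully connected layers.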
10.5.2 Object detection

In object detection, the goal is to identify and localize multiple objects within the image. An early method based on convolutional networks was you only look once, or YOLO for short. The input to the YOLO network is a 448×448 RGB image. This is passed through 24 convolutional layers that gradually decrease the representation size using max pooling operations while concurrently increasing the number of channels, similarly to the VGG network. The final convolutional layer is of size 7×7 and has 1024 channels. This is reshaped to a vector, and a fully connected layer maps it to 4096 values. One further fully connected layer maps this representation to the output.

The output values encode which class is present at each of a 7×7 grid of locations (figure 10.18a–b). For each location, the output values also encode a fixed number of bounding boxes. Five parameters define each box: the x- and y-positions of the center, the height and width of the box, and the confidence of the prediction (figure 10.18c). The confidence estimates the overlap between the predicted and ground truth bounding boxes.

The system is trained using momentum, weight decay, dropout, and data augmentation. Transfer learning is employed; the network is initially trained on the ImageNet classification task and is then fine-tuned for object detection. After the network is run, a heuristic process is used to remove rectangles with low confidence and to suppress predicted bounding boxes that correspond to the same object, so only the most confident one is retained.

Draft: please send errata to [email protected].

Figure 10.18 YOLO object detection. a) The input image is reshaped to 448×448 and divided into a regular 7×7 grid. b) The system predicts the most likely class at each grid cell. c) It also predicts two bounding boxes per cell, and a confidence value (represented by thickness of line).
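The size of the YOLO output layer follows directly from the encoding described above: each of the 7×7 grid cells carries class scores plus a fixed number of boxes with five parameters each. As a hedged illustration, the defaults below (20 classes, 2 boxes per cell) are the values from the original YOLO paper, not numbers stated in this text.

```python
# Output dimensionality of a YOLO-style detector: per grid cell, the
# network predicts class scores plus several boxes, each box defined
# by five parameters (x, y, width, height, confidence).
# Defaults (20 classes, 2 boxes) are assumptions from the YOLO paper.

def yolo_output_size(grid=7, num_classes=20, boxes_per_cell=2):
    params_per_box = 5  # x, y, w, h, confidence
    per_cell = num_classes + boxes_per_cell * params_per_box
    return grid * grid * per_cell

print(yolo_output_size())  # 7 * 7 * (20 + 2*5) = 1470
```

So the final fully connected layer maps the 4096-dimensional representation to a vector of 1470 values under these assumptions.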
d) During inference, the most likely bounding boxes are retained, and boxes with lower confidence values that belong to the same object are suppressed. Adapted from Redmon et al. (2016).

10.5.3 Semantic segmentation

The goal of semantic segmentation is to assign a label to each pixel according to the object that it belongs to, or no label if that pixel does not correspond to anything in the training database
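The suppression heuristic used at YOLO inference time (section 10.5.2 above) can be sketched as follows: discard low-confidence boxes, then greedily keep the most confident remaining box and suppress overlapping boxes, which are assumed to cover the same object. The threshold values here are illustrative choices, not values from the text.

```python
# Minimal non-maximum suppression sketch: filter by confidence, then
# keep boxes in decreasing confidence order, dropping any box that
# overlaps an already-kept box too much. Thresholds are illustrative.

def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, confidences, conf_thresh=0.25, iou_thresh=0.5):
    """Return indices of retained boxes, most confident first."""
    order = sorted((i for i, c in enumerate(confidences) if c >= conf_thresh),
                   key=lambda i: confidences[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

Note that the overlap measure (intersection-over-union) is the same quantity the confidence output is trained to estimate, which is why confidence is a sensible ranking key for this heuristic.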