Vishwas1 committed on
Commit
77a5b57
1 Parent(s): 5390123

Upload dataset_chunk_122.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_122.csv +2 -0
dataset_chunk_122.csv ADDED
@@ -0,0 +1,2 @@
+ text
+ "information over a 3×3 pixel area using fewer parameters. problem 11.8 the resnet-200 model (figure 11.8) contains 200 layers and was used for image classification on the imagenet database (figure 10.15). the architecture resembles alexnet and vgg but uses bottleneck residual blocks instead of vanilla convolutional layers. as with alexnet and vgg, these are periodically interspersed with decreases in spatial resolution and simultaneous increases in the number of channels. here, the resolution is decreased by downsampling using convolutions with stride two. the number of channels is increased either by appending zeros to the representation or by using an extra 1×1 convolution. at the start of the network is a 7×7 convolutional layer, followed by a downsampling operation. at the end, a fully connected layer maps the block to a vector of length 1000. this is passed through a softmax layer to generate class probabilities. the resnet-200 model achieved a remarkable 4.8% error rate for the correct class being in the top five and 20.1% for identifying the class correctly. this compared favorably with alexnet (16.4%, 38.1%) and vgg (6.8%, 23.7%) and was one of the first networks to exceed human performance (5.1% for being in the top five guesses). however, this model was conceived in 2016 and is far from state-of-the-art. at the time of writing, the best-performing model on this task has a 9.0% error for identifying the class correctly (see figure 10.21). this and all the other current top-performing models for image classification are now based on transformers (see chapter 12). figure 11.7 resnet blocks. a) a standard block in the resnet architecture contains a batch normalization operation, followed by an activation function, and a 3×3 convolutional layer. then, this sequence is repeated. b) a bottleneck resnet block still integrates information over a 3×3 region but uses fewer parameters. it contains three convolutions. the first 1×1 convolution reduces the number of channels. the second 3×3 convolution is applied to the smaller representation. a final 1×1 convolution increases the number of channels again so that it can be added back to the input. figure 11.8 resnet-200 model. a standard 7×7 convolutional layer with stride two is applied, followed by a maxpool operation. a series of bottleneck residual blocks follow (number in brackets is channels after first 1×1 convolution), with periodic downsampling and accompanying increases in the number of channels. the network concludes with average pooling across all spatial positions and a fully connected layer that maps to pre-softmax activations. figure 11.9 densenet. this architecture uses residual connections to concatenate the outputs of earlier layers to later ones. here, the three-channel input image is processed to form a 32-channel representation. the input image is concatenated to this to give a total of 35 channels. this combined representation is processed to create another 32-channel representation, and both earlier representations are concatenated to this to create a total of 67 channels and so on. 11.5.2 densenet residual blocks receive the output from the previous layer, modify it by passing it through some network layers, and add it back to the original input. an alternative is to concatenate the modified and original signals. this increases the representation size (in terms of channels for a convolutional network), but an optional subsequent linear transformation can map back to the original size (a 1×1 convolution for a convolutional network). this allows the model to add the representations together, take a weighted sum, or combine them in a more complex way. the densenet architecture uses concatenation so that the input to a layer comprises the concatenated outputs from all previous layers (figure 11.9). these are processed to create a new representation that is itself concatenated with the previous representation and passed to the next layer. this concatenation means there is a direct contribution from earlier layers to the output, so the loss surface behaves reasonably. in practice"
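The bottleneck residual block described in the chunk (figure 11.7b: a 1×1 convolution that reduces the channel count, a 3×3 convolution on the smaller representation, and a final 1×1 convolution that restores the channels before the addition) maps directly onto a few lines of code. Below is a minimal sketch; the choice of PyTorch, the specific channel counts, and the batchnorm/ReLU ordering are illustrative assumptions rather than anything specified by the dataset itself.

```python
# Minimal sketch of a pre-activation bottleneck residual block
# (batchnorm -> ReLU -> conv, as in the figure 11.7 description).
import torch
import torch.nn as nn


class BottleneckBlock(nn.Module):
    def __init__(self, channels: int, bottleneck_channels: int):
        super().__init__()
        self.layers = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, bottleneck_channels, kernel_size=1),  # 1x1: reduce channels
            nn.BatchNorm2d(bottleneck_channels),
            nn.ReLU(),
            nn.Conv2d(bottleneck_channels, bottleneck_channels,
                      kernel_size=3, padding=1),                      # 3x3 on the smaller representation
            nn.BatchNorm2d(bottleneck_channels),
            nn.ReLU(),
            nn.Conv2d(bottleneck_channels, channels, kernel_size=1),  # 1x1: restore channels
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: add the processed signal back to the input.
        return x + self.layers(x)


x = torch.randn(1, 256, 56, 56)  # one 256-channel feature map (sizes are arbitrary)
block = BottleneckBlock(channels=256, bottleneck_channels=64)
print(block(x).shape)            # torch.Size([1, 256, 56, 56])
```

Because the two 1×1 convolutions operate on far fewer channels for the 3×3 stage, the block integrates information over the same 3×3 region with fewer parameters than a plain 3×3 convolution on the full 256 channels, which is the point the excerpt makes.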
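The DenseNet passage (each layer's output is concatenated onto everything that came before, growing 3 → 35 → 67 channels in the figure 11.9 description) can be sketched the same way. Again this is only an illustrative PyTorch sketch: the growth of 32 channels per layer follows the figure caption, while the layer structure and normalization choices are assumptions.

```python
# Minimal sketch of DenseNet-style concatenation: the input to each layer is
# the concatenation of the original input and all earlier layer outputs.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth: int = 32, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(
                nn.Sequential(
                    nn.BatchNorm2d(channels),
                    nn.ReLU(),
                    nn.Conv2d(channels, growth, kernel_size=3, padding=1),
                )
            )
            channels += growth  # each layer's output is concatenated onto the running representation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            new_features = layer(x)
            x = torch.cat([x, new_features], dim=1)  # concatenate instead of add
        return x


x = torch.randn(1, 3, 64, 64)   # three-channel input image
print(DenseBlock(3)(x).shape)   # torch.Size([1, 99, 64, 64]): 3 -> 35 -> 67 -> 99 channels
```

An optional 1×1 convolution after the block could map the concatenated representation back to the original channel count, which is the "subsequent linear transformation" the excerpt mentions.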