# DETR (End-to-End Object Detection) model with ResNet-50 backbone

DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).

Disclaimer: The team releasing DETR did not write a model card for this model, so this model card has been written by the Hugging Face team.

The DETR model is an encoder-decoder transformer with a convolutional backbone.

First, an image is sent through a CNN backbone, outputting a lower-resolution feature map, typically of shape (1, 2048, height/32, width/32). This is then projected to match the hidden dimension of the Transformer, which is 256 by default, using an nn.Conv2d layer. Next, the feature map is flattened and transposed to obtain a tensor of shape (batch_size, seq_len, d_model) = (1, width/32 * height/32, 256).
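
As an illustration, here is a minimal PyTorch sketch of this backbone-and-projection step (a sketch only, not the actual DETR implementation; the torchvision ResNet-50 and the 800x1216 input resolution are assumptions for the example):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# CNN backbone: a ResNet-50 with its average-pooling and classification head
# removed, so it outputs a (batch_size, 2048, height/32, width/32) feature map.
backbone = nn.Sequential(*list(resnet50().children())[:-2])
# 1x1 convolution projecting the 2048 channels down to the Transformer's d_model = 256.
projection = nn.Conv2d(2048, 256, kernel_size=1)

pixel_values = torch.randn(1, 3, 800, 1216)      # a batch with a single image
features = backbone(pixel_values)                # (1, 2048, 25, 38)
projected = projection(features)                 # (1, 256, 25, 38)
# Flatten the spatial dimensions and transpose: (batch_size, seq_len, d_model).
sequence = projected.flatten(2).transpose(1, 2)  # (1, 25 * 38, 256) = (1, 950, 256)
```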

This is sent through the encoder, outputting encoder_hidden_states of the same shape. Next, so-called object queries are sent through the decoder: a tensor of shape (batch_size, num_queries, d_model), with num_queries typically set to 100, initialized with zeros. Each object query looks for a particular object in the image. The decoder updates these object queries through multiple self-attention and encoder-decoder attention layers to output decoder_hidden_states of the same shape: (batch_size, num_queries, d_model). Two heads are then added on top for object detection: a linear layer that classifies each object query as one of the objects or "no object", and an MLP that predicts a bounding box for each query. The number of queries therefore determines the maximum number of objects the model can detect in an image.
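
Continuing the sketch above, the decoder side could look roughly as follows in plain PyTorch (the real model also adds positional encodings and learned query embeddings, which are omitted here; the class count is an assumption):

```python
import torch
import torch.nn as nn

d_model, num_queries, num_classes = 256, 100, 91  # 91 classes is an assumption (COCO-style)

decoder_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)

# Two heads on top of the decoder outputs.
class_head = nn.Linear(d_model, num_classes + 1)  # +1 for the "no object" class
bbox_head = nn.Sequential(                        # MLP predicting (center_x, center_y, width, height)
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, 4),
)

encoder_hidden_states = torch.randn(1, 950, d_model)   # encoder output from the previous step
object_queries = torch.zeros(1, num_queries, d_model)  # initialized with zeros
decoder_hidden_states = decoder(object_queries, encoder_hidden_states)  # (1, 100, 256)

logits = class_head(decoder_hidden_states)          # (1, 100, num_classes + 1)
boxes = bbox_head(decoder_hidden_states).sigmoid()  # (1, 100, 4), normalized to [0, 1]
```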

The model is trained using a "bipartite matching loss": the predicted classes and bounding boxes of each of the N = 100 object queries are compared to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, the remaining 96 annotations simply have "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and L1 regression loss (for the bounding boxes) are used to optimize the parameters of the model.
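
A toy sketch of the matching step, using SciPy's implementation of the Hungarian algorithm (the cost below is deliberately simplified to an L1 box distance minus the class probability; the actual matching cost in DETR has additional terms):

```python
import torch
from scipy.optimize import linear_sum_assignment

num_queries = 100
pred_boxes = torch.rand(num_queries, 4)              # predicted boxes for one image
pred_probs = torch.rand(num_queries, 5).softmax(-1)  # class probabilities (toy 5-class setup)
gt_boxes = torch.rand(4, 4)                          # 4 ground-truth objects
gt_labels = torch.tensor([1, 2, 2, 3])

# cost[i, j] = cost of assigning query i to ground-truth object j:
# L1 distance between the boxes, minus the probability of the correct class.
cost = torch.cdist(pred_boxes, gt_boxes, p=1) - pred_probs[:, gt_labels]
query_idx, gt_idx = linear_sum_assignment(cost.numpy())
# query_idx[k] is matched to gt_idx[k]; every unmatched query is supervised as "no object".
```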

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
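
As a sketch of how such a checkpoint can be used with the Transformers library (assuming the facebook/detr-resnet-50 checkpoint; other DETR checkpoints from the hub work the same way):

```python
import requests
import torch
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Convert outputs (normalized boxes, logits) to COCO-style detections,
# keeping only detections with confidence above 0.9.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.9
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```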