_id
stringlengths 36
36
| text
stringlengths 5
665k
| marker
stringlengths 3
6
| marker_offsets
sequence | label
stringlengths 28
32
|
---|---|---|---|---|
4ddf79b3-fcfc-4730-a50a-8a67c1cbeeb9 | We validate our approach, MODEST (Mobile Object Detection with Ephemerality and Self-Training) on the Lyft Level 5 Perception Dataset [1]} and the nuScenes Dataset [2]} with various types of detectors [3]}, [4]}, [5]}, [6]}. We demonstrate that MODEST yields remarkably accurate mobile object detectors, comparable to their supervised counterparts.
Concretely, our contributions are three-fold:
| [2] | [
[
164,
167
]
] | https://openalex.org/W3035574168 |
58439ca8-5b5b-4830-ab80-cc8b7bff123a | We validate our approach, MODEST (Mobile Object Detection with Ephemerality and Self-Training) on the Lyft Level 5 Perception Dataset [1]} and the nuScenes Dataset [2]} with various types of detectors [3]}, [4]}, [5]}, [6]}. We demonstrate that MODEST yields remarkably accurate mobile object detectors, comparable to their supervised counterparts.
Concretely, our contributions are three-fold:
| [3] | [
[
201,
204
]
] | https://openalex.org/W2949708697 |
7bdf97ba-6df9-4dd2-9558-46beedefdcfb | We validate our approach, MODEST (Mobile Object Detection with Ephemerality and Self-Training) on the Lyft Level 5 Perception Dataset [1]} and the nuScenes Dataset [2]} with various types of detectors [3]}, [4]}, [5]}, [6]}. We demonstrate that MODEST yields remarkably accurate mobile object detectors, comparable to their supervised counterparts.
Concretely, our contributions are three-fold:
| [5] | [
[
213,
216
]
] | https://openalex.org/W2963727135 |
15729042-80fe-44d2-99bf-5bf143654f06 | We validate our approach, MODEST (Mobile Object Detection with Ephemerality and Self-Training) on the Lyft Level 5 Perception Dataset [1]} and the nuScenes Dataset [2]} with various types of detectors [3]}, [4]}, [5]}, [6]}. We demonstrate that MODEST yields remarkably accurate mobile object detectors, comparable to their supervised counterparts.
Concretely, our contributions are three-fold:
| [6] | [
[
219,
222
]
] | https://openalex.org/W2897529137 |
deb1491b-509e-4a06-b024-3412e48093e2 | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [1] | [
[
225,
228
]
] | https://openalex.org/W2560609797 |
290c4146-dd84-43dc-8b20-d18ebafc466c | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [2] | [
[
231,
234
]
] | https://openalex.org/W2963121255 |
afa5aa97-7c96-4c38-9671-72867b97113a | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [3] | [
[
237,
240
]
] | https://openalex.org/W2964062501 |
2dd7949d-2f09-4b0f-87b3-ddad673b4318 | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [4] | [
[
243,
246
]
] | https://openalex.org/W2949708697 |
af598327-b522-474c-81f1-8abca08c205c | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [6] | [
[
339,
342
]
] | https://openalex.org/W2963727135 |
43a2ce96-2d3d-424e-9fdd-1a64b1e4a2ac | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [7] | [
[
345,
348
]
] | https://openalex.org/W3031752193 |
8d15e1e7-ae65-446b-9b4b-ad7a5b0b8d20 | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [8] | [
[
351,
354
]
] | https://openalex.org/W2798965597 |
b05f478d-cb02-4ff9-9b72-9c20119c2bb4 | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [9] | [
[
357,
360
]
] | https://openalex.org/W2897529137 |
be69b078-04db-4920-a697-3f68ba5a86ca | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [10] | [
[
363,
367
]
] | https://openalex.org/W3034314779 |
7cf0666a-0826-44e0-8de5-91a7f0a0e979 | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [11] | [
[
370,
374
]
] | https://openalex.org/W3108486966 |
c017a118-822a-4a63-bd9d-edfa22da1905 | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [13] | [
[
384,
388
]
] | https://openalex.org/W2555618208 |
b68be34f-ec6b-46be-9903-ff17d45aea39 | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [14] | [
[
626,
630
]
] | https://openalex.org/W2150066425 |
1597a5eb-3b90-42fe-bbb8-57468ae2aec0 | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [15] | [
[
633,
637
]
] | https://openalex.org/W2115579991 |
cde76ff3-1388-438a-916e-f8e36d00acbb | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [17] | [
[
647,
651
]
] | https://openalex.org/W3035574168 |
378886c3-6ecc-4622-abc9-8871bb18e481 | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [18] | [
[
654,
658
]
] | https://openalex.org/W2955189650 |
aa0a60cf-ecae-4d98-a70b-77a922e9d22c | 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional neural architectures [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. Regardless of the architectures, they are trained using supervision and their performances hinges directly on the training dataset. However, the limited variety of objects and driving conditions in existing autonomous driving datasets [14]}, [15]}, [16]}, [17]}, [18]} impedes the generalizability of the resulting detectors [19]}.
| [19] | [
[
716,
720
]
] | https://openalex.org/W3034975685 |
e5e70a45-16d6-4838-8af0-4369d30eac00 | Unsupervised Object Discovery in 2D/3D. Our work follows prior work on discovering objects both from 2D images as well as from 3D data.
A first step in object discovery is to identify candidate objects, or “proposals” from a single scene/image.
For 2D images, this is typically done by segmenting the image using appearance cues[1]}, [2]}, [3]}, [4]}, but color variations and perspective effects make this difficult.
Tian [5]} exploits the correspondence between images and 3D point clouds to detect objects in 2D.
In 3D scenes, one can use 3D information such as surface normals [6]}, [7]}, [2]}, [9]}, [10]}, [11]}, [12]}, [13]}.
One can also use temporal changes such as motion [14]}, [15]}, [16]}, [1]}, [9]}.
Our work combines effective 3D information with cues from changes in the scene over time to detect mobile objects[10]}, [11]}, [21]}. In particular, similar to our approach, Herbst et al.[10]}, [11]} reconstruct the same scene at various times and carve out dissimilar regions as mobile objects.
We use the analogous idea of ephemerality as proposed by Barnes et al.[24]}.
We show in our work that this idea yields a surprisingly accurate set of initial objects.
In addition, we also leverage other common-sense rules such as locations of the objects (e.g. objects should stay on the ground) [25]}, [26]}, [27]}, [28]} or shapes of an object (e.g. objects should be compact) [12]}, [27]}, [28]}.
However, crucially, we do not just stop at this proposal stage.
Instead, we use these seed labels to train an object detector through multiple rounds of self-training.
This effectively identifies objects consistent across multiple scenes.
While previous work has attempted to use this consistency cue[3]}, [4]}, [34]}, [6]}, [7]}, [37]}, [38]} (including co-segmentation[39]}, [40]}, [41]}), prior work typically uses clustering to accomplish this.
In contrast, we demonstrate that neural network training and self-training provides a very strong signal and substantially improves the quality of the proposals or seed labels.
| [3] | [
[
340,
343
],
[
1712,
1715
]
] | https://openalex.org/W2979654309 |
6f935b40-ccfc-4b4a-82b7-c3ffbbd93fad | Unsupervised Object Discovery in 2D/3D. Our work follows prior work on discovering objects both from 2D images as well as from 3D data.
A first step in object discovery is to identify candidate objects, or “proposals” from a single scene/image.
For 2D images, this is typically done by segmenting the image using appearance cues[1]}, [2]}, [3]}, [4]}, but color variations and perspective effects make this difficult.
Tian [5]} exploits the correspondence between images and 3D point clouds to detect objects in 2D.
In 3D scenes, one can use 3D information such as surface normals [6]}, [7]}, [2]}, [9]}, [10]}, [11]}, [12]}, [13]}.
One can also use temporal changes such as motion [14]}, [15]}, [16]}, [1]}, [9]}.
Our work combines effective 3D information with cues from changes in the scene over time to detect mobile objects[10]}, [11]}, [21]}. In particular, similar to our approach, Herbst et al.[10]}, [11]} reconstruct the same scene at various times and carve out dissimilar regions as mobile objects.
We use the analogous idea of ephemerality as proposed by Barnes et al.[24]}.
We show in our work that this idea yields a surprisingly accurate set of initial objects.
In addition, we also leverage other common-sense rules such as locations of the objects (e.g. objects should stay on the ground) [25]}, [26]}, [27]}, [28]} or shapes of an object (e.g. objects should be compact) [12]}, [27]}, [28]}.
However, crucially, we do not just stop at this proposal stage.
Instead, we use these seed labels to train an object detector through multiple rounds of self-training.
This effectively identifies objects consistent across multiple scenes.
While previous work has attempted to use this consistency cue[3]}, [4]}, [34]}, [6]}, [7]}, [37]}, [38]} (including co-segmentation[39]}, [40]}, [41]}), prior work typically uses clustering to accomplish this.
In contrast, we demonstrate that neural network training and self-training provides a very strong signal and substantially improves the quality of the proposals or seed labels.
| [4] | [
[
346,
349
],
[
1718,
1721
]
] | https://openalex.org/W1919709169 |
48938c3e-838c-4ff0-9732-d22435682279 | Unsupervised Object Discovery in 2D/3D. Our work follows prior work on discovering objects both from 2D images as well as from 3D data.
A first step in object discovery is to identify candidate objects, or “proposals” from a single scene/image.
For 2D images, this is typically done by segmenting the image using appearance cues[1]}, [2]}, [3]}, [4]}, but color variations and perspective effects make this difficult.
Tian [5]} exploits the correspondence between images and 3D point clouds to detect objects in 2D.
In 3D scenes, one can use 3D information such as surface normals [6]}, [7]}, [2]}, [9]}, [10]}, [11]}, [12]}, [13]}.
One can also use temporal changes such as motion [14]}, [15]}, [16]}, [1]}, [9]}.
Our work combines effective 3D information with cues from changes in the scene over time to detect mobile objects[10]}, [11]}, [21]}. In particular, similar to our approach, Herbst et al.[10]}, [11]} reconstruct the same scene at various times and carve out dissimilar regions as mobile objects.
We use the analogous idea of ephemerality as proposed by Barnes et al.[24]}.
We show in our work that this idea yields a surprisingly accurate set of initial objects.
In addition, we also leverage other common-sense rules such as locations of the objects (e.g. objects should stay on the ground) [25]}, [26]}, [27]}, [28]} or shapes of an object (e.g. objects should be compact) [12]}, [27]}, [28]}.
However, crucially, we do not just stop at this proposal stage.
Instead, we use these seed labels to train an object detector through multiple rounds of self-training.
This effectively identifies objects consistent across multiple scenes.
While previous work has attempted to use this consistency cue[3]}, [4]}, [34]}, [6]}, [7]}, [37]}, [38]} (including co-segmentation[39]}, [40]}, [41]}), prior work typically uses clustering to accomplish this.
In contrast, we demonstrate that neural network training and self-training provides a very strong signal and substantially improves the quality of the proposals or seed labels.
| [12] | [
[
620,
624
],
[
1391,
1395
]
] | https://openalex.org/W2049776679 |
5e2cd74a-bfe3-43f7-8752-edaba6f5baae | Unsupervised Object Discovery in 2D/3D. Our work follows prior work on discovering objects both from 2D images as well as from 3D data.
A first step in object discovery is to identify candidate objects, or “proposals” from a single scene/image.
For 2D images, this is typically done by segmenting the image using appearance cues[1]}, [2]}, [3]}, [4]}, but color variations and perspective effects make this difficult.
Tian [5]} exploits the correspondence between images and 3D point clouds to detect objects in 2D.
In 3D scenes, one can use 3D information such as surface normals [6]}, [7]}, [2]}, [9]}, [10]}, [11]}, [12]}, [13]}.
One can also use temporal changes such as motion [14]}, [15]}, [16]}, [1]}, [9]}.
Our work combines effective 3D information with cues from changes in the scene over time to detect mobile objects[10]}, [11]}, [21]}. In particular, similar to our approach, Herbst et al.[10]}, [11]} reconstruct the same scene at various times and carve out dissimilar regions as mobile objects.
We use the analogous idea of ephemerality as proposed by Barnes et al.[24]}.
We show in our work that this idea yields a surprisingly accurate set of initial objects.
In addition, we also leverage other common-sense rules such as locations of the objects (e.g. objects should stay on the ground) [25]}, [26]}, [27]}, [28]} or shapes of an object (e.g. objects should be compact) [12]}, [27]}, [28]}.
However, crucially, we do not just stop at this proposal stage.
Instead, we use these seed labels to train an object detector through multiple rounds of self-training.
This effectively identifies objects consistent across multiple scenes.
While previous work has attempted to use this consistency cue[3]}, [4]}, [34]}, [6]}, [7]}, [37]}, [38]} (including co-segmentation[39]}, [40]}, [41]}), prior work typically uses clustering to accomplish this.
In contrast, we demonstrate that neural network training and self-training provides a very strong signal and substantially improves the quality of the proposals or seed labels.
| [24] | [
[
1082,
1086
]
] | https://openalex.org/W2963170338 |
90123595-9cfd-4181-bc3a-885955566240 | Unsupervised Object Discovery in 2D/3D. Our work follows prior work on discovering objects both from 2D images as well as from 3D data.
A first step in object discovery is to identify candidate objects, or “proposals” from a single scene/image.
For 2D images, this is typically done by segmenting the image using appearance cues[1]}, [2]}, [3]}, [4]}, but color variations and perspective effects make this difficult.
Tian [5]} exploits the correspondence between images and 3D point clouds to detect objects in 2D.
In 3D scenes, one can use 3D information such as surface normals [6]}, [7]}, [2]}, [9]}, [10]}, [11]}, [12]}, [13]}.
One can also use temporal changes such as motion [14]}, [15]}, [16]}, [1]}, [9]}.
Our work combines effective 3D information with cues from changes in the scene over time to detect mobile objects[10]}, [11]}, [21]}. In particular, similar to our approach, Herbst et al.[10]}, [11]} reconstruct the same scene at various times and carve out dissimilar regions as mobile objects.
We use the analogous idea of ephemerality as proposed by Barnes et al.[24]}.
We show in our work that this idea yields a surprisingly accurate set of initial objects.
In addition, we also leverage other common-sense rules such as locations of the objects (e.g. objects should stay on the ground) [25]}, [26]}, [27]}, [28]} or shapes of an object (e.g. objects should be compact) [12]}, [27]}, [28]}.
However, crucially, we do not just stop at this proposal stage.
Instead, we use these seed labels to train an object detector through multiple rounds of self-training.
This effectively identifies objects consistent across multiple scenes.
While previous work has attempted to use this consistency cue[3]}, [4]}, [34]}, [6]}, [7]}, [37]}, [38]} (including co-segmentation[39]}, [40]}, [41]}), prior work typically uses clustering to accomplish this.
In contrast, we demonstrate that neural network training and self-training provides a very strong signal and substantially improves the quality of the proposals or seed labels.
| [34] | [
[
1724,
1728
]
] | https://openalex.org/W3106670560 |
1805295a-986f-42da-835a-05f1dc515ac2 | Unsupervised Object Discovery in 2D/3D. Our work follows prior work on discovering objects both from 2D images as well as from 3D data.
A first step in object discovery is to identify candidate objects, or “proposals” from a single scene/image.
For 2D images, this is typically done by segmenting the image using appearance cues[1]}, [2]}, [3]}, [4]}, but color variations and perspective effects make this difficult.
Tian [5]} exploits the correspondence between images and 3D point clouds to detect objects in 2D.
In 3D scenes, one can use 3D information such as surface normals [6]}, [7]}, [2]}, [9]}, [10]}, [11]}, [12]}, [13]}.
One can also use temporal changes such as motion [14]}, [15]}, [16]}, [1]}, [9]}.
Our work combines effective 3D information with cues from changes in the scene over time to detect mobile objects[10]}, [11]}, [21]}. In particular, similar to our approach, Herbst et al.[10]}, [11]} reconstruct the same scene at various times and carve out dissimilar regions as mobile objects.
We use the analogous idea of ephemerality as proposed by Barnes et al.[24]}.
We show in our work that this idea yields a surprisingly accurate set of initial objects.
In addition, we also leverage other common-sense rules such as locations of the objects (e.g. objects should stay on the ground) [25]}, [26]}, [27]}, [28]} or shapes of an object (e.g. objects should be compact) [12]}, [27]}, [28]}.
However, crucially, we do not just stop at this proposal stage.
Instead, we use these seed labels to train an object detector through multiple rounds of self-training.
This effectively identifies objects consistent across multiple scenes.
While previous work has attempted to use this consistency cue[3]}, [4]}, [34]}, [6]}, [7]}, [37]}, [38]} (including co-segmentation[39]}, [40]}, [41]}), prior work typically uses clustering to accomplish this.
In contrast, we demonstrate that neural network training and self-training provides a very strong signal and substantially improves the quality of the proposals or seed labels.
| [39] | [
[
1782,
1786
]
] | https://openalex.org/W1984034752 |
edbaf067-105d-4b58-b8b3-1e476b4590cf | Unsupervised Object Discovery in 2D/3D. Our work follows prior work on discovering objects both from 2D images as well as from 3D data.
A first step in object discovery is to identify candidate objects, or “proposals” from a single scene/image.
For 2D images, this is typically done by segmenting the image using appearance cues[1]}, [2]}, [3]}, [4]}, but color variations and perspective effects make this difficult.
Tian [5]} exploits the correspondence between images and 3D point clouds to detect objects in 2D.
In 3D scenes, one can use 3D information such as surface normals [6]}, [7]}, [2]}, [9]}, [10]}, [11]}, [12]}, [13]}.
One can also use temporal changes such as motion [14]}, [15]}, [16]}, [1]}, [9]}.
Our work combines effective 3D information with cues from changes in the scene over time to detect mobile objects[10]}, [11]}, [21]}. In particular, similar to our approach, Herbst et al.[10]}, [11]} reconstruct the same scene at various times and carve out dissimilar regions as mobile objects.
We use the analogous idea of ephemerality as proposed by Barnes et al.[24]}.
We show in our work that this idea yields a surprisingly accurate set of initial objects.
In addition, we also leverage other common-sense rules such as locations of the objects (e.g. objects should stay on the ground) [25]}, [26]}, [27]}, [28]} or shapes of an object (e.g. objects should be compact) [12]}, [27]}, [28]}.
However, crucially, we do not just stop at this proposal stage.
Instead, we use these seed labels to train an object detector through multiple rounds of self-training.
This effectively identifies objects consistent across multiple scenes.
While previous work has attempted to use this consistency cue[3]}, [4]}, [34]}, [6]}, [7]}, [37]}, [38]} (including co-segmentation[39]}, [40]}, [41]}), prior work typically uses clustering to accomplish this.
In contrast, we demonstrate that neural network training and self-training provides a very strong signal and substantially improves the quality of the proposals or seed labels.
| [41] | [
[
1796,
1800
]
] | https://openalex.org/W2964283970 |
97a869b7-eea0-4483-af9c-f4943fc47445 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [2] | [
[
189,
192
]
] | https://openalex.org/W3035160371 |
406925b9-7399-474a-9e8d-c7206f2b0f2b | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [3] | [
[
213,
216
]
] | https://openalex.org/W2895281799 |
f5680c4a-8d2f-44f0-85da-be38801d80a2 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [4] | [
[
219,
222
]
] | https://openalex.org/W2963240485 |
644401b9-8347-4976-a4be-dcf2227fb4e9 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [5] | [
[
225,
228
]
] | https://openalex.org/W2985406498 |
3459c54f-2461-48c6-b1db-7f8a040fcb01 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [6] | [
[
231,
234
]
] | https://openalex.org/W2970092410 |
462932a8-1591-4245-93ff-b3ad459c827c | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [7] | [
[
237,
240
]
] | https://openalex.org/W3108566666 |
c2568917-4783-4a55-8b3e-3b87a40a90d2 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [8] | [
[
243,
246
]
] | https://openalex.org/W3175269419 |
325c9b93-4b28-44fb-a349-27fddfc17e6f | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [9] | [
[
279,
282
]
] | https://openalex.org/W3108975329 |
45224ba8-16cf-483d-9402-ac524cc1affd | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [10] | [
[
285,
289
]
] | https://openalex.org/W3128167848 |
18a55fb5-66f8-4a48-9454-336be5b97baf | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [12] | [
[
299,
303
]
] | https://openalex.org/W3204397973 |
2187df7f-b83a-453d-82e9-e7ecc28e2ed4 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [13] | [
[
516,
520
]
] | https://openalex.org/W2963735582 |
aab9c985-e4c9-47b2-bc42-d971870f9650 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [14] | [
[
523,
527
]
] | https://openalex.org/W3004146535 |
919489a8-2de9-410e-a23b-bf56f62351b0 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [15] | [
[
530,
534
]
] | https://openalex.org/W2963096987 |
08b2a223-ae79-48b5-8d07-f483023db56d | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [16] | [
[
537,
541
]
] | https://openalex.org/W2575671312 |
42d9a250-788a-4d70-a460-142456129ac2 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [17] | [
[
625,
629
]
] | https://openalex.org/W2145494108 |
4890480e-9f78-4759-adcc-23994bc86779 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [18] | [
[
632,
636
]
] | https://openalex.org/W2978426779 |
75c1a013-a367-4f90-9f02-ed69743c8bc8 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [19] | [
[
639,
643
]
] | https://openalex.org/W2989700832 |
6fc51ad7-ebf6-4dfc-880b-e90b52febbc9 | Self-training, semi-supervised and self-supervised learning.
When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly, we show that self-training can not only discover more objects, but also correct the initially noisy box labels.
This result that neural networks can denoise noisy labels has been observed before [13]}, [14]}, [15]}, [16]}.
Self-training also bears resemblance to other semi-supervised learning techniques[17]}, [18]}, [19]}, [20]} but is simpler and more broadly applicable.
| [20] | [
[
646,
650
]
] | https://openalex.org/W3001197829 |
02dc9dc1-6b79-4c4a-847c-2c5df7c8935b | Overview. We propose simple, high-level common-sense properties that can easily identify a few seed objects in the unlabeled data.
These discovered objects then serve as labels to train an off-the-shelf object detector.
Specifically, building upon the neural network's ability to learn consistent patterns from initial seed labels, we bootstrap the detector
by self-training [1]}, [2]} using the same unlabeled data. The self-training process serves to correct and expand the initial pool of seed objects, gradually discovering more and more objects to further help train the detector. The whole process is summarized in alg:main.
<FIGURE> | [2] | [
[
381,
384
]
] | https://openalex.org/W3035160371 |
271449ee-de13-48fb-b9b5-b4e28235b0e9 | What properties define mobile objects or traffic participants?
Clearly, the most important characteristic is that they are mobile, i.e., they move around.
If such an object is spotted at a particular location (e.g., a car at an intersection), it is unlikely that the object will still be there when one visits the intersection again a few days hence.
In other words, mobile objects are ephemeral members of a scene [1]}.
Of course, occasionally mobile objects like cars might be parked on the road for extended periods of time.
However, for the most part ephemeral objects are likely to be mobile objects.
| [1] | [
[
415,
418
]
] | https://openalex.org/W2963170338 |
2788e5b9-e830-4e54-967a-21668742b9e4 | We assume that our unlabeled data include a set of locations \(L\) which are traversed multiple times in separate driving sessions (or traversals).
For every traversal \(t\) through location \(c \in L\) , we aggregate point clouds captured within a range of \([-H_s, H_e]\) of \(c\) to produce a dense 3D point cloud \(S_c^t\) for location \(c\) in traversal \(t\)We can easily transform captured point clouds to a shared coordinate frame via precise localization information through GPS/INS..
We then use these dense point clouds \(S_c^t\) to define ephemerality as described by Barnes et al. [1]}.
Concretely, to check if a 3D point \({\mathbf {q}}\) in a scene is ephemeral,
for each traversal \(t\) we can count the number \(N_t({\mathbf {q}})\) of LiDAR points that fall within a distance \(r\) to \({\mathbf {q}}\) ,
\(N_t \left(q\right) = \left|\left\lbrace p_i \mid \Vert p_i - q\Vert _{2} < r, p_i \in S_c^t \right\rbrace \right|.\)
| [1] | [
[
601,
604
]
] | https://openalex.org/W2963170338 |
4dd3e45d-4cb8-4c1e-9a3c-524275b03026 | The graph structure together with the edge weights define a new metric that quantifies the similarity between two points. In this graph, two points that are connected by a path are considered to be close if the path has low total edge weight, namely, the points along the path share similar PP scores, indicating these points are likely from the same object. In contrast, a path in the graph that has high total edge weight likely goes across the boundary of two different objects (e.g., a mobile object and the background).
Many graph-based clustering algorithms can fit the bill.
We deploy the widely used DBSCAN [1]} algorithm for the clustering due to its simplicity and its ability to cluster without the need of setting the number of clusters beforehand.
DBSCAN returns a list of clusters, from which we remove clusters of static (and thus persistent) background points by
applying a threshold \(\gamma \) on the \(\alpha \) percentile of PP scores in a cluster (i.e., remove the cluster if the \(\alpha \) percentile of the PP scores in this cluster is larger than \(\gamma \) ).
We then apply a straightforward bounding box fitting algorithm [2]} to fit an up-right bounding box to each cluster.
| [1] | [
[
615,
618
]
] | https://openalex.org/W1673310716 |
6201e7f7-3a1e-4090-8037-85dc3dc0b6ec | Concretely, we simply take off-the-shelf 3D object detectors [1]}, [2]}, [3]}, [4]} and directly train them from scratch on these initial seed labels via minimizing the corresponding detection loss from the detection algorithms.
| [1] | [
[
61,
64
]
] | https://openalex.org/W2949708697 |
8ae03edd-0665-494f-9ca8-ce09f486e586 | Concretely, we simply take off-the-shelf 3D object detectors [1]}, [2]}, [3]}, [4]} and directly train them from scratch on these initial seed labels via minimizing the corresponding detection loss from the detection algorithms.
| [3] | [
[
73,
76
]
] | https://openalex.org/W2963727135 |
a0e21ac6-c0fe-49fe-a19c-15402908a7dd | Concretely, we simply take off-the-shelf 3D object detectors [1]}, [2]}, [3]}, [4]} and directly train them from scratch on these initial seed labels via minimizing the corresponding detection loss from the detection algorithms.
| [4] | [
[
79,
82
]
] | https://openalex.org/W2897529137 |
75f16918-8728-4067-aa02-bd9c5a5c93dc | Intriguingly, the object detector trained in this way outperforms the original seed bounding boxes themselves — the “detected” boxes have higher recall and are more accurate than the “discovered” boxes on the same training point clouds. See fig:teaser for an illustration.
This phenomenon of a neural network improving on the provided noisy labels themselves is superficially surprising, but it has been observed before in other contexts [1]}.
The key reason is that the noise is not consistent: the initial labels are generated scene-by-scene and object-by-object. In some scenes a car may be missed because it was parked throughout all traversals, while in many others it will be discovered as a mobile object. Even among discovered boxes of similar objects, some may miss a few foreground points but others wrongly include background points.
The neural network, equipped with limited capacityWe note that for detection problems, a neural network can hardly over-fit the training data to achieve \(100\%\) accuracy [2]}, in contrast to classification problems [3]}., thus cannot reproduce this inconsistency and instead instead identifies the underlying consistent object patterns.
| [1] | [
[
438,
441
]
] | https://openalex.org/W2575671312 |
cb62e046-926a-4efe-9d5a-6a86e26b6b3c | Intriguingly, the object detector trained in this way outperforms the original seed bounding boxes themselves — the “detected” boxes have higher recall and are more accurate than the “discovered” boxes on the same training point clouds. See fig:teaser for an illustration.
This phenomenon of a neural network improving on the provided noisy labels themselves is superficially surprising, but it has been observed before in other contexts [1]}.
The key reason is that the noise is not consistent: the initial labels are generated scene-by-scene and object-by-object. In some scenes a car may be missed because it was parked throughout all traversals, while in many others it will be discovered as a mobile object. Even among discovered boxes of similar objects, some may miss a few foreground points but others wrongly include background points.
The neural network, equipped with limited capacityWe note that for detection problems, a neural network can hardly over-fit the training data to achieve \(100\%\) accuracy [2]}, in contrast to classification problems [3]}., thus cannot reproduce this inconsistency and instead instead identifies the underlying consistent object patterns.
| [2] | [
[
1018,
1021
]
] | https://openalex.org/W3178826664 |
8a31145f-7c0d-4c50-8098-fac58f12af24 | Intriguingly, the object detector trained in this way outperforms the original seed bounding boxes themselves — the “detected” boxes have higher recall and are more accurate than the “discovered” boxes on the same training point clouds. See fig:teaser for an illustration.
This phenomenon of a neural network improving on the provided noisy labels themselves is superficially surprising, but it has been observed before in other contexts [1]}.
The key reason is that the noise is not consistent: the initial labels are generated scene-by-scene and object-by-object. In some scenes a car may be missed because it was parked throughout all traversals, while in many others it will be discovered as a mobile object. Even among discovered boxes of similar objects, some may miss a few foreground points but others wrongly include background points.
The neural network, equipped with limited capacityWe note that for detection problems, a neural network can hardly over-fit the training data to achieve \(100\%\) accuracy [2]}, in contrast to classification problems [3]}., thus cannot reproduce this inconsistency and instead instead identifies the underlying consistent object patterns.
| [3] | [
[
1063,
1066
]
] | https://openalex.org/W2566079294 |
d949639f-062f-40b8-8e4d-d6a178b412b8 | Automatic improvement through self-training.
Given that the trained detector has discovered many more objects, we can use the detector itself to produce an improved set of ground-truth labels, and re-train a new detector from scratch with these better ground truths.
Furthermore, we can iterate this process: the new retrained detector has more positives and more accurate boxes for training, so it will likely have higher recall and better localization than the initial detector.
As such we can use this second detector to produce a new set of pseudo-ground-truth boxes which can be used to train a third detector and so on.
This iterative self-training [1]}, [2]} process will eventually converge when the pseudo-ground truth labeling is consistent with itself and the detector can no longer improve upon it.
| [2] | [
[
661,
664
]
] | https://openalex.org/W3035160371 |
2ec18b39-97e0-4dc6-9b28-4252a6728f02 | Datasets. We validate our approach on two datasets: Lyft Level 5 Perception [1]} and nuScenes [2]}.
To the best of our knowledge, these are the only two publicly available autonomous driving datasets that have both bounding box annotations and multiple traversals with accurate localization. To ensure fair assessment of generalizability, we re-split the dataset so that the training set and test set are geographically disjoint; we also discard locations with less than 2 examples in the training set. This results a train/test split of 11,873/4,901 point clouds for Lyft and 3,985/2,324 for nuScenes. To construct ground truth labels, we group all the traffic participants types in the original datasets into a single “mobile" object. Note that the ground-truth labels are only used for evaluation, not training.
| [2] | [
[
94,
97
]
] | https://openalex.org/W3035574168 |
d7e7d570-0746-4887-a62e-2309fe3d7b8e | In addition, we convert the raw Lyft and nuScenes data into the KITTI format to leverage off-the-shelf 3D object detectors that is predominantly built for KITTI [1]}. We use the roof LiDAR (40 or 60 beam in Lyft; 32 beam in nuScenes), and the global 6-DoF localization along with the calibration matrices directly from the raw data.
| [1] | [
[
162,
165
]
] | https://openalex.org/W2115579991 |
d7b8a73b-26ca-4f3e-b681-ef94acdb7e7b | On localization. With current localization technology, we can reliably achieve accurate localization (e.g., 1-2 cm-level accuracy with RTKhttps://en.wikipedia.org/wiki/Real-time_kinematic_positioning, 10 cm-level with Monte Carlo Localization scheme [1]} as adopted in the nuScenes dataset [2]}). We assume good localization in the training set.
| [2] | [
[
290,
293
]
] | https://openalex.org/W3035574168 |
356886dd-fdee-4810-9ad2-25129961f7f1 | Evaluation metric. We follow KITTI [1]} to evaluate object detection in the bird's-eye view (BEV) and in 3D for the mobile objects. We report average precision (AP) with the intersection over union (IoU) thresholds at 0.5/0.25, which are used to evaluate cyclists and pedestrians objects in KITTI.
We further follow [2]} to evaluate the AP at various depth ranges.
Due to space constraints, we only present evaluation results with IoU=0.25 in tbl:main,tbl:ablation,tbl:nusc025,tbl:labelquality,tbl:kitti025,tbl:recallbyclassiou025summary. Please refer to the supplementary for results with IoU=0.5.
| [1] | [
[
36,
39
]
] | https://openalex.org/W2150066425 |
be7a2f02-59bb-4ed1-a2cd-567bade6990c | Evaluation metric. We follow KITTI [1]} to evaluate object detection in the bird's-eye view (BEV) and in 3D for the mobile objects. We report average precision (AP) with the intersection over union (IoU) thresholds at 0.5/0.25, which are used to evaluate cyclists and pedestrians objects in KITTI.
We further follow [2]} to evaluate the AP at various depth ranges.
Due to space constraints, we only present evaluation results with IoU=0.25 in tbl:main,tbl:ablation,tbl:nusc025,tbl:labelquality,tbl:kitti025,tbl:recallbyclassiou025summary. Please refer to the supplementary for results with IoU=0.5.
| [2] | [
[
317,
320
]
] | https://openalex.org/W3034975685 |
cc48ad30-a6a7-46bf-b093-49186de7e8f9 | Implementation. We present results on PointRCNN [1]} (the conclusions hold for other detectors such as PointPillars [2]}, and VoxelNet (SECOND) [3]}, [4]}. See more details in the supplementary materials). For reproducibility, we use the publicly available code from OpenPCDet [5]} for all models. We use the default hyperparameters tuned for KITTI except on the Lyft dataset in which we enlarge the perception range from 70m to 90m (since Lyft provides labels beyond 70m) and reduce the number of training epochs by 1/4 (since the training set is about three times of the size of KITTI). We default to 10 rounds of self-training (chosen arbitrarily due to compute constraints) and trained the model from scratch for each round of self-training. We also include results on PointRCNN trained up to 40 rounds, where we empirically observe that the performance converges (fig:ablationrounds). All models are trained with 4 NVIDIA 3090 GPUs. Please refer to the supplementary materials for full hyperparameters.
| [1] | [
[
49,
52
]
] | https://openalex.org/W2949708697 |
62d9cd11-7e91-4794-92de-4badebd15e3c | Implementation. We present results on PointRCNN [1]} (the conclusions hold for other detectors such as PointPillars [2]}, and VoxelNet (SECOND) [3]}, [4]}. See more details in the supplementary materials). For reproducibility, we use the publicly available code from OpenPCDet [5]} for all models. We use the default hyperparameters tuned for KITTI except on the Lyft dataset in which we enlarge the perception range from 70m to 90m (since Lyft provides labels beyond 70m) and reduce the number of training epochs by 1/4 (since the training set is about three times of the size of KITTI). We default to 10 rounds of self-training (chosen arbitrarily due to compute constraints) and trained the model from scratch for each round of self-training. We also include results on PointRCNN trained up to 40 rounds, where we empirically observe that the performance converges (fig:ablationrounds). All models are trained with 4 NVIDIA 3090 GPUs. Please refer to the supplementary materials for full hyperparameters.
| [3] | [
[
145,
148
]
] | https://openalex.org/W2963727135 |
9611e80e-ad1b-4737-895e-16a83762e127 | Implementation. We present results on PointRCNN [1]} (the conclusions hold for other detectors such as PointPillars [2]}, and VoxelNet (SECOND) [3]}, [4]}. See more details in the supplementary materials). For reproducibility, we use the publicly available code from OpenPCDet [5]} for all models. We use the default hyperparameters tuned for KITTI except on the Lyft dataset in which we enlarge the perception range from 70m to 90m (since Lyft provides labels beyond 70m) and reduce the number of training epochs by 1/4 (since the training set is about three times of the size of KITTI). We default to 10 rounds of self-training (chosen arbitrarily due to compute constraints) and trained the model from scratch for each round of self-training. We also include results on PointRCNN trained up to 40 rounds, where we empirically observe that the performance converges (fig:ablationrounds). All models are trained with 4 NVIDIA 3090 GPUs. Please refer to the supplementary materials for full hyperparameters.
| [4] | [
[
151,
154
]
] | https://openalex.org/W2897529137 |
30512044-188f-464a-9a67-6499055ea4d1 | Acknowledgements
This research is supported by grants from the National Science Foundation NSF (III-1618134, III-1526012, IIS-1149882, IIS-1724282, TRIPODS-1740822, IIS-2107077, OAC-2118240, OAC-2112606 and IIS-2107161),
the Office of Naval Research DOD (N00014-17-1-2175), the DARPA Learning with Less Labels program (HR001118S0044), the Bill and Melinda Gates Foundation, the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875), and SAP America.
Supplementary Material
Implementation details
We set \([-H_s, H_e]\) to \([0, 70]\) m since we experiment with frontal-view detection only. We combine only one scan into the dense point cloud \(S_c^t\) every 2 m within this range. In calculating PP score, we use as many traversals as possible (\(\ge 2\) ) and set \(r=0.3\) m. For clustering, we use \(K=70\) and \(r^{\prime }=2.0\) m in the graph, and \(\epsilon =0.1\) , min_samples \(=10\) for DBSCAN. For filtering, we use a loose threshold of \(\alpha =20\) percentile and \(\gamma =0.7\) . Other common sense properties are simply implemented as follows:
[itemsep=1pt,topsep=3pt]
\(\#\) points in the cluster \(>= 10\) ;
Volume of fitted bounding boxes \(\in [0.5, 120]m^3\) ;
The height (upright distance against the ground plane) of points \(\text{Height}_{\max } > 0.5m\) and \(\text{Height}_{\min } < 1.0m\) to ensure clusters not floating in the air or beneath the ground due to errors in LiDAR.
We did not tune these parameters except qualitatively checked the fitted bounding boxes in few scenes in the Lyft “train” set.
For detection models, we use the default hyper-parameters tuned on KITTIhttps://github.com/open-mmlab/OpenPCDet/tree/master/tools/cfgs/kitti_models with few exceptions listed in the paper. We will open-source the code upon acceptance.
Experiments with other detectors
Besides the PointRCNN detector [1]}, We experiment with two other detectors PointPillars [2]} and VoxelNet (SECOND) [3]}, [4]}, and show their results in tbl:second and tbl:pointpillars. We apply the default hyper-parameters of these two models tuned on KITTI, and apply the same procedure as that on PointRCNN models. Note that PointPillars and VoxelNet model need a pre-defined anchor size for different types of objects, which we picked (length, width, height) as (\(2.0, 1.0, 1.7\) ) m without tuning. We observe that generally the PointPillars and VoxelNet yield worse results than PointRCNN models (possibly due to the fixed anchor size for all mobile objects), but we still observe significant gains from self-training.
<TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE>
Detailed evaluation by object types
In tbl:recallbyclass, we include detailed evaluations (BEV IoU\(=0.5\) , BEV IoU\(=0.25\) and by different depth ranges) of the recall of different object types in the Lyft test set. This corresponds to tbl:recallbyclassiou025summary in the main paper.
Corresponding IoU=0.5 results
We list the IoU=0.5 counterparts of tbl:main,tbl:ablation,tbl:nusc025,tbl:labelquality,tbl:kitti025,tbl:recallbyclassiou025summary in tbl:mainiou05,tbl:nusc05,tbl:kitti05,tbl:labelqualityiou05,tbl:ablation05.
<FIGURE>
Precision-recall evaluation
In fig:ablationpr, we show how PR curve changes with different rounds of self-training: the max recall improves gradually while keeping high precision.
This aligns with the expanded recall of the training set described above, and with what we observe qualitatively in fig:teaser.
More qualitative results
We show visualizations for additional qualitative results in fig:self-training for 5 additional LiDAR scenes. Visualizations show the progression of MODEST from seed generation, to detector trained on seed label set, to detector after 10 rounds of self training, and finally the ground truth bounding boxes. Observe that the detections obtain higher recall and learns a correct prior over object shapes as the method progresses.
<FIGURE> | [1] | [
[
1887,
1890
]
] | https://openalex.org/W2949708697 |
f98ceb15-6941-4259-a496-137e244149c4 | Acknowledgements
This research is supported by grants from the National Science Foundation NSF (III-1618134, III-1526012, IIS-1149882, IIS-1724282, TRIPODS-1740822, IIS-2107077, OAC-2118240, OAC-2112606 and IIS-2107161),
the Office of Naval Research DOD (N00014-17-1-2175), the DARPA Learning with Less Labels program (HR001118S0044), the Bill and Melinda Gates Foundation, the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875), and SAP America.
Supplementary Material
Implementation details
We set \([-H_s, H_e]\) to \([0, 70]\) m since we experiment with frontal-view detection only. We combine only one scan into the dense point cloud \(S_c^t\) every 2 m within this range. In calculating PP score, we use as many traversals as possible (\(\ge 2\) ) and set \(r=0.3\) m. For clustering, we use \(K=70\) and \(r^{\prime }=2.0\) m in the graph, and \(\epsilon =0.1\) , min_samples \(=10\) for DBSCAN. For filtering, we use a loose threshold of \(\alpha =20\) percentile and \(\gamma =0.7\) . Other common sense properties are simply implemented as follows:
[itemsep=1pt,topsep=3pt]
\(\#\) points in the cluster \(>= 10\) ;
Volume of fitted bounding boxes \(\in [0.5, 120]m^3\) ;
The height (upright distance against the ground plane) of points \(\text{Height}_{\max } > 0.5m\) and \(\text{Height}_{\min } < 1.0m\) to ensure clusters not floating in the air or beneath the ground due to errors in LiDAR.
We did not tune these parameters except qualitatively checked the fitted bounding boxes in few scenes in the Lyft “train” set.
For detection models, we use the default hyper-parameters tuned on KITTIhttps://github.com/open-mmlab/OpenPCDet/tree/master/tools/cfgs/kitti_models with few exceptions listed in the paper. We will open-source the code upon acceptance.
Experiments with other detectors
Besides the PointRCNN detector [1]}, We experiment with two other detectors PointPillars [2]} and VoxelNet (SECOND) [3]}, [4]}, and show their results in tbl:second and tbl:pointpillars. We apply the default hyper-parameters of these two models tuned on KITTI, and apply the same procedure as that on PointRCNN models. Note that PointPillars and VoxelNet model need a pre-defined anchor size for different types of objects, which we picked (length, width, height) as (\(2.0, 1.0, 1.7\) ) m without tuning. We observe that generally the PointPillars and VoxelNet yield worse results than PointRCNN models (possibly due to the fixed anchor size for all mobile objects), but we still observe significant gains from self-training.
<TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE>
Detailed evaluation by object types
In tbl:recallbyclass, we include detailed evaluations (BEV IoU\(=0.5\) , BEV IoU\(=0.25\) and by different depth ranges) of the recall of different object types in the Lyft test set. This corresponds to tbl:recallbyclassiou025summary in the main paper.
Corresponding IoU=0.5 results
We list the IoU=0.5 counterparts of tbl:main,tbl:ablation,tbl:nusc025,tbl:labelquality,tbl:kitti025,tbl:recallbyclassiou025summary in tbl:mainiou05,tbl:nusc05,tbl:kitti05,tbl:labelqualityiou05,tbl:ablation05.
<FIGURE>
Precision-recall evaluation
In fig:ablationpr, we show how PR curve changes with different rounds of self-training: the max recall improves gradually while keeping high precision.
This aligns with the expanded recall of the training set described above, and with what we observe qualitatively in fig:teaser.
More qualitative results
We show visualizations for additional qualitative results in fig:self-training for 5 additional LiDAR scenes. Visualizations show the progression of MODEST from seed generation, to detector trained on seed label set, to detector after 10 rounds of self training, and finally the ground truth bounding boxes. Observe that the detections obtain higher recall and learns a correct prior over object shapes as the method progresses.
<FIGURE> | [3] | [
[
1972,
1975
]
] | https://openalex.org/W2963727135 |
269e60a8-f66f-451c-8c4a-c206a34767bc | Acknowledgements
This research is supported by grants from the National Science Foundation NSF (III-1618134, III-1526012, IIS-1149882, IIS-1724282, TRIPODS-1740822, IIS-2107077, OAC-2118240, OAC-2112606 and IIS-2107161),
the Office of Naval Research DOD (N00014-17-1-2175), the DARPA Learning with Less Labels program (HR001118S0044), the Bill and Melinda Gates Foundation, the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875), and SAP America.
Supplementary Material
Implementation details
We set \([-H_s, H_e]\) to \([0, 70]\) m since we experiment with frontal-view detection only. We combine only one scan into the dense point cloud \(S_c^t\) every 2 m within this range. In calculating PP score, we use as many traversals as possible (\(\ge 2\) ) and set \(r=0.3\) m. For clustering, we use \(K=70\) and \(r^{\prime }=2.0\) m in the graph, and \(\epsilon =0.1\) , min_samples \(=10\) for DBSCAN. For filtering, we use a loose threshold of \(\alpha =20\) percentile and \(\gamma =0.7\) . Other common sense properties are simply implemented as follows:
[itemsep=1pt,topsep=3pt]
\(\#\) points in the cluster \(>= 10\) ;
Volume of fitted bounding boxes \(\in [0.5, 120]m^3\) ;
The height (upright distance against the ground plane) of points \(\text{Height}_{\max } > 0.5m\) and \(\text{Height}_{\min } < 1.0m\) to ensure clusters not floating in the air or beneath the ground due to errors in LiDAR.
We did not tune these parameters except qualitatively checked the fitted bounding boxes in few scenes in the Lyft “train” set.
For detection models, we use the default hyper-parameters tuned on KITTIhttps://github.com/open-mmlab/OpenPCDet/tree/master/tools/cfgs/kitti_models with few exceptions listed in the paper. We will open-source the code upon acceptance.
Experiments with other detectors
Besides the PointRCNN detector [1]}, We experiment with two other detectors PointPillars [2]} and VoxelNet (SECOND) [3]}, [4]}, and show their results in tbl:second and tbl:pointpillars. We apply the default hyper-parameters of these two models tuned on KITTI, and apply the same procedure as that on PointRCNN models. Note that PointPillars and VoxelNet model need a pre-defined anchor size for different types of objects, which we picked (length, width, height) as (\(2.0, 1.0, 1.7\) ) m without tuning. We observe that generally the PointPillars and VoxelNet yield worse results than PointRCNN models (possibly due to the fixed anchor size for all mobile objects), but we still observe significant gains from self-training.
<TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE>
Detailed evaluation by object types
In tbl:recallbyclass, we include detailed evaluations (BEV IoU\(=0.5\) , BEV IoU\(=0.25\) and by different depth ranges) of the recall of different object types in the Lyft test set. This corresponds to tbl:recallbyclassiou025summary in the main paper.
Corresponding IoU=0.5 results
We list the IoU=0.5 counterparts of tbl:main,tbl:ablation,tbl:nusc025,tbl:labelquality,tbl:kitti025,tbl:recallbyclassiou025summary in tbl:mainiou05,tbl:nusc05,tbl:kitti05,tbl:labelqualityiou05,tbl:ablation05.
<FIGURE>
Precision-recall evaluation
In fig:ablationpr, we show how PR curve changes with different rounds of self-training: the max recall improves gradually while keeping high precision.
This aligns with the expanded recall of the training set described above, and with what we observe qualitatively in fig:teaser.
More qualitative results
We show visualizations for additional qualitative results in fig:self-training for 5 additional LiDAR scenes. Visualizations show the progression of MODEST from seed generation, to detector trained on seed label set, to detector after 10 rounds of self training, and finally the ground truth bounding boxes. Observe that the detections obtain higher recall and learns a correct prior over object shapes as the method progresses.
<FIGURE> | [4] | [
[
1978,
1981
]
] | https://openalex.org/W2897529137 |
91a91ad7-c57f-46bf-851d-513f26cb4e83 | Besides the PointRCNN detector [1]}, We experiment with two other detectors PointPillars [2]} and VoxelNet (SECOND) [3]}, [4]}, and show their results in tbl:second and tbl:pointpillars. We apply the default hyper-parameters of these two models tuned on KITTI, and apply the same procedure as that on PointRCNN models. Note that PointPillars and VoxelNet model need a pre-defined anchor size for different types of objects, which we picked (length, width, height) as (\(2.0, 1.0, 1.7\) ) m without tuning. We observe that generally the PointPillars and VoxelNet yield worse results than PointRCNN models (possibly due to the fixed anchor size for all mobile objects), but we still observe significant gains from self-training.
<TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE> | [1] | [
[
31,
34
]
] | https://openalex.org/W2949708697 |
2e44d642-1e6c-465c-99c3-b4c2e600c56f | Besides the PointRCNN detector [1]}, We experiment with two other detectors PointPillars [2]} and VoxelNet (SECOND) [3]}, [4]}, and show their results in tbl:second and tbl:pointpillars. We apply the default hyper-parameters of these two models tuned on KITTI, and apply the same procedure as that on PointRCNN models. Note that PointPillars and VoxelNet model need a pre-defined anchor size for different types of objects, which we picked (length, width, height) as (\(2.0, 1.0, 1.7\) ) m without tuning. We observe that generally the PointPillars and VoxelNet yield worse results than PointRCNN models (possibly due to the fixed anchor size for all mobile objects), but we still observe significant gains from self-training.
<TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE> | [3] | [
[
116,
119
]
] | https://openalex.org/W2963727135 |
4e47c945-7464-492c-af5f-fbf0e8bbccb8 | Besides the PointRCNN detector [1]}, We experiment with two other detectors PointPillars [2]} and VoxelNet (SECOND) [3]}, [4]}, and show their results in tbl:second and tbl:pointpillars. We apply the default hyper-parameters of these two models tuned on KITTI, and apply the same procedure as that on PointRCNN models. Note that PointPillars and VoxelNet model need a pre-defined anchor size for different types of objects, which we picked (length, width, height) as (\(2.0, 1.0, 1.7\) ) m without tuning. We observe that generally the PointPillars and VoxelNet yield worse results than PointRCNN models (possibly due to the fixed anchor size for all mobile objects), but we still observe significant gains from self-training.
<TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE><TABLE> | [4] | [
[
122,
125
]
] | https://openalex.org/W2897529137 |
a30e2945-44da-4aa7-b69c-9008f8ae8ab3 | Image inpainting, or image completion, is a task about image synthesis technique aims to filling occluded regions or missing pixels with appropriate semantic contents. The main objective of image inpainting is producing visually authentic images with less semantic inconsistency using computer vision-based approaches. Traditional methods relied on a patch-based matching approach using the measurement of cosine similarity [1]}. Recently, the remarkable capability of generative adversarial networks (GAN) [2]} has boosted image inpainting performance based on convolutional neural networks (CNN). Because of its hierarchical design, GAN with encoder-decoder structure has exceptional reconstruction ability compared to previous approaches. The decoder synthesizes visual images from the feature level as the encoder learns how to extract feature representations from images. Currently, GAN-based approaches constitute a dominant stream in image inpainting [3]}, [4]}, [5]}, [6]}, [7]}, [8]}.
| [3] | [
[
958,
961
]
] | https://openalex.org/W2963420272 |
d08be01e-5527-45bb-8962-682e875615bd | Image inpainting, or image completion, is a task about image synthesis technique aims to filling occluded regions or missing pixels with appropriate semantic contents. The main objective of image inpainting is producing visually authentic images with less semantic inconsistency using computer vision-based approaches. Traditional methods relied on a patch-based matching approach using the measurement of cosine similarity [1]}. Recently, the remarkable capability of generative adversarial networks (GAN) [2]} has boosted image inpainting performance based on convolutional neural networks (CNN). Because of its hierarchical design, GAN with encoder-decoder structure has exceptional reconstruction ability compared to previous approaches. The decoder synthesizes visual images from the feature level as the encoder learns how to extract feature representations from images. Currently, GAN-based approaches constitute a dominant stream in image inpainting [3]}, [4]}, [5]}, [6]}, [7]}, [8]}.
| [4] | [
[
964,
967
]
] | https://openalex.org/W2738588019 |
f725bb2c-14c4-40de-97cc-7e7a1a7eefeb | Image inpainting, or image completion, is a task about image synthesis technique aims to filling occluded regions or missing pixels with appropriate semantic contents. The main objective of image inpainting is producing visually authentic images with less semantic inconsistency using computer vision-based approaches. Traditional methods relied on a patch-based matching approach using the measurement of cosine similarity [1]}. Recently, the remarkable capability of generative adversarial networks (GAN) [2]} has boosted image inpainting performance based on convolutional neural networks (CNN). Because of its hierarchical design, GAN with encoder-decoder structure has exceptional reconstruction ability compared to previous approaches. The decoder synthesizes visual images from the feature level as the encoder learns how to extract feature representations from images. Currently, GAN-based approaches constitute a dominant stream in image inpainting [3]}, [4]}, [5]}, [6]}, [7]}, [8]}.
| [5] | [
[
970,
973
]
] | https://openalex.org/W2796286534 |
4aac10d4-3934-4eeb-9c20-8ed4d5ef875f | Image inpainting, or image completion, is a task about image synthesis technique aims to filling occluded regions or missing pixels with appropriate semantic contents. The main objective of image inpainting is producing visually authentic images with less semantic inconsistency using computer vision-based approaches. Traditional methods relied on a patch-based matching approach using the measurement of cosine similarity [1]}. Recently, the remarkable capability of generative adversarial networks (GAN) [2]} has boosted image inpainting performance based on convolutional neural networks (CNN). Because of its hierarchical design, GAN with encoder-decoder structure has exceptional reconstruction ability compared to previous approaches. The decoder synthesizes visual images from the feature level as the encoder learns how to extract feature representations from images. Currently, GAN-based approaches constitute a dominant stream in image inpainting [3]}, [4]}, [5]}, [6]}, [7]}, [8]}.
| [6] | [
[
976,
979
]
] | https://openalex.org/W3043547428 |
833eac53-d21d-48d1-8556-63ab46e85b9e | Image inpainting, or image completion, is a task about image synthesis technique aims to filling occluded regions or missing pixels with appropriate semantic contents. The main objective of image inpainting is producing visually authentic images with less semantic inconsistency using computer vision-based approaches. Traditional methods relied on a patch-based matching approach using the measurement of cosine similarity [1]}. Recently, the remarkable capability of generative adversarial networks (GAN) [2]} has boosted image inpainting performance based on convolutional neural networks (CNN). Because of its hierarchical design, GAN with encoder-decoder structure has exceptional reconstruction ability compared to previous approaches. The decoder synthesizes visual images from the feature level as the encoder learns how to extract feature representations from images. Currently, GAN-based approaches constitute a dominant stream in image inpainting [3]}, [4]}, [5]}, [6]}, [7]}, [8]}.
| [7] | [
[
982,
985
]
] | https://openalex.org/W2982763192 |
f0d84e4a-e011-43aa-86b7-3861411d01da | Image inpainting, or image completion, is a task about image synthesis technique aims to filling occluded regions or missing pixels with appropriate semantic contents. The main objective of image inpainting is producing visually authentic images with less semantic inconsistency using computer vision-based approaches. Traditional methods relied on a patch-based matching approach using the measurement of cosine similarity [1]}. Recently, the remarkable capability of generative adversarial networks (GAN) [2]} has boosted image inpainting performance based on convolutional neural networks (CNN). Because of its hierarchical design, GAN with encoder-decoder structure has exceptional reconstruction ability compared to previous approaches. The decoder synthesizes visual images from the feature level as the encoder learns how to extract feature representations from images. Currently, GAN-based approaches constitute a dominant stream in image inpainting [3]}, [4]}, [5]}, [6]}, [7]}, [8]}.
| [8] | [
[
988,
991
]
] | https://openalex.org/W3026446890 |
64304234-ed2b-40fd-9f92-ba48976da5e8 | However, despite GAN's high image restoration performance, some pixel artifacts or color inconsistency called 'fake texture' inevitably occur in the process of decoding [1]}, [2]}. Fake pixels cause degradation of image restoration performance by dropping the appearance consistency in the synthesized image. To tackle this issue, we introduce dynamic attention map (DAM) that detects fake textures in feature map and highlights them by generating an attention mask (or attention map) [3]} for image inpainting. Unlike existing GAN-based inpainting methods requiring high computational cost for generating attention map [4]}, [5]}, our proposed DAM blocks exploit learnable convolutional layers for detecting fake texture and converting it into an attention map for each different scale of each decoding layer. We reported the comparisons on CelebA-HQ and Places2 datasets and showed that outcome of our DAM-GAN demonstrating higher quality than other existing inpainting methods including GAN-based approaches.
<FIGURE> | [2] | [
[
175,
178
]
] | https://openalex.org/W3012472557 |
91f8c837-28f1-4f41-b55c-057a1c9dd28b | However, despite GAN's high image restoration performance, some pixel artifacts or color inconsistency called 'fake texture' inevitably occur in the process of decoding [1]}, [2]}. Fake pixels cause degradation of image restoration performance by dropping the appearance consistency in the synthesized image. To tackle this issue, we introduce dynamic attention map (DAM) that detects fake textures in feature map and highlights them by generating an attention mask (or attention map) [3]} for image inpainting. Unlike existing GAN-based inpainting methods requiring high computational cost for generating attention map [4]}, [5]}, our proposed DAM blocks exploit learnable convolutional layers for detecting fake texture and converting it into an attention map for each different scale of each decoding layer. We reported the comparisons on CelebA-HQ and Places2 datasets and showed that outcome of our DAM-GAN demonstrating higher quality than other existing inpainting methods including GAN-based approaches.
<FIGURE> | [3] | [
[
485,
488
]
] | https://openalex.org/W2804078698 |
5f76847c-cf08-4fb3-ab5a-895ab8592911 | However, despite GAN's high image restoration performance, some pixel artifacts or color inconsistency called 'fake texture' inevitably occur in the process of decoding [1]}, [2]}. Fake pixels cause degradation of image restoration performance by dropping the appearance consistency in the synthesized image. To tackle this issue, we introduce dynamic attention map (DAM) that detects fake textures in feature map and highlights them by generating an attention mask (or attention map) [3]} for image inpainting. Unlike existing GAN-based inpainting methods requiring high computational cost for generating attention map [4]}, [5]}, our proposed DAM blocks exploit learnable convolutional layers for detecting fake texture and converting it into an attention map for each different scale of each decoding layer. We reported the comparisons on CelebA-HQ and Places2 datasets and showed that outcome of our DAM-GAN demonstrating higher quality than other existing inpainting methods including GAN-based approaches.
<FIGURE> | [4] | [
[
620,
623
]
] | https://openalex.org/W2985764327 |
206db2dd-0956-4b1b-be7f-ff78ef22c107 | However, despite GAN's high image restoration performance, some pixel artifacts or color inconsistency called 'fake texture' inevitably occur in the process of decoding [1]}, [2]}. Fake pixels cause degradation of image restoration performance by dropping the appearance consistency in the synthesized image. To tackle this issue, we introduce dynamic attention map (DAM) that detects fake textures in feature map and highlights them by generating an attention mask (or attention map) [3]} for image inpainting. Unlike existing GAN-based inpainting methods requiring high computational cost for generating attention map [4]}, [5]}, our proposed DAM blocks exploit learnable convolutional layers for detecting fake texture and converting it into an attention map for each different scale of each decoding layer. We reported the comparisons on CelebA-HQ and Places2 datasets and showed that outcome of our DAM-GAN demonstrating higher quality than other existing inpainting methods including GAN-based approaches.
<FIGURE> | [5] | [
[
626,
629
]
] | https://openalex.org/W3026446890 |
64d69187-7a1a-4281-8336-e682ab994d65 | Traditional image inpainting methods were based on the exemplar-search approach, which divides image into patches to refill missing areas with other patches according to similarity computations such as PatchMatch [1]}. Recently, progressive improvement of deep learning based generative models have demonstrated high feasibility for image synthesis. Especially GAN [2]} demonstrates brilliant performance in image inpainting tasks. Context Encoders (CE) [3]} adopted encoder-decoder based GAN for image inpainting and Globally and Locally (GL) [4]} incorporates global and local generators to maintain pixel consistency of output images. Contextual Attention (CA) [5]} imitated the traditional patch-based method using GAN to take advantage of the basic concept of conventional exemplar-based methods. However, CE [3]}, GL [4]} and CA [5]} have limitations on refilling irregular regions because of their local region based discriminators. Since they are usually specialized in reconstructing rectangular masks, images with free-shaped masks will decrease the quality of outputs. To tackle this limitations, recent inpainting approaches tend to remove local discriminator on architecture [9]}.
| [3] | [
[
454,
457
],
[
814,
817
]
] | https://openalex.org/W2963420272 |
7b18da8d-5f3e-4c13-af18-0a7143549d35 | Traditional image inpainting methods were based on the exemplar-search approach, which divides image into patches to refill missing areas with other patches according to similarity computations such as PatchMatch [1]}. Recently, progressive improvement of deep learning based generative models have demonstrated high feasibility for image synthesis. Especially GAN [2]} demonstrates brilliant performance in image inpainting tasks. Context Encoders (CE) [3]} adopted encoder-decoder based GAN for image inpainting and Globally and Locally (GL) [4]} incorporates global and local generators to maintain pixel consistency of output images. Contextual Attention (CA) [5]} imitated the traditional patch-based method using GAN to take advantage of the basic concept of conventional exemplar-based methods. However, CE [3]}, GL [4]} and CA [5]} have limitations on refilling irregular regions because of their local region based discriminators. Since they are usually specialized in reconstructing rectangular masks, images with free-shaped masks will decrease the quality of outputs. To tackle this limitations, recent inpainting approaches tend to remove local discriminator on architecture [9]}.
| [4] | [
[
544,
547
],
[
823,
826
]
] | https://openalex.org/W2738588019 |
f63a1da9-3bee-4629-be9a-b3f0969fcbc9 | Traditional image inpainting methods were based on the exemplar-search approach, which divides image into patches to refill missing areas with other patches according to similarity computations such as PatchMatch [1]}. Recently, progressive improvement of deep learning based generative models have demonstrated high feasibility for image synthesis. Especially GAN [2]} demonstrates brilliant performance in image inpainting tasks. Context Encoders (CE) [3]} adopted encoder-decoder based GAN for image inpainting and Globally and Locally (GL) [4]} incorporates global and local generators to maintain pixel consistency of output images. Contextual Attention (CA) [5]} imitated the traditional patch-based method using GAN to take advantage of the basic concept of conventional exemplar-based methods. However, CE [3]}, GL [4]} and CA [5]} have limitations on refilling irregular regions because of their local region based discriminators. Since they are usually specialized in reconstructing rectangular masks, images with free-shaped masks will decrease the quality of outputs. To tackle this limitations, recent inpainting approaches tend to remove local discriminator on architecture [9]}.
| [5] | [
[
664,
667
],
[
835,
838
]
] | https://openalex.org/W3043547428 |
c8b04b9f-4157-4698-83dc-124e9b9502bc | Partial conv [1]} did not employ GAN for inpainting, but solved the problem of generalization on irregular masks. It propose rule-based binary mask which is updated layer by layer in encoder-decoder network and showed high feasibility of refilling irregular masks. This mask-based inpainting approach is advanced in Gated conv [2]} by adopting GAN and replacing rule-based mask with learnable mask. Both Partial conv [1]} and Gated conv [2]} put forward a mask-based weights map for feature maps in the decoding process, similar to attention map [5]} based method.
| [1] | [
[
13,
16
],
[
417,
420
]
] | https://openalex.org/W2798365772 |
61a1fb55-80c9-4bbf-adf8-1209a3e80e9c | Partial conv [1]} did not employ GAN for inpainting, but solved the problem of generalization on irregular masks. It propose rule-based binary mask which is updated layer by layer in encoder-decoder network and showed high feasibility of refilling irregular masks. This mask-based inpainting approach is advanced in Gated conv [2]} by adopting GAN and replacing rule-based mask with learnable mask. Both Partial conv [1]} and Gated conv [2]} put forward a mask-based weights map for feature maps in the decoding process, similar to attention map [5]} based method.
| [2] | [
[
327,
330
],
[
437,
440
]
] | https://openalex.org/W2982763192 |
311a7cb3-6c20-4889-8f40-e6f5a14039b6 | Partial conv [1]} did not employ GAN for inpainting, but solved the problem of generalization on irregular masks. It propose rule-based binary mask which is updated layer by layer in encoder-decoder network and showed high feasibility of refilling irregular masks. This mask-based inpainting approach is advanced in Gated conv [2]} by adopting GAN and replacing rule-based mask with learnable mask. Both Partial conv [1]} and Gated conv [2]} put forward a mask-based weights map for feature maps in the decoding process, similar to attention map [5]} based method.
| [5] | [
[
546,
549
]
] | https://openalex.org/W2804078698 |
2487d61a-b68b-42ed-8e2a-a145fcf52944 | The goal of generator \(G\) is to fill missing parts with appropriate contents by understanding the input image \(x\) (encoding) and synthesizing the output image \(G(x)\) (decoding). Fig. REF describes the overall architecture of generator \(G\) . The coarse reconstruction stage begins by filling pixels with a rough texture. The DAM reconstruction then uses DAM blocks to restore the coarse output \({G}_{C}(x)\) with detailed contents. We defined the residual convolution layer by combining residual block [1]} and convolution layer, and we adopted concatenation-based skip-connection [2]} and dilated convolution [3]} in the middle of the generator. Skip-connections have a notable effect on reducing vanishing gradient problems and maintaining spatial information of reconstructed images, and dilated convolution increases the receptive field to enhance the efficiency of the computations.
| [1] | [
[
515,
518
]
] | https://openalex.org/W2194775991 |
589e2785-8407-4215-96bd-9d561d799c2b | The goal of generator \(G\) is to fill missing parts with appropriate contents by understanding the input image \(x\) (encoding) and synthesizing the output image \(G(x)\) (decoding). Fig. REF describes the overall architecture of generator \(G\) . The coarse reconstruction stage begins by filling pixels with a rough texture. The DAM reconstruction then uses DAM blocks to restore the coarse output \({G}_{C}(x)\) with detailed contents. We defined the residual convolution layer by combining residual block [1]} and convolution layer, and we adopted concatenation-based skip-connection [2]} and dilated convolution [3]} in the middle of the generator. Skip-connections have a notable effect on reducing vanishing gradient problems and maintaining spatial information of reconstructed images, and dilated convolution increases the receptive field to enhance the efficiency of the computations.
| [2] | [
[
594,
597
]
] | https://openalex.org/W1901129140 |
635a17e8-f207-4696-8943-e86350a5d9ab | The goal of generator \(G\) is to fill missing parts with appropriate contents by understanding the input image \(x\) (encoding) and synthesizing the output image \(G(x)\) (decoding). Fig. REF describes the overall architecture of generator \(G\) . The coarse reconstruction stage begins by filling pixels with a rough texture. The DAM reconstruction then uses DAM blocks to restore the coarse output \({G}_{C}(x)\) with detailed contents. We defined the residual convolution layer by combining residual block [1]} and convolution layer, and we adopted concatenation-based skip-connection [2]} and dilated convolution [3]} in the middle of the generator. Skip-connections have a notable effect on reducing vanishing gradient problems and maintaining spatial information of reconstructed images, and dilated convolution increases the receptive field to enhance the efficiency of the computations.
| [3] | [
[
623,
626
]
] | https://openalex.org/W2412782625 |
7106f6f6-e877-49cd-a447-5a11cdbcfcca | Discriminator \(D\) serves as a criticizer that distinguishes between real and synthesized images. Adversarial training between \(G\) and \(D\) can further improve the quality of synthesized image. Because local discriminator has critical limitations on handling irregular mask as mentioned in section 2., we use one global discriminator for adversarial training our model. We employed the global discriminator from CA [1]}.
| [1] | [
[
422,
425
]
] | https://openalex.org/W3043547428 |
2d314833-f297-4066-9ce0-7d5924329d78 | Similar to fakeness prediction in [1]}, fakeness map \({M}_{i}\) is produced through 1x1 convolutional filters and sigmoid function from feature \({F}_{i}\) . Then, we can use \({M}_{i}\) as an attention map like [2]}. After element-wise multiplication of \({M}_{i} \otimes {F}_{i}\) , the output feature \({F^{\prime }}_{i}\) is obtained. Then element-wise sum \({F}_{i} \oplus {F^{\prime }}_{i}\) becomes the final output \({T}_{i-1}\) , which is upsampled and passed to the upper layer in the decoder. Fakeness map \({M}_{i}\) is trainable dynamically in each layer from decoder using DAM loss \(\mathcal {L}_{DAM}\) , which is expressed in section 3.
| [2] | [
[
215,
218
]
] | https://openalex.org/W2804078698 |
89439ebe-58df-4219-9ad6-ff5934c03876 | Our model was trained on two datasets: CelebA-HQ and [1]} Places2 [2]}. We randomly divided the 30,000 images in CelebA-HQ dataset into a training set of 27,000 images and a validation set of 3,000 images. In Places2 dataset, we select same categories as [3]} in training set and tested our model on validation set. All images are resized to 128 \(\times \) 128.
| [1] | [
[
53,
56
]
] | https://openalex.org/W2962760235 |
12a97c61-29f3-46b3-af9c-4f5747670de4 | Our model was trained on two datasets: CelebA-HQ and [1]} Places2 [2]}. We randomly divided the 30,000 images in CelebA-HQ dataset into a training set of 27,000 images and a validation set of 3,000 images. In Places2 dataset, we select same categories as [3]} in training set and tested our model on validation set. All images are resized to 128 \(\times \) 128.
| [2] | [
[
66,
69
]
] | https://openalex.org/W2732026016 |
694fed95-3aaa-45ae-9911-6fb09a6cebc8 | Our model was trained on two datasets: CelebA-HQ and [1]} Places2 [2]}. We randomly divided the 30,000 images in CelebA-HQ dataset into a training set of 27,000 images and a validation set of 3,000 images. In Places2 dataset, we select same categories as [3]} in training set and tested our model on validation set. All images are resized to 128 \(\times \) 128.
| [3] | [
[
255,
258
]
] | https://openalex.org/W3175375202 |
cd10e4d4-0796-4156-b537-a717d2b1c4f2 | To prepare input images for our model, we defined the centered mask and random mask. The centered mask has 64 \(\times \) 64 size fixed in the center of the image, and the random mask has an irregular shape following the mask generation approach in [1]}. We used an ADAM optimizer [2]} in this experiment, and hyper-parameters are set to \({\lambda }_{re}=1, {\lambda }_{adv}=0.001\) and \({\lambda }_{DAM}=0.005\) .
| [2] | [
[
282,
285
]
] | https://openalex.org/W2964121744 |