_id | text | label
---|---|---|
0885404c-625f-489e-938c-fb31d2e719ba | To address the shortcomings of existing approaches, we introduce a novel 3D object detection model – Range Sparse Net (RSN) – which boosts 3D detection accuracy and efficiency by combining the advantages of methods based on both dense range images and grids. RSN first applies a lightweight 2D convolutional network to efficiently learn semantic features from the high-resolution range image. Unlike existing range image methods, which regress boxes directly from their underlying features, RSN is trained for high-recall foreground segmentation. In a subsequent stage, sparse convolutions are applied only on the predicted foreground voxels and their learned range image features in order to accurately regress 3D boxes. A configurable sparse convolution backbone and a customized CenterNet [1]} head designed for processing sparse voxels are introduced to enable end-to-end, efficient, accurate object detection without non-maximum suppression. Figure REF summarizes the main gains obtained with RSN models compared to others on the WOD validation set, demonstrating RSN's efficiency and accuracy.
| i |
04a2f9a1-5be5-4488-9616-ea56c4ef4457 | RSN is a novel multi-view fusion method, as it transfers information from the perspective view (range image) to the 3D view (sparse convolution on the foreground points). Its fusion approach differs from existing multi-view detection methods [1]}, [2]} in that 1) RSN's first stage directly operates on the high-resolution range image, while past approaches [1]}, [2]} perform voxelization (in a cylindrical or spherical coordinate system) that may lose some resolution, especially for small objects at a distance; and 2) RSN's second stage processes only 3D points selected as foreground by the first stage, which yields improvements in both feature quality and efficiency.
| i |
a51ee77f-8d8b-4d51-8034-b898f7d065d5 | RSN's design combines several insights that make the model very efficient. The initial stage is optimized to rapidly discriminate foreground from background points, a task that is simpler than full 3D object detection and allows a lightweight 2D image backbone to be applied to the range image at full resolution. The downstream sparse convolution processing is only applied on points that are likely to belong to a foreground object, which leads to additional, significant savings in compute. Furthermore, expensive postprocessing such as non-maximum suppression is eliminated by gathering local-maximum center-ness points on the output, similar to CenterNet [1]}.
| i |
c13763bf-01e3-4ce6-988c-2591b457a333 | We propose a simple, efficient and accurate 3D LiDAR detection model RSN, which utilizes LiDAR range images to perform foreground object segmentation, followed by sparse convolutions to efficiently process the segmented foreground points to detect objects.
| i |
0cc9a013-01ef-4b52-9bfd-68aafa3ccb7c | In experiments on the Waymo Open Dataset [1]} (WOD), we demonstrate state-of-the-art accuracy and efficiency for vehicle and pedestrian detection. Experiments on an internal dataset further demonstrate RSN's scalability for long-range object detection.
| i |
35eb2857-3471-40a2-8d3b-3894a8b04ce6 | We conduct ablation studies to examine the effectiveness of range image features and the impact of aspects like foreground point selection thresholds, or end-to-end model training, on both latency and accuracy.
Related Work
LiDAR Data Representation
There are four major LiDAR data representations for 3D object detection: voxel grids, point sets, range images, and hybrids.
Voxel grid based methods. 3D points are divided into a grid of voxels. Each voxel is encoded with hand-crafted metrics such as voxel feature means and covariances.
Vote3Deep [1]} was the first to apply a deep network composed of sparse 3D convolutions to 3D detection. They also proposed an \(L_1\) penalty to favour sparsity in deeper layers.
The voxels can be scattered to a pseudo-image which can be processed by standard image detection architectures. MV3D [2]}, PIXOR [3]} and Complex YOLO [4]} are notable models based on this approach.
VoxelNet [5]} applied PointNet [6]} in each voxel to avoid handcrafted voxel features. PointPillars [7]} introduced 2D pillars to replace 3D voxels, boosting model efficiency. For small enough 3D voxel sizes, the PointNet can be removed if 3D sparse convolutions are used. Notable examples based on this approach include Second [8]} and PVRCNN [9]}.
There are three major drawbacks to voxel based methods. 1) Voxel size is constant at all ranges, which limits the model's capability at distance, where larger receptive fields are usually needed. 2) The requirement of a full 3D grid poses a limitation for long range, since both complexity and memory consumption scale quadratically or cubically with the range. Sparse convolutions can be applied to improve scalability but are usually still limited by the large number of voxels. 3) The voxel representation has a limited resolution due to the scalability issue mentioned above.
Point set based methods. This line of methods treats point clouds as unordered sets. Most approaches are based on the seminal PointNet and its variants [6]}, [11]}. FPointNet [12]} detects objects from a cropped point cloud given by 2D proposals obtained from images; PointRCNN [13]} proposes objects directly from each point; STD [14]} relies on a sparse-to-dense strategy for better proposal refinement; DeepHough [15]} explores deep Hough voting to better group points before generating box proposals. Although these methods have the potential to scale better with range, they lag behind the quality of voxel methods. Moreover, they require nearest neighbor search on the input, scaling with the number of points, which can be costly.
Range image based methods.
Despite being a native, dense representation for 3D points captured from a single viewpoint (e.g., by LiDAR), prior work on using 2D range images is not extensive. LaserNet [16]} applied a traditional 2D convolution network to the range image to regress boxes directly. RCD-RCNN [17]} pursued range-conditioned dilation to augment traditional 2D convolutions, followed by a second stage to refine the proposed range-image boxes, a strategy also used by Range-RCNN [18]}. Features learned from range images alone are very efficient to compute with 2D convolutions, but they are less effective at handling occlusions, accurately localizing objects, and regressing object sizes, which usually require more expressive 3D features.
Hybrid methods. MultiView [19]} fuses features learned from voxels in both spherical and Cartesian coordinates to mitigate the limited long-range receptive fields resulting from the fixed-voxel discretization in grid based methods. Pillar-MultiView [20]} improves [19]} by further projecting the fused spherical and Cartesian features to the bird's-eye view, followed by additional convolution processing to produce stronger features. These methods face similar scalability issues as voxel approaches.
Object Detection Architectures
Typical two-stage detectors [22]}, [23]}, [24]}, [25]} generate a sparse set of regions of interest (RoIs) and classify each of them by a network. PointRCNN [13]}, PVRCNN [9]}, RCD-RCNN [17]} share similar architectures with Faster-RCNN but rely on different region proposal networks designed for different point cloud representations. Single-stage detectors were popularized by the introduction of YOLO [29]}, SSD [30]} and RetinaNet [31]}. Similar architectures are used to design single stage 3D point cloud methods [5]}, [7]}, [8]}, [19]}, [20]}. These achieve competitive accuracy compared to two stage methods such as PVRCNN [9]} but have much lower latency. Keypoint-based architectures such as CornerNet [38]} and CenterNet [39]} enable end to end training without non-maximum-suppression. AFDet [40]} applies a CenterNet-style detection head to a PointPillars-like detector for 3D point clouds.
Our proposed RSN method also relies on two stages. However, the first stage performs segmentation rather than box proposal estimation, and the second stage detects objects from segmented foreground points rather than performing RoI refinement. RSN adapts the CenterNet detection head to sparse voxels.
Range Sparse Net
The main contribution of this work is the Range Sparse Net (RSN) architecture (Fig. REF ). RSN accepts raw LiDAR range images [41]} as input to an efficient 2D convolution backbone that extracts range image features. A segmentation head is added to process the range image features and separate foreground points (points inside ground truth objects) from background points. Unlike traditional semantic segmentation, this network emphasizes recall over high precision. We select foreground points based on the segmentation result.
The selected foreground points are further voxelized and fed into a sparse convolution network. These sparse convolutions are very efficient because we only need to operate on a small number of foreground points. At the end, we apply a modified CenterNet [39]} head to regress 3D boxes efficiently without non-maximum-suppression.
<FIGURE>
Range Image Feature Extraction (RIFE)
Range images are a native dense representation of the data captured by LiDAR sensors. Our input range images contain range, intensity and elongation channels, where range is the distance from LiDAR to the point at the time the point is collected, while intensity and elongation are LiDAR return properties which can be replaced or augmented with other LiDAR specific signals. The channel values of the input range images are normalized by clipping and rescaling to \([0, 1]\) .
A 2D convolution net is applied on the range image to simultaneously learn range image features and perform foreground segmentation.
We adopt a lightweight U-Net [43]} with its structure shown in Fig. REF . Each \(D(L,C)\) downsampling block contains \(L\) resnet [44]} blocks, each with \(C\) output channels; within each downsampling block, the first resnet block has stride 2. Each \(U(L, C)\) block contains 1 upsampling layer and \(L\) resnet blocks, all with stride 1. The upsampling layer consists of a \(1\times 1\) convolution followed by a bilinear interpolation.
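As a concrete reading of this description, a minimal tf.keras sketch of the \(D(L,C)\) and \(U(L,C)\) blocks is given below. The \(3\times 3\) kernel size, the absence of normalization layers, and the concatenation of encoder skip connections are assumptions not specified above, so this is an illustration rather than the exact RIFE backbone.

```python
from tensorflow.keras import layers

def resnet_block(x, channels, stride=1):
    # Basic 2D residual block; kernel sizes and activations are assumptions.
    shortcut = x
    if stride != 1 or x.shape[-1] != channels:
        shortcut = layers.Conv2D(channels, 1, strides=stride, padding="same")(x)
    y = layers.Conv2D(channels, 3, strides=stride, padding="same", activation="relu")(x)
    y = layers.Conv2D(channels, 3, padding="same")(y)
    return layers.ReLU()(layers.Add()([shortcut, y]))

def down_block(x, L, C):
    # D(L, C): L resnet blocks with C output channels, the first with stride 2.
    x = resnet_block(x, C, stride=2)
    for _ in range(L - 1):
        x = resnet_block(x, C)
    return x

def up_block(x, skip, L, C):
    # U(L, C): one upsampling layer (1x1 conv + bilinear interpolation) followed by
    # L stride-1 resnet blocks; concatenating the encoder skip connection is a
    # typical U-Net choice assumed here.
    x = layers.Conv2D(C, 1, padding="same")(x)
    x = layers.UpSampling2D(size=2, interpolation="bilinear")(x)
    x = layers.Concatenate()([x, skip])
    for _ in range(L):
        x = resnet_block(x, C)
    return x
```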
<FIGURE>
Foreground Point Selection
To maximize efficiency through sparsity in the downstream processing, the output of this 2D convolutional network is an ideal place to reduce the input point cloud to only the points most likely to belong to an object. Here, a \(1\times 1\) convolutional layer performs pixelwise foreground classification on the learned range image features from §REF . This layer is trained using the focal loss [31]} with ground truth labels derived from 3D bounding boxes by checking whether the point corresponding to each pixel lies inside any box.
\(L_{\textrm {seg}} = \frac{1}{P}\sum _{i}{L_{i}},\)
where \(P\) is the total number of valid range image pixels and \(L_i\) is the focal loss for point \(i\) . Points with foreground score \(s_i\) greater than a threshold \(\gamma \) are selected.
As false positives can be removed in the later sparse point feature extraction phase (§REF ) but false negatives cannot be recovered, the foreground threshold is selected to achieve high recall and acceptable precision.
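A minimal numpy sketch of this selection step is shown below. The focal loss hyperparameters (\(\alpha =0.25\) , focal \(\gamma =2\) ) are the common RetinaNet defaults rather than values taken from the text, and the dense per-pixel array layout is a simplifying assumption; the default threshold 0.15 matches the vehicle setting reported later.

```python
import numpy as np

def select_foreground(points, scores, valid_mask, fg_labels,
                      gamma=0.15, alpha=0.25, focal_gamma=2.0):
    # points: (H, W, 3+) per-pixel 3D points; scores: predicted foreground
    # probabilities; valid_mask: pixels with a valid LiDAR return;
    # fg_labels: 1 if the pixel's point falls inside any ground truth box.
    s = np.clip(scores[valid_mask], 1e-6, 1.0 - 1e-6)
    y = fg_labels[valid_mask].astype(bool)
    pt = np.where(y, s, 1.0 - s)                      # probability of the true class
    a = np.where(y, alpha, 1.0 - alpha)
    L_seg = np.mean(-a * (1.0 - pt) ** focal_gamma * np.log(pt))
    keep = valid_mask & (scores > gamma)              # high-recall foreground selection
    return points[keep], L_seg
```

Keeping the threshold low preserves recall; the false positives that slip through are tolerable because the downstream sparse stage can still discard them.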
Sparse Point Feature Extraction (SPFE)
We apply dynamic voxelization [19]} on the selected foreground points. Similar to PointPillars [7]}, we append to each point \(p\) the features \(p - m, \textbf {\textrm {var}}, p - c\) , where \(m\) and \(\textbf {\textrm {var}}\) are the arithmetic mean and covariance of the point's voxel and \(c\) is the voxel center. Voxel sizes are denoted as \(\Delta _{x,y,z}\) along each dimension. When using a pillar style voxelization where 2D sparse convolution is applied, \(\Delta _z\) is set to \(+\infty \) . The selected foreground points are encoded into sparse voxel features, which can optionally be further processed by a PointNet [6]}.
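The sketch below illustrates this pillar-style dynamic voxelization in numpy. Using the per-dimension variance in place of the full covariance and restricting the voxel-center offset to \(x, y\) (since \(\Delta _z = +\infty \) ) are simplifying assumptions.

```python
import numpy as np

def append_voxel_features(points, voxel_size_xy, range_min_xy):
    # points: (N, 3+) with columns 0:3 = x, y, z of the selected foreground points.
    voxel_size_xy = np.asarray(voxel_size_xy, dtype=np.float64)
    range_min_xy = np.asarray(range_min_xy, dtype=np.float64)
    coords = np.floor((points[:, :2] - range_min_xy) / voxel_size_xy).astype(np.int64)
    inv = np.unique(coords, axis=0, return_inverse=True)[1].reshape(-1)
    n_vox = inv.max() + 1
    counts = np.bincount(inv, minlength=n_vox).astype(np.float64)[:, None]
    mean = np.zeros((n_vox, 3))
    np.add.at(mean, inv, points[:, :3])
    mean /= counts
    sq = np.zeros((n_vox, 3))
    np.add.at(sq, inv, points[:, :3] ** 2)
    var = sq / counts - mean ** 2                     # per-voxel variance (diagonal)
    center_xy = (coords + 0.5) * voxel_size_xy + range_min_xy
    appended = np.concatenate(
        [points, points[:, :3] - mean[inv], var[inv], points[:, :2] - center_xy], axis=1)
    return appended, coords                           # per-point features + voxel indices
```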
A 2D or 3D sparse convolution network (for pillar style, or 3D type voxelization, respectively) is applied on the sparse voxels. Fig. REF illustrates the net building blocks and example net architectures used for vehicle and pedestrian detection. More network architecture details can be found in the Appendix .
<FIGURE>
Box Regression
We use a modified CenterNet [39]}, [40]} head to regress boxes from point features efficiently. The feature map consists of voxelized coordinates \(V = \lbrace v | v \in \mathbb {N}_{0}^d\rbrace \) , where \(d\in \lbrace 2,3\rbrace \) depending on whether 2D or 3D SPFE is used. We scale and shift it back to the raw point Cartesian coordinate as \(\tilde{V} = \lbrace \tilde{v} | \tilde{v} \in R^d\rbrace \) . The ground truth heatmap for any \(\tilde{v} \in \tilde{V}\) is computed as \(h = \max \lbrace \exp (-\frac{||\tilde{v} - b_c|| - ||\tilde{V} - b_c||}{\sigma ^2}) | b_c \in B_c(\tilde{v})\rbrace \) where \(B_c(\tilde{v})\) is the set of centers of the boxes that contain \(\tilde{v}\) . \(h = 0\) if \(|B_c(\tilde{v})| = 0\) . This is illustrated in Fig. REF . \(\sigma \) is a per class constant. We use a single fully connected layer to predict heatmap and box parameters. The heatmap is regressed with a penalty-reduced focal loss [39]}, [31]}.
\(\begin{split}L_{\textrm {hm}} = -\frac{1}{N}\sum _{\tilde{p}}\lbrace (1 - \tilde{h})^\alpha \log (\tilde{h})I_{h > 1 - \epsilon } + \\ (1-h)^\beta \tilde{h}^\alpha \log (1-\tilde{h})I_{h \le 1 - \epsilon }\rbrace ,\end{split}\)
where \(\tilde{h}\) and \(h\) are the predicted and ground truth heatmap values respectively. \(\epsilon \) , added for numerical stability, is set to \(1e-3\) . We use \(\alpha =2\) and \(\beta =4\) in all experiments, following [39]}, [38]}.
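The following numpy sketch makes the heatmap construction and its loss concrete. We read \(||\tilde{V} - b_c||\) as the minimum distance from any voxel center to the box center (so the closest voxel of each box receives \(h = 1\) ), and we normalize the loss by the number of positive pixels as in CenterNet; both readings are assumptions.

```python
import numpy as np

def centerness_heatmap(voxel_centers, box_centers, inside, sigma):
    # voxel_centers: (N, d) Cartesian centers of the sparse feature-map voxels;
    # box_centers: (B, d); inside[i, j] is True if voxel i lies inside box j.
    if len(box_centers) == 0:
        return np.zeros(len(voxel_centers))
    dist = np.linalg.norm(voxel_centers[:, None, :] - box_centers[None, :, :], axis=-1)
    score = np.exp(-(dist - dist.min(axis=0, keepdims=True)) / sigma ** 2)
    return np.where(inside, score, 0.0).max(axis=1)  # h = 0 for voxels inside no box

def heatmap_loss(h_pred, h_gt, alpha=2.0, beta=4.0, eps=1e-3):
    # Penalty-reduced focal loss on the predicted heatmap.
    h_pred = np.clip(h_pred, 1e-6, 1.0 - 1e-6)
    pos = h_gt > 1.0 - eps
    pos_term = (1.0 - h_pred) ** alpha * np.log(h_pred) * pos
    neg_term = (1.0 - h_gt) ** beta * h_pred ** alpha * np.log(1.0 - h_pred) * (~pos)
    return -(pos_term.sum() + neg_term.sum()) / max(pos.sum(), 1)
```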
<FIGURE>
The 3D boxes are parameterized as \(b = \lbrace d_x, d_y, d_z, l, w, h, \theta \rbrace \) where \(d_x, d_y, d_z\) are the box center offsets relative to the voxel centers. Note that \(d_z\) is the same as the absolute box \(z\) center if a 2D point feature extraction backbone is used (see §REF ). \(l, w, h, \theta \) are the box length, width, height and heading. A bin loss [13]} is applied to regress the heading \(\theta \) . The other box parameters are directly regressed under smooth L1 losses. An IoU loss [56]} is added to further boost box regression accuracy. Box regression losses are only active on the feature map pixels that have ground truth heatmap values greater than a threshold \(\delta _1\) .
\(L_{\theta _i} &= L_{bin}(\theta _i, \tilde{\theta }_i), \\L_{b_{i} \backslash \theta _{i}} &= \textrm {SmoothL1}(b_{i} \backslash \theta _{i} - \tilde{b_{i}} \backslash \tilde{\theta }_i), \\L_{\textrm {box}} &= \frac{1}{N} \sum _i {(L_{\theta _i} + L_{b_i \backslash \theta _{i}} + L_{\textrm {iou}_{i}}) I_{h_i > \delta _1}},\)
where \(\tilde{b}_i\) , \(b_i\) are the predicted and ground truth box parameters respectively, \(\tilde{\theta }_i\) , \(\theta _i\) are the predicted and ground truth box heading respectively. \(h_i\) is the ground truth heatmap value computed at feature map pixel \(i\) .
The net is trained end to end with the total loss defined as
\(L = \lambda _1 L_{\textrm {seg}} + \lambda _2 L_{\textrm {hm}} + L_{\textrm {box}}\)
We run a sparse submanifold max pooling operation on the sparse feature map voxels that have heatmap prediction greater than a threshold \(\delta _2\) . Boxes corresponding to local maximum heatmap predictions are selected.
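As a dense 2D stand-in for this sparse submanifold max pooling step, the sketch below keeps feature-map pixels whose predicted heatmap value exceeds \(\delta _2\) and equals the local maximum in a \(3\times 3\) window; the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def extract_peaks(heatmap, delta2=0.2):
    # Boxes are read off at local-maximum heatmap pixels, so no NMS is needed.
    local_max = maximum_filter(heatmap, size=3, mode="constant", cval=0.0)
    peaks = (heatmap > delta2) & (heatmap == local_max)
    return np.argwhere(peaks)                         # (row, col) indices of selected boxes
```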
Temporal Fusion
Existing range image based detection methods [16]}, [17]} are not friendly to temporal fusion because range images are constructed while the self-driving car (SDC) moves. Stacking range images directly gives little benefit for detection performance due to ego-motion. Removing ego-motion from the range images is also not optimal, because reconstructing the range image at a different frame results in non-trivial quantization errors.
Temporal RSN takes a sequence of range images from consecutive frames as input, as shown in Fig. REF . RIFE is applied on each range image to segment foreground points and extract range image features. Then we transform all the selected points to the latest frame to remove ego-motion. During the SPFE phase, we append to each point voxel features computed from its own frame instead of from all frames; this works better because it avoids mixing points from different frames together during voxelization. In addition, we append the time difference in seconds w.r.t. the latest frame to each point to differentiate points from different frames. The selected foreground points from all frames are then processed by the same SPFE backbone as in single-frame models.
Experiments
We introduce the RSN implementation details and illustrate its efficiency and accuracy in multiple experiments. Ablation studies are conducted to understand the importance of various RSN components.
Waymo Open Dataset
We primarily benchmark on the challenging Waymo Open Dataset (WOD) [41]}. WOD released its raw data directly in a high quality range image format, which makes it a better fit for building range image models. The dataset contains 1150 sequences in total, split into 798 training, 202 validation, and 150 test sequences. Each sequence contains about 200 frames, where each frame captures the full 360 degrees around the ego-vehicle, resulting in a range image of dimension \(64\times 2650\) pixels. The dataset has one long range LiDAR with range capped at 75 meters and four near range LiDARs. We only used data from the long range LiDAR but still evaluated our results on the full range. In practice, RSN can be adapted to accept multiple LiDAR images as inputs.
Implementation Details
RSN is implemented in the Tensorflow framework [60]} with a sparse convolution implementation similar to [8]}. Pedestrians and vehicles are trained separately with different SPFEs (§REF ). They share the same RIFE (§REF ). We show results from three vehicle models (CarS, CarL, CarXL) and two pedestrian models (PedS, PedL), with network details described in §REF and the Appendix . Each model can be trained with single frame input (e.g. CarS_1f) or 3 frame input (e.g. CarS_3f). The input images are normalized by \(\min (v, m) / m\) where \(v\) is range, intensity and elongation, and \(m\) is 79.5, 2.0, 2.0, respectively. The last return is picked if there are multiple laser returns.
The foreground score cutoff \(\gamma \) in §REF is set to 0.15 for vehicle and 0.1 for pedestrian. The segmentation loss weight \(\lambda _1\) in Eq.REF is set to 400. The voxelization region is \([-79.5m, 79.5m]\times [-79.5m, 79.5m]\times [-5m, 5m]\) . The voxel sizes are set to 0.2 meter and 0.1 meter for vehicle model and pedestrian model respectively. Per object \(\sigma \) in the heatmap computation is set to 1.0 for vehicle and 0.5 for pedestrian. The heatmap loss weight \(\lambda _2\) is set to 4 in Eq. REF . The heatmap thresholds \(\delta _1\) , \(\delta _2\) in §REF are both set to 0.2. We use 12 and 4 bins in the heading bin loss in Eq. REF for heading regression for vehicle and pedestrian, respectively.
<TABLE><TABLE><TABLE><FIGURE>
Training and Inference
RSN is trained from scratch end-to-end using an ADAM optimizer [62]} on Tesla V100 GPUs. Different SPFE backbones are trained with the maximum possible batch sizes that fit the net in GPU memory. Single frame models are trained on 8 GPUs; 3-frame temporal models are trained on 16 GPUs. We adopted cosine learning rate decay, with the initial learning rate set to 0.006, 5k warmup steps starting at 0.003, and 120k steps in total. We observed that accuracy metrics such as AP fluctuate during training because the points selected for SPFE keep changing, although the networks always stabilize by the end of training. This input diversity to SPFE adds regularization to RSN. Layer normalization [63]} instead of batch normalization [64]} is used in the PointNet within each voxel because the number of foreground points varies a lot among input frames.
We rely on two widely adopted data augmentation strategies: random flipping along the X axis and global rotation around the Z axis with a random angle sampled from \([-\pi /4, \pi /4]\) , both applied to the selected foreground points.
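A numpy sketch of these two augmentations is given below, applied to foreground points and boxes parameterized as [cx, cy, cz, l, w, h, heading]. Interpreting "flipping along the X axis" as mirroring \(y \rightarrow -y\) (with the heading negated) is our assumption.

```python
import numpy as np

def augment(points, boxes, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < 0.5:                            # mirror across the X axis
        points[:, 1] *= -1.0
        boxes[:, 1] *= -1.0
        boxes[:, 6] *= -1.0
    theta = rng.uniform(-np.pi / 4, np.pi / 4)        # global rotation around Z
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    points[:, :2] = points[:, :2] @ rot.T
    boxes[:, :2] = boxes[:, :2] @ rot.T
    boxes[:, 6] += theta
    return points, boxes
```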
During inference, we reuse previously learned range features and segmentation results (the outputs of foreground point selection) such that the inference cost for temporal models remains similar to that of the single frame models.
Results
All detection results are measured using the official WOD evaluation detection metrics, which are BEV and 3D average precision (AP) and heading error weighted BEV and 3D average precision (APH) for L1 (easy) and L2 (hard) difficulty levels [41]}. The IoU threshold is set to 0.7 for vehicles and 0.5 for pedestrians. We show results on the validation set for all our models in Table REF and Table REF , and results on the official test set in Table REF . The latency numbers are obtained on Tesla V100 GPUs with float32 without TensorRT, except for PVRCNN, whose latency was obtained on a Titan RTX by the PVRCNN authors. In order to better show the latency improvement from our RSN model, NMS timing is not included in any of the baselines, because our efficient detection head can be adapted to most of the other baselines. We do not show timing for our single frame models, as their latency is bounded by that of their multi-frame counterparts.
Table REF shows that our single frame model CarS_1f is at least 3x more efficient than the baselines while still being more accurate than all single stage methods. Its temporal version boosts the accuracy further at negligible additional inference costs. CarXL_3f significantly outperforms all published methods. It also outperforms PVRCNN-WOD [66]}, the most accurate LiDAR only model submitted in the Waymo Open Dataset Challenge.
Table REF shows more significant improvements in both efficiency and accuracy for pedestrian detection. The efficient single frame model PedS_1f is significantly more accurate and efficient than all published single-stage baseline models. Its temporal version further improves accuracy. The less efficient model PedL_3f outperforms PVRCNN-WOD [66]}, while still being significantly more efficient than all baselines. We see additional efficiency gains on pedestrian detection compared with vehicle detection because there are far fewer pedestrian foreground points. Given the high resolution range image and the high recall foreground segmentation, our model is a great fit for real time small object detection.
Table REF shows that RSN ensemble outperforms the PVRCNN WOD challenge submission [66]} which is an ensemble of many models.
Fig. REF shows a few examples picked from the Waymo Open Dataset validation set to demonstrate the model quality in dealing with various hard cases such as a crowd of pedestrians, small objects with few points, large objects, and moving objects in the temporal model.
Foreground Point Selection Experiments
Foreground point selection is one of the major contributions in the RSN model that supports better efficiency. We conduct experiments by scanning the foreground selection threshold \(\gamma \) described in §REF . As shown in Fig. REF , there exists a score threshold \(\gamma \) that reduces model latency significantly with negligible impact on accuracy.
<FIGURE>
In practice, \(\gamma \) and \(\lambda _1\) in Eq. REF need to be set so that the selected foreground points have high recall and sufficient precision to achieve a good speedup. In our experiments, foreground precision/recall is 77.5%/99.6% for CarS_3f and 15.3%/97.6% for PedS_3f. We can start with a low \(\gamma \) and scan a few possible values of \(\lambda _1\) to pick one; we then grid search over a few values of \(\gamma \) .
Ablation study
In this section, we show additional ablation studies in order to gain insight into model design choices. All experiments in this section are conducted on our efficient models CarS_3f and on PedS_3f.
<TABLE>
Table REF shows that features learnt from the range image not only help segment foreground points, which supports model efficiency, but also improve model accuracy, as shown in row -RI. The accuracy improvement is larger for pedestrians because the high-resolution semantic features are especially helpful at long range. Gradients passed from SPFE to RIFE help detection accuracy, as shown in row -E2E. Temporally variant features \((x, y, z)\) with ego-motion removed hurt pedestrian detection accuracy, as shown in row +xyz. Detection accuracy drops if the heatmap normalization described in §REF is disabled, as shown in row -Norm.
Scalability
To further demonstrate RSN's scalability, we conducted experiments on an internal dataset collected from higher quality, longer range LiDARs. Here, the detection range is a square of size \([-250m, 250m] \times [-250m, 250m]\) centered at the SDC. This is beyond the memory capacity of PointPillars [7]} running on a Tesla V100 GPU. We trained RSN CarS_3f and a variant with RIFE and foreground point selection removed on this dataset. As shown in Table REF , RSN can scale to a significantly larger detection range with good accuracy and efficiency. This demonstrates that foreground sampling and range image features remain effective in the larger detection range.
<TABLE>
Conclusions
We have introduced RSN, a novel range image based 3D object detection method that can be trained end-to-end using LiDAR data. The network operates in the large detection range required for safe, high-speed driving. On the Waymo Open Dataset, we show that RSN outperforms all existing LiDAR-only methods by offering higher detection performance (AP/APH on both BEV and 3D) as well as faster running times. For future work, we plan to explore alternative detection heads and optimized SPFE in order to better take advantage of the sparsity of the foreground points.
Additional details on SPFE
SPFE is composed of the blocks illustrated in Fig. REF . PedL and CarL are illustrated in Fig. REF . Architecture details of PedS, CarS and CarXL can be found in Fig. REF . PedS, PedL, CarS and CarL use 2D sparse convolutions with the channel size of all convolutions set to 96. CarXL uses 3D sparse convolutions with the channel size of all convolutions set to 64. CarXL does not have a PointNet within each 3D voxel.
<FIGURE>
More Details on Temporal Fusion
1) Temporal RSN duplicates the RIFE (§REF ) and the Foreground Point Selection part (§REF ) for each temporal frame. As shown in Fig. REF , each branch shares weights and matches the architecture of the single-frame RSN. These branches are trained together, while during inference only the latest frame is computed and the other time steps reuse previous results.
2) After the segmentation branches, points are gathered into multiple sets of points \(P_{\delta _i}\) , where \(\delta _i\) is the frame time difference between frame 0 (the latest frame) and frame \(i\) , which is usually close to \(0.1 i\) seconds.
Each point \(p\) in \(P_{\delta _i}\) is augmented with \(p - m, \textbf {\textrm {var}}, p - c\) , \(\delta _i\) , and the features learned in the RIFE stage, where \(m\) and \(\textbf {\textrm {var}}\) are the voxel statistics computed from \(P_{\delta _i}\) . After this per-frame voxel feature augmentation, all the points are merged into one set \(P\) , followed by the normal voxelization and PointNet. The rest of the model is the same as in single frame models. 3) Given an input sequence \(F=\lbrace f_i|i=0, 1,...\rbrace \) , frames are re-grouped into \(\tilde{F} = \lbrace (f_i, f_{i-1}, ..., f_{i-k})|i=0, 1, ...\rbrace \) to train a \(k+1\) -frame temporal RSN model with target output for frame \(i\) . If \(i-k < 0\) , we reuse the last valid frame.
<FIGURE>
Ensemble Details
We provide additional description of the ensembling approach used to produce results highlighted in Table REF . We combine both data-level and test-time augmentation-based voting schemes:
We trained five copies of the proposed model, each using a disjoint subset of 80% of the original training data. For each of the trained models, we perform box prediction under five random point cloud augmentations, including random rotation and translation. This procedure yields 25 sets of results in total for each sample. We then use the box aggregation strategy proposed by Solovyev et al. (Weighted Boxes Fusion: ensembling boxes for object detection models), extended to 3D boxes with a yaw heading.
| i |
aba2b239-8422-4cdb-8083-6b4a5344537f | All detection results are measured using the official WOD evaluation detection metrics, which are BEV and 3D average precision (AP) and heading error weighted BEV and 3D average precision (APH) for L1 (easy) and L2 (hard) difficulty levels [1]}. The IoU threshold is set to 0.7 for vehicles and 0.5 for pedestrians. We show results on the validation set for all our models in Table REF and Table REF , and results on the official test set in Table REF . The latency numbers are obtained on Tesla V100 GPUs with float32 without TensorRT, except for PVRCNN, whose latency was obtained on a Titan RTX by the PVRCNN authors. In order to better show the latency improvement from our RSN model, NMS timing is not included in any of the baselines, because our efficient detection head can be adapted to most of the other baselines. We do not show timing for our single frame models, as their latency is bounded by that of their multi-frame counterparts.
| r |
480db525-4ce0-4736-b3f9-eddb12fda7d5 | Table REF shows that our single frame model CarS_1f is at least 3x more efficient than the baselines while still being more accurate than all single stage methods. Its temporal version boosts the accuracy further at negligible additional inference costs. CarXL_3f significantly outperforms all published methods. It also outperforms PVRCNN-WOD [1]}, the most accurate LiDAR only model submitted in the Waymo Open Dataset Challenge.
| r |
1200f29c-e8d4-4edc-bf2a-5c8a30adf961 | Table REF shows more significant improvements in both efficiency and accuracy for pedestrian detection. The efficient single frame model PedS_1f is significantly more accurate and efficient than all published single-stage baseline models. Its temporal version further improves accuracy. The less efficient model PedL_3f outperforms PVRCNN-WOD [1]}, while still being significantly more efficient than all baselines. We see additional efficiency gains on pedestrian detection compared with vehicle detection because there are far fewer pedestrian foreground points. Given the high resolution range image and the high recall foreground segmentation, our model is a great fit for real time small object detection.
| r |
5f690630-4b58-4438-908a-d264774552bf | Fig. REF shows a few examples picked from the Waymo Open Dataset validation set to demonstrate the model quality in dealing with various hard cases such as a crowd of pedestrians, small objects with few points, large objects, and moving objects in the temporal model.
| r |
0c6b7329-6f1c-42c8-a188-3cdc60a9f772 | We have introduced RSN, a novel range image based 3D object detection method that can be trained end-to-end using LiDAR data. The network operates in the large detection range required for safe, high-speed driving. On the Waymo Open Dataset, we show that RSN outperforms all existing LiDAR-only methods by offering higher detection performance (AP/APH on both BEV and 3D) as well as faster running times. For future work, we plan to explore alternative detection heads and optimized SPFE in order to better take advantage of the sparsity of the foreground points.
| d |
a78623ae-8662-43f8-8117-1b240b8b1223 | Lexicase-based parent selection algorithms have proven to be highly successful for finding effective solutions to test-based problems in genetic programming (GP) [1]}, [2]}, [3]}.
Lexicase selection's success is rooted in its ability to balance strong search space exploration with simultaneous exploitation.
That is, lexicase selection maintains meaningfully diverse populations [4]}, [5]} by promoting the coexistence of subpopulations that are each focused on different aspects of a problem (e.g., on different test cases or selection criteria) [6]}.
As such, lexicase selection algorithms are able to explore many promising problem-solving pathways in parallel, optimizing each until an overall solution is found.
| i |
870fe884-9979-4345-9ac1-2df14f6fc168 | Many genetic programming problems are multi-faceted where the quality of a candidate solution must be measured according to its performance on a set of test cases.
For such problems, we must decide how to combine performances across many test cases in order to select promising individuals to produce offspring for the next generation.
Traditional parent selection algorithms assess the quality of an individual by aggregating their performance on all test cases.
The lexicase selection algorithm, however, chooses each parent based on the relative performances of candidate solutions on random permutations of the test set.
Specifically, each time a parent is needed, the entire population is considered as candidates for selection, and the full set of test cases are shuffled; each test case is applied sequentially (in the given shuffled order) to the current set of candidates, removing all but the best candidates from consideration until only a single individual remains to be selected [1]}.
Because the ordering of test cases is different for each parent selection event, individuals that perform well on different subsets of problems are able to coexist [2]}.
Moreover, lexicase selection exerts strong selection pressure to optimize each subpopulation, as only the best candidates on different sequences of test cases are selected.
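A minimal Python sketch of a single lexicase selection event is shown below; `error(individual, case)` is a hypothetical callback returning an individual's error on a test case (lower is better), and the elitist filtering over a freshly shuffled case ordering follows the description above.

```python
import random

def lexicase_select(population, test_cases, error):
    candidates = list(population)
    cases = list(test_cases)
    random.shuffle(cases)                 # new random ordering for each selection event
    for case in cases:
        best = min(error(ind, case) for ind in candidates)
        candidates = [ind for ind in candidates if error(ind, case) == best]
        if len(candidates) == 1:
            break
    return random.choice(candidates)      # break any remaining ties at random
```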
| i |
0396ca7d-be22-4c80-b3d6-dffc0b7cb5ab | Indeed, the successes of the original lexicase selection algorithm have inspired numerous variants, each either specialized for solving different categories of problems or designed to address potential shortcomings of the original lexicase algorithm (e.g., computational efficiency).
Such variants include epsilon lexicase [1]}, [2]}, down-sampled lexicase [3]}, novelty-lexicase [4]}, ALPS lexicase [5]}, and batch-lexicase selection [6]}.
Many of these variants have been rigorously benchmarked on their problem-solving success and on their ability to maintain phenotypic and phylogenetic diversity [7]}, [8]}, [9]}, [10]}.
However, benchmarking is often performed in the context of a particular GP system and with the overall goal of measuring performance on challenging computational problems (e.g., program synthesis benchmark problems from [11]} and [12]}).
While such benchmarking is critical for understanding the real-world applicability of a selection scheme, the specific problems used do not always allow us to disentangle the particular pros and cons of each scheme [13]}.
For this paper, we focus on one important aspect of lexicase-based selection schemes: How do we isolate the exploration capabilities of lexicase selection and its variants?
| i |
fff60f43-3586-4952-8ad2-b99133147795 | We introduce an “exploration diagnostic” and use it to test how well a set of parent selection algorithms can explore a simple landscape with many uphill pathways of differing peak fitnesses.
Our exploration diagnostic allows for the total number of possible evolutionary pathways to be tuned, enabling practitioners to find where an algorithm's exploratory abilities begin to fall off.
First, we verify established expectations that lexicase selection better facilitates search space exploration than tournament selection, a more traditional selection algorithm.
Next, we evaluate lexicase selection on our exploration diagnostic with an increasing number of possible pathways to identify its exploratory limitations.
Finally, we apply our exploration diagnostic to four variants of lexicase selection: epsilon lexicase, down-sampled lexicase, cohort lexicase, and novelty-lexicase selection.
| i |
74909d73-f9c7-41f5-bed0-db6f8d3c15e9 | We find that lexicase selection drives performance improvement at each of the exploration diagnostic difficulty levels that we evaluated.
Lexicase selection finds nearly perfect solutions for fitness landscapes with a small number of pathways to be explored, and performance gradually declines as the number of possible evolutionary pathways increases.
Additionally, we show that lexicase selection can be sensitive to the ratio between population size and the number of test cases used for evaluating candidate solutions.
For small values of \(\epsilon \) , epsilon lexicase improves the exploratory capacity of lexicase selection.
Random subsampling via either down-sampled or cohort lexicase degrades exploratory capacity, but cohort partitioning better preserves lexicase's exploratory capacity than down-sampling.
Finally, we did not find compelling evidence that novelty-lexicase improves performance on the exploration diagnostic relative to standard lexicase selection; in fact, the addition of novelty test cases can substantially degrade lexicase's diagnostic performance.
| i |
a4b38150-3012-4055-b00f-77f4749b938f | In this work, we introduced a new diagnostic to investigate the exploratory limits of lexicase selection along with several of its variants: epsilon lexicase, down-sampled lexicase, cohort lexicase, and novelty-lexicase.
First, we verified well-established expectations that lexicase selection better facilitates search space exploration than tournament selection.
Across all exploration diagnostic difficulty levels (i.e., cardinalities), lexicase selection drove improvements in performance (Figure REF ), while tournament selection repeatedly failed to escape early local optima (Figure REF ).
As we increased the cardinality of the diagnostic, lexicase selection's specialist maintenance and overall performance waned.
Conditions with larger diagnostic cardinalities used more test cases to evaluate individuals, and as such had more possible specialists (i.e., niches).
Given a fixed population size, lexicase maintained a smaller fraction of possible specialists as the number of possible niches increased, which, in turn, decreased overall performance (Figure REF ).
| d |
3b845c9d-435f-497a-a19c-a005cffbd6e9 | Interestingly, we found that allocating a computational budget (i.e., candidate solution evaluations) toward increasing generations versus increasing population size is not necessarily a straightforward choice when using lexicase selection.
In our case, a larger population size enabled better specialist maintenance and ultimately higher performance on the exploration diagnostic with standard lexicase (Figure REF ).
This finding is interesting in light of [1]}'s work investigating the problem-solving benefits of down-sampled lexicase; on a suite of program synthesis problems, Helmuth and Spector found that some problems benefited from an increased population size (at the cost of running for fewer generations), some problems benefited from an increase in generations, and most problems were unaffected by their choice of increasing population size versus generations evaluated.
| d |
c129269d-80da-4bec-b991-54ef2e82fa31 | Overall, these results suggest that lexicase selection can be sensitive to expanding the set of test cases used for evaluation, especially if each test case uniquely represents a distinct, desirable trait.
Moreover, our results suggest the importance of more deeply examining the benchmark problems that we use and the characteristics of the search spaces that they represent.
Given a fixed computational budget, why do some problems benefit from running deeper evolutionary searches while others benefit from increased population sizes under lexicase selection?
For many problems, different categories of test cases have uneven representation in the test set.
We hypothesize that the distribution of test cases among categories plays a role in lexicase selection's success and the optimal balance between population size and depth of search (generations of evolution).
For example, if the number of test cases is similar to population size, lexicase selection may fail to maintain specialists on categories that are underrepresented in the test cases and instead favor overrepresented categories.
In future work, we will develop novel diagnostic tools for investigating the sensitivity of selection schemes to test case set composition.
| d |
3e2706e2-193b-4e70-b896-eab2a4e2afb4 | We found that each of the lexicase variants that we evaluated—epsilon lexicase, down-sampled lexicase, cohort lexicase, and novelty-lexicase—affected lexicase selection's exploratory capacity.
For small values of \(\epsilon \) , epsilon lexicase outperformed standard lexicase selection on the exploration diagnostic, while large values of \(\epsilon \) substantially degraded performance.
Surprisingly, we found that novelty-lexicase degrades performance on the exploration diagnostic relative to standard lexicase selection.
| d |
5a6448d9-7dbf-41d2-929c-8b3199fdd553 | Our experiments are also the first to demonstrate consequential differences between down-sampled and cohort lexicase selection, as previous work generally failed to distinguish the problem-solving performance of these two lexicase variants [1]}.
Cohort lexicase substantially outperformed down-sampled lexicase (Figure REF ).
Both down-sampled and cohort lexicase offer equivalent per-generation evaluation savings, so our results suggest that cohort partitioning may often be a better subsampling method than down-sampling for lexicase selection.
Future work should examine whether this difference between cohort partitioning and down-sampling holds across different selection schemes.
| d |
8044aecd-9fa6-48fb-a616-19ae9749a1a8 | Given equivalent computational budgets, we found that standard lexicase selection eventually outperforms both cohort and down-sampled lexicase on the exploration diagnostic (Figures REF and REF ).
This result diverges from recent benchmarking studies where subsampling substantially improved performance on a range of program synthesis problems [1]}, [2]}, [3]}.
Future work will develop diagnostic problems to help identify when subsampling (e.g., via either cohort partitioning or down-sampling) is likely to improve versus impede lexicase selection's performance.
| d |
0aa3487c-c0fd-4315-ada6-70fc3d74313c | In each of our experiments, we focused our analyses on performance and activation position diversity maintenance.
Future work should more deeply examine the evolutionary histories of evolving populations using phylodiversity metrics [1]}.
Along with this, other parameter values and configurations of each of the variants evaluated in this work could be tested in order to develop a more complete understanding of how parameterization affects exploration.
| d |
cc7fd059-9433-4beb-a611-ff43d941a56d | We intend for this work to demonstrate how diagnostics (e.g., the exploration diagnostic introduced here) can be valuable tools for evaluating the pros and cons of different selection schemes.
We plan to implement a larger suite of selection scheme diagnostics, each targeted toward evaluating a particular aspect of problem-solving.
Such diagnostics will complement conventional benchmarking experiments in our community's effort to understand how different selection schemes steer evolutionary search.
| d |
9559474b-5ff7-4e4c-967c-a973e99eb7ac | Algorithmic scalability is an important component of modern machine learning. Making high quality inference on large, feature rich datasets under a constrained computational budget is arguably the primary goal of the learning community. This, however, comes with significant challenges. On the one hand, the exact computation of linear algebraic quantities may be prohibitively expensive, such as that of the log determinant. On the other hand, an analytic expression for the quantity of interest may not exist at all, such as the case for the entropy of a Gaussian mixture model, and approximate methods are often both inefficient and inaccurate. These highlight the need for efficient approximations especially in solving large-scale machine learning problems.
| i |
46c6503b-e969-4eb8-92dc-f4589036647f | In this paper, to address this challenge, we propose a novel, robust maximum entropy algorithm, stable for a large number of moments, surpassing the limit of previous maximum entropy algorithms [1]}, [2]}, [3]}. We show that the ability to handle more moment information, which can be calculated cheaply either analytically or with the use of stochastic trace estimation, leads to significantly enhanced performance. We showcase the effectiveness of the proposed algorithm by applying it to log determinant estimation [4]}, [5]}, [6]} and entropy term approximation in the information-theoretic Bayesian optimisation [7]}, [8]}, [9]}. Specifically, we reformulate the log determinant estimation into an eigenvalue spectral estimation problem so that we can estimate the log determinant of a symmetric positive definite matrix via computing the maximum entropy spectral density of its eigenvalues. Similarly, we learn the maximum entropy spectral density for the Gaussian mixture and then approximate the entropy of the Gaussian mixture via the entropy of the maximum entropy spectral density, which provides an analytic upper bound. Furthermore, in developing our algorithm, we establish equivalence between maximum entropy methods and constrained Bayesian variational inference [10]}.
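As one illustration of the fast moment estimation step mentioned above, the sketch below estimates the raw spectral moments \(\frac{1}{n}\mathrm {tr}(A^k)\) of a symmetric matrix with Hutchinson-style stochastic trace estimation. Rademacher probes and access to the matrix only through matrix-vector products are standard choices assumed here, not necessarily the exact scheme of the paper.

```python
import numpy as np

def spectral_moments(matvec, n, num_moments=30, num_probes=30, seed=None):
    # Estimates (1/n) tr(A^k) for k = 1..num_moments using E_z[z^T A^k z] = tr(A^k)
    # for probe vectors z with E[z z^T] = I (here: Rademacher entries +/- 1).
    rng = np.random.default_rng(seed)
    moments = np.zeros(num_moments)
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        v = z.copy()
        for k in range(num_moments):
            v = matvec(v)                 # v = A^(k+1) z
            moments[k] += z @ v
    return moments / (num_probes * n)
```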
| i |
d2d4d7ed-5151-408b-ab74-36faea05b38d |
We propose a maximum entropy algorithm that is stable and consistent for hundreds of moments, surpassing off-the-shelf algorithms that are limited to a small number of moments (a minimal sketch of this moment-constrained fit is given after this list). Based on this robust algorithm, we develop a new Maximum Entropy Method (MEMe) which improves upon the scalability of existing machine learning algorithms by efficiently approximating computational bottlenecks using maximum entropy and fast moment estimation techniques;
We establish the link between maximum entropy methods and variational inference under moment constraints, hence connecting the former to well-known Bayesian approximation techniques;
We apply MEMe to the problem of estimating the log determinant, crucial to inference in determinental point processes [1]}, and to that of estimating the entropy of a Gaussian mixture, important to state-of-the-art information-theoretic Bayesian optimisation algorithms.
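To make the first contribution concrete, the sketch below fits a maximum entropy density under power-moment constraints by minimizing the convex dual \(\log Z(\lambda ) - \lambda ^\top \mu \) on a bounded grid. Restricting the support to \([0, 1]\) (e.g., eigenvalues rescaled by an upper bound) and the grid-based quadrature are assumptions; the paper's actual algorithm may differ in its numerical details.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(mu, grid=None):
    # Fit p(x) proportional to exp(sum_k lam_k x^k) on [0, 1] whose moments match mu,
    # where mu[k] approximates E[x^(k+1)] (e.g., the normalized traces estimated above).
    grid = np.linspace(0.0, 1.0, 2000) if grid is None else grid
    dx = grid[1] - grid[0]
    K = len(mu)
    phi = np.vstack([grid ** (k + 1) for k in range(K)])     # (K, len(grid))

    def dual(lam):
        logits = lam @ phi
        m = logits.max()
        w = np.exp(logits - m)
        log_z = m + np.log(w.sum() * dx)
        grad = phi @ (w / w.sum()) - mu                      # E_p[phi] - mu
        return log_z - lam @ mu, grad

    lam = minimize(dual, np.zeros(K), jac=True, method="L-BFGS-B").x
    p = np.exp(lam @ phi - (lam @ phi).max())
    return grid, p / (p.sum() * dx)                          # normalized density on the grid
```

The log determinant of an \(n\times n\) positive definite matrix can then be recovered as \(n\) times the average of \(\log (x)\) under the fitted density of the rescaled spectrum, plus the rescaling constant.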
| i |
02a2c02d-0687-4860-a4e6-b155f7a40941 | In this paper, we established the equivalence between the method of maximum entropy and Bayesian variational inference under moment constraints, and proposed a novel maximum entropy algorithm (MEMe) that is stable and consistent for a large number of moments. We apply MEMe in two applications, i.e., log determinant estimation and Bayesian optimisation, to demonstrate its effectiveness and superiority over state-of-the-art approaches.
The proposed algorithm can further benefit a wide range of large-scale machine learning applications where efficient approximation is of crucial importance.
| d |
7c1b5543-d1a3-40d6-b47e-740971b49022 | B.R. would like to thank the Oxford Clarendon Fund and the Oxford-Man Institute of Quantitative Finance; D.G., S.Z. and X.D would like to thank the Oxford-Man Institute of Quantitative Finance for financial support; S.R. would like to thank the UK Royal Academy of Engineering and the Oxford-Man Institute of Quantitative Finance
| d |
a831079b-4ae7-4c61-bb94-650028e98558 | In the field of evolutionary multi-objective optimization (EMO), there are many performance indicators that are used to evaluate the performance of EMO algorithms [1]}. Some representative performance indicators include GD [2]}, IGD [3]}, hypervolume [4]}, R2 [5]}, etc. Among them, the hypervolume indicator is the most widely investigated one since it has rich theoretical properties and mature applications. For example, it is able to evaluate both the convergence and the diversity of a solution set simultaneously [6]}. It is Pareto compliant [7]}. Furthermore, it can also be used in EMO algorithms for environmental selection. Representative hypervolume-based EMO algorithms include SMS-EMOA [8]}, [9]}, FV-MOEA [10]}, HypE [11]}, and R2HCA-EMOA [12]}.
| i |
1c298369-8ebd-4933-a498-10d7e01d0509 | The main drawback of the hypervolume indicator is that it is computationally more expensive than other performance indicators. Some efficient hypervolume calculation methods have been proposed such as WFG [1]}, QHV [2]}, and HBDA [3]}. However, these methods aim to exactly calculate the hypervolume indicator. They will become inefficient when the number of objectives is large (e.g., \(>10\) ) since the calculation of the hypervolume indicator is #P-hard [4]}. Therefore, some hypervolume approximation methods have been proposed to overcome this drawback [5]}, [4]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}.
| i |
51fbbd9a-fbe9-4879-8ef2-147e2aea254d | Two representative approximation methods are the point-based method and the line-based method. The point-based method is also known as Monte Carlo sampling method [1]}, [2]}. In this method, a large number of points are sampled in the sampling space, and the hypervolume is approximated based on the percentage of the points lying inside the hypervolume region. The line-based method is also known as the R2 indicator method [3]} or the polar coordinate method [4]}. In this method, a set of line segments with different directions are used to approximate the hypervolume.
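For reference, a minimal numpy sketch of the point-based (Monte Carlo) approximation is given below for a minimization problem; sampling uniformly in the box spanned by the componentwise minimum of the solution set and the reference point is a simplifying assumption.

```python
import numpy as np

def hv_monte_carlo(solutions, ref_point, n_samples=100_000, seed=None):
    # The fraction of uniform samples dominated by at least one solution, times the
    # volume of the sampling box, approximates the hypervolume (minimization).
    rng = np.random.default_rng(seed)
    solutions = np.asarray(solutions, dtype=float)
    ref_point = np.asarray(ref_point, dtype=float)
    low = solutions.min(axis=0)           # the sampling box encloses the HV region
    samples = rng.uniform(low, ref_point, size=(n_samples, len(ref_point)))
    dominated = (solutions[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    return dominated.mean() * np.prod(ref_point - low)
```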
| i |
42ad7354-8ccf-4992-b1a3-adb38317f561 | In this letter, we propose HV-Net, a new hypervolume approximation method. HV-Net is a deep neural network where the input is a non-dominated solution set and the output is the hypervolume approximation of this solution set. HV-Net has two characteristics. 1) It is permutation invariant to the order of elements in the input set (i.e., the output of HV-Net does not depend on the order of solutions in the input solution set). 2) It can handle an input solution set with an arbitrary number of solutions (i.e., the number of solutions in the input solution set is not fixed). These characteristics make sure that HV-Net is flexible and general for hypervolume approximation.
| i |
419d7613-670f-448e-b628-92b352c0a16f | The main contribution of this letter is that we develop a new type of hypervolume approximation method based on a deep neural network. Our experimental results are promising and encouraging. This will bring new opportunities for the development of the EMO field.
| i |
fa78c8e3-a240-4964-b9ef-2c8859c1514c | The rest of this letter is organized as follows. Section II presents the preliminaries of the study. Section III introduces the new hypervolume approximation method, HV-Net. Section IV conducts experimental studies. Section V concludes the letter.
| i |
b31eb0ad-3a8d-490e-9aff-ec616139df61 | We showed that HV-Net is a promising method for hypervolume approximation. Compared with the point-based and line-based methods, HV-Net achieved better performance in terms of the approximation error and the runtime. We also showed the effectiveness of the proposed loss function over the other two loss functions for training HV-Net. The experimental results are promising and encouraging. We believe that HV-Net can bring new opportunities for the development of the EMO field.
| d |
6a6b089a-41b1-4a4e-b09f-5b9a2ee439d7 | One disadvantage of HV-Net is that its structure is prespecified before training. Therefore, we can only obtain a single approximation result for each solution set based on a trained HV-Net. This disadvantage elicits our future research: multi-objective HV-Net. That is, we can train a set of HV-Nets with different structures, so that a tradeoff between the approximation error and the runtime can be obtained. The multi-objective neural architecture search technique (e.g., NSGA-Net [1]}) can be useful to realize this goal.
| d |
1878329a-105f-41eb-bad1-8bcac7a44043 | Can we determine the location of a scene given a single ground-level RGB image? For famous and characteristic scenes, such as the Eiffel Tower, it is trivial because the landmark is so distinctive of Paris. Moreover, there are many images captured in such prominent locations under different viewing angles, at different times of the day, and even in different weather or lighting conditions. However, some scenes, especially places outside cities and tourist attractions, may not have characteristic landmarks, and it is not so obvious to locate where they were snapped. This is the case for the vast majority of the places in the world. Moreover, such places are less popular and are less photographed. As a result, there are very few images from such locations, and the existing ones do not capture a diversity of viewing angles, times of day, or weather conditions, making them much harder to geo-locate. Because of the complexity of this problem, most existing geo-localization approaches have been constrained to small parts of the world [1]}, [2]}, [3]}, [4]}. Recently, convolutional neural networks (CNNs) trained with large datasets have significantly improved the performance of geo-localization methods and enabled extending the task to the scale of the entire world [5]}, [6]}, [7]}, [8]}. However, planet-scale unconstrained geo-localization is still a very challenging problem, and existing state-of-the-art methods struggle to geo-locate images taken anywhere in the world.
| i |
e98812dc-36e0-4984-b22f-9aa54aff17aa | In contrast to many other vision applications, single-image geo-localization often depends on fine-grained visual cues present in small regions of an image. In Figure REF , consider the photo of the Eiffel Tower in Paris and its replica in Las Vegas. Even though these two images seem to come from the same location, the buildings and vegetation in the background play a decisive role in distinguishing them. Similarly, in the case of most other images, the global context spanned over the entire image is more important than individual foreground objects in geo-localization. Recently, a few studies [1]}, [2]} comparing vision transformer (ViT) with CNNs have revealed that the early aggregation of global information using self-attention enables transformers to build long-range dependencies within an image. Moreover, higher layers of ViT maintain spatial location information better than CNNs. Hence, we argue that transformer-based networks are more effective than CNNs for geo-localization because they focus on detailed visual cues spanned over the entire image.
| i |
57c85339-e22a-43b4-9bb4-5090b2b26480 | Another challenge of single-image geo-localization is the drastic appearance variation of the exact same location under different daytime or weather conditions. Semantic segmentation offers a solution to this problem by generating representations that are robust to such extreme variations [1]}. For example, consider the drastic disparity of the RGB images of the same location in day and night or winter and fall in Figure REF . In contrast to the RGB images, the semantic segmentation maps remain almost unchanged. Furthermore, semantic segmentation provides auxiliary information about the objects present in the image. This additional information can be a valuable pre-processing step since it enables the model to learn which objects occur more frequently in which geographic locations. For example, as soon as the semantic map detects mountains, the model immediately eliminates all flat regions, thus reducing the complexity of the problem.
| i |
ecf04aa6-59f5-4ee5-aca6-786ae4badb48 | Planet-scale geo-localization deals with a diverse set of input images caused by different environmental settings (e.g., outdoors vs indoors), which entails different features to distinguish between them. For example, to geo-locate outdoor images, features such as the architecture of buildings or the type of vegetation are important. In contrast, for indoor images, the shape and style of furniture may be helpful. To address such variations, Muller et al. [1]} proposed to train different networks for different environmental settings. Though such an approach produces good results, it is cost-prohibitive and does not generalize to a higher number of environmental scenarios. In contrast, we propose a unified multi-task framework for simultaneous geo-localization and scene recognition applied to images from all environmental settings.
| i |
df36b5e7-d0e0-4ace-adb0-71b6b9fc197c | This work addresses the challenges of planet-scale single-image geo-localization by designing a novel dual-branch transformer architecture, TransLocator. We treat the problem as a classification task [1]}, [2]} by subdividing the earth's surface into a high number of geo-cells and assigning each image to one geo-cell. TransLocator takes an RGB image and its corresponding semantic segmentation map as input, divides them into non-overlapping patches, flattens the patches, and feeds them into two parallel transformer encoder modules to simultaneously predict the geo-cell and recognize the environmental scene in a multi-task framework. The two parallel transformer branches interact after every layer, ensuring an efficient fusion strategy. The resulting features learned by TransLocator are robust under appearance variation and focus on tiny details over the entire image.
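For intuition, the sketch below shows geo-cell assignment on a simple regular latitude/longitude grid; the regular grid, the cell resolution, and reporting the cell center as the predicted location are all illustrative assumptions (practical systems typically use adaptive partitions, e.g., cells split so that each one contains a similar number of training photos).

```python
def latlon_to_cell(lat, lon, cells_per_degree=1.0):
    # Map a coordinate to a class index on a regular lat/lon grid.
    n_lon = int(360 * cells_per_degree)
    row = int((lat + 90.0) * cells_per_degree)
    col = int((lon + 180.0) * cells_per_degree)
    return row * n_lon + col

def cell_to_latlon(cell_id, cells_per_degree=1.0):
    # Map a predicted class back to a coordinate (here: the cell center).
    n_lon = int(360 * cells_per_degree)
    row, col = divmod(cell_id, n_lon)
    lat = (row + 0.5) / cells_per_degree - 90.0
    lon = (col + 0.5) / cells_per_degree - 180.0
    return lat, lon
```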
| i |
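As a concrete illustration of the multi-task formulation described above, the following is a minimal sketch of a joint geo-cell classification and scene-recognition head applied to the fused dual-branch features. The feature dimension, class counts, and loss weight `lam` are illustrative placeholders rather than values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHead(nn.Module):
    """Joint geo-cell classification and scene-recognition head on top of the
    fused RGB + semantic features (dimensions and class counts are placeholders)."""
    def __init__(self, fused_dim=1536, num_geocells=10000, num_scenes=3):
        super().__init__()
        self.geo_head = nn.Linear(fused_dim, num_geocells)
        self.scene_head = nn.Linear(fused_dim, num_scenes)

    def forward(self, fused):  # fused: (batch, fused_dim)
        return self.geo_head(fused), self.scene_head(fused)

def multitask_loss(geo_logits, scene_logits, geo_labels, scene_labels, lam=0.1):
    # Weighted sum of the two cross-entropy terms; lam is a hypothetical weight.
    return F.cross_entropy(geo_logits, geo_labels) + lam * F.cross_entropy(scene_logits, scene_labels)
```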
5453f267-3ced-4f6f-b7e9-4ae65786fad7 | In summary, our contributions are three-fold. \((i)\) We propose TransLocator - a unified solution to planet-scale single-image geo-localization with a dual-branch transformer network. TransLocator is able to distinguish between similar images from different locations by precisely attending to tiny visual cues. \((ii)\) We propose a simple yet efficient fusion of two transformer branches, which helps TransLocator to learn robust features under extreme appearance variation. \((iii)\) We achieve state-of-the-art performance on four datasets with a significant improvement of \(5.5\%\) , \(14.1\%\) , \(4.9\%\) , \(9.9\%\) continent-level geolocational accuracy on Im2GPS [1]}, Im2GPS3k [2]}, YFCC4k [3]}, and YFCC26k [4]}, respectively. We also qualitatively evaluate the effectiveness of the proposed method on real-world images.
| i |
e43e8c2d-ade8-4b8c-9ed5-6f4204be4bc1 | We conduct extensive experiments to show the effectiveness of our proposed method. In this section, we describe the datasets, the evaluation metrics, the baseline methods, and the detailed experimental settings.
| m |
e5046e8f-4925-462b-b884-c4a8ac125d8f |
Planet-scale single-image geo-localization is a highly challenging problem. These challenges include images with a large diversity in various environmental scenarios and appearance variation due to daytime, season, or weather changes. Hence, most existing approaches limit geo-localization to the scale of landmarks, a specific area, or a single environmental scenario. Some approaches propose to use separate systems for different environments. In this paper, we address this challenging problem by proposing TransLocator, a unified dual-branch transformer network that attends to tiny details over the entire image and produces robust feature representations under extreme appearance variations. TransLocator takes an RGB image with its semantic segmentation map as input, lets its two parallel branches interact after each transformer layer, and concatenates the learned RGB and semantic representations using global attention. We train TransLocator in a unified multi-task framework for simultaneous geo-localization and scene recognition, and thus our system can be applied to images from all environmental settings. Extensive experiments with TransLocator on four benchmark datasets - Im2GPS [1]}, Im2GPS3k [2]}, YFCC4k [3]} and YFCC26k [4]} - show a significant improvement of \(5.5\%\) , \(14.1\%\) , \(4.9\%\) , and \(9.9\%\) in continent-level accuracy over the current state-of-the-art. We also obtain better qualitative results when we test TransLocator on challenging real-world images.
| d |
4b97fe33-fa4d-496e-918b-6b883eef6938 | In this supplementary material, we provide additional details on geo-cell partitioning, data augmentation, hyper-parameter values, baselines, evaluation metrics and illustrate additional quantitative and qualitative results.
| d |
4a759019-8018-428d-96a5-982ff41d65a2 | Along with the pressing need to mitigate climate change, the world's energy sector is undergoing a remarkable transition. To fulfill the goal of decarbonization, as well as reaching the United Nations Sustainable Development Goal (UN SDG) #7 (access to affordable, reliable, sustainable, and modern energy for all), renewable energy sources are rapidly entering the system. The affordability of small-scale distributed energy resources (DER) is thus increasing correspondingly. With the simultaneous maturing of Information and Communication Technologies, such as distributed ledger technology (DLT), previously passive consumers are able to take a more active role in the energy system, transitioning to being prosumers. This can further accelerate the deployment of renewable energy and unlock flexibility in the distribution system for future grid planning and operation. However, the introduction of prosumers into the distribution system may lead to challenges in terms of operational stability and security, as well as social and legislative issues. To both enhance the benefits of the democratization of the energy system and to control the challenges, local energy markets (LEMs) have emerged as a promising concept [1]}, [2]}.
| i |
3ba19fe0-f0c5-4a95-bdb9-edb938dc6eb8 | One of the key determinants of a prosumer's willingness to participate in an LEM is the community trading price [1]}. Multiple studies have proposed different strategies for both bidding processes and the actual price setting, all including different aspects to be considered in a fair market price. In general, one can categorize the different schemes proposed in the literature into auction-based [2]}, [3]}, [4]}, [5]}, optimization-based [6]}, [7]}, [8]}, [9]}, [10]}, game theory-based [11]}, [12]} and cost-sharing [13]}, [14]}, [15]} methods.
| i |
c698d49d-b8aa-467f-8c5b-c61279836ac7 | Auction-based methods rely on the active participation of each market participant by placing bids and offers to the market platform. Dynamic price adjustments are analyzed in [1]} and show advantages in terms of higher social welfare and reduced environmental impacts compared to other bidding strategies. Two auction mechanisms, Discriminatory and Uniform k-mean, are analyzed in [2]}, as well as the impact of different bidding strategies. The analysis suggests that the discriminatory approach outperforms the uniform mechanism in terms of market participation, but is more sensitive to market conditions and can lead to more fluctuating trading prices. Employing a game-theoretic bidding strategy where the participants compete to bid for the best price obtains a near-optimal economic efficiency according to this analysis.
| i |
82b29ed3-f7a0-44b0-b622-27c8a087bd8c | Although game-theoretic approaches have gained popularity in the literature, market participants are still required to take an active role in price-setting games. A common approach to modeling LEM interactions is the Stackelberg game, where the prosumers act as leaders and the consumers act as followers, as done in [1]} and [2]}. A similar game is analyzed in [3]}, showing higher social welfare among the market participants compared to direct P2P trading. A socially optimal solution, which is also Pareto optimal, is obtained through a cake-cutting game in [4]}, with discriminatory pricing. However, both auction-based and game-theoretic market structures are highly demanding strategies in terms of participation and may seem intimidating to some agents. This can in turn discourage the establishment of LEMs.
| i |
17f947e5-023f-4b1b-bbea-44151fde7a6e | Other pricing mechanisms use different optimization techniques to set the LEM prices implicitly, through dual prices of an optimal power flow model [1]} or a general market clearing problem [2]}. Others propose more direct price-setting optimization algorithms, like the consensus-based alternating direction method of multipliers (ADMM) [3]}. In [4]}, the LEM price is calculated based on a multi-class energy clearing problem, solved through the ADMM approach, considering the different preferences of the market participants.
| i |
9e17b25b-d8bd-4a9e-982b-bb948dd7f2cc | In most cost-sharing methods, the community costs are distributed ex-post, and the final price is set based on different balancing criteria. The prosumers are thus considered as price-takers and cannot actively influence the LEM price. In [1]}, a supply-demand ratio (SDR) is calculated based on the energy balance within the community at each time step. The local trading price is set linearly between an upper and a lower bound, based on this ratio. Two incentive extensions to this method are also proposed. A similar method combined with a compensation rate is analyzed by [2]}, and prosumers' preferred level of participation is taken into account in the approach proposed by [3]}. The SDR approach is compared with the Bill Sharing Scheme and the Mid-Market Rate in [4]}, showing the disadvantages of the existing cost-sharing schemes, while proposing an improved, two-stage SDR mechanism. The same three methods are compared in [5]}, illustrating that the SDR method outperforms the other two in terms of overall performance.
| i |
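To make the cost-sharing idea above concrete, the sketch below sets the local trading price for one time step linearly between an upper bound (the retail grid price) and a lower bound, driven by the supply-demand ratio. The linear interpolation and the numeric values are assumptions for illustration; the cited SDR schemes use their own functional forms, and the lower bound here anticipates the LCOE-based limit proposed later in the paper.

```python
def supply_demand_ratio(generation_kwh, demand_kwh):
    """SDR r_t for one time step; guard against zero community demand."""
    return generation_kwh / demand_kwh if demand_kwh > 0 else float("inf")

def local_price(r_t, grid_price, lower_bound):
    """Linear SDR pricing: no local supply (r_t = 0) keeps the price at the grid
    (upper) bound, full local supply (r_t >= 1) pushes it to the lower bound."""
    r = min(max(r_t, 0.0), 1.0)  # clip the ratio to [0, 1]
    return grid_price - (grid_price - lower_bound) * r

# Illustrative values in EUR/kWh: retail grid price 0.35, LCOE-based floor 0.09.
price = local_price(supply_demand_ratio(80.0, 100.0), grid_price=0.35, lower_bound=0.09)
```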
01c03f02-0330-44cc-b123-230e1a563448 | In this paper, the SDR pricing mechanism is further explored. With feed-in tariff (FiT) schemes being phased out in, e.g., Germany, a new limit on the minimum trading price, based on the DERs' levelized cost of energy (LCOE), is proposed to ensure both prosumer and consumer profitability. The usage of the LCOE as a pricing instrument for LEMs is limited within the current literature. In [1]}, it is used together with the national grid price to form a truncated normal distribution as a basis for two different auction mechanisms. It is also used as a minimum trading price for a price matching algorithm in [2]}. However, further research is required and thus the contributions of this paper are as follows:
| i |
c513d0d5-c871-40dc-acc4-028df8c7704f |
Rethinking the existing convention of the FiT as the lower bound of prosumers' willingness to participate in LEMs, an LCOE-based mechanism is proposed to sustain the viability of the prosumers' investments in production capacity.
A fair and transparent pricing mechanism for LEMs is proposed, distributing the costs and benefits of the local energy trading based on each market participant's contribution to the supply-demand balance.
The impacts of the proposed LEM pricing mechanism are investigated using a techno-economic analysis from both prosumers' and consumers' points of view.
A potential DLT-based local energy trading platform is conceptualized, with the proposed LEM pricing mechanism as an integral part.
| i |
27b9f049-a62d-41d7-a638-2d926a71f5c0 | The modern energy system can be classified as a combination of cyber, physical, and social subsystems, creating a cyber-physical-social-system (CPSS), as described in [1]}. The overall market framework of this paper can be explained through the interaction between these layers, illustrated in Fig. REF .
<FIGURE> | m |
aa87f425-07ea-4b3a-9968-72981bc374f4 | The main underlying assumption of the LEM is that the necessary energy policies and regulations are in place. In that case, private households are not only allowed but encouraged through legal provisions to form transactive energy markets with their neighbors. The policies and regulations will thus influence the deployment of new physical components of the power system, such as DLT-enabled electric meters, that will be operated in the LEM and support the proposed approach (Process (1a)). They will also affect how the LEM is structured and how the local energy price will be set and communicated to the LEM participants (Process (1b)). The interaction between the CPSS layers in the operational phase is designed as follows.
| m |
d6841961-be29-4990-85b5-eb50fb857672 | Process (2). The LEM aggregator facilitates the routing of energy to satisfy the needs and preferences of the market participants, without violating the physical boundaries of the system. Smart meters provide information about generation and consumption for each market participant and communicate this information to the LEM platform.
| m |
4d99b1c4-e2f2-4d69-9975-f479731b2a96 | Process (4). DLT is used to ensure that trust is built between transacting parties, to generate immutable transaction records, and to secure the sharing of information and transactions on the market platform. The fourth process of the framework is marked red to symbolize that it is not implemented in this paper's analysis but is assumed to be in place for the real-life implementation of the LEM platform where DLT can be deployed.
| m |
81effabe-810a-4576-9e06-1644bf144941 | Process (5). The processed data regarding the market participants' generation and consumption profiles are retrieved by the market mechanism. The LEM price for each timestamp is then obtained. Pricing information is then sent back to the DLT network. Since this information is sensitive and not expected to overload the DLT network, such data can be stored in an on-chain database.
| m |
9a14ebdb-7288-4bd7-9027-e78665cdc05a | Process (6). Based on the pricing results, techno-economic analyses are performed to investigate the economic conditions of the neighborhood, as well as the individual gains or losses resulting from participating in the LEM.
| m |
1486e3c3-0fff-4daf-af8d-eb951b2a9453 | To capture seasonal variations in the SDR profile, hourly simulations for an entire year were executed for all cases. To illustrate how the SDR varies over time, the hourly values for March are shown in Figure REF . It is clear that the \(r_t > 1\) condition occurs quite frequently, indicating a slight overscaling of the number of prosumers in the neighborhood.
<FIGURE> | r |
a32a36cb-61ee-4edb-9166-fc3a602d878c | To promote LEMs as an attractive option for end-users to take an active role in the energy system, a transparent and fair pricing mechanism for locally traded energy must be established. With the phase-out of the FiT and similar support schemes, other methods to ensure the profitability of participating in local flexibility activities must be redesigned. The average LCOE value of the LEM is proposed in this paper as the minimum trading price, showing promising results in terms of cost reductions for the consumers in the market. Comparing the new LCOE convention with the existing FiT scheme, the prosumers' revenue is decreased. However, compared to the auction alternative, both consumers and prosumers experience substantially higher benefits. This finding can also be utilized by policymakers to incentivize decarbonization, decentralization, and democratization through LEMs in areas with a high penetration level of DERs, and thus further contribute to UN SDG #7. The proposed pricing mechanism does, however, require a sophisticated implementation, with dynamic tracking of parameters such as the SDR, \(r_t\) , while also ensuring information security for the participating agents. DLT is thus highlighted as one of the most promising digitalization technologies to provide a functional solution for realizing such next-generation energy policies and LEM mechanisms. Still, the proposed pricing scheme is easy to grasp for its users and will likely be perceived as fair by the market participants, as the costs and benefits are distributed according to the actual contribution to the market. Further improvements of the method should involve the inclusion of energy storage technology and how it influences both the LCOE value and the strategic behavior of the market agents.
| d |
b6761596-e666-4d17-bf3a-0f3d0a1a475b | The city is the center of socioeconomic development, natural resource consumption and commodity production (Acuto et al., 2018). Modern cities are supported by freight transport systems, which guarantee the supply of household goods, industrial raw materials and construction materials (Behrends, 2016). In intracity freight, heavy trucks mainly undertake mass transportation tasks between industrial enterprises, logistics warehouses and port terminals (Demir et al., 2014). Although heavy trucks account for less than 40% of intracity freight vehicles, they carry more than 80% of the freight volume (Aljohani, 2016). In the future, rapid urbanization will further stimulate the growth of intracity freight demand (Balk et al., 2018). Heavy trucks will play a more important role in the intracity freight system. However, the increase in the number of heavy trucks will create serious social and environmental problems, such as traffic accidents, air pollution and nonrenewable energy consumption (Hu et al., 2019; Sakai et al., 2019; Velickovic et al., 2018). In particular, the proportion of the total mileage of heavy trucks in city road traffic is less than 6%, but heavy trucks contribute 36% of air pollution (Perez-Martinez et al., 2017) and 18% of fatal traffic accidents (Evgenikos et al., 2016; Knight and Newton, 2008). In recent years, authorities and organizations have developed many freight-related policies, such as truck road pricing (Wang and Zhang, 2017), freight bottleneck management (Sharma et al., 2020) and freight demand management (Hassan et al., 2020), to eliminate the negative effects of heavy trucks. Intracity heavy truck freight trips are critical basic data for developing these freight policies. However, a major challenge at present is the lack of massive heavy truck freight trip data, which greatly hinders our in-depth understanding of city freight systems (Allen et al., 2012; Hadavi et al., 2019; Pluvinet et al., 2012; Zhang et al., 2019).
| i |
a2d44c74-c591-4847-8e9b-9d600810cfaa | Traditionally, intracity freight trip data are collected through travel surveys conducted in many cities, such as London (Allen et al., 2018), Tokyo (Oka et al., 2019), Paris (Toilier et al., 2016), Toronto (McCabe et al., 2013) and Melbourne (Greaves and Figliozzi, 2008). Travel surveys provide relatively rich travel information on heavy trucks but are time consuming and costly (Allen et al., 2012; Oka et al., 2019; Pani and Sahu, 2019). Therefore, the quantity of intracity freight travel data collected through freight surveys is often limited and not sufficient for the analysis and modeling of city freight systems (Allen et al., 2014). In the era of big data, the development and application of satellite positioning technology make it possible to obtain massive heavy truck GPS trajectories through GPS devices (Deng et al., 2010; Papadopoulos et al., 2021; Pluvinet et al., 2012). It is critical to accurately extract heavy truck freight trips from GPS trajectories (Kamali et al., 2016; Zanjani et al., 2015).
| i |
299c9781-ebf0-40cb-9918-7e9043206632 | To extract freight trips from GPS trajectories, most previous studies (Arentze et al., 2012; Feng et al., 2012; Gingerich et al., 2016; Huang et al., 2014; Kamali et al., 2016; Laranjeiro et al., 2019; Zanjani et al., 2015) first identified heavy truck freight trip ends (origins and destinations, OD) and then split the GPS trajectory into multiple trips according to the identified trip ends. The process of trip end identification can be divided into two steps: (1) identify truck stops from the GPS trajectory and (2) select freight trip ends from these identified truck stops. For the first step, previous studies (Comendador et al., 2012; Greaves and Figliozzi, 2008; Joubert and Axhausen, 2011; Yang et al., 2014) often used trajectory features, such as velocity, acceleration and truck direction, to infer heavy truck motion states (stationary or moving) at a given time period and then identified truck stops from GPS trajectories. These identified truck stops include not only freight trip ends but also temporary stops due to refueling, traffic congestion, etc. Thus, in the second step, freight trip ends need to be selected from all truck stops. Some researchers (Gingerich et al., 2016; Hughes et al., 2019; Zanjani et al., 2015) found that the temporary stopping time for heavy trucks is usually short (mostly a few minutes), while loading or unloading usually takes dozens of minutes or even hours. Therefore, one or more stop time thresholds can be set to distinguish freight trip ends and temporary stops.
| i |
cb30b014-0d23-47d8-8302-dceb383c7e68 | Previous studies (Arentze et al., 2012; Feng et al., 2012; Gingerich et al., 2016; Huang et al., 2014; Laranjeiro et al., 2019) determined the time threshold for identifying freight trip ends from different perspectives. For example, some studies (Ma et al., 2011; McCormack et al., 2006; McCormack et al., 2010) took the traffic signal cycle as the time threshold to distinguish trip ends and temporary stops due to traffic control. (Kamali et al., 2016) and (Aziz et al., 2016) determined the time threshold according to city traffic conditions and the GPS data sampling rate. (Hess et al., 2015) determined the time threshold according to local freight policies. (Greaves and Figliozzi, 2008) selected the most suitable time threshold from a certain time range by a manual check. In addition, some studies (Arentze et al., 2012; Feng et al., 2012; Gingerich et al., 2016; Huang et al., 2014; Laranjeiro et al., 2019; Zanjani et al., 2015) subjectively determined different time thresholds according to the characteristics of heavy truck freight activities. Overall, the time threshold determination methods proposed in most previous studies are subjective and feasible for specific scenarios but lack universality. However, the above methods may not be suitable for intracity trip end identification since these methods determine only a single time threshold. In intracity freight, heavy truck transport activities are usually organized in the form of trip chains; that is, heavy trucks start from a base (e.g., a freight enterprise, logistics warehouse or factory), make multiple trips for different purposes, and finally return to this base. In a trip chain, a heavy truck dwells longer at the base and shorter at intermediate destinations. Moreover, the spatial patterns of truck trip chains are complex and diverse in different cities (Siripirote et al., 2020). Therefore, a single time threshold may not be sufficient to accurately identify heavy truck short-stay trip ends in a trip chain.
| i |
4e32f19e-a91b-4aed-be96-f4f4ffa8a0b6 | To address this issue, (Thakur et al., 2015) proposed a trip end identification method with dynamic time threshold adjustment. This method first manually determines three-time thresholds, i.e., 30 min, 15 min and 5 min, and then dynamically selects a suitable time threshold according to the circuity ratio (the ratio between the straight line distance and cumulative geodetic distance from the origin to destination) of the truck trajectory. The smaller the circuity ratio of a GPS trajectory is, the greater the likelihood that it is composed of multiple trips. This method first uses the time threshold of 30 min to identify the long-stay trip ends of heavy trucks from a GPS trajectory and then splits this trajectory into multiple segments (defined as primary subtrajectories) according to the identified trip ends. Next, this method calculates the circuity ratio of each primary subtrajectory. A subtrajectory with a circuity ratio less than a predefined threshold of 0.7 is considered to contain multiple trips due to its significantly circuitous degrees. Then, this method uses a time threshold of 15 min to identify the trip ends from each primary subtrajectory composed of multiple trips and splits these primary subtrajectories into multiple secondary subtrajectories according to the identified trip ends. Similarly, this method uses a time threshold of 5 min to identify the trip ends from each secondary subtrajectory composed of multiple trips. After this procedure, this method considers that all trip ends are identified. This time threshold dynamic adjustment method is applicable for identifying intracity heavy truck trip ends, greatly improving the identification accuracy of truck short-stay trip ends in a trip chain. However, the time thresholds (30 min, 15 min and 5 min) and circuity ratio threshold (0.7) in this method are determined subjectively. In the era of big data, massive heavy truck GPS trajectories provide the possibility to identify heavy truck trip ends objectively and accurately. However, a data-driven method of identifying intracity heavy truck trip ends is still lacking.
| i |
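The dynamic-threshold procedure described above can be sketched as a recursion over decreasing time thresholds that splits a (sub)trajectory whenever its circuity ratio falls below 0.7. The trajectory representation used here (planar points plus pre-detected stops with dwell times) is a simplifying assumption; real GPS data would first need projection and stop detection.

```python
import math

def circuity_ratio(points):
    """Straight-line OD distance divided by cumulative path length (planar coordinates)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    path_len = sum(dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    return dist(points[0], points[-1]) / path_len if path_len > 0 else 1.0

def split_at_stops(traj, min_dwell_min):
    """Split a trajectory at stops whose dwell time reaches the threshold.
    traj = {"points": [(x, y), ...], "stops": [(point_index, dwell_minutes), ...]}"""
    cuts = sorted(i for i, dwell in traj["stops"] if dwell >= min_dwell_min)
    bounds = [0] + cuts + [len(traj["points"]) - 1]
    segments = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        if b > a:
            segments.append({
                "points": traj["points"][a:b + 1],
                "stops": [(i - a, d) for i, d in traj["stops"] if a < i < b],
            })
    return segments

def identify_trips(traj, thresholds=(30, 15, 5), circuity_min=0.7):
    """Recursively re-split a segment with the next smaller time threshold
    whenever it is still significantly circuitous."""
    if not thresholds:
        return [traj]
    trips = []
    for seg in split_at_stops(traj, thresholds[0]):
        if circuity_ratio(seg["points"]) < circuity_min:
            trips.extend(identify_trips(seg, thresholds[1:], circuity_min))
        else:
            trips.append(seg)
    return trips
```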
9fbea6d6-d9b6-49a1-bda8-cdee904cb013 | In this paper, we use GPS trajectories of 2.7 million heavy trucks in China as basic data and propose a data-driven method for identifying intracity heavy truck trip ends. We use a nonparametric iterative method to determine the multilevel time thresholds. In the process of trip end identification, we first select the first-level time threshold to identify trip ends from a GPS trajectory and then split this GPS trajectory into multiple subtrajectories according to the identified trip ends. Afterward, we determine whether these subtrajectories are composed of multiple trips according to the circuitous degree of intracity heavy truck trip paths. For this purpose, we use a multipath generation algorithm and a similarity analysis method to find the nth shortest path that is closest in length to the intracity heavy truck trip paths. The nth shortest path is used to measure the circuitous degree of intracity heavy truck trip paths. Similarly, we select the next level of a smaller time threshold to identify trip ends from subtrajectories composed of multiple trips. The above process is iterated until all of the short-stay heavy truck trip ends are identified. For each identified trip end, we use freight-related POIs and urban road networks to determine whether it is an actual trip end. Finally, we discuss the potential application value of extracted intracity heavy truck freight trips.
| i |
3e9c78e2-ad16-42e2-9a81-19ecde290956 | We propose a data-driven method to identify the intracity trip ends of each heavy truck. Figure 2 shows the three main steps in this method. First, we identify heavy truck stops from GPS trajectories by using a predetermined speed threshold (see Fig. 2a). These identified truck stops may consist of both trip ends (loading stops, unloading stops and rest stops) and temporary stops due to refueling, traffic congestion, etc. Second, we determine multilevel time thresholds and select an appropriate time threshold level according to a truck’s maximum stopping time to identify trip ends from these truck stops. For example, the first-level (maximum) time threshold is selected if it is shorter than this truck’s maximum stopping time. Otherwise, another level of shorter time threshold is selected to ensure that trip ends are identified. Then, we split an entire GPS trajectory into multiple segments (primary subtrajectories) according to these identified trip ends (see Fig. 2b). Third, we determine whether these primary subtrajectories may be composed of multiple trips. If a primary subtrajectory is significantly circuitous, then it is likely to be composed of multiple trips, as shown in Fig. 2c. We use the next shorter time threshold level to identify potential trip ends from these significantly circuitous primary subtrajectories. If these potential trip ends are identified, one primary subtrajectory is split into multiple secondary subtrajectories, as shown in Fig. 2d. Otherwise, the next shorter time threshold level is used. The above process is iterated until there are no significantly circuitous subtrajectories or the time threshold reaches the minimum value, indicating that all trip ends in an entire trajectory are identified.
<FIGURE> | m |
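A key ingredient of the proposed variant is that circuity is judged against the n-th shortest road-network path between the identified origin and destination rather than the straight line. The sketch below uses networkx's simple-path enumeration on a directed road graph with a 'length' edge attribute; the path order k and the tolerance factor are illustrative placeholders, whereas the paper derives the corresponding quantities from the GPS data themselves.

```python
import itertools
import networkx as nx

def kth_shortest_length(road_graph, origin, destination, k, weight="length"):
    """Length of the k-th shortest simple path between two nodes of a
    networkx.DiGraph whose edges carry a 'length' attribute."""
    paths = nx.shortest_simple_paths(road_graph, origin, destination, weight=weight)
    kth = next(itertools.islice(paths, k - 1, k))
    return sum(road_graph[u][v][weight] for u, v in zip(kth[:-1], kth[1:]))

def is_significantly_circuitous(trip_length_m, road_graph, origin, destination, k=3, tol=1.2):
    """Flag a subtrajectory whose travelled length exceeds the k-th shortest
    network path by more than a tolerance factor, suggesting it still
    contains multiple trips and should be split with a smaller threshold."""
    reference = kth_shortest_length(road_graph, origin, destination, k)
    return trip_length_m > tol * reference
```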
974fa3a3-9f57-4114-8f87-06ad0fb98f6d | Texts have always represented a significant portion of all the clinical data produced every day in the world, from E.R. reports to patients' clinical diaries, drug prescriptions and administrative documents. Recent digitalization has paved the way for new applications by leveraging automatic data analysis. It is therefore necessary to develop tools capable of understanding the content of documents and their contextual nuances in order to be able to extract useful information. This is one of the main objectives of Natural Language Processing (NLP), which in recent years – thanks to the deep-learning revolution – has led to extraordinary results.
Many successes are due to what are known as foundation models: large neural networks trained on vast collections of unannotated data, capable of operating on the most varied downstream tasks after simple adaptation (or fine-tuning).
| i |
ef9b919b-e1bd-4263-8198-59457404abe6 | However, it is difficult to train a generic model suitable for every kind of text. For this reason, and starting from a pretrained model of the language of interest, a new specific embedding model is created for a given domain. This is done by continuing the training on a specific selection of texts.
Although less expensive than starting a new training from scratch, there are still many difficulties, especially when dealing with languages with limited resources, such as Italian, which lacks extensive corpora of freely accessible clinical texts.
Given these limited resources, such models should be all the more capable of operating with only a few annotations for the downstream tasks.
In these cases, a more accurate representation of similarity is therefore necessary and turns out to be useful in many circumstances. For example, in [1]} the semantic similarity between medical terms has been exploited to reduce lexical variability by finding a common representation that can be mapped to ICD-9-CM. Starting from this work, and with the aim of improving the measure of semantic similarity, we apply recent contrastive learning techniques as a tool for representation learning, pulling together pairs of semantically similar or possibly equivalent terms (i.e., synonyms) and pushing apart dissimilar pairs.
| i |
f626d598-bf03-4f4b-a6eb-e94d68f3dcca | Born in the Computer Vision field, contrastive learning is increasingly applied also to the NLP domain [1]}, with still unexplored potential. However, the biggest difficulty lies in the efficient sampling of negative cases and the selection of positive examples, an even more difficult task in a low-resource language such as Italian.
| i |
694db7f1-f184-4808-9976-837f085c3d29 | To compensate for the lack of synonyms listed in the Italian vocabularies of the Unified Medical Language System (UMLS), we directly exploit the Knowledge Graph Embedding (KGE) representation – built from the UMLS semantic network – by combining it with the word embedding representation. In doing this, we modify the contrastive MS loss [1]} so that its parameters are tied to the similarity calculated on KGEs; we also exploit the context surrounding the terms (thereby increasing the number of positive cases) and a new BERT-derived model specifically fine-tuned on Italian medical texts. To the best of our knowledge, this is the first time that MS loss, contexts and KGEs have been combined in a single model. Although we did not outperform the state of the art represented by multilingual models, the results obtained are encouraging and demonstrate the soundness of the developed approach, providing a significant leap in performance compared to the starting model while using a significantly smaller amount of data than the state of the art. However, further experiments and computational resources will be needed to extend the current model and to fully leverage the multilingual datasets.
| i |
56b3d34d-93d8-4a31-8ffe-5f8d08ac2c70 | Our main contributions are the following: 1) we trained a new word embeddings model by fine-tuning BERT on the Italian medical domain, 2) we leveraged different contrastive learning strategies to overcome the limited number of synonyms in Italian, 3) we integrated the knowledge of the UMLS semantic network by injecting its KGEs directly into the model or by modifying the contrastive loss.
| i |
61c692ad-c265-4bb6-a0f0-a75017794ab0 | In the literature, there are many works that aim to specialize a word embedding model on a specific domain, like [1]}, [2]}, [3]}. Similar studies exist for Italian, for example [4]}, but not for the medical domain. To the best of our knowledge, there is no publicly available embedding model for the medical domain in Italian. There are several possible strategies for obtaining new pretrained models, such as training a model from scratch (like SciBERT [5]}), with considerable associated costs, or continuing training on new domain-specific documents (BioBERT [1]}); it is often also necessary to extend the vocabulary, as done by [7]}, [8]}.
| w |
c5544cd5-66c1-41d5-bda8-0ed89e0aae04 | In addition to word embeddings, the incorporation of the explicit knowledge represented in Knowledge Graphs (KGs) has recently been explored, injecting it into BERT and thereby enriching the model. Among the first works is KnowBert [1]}, which combines word embeddings and knowledge graph embeddings computed from Wikipedia and WordNet through a mechanism of projection and re-contextualization. A similar approach is followed by KeBioLM [2]}, albeit with a simplified architecture.
| w |
8dd9cc0b-f7ba-477f-8c49-e2278d87da57 | Our work is directly inspired by SapBERT [1]}, the first to use contrastive learning on UMLS synonyms to improve the representation of biomedical embeddings, and CODER [2]}, which replaces the InfoNCE loss – used by SapBERT – with the MS loss and integrates the relational information of the UMLS semantic graph by adding a loss inspired by DistMult. The same authors have recently developed an extension of CODER (named CODER++ [3]}) which introduces dynamic sampling that provides hard positive and negative pairs to the MS loss and outperforms previous results, becoming the new state-of-the-art model. While SapBERT and CODER are limited to decontextualized terms, KRISSBERT [4]} extends SapBERT by adding context windows – taken from PubMed – around the terms, thereby handling their ambiguity. Furthermore, it incorporates the UMLS relationships, but limits itself to the taxonomic relationships of the ontology.
CODER is also available in a multilingual version, while a multilingual extension has recently been released also for SapBERT [5]}. CODER++ and KRISSBERT, on the other hand, cannot be used directly in Italian.
| w |
f6bd258d-542e-4af0-8bba-2b22bc0ef294 | Our first model is trained using contrastive learning, mapping similar entities closer together and different entities further apart. Term representation is retrieved from context encoding by averaging the representation vectors of tokens belonging to the entity. We adopt the multi-similarity loss (MS loss) function and modify it in a way that enables us to dynamically change the \(\lambda \) parameter (i.e. the similarity margin) according to the similarity derived from Knowledge Graph Embeddings. Given the MS loss formula:
\(\mathcal {L}_{MS} = \dfrac{1}{m}\sum _{i=1}^{m}\left\lbrace \dfrac{1}{\alpha }\log [1 + \sum _{k\in \mathcal {P}_{i}}e^{-\alpha (S_{ik}-\lambda )}] + \dfrac{1}{\beta }\log [1 + \sum _{k\in \mathcal {N}_{i}}e^{\beta (S_{ik}-\lambda )}]\right\rbrace \)
| m |
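For reference, a compact PyTorch transcription of the loss above is given below. It operates on a batch cosine-similarity matrix with boolean positive/negative masks, i.e. the in-batch pair mining of the MS loss is assumed to have produced the masks already; the default margin value is illustrative.

```python
import torch

def ms_loss(sim, pos_mask, neg_mask, alpha=2.0, beta=50.0, lam=0.5):
    """Multi-similarity loss over a cosine-similarity matrix `sim` (anchors on rows);
    `pos_mask` / `neg_mask` mark the mined positive and negative pairs."""
    zero = torch.zeros_like(sim)
    pos = torch.where(pos_mask, torch.exp(-alpha * (sim - lam)), zero)
    neg = torch.where(neg_mask, torch.exp(beta * (sim - lam)), zero)
    per_anchor = torch.log1p(pos.sum(dim=1)) / alpha + torch.log1p(neg.sum(dim=1)) / beta
    return per_anchor.mean()
```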
6ffa5d12-d62a-44f3-a3ce-bf47969d7964 | where \(\lambda \) is a fixed similarity margin. This margin heavily penalizes positive pairs with similarity \(< \lambda \) and negative pairs with similarity \(> \lambda \) . The idea of separating positive and negative thresholds was first introduced by Liu et al.[1]}. Given the results reported by their study, we have chosen to split the threshold as follows:
\(\mathcal {L}_{MS} = \dfrac{1}{m}\sum _{i=1}^{m}\left\lbrace \dfrac{1}{\alpha }\log [1 + \sum _{k\in \mathcal {P}_{i}}e^{-\alpha (S_{ik}-\lambda _{p})}] + \dfrac{1}{\beta }\log [1 + \sum _{k\in \mathcal {N}_{i}}e^{\beta (S_{ik}-\lambda _{n})}]\right\rbrace \)
| m |
17513987-e775-4e62-b446-84cd71da643e | We will refer to this version of the loss as MS loss v2. After setting \(\lambda _{p}=1\) and \(\lambda _{n}=0.5\) we immediately notice improvements across all metrics. We then propose a further extension of the MS loss that exploits the similarities between KGE entities in order to dynamically choose \(\lambda \) . We name the following loss MS loss v3:
\(\begin{split}\mathcal {L}_{MS} = \dfrac{1}{m}\sum _{i=1}^{m}\Bigg \lbrace & \dfrac{1}{\alpha }\log [1 + \sum _{k\in \mathcal {P}_{i}}e^{-\alpha (S_{ik}-\vert S_{ik}-S_{ik}^{KGE}\vert )}] \\+ & \dfrac{1}{\beta }\log [1 + \sum _{k\in \mathcal {N}_{i}}e^{\beta (S_{ik}-(1-\vert S_{ik}-S_{ik}^{KGE}\vert ))}]\Bigg \rbrace \end{split}\)
| m |
ec3f698c-077c-42aa-935c-e1964f1fd576 | where \(S^{KGE}_{ik}\) is the similarity between concepts \(i\) and \(k\) in the KGE space. According to Wang and Liu [1]}, the excessive pursuit of uniformity can make the contrastive loss intolerant to semantically similar samples, which may be harmful. Thus, instead of pushing all the different instances indiscriminately apart, \(S^{KGE}\) helps to introduce a factor that takes into account the underlying relations between samples. The hard positive and hard negative in-batch mining is kept unchanged, as in the regular MS loss.
| m |
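In code, the KGE-dependent margins of MS loss v3 amount to a small change of the sketch above: the fixed \(\lambda \) is replaced by \(\vert S_{ik}-S_{ik}^{KGE}\vert \) for positives and by \(1-\vert S_{ik}-S_{ik}^{KGE}\vert \) for negatives, with `kge_sim` holding the concept similarities computed in the frozen KGE space.

```python
import torch

def ms_loss_v3(sim, kge_sim, pos_mask, neg_mask, alpha=2.0, beta=50.0):
    """MS loss with margins tied to the KGE similarity of the paired concepts."""
    gap = (sim - kge_sim).abs()
    zero = torch.zeros_like(sim)
    pos = torch.where(pos_mask, torch.exp(-alpha * (sim - gap)), zero)
    neg = torch.where(neg_mask, torch.exp(beta * (sim - (1.0 - gap))), zero)
    per_anchor = torch.log1p(pos.sum(dim=1)) / alpha + torch.log1p(neg.sum(dim=1)) / beta
    return per_anchor.mean()
```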
e46180a2-0121-4ade-a8ef-7f5ef0e8b011 | Hence, we proceed to train the model with MS loss v3. Each training batch is constructed dynamically by sampling a virtual batch from a subset \(\mathcal {P}\) of our dataset \(\mathcal {D}\) . \(\mathcal {P}\) is constructed beforehand by selecting representative contexts of each concept; we call these representative contexts prototypes. For each entity \(e\) , a small number of prototypes is chosen in a way that prioritizes – where possible – a different synonym of \(e\) for each prototype. Then, for each prototype \(p\) in the virtual batch, we sample \(k\) positive pairs randomly, prioritizing contexts that use different synonyms than the one used in \(p\) . Subsequently, we sample \(m\) possibly hard negative pairs, following the method introduced in [1]}. For the top-\(m\) similarity search we use a Faiss index, which stores the embeddings of all mentions from \(\mathcal {D}\) and efficiently retrieves the \(m\) entries most similar to \(p\) . We update this index after each training epoch.
| m |
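A minimal sketch of the Faiss-based candidate retrieval is shown below; filtering the retrieved mentions by CUI to keep only true negatives follows the description above, while the helper names are ours.

```python
import faiss
import numpy as np

def build_mention_index(mention_embs):
    """Inner-product index over L2-normalised mention embeddings (cosine search)."""
    embs = np.ascontiguousarray(mention_embs, dtype="float32")
    faiss.normalize_L2(embs)
    index = faiss.IndexFlatIP(embs.shape[1])
    index.add(embs)
    return index

def hard_negative_candidates(index, proto_embs, proto_cuis, mention_cuis, m=30):
    """Top-m most similar mentions per prototype, keeping those with a different CUI."""
    queries = np.ascontiguousarray(proto_embs, dtype="float32")
    faiss.normalize_L2(queries)
    _, idx = index.search(queries, m)
    return [[j for j in row if mention_cuis[j] != cui]
            for row, cui in zip(idx, proto_cuis)]
```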
f438bde8-7c54-4ebe-bbc7-989fc25afc5f | In the second model (which we will call from now on “KGE-injected"), we use the knowledge more directly, by fusing BERT and KGE representations in the upper layers of BERT. The method is similar to the one used in KeBioLM [1]}. We inject the knowledge at the layer \(i\) by running the first \(i\) layers of BERT and then, for each mention \(m\) in the sequence, we apply the mean pool to the tokens of \(m\) , obtaining the BERT mention representation \(\textbf {h}_m\) . Then a linear projection is used to map each mention to the KGE space \(\textbf {h}_m^{proj}=\textbf {W}_m^{proj}\textbf {h}_m+\textbf {b}^{proj}\) . We then use an entity linker, which selects \(n\) candidate entities closest to \(h_m^{proj}\) . The similarities of the \(n\) candidates are normalized through a softmax function. This gives us the normalized similarity scores \(\textbf {a}\) that are used to compute the combined entity representation:
\(e_m = \sum _{j=1}^{n}a_j\cdot \textbf {e}_j\)
| m |
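The injection step can be sketched as follows: the mean-pooled mention is projected into the KGE space, scored against the frozen entity table, and the softmax-weighted entity embedding is projected back and added to the mention tokens. This is a simplified, single-mention illustration rather than the exact implementation; the candidate count and dimensions are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KGEInjection(nn.Module):
    """Fuses a BERT mention representation with frozen KG embeddings at layer i."""
    def __init__(self, hidden_dim, kge_table):
        super().__init__()
        self.register_buffer("kge", kge_table)                    # (num_entities, kge_dim), frozen
        self.to_kge = nn.Linear(hidden_dim, kge_table.shape[1])   # h_m -> KGE space
        self.to_bert = nn.Linear(kge_table.shape[1], hidden_dim)  # e_m -> BERT space

    def forward(self, token_states, mention_span, n_candidates=64):
        i, j = mention_span                        # token indices [i, j) of the mention
        h_m = token_states[i:j].mean(dim=0)        # mean-pooled mention representation
        h_proj = self.to_kge(h_m)                  # projection into the KGE space
        scores = self.kge @ h_proj                 # dot-product linker scores
        top_scores, top_idx = scores.topk(n_candidates)
        a = F.softmax(top_scores, dim=0)           # normalised similarity scores
        e_m = (a.unsqueeze(1) * self.kge[top_idx]).sum(dim=0)  # combined entity representation
        out = token_states.clone()
        out[i:j] = out[i:j] + self.to_bert(e_m)    # add back to every mention token
        return F.layer_norm(out, out.shape[-1:])   # normalise before the following layers
```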
07713243-102d-4b6c-bb74-f418bfe9c106 | Unlike KeBioLM, we keep the entity embeddings \(\textbf {e}\) fixed throughout the training. After obtaining the entity representation, we project it back to the BERT embedding space, where it is added to every token of the mention. The resulting embeddings are then normalized and forwarded to the following layers of BERT as usual.
| m |
789f68e7-a1cc-4072-b07d-ecffc3e7ca38 | To link the mentions encoded by BERT to the KGE entities, we define an entity linking loss as cross-entropy between self-supervised entity labels and similarities obtained from the linker in KGE space:
\(\mathcal {L}_{EL}=\sum -\log \dfrac{\exp (h_m^{proj}\cdot \textbf {e})}{\sum _{\textbf {e}_j\in \mathcal {E}} \exp (h_m^{proj}\cdot \textbf {e}_j)}\)
| m |
725377d1-1ea0-4142-9430-07c8284cfdfa | Furthermore, we add the masked language modeling task to prevent the catastrophic forgetting phenomenon [1]}, [2]}. As done in [3]}, we mask the whole entity if any of the \(15\%\) masked tokens happens to belong to a mention. Ultimately, we jointly minimize the following loss:
\(\mathcal {L} = \mathcal {L}_{MLM} + \mathcal {L}_{EL}\)
| m |
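In code, the entity-linking term is a standard cross-entropy over dot-product scores between the projected mentions and all KGE entities, added to the masked-language-modelling loss; the sketch below assumes the gold entity indices come from the UMLS annotations used for self-supervision.

```python
import torch
import torch.nn.functional as F

def entity_linking_loss(h_proj, gold_entity_ids, kge_table):
    """Cross-entropy between dot-product linker scores and the gold entities.
    h_proj: (num_mentions, kge_dim); kge_table: (num_entities, kge_dim), frozen."""
    logits = h_proj @ kge_table.t()
    return F.cross_entropy(logits, gold_entity_ids)

def total_loss(mlm_loss, el_loss):
    """Joint objective: masked language modelling plus entity linking."""
    return mlm_loss + el_loss
```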
3541f525-4052-4254-9b21-712a94e2bff8 | The two previously described models use knowledge in different ways, improving particular aspects of the representation and returning different results depending on the task. Therefore, we decided to combine both in a single model, using the KGE injection training as a pre-training phase and the contrastive learning with MS loss v3 as a fine-tuning process. We call the resulting model “pipelined".
| m |
87fa7559-c95a-415c-a9b0-a944fbb8db26 | All the training is performed on one NVIDIA T4 GPU, which has 16 GB of memory. For this reason, we could not experiment with larger batches, like those used for training CODER and CODER++, which were trained on 8 NVIDIA A100 40GB GPUs.
| r |
c8710bc7-d663-4f51-8b8a-651fb645e2a4 | We evaluated our models on three similarity-oriented metrics: MSCM score, clustering pair and semantic relatedness. MSCM is a similarity score based on the UMLS taxonomy, developed by [1]} and used in CODER. It is defined as:
\(MSCM(V,T,k) = \frac{1}{|V(T)|}\sum _{v \in V(T)}{\sum _{i=1}^{k}{\frac{1_T(v(i))}{\log _2(i+1)}}}\)
| r |
c4f53bbb-1f32-48ea-a6c7-dfca00a4de15 | where \(V\) is a set of concepts, \(T\) the semantic type according to UMLS, \(k\) the parameterized neighborhood size, \(V(T)\) the subset of concepts with type \(T\) , \(v(i)\) the \(i^{th}\) closest neighbor of concept \(v\) and \(1_T\) an indicator function which is 1 if \(v(i)\) is of type \(T\) , 0 otherwise. Given this formulation and the default settings (\(k=40\) as used in CODER), the score ranges from 0 to \(11.09\) .
Given its importance in low-resource languages, where pre-trained tools for entity recognition and linking are lacking, we have also included the clustering pair task. Already experimented with in [1]} to unify lexically different but semantically equivalent terms, the task is defined more formally by CODER++, where two terms are considered synonyms if their cosine similarity is higher than a given threshold (\(\theta \) ), while true synonyms are taken from UMLS.
For semantic relatedness, since there are no datasets of this kind for Italian and given their development costs (which would require the intervention of several domain experts), we rely on two English datasets, manually translating the entities involved. MayoSRS and UMNSRS were introduced by [2]} and [3]} with a manual annotation of a relatedness score for 101 and 587 medical term pairs, respectively. The values vary from 1 to 10 for MayoSRS and 0-1600 for UMNSRS. Due to the lack of an appropriate translation for some terms, the number of pairs for the UMNSRS dataset is reduced to 536 tuples.
| r |
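A direct numpy transcription of the MSCM definition above (with k = 40) is sketched below; excluding the query concept from its own neighbourhood is an implementation choice we assume here.

```python
import numpy as np

def mscm(embeddings, types, target_type, k=40):
    """UMLS-taxonomy-based neighbourhood score for one semantic type.
    embeddings: (N, d) L2-normalised concept vectors; types: per-row semantic types."""
    types = np.asarray(types)
    members = np.where(types == target_type)[0]          # V(T)
    sims = embeddings @ embeddings.T
    np.fill_diagonal(sims, -np.inf)                      # drop the concept itself
    discounts = 1.0 / np.log2(np.arange(2, k + 2))       # 1 / log2(i + 1), i = 1..k
    score = 0.0
    for v in members:
        neighbours = np.argsort(-sims[v])[:k]            # v(1), ..., v(k)
        score += np.sum((types[neighbours] == target_type) * discounts)
    return score / len(members)
```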
b911e6a7-dae8-485d-a37e-0b56ef02a1f2 | A first comparison of the state of the art shows that SapBERT, in its multilingual version, actually outperforms CODER, despite the fact that the latter had the advantage in the paper that introduced it.
As regards the training of the KGEs, Table REF shows the results of the different models evaluated on the link prediction and similarity tasks. We have chosen ComplEx as the reference model, thanks to the good results obtained on the similarity datasets and a representation still comparable with those of the other models.
| r |
57d56dd8-33df-4e12-aa0e-94bc92d6e4cb | The use of ComplEx in MS loss v3 proved to be of substantial benefit, as shown in Table REF , where we compare the performances obtained with the different variants of the loss. ComplEx also proved to be the superior embedding for the KGE-injected model, obtaining a moderate improvement relative to the baseline (ext-BERT), albeit a smaller one than that achieved with the contrastive learning training.
<TABLE><TABLE> | r |
7bcf3ba4-b9cd-4afd-b3f6-3f84de57555c | Finally, we combine the two models, exploiting the contrastive learning (with MS loss v3) and the previously trained KGE-injected model. The results thus obtained, shown in Tables REF and REF , are better than the previous models taken individually.
To obtain the final model, we first train ext-BERT with the KGE injection approach for 40k training steps. Each batch contains 6 training sequences, where each sequence contains at least 1 and at most 119 mentions. KGEs are then injected at the 8th BERT layer. The remaining hyperparameters are the following: 4k warm-up steps, weight decay \(0.01\) , and learning rate \(1e-5\) .
The resulting model is then trained with MS loss v3 for 50k training steps (about 4 epochs). For each training step, we sample 4 prototypes \(p\) from \(\mathcal {P}\) . We set the number of positives as \(k=20\) , and the number of possible negatives as \(m=30\) .
With regard to the (possible) negatives mining, we update the Faiss index at every epoch. We also experiment with different MS loss parameters, but without seeing any improvement and thus leaving the original \(\alpha =2\) , \(\beta =50\) , \(\epsilon =0.1\) . Other parameters are: learning rate \(2e-5\) , weight decay \(0.01\) , max gradient norm 1 and 20k warm-up steps. During the training, we use gradient accumulation for 8 steps, while the number of contexts per term is limited to 4 for computational reasons.
| r |
443dbef9-51c7-498a-8d97-4582ad63852f | The chosen parameters represent a compromise between the performances obtained on the various tasks. In fact, we have noticed a different behavior of the model depending on the task on which it is evaluated. In particular, the human-annotated semantic relatedness seems to be in contrast with the metrics defined automatically from UMLS; each improvement of the human metrics corresponds to a worsening of the UMLS-based metrics and vice versa. Moreover, while the clustering pair task seems to benefit particularly from an increase in the number of epochs (with 16 epochs the F1 score reaches 25.62, closer to the 33.92 of the SOTA model than to the 19.37 of the 4-epoch model), the prolonged training has a slightly negative effect on semantic relatedness and strongly penalizes the MSCM score. The choice of the pooling strategy is also not optimal for all tasks. By replacing the mean pooling with the CLS tag, both during training and validation, we obtain higher-than-state-of-the-art scores on the MayoSRS and UMNSRS datasets (respectively 0.49 and 0.50 vs. 0.44 and 0.48 for the SOTA model); however, this choice is ineffective for MSCM (-0.61 percentage points on average) and clustering pair (-10.39 points). Finally, we observed that the Masked Language Modeling training is counterproductive with respect to similarity measures. In fact, by comparing ext-BERT with the basic version of BERT, it is already evident that performance dropped over all datasets, despite significant gains in linguistic tasks. At the same time, the new representation obtained with contrastive learning does not seem to bring any benefit on linguistic tasks, as happened with CODER and SapBERT. This phenomenon needs to be further investigated in order to find the right balance.
<TABLE><TABLE> | r |
7dd59bcb-0d10-4955-88c9-e45737673c65 | Quantitative MRI, and the push towards in vivo histology, aims to extract tissue-specific parameters from a series of weighted volumes [1]}. For example, the longitudinal relaxation rate, \(R_1\) , which is sensitive to important biological features, such as myelin and iron content, can be quantified with the variable flip angle (VFA) approach, e.g. [2]}, [3]}. A common assumption when computing quantitative metrics is that certain multiplicative factors, such as the signal intensity modulation imposed by the receiver coil's net sensitivity profile, are constant across the weighted volumes. However, this is invalid if motion occurs between the volume acquisitions. In the case of neuroimaging, rigid body co-registration can be used to realign the brain but will not correct for the differential coil sensitivity modulation, which in \(R_1\) maps computed with the VFA approach can lead to mean absolute error approaching 20% [4]}.
| i |
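For context, \(R_1\) estimation with the VFA approach typically relies on the standard linearisation of the spoiled gradient-echo signal equation, \(S/\sin \alpha = E_1\, S/\tan \alpha + M_0(1-E_1)\) with \(E_1 = \exp (-TR \cdot R_1)\) . A voxel-wise least-squares sketch, ignoring \(B_1^+\) correction for brevity, is:

```python
import numpy as np

def vfa_r1(signals, flip_angles_deg, tr_s):
    """R1 per voxel from VFA spoiled gradient-echo data via the linearised signal
    equation; `signals` has shape (n_flip_angles, n_voxels), nominal flip angles."""
    a = np.deg2rad(np.asarray(flip_angles_deg, dtype=float))
    s = np.asarray(signals, dtype=float)
    y = s / np.sin(a)[:, None]
    x = s / np.tan(a)[:, None]
    xm, ym = x.mean(axis=0), y.mean(axis=0)
    slope = ((x - xm) * (y - ym)).sum(axis=0) / ((x - xm) ** 2).sum(axis=0)  # = E1
    e1 = np.clip(slope, 1e-6, 1.0 - 1e-6)
    return -np.log(e1) / tr_s  # R1 in 1/s
```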
dedbd061-0a14-4165-90f8-93955ae47a25 | A correction scheme has previously been proposed by [1]} and validated for \(R_1\) mapping at 3T. The position-specific net receive sensitivity is estimated from two rapid low-resolution magnitude images, received on the body and array coils respectively prior to each VFA acquisition. The more homogeneous profile of the body coil is used as a reference to compute the net receiver sensitivity, which is then removed from the VFA acquisitions. This approach effectively assumes that the body coil's modulation, rather than the array coil's, is consistent across volumes. This in itself is a potential limitation, as is the general unavailability of body coils at higher field strengths.
| i |
bcbfb2d4-a6e0-4c4e-a003-db824ccd1972 | Here we propose an alternative whereby we estimate the relative sensitivity between volumes. This approach does not fully remove the receiver's sensitivity modulation but does remove the bias that differential modulation introduces in quantitative metrics. Only the calibration images obtained with the array coil are required, i.e. less data than the originally proposed method [1]}. To validate the approach, we focus on \(R_1\) maps computed with the multi-parameter mapping (MPM) protocol [2]}. We first compare performance with the established method of Papp et al. at 3T [1]} and then demonstrate a reduction of inter-scan motion artefacts at 7T under a range of different motion conditions. We further demonstrate that, unlike at 3T, the transmit field \(B_1^+\) also exhibits substantial position-specific variability at 7T. As a result, the most precise \(R_1\) estimates were obtained by accounting for both position-specific \(B_1^+\) and \(B_1^-\) effects.
| i |
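A deliberately simplistic version of this idea — estimating only the relative receive-field modulation between the two positions from the array-coil calibration images and dividing it out of one of the weighted volumes — could look like the following. The smoothed-ratio estimator and the smoothing width are assumptions made for illustration; the actual method estimates the relative sensitivity with a generative model, and all volumes are assumed to be rigid-body co-registered beforehand.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def relative_sensitivity(calib_pos1, calib_pos2, sigma_vox=6.0):
    """Relative receive-field modulation between two head positions, approximated
    as the ratio of heavily smoothed, co-registered array-coil calibration images."""
    s1 = gaussian_filter(np.asarray(calib_pos1, dtype=float), sigma_vox)
    s2 = gaussian_filter(np.asarray(calib_pos2, dtype=float), sigma_vox)
    return np.divide(s1, s2, out=np.ones_like(s1), where=s2 > 0)

def harmonise(volume_pos2, rel_sens):
    """Rescale the second weighted volume so that both volumes share the first
    position's modulation before computing R1."""
    return volume_pos2 * rel_sens
```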
137ea522-3935-4701-baf9-ecb3f5026fea | Exemplar images, relative sensitivities, and results from the generative modelling are shown in figure REF . The \(R_1\) and error maps obtained at 3T and 7T are shown in figures REF and REF respectively. The means and standard deviations of the MAE are reported in Table REF . The differential impact of correcting for transmit and receive field effects is illustrated in figure REF . This shows \(R_1\) and error maps without motion, and with motion having implemented (i) no correction, (ii) correction only for receive field effects, \(B_1^-\) , (iii) only for transmit field effects, \(B_1^+\) , or (iv) for both effects in combination.
<FIGURE> | r |
82141aae-f37b-46ad-b7a4-471e3e44c375 | We have introduced methods for correcting inter-scan motion artefacts in quantitative MRI that do not rely on the availability or spatial homogeneity of a body coil. The approaches are based on estimating the relative sensitivity modulation across positions, and successfully reduced error in \(R_1\) maps at both 3T and 7T.
| d |
68786516-66a5-4ff9-9211-39611ccfeabe | At 3T, the proposed approaches outperformed a previously established correction method [1]}. This can be attributed to the fact that the method of Papp et al. assumes that the reference modulation of the body coil is independent of position, whereas the proposed methods do not. Instead they specifically account for the relative sensitivity across positions thereby restoring consistent modulations.
| d |