_id
stringlengths 36
36
| text
stringlengths 5
665k
| marker
stringlengths 3
6
| marker_offsets
sequence | label
stringlengths 28
32
|
---|---|---|---|---|
9e00ef99-96fb-43c3-8f89-51bd2a0f07e0 | Causality plays a central role in understanding distribution changes, which can be modelled as causal interventions [1]}. The Sparse Mechanism Shift hypothesis [1]}, [3]} (SMS) states that naturally occurring shifts in the data distribution can be attributed to sparse and local changes in the causal generative process. This implies that many causal mechanisms remain invariant across domains [4]}, [5]}, [6]}. In this light, learning a causal model of the environment enables agents to reason about distribution shifts and to exploit the invariance of learnt causal mechanisms across different environments. Hence, we posit that world models with a causal structure can facilitate modular transfer of knowledge. To date, however, methods for causal discovery [7]}, [8]}, [9]}, [10]}, [11]} require access to abstract causal variables to learn causal models from data. These are not typically available in the context of world model learning, where we wish to operate directly on high-dimensional observations.
| [10] | [
[
779,
783
]
] | https://openalex.org/W3103069071 |
0ceb8202-d9f6-4419-bc1d-5b7b206b46ff | Causality plays a central role in understanding distribution changes, which can be modelled as causal interventions [1]}. The Sparse Mechanism Shift hypothesis [1]}, [3]} (SMS) states that naturally occurring shifts in the data distribution can be attributed to sparse and local changes in the causal generative process. This implies that many causal mechanisms remain invariant across domains [4]}, [5]}, [6]}. In this light, learning a causal model of the environment enables agents to reason about distribution shifts and to exploit the invariance of learnt causal mechanisms across different environments. Hence, we posit that world models with a causal structure can facilitate modular transfer of knowledge. To date, however, methods for causal discovery [7]}, [8]}, [9]}, [10]}, [11]} require access to abstract causal variables to learn causal models from data. These are not typically available in the context of world model learning, where we wish to operate directly on high-dimensional observations.
| [11] | [
[
786,
790
]
] | https://openalex.org/W2979174981 |
ec875b0b-8f99-4ec6-84cb-b9a0a5f81a76 | Predictive models of the environment can be used to derive exploration- [1]} or reward-driven [2]}, [3]}, [4]} behaviours. In this paper, we focus on the learning of latent dynamics models. World models [2]} train a representation encoder and a RNN-based transition model in a two-stage process. Other approaches [4]}, [7]}, [8]} learn a generative model by jointly training the representation and the transition via variational inference. PlaNet [4]} parameterises the transition model with RNNs. E2C [8]}, [11]} and SOLAR [7]} use locally-linear transition models, arguing that including constraints in the dynamics model yields structured latent spaces that are suitable for control problems. Our proposed approach shares the general principle that latent representations can be shaped by structured transition mechanisms [13]}. However, to the best of our knowledge, VCD is the first approach that implements a causal transition model given high-dimensional inputs.
| [1] | [
[
72,
75
]
] | https://openalex.org/W3025660841 |
2e41c1ed-9f8a-4fdf-a658-16c6f3f01fb3 | Predictive models of the environment can be used to derive exploration- [1]} or reward-driven [2]}, [3]}, [4]} behaviours. In this paper, we focus on the learning of latent dynamics models. World models [2]} train a representation encoder and a RNN-based transition model in a two-stage process. Other approaches [4]}, [7]}, [8]} learn a generative model by jointly training the representation and the transition via variational inference. PlaNet [4]} parameterises the transition model with RNNs. E2C [8]}, [11]} and SOLAR [7]} use locally-linear transition models, arguing that including constraints in the dynamics model yields structured latent spaces that are suitable for control problems. Our proposed approach shares the general principle that latent representations can be shaped by structured transition mechanisms [13]}. However, to the best of our knowledge, VCD is the first approach that implements a causal transition model given high-dimensional inputs.
| [2] | [
[
94,
97
],
[
203,
206
]
] | https://openalex.org/W2890208753 |
a2e00aed-1a27-4347-8434-e985e69cbd14 | Predictive models of the environment can be used to derive exploration- [1]} or reward-driven [2]}, [3]}, [4]} behaviours. In this paper, we focus on the learning of latent dynamics models. World models [2]} train a representation encoder and a RNN-based transition model in a two-stage process. Other approaches [4]}, [7]}, [8]} learn a generative model by jointly training the representation and the transition via variational inference. PlaNet [4]} parameterises the transition model with RNNs. E2C [8]}, [11]} and SOLAR [7]} use locally-linear transition models, arguing that including constraints in the dynamics model yields structured latent spaces that are suitable for control problems. Our proposed approach shares the general principle that latent representations can be shaped by structured transition mechanisms [13]}. However, to the best of our knowledge, VCD is the first approach that implements a causal transition model given high-dimensional inputs.
| [3] | [
[
100,
103
]
] | https://openalex.org/W2995298643 |
0335f128-30d0-43d2-9040-516ae52f58f9 | Predictive models of the environment can be used to derive exploration- [1]} or reward-driven [2]}, [3]}, [4]} behaviours. In this paper, we focus on the learning of latent dynamics models. World models [2]} train a representation encoder and a RNN-based transition model in a two-stage process. Other approaches [4]}, [7]}, [8]} learn a generative model by jointly training the representation and the transition via variational inference. PlaNet [4]} parameterises the transition model with RNNs. E2C [8]}, [11]} and SOLAR [7]} use locally-linear transition models, arguing that including constraints in the dynamics model yields structured latent spaces that are suitable for control problems. Our proposed approach shares the general principle that latent representations can be shaped by structured transition mechanisms [13]}. However, to the best of our knowledge, VCD is the first approach that implements a causal transition model given high-dimensional inputs.
| [4] | [
[
106,
109
],
[
313,
316
],
[
447,
450
]
] | https://openalex.org/W2900152462 |
2b06f66c-7b8f-4711-8cb9-2f00448a580f | Predictive models of the environment can be used to derive exploration- [1]} or reward-driven [2]}, [3]}, [4]} behaviours. In this paper, we focus on the learning of latent dynamics models. World models [2]} train a representation encoder and a RNN-based transition model in a two-stage process. Other approaches [4]}, [7]}, [8]} learn a generative model by jointly training the representation and the transition via variational inference. PlaNet [4]} parameterises the transition model with RNNs. E2C [8]}, [11]} and SOLAR [7]} use locally-linear transition models, arguing that including constraints in the dynamics model yields structured latent spaces that are suitable for control problems. Our proposed approach shares the general principle that latent representations can be shaped by structured transition mechanisms [13]}. However, to the best of our knowledge, VCD is the first approach that implements a causal transition model given high-dimensional inputs.
| [7] | [
[
319,
322
],
[
524,
527
]
] | https://openalex.org/W2889347284 |
069516e6-4483-48f1-b290-24ee70deb6d5 | Predictive models of the environment can be used to derive exploration- [1]} or reward-driven [2]}, [3]}, [4]} behaviours. In this paper, we focus on the learning of latent dynamics models. World models [2]} train a representation encoder and a RNN-based transition model in a two-stage process. Other approaches [4]}, [7]}, [8]} learn a generative model by jointly training the representation and the transition via variational inference. PlaNet [4]} parameterises the transition model with RNNs. E2C [8]}, [11]} and SOLAR [7]} use locally-linear transition models, arguing that including constraints in the dynamics model yields structured latent spaces that are suitable for control problems. Our proposed approach shares the general principle that latent representations can be shaped by structured transition mechanisms [13]}. However, to the best of our knowledge, VCD is the first approach that implements a causal transition model given high-dimensional inputs.
| [8] | [
[
325,
328
],
[
502,
505
]
] | https://openalex.org/W2963430173 |
15ee302f-152a-41c3-9426-6e923e728372 | Predictive models of the environment can be used to derive exploration- [1]} or reward-driven [2]}, [3]}, [4]} behaviours. In this paper, we focus on the learning of latent dynamics models. World models [2]} train a representation encoder and a RNN-based transition model in a two-stage process. Other approaches [4]}, [7]}, [8]} learn a generative model by jointly training the representation and the transition via variational inference. PlaNet [4]} parameterises the transition model with RNNs. E2C [8]}, [11]} and SOLAR [7]} use locally-linear transition models, arguing that including constraints in the dynamics model yields structured latent spaces that are suitable for control problems. Our proposed approach shares the general principle that latent representations can be shaped by structured transition mechanisms [13]}. However, to the best of our knowledge, VCD is the first approach that implements a causal transition model given high-dimensional inputs.
| [13] | [
[
825,
829
]
] | https://openalex.org/W3210352550 |
136a6982-290a-47bb-a083-207910a853d5 | Causal discovery methods enable the learning of causal structure from data. Approaches can be categorised as constraint-based (e.g. [1]}) and score-based (e.g. [2]}). The reader is referred to [3]} for a detailed review of causal discovery methods. Motivated by the fact that these methods require access to abstract causal variables, recent efforts have been made to reconcile machine learning, which has the ability to operate on low-level data, and causality [4]}. Our current work situates within this broader context of causal representation learning, and aims to identify causally meaningful representations via the discovery of causal transition dynamics. To this end, [5]} proposes a similar framework to ours and provide a theoretical discussion around the identifiability of causal variables. Our approach differs in that we focus on the adaptation capabilities of causal models and show that the method is applicable to image observations.
| [1] | [
[
132,
135
]
] | https://openalex.org/W1524326598 |
abce2049-f3a9-41ff-b975-f599274af60e | Causal discovery methods enable the learning of causal structure from data. Approaches can be categorised as constraint-based (e.g. [1]}) and score-based (e.g. [2]}). The reader is referred to [3]} for a detailed review of causal discovery methods. Motivated by the fact that these methods require access to abstract causal variables, recent efforts have been made to reconcile machine learning, which has the ability to operate on low-level data, and causality [4]}. Our current work situates within this broader context of causal representation learning, and aims to identify causally meaningful representations via the discovery of causal transition dynamics. To this end, [5]} proposes a similar framework to ours and provide a theoretical discussion around the identifiability of causal variables. Our approach differs in that we focus on the adaptation capabilities of causal models and show that the method is applicable to image observations.
| [2] | [
[
160,
163
]
] | https://openalex.org/W2103659055 |
16748e03-1c07-4577-8c81-da64b818a0a5 | Causal discovery methods enable the learning of causal structure from data. Approaches can be categorised as constraint-based (e.g. [1]}) and score-based (e.g. [2]}). The reader is referred to [3]} for a detailed review of causal discovery methods. Motivated by the fact that these methods require access to abstract causal variables, recent efforts have been made to reconcile machine learning, which has the ability to operate on low-level data, and causality [4]}. Our current work situates within this broader context of causal representation learning, and aims to identify causally meaningful representations via the discovery of causal transition dynamics. To this end, [5]} proposes a similar framework to ours and provide a theoretical discussion around the identifiability of causal variables. Our approach differs in that we focus on the adaptation capabilities of causal models and show that the method is applicable to image observations.
| [3] | [
[
193,
196
]
] | https://openalex.org/W2801890059 |
37f25f52-8bbc-4cfc-8644-a3542d8bccc0 | Causal discovery methods enable the learning of causal structure from data. Approaches can be categorised as constraint-based (e.g. [1]}) and score-based (e.g. [2]}). The reader is referred to [3]} for a detailed review of causal discovery methods. Motivated by the fact that these methods require access to abstract causal variables, recent efforts have been made to reconcile machine learning, which has the ability to operate on low-level data, and causality [4]}. Our current work situates within this broader context of causal representation learning, and aims to identify causally meaningful representations via the discovery of causal transition dynamics. To this end, [5]} proposes a similar framework to ours and provide a theoretical discussion around the identifiability of causal variables. Our approach differs in that we focus on the adaptation capabilities of causal models and show that the method is applicable to image observations.
| [4] | [
[
462,
465
]
] | https://openalex.org/W3135588948 |
68c25c9f-9647-4770-8c63-b30b6d59ce2a | Another branch of related work leverages the invariance of causal mechanisms by learning invariant predictors across environments [1]}, [2]}, [3]}, [4]}, [5]}. This invariance has been studied in the context of state abstractions in MDPs [6]}, and invariant policies can be learnt via imitation learning from different environments [7]}. In contrast, our approach models the full generative process of the data across different environments rather than learning discriminative predictors.
| [1] | [
[
130,
133
]
] | https://openalex.org/W1515756431 |
c14fb118-0a2a-4431-917a-0b343b3b524d | Another branch of related work leverages the invariance of causal mechanisms by learning invariant predictors across environments [1]}, [2]}, [3]}, [4]}, [5]}. This invariance has been studied in the context of state abstractions in MDPs [6]}, and invariant policies can be learnt via imitation learning from different environments [7]}. In contrast, our approach models the full generative process of the data across different environments rather than learning discriminative predictors.
| [2] | [
[
136,
139
]
] | https://openalex.org/W2144020560 |
19f9cd20-a61b-440d-a324-45757eb6feb4 | Another branch of related work leverages the invariance of causal mechanisms by learning invariant predictors across environments [1]}, [2]}, [3]}, [4]}, [5]}. This invariance has been studied in the context of state abstractions in MDPs [6]}, and invariant policies can be learnt via imitation learning from different environments [7]}. In contrast, our approach models the full generative process of the data across different environments rather than learning discriminative predictors.
| [3] | [
[
142,
145
]
] | https://openalex.org/W1905064697 |
536060cc-289f-48f7-a181-e835a1b408f8 | Another branch of related work leverages the invariance of causal mechanisms by learning invariant predictors across environments [1]}, [2]}, [3]}, [4]}, [5]}. This invariance has been studied in the context of state abstractions in MDPs [6]}, and invariant policies can be learnt via imitation learning from different environments [7]}. In contrast, our approach models the full generative process of the data across different environments rather than learning discriminative predictors.
| [5] | [
[
154,
157
]
] | https://openalex.org/W2953494151 |
2c4212b5-9056-4a97-823f-6a5180b1947d | Another branch of related work leverages the invariance of causal mechanisms by learning invariant predictors across environments [1]}, [2]}, [3]}, [4]}, [5]}. This invariance has been studied in the context of state abstractions in MDPs [6]}, and invariant policies can be learnt via imitation learning from different environments [7]}. In contrast, our approach models the full generative process of the data across different environments rather than learning discriminative predictors.
| [6] | [
[
238,
241
]
] | https://openalex.org/W3034932139 |
4a0a9dfc-8cf4-465c-afc3-a50c3dbb1236 | In a complex environment with high-dimensional observations, such as images, learning a compact latent state space that captures the dynamics of the environment has been shown to be more computationally efficient than learning predictions directly in the observation space [1]}, [2]}. Given a dataset of sequences \(\lbrace (o^{0:T}, a^{0:T}_i)\rbrace _{i=0}^N\)In the current work, we focus on environments without rewards. However, the proposed method can be readily extended to include reward prediction., with observations \(o^t\) and actions \(a^t\) at discrete timesteps \(t\) , a generative model of the observations can be defined using latent states \(z^{0:T}\) as
\(p(o^{0:T}, a^{0:T}) = \int \prod _{t=0}^T p_\theta (o^t|z^t)p(a^t|z^t)p_\theta (z^t|z^{t-1}, a^{t-1}) dz^{0:T},\)
| [1] | [
[
273,
276
]
] | https://openalex.org/W2900152462 |
48797bc7-5a2b-4e40-b2e7-bdb91121df75 | In a complex environment with high-dimensional observations, such as images, learning a compact latent state space that captures the dynamics of the environment has been shown to be more computationally efficient than learning predictions directly in the observation space [1]}, [2]}. Given a dataset of sequences \(\lbrace (o^{0:T}, a^{0:T}_i)\rbrace _{i=0}^N\)In the current work, we focus on environments without rewards. However, the proposed method can be readily extended to include reward prediction., with observations \(o^t\) and actions \(a^t\) at discrete timesteps \(t\) , a generative model of the observations can be defined using latent states \(z^{0:T}\) as
\(p(o^{0:T}, a^{0:T}) = \int \prod _{t=0}^T p_\theta (o^t|z^t)p(a^t|z^t)p_\theta (z^t|z^{t-1}, a^{t-1}) dz^{0:T},\)
| [2] | [
[
279,
282
]
] | https://openalex.org/W2786019934 |
a9de60eb-5061-4376-a95c-f29492b5d8fd | where \(q_\phi (z^t|o^t)\) is a learnable approximate posterior of the observations. See Appendix for the derivation.
RSSM [1]} employs a flexible transition model parameterised as a fully connected recurrent neural network, where the transition probability is split into a stochastic part and a deterministic recurrent part,
\(z^t \sim p_\theta (z^t|h^t), \quad h^t = f_\theta (h^{t-1}, z^{t-1}, a^{t-1}),\)
| [1] | [
[
125,
128
]
] | https://openalex.org/W2900152462 |
19a7d4f7-b48c-45e0-9ce8-156e2f4ea622 | where \(f(\cdot )\) is instantiated as a GRU [1]} and \(h^t\) is the associated hidden state. Intuitively, this provides a path through which information can be passed on over multiple timesteps.
| [1] | [
[
46,
49
]
] | https://openalex.org/W2157331557 |
eff8f2ab-1f29-4341-b9ea-c10006450a27 | A causal graphical model (CGM) [1]} is defined as a set of random variables \(\lbrace X_1, ..., X_d\rbrace \) , their joint distribution \(P_X\) , and a directed acyclic graph (DAG), \(\mathcal {G}=(X,E)\) , where each edge \((i,j)\in E\) implies that \(X_i\) is a direct cause of \(X_j\) . The joint distribution admits a causal factorisation such that
\(p(x_1,...,x_d) = \prod _{i=0}^d p(x_i| PA_i),\)
| [1] | [
[
31,
34
]
] | https://openalex.org/W2801890059 |
2ab2bfba-dd92-475c-bb50-7d98c9216d18 | where \(p^{\prime }(\cdot |\cdot )\) is the new conditional distribution corresponding to the intervention. The SMS hypothesis [1]} posits that naturally occurring distribution shifts tend to correspond to sparse changes in a causal model when factorized as (REF ), i.e., changes of a few mechanisms only. Causal mechanisms thus tend to be invariant across environments [2]}, [3]}, [4]}. In this light, we argue that a causal world model can structurally leverage the invariance within distribution shifts as an inductive prior. In order to learn a causal model in the context of world models, we draw inspiration from recent advances in causal discovery which aim to learn causal structures from data.
| [1] | [
[
128,
131
]
] | https://openalex.org/W3135588948 |
6b4b7e5c-05e5-4369-b322-7b3174841779 | where \(p^{\prime }(\cdot |\cdot )\) is the new conditional distribution corresponding to the intervention. The SMS hypothesis [1]} posits that naturally occurring distribution shifts tend to correspond to sparse changes in a causal model when factorized as (REF ), i.e., changes of a few mechanisms only. Causal mechanisms thus tend to be invariant across environments [2]}, [3]}, [4]}. In this light, we argue that a causal world model can structurally leverage the invariance within distribution shifts as an inductive prior. In order to learn a causal model in the context of world models, we draw inspiration from recent advances in causal discovery which aim to learn causal structures from data.
| [2] | [
[
371,
374
]
] | https://openalex.org/W2144020560 |
6decda73-a5c0-4a72-9423-f9354f9a44b6 | where \(p^{\prime }(\cdot |\cdot )\) is the new conditional distribution corresponding to the intervention. The SMS hypothesis [1]} posits that naturally occurring distribution shifts tend to correspond to sparse changes in a causal model when factorized as (REF ), i.e., changes of a few mechanisms only. Causal mechanisms thus tend to be invariant across environments [2]}, [3]}, [4]}. In this light, we argue that a causal world model can structurally leverage the invariance within distribution shifts as an inductive prior. In order to learn a causal model in the context of world models, we draw inspiration from recent advances in causal discovery which aim to learn causal structures from data.
| [3] | [
[
377,
380
]
] | https://openalex.org/W1905064697 |
8dc974a1-e167-488e-9ee7-47dcf5614eec | where \(p^{\prime }(\cdot |\cdot )\) is the new conditional distribution corresponding to the intervention. The SMS hypothesis [1]} posits that naturally occurring distribution shifts tend to correspond to sparse changes in a causal model when factorized as (REF ), i.e., changes of a few mechanisms only. Causal mechanisms thus tend to be invariant across environments [2]}, [3]}, [4]}. In this light, we argue that a causal world model can structurally leverage the invariance within distribution shifts as an inductive prior. In order to learn a causal model in the context of world models, we draw inspiration from recent advances in causal discovery which aim to learn causal structures from data.
| [4] | [
[
383,
386
]
] | https://openalex.org/W2740437707 |
5b32b80f-cf5d-488e-ab37-fbb479a6fd5d | We focus on methods that formulate causal discovery as a continuous optimisation problem [1]}, [2]}, [3]} as these can be naturally incorporated into the variational inference framework. Furthermore, since the causal variables are learnt in our model, the causal discovery module is required to learn causal graphs from unknown intervention targets. In this work, we follow the formulation in Differentiable Causal Discovery with Interventional data [1]} (DCDI), which optimises a continuously parameterised probabilistic belief over graph structures and intervention targets. See Appendix for further detail.
| [1] | [
[
89,
92
],
[
450,
453
]
] | https://openalex.org/W3103069071 |
6779bf16-6b5f-4cc2-ba7f-244045b86719 | We focus on methods that formulate causal discovery as a continuous optimisation problem [1]}, [2]}, [3]} as these can be naturally incorporated into the variational inference framework. Furthermore, since the causal variables are learnt in our model, the causal discovery module is required to learn causal graphs from unknown intervention targets. In this work, we follow the formulation in Differentiable Causal Discovery with Interventional data [1]} (DCDI), which optimises a continuously parameterised probabilistic belief over graph structures and intervention targets. See Appendix for further detail.
| [2] | [
[
95,
98
]
] | https://openalex.org/W2979174981 |
4d6b52d2-9bf8-4e4c-a8df-7fb4a38d70fb | We focus on methods that formulate causal discovery as a continuous optimisation problem [1]}, [2]}, [3]} as these can be naturally incorporated into the variational inference framework. Furthermore, since the causal variables are learnt in our model, the causal discovery module is required to learn causal graphs from unknown intervention targets. In this work, we follow the formulation in Differentiable Causal Discovery with Interventional data [1]} (DCDI), which optimises a continuously parameterised probabilistic belief over graph structures and intervention targets. See Appendix for further detail.
| [3] | [
[
101,
104
]
] | https://openalex.org/W2914607694 |
9f4eab91-c983-46fb-ba24-bb9352f200d1 | where \(d\) is the dimension of the latent space, to be set as a hyperparameter. \(z_i\) denotes the \(i\) th dimension of the latent state, and each conditional distribution \(p_i\) is a one-dimensional normal distribution with mean and variance given by separate neural networks. This factorisation and separation of parameters is motivated by the Independent Causal Mechanism principle, which states that the causal generative process of a system’s variables is composed of autonomous modules that do not inform or influence each other [1]}. This explicit modularity of the model structure enables the notion of interventions, where individual conditional distributions are locally changed without affecting the other mechanisms.
| [1] | [
[
542,
545
]
] | https://openalex.org/W3135588948 |
f159127b-af7c-4992-9527-41e50b2e8836 | Following the structure of a CGM, we condition each variable only on its causal parents according to the learnable causal graph \(\mathcal {G}\) , rather than the full state. Given a graph \(\mathcal {G}\) , we define the binary adjacency mask \(M^\mathcal {G}\) where the entry \(M^\mathcal {G}_{ij}\) is 1 if and only if \([z^{t-1}, a^{t-1}]_i\) is a causal parent of \(z_j^t\) . This is consistent with the intuition that, in physical systems, states interact with each other in a sparse manner [1]}, and actions tend to have a direct effect on only a subset of the states. Under this parameterisation, the causal transition probability can be written as
\(p(z^t | z^{t-1}, a^{t-1}) = \prod _i^d p_i(z_i^t | M^\mathcal {G}_i \odot [z^{t-1}, a^{t-1}]),\)
| [1] | [
[
501,
504
]
] | https://openalex.org/W2976023236 |
6ce0e2b2-64c2-4349-ad47-8577a14e15d4 | Following the SMS hypothesis [1]}, we assume that changes in distributions across the \(K\) intervened environments are due to sparse interventions in the ground truth causal generative process. In order to incorporate sparse interventions in VCD, given the set of learnt intervention targets \(\mathcal {I}_k\) for each environment, we define the binary intervention mask \(R^\mathcal {I}\) where the entry \(R^\mathcal {I}_{ki}\) is 1 if and only if the variable \(z_i\) is in the set of intervention targets in environment \(k\) . For each variable \(z_i\) , \(R^\mathcal {I}_{ki}\) acts as a switch between reusing a shared observational model and an environment-specific interventional model. The full interventional causal model of the transition probability in the environment \(k\) can be written as
\(p^k(z^t|z^{t-1}, a^{t-1}) = \prod _i^d p^{(0)}_i(z_i^t | M^\mathcal {G}_i \odot [z^{t-1}, a^{t-1}])^{1-R^\mathcal {I}_{ki}} p^{(k)}_i(z_i^t | M^\mathcal {G}_i \odot [z^{t-1}, a^{t-1}])^{R^\mathcal {I}_{ki}},\)
| [1] | [
[
29,
32
]
] | https://openalex.org/W3135588948 |
4b68fcef-4421-47b7-84db-2a9e1bb94933 | Similar to RSSM [1]}, we augment the model with a deterministic recurrent path to enable long-term predictions. To ensure that each conditional distribution only has access to the causal parents, in the same way that each conditional distribution is modelled by a separate network, each conditional distribution keeps a separate recurrent unit and a corresponding hidden activation:
\(z^t_i \sim p_i(z^t_i|h^{t}_i), \quad h^t_i = f_i(h^{t-1}_i, M^\mathcal {G}_i \odot [z^{t-1}_i, a^{t-1}_i]),\)
| [1] | [
[
16,
19
]
] | https://openalex.org/W2900152462 |
41db38d5-ca51-4166-9d8e-46c51ef5f029 | where \(f_i\) is a recurrent module specific to the variable \(z_i\) , instantiated as a GRU [1]}.
| [1] | [
[
94,
97
]
] | https://openalex.org/W2157331557 |
e3d7caf7-9ce9-4219-a093-0eba0f16f135 |
\(p^{(k)}_\theta (z^t|z^{t-1}, a^{t-1})\) is further factorised as in Equation (REF ). The gradients through the outer expectation and the expectation term in ELBO are estimated using the straight-through Gumbel-max trick [1]} and the reparameterisation trick [2]} respectively. For further implementation details, derivation of the lower bound, and model architectures, see Appendices and .
| [1] | [
[
224,
227
]
] | https://openalex.org/W2547875792 |
3324bba4-94d9-4499-a284-cbda4908a4bc |
\(p^{(k)}_\theta (z^t|z^{t-1}, a^{t-1})\) is further factorised as in Equation (REF ). The gradients through the outer expectation and the expectation term in ELBO are estimated using the straight-through Gumbel-max trick [1]} and the reparameterisation trick [2]} respectively. For further implementation details, derivation of the lower bound, and model architectures, see Appendices and .
| [2] | [
[
262,
265
]
] | https://openalex.org/W1959608418 |
fbd168cd-bec3-498a-a885-80549d539e35 | We compare the performance of VCD against RSSM [1]}, a state-of-the-art latent world model that served as inspiration for VCD. As RSSM does not support learning from multiple environments, we consider two adaptations of RSSM with different levels of knowledge transfer between environments: (1) RSSM, where one transition model is trained over all environments, i.e., maximum parameter sharing across environments; and (2) MultiRSSM, where individual transition models are trained on each environment, with shared encoders and decoders. This corresponds to the case where no knowledge about dynamics is transferred, i.e., each model is a local expert.
We hypothesise that, compared to these two extremes of knowledge sharing, VCD is able to capture environment-specific behaviours whilst reusing invariant mechanisms via modular transfer.
| [1] | [
[
47,
50
]
] | https://openalex.org/W2900152462 |
480d9d46-7a4a-4a7f-acec-19b15a02ad55 | In this paper, we propose VCD, a predictive world model with a causal structure that is able to consume high-dimensional observations. This is achieved by jointly training a representation and a causally structured transition model using a modified causal discovery objective. In doing so, VCD is able to identify causally meaningful representations of the observations and discover sparse relationships in the dynamics of the system. By leveraging the invariance of causal mechanisms, VCD is able to adapt to new environments efficiently by identifying relevant mechanism changes and updating in a modular way, resulting in significantly improved data efficiency. One exciting avenue of future research is to explore the synergy between causal world models and object-centric generative models [1]}, [2]}, [3]}.
| [1] | [
[
795,
798
]
] | https://openalex.org/W3167771209 |
45df6581-b954-4e45-a55b-8ec9d92c78ed | In this paper, we propose VCD, a predictive world model with a causal structure that is able to consume high-dimensional observations. This is achieved by jointly training a representation and a causally structured transition model using a modified causal discovery objective. In doing so, VCD is able to identify causally meaningful representations of the observations and discover sparse relationships in the dynamics of the system. By leveraging the invariance of causal mechanisms, VCD is able to adapt to new environments efficiently by identifying relevant mechanism changes and updating in a modular way, resulting in significantly improved data efficiency. One exciting avenue of future research is to explore the synergy between causal world models and object-centric generative models [1]}, [2]}, [3]}.
| [2] | [
[
801,
804
]
] | https://openalex.org/W3154552141 |
5b4f2131-3e43-40bd-bd51-631d81bb7638 |
The KL terms can be computed analytically since the conditional distributions in the last expression are univariate Gaussian distributions. In training time, the gradients through the expectation terms in the ELBO is estimated by drawing a sample from the posterior distribution using the reparameterisation trick [1]}.
| [1] | [
[
315,
318
]
] | https://openalex.org/W1959608418 |
1fbb8a04-2f78-4d12-8672-0bed0b10640d | This section covers the formulation of DCDI [1]} and the graph learning method. These are subsequently used in the learning of VCD.
| [1] | [
[
44,
47
]
] | https://openalex.org/W3103069071 |
94e4ff6a-4f64-4293-a4e6-8d6c044a967f | The gradients through the outer expectation can be estimated using the Gumbel-Softmax trick [1]}. To implement this, the ELBO term is evaluated with a sample of the causal graph using the following expression for each entry,
\(M^\mathcal {G}_{ij} = \mathbb {I}(\sigma (\alpha _{ij}+L_{ij}) > 0.5) + \sigma (\alpha _{ij}+L_{ij}) - stop\_gradient(\sigma (\alpha _{ij}+L_{ij}) ),\)
| [1] | [
[
92,
95
]
] | https://openalex.org/W2547875792 |
8a0b195f-35ee-40bc-b0de-f52e0d8543ee | In the mixed-state experiment, all conditional distributions (including encoders, decoders and transition models) are parameterised by feedforward MLPs with two hidden layers of 64 hidden units each. The recurrent modules are implemented as GRUs [1]} with 64 hidden units. Distributions in the latent space are 16-dimensional diagonal Gaussian distributions with predicted mean and log variance.
| [1] | [
[
246,
249
]
] | https://openalex.org/W2157331557 |
718aec46-7f41-4e19-b667-e901f587afe4 | In the image experiment, the encoders and decoders are parameterised as convolutional and deconvolutional networks from [1]}. In the RSSM models, the transition models are parameterised as feedforward MLPs with two hidden layers of 300 hidden units. The recurrent module is a GRU with 300 hidden units. In VCD, to compensate for the fact each dimension in the latent space has a separate model, the number of hidden units in the GRU and MLP are reduced to 32 to avoid over-parameterisation. We found that initialising the encoders and decoders by pretraining them as a variational autoencoder helped with training stability for both RSSM and VCD.
| [1] | [
[
120,
123
]
] | https://openalex.org/W2890208753 |
41f79478-da84-4910-a3dc-ac0de562c3d0 | In both experiments, the training objective is maximised using the ADAM optimiser [1]} with learning rate \(10^{-3}\) for mixed-state, and \(10^{-4}\) for images. In both environments, we clip the log variance to \(-3\) , with a batch size of two trajectories from each of six environments with \(T=50\) . In VCD, the hyperparameters \(\lambda _{\mathcal {G}}, \lambda _{\mathcal {I}}\) are both set to 0.01. All models are trained on a single Nvidia Tesla V100 GPU.
| [1] | [
[
82,
85
]
] | https://openalex.org/W2964121744 |
fb5a92b2-2ac5-4131-9b37-214ca51a94a8 | Conditional generative adversarial networks (GANs) [1]}, widely used in other generative tasks, have been the primary choice for this line of tasks. Conditional GANs used in the facial de-occlusion and reconstruction are classified into two categories with regard to their architecture for the generator part; U-net-based generator and modulated-generator-based approach. U-net-based generator focuses on completing only the masked region, and conventionally the rest part of an image is directly copy-and-pasted from the conditioned image. However, the fact that it does not train to construct the whole image is postulated to have a negative effect on its generative capability, as when U-net-based generators are confronted with novel masks in shape and size not seen during training, they tend to significantly underperform. Figure REF shows the failure cases of the U-net-based generator when tested on the different mask types unseen during the training.
| [1] | [
[
51,
54
]
] | https://openalex.org/W2125389028 |
b48206fb-4fc9-44fd-bd0e-1e35d38df479 | Modulated generative approach is one of the recent advances among the conditional GANs, which regards an input as a random constant and each convolution layer adjusts the intermediate latent vectors with denormalization factors (e.g. scale and bias) [1]}. Modulated approach has further advanced the generative capability and edit-ability. However, in the case of conditional generation, there is information loss when a conditioned image is compressed to a low-dimensional latent vector. [2]} The lost information is mostly high-frequency details or the infrequent information such as background, as GAN tends to sustain common information of a domain. This leads to the model's under-performance at pixel preservation of the unmasked region as well, resulting in the prediction being largely different from the conditioned input.
| [1] | [
[
250,
253
]
] | https://openalex.org/W2962770929 |
fcad6486-b708-46d8-8bce-75fa17d6d5ad | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [1] | [
[
264,
267
]
] | https://openalex.org/W2295936755 |
31235ab1-4597-457d-a265-862128c08248 | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [2] | [
[
270,
273
]
] | https://openalex.org/W2012875423 |
6913ae5d-bc58-4f9f-997c-aea0fa01ddfe | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [3] | [
[
336,
339
]
] | https://openalex.org/W2116013899 |
8748b646-c462-4cf9-b41d-77d96c11bc4d | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [4] | [
[
342,
345
]
] | https://openalex.org/W1999360130 |
dac40063-6e45-4ea1-bb41-3c2d9f2c7296 | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [5] | [
[
348,
351
]
] | https://openalex.org/W2125873654 |
ff0a5446-cbf3-4185-8ab4-5375f78f3f6d | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [6] | [
[
354,
357
]
] | https://openalex.org/W2105038642 |
dbb8ef52-c224-41fb-a338-1ef4de4f1d40 | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [8] | [
[
471,
474
]
] | https://openalex.org/W2963420272 |
6f43c3fe-a399-477c-a8a6-61c8263b19b0 | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [10] | [
[
645,
649
]
] | https://openalex.org/W2738588019 |
4f820834-45de-4350-834f-95b91408333b | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [11] | [
[
652,
656
]
] | https://openalex.org/W2611104282 |
38380364-6cc6-4c49-b340-7c60568d949a | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [12] | [
[
659,
663
]
] | https://openalex.org/W2963270367 |
0afa9ab0-f94d-474e-bc62-7e4e0609e3dd | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [13] | [
[
666,
670
]
] | https://openalex.org/W2963231084 |
86c8e274-f941-4810-8ca3-c74a22326368 | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [14] | [
[
857,
861
]
] | https://openalex.org/W3176913662 |
b62e22d9-3e77-4595-9991-a6e7ef696e40 | Image Inpainting
is defined as a task of reconstructing missing regions in an image as well as removing objects that occlude an image.
Early non-learning-based approaches used to fill
the masked regions with the information retrieved directly from their surrounds [1]}, [2]}, or replace the missing regions with the best matching patch [3]}, [4]}, [5]}, [6]}, [7]}.
With the advent of deep learning, various learning-based methods for image inpainting have been studied.
[8]} was the first to use GANs [9]} for image inpainting, using an encoder-decoder architecture.
Until now,
numerous studies using the U-Net architecture have been conducted [10]}, [11]}, [12]}, [13]} and more recently, the modulated generator approaches started to generate photo-realistic images, while some studies utilized them as generators and trained encoders correspond to them [14]}, [15]}.
| [15] | [
[
864,
868
]
] | https://openalex.org/W3178406257 |
2cbd405b-b277-4eae-a796-8fc8162917c1 | Semantic Image Inpainting
refers to the problem of filling in large holes that require semantic information.
Even with the advances in GANs, image inpainting is still an ill-posed problem, especially when most of the semantic knowledge is lost.
There have been a number of novel approaches to the problem [1]}, [2]}.
[3]} pointed out that existing studies have focused only on rectangular-shaped holes, and proposed PConv to address irregular masks by constructing the U-Net architecture with partial convolution layers.
[4]} proposed a contextual attention layer to explicitly utilize surrounding image features, overcoming the ineffectiveness of convolutional neural networks in explicitly utilizing distant surrounding information.
| [1] | [
[
306,
309
]
] | https://openalex.org/W2963917315 |
a9999a6e-77c5-40e7-adbe-da56d32186a7 | Semantic Image Inpainting
refers to the problem of filling in large holes that require semantic information.
Even with the advances in GANs, image inpainting is still an ill-posed problem, especially when most of the semantic knowledge is lost.
There have been a number of novel approaches to the problem [1]}, [2]}.
[3]} pointed out that existing studies have focused only on rectangular-shaped holes, and proposed PConv to address irregular masks by constructing the U-Net architecture with partial convolution layers.
[4]} proposed a contextual attention layer to explicitly utilize surrounding image features, overcoming the ineffectiveness of convolutional neural networks in explicitly utilizing distant surrounding information.
| [2] | [
[
312,
315
]
] | https://openalex.org/W2611104282 |
713641f4-1ab7-42f0-89ae-9614eee2b3d9 | Semantic Image Inpainting
refers to the problem of filling in large holes that require semantic information.
Even with the advances in GANs, image inpainting is still an ill-posed problem, especially when most of the semantic knowledge is lost.
There have been a number of novel approaches to the problem [1]}, [2]}.
[3]} pointed out that existing studies have focused only on rectangular-shaped holes, and proposed PConv to address irregular masks by constructing the U-Net architecture with partial convolution layers.
[4]} proposed a contextual attention layer to explicitly utilize surrounding image features, overcoming the ineffectiveness of convolutional neural networks in explicitly utilizing distant surrounding information.
| [3] | [
[
318,
321
]
] | https://openalex.org/W2798365772 |
2a8bc640-1fe7-424d-9275-52471afc7715 | Semantic Image Inpainting
refers to the problem of filling in large holes that require semantic information.
Even with the advances in GANs, image inpainting is still an ill-posed problem, especially when most of the semantic knowledge is lost.
There have been a number of novel approaches to the problem [1]}, [2]}.
[3]} pointed out that existing studies have focused only on rectangular-shaped holes, and proposed PConv to address irregular masks by constructing the U-Net architecture with partial convolution layers.
[4]} proposed a contextual attention layer to explicitly utilize surrounding image features, overcoming the ineffectiveness of convolutional neural networks in explicitly utilizing distant surrounding information.
| [4] | [
[
522,
525
]
] | https://openalex.org/W3043547428 |
5515c165-7bae-48c8-9e18-16cfb7e1b1b6 | As semantic image inpainting is an ill-posed problem, several attempts have been made to utilize additional information. EdgeConnect [1]} is a two-stage adversarial network where an edge generator generates surrounding edges in an image and an image completion module is provided with this additional information about the edges. There has also been an attempt to integrate landmark information of the face [2]} or utilize the semantic maps for facial reconstruction. [3]}, [4]}
| [1] | [
[
133,
136
]
] | https://openalex.org/W2907097116 |
022940ed-dbcc-4a17-acd4-88929d2cf6db | As semantic image inpainting is an ill-posed problem, several attempts have been made to utilize additional information. EdgeConnect [1]} is a two-stage adversarial network where an edge generator generates surrounding edges in an image and an image completion module is provided with this additional information about the edges. There has also been an attempt to integrate landmark information of the face [2]} or utilize the semantic maps for facial reconstruction. [3]}, [4]}
| [4] | [
[
474,
477
]
] | https://openalex.org/W2888039002 |
58501e91-0971-40e4-8b22-017ab8c2af75 | The usage of semantic map allows us to grasp both the purpose and performance of the model.
Although requiring the semantic labeling can accompany much efforts, we overcome this by using a semantic map predictor which enables obtaining the semantic label in an on-the-fly manner so that we can neglect the need of human labeling.
It is important to note that the semantic map predictor is a pre-trained network trained with a separate non-overlapping dataset with the SGIN's training data, and fortunately it generalizes well to the SGIN's training data.
Given the masked image \(X_{masked}\) , the semantic map predictor predicts the semantic label map \(L = \lbrace l_{1}, \cdots , l_{C} \rbrace \) . Each \(l_c , c \in [C]\) , indicates the binary class label map for eleven regions (i.e, \(C=11\) ). We trained BiseNet [1]} for our generalized semantic map predictor.
| [1] | [
[
823,
826
]
] | https://openalex.org/W2886934227 |
d04453ab-33ea-4ae9-b8e2-ab718836f8df | We chose Feature Pyramid Network (FPN) [1]} as our encoder, which generates latent codes through multi-scaled hierarchical features.
Style representations from the latent code are fully determined by the masked image \(X_{masked}\) and the semantic label map \(L_{n}, n \in [N]\) , where \(N\) indicates the number of layers in the FPN's feature map.
As we use the semantic map \(L_{n}\) for additional semantic knowledge, the output latent code needs to disentangle styles (e.g, color, patterns ...) for each semantic region.
In order to achieve this, we first concatenate the semantic label \(L_{n}\) and the masked image \(X_{masked}\) channel-wise.
Then each pyramid network produces \(F_{n}\) , where \(H_{n}\) and \(W_{n}\) indicate the spatial dimension of height and width for each layer.
We expand \(F_{n}\in \mathbb {R}^{H_{n} \times W_{n} \times 512}\) to \(F_{exp_{n}} \in \mathbb {R}^{H_{n} \times W_{n} \times 512 \times C}\) by broadcasting along the dimension of binary class map.
Also, the semantic label map \(L_{n} \in \mathbb {R}^{H_{n} \times W_{n} \times C}\) is broadcasted along the dimension of feature map channels, producing \(L_{exp_{n}} \in \mathbb {R}^{H_{n} \times W_{n} \times 512 \times C}\) .
Additionally, it is difficult to construct faithful style embeddings in the missing holes, because there are no features extracted from the masked region. In the light of this, we harness the well-known contextual attention module [2]} in between the feature pyramids, which can provide additional attention-wise information in the masked region as well. Ablation study shows that the attention module helps increasing the prediction quality.
| [1] | [
[
39,
42
]
] | https://openalex.org/W3176913662 |
e369375d-c350-45cd-a38b-284eaa4ebd44 | We chose Feature Pyramid Network (FPN) [1]} as our encoder, which generates latent codes through multi-scaled hierarchical features.
Style representations from the latent code are fully determined by the masked image \(X_{masked}\) and the semantic label map \(L_{n}, n \in [N]\) , where \(N\) indicates the number of layers in the FPN's feature map.
As we use the semantic map \(L_{n}\) for additional semantic knowledge, the output latent code needs to disentangle styles (e.g, color, patterns ...) for each semantic region.
In order to achieve this, we first concatenate the semantic label \(L_{n}\) and the masked image \(X_{masked}\) channel-wise.
Then each pyramid network produces \(F_{n}\) , where \(H_{n}\) and \(W_{n}\) indicate the spatial dimension of height and width for each layer.
We expand \(F_{n}\in \mathbb {R}^{H_{n} \times W_{n} \times 512}\) to \(F_{exp_{n}} \in \mathbb {R}^{H_{n} \times W_{n} \times 512 \times C}\) by broadcasting along the dimension of binary class map.
Also, the semantic label map \(L_{n} \in \mathbb {R}^{H_{n} \times W_{n} \times C}\) is broadcasted along the dimension of feature map channels, producing \(L_{exp_{n}} \in \mathbb {R}^{H_{n} \times W_{n} \times 512 \times C}\) .
Additionally, it is difficult to construct faithful style embeddings in the missing holes, because there are no features extracted from the masked region. In the light of this, we harness the well-known contextual attention module [2]} in between the feature pyramids, which can provide additional attention-wise information in the masked region as well. Ablation study shows that the attention module helps increasing the prediction quality.
| [2] | [
[
1468,
1471
]
] | https://openalex.org/W3043547428 |
c1ed1104-a196-40f3-bb1d-c1eb42fd2974 | The semantics-guided inpainting network (SGIN) is comprised of a number of serially connected SGI blocks (SGIB), the exact number of which is determined by the resolution of training images. As shown in Fig. REF , each convolution block has two convolution layers, which is composed of a normalization layer and a convolution layer. The input in the normalization layer is firstly instance-normalized, and then denormalized by the semantic region adapative block (SEAN) [1]} in an attempt to reflect the previously extracted semantic features on the reconstructed image.
While the spatially adaptive (SPADE) normalization block [2]}, which can separately process spatial parameters of each image, we chose SEAN, a variant of SPADE, as our denormalizer as it is able to process not only the spatial parameters but also style modulation parameters.
Overall, the generator of our framework is expressed as follows;
\(G(X_{masked},L) = \text{SGIN}(\text{RAP}(\text{FPN}(X;L));L)\)
| [1] | [
[
470,
473
]
] | https://openalex.org/W3106333289 |
bf553850-fdcd-4b0f-9424-1aa3d629e8e2 | The semantics-guided inpainting network (SGIN) is comprised of a number of serially connected SGI blocks (SGIB), the exact number of which is determined by the resolution of training images. As shown in Fig. REF , each convolution block has two convolution layers, which is composed of a normalization layer and a convolution layer. The input in the normalization layer is firstly instance-normalized, and then denormalized by the semantic region adapative block (SEAN) [1]} in an attempt to reflect the previously extracted semantic features on the reconstructed image.
While the spatially adaptive (SPADE) normalization block [2]}, which can separately process spatial parameters of each image, we chose SEAN, a variant of SPADE, as our denormalizer as it is able to process not only the spatial parameters but also style modulation parameters.
Overall, the generator of our framework is expressed as follows;
\(G(X_{masked},L) = \text{SGIN}(\text{RAP}(\text{FPN}(X;L));L)\)
| [2] | [
[
628,
631
]
] | https://openalex.org/W2962974533 |
3a3050f1-231c-4d8c-8561-e239fb08461c | Feeding back the lost features directly to the generation module has been one of the best solutions to this problem [1]}. Yet, such a model requires heavy memory with large computational units such as consultation fusion mapping networks, adaptive distortion mapping networks and various data augmentation techniques.
| [1] | [
[
116,
119
]
] | https://openalex.org/W3200670538 |
1916313a-cfe9-4e5c-a07e-41ca6fca4e44 | We introduce the concept of self-distillation loss, which provides the feature-level supervision directly to the generator for preserving high-fidelity details of the input.
Inspired by the `privileged information' in the work of PISR [1]}, where a teacher network is forwarded with a ground truth image to produce further detailed features and a student network learns the feature map of the teacher network through distillation, we devised an information flow that the generator is fed with its own first coarse image along with the loss calculated from the comparison between the feature map of the ground truth and the predicted output. Thus, we call this as the self-distillation loss. The details of the calculation are as follows:
The generator is forwarded with the ground truth image \(X_{gt}\) with no masked region, and produce compact feature maps \(f_{i}(X_{gt})\) in the \(i^{th}\) SGI block.
Then, the \(L_2\) difference between the feature map of the initially forwarded masked input \(X_m\) and the groundtruth, \(X_{gt}\) , is calculated.
Finally, the self distillation loss is defined as \(\mathcal {L}_{sd} = \sum _{i=1}^{K} ||f_{i}(X_{gt}) - f_{i}(X_m)||_2\) , where \(K\) denotes the number of SGI blocks.
The advantageous effect of using self-distillation loss can be found in the ablation study section.
| [1] | [
[
235,
238
]
] | https://openalex.org/W3107716502 |
b3a92120-af4b-43ab-8d61-b20e128a4b2f | In addition, we applied several conventionally used loss functions in the literature of image inpainting. The discriminator computes the \(\mathcal {L}_{feat}\) , which is the \(L_1\) loss between the discriminator features for the \(X_{gt}\) and the predicted image, as well as an adversarial loss \(\mathcal {L}_{adv}\) . Also, we used the \(\mathcal {L}_{per}\) , which is the perceptual loss between the features of \(X_{gt}\) and \(X_{masked}\) extracted from a VGG-19 network [1]}. \(\mathcal {L}_{adv}\) and \(\mathcal {L}_{per}\) are defined as follows:
\(\mathcal {L}_{adv} = \mathbb {E}_X[\log D(X))] + \mathbb {E}_X[\log (1-D(G(X_{masked} |L))],\)
\(\mathcal {L}_{per} = ||\text{Vgg}(G(X_{masked} |L)) - \text{Vgg}(X)||_2.\)
| [1] | [
[
486,
489
]
] | https://openalex.org/W1686810756 |
3dee760a-e25b-4127-9f73-ead8c0289089 | To generate diverse occluded facial images, we used Naturalistic Occlusion Generation (NatOcc) [1]} to overlay human facial images from HELEN [2]} and CelebA-HQ [3]} with occluding objects and create naturalistic synthetic images (See Fig. REF for some examples). As for the occluding objects, we used 128 objects across 20 categories from Microsoft Common Objects in Context (COCO) and 200 hands from EgoHands [4]}. Note that for the training of our semantic map predictor, HELEN-derived occlusion images are used and its evaluation is done using CelebA-HQ images. Different from this, for the SGIN, we only used CelebA-HQ. We split CelebAMask-HQ-derived images into 22,300 training images and 2,800 validation images.
For more implementation details, please refer to the supplementary information.
<FIGURE><TABLE><TABLE> | [2] | [
[
142,
145
]
] | https://openalex.org/W1796263212 |
3e0d045a-0c4e-432b-bb46-9e4f882f0a01 | To generate diverse occluded facial images, we used Naturalistic Occlusion Generation (NatOcc) [1]} to overlay human facial images from HELEN [2]} and CelebA-HQ [3]} with occluding objects and create naturalistic synthetic images (See Fig. REF for some examples). As for the occluding objects, we used 128 objects across 20 categories from Microsoft Common Objects in Context (COCO) and 200 hands from EgoHands [4]}. Note that for the training of our semantic map predictor, HELEN-derived occlusion images are used and its evaluation is done using CelebA-HQ images. Different from this, for the SGIN, we only used CelebA-HQ. We split CelebAMask-HQ-derived images into 22,300 training images and 2,800 validation images.
For more implementation details, please refer to the supplementary information.
<FIGURE><TABLE><TABLE> | [3] | [
[
161,
164
]
] | https://openalex.org/W3034521057 |
eec2a7ee-6e00-4711-9f4e-ee9123621e11 | To generate diverse occluded facial images, we used Naturalistic Occlusion Generation (NatOcc) [1]} to overlay human facial images from HELEN [2]} and CelebA-HQ [3]} with occluding objects and create naturalistic synthetic images (See Fig. REF for some examples). As for the occluding objects, we used 128 objects across 20 categories from Microsoft Common Objects in Context (COCO) and 200 hands from EgoHands [4]}. Note that for the training of our semantic map predictor, HELEN-derived occlusion images are used and its evaluation is done using CelebA-HQ images. Different from this, for the SGIN, we only used CelebA-HQ. We split CelebAMask-HQ-derived images into 22,300 training images and 2,800 validation images.
For more implementation details, please refer to the supplementary information.
<FIGURE><TABLE><TABLE> | [4] | [
[
412,
415
]
] | https://openalex.org/W2204609240 |
a96db6a7-38ad-42bb-bdca-e22055775db1 | We compared our SGIN with various image-inpainting models different in their types and schemes. For the U-net architecture, we chose Deepfill-v2 [1]} and Crfill [2]}, and for the modulated generator architecture, we chose PsP [3]} and E4E [4]}. We also included SEAN [5]} in that it also uses semantic maps. and MAT [6]}, the current state-of-art (SOTA) inpainting module which is based on a transformer model.
For a fair comparison, all of these baseline models are trained with the same NatOcc datasets with the same masks obtained from our Occlusion Detector, and SEAN is trained with the same semantic maps obtained from our semantic map predictor, except for MAT whose large computational cost is unaffordable in our devices.
Alternatively, we made our comparison based on the pretrained CelebA-HQ MAT model uploaded at the author's GitHub repository and used the same masks as ours.
| [1] | [
[
145,
148
]
] | https://openalex.org/W2982763192 |
faff17f0-2579-438c-a436-061c504f028a | We compared our SGIN with various image-inpainting models different in their types and schemes. For the U-net architecture, we chose Deepfill-v2 [1]} and Crfill [2]}, and for the modulated generator architecture, we chose PsP [3]} and E4E [4]}. We also included SEAN [5]} in that it also uses semantic maps. and MAT [6]}, the current state-of-art (SOTA) inpainting module which is based on a transformer model.
For a fair comparison, all of these baseline models are trained with the same NatOcc datasets with the same masks obtained from our Occlusion Detector, and SEAN is trained with the same semantic maps obtained from our semantic map predictor, except for MAT whose large computational cost is unaffordable in our devices.
Alternatively, we made our comparison based on the pretrained CelebA-HQ MAT model uploaded at the author's GitHub repository and used the same masks as ours.
| [3] | [
[
226,
229
]
] | https://openalex.org/W3176913662 |
f766850e-fd4a-4722-b338-229277fc3fe1 | We compared our SGIN with various image-inpainting models different in their types and schemes. For the U-net architecture, we chose Deepfill-v2 [1]} and Crfill [2]}, and for the modulated generator architecture, we chose PsP [3]} and E4E [4]}. We also included SEAN [5]} in that it also uses semantic maps. and MAT [6]}, the current state-of-art (SOTA) inpainting module which is based on a transformer model.
For a fair comparison, all of these baseline models are trained with the same NatOcc datasets with the same masks obtained from our Occlusion Detector, and SEAN is trained with the same semantic maps obtained from our semantic map predictor, except for MAT whose large computational cost is unaffordable in our devices.
Alternatively, we made our comparison based on the pretrained CelebA-HQ MAT model uploaded at the author's GitHub repository and used the same masks as ours.
| [4] | [
[
239,
242
]
] | https://openalex.org/W3178406257 |
ae28912e-406a-4c4b-9e05-e236bc8e4f83 | We compared our SGIN with various image-inpainting models different in their types and schemes. For the U-net architecture, we chose Deepfill-v2 [1]} and Crfill [2]}, and for the modulated generator architecture, we chose PsP [3]} and E4E [4]}. We also included SEAN [5]} in that it also uses semantic maps. and MAT [6]}, the current state-of-art (SOTA) inpainting module which is based on a transformer model.
For a fair comparison, all of these baseline models are trained with the same NatOcc datasets with the same masks obtained from our Occlusion Detector, and SEAN is trained with the same semantic maps obtained from our semantic map predictor, except for MAT whose large computational cost is unaffordable in our devices.
Alternatively, we made our comparison based on the pretrained CelebA-HQ MAT model uploaded at the author's GitHub repository and used the same masks as ours.
| [5] | [
[
267,
270
]
] | https://openalex.org/W3106333289 |
984915e2-ea22-492a-adca-b67d704213c0 | As some previous papers [1]}, [2]} have pointed out, image generation tasks lack a good quantitative metric for quantitative evaluation. For example, it is possible to have a different but highly plausible reconstructed image from the ground truth but the scores from SSIM or RMSE may fluctuate simply because of its difference from the ground truth. In the light of this, we employed six metrics that can shed light on the different aspects of the quality of reconstruction; PSNR, SSIM [3]}, MS-SSIM [4]}, RMSE, LPIPS [5]}, and FID [6]}. We evaluated the average scores for all of the validation samples.
| [1] | [
[
24,
27
]
] | https://openalex.org/W2887695188 |
94e6ad8b-6e23-4aef-b3f5-2e52c3cede68 | As some previous papers [1]}, [2]} have pointed out, image generation tasks lack a good quantitative metric for quantitative evaluation. For example, it is possible to have a different but highly plausible reconstructed image from the ground truth but the scores from SSIM or RMSE may fluctuate simply because of its difference from the ground truth. In the light of this, we employed six metrics that can shed light on the different aspects of the quality of reconstruction; PSNR, SSIM [3]}, MS-SSIM [4]}, RMSE, LPIPS [5]}, and FID [6]}. We evaluated the average scores for all of the validation samples.
| [2] | [
[
30,
33
]
] | https://openalex.org/W2963185411 |
649a91d5-1885-4171-b901-bf428a9f08ce | As some previous papers [1]}, [2]} have pointed out, image generation tasks lack a good quantitative metric for quantitative evaluation. For example, it is possible to have a different but highly plausible reconstructed image from the ground truth but the scores from SSIM or RMSE may fluctuate simply because of its difference from the ground truth. In the light of this, we employed six metrics that can shed light on the different aspects of the quality of reconstruction; PSNR, SSIM [3]}, MS-SSIM [4]}, RMSE, LPIPS [5]}, and FID [6]}. We evaluated the average scores for all of the validation samples.
| [3] | [
[
487,
490
]
] | https://openalex.org/W2133665775 |
dbad9c78-3b61-4e5a-8a56-f48820dcb876 | As some previous papers [1]}, [2]} have pointed out, image generation tasks lack a good quantitative metric for quantitative evaluation. For example, it is possible to have a different but highly plausible reconstructed image from the ground truth but the scores from SSIM or RMSE may fluctuate simply because of its difference from the ground truth. In the light of this, we employed six metrics that can shed light on the different aspects of the quality of reconstruction; PSNR, SSIM [3]}, MS-SSIM [4]}, RMSE, LPIPS [5]}, and FID [6]}. We evaluated the average scores for all of the validation samples.
| [4] | [
[
501,
504
]
] | https://openalex.org/W1580389772 |
8ec07b05-9660-4b66-8541-42462aa1dad3 | As some previous papers [1]}, [2]} have pointed out, image generation tasks lack a good quantitative metric for quantitative evaluation. For example, it is possible to have a different but highly plausible reconstructed image from the ground truth but the scores from SSIM or RMSE may fluctuate simply because of its difference from the ground truth. In the light of this, we employed six metrics that can shed light on the different aspects of the quality of reconstruction; PSNR, SSIM [3]}, MS-SSIM [4]}, RMSE, LPIPS [5]}, and FID [6]}. We evaluated the average scores for all of the validation samples.
| [5] | [
[
519,
522
]
] | https://openalex.org/W2962785568 |
8d595403-eeb0-4dd3-ba68-1feadf6a2633 | As some previous papers [1]}, [2]} have pointed out, image generation tasks lack a good quantitative metric for quantitative evaluation. For example, it is possible to have a different but highly plausible reconstructed image from the ground truth but the scores from SSIM or RMSE may fluctuate simply because of its difference from the ground truth. In the light of this, we employed six metrics that can shed light on the different aspects of the quality of reconstruction; PSNR, SSIM [3]}, MS-SSIM [4]}, RMSE, LPIPS [5]}, and FID [6]}. We evaluated the average scores for all of the validation samples.
| [6] | [
[
533,
536
]
] | https://openalex.org/W2963981733 |
2a26a261-5e76-484e-9fa9-7dcc0c4cf507 | There are many works dedicated to interactive data visualization or real-time data visualization in the CFD domain. ViSTA FlowLib[1]} uses haptic rendering techniques to give a better understanding of the unsteady fluid flows data. Another work [2]} provides and evaluates multimodal feedback such as sonification during interaction with fluid simulation, especially to address visual overload.
Stam J. has pioneered the development of the interactive simulation using an unconditionally stable model[3]}. Until today, most applications based on interactive simulation approach use particle-based methods such as Translating Eulerian Grids[4]} or Smoothed Particle Hydrodynamics (SPH)[5]}. These methods target extreme performance, high level rendering, and stability, but at the expense of the physical relevance, which is required by decision-making, physics education or research. Using more relevant methods requires computing centers that are often isolated from the visualization resources which are usually undersized. Even if a lot of technical results were published to support the interactive fluid simulation approach, this methodology remains rarely used. Moreover, there are only a few studies that deal with the problem of the usefulness of interactive simulations in terms of performance and user experience. We propose in this paper a work-in-progress to address this issue by designing an interactive fluid simulation platform based on Unity 3D and evaluating the benefit of the interactive simulation approach on decision making from fluid simulation on a simple but realistic use case.
| [3] | [
[
500,
503
]
] | https://openalex.org/W2295821368 |
6174f1ab-f37e-4fc8-97f1-bbe9b281cc04 | There are many works dedicated to interactive data visualization or real-time data visualization in the CFD domain. ViSTA FlowLib[1]} uses haptic rendering techniques to give a better understanding of the unsteady fluid flows data. Another work [2]} provides and evaluates multimodal feedback such as sonification during interaction with fluid simulation, especially to address visual overload.
Stam J. has pioneered the development of the interactive simulation using an unconditionally stable model[3]}. Until today, most applications based on interactive simulation approach use particle-based methods such as Translating Eulerian Grids[4]} or Smoothed Particle Hydrodynamics (SPH)[5]}. These methods target extreme performance, high level rendering, and stability, but at the expense of the physical relevance, which is required by decision-making, physics education or research. Using more relevant methods requires computing centers that are often isolated from the visualization resources which are usually undersized. Even if a lot of technical results were published to support the interactive fluid simulation approach, this methodology remains rarely used. Moreover, there are only a few studies that deal with the problem of the usefulness of interactive simulations in terms of performance and user experience. We propose in this paper a work-in-progress to address this issue by designing an interactive fluid simulation platform based on Unity 3D and evaluating the benefit of the interactive simulation approach on decision making from fluid simulation on a simple but realistic use case.
| [5] | [
[
684,
687
]
] | https://openalex.org/W2141435354 |
4da3a341-2988-4e4a-a249-ab7991b514cc | Because of the Covid-19 sanitary condition, an experiment in the lab was not permitted. Therefore, before the experiment, an email containing an information notice and a consent form was sent to the subjects. Once they signed the document, they were contacted by phone and invited to connect to our computer remotely via Teamviewer. Then, they had to perform a 5-minute familiarisation task with a training scene to familiarize themselves with interactive features and goals. After this training stage, the first session of the experiment started with each mode of simulation (interactive mode or non-interactive condition). After 3 scenes with the same mode of simulation, they were invited to fill the NASA TLX questionnaire[1]}. Then the subject entered the second session: another training part with the other mode of simulation started followed by 3 other scenes of the experiment. After the second part, the experiment ended with the same questionnaire for the second session.
| [1] | [
[
726,
729
]
] | https://openalex.org/W2151905266 |
e5a248e4-d5e6-4342-bbf8-4dd18e710e0d | Autonomous driving promises to revolutionize how we transport goods, travel, and interact with our environment. To safely plan a route, a self-driving vehicle must first perceive and localize mobile traffic participants such as other vehicles and pedestrians in 3D. Current state-of-the-art 3D object detectors are all based on deep neural networks [1]}, [2]}, [3]}, [4]} and can yield up to 80 average precision on benchmark datasets[5]}, [6]}.
| [1] | [
[
349,
352
]
] | https://openalex.org/W2964062501 |
6b3a4370-cda4-4f47-8b76-166086c34671 | Autonomous driving promises to revolutionize how we transport goods, travel, and interact with our environment. To safely plan a route, a self-driving vehicle must first perceive and localize mobile traffic participants such as other vehicles and pedestrians in 3D. Current state-of-the-art 3D object detectors are all based on deep neural networks [1]}, [2]}, [3]}, [4]} and can yield up to 80 average precision on benchmark datasets[5]}, [6]}.
| [2] | [
[
355,
358
]
] | https://openalex.org/W2949708697 |
13e753bf-6efd-490e-bd83-44a8ee17cea2 | Autonomous driving promises to revolutionize how we transport goods, travel, and interact with our environment. To safely plan a route, a self-driving vehicle must first perceive and localize mobile traffic participants such as other vehicles and pedestrians in 3D. Current state-of-the-art 3D object detectors are all based on deep neural networks [1]}, [2]}, [3]}, [4]} and can yield up to 80 average precision on benchmark datasets[5]}, [6]}.
| [3] | [
[
361,
364
]
] | https://openalex.org/W2798965597 |
1cb61609-b009-471a-9cb7-dbc4e17a263a | Autonomous driving promises to revolutionize how we transport goods, travel, and interact with our environment. To safely plan a route, a self-driving vehicle must first perceive and localize mobile traffic participants such as other vehicles and pedestrians in 3D. Current state-of-the-art 3D object detectors are all based on deep neural networks [1]}, [2]}, [3]}, [4]} and can yield up to 80 average precision on benchmark datasets[5]}, [6]}.
| [4] | [
[
367,
370
]
] | https://openalex.org/W3034314779 |
c9dbd71c-3e35-4b8a-914c-38fa3b1d9f0e | Autonomous driving promises to revolutionize how we transport goods, travel, and interact with our environment. To safely plan a route, a self-driving vehicle must first perceive and localize mobile traffic participants such as other vehicles and pedestrians in 3D. Current state-of-the-art 3D object detectors are all based on deep neural networks [1]}, [2]}, [3]}, [4]} and can yield up to 80 average precision on benchmark datasets[5]}, [6]}.
| [5] | [
[
434,
437
]
] | https://openalex.org/W2150066425 |
54af1c44-21ef-49c9-b6a0-7d3d40c14b85 | Autonomous driving promises to revolutionize how we transport goods, travel, and interact with our environment. To safely plan a route, a self-driving vehicle must first perceive and localize mobile traffic participants such as other vehicles and pedestrians in 3D. Current state-of-the-art 3D object detectors are all based on deep neural networks [1]}, [2]}, [3]}, [4]} and can yield up to 80 average precision on benchmark datasets[5]}, [6]}.
| [6] | [
[
440,
443
]
] | https://openalex.org/W2115579991 |
685c12d1-6697-4c1b-b56e-7e804f3bdf76 | However, as with all deep learning approaches, these techniques have an insatiable need for labeled-data.
Specifically, to train a 3D object detector that takes LiDAR scans as input, one typically needs to first come up with a list of objects of interest and annotate each of them with tight bounding boxes in the 3D point cloud space. Such a data annotation process is laborious and costly, but worst of all, the resulting detectors only achieve high accuracy when the training and test data distributions match [1]}. In other words, their accuracy deteriorates over time and space, as looks and shapes of cars, vegetation, and background objects change.
To guarantee good performance, one has to collect labeled training data for specific geo-fenced areas and re-label data constantly, greatly limiting the applicability and development of self-driving vehicles.
| [1] | [
[
513,
516
]
] | https://openalex.org/W3034975685 |
503ae5c1-4aae-4695-a00d-abd5aa0f7e36 | Concretely, whenever we discover multiple traversals of one route, we calculate a simple ephemerality statistic [1]} for each LiDAR point, which characterizes the change of its local neighborhood across traversals.
We cluster LiDAR points according to their coordinates and ephemerality statistics. Resulting clusters with high ephemerality statistics, and located on the ground, are considered as mobile objects and are further fitted with upright bounding boxes.
| [1] | [
[
112,
115
]
] | https://openalex.org/W2963170338 |
cdc45969-6528-4173-9c5b-71770ce22c1f | Self-training (ST).
While this initial seed set of mobile objects is not exhaustive (e.g., parked cars may be missed) and somewhat noisy in shape, we demonstrate that an object detector trained upon them can already learn the underlying object patterns and is able to output more and higher-quality bounding boxes than the seed set.
This intriguing observation further opens up the possibility of using the detected object boxes as “better” pseudo-ground truths to train a new object detector.
We show that such a self-training cycle [1]}, [2]} enables the detector to improve itself over time; notably, it can even benefit from additional, unlabeled data that do not have multiple past traversals associated to them.
| [2] | [
[
540,
543
]
] | https://openalex.org/W3035160371 |