_id
stringlengths 36
36
| text
stringlengths 5
665k
| marker
stringlengths 3
6
| marker_offsets
sequence | label
stringlengths 28
32
|
---|---|---|---|---|
a381057b-9259-4cd1-b352-224bd76a4a64 | The following theorem was proved by Agol [1]} and
Wise [2]}, [3]} in the hyperbolic case. It was proved by
Liu [4]} and Przytycki-Wise [5]} for graph manifolds
with boundary and it was proved by Przytycki-Wise [6]} for
manifolds with a non-trivial Jaco-Shalen-Johannson decomposition and at least one
hyperbolic piece in the JSJ decomposition.
| [3] | [
[
61,
64
]
] | https://openalex.org/W3080612394 |
587be717-9224-49ab-8539-b9c25832bd4d | By the Sphere Theorem [1]}, an irreducible 3-manifold is
aspherical, i.e., all its higher homotopy groups vanish, if and only if it is a
3-disk or has infinite fundamental group. If \(M\) and \(N\) are two aspherical closed
3-manifolds, then they are homeomorphic if and only if their fundamental groups are
isomorphic. Actually, every isomorphism between their fundamental groups is induced by a
homeomorphism. More generally, every 3-manifold \(N\) with torsionfree fundamental group
group is topologically rigid in the sense that any homotopy equivalence of closed
3-manifolds with \(N\) as target is homotopic to a homeomorphism. This follows from
results of Waldhausen, see Hempel [1]} and
Turaev [3]}, as explained for instance [4]}.
| [1] | [
[
22,
25
],
[
690,
693
]
] | https://openalex.org/W4238266252 |
1e522c5f-462d-46b5-9016-5c90478ebdb3 | The fundamental group of a closed manifold is finitely presented. Fix a natural number
\(d \ge 4\) . Then a group \(G\) is finitely presented if and only if it occurs as fundamental
group of a closed orientable \(d\) -dimensional manifold. This is not true in dimension 3.
A detailed exposition about the problem, which finitely presented groups occur as
fundamental groups of closed 3-manifolds, can be found
in [1]}. For us it will be important that the
fundamental group of any 3-manifold is residually finite, This follows from [2]}
and the proof of the Geometrization Conjecture. More information about fundamental groups of
3-manifolds can be found for instance in [1]}.
| [1] | [
[
414,
417
],
[
672,
675
]
] | https://openalex.org/W1956890014 |
c2adccc7-e3bb-4608-b3de-91e206db7880 | Let \(M\) be a compact oriented 3-manifold. Recall the definition
in [1]} of the Thurston norm \(x_M(\phi )\) of a 3-manifold \(M\)
and an element \(\phi \in H^1(M;{\mathbb {Z}})=\operatorname{Hom}(\pi _1(M),{\mathbb {Z}})\) :
\(x(\phi )–:=\min \lbrace \chi _-(F)\, | \, F \subset M \textup { properly embedded surface dualto }\phi \rbrace ,\)
| [1] | [
[
70,
73
]
] | https://openalex.org/W122881644 |
32ad9e2a-6075-4b90-b5f0-678387026b5b | Thurston [1]} showed that this defines a seminorm on
\(H^1(M;\mathbb {Z})\) which can be extended to a seminorm on \(H^1(M;\mathbb {R} )\) which we
also denote by \(x_M\) . In particular we get for \(r \in {\mathbb {R}}\) and \(\phi \in H^1(M;{\mathbb {R}})\)
\(x_M(r \cdot \phi )& = &|r| \cdot x_M(\phi ).\)
| [1] | [
[
9,
12
]
] | https://openalex.org/W122881644 |
750f1f73-f4f8-40c6-9719-047ca607ce77 | If \(p \colon \widetilde{M} \rightarrow M\) is a finite covering with \(n\) sheets, then
Gabai [1]} showed that
\(x_{\widetilde{M}}(p^*\phi )& = &n \cdot x_M(\phi ).\)
| [1] | [
[
97,
100
]
] | https://openalex.org/W1560527130 |
5bc115ef-6925-49a1-b72e-df616ccd4106 | If \(F \rightarrow M \xrightarrow{} S^1\) is a fiber bundle for a 3-manifold \(M\) and compact
surface \(F\) , and \(\phi \in H^1(M;{\mathbb {Z}})\) is given by
\(H_1(p) \colon H_1(M) \rightarrow H_1(S^1)={\mathbb {Z}}\) , then by [1]} we
have
\(x_M(\phi ) & = &{\left\lbrace \begin{array}{ll}- \chi (F), & \text{if} \;\chi (F) \le 0;\\0, & \text{if} \;\chi (F) \ge 0.\end{array}\right.}\)
| [1] | [
[
234,
237
]
] | https://openalex.org/W122881644 |
ccc790ab-de69-44fa-b74b-85020a6594b5 | Thurston [1]}
has shown that \(T(M)^*\) is an integral polytope, i.e, the convex hull of finitely many
points in the integral lattice \(H_1(M;{\mathbb {Z}})/\mbox{torsion} \subseteq H_1(M;{\mathbb {R}})\) .
| [1] | [
[
9,
12
]
] | https://openalex.org/W122881644 |
1bd55364-3241-4748-93e9-0d41a58403b8 | A marking for a polytope is a (possibly empty) subset of the set of its
vertices. We conclude from Thurston [1]} that we can equip
\(T(M)^*\) with a marking such that \(\phi \in H^1(M;{\mathbb {R}})\) is fibered if and only if it pairs
maximally with a marked vertex, i.e., there exists a marked vertex \(v\) of \(T(M)^*\) , such
that \(\phi (v) >\phi (w)\) for any vertex \(w\ne v\) .
| [1] | [
[
108,
111
]
] | https://openalex.org/W122881644 |
87f14232-ecdd-4835-8a91-f3fa7021248f | For some information about the proof and in particular of references in the literature we
refer to [1]} except for
assertion (REF ) which is due to
Jaikin-Zapirain and Lopez-Alvarez [2]}. A group is
called locally indicable if every non-trivial finitely generated subgroup admits an epimorphism
onto \({\mathbb {Z}}\) . Examples are torsionfree one-relator groups.
| [2] | [
[
182,
185
]
] | https://openalex.org/W3105807034 |
ce6a6f60-9b6e-4a56-a8f1-fef54d3a1312 | There is a program of Linnell [1]} to prove the Atyiah Conjecture which is
discussed in details for instance in [2]} and [3]}. This shows that one has at least some
ideas why the Atyiah Conjecture is true and that the Atiyah
Conjecture is related to some deep ring theory and to
algebraic \(K\) -theory, notably to projective class groups. This connection to ring theory
has been explained and exploited for instance in [4]}, [5]},
where the division closure is replaced by the \(\ast \) -regular closure.
| [4] | [
[
420,
423
]
] | https://openalex.org/W2922526935 |
6f39928a-8a0a-44d3-9703-153fe86eefb0 | There is a program of Linnell [1]} to prove the Atyiah Conjecture which is
discussed in details for instance in [2]} and [3]}. This shows that one has at least some
ideas why the Atyiah Conjecture is true and that the Atiyah
Conjecture is related to some deep ring theory and to
algebraic \(K\) -theory, notably to projective class groups. This connection to ring theory
has been explained and exploited for instance in [4]}, [5]},
where the division closure is replaced by the \(\ast \) -regular closure.
| [5] | [
[
426,
429
]
] | https://openalex.org/W3105807034 |
6f3afbfc-b3de-41a3-a6c9-296861418d36 | The class of sofic groups is very large. It is closed under direct and free products,
taking subgroups, taking inverse and direct limits over directed index sets, and is
closed under extensions with amenable groups as quotients and a sofic group as kernel.
In particular it contains all residually amenable groups and fundamental groups of
3-manifolds. One expects that there exists non-sofic groups but no example is known.
More information about sofic groups can be found for instance in [1]}
and [2]}.
| [2] | [
[
499,
502
]
] | https://openalex.org/W2153827714 |
5870d41a-7f0f-4e88-88f5-b21e7148bcf6 | Remark 3.22
The conjectures above imply a positive answer
to [1]} and [2]}. They also would settle [3]} and [4]}. One may wonder whether it is related to
the Volume Conjecture due to Kashaev [5]} and H. and
J. Murakami [6]}.
| [6] | [
[
220,
223
]
] | https://openalex.org/W2023409861 |
81b5e8ee-528b-43b0-ab4d-6b34bed4a8c0 | The proof of the following result can be found in [1]}.
It reduces in the weakly acyclic case
Conjecture REF to
Conjecture REF .
| [1] | [
[
50,
53
]
] | https://openalex.org/W2557229284 |
10374974-cf0b-4fde-ad4b-978fd1e92dae | It is conceivable that Theorem REF remains
true if we drop the assumption that \(b_p^{(2)}(\overline{M};{\mathcal {N}}(G))\) vanishes for all
\(p \ge 0\) , but our present proof works only under this assumption,
see [1]}.
| [1] | [
[
218,
221
]
] | https://openalex.org/W2557229284 |
59ce72cd-9857-4534-adad-155820dd8081 | More information about the conjectures above can be found in [1]}.
| [1] | [
[
61,
64
]
] | https://openalex.org/W2557229284 |
db7a7163-c10b-40a4-a69f-edd42dcd2ca3 | Conjecture REF is
attributed to Bergeron-Venkatesh [1]}. They allow only locally
symmetric spaces for \(M\) . They also consider the case of twisting with a
finite-dimensional integral representation. Further discussions about this conjecture can
be found for instance
in [2]}, [3]},
and [4]}.
| [1] | [
[
52,
55
]
] | https://openalex.org/W2963260686 |
1d34ca4b-d33c-4234-8aac-653aec519c19 | Conjecture REF is
attributed to Bergeron-Venkatesh [1]}. They allow only locally
symmetric spaces for \(M\) . They also consider the case of twisting with a
finite-dimensional integral representation. Further discussions about this conjecture can
be found for instance
in [2]}, [3]},
and [4]}.
| [2] | [
[
273,
276
]
] | https://openalex.org/W1956890014 |
a6fae5ed-6757-4798-9eea-c76bb8bfb1e9 | Conjecture REF is
attributed to Bergeron-Venkatesh [1]}. They allow only locally
symmetric spaces for \(M\) . They also consider the case of twisting with a
finite-dimensional integral representation. Further discussions about this conjecture can
be found for instance
in [2]}, [3]},
and [4]}.
| [3] | [
[
279,
282
]
] | https://openalex.org/W3100738199 |
9c3e93e9-1bb1-485d-b8bd-0e50ff61508b | Conjecture REF is
attributed to Bergeron-Venkatesh [1]}. They allow only locally
symmetric spaces for \(M\) . They also consider the case of twisting with a
finite-dimensional integral representation. Further discussions about this conjecture can
be found for instance
in [2]}, [3]},
and [4]}.
| [4] | [
[
289,
292
]
] | https://openalex.org/W3101583560 |
f309edfa-e2ab-48b0-9062-8d30537770e6 | The relation between
Conjecture REF
and Conjecture REF is
discussed in [1]}.
| [1] | [
[
73,
76
]
] | https://openalex.org/W2557229284 |
9ddc271a-2143-4d91-8d02-09aacedf22ef | The chain complex version
Conjecture REF is stated
in [1]}. We at least explain what it says for
1-dimensional chain complexes, or, equivalently, matrices. Here it is important to work
over the integral group ring.
| [1] | [
[
55,
58
]
] | https://openalex.org/W2557229284 |
0b67926f-8992-4d2d-8150-bd6dfe404a7c | Notice that \(||y||_1\) defines only a seminorm on \(H_p^{\operatorname{sing}}(X;{\mathbb {R}})\) , it is possible that
\(||y||_1 = 0\) but \(y \ne 0\) . The next definition is taken
from [1]}.
| [1] | [
[
191,
194
]
] | https://openalex.org/W1570262040 |
df1a3754-3fb5-4f0c-99bc-34fd608b5d21 | Bergeron-Sengun-Venkatesh [1]} consider the equality
above for arithmetic hyperbolic 3-manifolds and relate it to a conjecture about classes
in the second integral homology.
| [1] | [
[
26,
29
]
] | https://openalex.org/W3100738199 |
f91cff73-a969-403d-a2ff-9eec9569e33d | Define the positive real number \(v_3\) to be the supremum of the volumes of all
\(n\) -dimensional geodesic simplices, i.e., the convex hull of \((n+1)\) points in general
position, in the \(n\) -dimensional hyperbolic space \({\mathbb {H}}^3\) . If \(M\) is an admissible
3-manifold, then one gets from [1]}, [2]}, and [3]},
see [4]}
\(||M|| = \frac{- 6\pi }{v_3} \cdot \rho ^{(2)}(\widetilde{M}).\)
| [2] | [
[
314,
317
]
] | https://openalex.org/W2019578387 |
144e1559-62b5-4af9-837b-b9b70a0bf0c7 | There are variants of the simplicial volume, namely, the notion of the integral
foliated simplicial volume, see [1]}, [2]},
or [3]}, and of the stable
integral simplicial volume, see [3]}.
The integral foliated simplicial volume gives an upper bound on the torsion growth for an
oriented closed manifold, i.e, an upper bound on
\(\limsup _{i \rightarrow \infty } \;\frac{\ln \big (\bigl |\operatorname{tors}(H_n(M[i];{\mathbb {Z}}))\bigr |\bigr )}{[G:G_i]}\)
in the situation of
Conjecture REF ,
see [3]}. There are the open
questions whether for an aspherical oriented closed manifold the simplicial volume and the
integral foliated simplicial volume agree and whether for an aspherical oriented closed
manifold with residually finite fundamental group the integral foliated simplicial volume
and the stable integral simplicial volume agree, see [3]}. The stable integral simplicial
volume and the simplicial volume agree for aspherical oriented closed 3-manifolds,
see [7]}.
| [1] | [
[
112,
115
]
] | https://openalex.org/W1483679902 |
f6504ac4-067d-4234-a224-4fb888ac676e | If \(\eta ^{(2)}_{V_u,B_u} (C_*)\) has a gap at the spectrum at zero, then obviously the
\(L^2\) -torsion of \(\eta ^{(2)}_{V_u,B_u} (C_*)\) is well-defined. Moreover the function
sending \(v \in R(G,\operatorname{GL}_n({\mathbb {C}}))\) to the \(L^2\) -torsion of \(\eta ^{(2)}_{V_u,B_u} (C_*)\) is
well-defined and continuous in neighborhood of \(u\) . This follows form the continuity of
the Fulgede-Kadison determinant for invertible matrices over the group von Neumann
algebra with respect to the norm topology,
see [1]}, [2]}, or, [3]}. This
is studied in more detail for a hyperbolic 3-manifold \(M\) with empty or incompressible
torus boundary and the canonical holonomy representation
\(h \colon \pi _1(M) \rightarrow \operatorname{SL}_2({\mathbb {C}})\) by Bénard-Raimbault [4]}.
They actually show that this function is real analytic near \(h\) .
| [2] | [
[
531,
534
]
] | https://openalex.org/W2323746695 |
b2dd72d2-40d6-4e1c-bc27-0e48ca6bede1 | Remark 7.2 (Assumption REF )
The reader does not need to know what the \(K\) -theoretic Farrell-Jones Conjecture for
\({\mathbb {Z}}G\) is, it can be used as a black box. The reader should have in mind that it is
known for a large class of groups, e.g., hyperbolic groups, CAT(0)-groups, solvable
groups, lattices in almost connected Lie groups, fundamental groups of 3-manifolds and
passes to subgroups, finite direct products, free products, and colimits of directed
systems of groups (with arbitrary structure maps). For more information we refer for
instance to [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}.
| [4] | [
[
586,
589
]
] | https://openalex.org/W2094789018 |
2818f70b-e6da-4105-a71e-74cbbafd66cf | Notice that for polytopes \(P_0\) , \(P_1\) and \(Q\) in a finite-dimensional real vector space
we have the implication \(P_0 + Q = P_1 + Q \Longrightarrow P_0 = P_1\) ,
see [1]}. Hence elements in \(\mathcal {P}_{{\mathbb {Z}}}(H)\) are given by
formal differences \([P] - [Q]\) for integral polytopes \(P\) and \(Q\) in \({\mathbb {R}}\otimes _{{\mathbb {Z}}} H\)
and we have \([P_0] - [Q_0] = [P_1] - [Q_1] \Longleftrightarrow P_0 + Q_1 = P_1 + Q_0\) .
| [1] | [
[
176,
179
]
] | https://openalex.org/W4245290017 |
1d2fe87e-a508-4bab-a98c-098fc16ecc9c | It makes no difference whether \(\widehat{K} \cong \widehat{G}\) means abstract isomorphism
of groups or topological group isomorphism, see Nikolov and
Segal [1]}.
| [1] | [
[
159,
162
]
] | https://openalex.org/W1997331766 |
ba73689d-76ea-4974-949a-0de902e8a57e | To the author's knowledge profinite rigidity of fundamental groups of hyperbolic closed
3-manifolds, even among themselves, is an open question. Examples of hyperbolic closed
3-manifolds, whose fundamental groups are profinite rigid in the absolute sense, are
constructed in [1]}. A weaker but still open
problem is the following which is equivalent to [2]}.
| [1] | [
[
275,
278
]
] | https://openalex.org/W3102237430 |
61bec780-e31a-4747-b586-f782bf2e72cb | Our model may be seen as modelling another version of `clumping' as discussed by [1]} which, like the models they investigate, allows for more variability in outcomes than standard homogeneously mixing models.
The model is also closely related to the `epidemic among giants' of [2]} and the discussion at end of section 4.3 of that paper; but that model considers only Reed-Frost epidemic dynamics (see Section REF , 3rd paragraph) and here we provide much more detailed and complete results. The stochastic multi-type model with weaker transmission between types than within types goes back at least to [3]}, who cites related models in publications from the late 1950s. The idea of a population of communities with relatively strong within-community links and weaker between-community links is similar in spirit to some motivations for the Stochastic Block Model or planted partition model (see e.g. [4]} in the probabilistic literature or [5]} in the networks and community detection literature); though in that context the strength of between- and within-community connections are usually, but not always, assumed to scale with population size in the same way as each other (as is the case in the usual multi-type epidemic model).
| [5] | [
[
942,
945
]
] | https://openalex.org/W2559839022 |
d4cef450-b432-4ae9-be44-20d6deae2312 | Thanks to the above references on the Cauchy problem for (REF ), solving (REF ) will not be an issue here because we know from [1]}, [2]}, [3]}, [4]} that (REF ) is globally well-posed in \(H^s(\) , \(s\ge 1\) (and even locally well-posed for \(s\ge \frac{1}{2}\) ). In fact, for \(M<\infty \) an easier argument can be applied by using that for frequencies \(\le M\) the equation (REF ) becomes an ODE (with a Lipschitz vector field) for which global well-posedness holds thanks to the \(L^2\) conservation law, while for frequencies \(>M\) the equation (REF ) becomes a linear equation. However, if one wishes to have \(H^s(\) bounds uniformly with respect to \(M\) , the analysis of [1]}, [2]}, [3]}, [4]} cannot be avoided. Let us denote by \(\Phi _M(t)\) the flow of (REF ). For \(M=\infty \) we simply write \(\Phi _{\infty }(t)=\Phi (t)\) . We shall note specify the dependence on \(p\) in these notations.
| [3] | [
[
139,
142
],
[
704,
707
]
] | https://openalex.org/W2075677712 |
4ec1e11d-e13d-475b-89e2-193034831fd4 | The proof of Theorem REF relies on a key improvement on our previous papers [1]} and [2]},
together with bounds resulting from dispersive estimates such as Bourgain's \(L^6\) Strichartz inequality for (linear) KdV.
Let us now briefly explain how we improve: following our argument in [2]} we would get on the r.h.s. of (REF ) a power larger than two of the \(H^{(k-\frac{1}{2})^-}\) norm of the initial datum, while in (REF )
we get a power less than two. This is the key to proving, beyond quasi-invariance, \(L^p\) regularity for the density of the transported Gaussian measure (see Theorem REF below).
Similarly, one should also compare (REF ) with the estimate obtained in [1]} which allows to get on the r.h.s. of (REF ) a power of the \(H^{k}\) norm of the initial datum
that is worse than the one that we get in this paper. The improvement on the growth
of the Sobolev norms that we get in Theorem REF below relies in a fundamental way on improving this power. For an overview of the results in [1]} and [2]} we refer to [7]}. The aforementioned improvements on both exponents come
from a refinement of the energies, compared with the ones used in the previous papers. In particular, once we
compute its time derivative along solutions we get a multilinear expression of densities in which for every single term at least five factors involve one nontrivial derivative. This key property
of distribution of derivatives on several factors was out of reach with our previous constructions of modified energies. For more details see Section . Then the dispersive effect, through the \(L^6\) Strichartz bound, allows us to transform the aforementioned distribution of derivative in terms of powers of Sobolev norms of the initial datum, as discussed above.
| [1] | [
[
77,
80
],
[
682,
685
],
[
1009,
1012
]
] | https://openalex.org/W3102750408 |
1d6059da-a2d5-42df-a744-819ee6f33811 | Finally, we should point out that, in the context of gKdV, modified energies already appeared, at the level of \(k=2\) , in [1]}, where they are used in connection with \(N\) -soliton asymptotics.
| [1] | [
[
124,
127
]
] | https://openalex.org/W2043098860 |
2f40ae57-f577-4627-87ba-1588d2e9772f | The line of research leading to results as the one in Theorem REF was initiated in [1]}.
We improve results obtained in [2]} (where the growth had exponent \(2k\) ) and [3]} (where the growth was lowered to \(k-1+\varepsilon \) .)
For details on how (REF ), (REF ) in Theorem REF imply Theorem REF we refer to [4]}.
| [4] | [
[
313,
316
]
] | https://openalex.org/W3102750408 |
379d6f7a-052c-4565-9374-8bc5af5774da | It would be very interesting to construct solutions of the defocusing gKdV such that the \(H^k\) , \(k>1\) norms do not remain bounded in time. Unfortunately such results are rare in the context of canonical dispersive models
(with the notable exception of [1]}).
| [1] | [
[
258,
261
]
] | https://openalex.org/W2963594978 |
83adb9ec-7252-4f9f-864e-be907267ed63 | Theorem REF fits in the line of research aiming to describe macroscopical (statistical dynamics) properties of Hamiltonian PDE's.
The earliest references we are aware of is [1]}, followed by [2]}, [3]}, [4]}, [5]}. Inspired by the work on invariant measures for the Benjamin-Ono equation [6]}, [7]}, [8]}, [9]}, quasi-invariance of Gaussian measure for several dispersive models was obtained in recent years, see [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}, [20]}, [21]}, [22]}, [23]}.
The method to identify the densities in Theorem REF is inspired by recent works [10]}, [25]}. In Theorem REF , we provide much more information on the densities when compared to [21]}, which used modified energies on the nonlinear Schrödinger equation. It should be underlined that a key novelty in the proof of Theorem REF with respect to [25]} and [21]} is that we crucially use dispersive estimates in the analysis.
| [4] | [
[
204,
207
]
] | https://openalex.org/W1990089063 |
bdc7bcb7-8991-4797-bcf2-402309b0bd50 | Theorem REF fits in the line of research aiming to describe macroscopical (statistical dynamics) properties of Hamiltonian PDE's.
The earliest references we are aware of is [1]}, followed by [2]}, [3]}, [4]}, [5]}. Inspired by the work on invariant measures for the Benjamin-Ono equation [6]}, [7]}, [8]}, [9]}, quasi-invariance of Gaussian measure for several dispersive models was obtained in recent years, see [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}, [20]}, [21]}, [22]}, [23]}.
The method to identify the densities in Theorem REF is inspired by recent works [10]}, [25]}. In Theorem REF , we provide much more information on the densities when compared to [21]}, which used modified energies on the nonlinear Schrödinger equation. It should be underlined that a key novelty in the proof of Theorem REF with respect to [25]} and [21]} is that we crucially use dispersive estimates in the analysis.
| [9] | [
[
307,
310
]
] | https://openalex.org/W2255092212 |
41f0118c-7ace-4aac-8490-337dca5fa82f | Theorem REF fits in the line of research aiming to describe macroscopical (statistical dynamics) properties of Hamiltonian PDE's.
The earliest references we are aware of is [1]}, followed by [2]}, [3]}, [4]}, [5]}. Inspired by the work on invariant measures for the Benjamin-Ono equation [6]}, [7]}, [8]}, [9]}, quasi-invariance of Gaussian measure for several dispersive models was obtained in recent years, see [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}, [20]}, [21]}, [22]}, [23]}.
The method to identify the densities in Theorem REF is inspired by recent works [10]}, [25]}. In Theorem REF , we provide much more information on the densities when compared to [21]}, which used modified energies on the nonlinear Schrödinger equation. It should be underlined that a key novelty in the proof of Theorem REF with respect to [25]} and [21]} is that we crucially use dispersive estimates in the analysis.
| [15] | [
[
449,
453
]
] | https://openalex.org/W3183210163 |
c8f741ef-16b2-4e0c-a493-20654b4812ca | Theorem REF fits in the line of research aiming to describe macroscopical (statistical dynamics) properties of Hamiltonian PDE's.
The earliest references we are aware of is [1]}, followed by [2]}, [3]}, [4]}, [5]}. Inspired by the work on invariant measures for the Benjamin-Ono equation [6]}, [7]}, [8]}, [9]}, quasi-invariance of Gaussian measure for several dispersive models was obtained in recent years, see [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}, [20]}, [21]}, [22]}, [23]}.
The method to identify the densities in Theorem REF is inspired by recent works [10]}, [25]}. In Theorem REF , we provide much more information on the densities when compared to [21]}, which used modified energies on the nonlinear Schrödinger equation. It should be underlined that a key novelty in the proof of Theorem REF with respect to [25]} and [21]} is that we crucially use dispersive estimates in the analysis.
| [17] | [
[
463,
467
]
] | https://openalex.org/W2964230361 |
9f93712b-7c09-40a0-ae12-4e6cc5d102c9 | The results of this paper and previous works of the second and third authors [1]}, [2]}, [3]}, [4]} can be summarized as follows.
In the case of integrable models, exact conservation laws for all Sobolev regularities imply existence of invariant measures; the modified energies we construct in the context of non integrable models imply existence of quasi-invariant measures. Concerning the deterministic behavior of the solutions, exact conservation laws imply uniform bounds on Sobolev norms of solutions while the modified energies we construct imply polynomial bounds on Sobolev norms of solutions.
| [1] | [
[
77,
80
]
] | https://openalex.org/W3102008150 |
d207e293-9735-44da-863b-c95b2c76ad6e | The results of this paper and previous works of the second and third authors [1]}, [2]}, [3]}, [4]} can be summarized as follows.
In the case of integrable models, exact conservation laws for all Sobolev regularities imply existence of invariant measures; the modified energies we construct in the context of non integrable models imply existence of quasi-invariant measures. Concerning the deterministic behavior of the solutions, exact conservation laws imply uniform bounds on Sobolev norms of solutions while the modified energies we construct imply polynomial bounds on Sobolev norms of solutions.
| [4] | [
[
95,
98
]
] | https://openalex.org/W2255092212 |
965e27be-d17c-4b96-bc62-b367f8857557 | Acknowledgement. The third author is grateful to Yvan Martel for pointing out the reference [1]} and for interesting discussions about gKdV.
| [1] | [
[
92,
95
]
] | https://openalex.org/W2043098860 |
70036992-5101-4e32-958c-b1b86bf33c20 | The aim of this section is to collect useful results on the flows associated with
(REF ) and (REF ). Firstly, global existence and uniqueness of solutions
for the truncated flows follow by a straightforward O.D.E. argument, along with conservation of \(L^2\) mass. From now on we assume without further comment
existence and uniqueness of global flows \(\Phi _M(t)\) for \(M\in {\mathbb {N}}\) .
The Cauchy problem associated with (REF ) is much more involved. In particular we quote [1]}, [2]}, [3]}, [4]} whose analysis implies that for every \(s\ge 1\) there exists a unique global solution associated with the initial datum \(\varphi \in H^s\) ; moreover we have continuous dependence on the initial datum. The analysis in [3]} allows to treat local Cauchy theory down to low regularity \(H^\frac{1}{2}\) .
| [3] | [
[
498,
501
],
[
730,
733
]
] | https://openalex.org/W2075677712 |
be585cbe-58e8-4d88-8f0e-b41f9f08174d | It will later be important to have a series of uniform bounds with respect to \(M\) (in particular suitable \(L^6\) bounds), as well as some delicate convergences in suitable topologies of the finite dimensional flows
to the infinite dimensional one. To the best of our knowledge, those properties do not follow in a straightforward way from the aforementioned works and their proofs require some further arguments. Indeed in our analysis we shall borrow many ideas from references above (in particular [1]}), that in conjunction with new ingredients will imply
several properties for the flows \(\Phi _M(t)\) with \(M\in {\mathbb {N}}\cup \lbrace \infty \rbrace \) .
| [1] | [
[
505,
508
]
] | https://openalex.org/W2075677712 |
e957e8d7-dd78-4142-b6a4-241aca205af3 | We now present the gauge transform following [1]}.
Set \(u_M(t,x)=\pi _M (\Phi _M(t)\varphi ) \)
and introduce a change of unknown,
\(v_M(t,x)=u_M\big (t,x+(p+1)\int _{0}^t \int _ṵ_M^p dxdt\big )\,,\)
| [1] | [
[
45,
48
]
] | https://openalex.org/W2075677712 |
d6f847df-917a-4a7c-b430-28f306d4b87e | Denote by \(S(t)\) the linear group associated with linear KdV equation, namely \(S(t)=e^{t\partial _x^3}\) . Then () rewrites, in integral form,
\(v_M(t)=S(t)(\pi _M \varphi )+(p+1)\int _{0}^t S(t-\tau ) \pi _M\Pi ( \partial _x v_M(\tau ) \Pi v_M^p(\tau ))d\tau \,.\)
The analysis of [1]}, pages 183-186 and pages 197-200 may be used to obtain that for \(s\ge 1\) ,
\(\Big \Vert \int _{0}^t S(t-\tau ) \pi _M\Pi ( \partial _x w(\tau ) \Pi w^p(\tau ))d\tau \Big \Vert _{Y^s_T}\le CT^{\kappa }\Vert w\Vert _{Y^1_T}^p \Vert w\Vert _{Y^s_T},\)
where \(\kappa >0\) and \(T\in (0,1)\) . We refer to the appendix for the proof of (REF ). Notice that (REF ) is a slightly modified version compared with the one available in the literature: we gain a power of \(T\) , which is very important later. By a similar argument one proves a multi-linear estimate for \(s\ge 1\) :
\(\Big \Vert \int _{0}^t S(t-\tau ) \pi _M\Pi ( \partial _x w_{p+1}(\tau )\Pi (w_1(\tau )\times \dots \times w_p(\tau )))d\tau \Big \Vert _{Y^s_T}\le CT^{\kappa }\sum _{i=1}^{p+1}\big ( \Vert w_{i}\Vert _{Y^s_T} \prod _{\begin{array}{c}j=1,\dots , p+1\\j\ne i\end{array}} \Vert w_{j}\Vert _{Y^1_T}\big )\)
and existence and uniqueness follows by a classical fixed point argument in the space \(Y_T^s\) .
Applying (REF ) with \(s=1\) , \( w=v_N\) and recalling (REF ), we obtain that \(\Vert v_M\Vert _{Y^1_T}\le C\Vert \varphi \Vert _{H^1}\) provided \(T\) is small enough depending only on a bound for \(\varphi \) in \(H^1\) .
Applying once again (REF ), we get
\(\Vert v_M\Vert _{Y^s_T}\le C\Vert \varphi \Vert _{H^s}+CT^\kappa (C\Vert \varphi \Vert _{H^1})^p \Vert v_M\Vert _{Y^s_T}\)
which implies
\(\Vert v_M\Vert _{Y^s_T}\le C\Vert \varphi \Vert _{H^s}\)
by possibly taking \(T\) smaller but still depending only on an \(H^1\) bound for \(\varphi \) . By the embedding \(Y_T^s\subset L^\infty ([0,T];H^s)\) , (REF ) follows and we also get
\(\Vert v_M\Vert _{X^{s,\frac{1}{2}}_T}\le C\Vert \varphi \Vert _{H^s}.\)
Now we invoke the Strichartz estimate \((8.37)\) of [2]} :
\(\Vert S(t)g\Vert _{L^6((0,T); L^6)}\le C \Vert g\Vert _{H^{\sigma }},\quad \sigma >0\)
which together with the transfer principle from [3]} yields
\(\Vert w\Vert _{L^6((0,T); L^{6}) }\le C \Vert w\Vert _{ X^{\sigma ,b}_T}, \quad b>\frac{1}{2}.\)
Next let \(w\in X^{\frac{1}{3}, \frac{1}{3}}_T\) , then we may assume without loss of generality that \(w\) is a global space time function such that \( \Vert w\Vert _{X^{\frac{1}{3}, \frac{1}{3}}}\le 2 \Vert w\Vert _{X^{\frac{1}{3}, \frac{1}{3}}_T}\) . By Sobolev embedding \(H^\frac{1}{3}\subset L^6\) and \(S(t)\) being an isometry on \(H^s\) ,
\(\Vert w\Vert _{L^6({\mathbb {R}};L^6()}\le C \Vert S(-t) w(t,.)\Vert _{L^6({\mathbb {R}};H^\frac{1}{3}()}\le C \Vert \langle D \rangle _x^\frac{1}{3} (S(-t) w(t,.))\Vert _{L^6({\mathbb {R}};L^2()}\)
and by Minkowski inequality and Sobolev embedding (that we now exploit w.r.t. the time variable)
\(\dots \le C \Vert \langle D \rangle _x^{\frac{1}{3}} (S(-t) w(t,.))\Vert _{L^2( L^6({\mathbb {R}}))}\le C \Vert \langle D \rangle _x^{\frac{1}{3}} S(-t) w(t,.)\Vert _{L^2( H^\frac{1}{3} ({\mathbb {R}}))}\\=C \Vert \langle D \rangle _t^{\frac{1}{3}}\langle D \rangle _x^\frac{1}{3} (S(-t) w(t,.))\Vert _{L^2({\mathbb {R}}\times } =C\Vert w\Vert _{X^{\frac{1}{3},\frac{1}{3}}}\le 2C \Vert w\Vert _{X_T^{\frac{1}{3},\frac{1}{3}}}\)
so that \(\Vert w\Vert _{L^6((0,T); L^{6}) }\le C \Vert w\Vert _{X^{\frac{1}{3},\frac{1}{3}}_T}\) .
Interpolation with (REF ) yields
\(\forall \,\varepsilon >0\,,\quad \Vert w\Vert _{L^6((0,T); L^6)}\le C \Vert w\Vert _{X^{\varepsilon ,\frac{1}{2}}_T}\,.\)
By choosing \(w=v_M\) and recalling (REF ) where we replace \(s\) by \(s+\varepsilon \) ,
\(\Vert v_M\Vert _{L^6((0,T); W^{s,6})}\le C \Vert v_M\Vert _{X^{s+\varepsilon ,\frac{1}{2}}_T}\le C \Vert \varphi \Vert _{H^{s+\varepsilon }}, \quad \forall \varepsilon >0,\)
and we get (REF ).
The proof of (REF ) follows by (REF ) by considering the difference of two solutions.
Finally,
\(\pi _M\Pi (\partial _x v_M \Pi v_M^p))-\Pi (\partial _x v \Pi v^p )=\\\pi _M \Pi (\partial _x v_M\Pi (v_M^p-v^p))+ (\partial _x v_M-\partial _x v)\Pi v^p)-(1-\pi _M)\Pi ( \partial _x v \Pi v^p)\,,\)
where \(v_M, v\) are solutions to () and (). Therefore using (REF ), where we choose \(p\) factors \(w_i\) equal to either \(v_M, v\) and one factor equal to \(v-v_M\) , writing the fixed point equation solved by \(v-v_M\) , and recalling (REF ),
we get (see e.g. [4]} for details), with \(\mathcal {K}\) being a compact in \(H^s\) ,
\(\sup _{\varphi \in {\mathcal {K}}} \Vert \pi _M \Phi _M^{\mathcal {G}}(t)\varphi -\Phi ^{\mathcal {G}} (t)\varphi \Vert _{Y^s_T}\overset{M\rightarrow \infty }{\longrightarrow }0\,.\)
Therefore we get (REF ) by using the continuous embedding \(Y^s_T\subset L^\infty ([0,T]; H^s)\) .
| [1] | [
[
287,
290
]
] | https://openalex.org/W2075677712 |
03063a5d-5448-4585-870f-08072507445b | Denote by \(S(t)\) the linear group associated with linear KdV equation, namely \(S(t)=e^{t\partial _x^3}\) . Then () rewrites, in integral form,
\(v_M(t)=S(t)(\pi _M \varphi )+(p+1)\int _{0}^t S(t-\tau ) \pi _M\Pi ( \partial _x v_M(\tau ) \Pi v_M^p(\tau ))d\tau \,.\)
The analysis of [1]}, pages 183-186 and pages 197-200 may be used to obtain that for \(s\ge 1\) ,
\(\Big \Vert \int _{0}^t S(t-\tau ) \pi _M\Pi ( \partial _x w(\tau ) \Pi w^p(\tau ))d\tau \Big \Vert _{Y^s_T}\le CT^{\kappa }\Vert w\Vert _{Y^1_T}^p \Vert w\Vert _{Y^s_T},\)
where \(\kappa >0\) and \(T\in (0,1)\) . We refer to the appendix for the proof of (REF ). Notice that (REF ) is a slightly modified version compared with the one available in the literature: we gain a power of \(T\) , which is very important later. By a similar argument one proves a multi-linear estimate for \(s\ge 1\) :
\(\Big \Vert \int _{0}^t S(t-\tau ) \pi _M\Pi ( \partial _x w_{p+1}(\tau )\Pi (w_1(\tau )\times \dots \times w_p(\tau )))d\tau \Big \Vert _{Y^s_T}\le CT^{\kappa }\sum _{i=1}^{p+1}\big ( \Vert w_{i}\Vert _{Y^s_T} \prod _{\begin{array}{c}j=1,\dots , p+1\\j\ne i\end{array}} \Vert w_{j}\Vert _{Y^1_T}\big )\)
and existence and uniqueness follows by a classical fixed point argument in the space \(Y_T^s\) .
Applying (REF ) with \(s=1\) , \( w=v_N\) and recalling (REF ), we obtain that \(\Vert v_M\Vert _{Y^1_T}\le C\Vert \varphi \Vert _{H^1}\) provided \(T\) is small enough depending only on a bound for \(\varphi \) in \(H^1\) .
Applying once again (REF ), we get
\(\Vert v_M\Vert _{Y^s_T}\le C\Vert \varphi \Vert _{H^s}+CT^\kappa (C\Vert \varphi \Vert _{H^1})^p \Vert v_M\Vert _{Y^s_T}\)
which implies
\(\Vert v_M\Vert _{Y^s_T}\le C\Vert \varphi \Vert _{H^s}\)
by possibly taking \(T\) smaller but still depending only on an \(H^1\) bound for \(\varphi \) . By the embedding \(Y_T^s\subset L^\infty ([0,T];H^s)\) , (REF ) follows and we also get
\(\Vert v_M\Vert _{X^{s,\frac{1}{2}}_T}\le C\Vert \varphi \Vert _{H^s}.\)
Now we invoke the Strichartz estimate \((8.37)\) of [2]} :
\(\Vert S(t)g\Vert _{L^6((0,T); L^6)}\le C \Vert g\Vert _{H^{\sigma }},\quad \sigma >0\)
which together with the transfer principle from [3]} yields
\(\Vert w\Vert _{L^6((0,T); L^{6}) }\le C \Vert w\Vert _{ X^{\sigma ,b}_T}, \quad b>\frac{1}{2}.\)
Next let \(w\in X^{\frac{1}{3}, \frac{1}{3}}_T\) , then we may assume without loss of generality that \(w\) is a global space time function such that \( \Vert w\Vert _{X^{\frac{1}{3}, \frac{1}{3}}}\le 2 \Vert w\Vert _{X^{\frac{1}{3}, \frac{1}{3}}_T}\) . By Sobolev embedding \(H^\frac{1}{3}\subset L^6\) and \(S(t)\) being an isometry on \(H^s\) ,
\(\Vert w\Vert _{L^6({\mathbb {R}};L^6()}\le C \Vert S(-t) w(t,.)\Vert _{L^6({\mathbb {R}};H^\frac{1}{3}()}\le C \Vert \langle D \rangle _x^\frac{1}{3} (S(-t) w(t,.))\Vert _{L^6({\mathbb {R}};L^2()}\)
and by Minkowski inequality and Sobolev embedding (that we now exploit w.r.t. the time variable)
\(\dots \le C \Vert \langle D \rangle _x^{\frac{1}{3}} (S(-t) w(t,.))\Vert _{L^2( L^6({\mathbb {R}}))}\le C \Vert \langle D \rangle _x^{\frac{1}{3}} S(-t) w(t,.)\Vert _{L^2( H^\frac{1}{3} ({\mathbb {R}}))}\\=C \Vert \langle D \rangle _t^{\frac{1}{3}}\langle D \rangle _x^\frac{1}{3} (S(-t) w(t,.))\Vert _{L^2({\mathbb {R}}\times } =C\Vert w\Vert _{X^{\frac{1}{3},\frac{1}{3}}}\le 2C \Vert w\Vert _{X_T^{\frac{1}{3},\frac{1}{3}}}\)
so that \(\Vert w\Vert _{L^6((0,T); L^{6}) }\le C \Vert w\Vert _{X^{\frac{1}{3},\frac{1}{3}}_T}\) .
Interpolation with (REF ) yields
\(\forall \,\varepsilon >0\,,\quad \Vert w\Vert _{L^6((0,T); L^6)}\le C \Vert w\Vert _{X^{\varepsilon ,\frac{1}{2}}_T}\,.\)
By choosing \(w=v_M\) and recalling (REF ) where we replace \(s\) by \(s+\varepsilon \) ,
\(\Vert v_M\Vert _{L^6((0,T); W^{s,6})}\le C \Vert v_M\Vert _{X^{s+\varepsilon ,\frac{1}{2}}_T}\le C \Vert \varphi \Vert _{H^{s+\varepsilon }}, \quad \forall \varepsilon >0,\)
and we get (REF ).
The proof of (REF ) follows by (REF ) by considering the difference of two solutions.
Finally,
\(\pi _M\Pi (\partial _x v_M \Pi v_M^p))-\Pi (\partial _x v \Pi v^p )=\\\pi _M \Pi (\partial _x v_M\Pi (v_M^p-v^p))+ (\partial _x v_M-\partial _x v)\Pi v^p)-(1-\pi _M)\Pi ( \partial _x v \Pi v^p)\,,\)
where \(v_M, v\) are solutions to () and (). Therefore using (REF ), where we choose \(p\) factors \(w_i\) equal to either \(v_M, v\) and one factor equal to \(v-v_M\) , writing the fixed point equation solved by \(v-v_M\) , and recalling (REF ),
we get (see e.g. [4]} for details), with \(\mathcal {K}\) being a compact in \(H^s\) ,
\(\sup _{\varphi \in {\mathcal {K}}} \Vert \pi _M \Phi _M^{\mathcal {G}}(t)\varphi -\Phi ^{\mathcal {G}} (t)\varphi \Vert _{Y^s_T}\overset{M\rightarrow \infty }{\longrightarrow }0\,.\)
Therefore we get (REF ) by using the continuous embedding \(Y^s_T\subset L^\infty ([0,T]; H^s)\) .
| [3] | [
[
2198,
2201
]
] | https://openalex.org/W1964699420 |
930a7a23-15b2-4a9a-8fbd-b2d9edf9b147 | for \(T\le 1\) and \(C>0\) independent of \(M\) and \(\kappa \) . The arguments that we will perform below are standard.
Our only goal is to provide a complete argument for a reader unfamiliar with the \(X^{s,b}\) machinery, as well as proving how to gain the positive power of \(T\)
at the r.h.s. (which was of importance for our analysis in Section ). At the best of our knowledge the estimate above written in this form is not readily available in the literature, even if we closely follow [1]}.
As \(\pi _M\) is bounded on \(Y^s\) , it suffices to prove
\(\Big \Vert \int _{0}^t S(t-\tau ) \Pi (\partial _x v(\tau ) \Pi v^p(\tau ))d\tau \Big \Vert _{Y^s_T}\le CT^{\kappa }\Vert v\Vert _{Y^1_T}^p \Vert v\Vert _{Y^s_T},\quad \kappa >0\,.\)
| [1] | [
[
498,
501
]
] | https://openalex.org/W2075677712 |
f260d0b6-2e4e-4678-a661-7d55d0b5fd9b | where \(\psi \in C^\infty _0({\mathbb {R}})\) is such that \(\psi \equiv 1\) on \([-1,1]\) .
Using [1]}, we obtain that
\(\Big \Vert \psi (t) \int _{0}^t S(t-\tau ) \Pi (\partial _x v(\tau ) \Pi v^p(\tau ))d\tau \Big \Vert _{Y^s}\le C\big \Vert \Pi (\partial _x v \Pi v^p )\big \Vert _{Z^s}\, ,\)
| [1] | [
[
101,
104
]
] | https://openalex.org/W2075677712 |
b6835926-a582-4ea0-bb59-14308c35cc85 | will be enough. Its proof follows by combining the following propositions. The next statement is a slightly modified version of [1]}.
| [1] | [
[
128,
131
]
] | https://openalex.org/W2075677712 |
a13d87ee-b58f-4eca-9e68-37b861d055de | Write
\({\mathcal {F}}(u^p)(\tau ,n)= \int _{\tau =\tau _1+\cdots +\tau _p}\,\,\,\sum _{n=n_1+\cdots +n_p}\,\,\prod _{k=1}^p \hat{u}(\tau _k,n_k)\,,\)
where \({\mathcal {F}}\) and \(\hat{u}\) denote the space time Fourier transform (continuous in time and discrete in space).
\(\Vert u^p\Vert _{X^{s-1,\frac{1}{2}}}^2=\int _{{\mathbb {R}}}\sum _{n\in {\mathbb {Z}}}\langle n\rangle ^{2(s-1)} \langle \tau +n^3\rangle \, |{\mathcal {F}}(u^p)(\tau ,n)|^2\, d\tau \,.\)
Notice that the r.h.s. in (REF ) may be bounded with
\(\int _{{\mathbb {R}}}\sum _{n\in {\mathbb {Z}}}\langle n\rangle ^{2(s-1)} \langle \tau +n^3\rangle \big (\int _{\tau =\tau _1+\cdots +\tau _p}\,\,\sum _{n=n_1+\cdots +n_p}\,\,\prod _{k=1}^p |\widehat{u}(\tau _k,n_k)|\big )^2\, d\tau .\)
Hence if we define \(w(t,x)\) by \(\hat{w}(\tau ,n)=|\hat{u}(\tau ,n)|\) we get \(\Vert u\Vert _{X^{s,b}}=\Vert w\Vert _{X^{s,b}}\) , \(\Vert u\Vert _{Y^s}=\Vert w\Vert _{Y^s}\) , and we are reduced to estimate
\(\int _{{\mathbb {R}}}\sum _{n\in {\mathbb {Z}}}\langle n\rangle ^{2(s-1)} \langle \tau +n^3\rangle \,\big (\int _{\tau =\tau _1+\cdots +\tau _p}\,\,\sum _{n=n_1+\cdots +n_p}\,\,\prod _{k=1}^p \widehat{w}(\tau _k,n_k)\big )^2 d\tau \,\,\,.\)
Next we split the domain of integration and we consider first the contribution to (REF ) in the region
\(|\tau +n^3|\le 10p |\tau _1+n_1^3|.\)
If we define \(w_1\) by \(\widehat{w_1}(\tau ,n)=\langle \tau +n^3\rangle ^{\frac{1}{2}}\, \widehat{w}(\tau ,n)\) ,
then the contribution to (REF ) in the region (REF ) can be controlled
in the physical space variables as follows
\(C\Vert w_1 w^{p-1}\Vert _{L^2({\mathbb {R}};H^{s-1})}^2\le & C\big (\Vert w_1\Vert _{L^2({\mathbb {R}}; H^{s-1})}^2 \Vert w^{p-1}\Vert _{L^\infty ({\mathbb {R}}; L^\infty )}^2+\Vert w_1\Vert _{L^2({\mathbb {R}}; L^\infty )}^2\Vert w^{p-1}\Vert _{L^\infty ({\mathbb {R}}; H^{s-1})}^2\big )\\\le & C \big (\Vert w\Vert _{X^{s-1, \frac{1}{2}}}^2 \Vert w\Vert _{L^\infty ({\mathbb {R}}; H^1)}^{2(p-1)} + \Vert w_1\Vert _{L^2({\mathbb {R}}; H^1)}^2\Vert w\Vert _{L^\infty ({\mathbb {R}}; H^{s-1})}^2\Vert w\Vert _{L^\infty ({\mathbb {R}}; H^{1})}^{2(p-2)}\big )\)
where we have used standard product rules and Sobolev embedding \(H^1\subset L^\infty \) .
We proceed with
\((\dots ) \le C \big (\Vert w\Vert _{X^{s-1, \frac{1}{2}}}^2 \Vert w\Vert _{Y^1}^{2(p-1)} + \Vert w_1\Vert _{X^{1,\frac{1}{2}}}^2\Vert w\Vert _{Y^{s-1}}^2\Vert w\Vert _{Y^1}^{2(p-2)} \big )\)
where we used \(Y^1\subset L^\infty ({\mathbb {R}}; H^1)\) , \(Y^{s-1}\subset L^\infty ({\mathbb {R}}; H^{s-1})\) .
Notice that we have a better estimate, when compared with
(REF ),
in the region (REF ).
Similarly, we can evaluate the contributions to (REF ) of the regions
\(| \tau +n^3|\le 10 p| \tau _k+n_k^3|,\quad 2\le k\le p\,.\)
Therefore, we may assume that the summation and the integration in (REF ) is performed in the region
\(\max _{1\le k\le p}|\tau _k+n_k^3|\le \frac{1}{10p} |\tau +n^3|\,.\)
Write
\((\tau +n^3)-\sum _{k=1}^p(\tau _k+n_k^3)=\Big (\sum _{k=1}^p n_k\Big )^3-\sum _{k=1}^p n_k^3\,,\)
therefore in the region (REF ) we have
\(\Big |\Big (\sum _{k=1}^p n_k\Big )^3-\sum _{k=1}^p n_k^3\Big |\ge |\tau +n^3|-\sum _{k=1}^p |\tau _k+n_k^3|\ge \frac{9}{10}|\tau +n^3|\)
hence
\(\langle \tau +n^3\rangle \le C\Big |\Big (\sum _{k=1}^p n_k\Big )^3-\sum _{k=1}^p n_k^3\Big |\,.\)
By symmetry we can assume \(|n_1|\ge |n_2|\ge \cdots \ge |n_k|\) and by using [1]}, we obtain that
\(\Big |\Big (\sum _{k=1}^p n_k\Big )^3-\sum _{k=1}^p n_k^3\Big |\le C |n_1|^2 |n_2|.\)
Consequently in the region (REF ) we get \(\langle \tau +n^3\rangle \le C \langle n_1\rangle ^2 \langle n_2\rangle \) , and the corresponding contribution to (REF ) can be estimated as
\(C\, \int _{{\mathbb {R}}}\sum _{n\in {\mathbb {Z}}}\,\big (\int _{\tau =\tau _1+\cdots +\tau _p}\,\,\sum _{n=n_1+\cdots +n_p}\,\langle n_1\rangle ^{s} \langle n_2\rangle ^\frac{1}{2} \,\prod _{k=1}^p (\widehat{w}(\tau _k,n_k)\big )^2 \, d\tau \)
If we define \(w_1\) , \(w_2\) by \(\widehat{w_1}(\tau ,n)=\langle n\rangle ^{s} \widehat{w}(\tau ,n)\) ,
\(\widehat{w_2}(\tau ,n)=\langle n\rangle ^{\frac{1}{2}} \widehat{w}(\tau ,n)\) , going back to physical space variables, we estimate (REF ) as
\(C\Vert w_1 w_2 w^{p-2}\Vert _{L^2({\mathbb {R}}; L^2)}^2 \le &C\Vert w_1\Vert _{L^\infty ({\mathbb {R}}; L^2)}^2\Vert w_2\Vert _{L^4({\mathbb {R}}; L^\infty )}^2\Vert w\Vert _{L^4({\mathbb {R}}; L^\infty )}^2\Vert w\Vert _{L^\infty ({\mathbb {R}}; L^\infty )}^{2(p-3)}\\\le & C \Vert w\Vert _{L^\infty ({\mathbb {R}}; H^s)}^2\Vert w_2\Vert _{L^4({\mathbb {R}}; W^{\frac{1}{2},4})}^2\Vert w\Vert _{L^4({\mathbb {R}};W^{1,4})}^2\Vert w\Vert _{L^\infty ({\mathbb {R}}; H^1)}^{2(p-3)}.\)
Hence by using \(Y^1\subset L^\infty ({\mathbb {R}};H^1)\) and \(Y^s\subset L^\infty ({\mathbb {R}};H^s)\) ,
along with the estimate
\(\Vert u\Vert _{L^4({\mathbb {R}}; L^4)}\le C\Vert u\Vert _{X^{0,\frac{1}{3}}}\)
established in the fundamental work [2]}, we proceed with
\((\dots ) \le C \Vert w\Vert _{Y^s}^2\Vert w\Vert _{X^{1,\frac{1}{3}}}^2\Vert w\Vert _{X^{1,\frac{1}{3}}}^2\Vert w\Vert _{Y^1}^{2(p-3)}\)
and this concludes the proof.
| [1] | [
[
3444,
3447
]
] | https://openalex.org/W2075677712 |
916f41fe-69e8-45bf-98a5-d2d017293cf1 | On the other hand, it has turned out that there is another particular class of STIT tessellations, called Mondrian tessellations, for which a second-order description is desirable, since such tessellations have found numerous applications in machine learning. Reminiscent of the famous paintings of the Dutch modernist painter Piet Mondrian, the eponymous tessellations are a version of STIT tessellations with only axis-parrallel cutting directions. Originally established by Roy and Teh [1]}, Mondrian tessellations have been shown to have multiple applications in random forest learning [2]}, [3]} and kernel methods [4]}. Both random forest learners and random kernel approximations based on the Mondrian process have shown significant results, especially as they are substantially more adapted to online-learning (i.e., the ability to incorporate new data into an existing model without having to completely retrain it) than many other of their tessellation-based counterparts. This is due to the self-similarity of Mondrian tessellations, which stems from their defining characteristic of being iteration stable (see [5]}), and allows to obtain explicit representations for many conditional distributions of Mondrian tessellations. This property allows a tessellation-based learner to be re-trained on new data without having to newly start the training process and is thus considerably more efficient on large data sets. These methods have recently been carried over back to their origin in stochastic geometry, i.e., to general STIT tessellations [6]}, [7]}.
| [2] | [
[
590,
593
]
] | https://openalex.org/W2159187228 |
11d619e5-a96a-4ea6-a345-2cf9e77c987d | Let \([\mathbb {R}^2]\) be the space of lines in \(\mathbb {R}^2\) . Equipped with the Fell topology, \([\mathbb {R}^2]\) carries a natural Borel \(\sigma \) -field \(\mathfrak {B}([\mathbb {R}^2])\) , see [1]}. Further, define \([\mathbb {R}^2]_0\) to be the space of all lines in \(\mathbb {R}^2\) passing through the origin. For a line \(L\in [\mathbb {R}^2]\) , we write \(L^+\) and \(L^-\) for the positive and negative half-spaces of \(L\) , respectively, and \(L^\perp \) for its orthogonal line passing through the origin.
For a compact set \(A\subset \mathbb {R}^2\) define
\([A]:= \lbrace L \in [\mathbb {R}^2] : L \cap A\ne \emptyset \rbrace \in \mathfrak {B}([\mathbb {R}^2])\)
| [1] | [
[
208,
211
]
] | https://openalex.org/W2767091268 |
36fe2d06-fdc3-4c0d-829c-1173674b2cbd | for any non-negative measurable function \(g: [\mathbb {R}^2] \rightarrow \mathbb {R}\) , see [1]}. Sufficient normalization is usually applied to \(\mathcal {R}\) in order to gain a probability distribution, which is then referred to as the directional distribution.
| [1] | [
[
94,
97
]
] | https://openalex.org/W565214016 |
13a3874b-343a-4f32-a91c-2bec1e3bde4d | Remark 2.4 It is instructive to compare this result to the corresponding asymptotic formulas for isotropic STIT tessellations in the plane and the rectangular Poisson line process. We therefore denote by \(\Lambda _{\rm iso }\) the isometry invariant measure on the space of lines in the plane normalized in such a way that \(\Lambda _{\rm iso}([[0,1]^2])={4\over \pi }\) (this is the same normalization as the one used in [1]}).
| [1] | [
[
425,
428
]
] | https://openalex.org/W565214016 |
cf2c6d00-28bb-4c6d-a6bf-af5b29aed283 | In [1]} an explicit description of the pair-correlation function of the vertex point process of an isotropic planar STIT tessellation has been derived, while such a description for the random edge length measure can be found in [2]}. Also the so-called cross-correlation function between the vertex process and the random length measure was computed in [1]}. In the present paper we develop similar results for planar Mondrian tessellations. To define the necessary concepts, we suitably adapt the notions used in the isotropic case. We let \(Y_t\) be a weighted Mondrian tessellation of \(\mathbb {R}^2\) with weight \(p\in (0,1)\) and time parameter \(t>0\) , define \(R_p:=[0,1-p]\times [0,p]\) and let \(R_{r,p}:=rR_p\) be the rescaled rectangle with side lengths \(r(1-p)\) and \(rp\) . In the spirit of Ripley's K-function widely used in spatial statistics [4]}, we let \(t^2K_{\cal E}(r)\) be the total edge length of \(Y_t\) in \(R_{r,p}\) when \(Y_t\) is regarded under the Palm distribution with respect to the random edge length measure \(\mathcal {E}_t\) concentrated on the edge skeleton. On an intuitive level the latter means that we condition on the origin being a typical point of the edge skeleton, see [4]}. The classic version of Ripley's K-function considers a ball of radius \(r>0\) , but since our driving measure is non-isotropic, we account for that by considering \(R_{r,p}\) instead. Similarly, we let \(\lambda K_{\cal V}(r)\) be the total number of vertices of \(Y_t\) in \(R_{r,p}\) , where \(\lambda =t^2p(1-p)\) stands for the vertex intensity of \(Y_t\) and where we again condition on the origin being a typical vertex of the tessellation (in the sense of the Palm distribution with respect to the random vertex point process). While these functions still have a complicated form (which we will determine in the course of our arguments below), we consider their normalized derivatives – provided these derivatives are well defined as it is the case for us. In the isotropic case, these are known as the pair-correlation functions of the random edge length measure or the vertex point process, respectively. In our case, the following normalization turns out to be most suitable:
\(g_\mathcal {E}(r)= \frac{1}{2p(1-p)r} \frac{\textup {d}K_\mathcal {E}(r)}{\textup {d}r} \qquad \text{and}\qquad g_\mathcal {V}(r)= \frac{1}{2p(1-p)r} \frac{\textup {d}K_\mathcal {V}(r)}{\textup {d}r},\)
| [4] | [
[
869,
872
],
[
1232,
1235
]
] | https://openalex.org/W2118166339 |
c4563541-8877-40c1-a769-aef390e0b460 | (i)
In the isotropic case Schreiber and Thäle showed in [1]} that the pair-correlation function of the random edge length measure \(\mathcal {E}_{t}\) has the form
\(g_\mathcal {E}(r) = 1 + \frac{1}{2t^2 r^2}\Big (1-e^{-\frac{2}{\pi } tr}\Big ).\)
In [2]} the same authors showed that the pair-correlation function of the vertex point process \(\mathcal {V}_t\) and the cross-correlation function of the random edge length measure and the vertex point process are given by
\(g_{\mathcal {E},\mathcal {V}}(r) = 1+ \frac{1}{t^2 r^2}-\frac{\pi }{4t^3r^3}- \frac{e^{-\frac{2}{\pi }tr}}{2t^2r^2}\Big (1-\frac{\pi }{2tr}\Big )\)
and
\(g_\mathcal {V}(r) = 1 + \frac{2}{t^2r^2} - \frac{\pi }{t^3r^3} + \frac{\pi ^2}{4t^4r^4} - \frac{e^{-\frac{2}{\pi }tr}}{2t^2r^2} \Big ( 1 - \frac{\pi }{tr} + \frac{\pi ^2}{2t^2r^2} \Big ).\)
(ii)
For the rectangular Poisson line process as given in Remark REF one can use the theorem of Slivnyak-Mecke (see for example [3]}) to deduce that the corresponding analogues of the cross- and pair-correlation functions are given by
\(g_\mathcal {E}(r) = 1 + \frac{1}{tr},\qquad g_{\mathcal {E},\mathcal {V}}(r) = 1 + \frac{1}{4trp(1-p)}\qquad \text{and}\qquad g_\mathcal {V}(r)=1 + \frac{1}{2tr p^2(1-p)^2 }.\)
| [3] | [
[
953,
956
]
] | https://openalex.org/W2118166339 |
7c08224e-9351-453d-b10d-8bb0fd03dfc9 | + 12 p2(1-p) RRR1(() ) 1(() ) ( ) I1 (s2 e-sp|-|;t) d d d.
Having established the covariance measure of the edge process \(\mathcal {E}_{t}\) , we now aim at giving the corresponding pair-correlation function \(g_\mathcal {E}(r)\) . In a first step towards this we need to establish the reduced covariance measure \(\widehat{\operatorname{Cov}}(\mathcal {E}_{t})\) defined by the relation
Cov(Et)(AB)= A B-xCov(Et)(dy) 2(dx)
for a measurable product \(A\times B\subset \mathbb {R}^2\times \mathbb {R}^2\) (cf. [1]}). We now examine the first of the two integral summands in (). Using Lemma REF (ii) we see that
12 p(1-p)2 RRR1(A ()) 1(B ()) I1 (s2 e-s(1-p)|-|;t) d d d
| [1] | [
[
522,
525
]
] | https://openalex.org/W2525528836 |
e9dab924-aa79-4b23-ac0c-d3b007959d7d | Proceeding analogously with the second summand in () and using the diagonal shift argument from [1]}, we get the reduced covariance measure \( \widehat{\operatorname{Cov}}(\mathcal {E}_{t})\) on \(\mathbb {R}^2\) :
Cov(Et)( ) = 12 p(1-p)2 R(0 z)0 (0 z)0 y-x() dx dy I1 (s2 e-s(1-p)|z|;t) dz
| [1] | [
[
96,
99
]
] | https://openalex.org/W2525528836 |
7e3080bd-66d9-4529-af63-01db061f333c | + 12 p2(1-p) R0(0 z) 0(0 z) y-x() dx dy I1 (s2 e-s(1-p)|z|;t) dz.
Noting that the intensity of the random measure \(\mathcal {E}_{t}\) is just \(t\) , see [1]}, we apply Equation (8.1.6) in [2]} to see that the corresponding reduced second moment measure \(\widehat{\mathcal {K}}(\mathcal {E}_{t})\) is
K(Et)( ) = Cov(Et)( ) + t2 2().
While the classical Ripley's K-function would be \(t^{-2}\) times the \(\widehat{\mathcal {K}}(\mathcal {E}_{t})\) -measure of a disc of radius \(r>0\) , we define our Mondrian analogue as
KE(r):=1t2 K(Et)(Rr,p),
where \(R_{r,p}:=rR_p\) with \(R_p:=[0,1-p] \times [0, p]\) as before.
Calculating \(K_{\mathcal {E}}(r)\) explicitly via Lemma REF yields
KE(r) = r2p(1-p) +p(1-p)22t2 R(0 z)0 (0 z)0 y-x(Rr,p ) dx dy I1 (s2 e-s(1-p)|z|;t) dz
| [2] | [
[
203,
206
]
] | https://openalex.org/W2525528836 |
58e94361-c53b-406f-9ebb-6b930b85d193 | =p2 (1-p)R2 w(A)R 1(B-w0(0 z))) I1(s2 (-s p|z| );t) dz dw.
The second summand in each of the terms in () can be dealt with using Lemma REF (ii), see also Equation ().
As in the previous section, we want to proceed by giving the reduced covariance measure via the diagonal-shift argument in the sense of [1]}. Plugging the terms we just deduced into (), we end up with the covariance measure
Cov( Vt,Et)(AB)= A B-xCovV,E(dy) 2(dx),
where the reduced cross-covariance measure is given by
CovV,E( )= p(1-p)( (1-p)R 1( (0z)0) I1(s2 (-s(1-p)z ;t) dz
| [1] | [
[
313,
316
]
] | https://openalex.org/W2525528836 |
2fc2d0a4-7370-49a5-9905-fdb9ab11015b | We now aim at giving the corresponding Mondrian analogue of the pair-correlation function of the vertex point process. As in the previous sections, we do so by giving the reduced covariance measure via a diagonal-shift argument in the sense of [1]}.
Consider the first integral term in (REF ) without its coefficient for the Borel set \( A\times B \subset \mathbb {R}^2\times \mathbb {R}^2\) . After multiplying the Dirac measures, we only consider the first two summands that integrate over \(\delta _{(\tau ,\sigma )}(A)\delta _{(\tau ,\sigma )}(B)\) and \(\delta _{(\tau ,\sigma )}(A)\delta _{(\vartheta , \sigma )}(B)\) , respectively, as the other two can be handled in the same fashion. Using Lemma REF (iii) yields
\(&&\int _{\mathbb {R}} \int _{\mathbb {R}} \int _{\mathbb {R}} \, \delta _{(\tau ,\sigma )}(A)\delta _{(\tau ,\sigma )}(B) \, \mathcal {I}^1\big (s^2 \exp (-s(1-p)| \tau -\vartheta | ) ;t\big ) \, \textup {d}\vartheta \, \textup {d}\tau \, \textup {d}\sigma \\\\&=& \int _{\mathbb {R}^2} \delta _{w}(A)\, \int _{\mathbb {R}} \delta _{\mathbf {0}}(B - w) \mathcal {I}^1\big (s^2 \exp (-s(1-p)|-z_1| ) ;t\big ) \, \textup {d}z \, \textup {d}w\)
| [1] | [
[
244,
247
]
] | https://openalex.org/W2525528836 |
bf1936d9-5d5e-4788-a77f-e587166900aa | We again define a function in the spirit of Ripley's K-function via the reduced second moment measure \(\widehat{\mathcal {K}}( \mathcal {V}_{t})(R_{r,p})\) of \(R_{r,p}\) , \(r>0\) , and the corresponding normalized derivative as
KV(r)=1V2 K( Vt)(Rr,p)=1(t2p(1-p))2 K( Vt)(Rr,p).
Combining the considerations above with the diagonal shift argument from [1]} we obtain that the reduced covariance measure \( \widehat{\operatorname{Cov}}( \mathcal {V}_{t})\) with
Cov( Vt)(AB)= A B-x Cov( Vt)(dy) 2(dx),
is given by
\( \nonumber && \widehat{\operatorname{Cov}} ( \mathcal {V}_{t})( \, \cdot \, )\nonumber \\\nonumber && =p(1-p)\Bigg [(1-p) \bigg (\int _{\mathbb {R}} \delta _{\mathbf {0}}(\cdot ) \mathcal {I}^1\big (s^2 \exp (-s(1-p)|z| ) ;t\big ) \, \textup {d}z\\\nonumber && \qquad \qquad \qquad \qquad + \int _{\mathbb {R}} \delta _{(z, 0)}(\cdot ) \, \mathcal {I}^1\big (s^2 \exp (-s(1-p)|z|) ;t\big ) \, \textup {d}z \bigg )\\\nonumber && \qquad + p \bigg ( \int _{\mathbb {R}} \delta _{\mathbf {0}}(\cdot ) \mathcal {I}^1\big (s^2 \exp (-sp|z| ) ;t\big ) \, \textup {d}z+ \, \int _{\mathbb {R}} \delta _{(0, z)}(\cdot ) \, \mathcal {I}^1\big (s^2 \exp (-sp|z|) ;t\big ) \, \textup {d}z \, \bigg )\\\nonumber \\\nonumber && \qquad + 4\Bigg ( (1-p)^2 \int _{\mathbb {R}} \,\ell _1( \cdot \cap \overline{(0 z)}_0) \, \, \mathcal {I}^2\big (s^2 \exp (-s(1-p)|z|);t\big )\, \textup {d}z\\\nonumber \\\nonumber && \qquad \qquad \quad + p^2 \int _{\mathbb {R}} \,\ell _1( \cdot \cap {_0}\overline{(0 z)}) \, \, \mathcal {I}^2 \big (s^2 \exp (-sp|z|);t\big )\, \textup {d}z\\\nonumber && \qquad \qquad \quad + (1-p)^3 \int _{\mathbb {R}} \Big (\int _{ \overline{(0z)}_0} \int _{ \overline{(0z)}_0} \delta _{y-x}(\cdot )\, \textup {d}x \, \textup {d}y\Big )\,\mathcal {I}^3\big (s^2 \exp (-s(1-p)\vert z\vert ) ;t\big )\, \textup {d}z\\\nonumber \\&& \qquad \qquad \quad + p^3 \int _{\mathbb {R}} \Big (\int _{ \overline{{_0}(0z)}} \int _{ {_0}\overline{(0z)}} \delta _{y-x}(\cdot )\, \textup {d}x \, \textup {d}y\Big )\,\mathcal {I}^3\big (s^2 \exp (-sp\vert z\vert ) ;t\big )\, \textup {d}z\Bigg ) \Bigg ].\)
| [1] | [
[
361,
364
]
] | https://openalex.org/W2525528836 |
1b041648-1c6d-4b06-998d-6893942698e6 | The relationship [1]} now yields the reduced second moment measure \(\widehat{\mathcal {K}}( \mathcal {V}_{t})\)
\( \widehat{\mathcal {K}}( \mathcal {V}_{t})( \, \cdot \, )&=& \widehat{\operatorname{Cov}}( \mathcal {V}_{t})( \, \cdot \, )+ (t^2p(1-p))^2 \ell _2( \, \cdot \, ).\)
| [1] | [
[
17,
20
]
] | https://openalex.org/W2525528836 |
10a21af1-6f48-4f05-9b6a-685575dee5a9 | This is a sample theorem. The run-in heading is set in bold, while
the following text appears in italics. Definitions, lemmas,
propositions, and corollaries are styled the same way.
Proofs, examples, and remarks have the initial word in italics,
while the following text appears in normal font.
For citations of references, we prefer the use of square brackets
and consecutive numbers. Citations using labels or the author/year
convention are also acceptable. The following bibliography provides
a sample reference list with entries for journal
articles [1]}, an LNCS chapter [2]}, a
book [3]}, proceedings without editors [4]},
and a homepage [5]}. Multiple citations are grouped
[1]}, [2]}, [3]},
[1]}, [3]}, [4]}, [5]}.
| [1] | [
[
557,
560
],
[
684,
687
],
[
702,
705
]
] | https://openalex.org/W4232554906 |
af0c3339-2c30-476d-a15d-03eaf6812949 | Regarding the above, it would be very useful to see how relevant is the role that each fluid, represented by their respective energy-momentum tensor \(T^{i}_{\mu \nu }\) in (REF ), plays on a self-gravitating system, as well as how these gravitational sources interact with each other. This would allow, for instance, detecting which source dominates over the others, and consequently rule out any equation of state incompatible with the dominant source. Conceptually, achieving this in general relativity should be extremely difficult, given the nonlinear nature of the theory. However, since the Gravitational Decoupling approach (GD) [1]}, [2]} is precisely designed for coupling/decoupling gravitational sources in general relativity, we will see that, indeed, it is possible to elucidate the role played by each gravitational source, without resorting to any numerical protocol or perturbation scheme, as explained in the next paragraph.
| [1] | [
[
638,
641
]
] | https://openalex.org/W2608205523 |
1c8948bc-3e0a-49a0-a465-bbb24c5ad842 | Regarding the above, it would be very useful to see how relevant is the role that each fluid, represented by their respective energy-momentum tensor \(T^{i}_{\mu \nu }\) in (REF ), plays on a self-gravitating system, as well as how these gravitational sources interact with each other. This would allow, for instance, detecting which source dominates over the others, and consequently rule out any equation of state incompatible with the dominant source. Conceptually, achieving this in general relativity should be extremely difficult, given the nonlinear nature of the theory. However, since the Gravitational Decoupling approach (GD) [1]}, [2]} is precisely designed for coupling/decoupling gravitational sources in general relativity, we will see that, indeed, it is possible to elucidate the role played by each gravitational source, without resorting to any numerical protocol or perturbation scheme, as explained in the next paragraph.
| [2] | [
[
644,
647
]
] | https://openalex.org/W2901000943 |
5b6d07ab-e323-4d49-8604-5f57e092aeb5 | In this Section, we briefly review the GD for spherically symmetric gravitational systems
described in detail in Ref. [1]}. For the axially symmetric case, see Ref. [2]}.
The gravitational decoupling approach and its simplest version [3]}, based in the Minimal Geometric Deformation (MGD) [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}, [20]}, [21]}, [22]}, [23]}, [24]}, [25]}, [26]}, [27]}, are attractive
for many reasons (for an incomplete list of references, see [28]}, [29]}, [30]}, [31]}, [32]}, [33]}, [34]}, [35]}, [36]}, [37]}, [38]}, [39]}, [40]}, [41]}, [42]}, [43]}, [43]}, [45]}, [46]}, [47]}, [48]}, [49]}, [50]}, [51]}, [49]}, [53]}, [54]}, [55]}, [56]}, [57]}, [58]}, [59]}, [60]}, [61]}, [62]}, [63]}, [64]}, [65]}, [66]}, [67]}, [68]}, [69]}, [70]}, [71]}, [72]}, [73]}, [74]}, [75]}, [76]}, [77]}, [78]}, [79]}, [80]}. Among them we can mention i) the coupling of gravitational sources, which allows for extending known solutions of the Einstein field equations into
more complex domains; ii) the decoupling of gravitational sources, which is used to systematically reduce (decouple) a complex energy-momentum
tensor \(T_{\mu \nu }\) into simpler components; iii) to find solutions in gravitational theories beyond Einstein's; iv) to generate rotating hairy black hole solutions, among many others applications.
| [1] | [
[
118,
121
]
] | https://openalex.org/W2901000943 |
bd2b6133-5a08-4eaa-9da2-7c74f4c8ea75 | In this Section, we briefly review the GD for spherically symmetric gravitational systems
described in detail in Ref. [1]}. For the axially symmetric case, see Ref. [2]}.
The gravitational decoupling approach and its simplest version [3]}, based in the Minimal Geometric Deformation (MGD) [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}, [20]}, [21]}, [22]}, [23]}, [24]}, [25]}, [26]}, [27]}, are attractive
for many reasons (for an incomplete list of references, see [28]}, [29]}, [30]}, [31]}, [32]}, [33]}, [34]}, [35]}, [36]}, [37]}, [38]}, [39]}, [40]}, [41]}, [42]}, [43]}, [43]}, [45]}, [46]}, [47]}, [48]}, [49]}, [50]}, [51]}, [49]}, [53]}, [54]}, [55]}, [56]}, [57]}, [58]}, [59]}, [60]}, [61]}, [62]}, [63]}, [64]}, [65]}, [66]}, [67]}, [68]}, [69]}, [70]}, [71]}, [72]}, [73]}, [74]}, [75]}, [76]}, [77]}, [78]}, [79]}, [80]}. Among them we can mention i) the coupling of gravitational sources, which allows for extending known solutions of the Einstein field equations into
more complex domains; ii) the decoupling of gravitational sources, which is used to systematically reduce (decouple) a complex energy-momentum
tensor \(T_{\mu \nu }\) into simpler components; iii) to find solutions in gravitational theories beyond Einstein's; iv) to generate rotating hairy black hole solutions, among many others applications.
| [2] | [
[
165,
168
]
] | https://openalex.org/W3121669597 |
a7a72f39-1169-405d-acab-90fddfad8384 | In this Section, we briefly review the GD for spherically symmetric gravitational systems
described in detail in Ref. [1]}. For the axially symmetric case, see Ref. [2]}.
The gravitational decoupling approach and its simplest version [3]}, based in the Minimal Geometric Deformation (MGD) [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}, [20]}, [21]}, [22]}, [23]}, [24]}, [25]}, [26]}, [27]}, are attractive
for many reasons (for an incomplete list of references, see [28]}, [29]}, [30]}, [31]}, [32]}, [33]}, [34]}, [35]}, [36]}, [37]}, [38]}, [39]}, [40]}, [41]}, [42]}, [43]}, [43]}, [45]}, [46]}, [47]}, [48]}, [49]}, [50]}, [51]}, [49]}, [53]}, [54]}, [55]}, [56]}, [57]}, [58]}, [59]}, [60]}, [61]}, [62]}, [63]}, [64]}, [65]}, [66]}, [67]}, [68]}, [69]}, [70]}, [71]}, [72]}, [73]}, [74]}, [75]}, [76]}, [77]}, [78]}, [79]}, [80]}. Among them we can mention i) the coupling of gravitational sources, which allows for extending known solutions of the Einstein field equations into
more complex domains; ii) the decoupling of gravitational sources, which is used to systematically reduce (decouple) a complex energy-momentum
tensor \(T_{\mu \nu }\) into simpler components; iii) to find solutions in gravitational theories beyond Einstein's; iv) to generate rotating hairy black hole solutions, among many others applications.
| [3] | [
[
234,
237
]
] | https://openalex.org/W2608205523 |
6dfe4bcd-0bd2-4433-8607-bd874d267ec2 | Of course the tensor \(\theta _{\mu \nu }\) vanishes
when the deformations vanish (\(f=g=0\) ). We see that for the particular case \(g=0\) , Eqs. (REF )-() reduce to the simpler
“quasi-Einstein" system of the MGD of Ref. [1]},
in which \(f\) is only determined by \(\theta _{\mu \nu }\) and the undeformed metric (REF ). Also, notice that the set (REF )-() contains \(\lbrace \xi ,\,\mu \rbrace \) , and therefore is not independent of (REF )-(). This of course makes sense since both systems represent a simplified version of a more complex whole, described by Eqs. (REF )-().
| [1] | [
[
223,
226
]
] | https://openalex.org/W2608205523 |
1e16177d-a205-473b-98a4-d96d690e2e3d | The vision field is undergoing two revolutionary trends since about two years ago. The first trend is self-supervised visual representation learning pioneered by MoCo [1]}, which for the first time demonstrated superior transferring performance on seven downstream tasks over the previous standard supervised methods by ImageNet-1K classification. The second is the Transformer-based backbone architecture [2]}, [3]}, [4]}, which has strong potential to replace the previous standard convolutional neural networks such as ResNet [5]}. The pioneer work is ViT [2]}, which demonstrated strong performance on image classification by directly applying the standard Transformer encoder [7]} in NLP on non-overlapping image patches. The follow-up work, DeiT [3]}, tuned several training strategies to make ViT work well on ImageNet-1K image classification. While ViT/DeiT are designed for the image classification task and has not been well tamed for downstream tasks requiring dense prediction, Swin Transformer [4]} is proposed to serve as a general-purpose vision backbone by introducing useful inductive biases of locality, hierarchy and translation invariance.
| [1] | [
[
167,
170
]
] | https://openalex.org/W3035524453 |
23dc155a-a24f-40a1-9854-c72ab2669f0f | The vision field is undergoing two revolutionary trends since about two years ago. The first trend is self-supervised visual representation learning pioneered by MoCo [1]}, which for the first time demonstrated superior transferring performance on seven downstream tasks over the previous standard supervised methods by ImageNet-1K classification. The second is the Transformer-based backbone architecture [2]}, [3]}, [4]}, which has strong potential to replace the previous standard convolutional neural networks such as ResNet [5]}. The pioneer work is ViT [2]}, which demonstrated strong performance on image classification by directly applying the standard Transformer encoder [7]} in NLP on non-overlapping image patches. The follow-up work, DeiT [3]}, tuned several training strategies to make ViT work well on ImageNet-1K image classification. While ViT/DeiT are designed for the image classification task and has not been well tamed for downstream tasks requiring dense prediction, Swin Transformer [4]} is proposed to serve as a general-purpose vision backbone by introducing useful inductive biases of locality, hierarchy and translation invariance.
| [2] | [
[
406,
409
],
[
559,
562
]
] | https://openalex.org/W3094502228 |
a2f2ab67-cb75-4de3-bdb6-51c94deddaa5 | The vision field is undergoing two revolutionary trends since about two years ago. The first trend is self-supervised visual representation learning pioneered by MoCo [1]}, which for the first time demonstrated superior transferring performance on seven downstream tasks over the previous standard supervised methods by ImageNet-1K classification. The second is the Transformer-based backbone architecture [2]}, [3]}, [4]}, which has strong potential to replace the previous standard convolutional neural networks such as ResNet [5]}. The pioneer work is ViT [2]}, which demonstrated strong performance on image classification by directly applying the standard Transformer encoder [7]} in NLP on non-overlapping image patches. The follow-up work, DeiT [3]}, tuned several training strategies to make ViT work well on ImageNet-1K image classification. While ViT/DeiT are designed for the image classification task and has not been well tamed for downstream tasks requiring dense prediction, Swin Transformer [4]} is proposed to serve as a general-purpose vision backbone by introducing useful inductive biases of locality, hierarchy and translation invariance.
| [3] | [
[
412,
415
],
[
752,
755
]
] | https://openalex.org/W3116489684 |
e0fa8f84-b6c5-4183-bf85-882580528e66 | The vision field is undergoing two revolutionary trends since about two years ago. The first trend is self-supervised visual representation learning pioneered by MoCo [1]}, which for the first time demonstrated superior transferring performance on seven downstream tasks over the previous standard supervised methods by ImageNet-1K classification. The second is the Transformer-based backbone architecture [2]}, [3]}, [4]}, which has strong potential to replace the previous standard convolutional neural networks such as ResNet [5]}. The pioneer work is ViT [2]}, which demonstrated strong performance on image classification by directly applying the standard Transformer encoder [7]} in NLP on non-overlapping image patches. The follow-up work, DeiT [3]}, tuned several training strategies to make ViT work well on ImageNet-1K image classification. While ViT/DeiT are designed for the image classification task and has not been well tamed for downstream tasks requiring dense prediction, Swin Transformer [4]} is proposed to serve as a general-purpose vision backbone by introducing useful inductive biases of locality, hierarchy and translation invariance.
| [4] | [
[
418,
421
],
[
1007,
1010
]
] | https://openalex.org/W3138516171 |
46ae90da-bb29-4431-9981-19c3a51eb5ea | The vision field is undergoing two revolutionary trends since about two years ago. The first trend is self-supervised visual representation learning pioneered by MoCo [1]}, which for the first time demonstrated superior transferring performance on seven downstream tasks over the previous standard supervised methods by ImageNet-1K classification. The second is the Transformer-based backbone architecture [2]}, [3]}, [4]}, which has strong potential to replace the previous standard convolutional neural networks such as ResNet [5]}. The pioneer work is ViT [2]}, which demonstrated strong performance on image classification by directly applying the standard Transformer encoder [7]} in NLP on non-overlapping image patches. The follow-up work, DeiT [3]}, tuned several training strategies to make ViT work well on ImageNet-1K image classification. While ViT/DeiT are designed for the image classification task and has not been well tamed for downstream tasks requiring dense prediction, Swin Transformer [4]} is proposed to serve as a general-purpose vision backbone by introducing useful inductive biases of locality, hierarchy and translation invariance.
| [5] | [
[
529,
532
]
] | https://openalex.org/W2194775991 |
b60c8d28-2e3d-49bc-ab30-15d759695a68 | The vision field is undergoing two revolutionary trends since about two years ago. The first trend is self-supervised visual representation learning pioneered by MoCo [1]}, which for the first time demonstrated superior transferring performance on seven downstream tasks over the previous standard supervised methods by ImageNet-1K classification. The second is the Transformer-based backbone architecture [2]}, [3]}, [4]}, which has strong potential to replace the previous standard convolutional neural networks such as ResNet [5]}. The pioneer work is ViT [2]}, which demonstrated strong performance on image classification by directly applying the standard Transformer encoder [7]} in NLP on non-overlapping image patches. The follow-up work, DeiT [3]}, tuned several training strategies to make ViT work well on ImageNet-1K image classification. While ViT/DeiT are designed for the image classification task and has not been well tamed for downstream tasks requiring dense prediction, Swin Transformer [4]} is proposed to serve as a general-purpose vision backbone by introducing useful inductive biases of locality, hierarchy and translation invariance.
| [7] | [
[
681,
684
]
] | https://openalex.org/W2963403868 |
02a1be7a-4940-4a82-a280-b356dc267b3b | While the two revolutionary waves appeared independently, the community is curious about what kind of adaptation is needed and what it will behave when they meet each other. Nevertheless, until very recently, a few works started to explore this space: MoCo v3 [1]} presents a training recipe to let ViT perform reasonably well on ImageNet-1K linear evaluation; DINO [2]} presents a new self-supervised learning method which shows good synergy with the Transformer architecture.
| [1] | [
[
260,
263
]
] | https://openalex.org/W3145450063 |
e655a890-7073-4ef4-a906-a901eff76866 | While the two revolutionary waves appeared independently, the community is curious about what kind of adaptation is needed and what it will behave when they meet each other. Nevertheless, until very recently, a few works started to explore this space: MoCo v3 [1]} presents a training recipe to let ViT perform reasonably well on ImageNet-1K linear evaluation; DINO [2]} presents a new self-supervised learning method which shows good synergy with the Transformer architecture.
| [2] | [
[
366,
369
]
] | https://openalex.org/W3159481202 |
c15f115f-fbe0-46bf-b3d4-7658dd80b595 | In addition to this backbone architecture change, we also present a self-supervised learning approach by combining MoCo v2 [1]} and BYOL [2]}, named MaroonMoBY (by picking the first two letters of each). We tune a training recipe to make the approach performing reasonably high on ImageNet-1K linear evaluation: 72.8% top-1 accuracy using DeiT-S with 300-epoch training which is slightly better than that in MoCo v3 and DINO but with lighter tricks. Using Swin-T architecture instead of DeiT-S, it achieves 75.0% top-1 accuracy with 300-epoch training, which is 2.2% higher than that using DeiT-S.
Initial study shows that some tricks in MoCo v3 and DINO are also useful for MoBY, e.g. replacing the LayerNorm layers before the MLP blocks by BatchNorm like that in MoCo v3 bring additional +1.1% gains using 100 epoch training, indicating the strong potential of MoBY.
| [1] | [
[
123,
126
]
] | https://openalex.org/W3009561768 |
ec990692-8193-4885-b85b-1af46ab42e85 | In addition to this backbone architecture change, we also present a self-supervised learning approach by combining MoCo v2 [1]} and BYOL [2]}, named MaroonMoBY (by picking the first two letters of each). We tune a training recipe to make the approach performing reasonably high on ImageNet-1K linear evaluation: 72.8% top-1 accuracy using DeiT-S with 300-epoch training which is slightly better than that in MoCo v3 and DINO but with lighter tricks. Using Swin-T architecture instead of DeiT-S, it achieves 75.0% top-1 accuracy with 300-epoch training, which is 2.2% higher than that using DeiT-S.
Initial study shows that some tricks in MoCo v3 and DINO are also useful for MoBY, e.g. replacing the LayerNorm layers before the MLP blocks by BatchNorm like that in MoCo v3 bring additional +1.1% gains using 100 epoch training, indicating the strong potential of MoBY.
| [2] | [
[
137,
140
]
] | https://openalex.org/W3035060554 |
487d1fca-23a8-4c20-9d89-e8dab88d1fcd | When transferred to downstream tasks of COCO object detection and ADE20K semantic segmentation, the representations learnt by this self-supervised learning approach achieves on par performance compared to the supervised method. Noting self-supervised learning with ResNet architectures has shown significantly stronger transferring performance on downstream tasks than supervised methods [1]}, [2]}, [3]}, the results indicate large space to improve for self-supervised learning with Transformers.
| [1] | [
[
388,
391
]
] | https://openalex.org/W3035524453 |
599e6a47-bb14-479c-ae02-bc8a9f048a6b | When transferred to downstream tasks of COCO object detection and ADE20K semantic segmentation, the representations learnt by this self-supervised learning approach achieves on par performance compared to the supervised method. Noting self-supervised learning with ResNet architectures has shown significantly stronger transferring performance on downstream tasks than supervised methods [1]}, [2]}, [3]}, the results indicate large space to improve for self-supervised learning with Transformers.
| [2] | [
[
394,
397
]
] | https://openalex.org/W3172615411 |
96f38a14-992b-4be6-b4ff-1e7a875ce18f | When transferred to downstream tasks of COCO object detection and ADE20K semantic segmentation, the representations learnt by this self-supervised learning approach achieves on par performance compared to the supervised method. Noting self-supervised learning with ResNet architectures has shown significantly stronger transferring performance on downstream tasks than supervised methods [1]}, [2]}, [3]}, the results indicate large space to improve for self-supervised learning with Transformers.
| [3] | [
[
400,
403
]
] | https://openalex.org/W3135958856 |
a78f0cae-5cf1-4311-a3f5-fe13ac6fc8e3 | MoBY is a combination of two popular self-supervised learning approaches: MoCo v2 [1]} and BYOL [2]}. It inherits the momentum design, the key queue, and the contrastive loss used in MoCo v2, and inherits the asymmetric encoders, asymmetric data augmentations and the momentum scheduler in BYOL. We name it MaroonMoBY by picking the first two letters of each method.
| [1] | [
[
82,
85
]
] | https://openalex.org/W3009561768 |
ce0fffa5-803e-482c-88ef-616feb7b3b4a | MoBY is a combination of two popular self-supervised learning approaches: MoCo v2 [1]} and BYOL [2]}. It inherits the momentum design, the key queue, and the contrastive loss used in MoCo v2, and inherits the asymmetric encoders, asymmetric data augmentations and the momentum scheduler in BYOL. We name it MaroonMoBY by picking the first two letters of each method.
| [2] | [
[
96,
99
]
] | https://openalex.org/W3035060554 |
8285f7d9-de3f-4005-ae38-8f38b2dfabcf | In training, like most Transformer-based methods, we also adopt the AdamW [1]}, [2]} optimizer, in contrast to previous self-supervised learning approaches built on ResNet backbone where usually SGD [3]}, [4]} or LARS [5]}, [6]}, [7]} is used. We also introduce a regularization method of asymmetric drop path which proves crucial for the final performance.
| [1] | [
[
74,
77
]
] | https://openalex.org/W2964121744 |
34d1adcb-d252-4a69-bc0d-8e8e7f027049 | In training, like most Transformer-based methods, we also adopt the AdamW [1]}, [2]} optimizer, in contrast to previous self-supervised learning approaches built on ResNet backbone where usually SGD [3]}, [4]} or LARS [5]}, [6]}, [7]} is used. We also introduce a regularization method of asymmetric drop path which proves crucial for the final performance.
| [2] | [
[
80,
83
]
] | https://openalex.org/W2908510526 |
7332fbe5-51d5-469e-90ae-d3b9daa0b281 | In training, like most Transformer-based methods, we also adopt the AdamW [1]}, [2]} optimizer, in contrast to previous self-supervised learning approaches built on ResNet backbone where usually SGD [3]}, [4]} or LARS [5]}, [6]}, [7]} is used. We also introduce a regularization method of asymmetric drop path which proves crucial for the final performance.
| [3] | [
[
199,
202
]
] | https://openalex.org/W3035524453 |
46a815be-b146-4481-9122-fee0a42aba01 | In training, like most Transformer-based methods, we also adopt the AdamW [1]}, [2]} optimizer, in contrast to previous self-supervised learning approaches built on ResNet backbone where usually SGD [3]}, [4]} or LARS [5]}, [6]}, [7]} is used. We also introduce a regularization method of asymmetric drop path which proves crucial for the final performance.
| [4] | [
[
205,
208
]
] | https://openalex.org/W3106005682 |
2ce253ee-c6fa-4fed-8e61-06454cc41b6e | In training, like most Transformer-based methods, we also adopt the AdamW [1]}, [2]} optimizer, in contrast to previous self-supervised learning approaches built on ResNet backbone where usually SGD [3]}, [4]} or LARS [5]}, [6]}, [7]} is used. We also introduce a regularization method of asymmetric drop path which proves crucial for the final performance.
| [5] | [
[
218,
221
]
] | https://openalex.org/W3005680577 |
9be25d9e-8c8d-4c20-b078-db7d7240f49b | In training, like most Transformer-based methods, we also adopt the AdamW [1]}, [2]} optimizer, in contrast to previous self-supervised learning approaches built on ResNet backbone where usually SGD [3]}, [4]} or LARS [5]}, [6]}, [7]} is used. We also introduce a regularization method of asymmetric drop path which proves crucial for the final performance.
| [6] | [
[
224,
227
]
] | https://openalex.org/W3035060554 |
63712964-378f-4474-9e50-3ff8ec15f49e | In training, like most Transformer-based methods, we also adopt the AdamW [1]}, [2]} optimizer, in contrast to previous self-supervised learning approaches built on ResNet backbone where usually SGD [3]}, [4]} or LARS [5]}, [6]}, [7]} is used. We also introduce a regularization method of asymmetric drop path which proves crucial for the final performance.
| [7] | [
[
230,
233
]
] | https://openalex.org/W3172615411 |
05a29663-9167-48fb-bebc-f49dbb09c782 | In this work, we adopt the tiny version of Swin Transformer (Swin-T) as our default backbone, such that the transferring performance on downstream tasks of object detection and semantic segmentation can be also evaluated. The Swin-T has similar complexity with ResNet-50 and DeiT-S. The details of specific architecture design and hyper-parameters can be found in [1]}.
| [1] | [
[
364,
367
]
] | https://openalex.org/W3138516171 |
12b6988c-bf0d-467f-ab0a-dcb1e56e1f82 | Linear evaluation on ImageNet-1K dataset is a common evaluation protocol to assess the quality of learnt representations [1]}. In this protocol, a linear classifier is applied on the backbone, with the backbone weights frozen and only the linear classifier trained. After training this linear classifier, the top-1 accuracy using center crop is reported on the validation set.
| [1] | [
[
121,
124
]
] | https://openalex.org/W3035524453 |
278c7baf-86dd-411a-bf35-5ce93188af3c | During training, we follow [1]} to use random resize cropping with scale from \([0.08, 1]\) and horizontal flipping as the data augmentation. 100-epoch training with a 5-epoch linear warm-up stage is conducted. The weight decay is set as 0. The learning rate is set as the optimal one of \(\lbrace 0.5, 0.75, 1.0, 1.25\rbrace \) through grid search for each pre-trained model.
| [1] | [
[
27,
30
]
] | https://openalex.org/W3035524453 |
c370a077-c02f-4188-8499-04d48ae507bb | Regarding previous methods such as MoCo v3 [1]} and DINO [2]} adopt ViT/DeiT as their backbone architecture, we first report results of MoBY using DeiT-S [3]} for fair comparison with them. Under 300-epoch training, MoBY achieves 72.8% top-1 accuracy, which is slightly better than MoCo v3 and DINO (without the multi-crop trick), as shown in Table REF .
| [1] | [
[
43,
46
]
] | https://openalex.org/W3145450063 |