Relative Entropy Method for Measure Solutions of the Growth-Fragmentation Equation

Tomasz Dębiec (email: [email protected]), Marie Doumic (email: [email protected]), Piotr Gwiazda (email: [email protected]), Emil Wiedemann (email: [email protected])

Sorbonne Universités, 2018
Source: https://inria.hal.science/hal-01762974/file/GRE_2102_MD3.pdf

Keywords: measure solutions, growth-fragmentation equation, structured population, relative entropy, generalised Young measure
INTRODUCTION
Structured population models were developed for the purpose of understanding the evolution of a population over time, and in particular to adequately describe the dynamics of a population by its distribution along some "structuring" variables representing e.g. age, size, or cell maturity. These models, often taking the form of an evolutionary partial differential equation, have been extensively studied for many years. The first age-structured model was considered in the early 20th century by Sharpe and Lotka [START_REF] Sharpe | A problem in age-distribution[END_REF], who already made predictions on the question of asymptotic behaviour of the population, see also [START_REF] Kermack | A contribution to the mathematical theory of epidemics[END_REF][START_REF] Kermack | Contribution to the mathematical theory of epidemics. ii. the problem of endemicity[END_REF]. In the second half of the 20th century, size-structured models first appeared in [START_REF] Bell | Cell growth and division: I. a mathematical model with applications to cell volume distributions in mammalian suspension cultures[END_REF][START_REF] Sinko | A new model for age-size structure of a population[END_REF]. These studies gave rise to other physiologically structured models (age-size, saturation, cell maturity, etc.).
The object of this note is the growth-fragmentation model, which arises in many different contexts: cell division, polymerisation, neuroscience, prion proliferation, or even telecommunications. In its general linear form the model reads as the following equation.
$$\partial_t n(t,x) + \partial_x\big(g(x)n(t,x)\big) + B(x)n(t,x) = \int_x^{\infty} k(x,y)B(y)n(t,y)\,dy,$$
$$g(0)n(t,0) = 0, \qquad n(0,x) = n^0(x). \tag{1.1}$$
Here n(t, x) represents the concentration of individuals of size x ≥ 0 at time t > 0, g(x) ≥ 0 is their growth rate, B(x) ≥ 0 is their division rate and k(x, y) is the proportion of individuals of size x created out of the division of individuals of size y. This equation incorporates a very important phenomenon in biology -a competition between growth and fragmentation. Clearly they have opposite dynamics: growth drives the population towards a larger size, while fragmentation makes it smaller and smaller. Depending on which factor dominates, one can observe various long-time behaviour of the population distribution.
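For orientation, two standard examples of fragmentation kernels (given here only as an illustration, not as assumptions of this paper) show how $k$ encodes the division mechanism and the integral conditions imposed later in (2.6); the first is measure-valued and thus only formally within the continuous-kernel framework used below.

```latex
% Equal mitosis: each mother of size y splits into two daughters of size y/2.
k(x,y) = 2\,\delta_{x=y/2}, \qquad
\int_0^y k(x,y)\,dx = 2, \qquad \int_0^y x\,k(x,y)\,dx = 2\cdot\tfrac{y}{2} = y.

% Uniform fragmentation: the daughter size is uniformly distributed on (0,y).
k(x,y) = \frac{2}{y}\,\mathbf{1}_{\{0<x<y\}}, \qquad
\int_0^y \frac{2}{y}\,dx = 2, \qquad \int_0^y x\,\frac{2}{y}\,dx = y.
```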
Many authors have studied the long-time asymptotics (along with well-posedness) of variants of the growth-fragmentation equation, see e.g. [START_REF] Cáceres | Rate of convergence to the remarkable state for fragmentation and growth-fragmentation equations[END_REF][START_REF] Doumic | Eigenelements of a general aggregation-fragmentation model[END_REF][START_REF] Michel | Existence of a solution to the cell division eigenproblem[END_REF][START_REF] Mischler | Spectral analysis of semigroups and growth-fragmentation equations[END_REF][START_REF] Perthame | Exponential decay for the fragmentation or cell-division equation[END_REF]. The studies which establish convergence, in a proper sense, of a (renormalised) solution towards a steady profile were until recently limited only to initial data in weighted L 1 spaces. The classical tools for such studies include a direct application of the Laplace transform and the semigroup theory [START_REF] Mischler | Spectral analysis of semigroups and growth-fragmentation equations[END_REF]. These methods could also provide an exponential rate of convergence, linked to the existence of a spectral gap.
A different approach was developed by Perthame et al. in a series of papers [START_REF] Michel | General entropy equations for structured population models and scattering[END_REF][START_REF] Michel | General relative entropy inequality: an illustration on growth models[END_REF][START_REF] Perthame | Exponential decay for the fragmentation or cell-division equation[END_REF]. Their Generalised Relative Entropy (GRE) method provides a way to study long-time asymptotics of linear models even when no spectral gap is guaranteed -however failing to provide a rate of convergence, unless an entropy-entropy dissipation inequality is obtained [START_REF] Cáceres | Rate of convergence to the remarkable state for fragmentation and growth-fragmentation equations[END_REF]. Recently Gwiazda and Wiedemann [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF] extended the GRE method to the case of the renewal equation with initial data in the space of non-negative Radon measures. Their result is motivated by the increasing interest in measure solutions to models of mathematical biology, see e.g. [START_REF] Carrillo | Structured populations, cell growth and measure valued balance laws[END_REF][START_REF] Gwiazda | A nonlinear structured population model: Lipshitz continuity of measure valued solutions with respect to model ingredients[END_REF] for some recent results concerning well-posedness and stability theory in the space of non-negative Radon measures. The clear advantage of considering measure data is that it is biologically justified -it allows for treating the situation when a population is initially concentrated with respect to the structuring variable (and is, in particular, not absolutely continuous with respect to the Lebesgue measure). This is typically the case when departing from a population formed by a unique cell. We refer also to the recent result of Gabriel [START_REF] Gabriel | Measure solutions to the conservative renewal equation[END_REF], who uses the Doeblin method to analyze the long-time behaviour of measure solutions to the renewal equatio n.
A different approach was developed by Perthame et al. in a series of papers [START_REF] Michel | General entropy equations for structured population models and scattering[END_REF][START_REF] Michel | General relative entropy inequality: an illustration on growth models[END_REF][START_REF] Perthame | Exponential decay for the fragmentation or cell-division equation[END_REF]. Their Generalised Relative Entropy (GRE) method provides a way to study long-time asymptotics of linear models even when no spectral gap is guaranteed, however failing to provide a rate of convergence unless an entropy-entropy dissipation inequality is obtained [START_REF] Cáceres | Rate of convergence to the remarkable state for fragmentation and growth-fragmentation equations[END_REF]. Recently Gwiazda and Wiedemann [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF] extended the GRE method to the case of the renewal equation with initial data in the space of non-negative Radon measures. Their result is motivated by the increasing interest in measure solutions to models of mathematical biology, see e.g. [START_REF] Carrillo | Structured populations, cell growth and measure valued balance laws[END_REF][START_REF] Gwiazda | A nonlinear structured population model: Lipshitz continuity of measure valued solutions with respect to model ingredients[END_REF] for some recent results concerning well-posedness and stability theory in the space of non-negative Radon measures. The clear advantage of considering measure data is that it is biologically justified: it allows for treating the situation when a population is initially concentrated with respect to the structuring variable (and is, in particular, not absolutely continuous with respect to the Lebesgue measure). This is typically the case when departing from a population formed by a unique cell. We refer also to the recent result of Gabriel [START_REF] Gabriel | Measure solutions to the conservative renewal equation[END_REF], who uses the Doeblin method to analyze the long-time behaviour of measure solutions to the renewal equation.
Let us remark that the method of analysis employed in the current paper is inspired by the classical relative entropy method introduced by Dafermos in [START_REF] Dafermos | The second law of thermodynamics and stability[END_REF]. In recent years this method was extended to yield results on measure-valued-strong uniqueness for equations of fluid dynamics [START_REF] Brenier | Weak-strong uniqueness for measure-valued solutions[END_REF][START_REF] Feireisl | Dissipative measurevalued solutions to the compressible Navier-Stokes system[END_REF][START_REF] Gwiazda | Weak-strong uniqueness for measurevalued solutions of some compressible fluid models[END_REF] and general conservation laws [START_REF] Christoforou | Relative entropy for hyperbolic-parabolic systems and application to the constitutive theory of thermoviscoelasticity[END_REF][START_REF] Demoulini | Weak-strong uniqueness of dissipative measure-valued solutions for polyconvex elastodynamics[END_REF][START_REF] Gwiazda | Dissipative measure valued solutions for general conservation laws[END_REF]. See also [START_REF] Dębiec | Relative entropy method for measure-valued solutions in natural sciences[END_REF] and references therein.

The purpose of this paper is to generalise the results of [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF] to the case of a general growth-fragmentation equation. Similarly as in that paper, we make use of the concept of a recession function to make sense of compositions of nonlinear functions with a Radon measure. However, the appearance of the term $H'(u_\varepsilon(t,x))\,u_\varepsilon(t,y)$ in the entropy dissipation (see (3.8) below), which mixes dependences on the variables $x$ and $y$, poses a novel problem, which is overcome by using generalised Young measures and time regularity.
The current paper is structured as follows: in Section 2 we recall some basic results on Radon measures, recession functions and Young measures, and introduce the assumptions of our model; in Section 3 we state and prove the GRE inequality, which is then used to prove a long-time asymptotics result in Section 4.
DESCRIPTION OF THE MODEL
Preliminaries.
In what follows we denote by $\mathbb{R}_+$ the set $[0,\infty)$. By $\mathcal{M}(\mathbb{R}_+)$ we denote the space of signed Radon measures on $\mathbb{R}_+$. By Lebesgue's decomposition theorem, for each $\mu \in \mathcal{M}(\mathbb{R}_+)$ we can write
$$\mu = \mu_a + \mu_s,$$
where $\mu_a$ is absolutely continuous with respect to the Lebesgue measure $\mathcal{L}^1$, and $\mu_s$ is singular. The space $\mathcal{M}(\mathbb{R}_+)$ is endowed with the total variation norm
$$\|\mu\|_{TV} := \int_{\mathbb{R}_+} d|\mu|,$$
and we denote $\|\mu\|_{TV} = TV(\mu)$. By the Riesz Representation Theorem we can identify this space with the dual space to the space $C_0(\mathbb{R}_+)$ of continuous functions on $\mathbb{R}_+$ which vanish at infinity. The duality pairing is given by
$$\langle \mu, f\rangle := \int_0^\infty f(\xi)\,d\mu(\xi).$$
By $\mathcal{M}^+(\mathbb{R}_+)$ we denote the set of positive Radon measures of bounded total variation. We further define the $\varphi$-weighted total variation by $\|\mu\|_{TV_\varphi} := \int_{\mathbb{R}_+}\varphi\,d|\mu|$ and correspondingly the space $\mathcal{M}^+(\mathbb{R}_+;\varphi)$ of positive Radon measures whose $\varphi$-weighted total variation is finite. We still denote $TV(\mu) = \|\mu\|_{TV_\varphi}$. Of course we require that the function $\varphi$ be non-negative. In fact, for our purposes $\varphi$ will be strictly positive and bounded on each compact subset of $(0,\infty)$.
We say that a sequence $\nu_n \in \mathcal{M}(\mathbb{R}_+)$ converges weak* to some measure $\nu \in \mathcal{M}(\mathbb{R}_+)$ if
$$\langle \nu_n, f\rangle \longrightarrow \langle \nu, f\rangle \qquad \text{for each } f \in C_0(\mathbb{R}_+).$$
By a Young measure on $\mathbb{R}_+ \times \mathbb{R}_+$ we mean a parameterised family $\nu_{t,x}$ of probability measures on $\mathbb{R}_+$. More precisely, it is a weak*-measurable function $(t,x) \mapsto \nu_{t,x}$, i.e. such that the mapping $(t,x) \mapsto \langle \nu_{t,x}, f\rangle$ is measurable for each $f \in C_0(\mathbb{R}_+)$.
Young measures are often used to describe limits of weakly converging approximating sequences to a given problem. They serve as a way of describing weak limits of nonlinear functions of the approximate solution. Indeed, it is a classical result that a uniformly bounded measurable sequence u n generates a Young measure by which one represents the limit of f (u n ), where f is some non-linear function, see [?] for sequences in L ∞ and [START_REF] Ball | A version of the fundamental theorem for Young measures[END_REF] for measurable sequences.
This framework was used by DiPerna in his celebrated paper [START_REF] Diperna | Measure-valued solutions to conservation laws[END_REF], where he introduced the concept of an admissible measure-valued solution to scalar conservation laws. However, in more general contexts (e.g. for hyperbolic systems, where there is usually only one entropy-entropy flux pair) one needs to be able to describe limits of sequences which exhibit oscillatory behaviour as well as concentration of mass. Such a framework is provided by generalised Young measures, first introduced in the context of incompressible Euler equations in [START_REF] Diperna | Oscillations and concentrations in weak solutions of the incompressible fluid equations[END_REF], and later developed by many authors. We follow the exposition of Alibert, Bouchitté [START_REF] Alibert | Non-uniform integrability and generalized Young measures[END_REF] and Kristensen, Rindler [START_REF] Kristensen | Characterization of Generalized Gradient Young Measures Generated by Sequences in W 1,1 and BV[END_REF].
Suppose $f : \mathbb{R}^n \to \mathbb{R}_+$ is an even continuous function with at most linear growth, i.e.
$$|f(x)| \le C(1+|x|)$$
for some constant $C$. We define, whenever it exists, the recession function of $f$ as
$$f^\infty(x) = \lim_{s\to\infty}\frac{f(sx)}{s} = \lim_{s\to\infty}\frac{f(-sx)}{s}.$$
Definition 2.1. The set $\mathcal{F}(\mathbb{R})$ of continuous functions $f:\mathbb{R}\to\mathbb{R}_+$ for which $f^\infty$ exists and is continuous on $S^{n-1}$ is called the class of admissible integrands.
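As a simple illustration (ours, not taken from the paper), the following functions show when a recession function exists and what it looks like:

```latex
% Linear growth: admissible, with recession function |x|.
f(x) = \sqrt{1+x^2}, \qquad
f^\infty(x) = \lim_{s\to\infty}\frac{\sqrt{1+s^2x^2}}{s} = |x|.

% Bounded: admissible, with vanishing recession function.
f(x) = \frac{x^2}{1+x^2}, \qquad f^\infty \equiv 0.

% Superlinear growth: f(x)=x^2 violates |f(x)| \le C(1+|x|), hence is not admissible.
```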
By a generalised Young measure on Ω = R + × R + we mean a parameterised family (ν t,x , m) where for (t, x) ∈ Ω, ν t,x is a family of probability measures on R and m is a nonnegative Radon measure on Ω. In the following, we may omit the indices for ν t,x and denote it simply (ν, m).
The following result gives a way of representing weak* limits of sequences bounded in $L^1$ via a generalised Young measure. It was first proved in [START_REF] Alibert | Non-uniform integrability and generalized Young measures[END_REF] (Theorem 2.5). We state an adaptation to our simpler case.

Proposition 2.2. Let $(u_n)$ be a bounded sequence in $L^1_{loc}(\Omega;\mu,\mathbb{R})$, where $\mu$ is a measure on $\Omega$. There exists a subsequence $(u_{n_k})$, a nonnegative Radon measure $m$ on $\Omega$ and a parametrized family of probabilities $(\nu_\zeta)$ such that for any even function $f\in\mathcal{F}(\mathbb{R})$ we have
$$f(u_{n_k}(\zeta))\,\mu \ \overset{*}{\rightharpoonup}\ \langle \nu_\zeta, f\rangle\,\mu + f^\infty m. \tag{2.1}$$
Proof. We apply Theorem 2.5. and Remark 2.6 in [START_REF] Alibert | Non-uniform integrability and generalized Young measures[END_REF], simplified by the fact that f is even and that we only test against functions f independent of x. Note that the weak * convergence then has to be understood in the sense of compactly supported test functions ϕ ∈ C 0 (Ω, R).
The above proposition can in fact be generalised to say that every bounded sequence of generalised Young measures possesses a weak* convergent subsequence, cf. [26, Corollary 2].

Proposition 2.3. Let $(\nu^n, m^n)$ be a sequence of generalised Young measures on $\Omega$ such that
• the map $x \mapsto \langle\nu^n_x, |\cdot|\rangle$ is uniformly bounded in $L^1$,
• the sequence $(m^n(\bar\Omega))$ is uniformly bounded.
Then there is a generalised Young measure $(\nu, m)$ on $\Omega$ such that $(\nu^n, m^n)$ converges weak* to $(\nu, m)$.
2.2. The model. We consider the growth-fragmentation equation under a general form:
$$\partial_t n(t,x) + \partial_x\big(g(x)n(t,x)\big) + B(x)n(t,x) = \int_x^{\infty} k(x,y)B(y)n(t,y)\,dy,$$
$$g(0)n(t,0) = 0, \qquad n(0,x) = n^0(x). \tag{2.2}$$
We assume $n^0 \in \mathcal{M}^+(\mathbb{R}_+)$.
The fundamental tool in studying the long-time asymptotics with the GRE method is the existence and uniqueness of the first eigenelements (λ , N, ϕ), i.e. solutions to the following primal and dual eigenproblems.
$$\frac{\partial}{\partial x}\big(g(x)N(x)\big) + (B(x)+\lambda)N(x) = \int_x^\infty k(x,y)B(y)N(y)\,dy,$$
$$g(0)N(0) = 0, \qquad N(x) > 0 \ \text{ for } x>0, \qquad \int_0^\infty N(x)\,dx = 1, \tag{2.3}$$
$$-g(x)\frac{\partial}{\partial x}\varphi(x) + (B(x)+\lambda)\varphi(x) = B(x)\int_0^x k(y,x)\varphi(y)\,dy,$$
$$\varphi(x) > 0, \qquad \int_0^\infty \varphi(x)N(x)\,dx = 1. \tag{2.4}$$
We make the following assumptions on the parameters of the model.
$$B \in W^{1,\infty}(\mathbb{R}_+,\mathbb{R}_+^*), \qquad g \in W^{1,\infty}(\mathbb{R}_+,\mathbb{R}_+^*), \qquad \forall x\ge 0,\ g(x) \ge g_0 > 0, \tag{2.5}$$
$$k \in C_b(\mathbb{R}_+\times\mathbb{R}_+), \qquad \int_0^y k(x,y)\,dx = 2, \qquad \int_0^y x\,k(x,y)\,dx = y, \tag{2.6}$$
$$k(x,y) = 0 \ \text{ for } y < x, \qquad k(x,y) > 0 \ \text{ for } y > x. \tag{2.7}$$
These guarantee in particular existence and uniqueness of a solution n ∈ C (R + ; L 1 ϕ (R + )) for L 1 initial data (see e.g. [START_REF] Perthame | Kinetic formulation of conservation laws[END_REF]), existence of a unique measure solution for data in M + (R + ) (cf. [START_REF] Carrillo | Structured populations, cell growth and measure valued balance laws[END_REF]), as well as existence and uniqueness of a dominant eigentriplet (λ > 0, N(x), ϕ(x)), cf. [START_REF] Doumic | Eigenelements of a general aggregation-fragmentation model[END_REF]. In particular the functions N and ϕ are continuous, N is bounded and ϕ has at most polynomial growth. In what follows N and ϕ will always denote the solutions to problems (2.3) and (2.4), respectively. Let us remark that in the L 1 setting we have the following conservation law
$$\int_0^\infty n_\varepsilon(t,x)e^{-\lambda t}\varphi(x)\,dx = \int_0^\infty n^0(x)\varphi(x)\,dx. \tag{2.8}$$

2.3. Measure and measure-valued solutions. Let us observe that there are two basic ways to treat the above model in the measure setting. The first one is to consider a measure solution, i.e. a narrowly continuous map $t\mapsto \mu_t \in \mathcal{M}^+(\mathbb{R}_+)$, which satisfies (2.2) in the weak sense, i.e. for each
$\psi \in C^1_c(\mathbb{R}_+\times\mathbb{R}_+)$,
$$-\int_0^\infty\!\!\int_0^\infty \big(\partial_t\psi(t,x) + \partial_x\psi(t,x)\,g(x)\big)\,d\mu_t(x)\,dt + \int_0^\infty\!\!\int_0^\infty \psi(t,x)B(x)\,d\mu_t(x)\,dt$$
$$= \int_0^\infty\!\!\int_0^\infty \psi(t,x)\int_x^\infty k(x,y)B(y)\,d\mu_t(y)\,dx\,dt + \int_0^\infty \psi(0,x)\,dn^0(x). \tag{2.9}$$
Thus a measure solution is a family of time-parameterised non-negative Radon measures on the structure-physical domain R + .
The second way is to work with generalised Young measures and corresponding measure-valued solutions. To prove the generalised relative entropy inequality, which relies on considering a family of non-linear renormalisations of the equation, we choose to work in this second framework.
A measure-valued solution is a generalised Young measure $(\nu, m)$, where the oscillation measure is a family of parameterised probabilities over the state domain $\mathbb{R}_+$, such that equation (2.2) is satisfied by its barycenters $\langle\nu_{t,x},\xi\rangle$, i.e. the following equation
$$\partial_t\big(\langle\nu_{t,x},\xi\rangle + m\big) + \partial_x\Big(g(x)\big(\langle\nu_{t,x},\xi\rangle + m\big)\Big) + B(x)\big(\langle\nu_{t,x},\xi\rangle + m\big) = \int_x^\infty k(x,y)B(y)\langle\nu_{t,y},\xi\rangle\,dy + \int_x^\infty k(x,y)B(y)\,dm(y) \tag{2.10}$$
holds in the sense of distributions on R * + × R * + . It is proven in [START_REF] Gwiazda | A nonlinear structured population model: Lipshitz continuity of measure valued solutions with respect to model ingredients[END_REF] that equation (2.2) has a unique measure solution. To this solution one can associate a measure-valued solution -for example, given a measure solution t → µ t one can define a measure-valued solution by
$$\nu_{t,x} = \delta_{\frac{d\mu^a_t}{d\mathcal{L}^1}(x)}, \qquad \text{so that } \langle\nu_{t,x},\mathrm{id}\rangle\,\mathcal{L}^1 = \mu^a_t, \qquad m = \mu^s_t,$$
where $\frac{d\mu_1}{d\mu_2}$ denotes the Radon-Nikodym derivative of $\mu_1$ with respect to $\mu_2$. However, clearly, the measure-valued solutions are not unique: since the equation is linear, there is freedom in choosing the Young measure as long as the barycenter satisfies equation (2.10). For example, a different measure-valued solution can be defined by
$$\nu_{t,x} = \tfrac12\,\delta_{2\frac{d\mu^a_t}{d\mathcal{L}^1}(x)} + \tfrac12\,\delta_{0}, \qquad \langle\nu_{t,x},\mathrm{id}\rangle\,\mathcal{L}^1 = \mu^a_t.$$
Uniqueness of measure-valued solution can be ensured by requiring that the generalised Young measure satisfies not only the equation, but also a family of nonlinear renormalisations. This was the case in the work of DiPerna [START_REF] Diperna | Measure-valued solutions to conservation laws[END_REF], see also [START_REF] Dębiec | Relative entropy method for measure-valued solutions in natural sciences[END_REF].
To establish the GRE inequality which will then be used to prove an asymptotic convergence result, we consider the measure-valued solution generated by a sequence of regularized solutions. This allows us to use the classical GRE method established in [START_REF] Perthame | Transport equations in biology[END_REF]. Careful passage to the limit will then show that analogous inequalities hold for the measure-valued solution.
GRE INEQUALITY
In this section we formulate and prove the generalised relative entropy inequality, our main tool in the study of long-time asymptotics for equation (2.2). We take advantage of the well-known GRE inequalities in the L 1 setting. To do so we consider the growth-fragmentation equation (2.2) for a sequence of regularized data and prove that we can pass to the limit, thus obtaining the desired inequalities in the measure setting.
Let $n^0_\varepsilon \in L^1_\varphi(\mathbb{R}_+)$ be a sequence of regularizations of $n^0$ converging weak* to $n^0$ in the space of measures and such that $TV(n^0_\varepsilon) \to TV(n^0)$. Let $n_\varepsilon$ denote the corresponding unique solution to (2.2) with $n^0_\varepsilon$ as an initial condition. Then for any differentiable strictly convex admissible integrand $H$ we define the usual relative entropy
$$\mathcal{H}_\varepsilon(t) := \int_0^\infty \varphi(x)N(x)\,H\!\left(\frac{n_\varepsilon(t,x)e^{-\lambda t}}{N(x)}\right)dx$$
and entropy dissipation
$$D^H_\varepsilon(t) := \int_0^\infty\!\!\int_0^\infty \varphi(x)N(y)B(y)k(x,y)\left[H\!\left(\frac{n_\varepsilon(t,y)e^{-\lambda t}}{N(y)}\right) - H\!\left(\frac{n_\varepsilon(t,x)e^{-\lambda t}}{N(x)}\right) - H'\!\left(\frac{n_\varepsilon(t,x)e^{-\lambda t}}{N(x)}\right)\left(\frac{n_\varepsilon(t,y)e^{-\lambda t}}{N(y)} - \frac{n_\varepsilon(t,x)e^{-\lambda t}}{N(x)}\right)\right]dx\,dy.$$
Then, as shown e.g. in [START_REF] Michel | General entropy equations for structured population models and scattering[END_REF], one has
$$\frac{d}{dt}\int_0^\infty \varphi(x)N(x)\,H\!\left(\frac{n_\varepsilon(t,x)e^{-\lambda t}}{N(x)}\right)dx = -D^H_\varepsilon(t), \tag{3.1}$$
with right-hand side non-positive due to convexity of $H$. Hence the relative entropy is non-increasing. It follows that
$$\mathcal{H}_\varepsilon(t) \le \mathcal{H}_\varepsilon(0) \qquad \text{and, since } H\ge 0, \qquad \int_0^\infty D^H_\varepsilon(t)\,dt \le \mathcal{H}_\varepsilon(0). \tag{3.2}$$
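A concrete admissible entropy to keep in mind (our own illustration, not required by the argument): it is even, non-negative, strictly convex, has linear growth, and makes the non-negativity of the dissipation terms explicit.

```latex
H(u) = \sqrt{1+u^2}, \qquad H'(u) = \frac{u}{\sqrt{1+u^2}}, \qquad
H''(u) = (1+u^2)^{-3/2} > 0, \qquad
H^\infty = \lim_{s\to\infty}\frac{H(s)}{s} = 1.
% Convexity gives H(\xi) - H(\alpha) - H'(\alpha)(\xi-\alpha) \ge 0,
% and |H'(\alpha)| < 1 = H^\infty, so both integrands appearing in the
% entropy dissipation below are non-negative.
```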
In the next proposition we prove corresponding inequalities for the measure-valued solution generated by the sequence $n_\varepsilon$. This result is an analogue of Theorem 5.1 in [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF].

Proposition 3.1. With notation as above, there exists a subsequence (not relabelled), generating a generalised Young measure $(\nu, m)$ with $m = m_t \otimes dt$ for a family of positive Radon measures $m_t$, such that
$$\lim_{\varepsilon\to 0}\int_0^\infty \chi(t)\,\mathcal{H}_\varepsilon(t)\,dt = \int_0^\infty \chi(t)\left[\int_0^\infty \varphi(x)N(x)\,\langle\nu_{t,x}(\alpha), H(\alpha)\rangle\,dx + \int_0^\infty \varphi(x)N(x)\,H^\infty\,dm_t(x)\right]dt \tag{3.3}$$
for any $\chi\in C_c([0,\infty))$, and
$$\lim_{\varepsilon\to 0}\int_0^\infty D^H_\varepsilon(t)\,dt = \int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty \varphi(x)N(y)B(y)k(x,y)\,\big\langle\nu_{t,y}(\xi)\otimes\nu_{t,x}(\alpha),\ H(\xi)-H(\alpha)-H'(\alpha)(\xi-\alpha)\big\rangle\,dx\,dy\,dt$$
$$\qquad\qquad + \int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty \varphi(x)N(y)B(y)k(x,y)\,\big\langle\nu_{t,x}(\alpha),\ H^\infty - H'(\alpha)\big\rangle\,dm_t(y)\,dx\,dt \ \ge\ 0. \tag{3.4}$$
We denote the limits on the left-hand sides of the above equations by $\int_0^\infty \chi(t)\mathcal{H}(t)\,dt$ and $\int_0^\infty D^H(t)\,dt$, respectively, thus defining the measure-valued relative entropy and entropy dissipation for almost every $t$. We further set
$$\mathcal{H}(0) := \int_0^\infty \varphi(x)N(x)\,H\!\left(\frac{(n^0)^a(x)}{N(x)}\right)dx + \int_0^\infty \varphi(x)\,H^\infty\!\left(\frac{(n^0)^s}{|(n^0)^s|}(x)\right)d|(n^0)^s|(x). \tag{3.5}$$
We then have
$$\frac{d}{dt}\mathcal{H}(t) \le 0 \quad \text{in the sense of distributions}, \tag{3.6}$$
and
$$\int_0^\infty D^H(t)\,dt \le \mathcal{H}(0). \tag{3.7}$$
Proof. The function $t\mapsto \int_0^\infty n_\varepsilon(t,x)e^{-\lambda t}\varphi(x)\,dx$ is constant and the function $N$ is strictly positive on $(0,\infty)$. Therefore the sequence $u_\varepsilon(t,x) := \frac{n_\varepsilon(t,x)e^{-\lambda t}}{N(x)}$ is uniformly bounded in $L^\infty(\mathbb{R}_+;L^1_{\varphi,loc}(\mathbb{R}_+))$. Hence we can apply Proposition 2.2 to obtain a generalised Young measure $(\nu,m)$ on $\mathbb{R}_+\times\mathbb{R}_+$. Since $u_\varepsilon \in L^\infty(\mathbb{R}_+;L^1_{\varphi,loc}(\mathbb{R}_+))$, we have $m \in L^\infty(\mathbb{R}_+;\mathcal{M}(\mathbb{R}_+;\varphi))$. By a standard disintegration argument (see for instance [START_REF] Evans | Weak convergence methods for nonlinear partial differential equations[END_REF], Theorem 1.5.1) we can write the slicing measure for $m$, $m(dt,dx) = m_t(dx)\otimes dt$, where the map $t\mapsto m_t \in \mathcal{M}^+(\mathbb{R}_+;\varphi)$ is measurable and bounded.
By Proposition 2.2 we have the weak* convergence
$$H(u_\varepsilon(t,x))\,(dt\otimes\varphi(x)dx) \ \overset{*}{\rightharpoonup}\ \langle\nu_{t,x},H\rangle\,(dt\otimes\varphi(x)dx) + H^\infty m.$$
Testing with $(t,x)\mapsto\chi(t)N(x)$ where $\chi\in C_c(\mathbb{R}_+)$, we obtain (3.3). Further, the convergence $\int_0^\infty\chi(t)\mathcal{H}_\varepsilon(t)\,dt \to \int_0^\infty\chi(t)\mathcal{H}(t)\,dt$ implies (3.6), since for $\mathcal{H}_\varepsilon$ we have the corresponding inequality (3.1).
We now investigate the limit as $\varepsilon\to 0$ of $\int_0^\infty D^H_\varepsilon(t)\,dt$. Denoting $\Phi(x,y) := k(x,y)N(y)B(y)$ we have
$$D^H_\varepsilon(t) = \int_0^\infty\!\!\int_0^\infty \Phi(x,y)\varphi(x)\big[H(u_\varepsilon(t,y)) - H(u_\varepsilon(t,x)) - H'(u_\varepsilon(t,x))u_\varepsilon(t,y) + H'(u_\varepsilon(t,x))u_\varepsilon(t,x)\big]\,dx\,dy. \tag{3.8}$$
We consider each of the four terms of the sum separately on the restricted domain [0, T ] × [η, K] 2 for fixed T > 0 and K > η > 0. Let D H ε,η,K denote the entropy dissipation with the integrals of (3.8) each taken over the subsets [η, K] of R + .
We now apply Proposition 2.2 to the sequence $u_\varepsilon$ and the measure $dt\otimes\varphi(x)dx$ on the set $[0,T]\times[\eta,K]$. The first two and the last integrands of $D^H_{\varepsilon,\eta,K}(t)$ depend on $t$ and only on either $x$ or $y$. Therefore we can pass to the limit as $\varepsilon\to 0$ by Proposition 2.2 using a convenient test function. More precisely, testing with $(t,x)\mapsto \int_\eta^K \Phi(x,y)\,dy$, we obtain the convergence
$$-\int_0^T\!\!\int_\eta^K\!\!\int_\eta^K \Phi(x,y)\varphi(x)\,H(u_\varepsilon(t,x))\,dy\,dx\,dt \longrightarrow -\int_0^T\!\!\int_\eta^K\!\!\int_\eta^K \Phi(x,y)\varphi(x)\,\langle\nu_{t,x},H\rangle\,dy\,dx\,dt - \int_0^T\!\!\int_\eta^K\!\!\int_\eta^K \Phi(x,y)\varphi(x)\,H^\infty\,dm_t(x)\,dy\,dt,$$
and, noticing that the recession function of $\alpha\mapsto\alpha H'(\alpha)$ is $H^\infty$,
$$\int_0^T\!\!\int_\eta^K\!\!\int_\eta^K \Phi(x,y)\varphi(x)\,H'(u_\varepsilon(t,x))u_\varepsilon(t,x)\,dy\,dx\,dt \longrightarrow \int_0^T\!\!\int_\eta^K\!\!\int_\eta^K \Phi(x,y)\varphi(x)\,\langle\nu_{t,x},\alpha H'(\alpha)\rangle\,dy\,dx\,dt + \int_0^T\!\!\int_\eta^K\!\!\int_\eta^K \Phi(x,y)\varphi(x)\,H^\infty\,dm_t(x)\,dy\,dt.$$
Likewise, using $(t,y)\mapsto \frac{1}{\varphi(y)}\int_\eta^K \Phi(x,y)\varphi(x)\,dx$, we obtain
$$\int_0^T\!\!\int_\eta^K\!\!\int_\eta^K \Phi(x,y)\varphi(x)\,H(u_\varepsilon(t,y))\,dx\,dy\,dt \longrightarrow \int_0^T\!\!\int_\eta^K\!\!\int_\eta^K \Phi(x,y)\varphi(x)\,\langle\nu_{t,y},H\rangle\,dx\,dy\,dt + \int_0^T\!\!\int_\eta^K\!\!\int_\eta^K \Phi(x,y)\varphi(x)\,H^\infty\,dm_t(y)\,dx\,dt.$$
There remains the term of $D^H_{\varepsilon,\eta,K}$ in which the dependence on $u_\varepsilon$ combines $x$ and $y$. To deal with this term we separate variables by testing against functions of the form $f_1(x)f_2(y)$. We then consider
$$-\int_0^T\!\!\int_{[\eta,K]^2} f_1(x)f_2(y)\,H'(u_\varepsilon(t,x))\,u_\varepsilon(t,y)\,dx\,dy\,dt = -\int_0^T\left(\int_\eta^K f_1(x)H'(u_\varepsilon(t,x))\,dx\right)\left(\int_\eta^K f_2(y)u_\varepsilon(t,y)\,dy\right)dt.$$
The integrands are now split, one containing the $x$ dependence and one the $y$ dependence. However, extra care is required here to pass to the limit. As the Young measures depend both on time and space, it is possible for the oscillations to appear in both directions. We therefore require appropriate time regularity of at least one of the sequences to guarantee the desired behaviour of the limit of the product. Such a requirement is met by noticing that, since $u_\varepsilon \in C([0,T];L^1_\varphi(\mathbb{R}_+))$ uniformly, we have $u_\varepsilon$ uniformly bounded in $W^{1,\infty}([0,T];(\mathcal{M}^+(\mathbb{R}_+),\|\cdot\|_{(W^{1,\infty})^*}))$, cf. [8, Lemma 4.1]. Assuming $f_2 \in W^{1,\infty}(\mathbb{R}_+)$ we therefore have $t\mapsto \int_\eta^K f_2(y)u_\varepsilon(t,y)\,dy \in W^{1,\infty}([0,T])$. This in turn implies strong convergence of $\int_\eta^K f_2(y)u_\varepsilon(t,y)\,dy$ in $C([0,T])$, by virtue of the Arzelà-Ascoli theorem. Therefore we have (noting that $(H')^\infty \equiv 0$ by sublinear growth of $H'$)
$$-\int_0^T\!\!\int_{[\eta,K]^2} f_1(x)f_2(y)\,H'(u_\varepsilon(t,x))\,u_\varepsilon(t,y)\,dx\,dy\,dt = -\int_0^T\left(\int_\eta^K f_1(x)H'(u_\varepsilon(t,x))\,dx\right)\left(\int_\eta^K f_2(y)u_\varepsilon(t,y)\,dy\right)dt$$
$$\longrightarrow -\int_0^T\left(\int_\eta^K f_1(x)\langle\nu_{t,x},H'\rangle\,dx\right)\left(\int_\eta^K f_2(y)\langle\nu_{t,y},\mathrm{id}\rangle\,dy\right)dt - \int_0^T\left(\int_\eta^K f_1(x)\langle\nu_{t,x},H'\rangle\,dx\right)\left(\int_\eta^K f_2(y)\,dm_t(y)\right)dt$$
$$= -\int_0^T\!\!\int_{[\eta,K]^2} f_1(x)f_2(y)\,\langle\nu_{t,x},H'(\alpha)\rangle\,\langle\nu_{t,y},\xi\rangle\,dx\,dy\,dt - \int_0^T\!\!\int_{[\eta,K]^2} f_1(x)f_2(y)\,\langle\nu_{t,x},H'(\alpha)\rangle\,dm_t(y)\,dx\,dt.$$
By density of the linear space spanned by separable functions in the space of bounded continuous functions of $(x,y)$ we obtain
$$-\int_0^T\!\!\int_{[\eta,K]^2} \Phi(x,y)\varphi(x)\,H'(u_\varepsilon(t,x))\,u_\varepsilon(t,y)\,dx\,dy\,dt \longrightarrow -\int_0^T\!\!\int_{[\eta,K]^2} \Phi(x,y)\varphi(x)\,\langle\nu_{t,x},H'(\alpha)\rangle\,\langle\nu_{t,y},\xi\rangle\,dx\,dy\,dt - \int_0^T\!\!\int_{[\eta,K]^2} \Phi(x,y)\varphi(x)\,\langle\nu_{t,x},H'(\alpha)\rangle\,dm_t(y)\,dx\,dt.$$
Gathering all the terms we thus obtain the convergence as $\varepsilon\to 0$
$$\int_0^T D^H_{\varepsilon,\eta,K}(t)\,dt \longrightarrow \int_0^T D^H_{\eta,K}(t)\,dt$$
with
$$D^H_{\eta,K}(t) := \int_{[\eta,K]^2} \Phi(x,y)\varphi(x)\,\big\langle\nu_{t,y}(\xi)\otimes\nu_{t,x}(\alpha),\ H(\xi)-H(\alpha)-H'(\alpha)(\xi-\alpha)\big\rangle\,dx\,dy + \int_{[\eta,K]^2} \Phi(x,y)\varphi(x)\,\big\langle\nu_{t,x}(\alpha),\ H^\infty - H'(\alpha)\big\rangle\,dm_t(y)\,dx.$$
Observe that since $\Phi$ is non-negative and $H$ is convex, the integrand of $D^H_{\varepsilon,\eta,K}$ is non-negative. Hence so is the integrand of the limit. Therefore, by monotone convergence, we can pass to the limit $\eta\to 0$, $K\to\infty$, and $T\to\infty$ to obtain
$$0 \le \lim_{\varepsilon\to 0}\int_0^\infty D^H_\varepsilon(t)\,dt = \int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty \varphi(x)N(y)B(y)k(x,y)\,\big\langle\nu_{t,y}(\xi)\otimes\nu_{t,x}(\alpha),\ H(\xi)-H(\alpha)-H'(\alpha)(\xi-\alpha)\big\rangle\,dx\,dy\,dt$$
$$\qquad\qquad + \int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty \varphi(x)N(y)B(y)k(x,y)\,\big\langle\nu_{t,x}(\alpha),\ H^\infty - H'(\alpha)\big\rangle\,dm_t(y)\,dx\,dt.$$
Finally we note that by the Reshetnyak continuity theorem, cf. [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF][START_REF] Kristensen | Relaxation of signed integral functionals in BV[END_REF], we have the convergence $\mathcal{H}_\varepsilon(0)\to\mathcal{H}(0)$. Together with (3.2) this implies (3.7).
LONG-TIME ASYMPTOTICS
In this section we use the result of the previous section to prove that a measure-valued solution of (2.2) converges to the steady-state solution. More precisely, we prove

Theorem 4.1. Let $n^0\in\mathcal{M}(\mathbb{R}_+)$ and let $n$ solve the growth-fragmentation equation (2.2). Then
$$\lim_{t\to\infty}\int_0^\infty \varphi(x)\,d\big|n(t,x)e^{-\lambda t} - m_0 N(x)\mathcal{L}^1\big| = 0, \tag{4.1}$$
where $m_0 := \int_0^\infty \varphi(x)\,dn^0(x)$ and $\mathcal{L}^1$ denotes the 1-dimensional Lebesgue measure.
Proof. From inequality (3.7) we see that $D^H$ belongs to $L^1(\mathbb{R}_+)$. Therefore there exists a sequence of times $t_n\to\infty$ such that
$$\lim_{n\to\infty} D^H(t_n) = 0.$$
Consider the corresponding sequence of generalised Young measures $(\nu_{t_n,x}, m_{t_n})$. Thanks to the inequality $\mathcal{H}(t)\le\mathcal{H}(0)$ this sequence is uniformly bounded in the sense that
$$\sup_n\left[\int_0^\infty \varphi(x)N(x)\,\langle\nu_{t_n,x}(\alpha),|\alpha|\rangle\,dx + \int_0^\infty \varphi(x)N(x)\,dm_{t_n}(x)\right] < \infty. \tag{4.2}$$
Therefore by the compactness property of Proposition 2.3 there is a subsequence, not relabelled, and a generalised Young measure $(\bar\nu_x, \bar m)$ such that
$$(\nu_{t_n,x}, m_{t_n}) \ \overset{*}{\rightharpoonup}\ (\bar\nu_x, \bar m)$$
in the sense of measures. We now show that the corresponding "entropy dissipation"
D H ∞ := ∞ 0 ∞ 0 Φ(x, y)ϕ(x) νy (ξ ) ⊗ νx (α), H(ξ ) -H(α) -H (α)(ξ -α) dxdy + ∞ 0 ∞ 0 Φ(x, y)ϕ(x) νx (α), H ∞ -H (α) d m(y)dx (4.3)
is zero. To this end we argue that
D H ∞ = lim n→∞ D H (t n ).
Indeed this follows by the same arguments as in the proof of Proposition 3.1. In fact the "mixed" term now poses no additional difficulty as there is no time integral. It therefore follows that
$$D^H_\infty = 0. \tag{4.4}$$
As $H$ is convex, both integrands in (4.3) are non-negative. Therefore (4.4) implies that both integrals of $D^H_\infty$ are zero. In particular
$$\int_0^\infty\!\!\int_0^\infty \big[H(\xi) - H(\alpha) - H'(\alpha)(\xi-\alpha)\big]\,d\bar\nu_x(\alpha)\,d\bar\nu_y(\xi) = 0,$$
and since the integrand vanishes if and only if $\xi = \alpha$, this implies that the Young measure $\bar\nu$ is a Dirac measure concentrated at a constant. Then the vanishing of the second integral of $D^H_\infty$ implies that $\bar m = 0$. Moreover, the constant can be identified as
$$m_0 := \int_0^\infty \varphi(x)\,dn^0(x) \tag{4.5}$$
by virtue of the conservation in time of $\int_0^\infty \varphi(x)e^{-\lambda t}\langle\nu_{t,x},\cdot\,\rangle\,dx + \int_0^\infty \varphi(x)e^{-\lambda t}\,dm_t(x)$. By virtue of Proposition 2.2 with $H = |\cdot - m_0|$ it then follows that
$$\lim_{n\to\infty}\int_0^\infty \varphi(x)\,d\big|n(t_n,x)e^{-\lambda t_n} - m_0 N(x)\mathcal{L}^1\big| = 0,$$
which is the desired result, at least for our particular sequence of times. Finally, we can argue that the last convergence holds for the entire time limit $t\to\infty$, invoking the monotonicity of the relative entropy $\mathcal{H}$. Indeed, the choice $H = |\cdot - m_0|$ in (3.5) yields the monotonicity in time of $\int_0^\infty \varphi(x)\,d|n(t,x)e^{-\lambda t} - m_0 N(x)\mathcal{L}^1|$, and the result follows.
CONCLUSION
In this article, we have proved the long-time convergence of measure-valued solutions to the growth-fragmentation equation. This result extends previously obtained results for $L^1_\varphi$ solutions [START_REF] Michel | General relative entropy inequality: an illustration on growth models[END_REF]. As for the renewal equation [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF], it is based on extending the generalised relative entropy inequality to measure-valued solutions, thanks to recession functions. Generalised Young measures provide an adequate framework to represent the measure-valued solutions and their entropy functionals.
Under slightly stronger assumptions on the fragmentation kernel $k$, e.g. the ones assumed in [START_REF] Cáceres | Rate of convergence to the remarkable state for fragmentation and growth-fragmentation equations[END_REF], it has been proved that an entropy-entropy dissipation inequality can be obtained. Under such assumptions, we could obtain in a simple way a stronger result of exponential convergence, see the proof of Theorem 4.1 in [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF]. However, the method presented above allows us to extend the convergence to spaces where no spectral gap exists [START_REF] Bernard | Asymptotic behavior of the growth-fragmentation equation with bounded fragmentation rate[END_REF].
An especially interesting case of application of this method would be critical cases where the dominant eigenvalue is not unique but is given by a countable set of eigenvalues. It has been proved that for $L^2$ initial conditions, the solution then converges to its projection on the space spanned by the dominant eigensolutions [START_REF] Bernard | Cyclic asymptotic behaviour of a population reproducing by fission into two equal parts[END_REF]. In the case of a measure-valued initial condition, since the equation no longer has a regularising effect, the asymptotic limit is expected to be the periodically oscillating measure obtained by projecting the initial condition onto the space of measures spanned by the dominant eigensolutions. This is a subject for future work.
Acknowledgements. T. D. would like to thank the Institute for Applied Mathematics of the Leibniz University of Hannover for its warm hospitality during his stay, when part of this work was completed. This work was partially supported by the Simons Foundation grant 346300 and the Polish Government MNiSW 2015-2019 matching fund. The research of T. D. was supported by National Science Center (Poland) 2014/13/B/ST1/03094. M.D.'s research was supported by the Wolfgang Pauli Institute (Vienna) and the ERC Starting Grant SKIPPER AD (number 306321). P. G. received support from National Science Center (Poland) 2015/18/M/ST1/00075.
"1030600",
"2247",
"1030601",
"1030602"
] | [
"107709",
"542023",
"55466",
"532215",
"532225"
] |
Deformable mirror interferometric analysis for the direct imagery of exoplanets

Johan Mazoyer (email: [email protected]), Raphaël Galicher, Pierre Baudoz, Patrick Lanzoni, Frédéric Zamkotsian, Gérard Rousset; source record also lists Enrico Marchetti, Laird Close, Jean-Pierre Véran

2014
Source: https://hal.science/hal-01762992/file/1710.03509.pdf

Keywords: Instrumentation, High-contrast imaging, adaptive optics, wave-front error correction, deformable mirror
INTRODUCTION
Direct imaging of exoplanets requires the use of high-contrast imaging techniques, among which coronagraphy. These instruments diffract and block the light of the star and allow us to observe the signal of a potential companion. However, these instruments are drastically limited by aberrations, introduced either by the atmosphere or by the optics themselves. The use of deformable mirrors (DM) is mandatory to reach the required performance. The THD bench (French acronym for very high-contrast bench), located at the Paris Observatory in Meudon, France, uses coronagraphic techniques associated with a Boston Micromachines DM. [START_REF] Bifano | Adaptive imaging: MEMS deformable mirrors[END_REF] This DM is a Micro-Electro-Mechanical System (MEMS), composed of 1024 actuators. In March 2013, we brought this DM to the Laboratoire d'Astrophysique de Marseille (LAM), France, where we studied precisely the performance and defects of this DM on the interferometric bench of this laboratory. The results of that study, conducted in collaboration with F. Zamkotsian and P. Lanzoni from LAM, are presented here.
We first describe the MEMS DM, the performance announced by Boston Micromachines, and its assumed state before this analysis (Section 2). In the same section, we also present the interferometric bench at LAM. The results of this analysis are then presented in several parts. We first describe the analyzed DM's overall shape and surface quality (Sections 3 and 4). We then analyze accurately the influence function of an actuator and its response to the application of different voltages (Section 5), first precisely for one actuator and then extended to the whole DM. Finally, special attention is paid to the damaged actuators that we identified (Section 6). We present several causes of dysfunction and possible solutions.
THE MEMS DM AND THE LAM INTERFEROMETRIC BENCH
2.1 The MEMS DM: specifications and damaged actuators
Out of the 1024 actuators, only 1020 are used because the corners are fixed. We number our actuators from 0 (bottom right corner) to 1023 (top left corner) as shown in Figure 1. The four fixed corner actuators are therefore numbers 0, 31, 992 and 1023. The edges of the DM are also composed of fixed, unnumbered actuators. The inter-actuator pitch is 300 µm, for a total DM size of 9.3 mm. Boston Micromachines announces a subnanometric minimum stroke and a total stroke of 1.5 µm. All the values presented in this paper, unless stated otherwise, are in mechanical deformation of the DM surface (which is half the phase deformation introduced by a reflection on this surface). The flattened-DM surface quality is valued by Boston Micromachines at 30 nm (root mean square, RMS).
The electronics of the DM allows us to apply voltages between 0 and 300 V, coded on 14 bits. The minimum voltage step is therefore 300/2^14 V, i.e. 18.3 mV. To protect the surface, the maximum voltage for this DM is 205 V.
We use percentages to express the accessible voltages: 0% corresponds to 0 V, while 100% corresponds to a voltage of 205 V. Each percent is thus a voltage of 2.05 V. The higher the voltage, the more the actuator is pulled towards the DM. A voltage of 100% thus corresponds to the minimum position of the surface, a voltage of 0% to its maximum. The minimum voltage step of each actuator is 8.93 10^-3 %. This value was checked on the THD bench by measuring the minimum voltage needed to produce an effect in the pupil plane after the coronagraph. The gain measurement in Section 5 will allow us to check the specifications for the maximum and minimum stroke of an actuator.
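The conversion between DAC steps, volts, and percent quoted above can be checked with a short calculation; the sketch below is only a minimal illustration of those numbers, assuming a 14-bit DAC over the 0-300 V range with a 205 V protection limit.

```python
# Minimal sketch of the voltage quantization quoted above (assumed 14-bit DAC over 0-300 V).
full_range_volts = 300.0      # electronics range
n_bits = 14
max_voltage = 205.0           # protection limit, defines "100 %"

lsb_volts = full_range_volts / 2**n_bits          # smallest commandable step
lsb_percent = lsb_volts / max_voltage * 100.0     # same step expressed in percent of 205 V

print(f"minimum voltage step: {lsb_volts*1e3:.2f} mV")   # ~18.31 mV
print(f"minimum step in percent: {lsb_percent:.3e} %")   # ~8.93e-3 %
```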
Before March 2013, we thought that two actuators were unusable (they did not follow our voltage commands): the 841, which could follow its neighbors if they were actuated, and the 197, which seemed stuck at the 0% value. These actuators will be studied specifically in Section 6. To avoid these actuators, the pupil before this analysis was reduced (27 actuators across the diameter of the pupil only) and offset (see Figure 1).
Analysis at LAM : interferometric bench and process
The interferometric bench at LAM [2] was developed for the precise analysis of DMs. Figure 2 shows the diagram of the Michelson interferometer. The source is a broadband light, which is filtered spatially by a pinhole and spectrally at λ = 650 nm (the wavelength of the THD bench). In the interferometer, one of the mirrors is the DM to analyze ("Sample" in Figure 2). The other is a plane reference mirror ("Reference flat" in Figure 2). At the end of the other arm of the interferometer, a CCD detector (1024x1280) is placed. A lens system can be inserted in front of the camera to choose between a large field (40 mm wide, covering the whole DM) and a smaller field (a little less than 2 mm wide, or 6x6 actuators). Both fields will be used in this study. The phase measurement is done using the method of Hariharan. [START_REF] Hariharan | Digital phase-shifting interferometry: a simple error-compensating phase calculation algorithm[END_REF] We introduce 5 phase differences in the reference arm:
$$\{-\pi,\ -\pi/2,\ 0,\ \pi/2,\ \pi\}, \tag{1}$$
and record the images with the CCD. The phase difference $\phi$ between the two arms can then be measured using:
$$\tan(\phi) = \frac{I_{-\pi/2} - I_{\pi/2}}{2I_0 - I_{-\pi} - I_{\pi}}. \tag{2}$$
Assuming a null phase on the reference mirror, the phase on the DM is just $\phi$. Since the phase is only known modulo $2\pi$, the overall phase is unwrapped using a path-following algorithm. This treatment can sometimes be difficult in areas with a very high phase gradient. Finally, we measure the surface deformation of the DM by multiplying by $\lambda/(2\pi)$ and dividing by 2 (to convert the optical path difference into mechanical movement of the DM).
The accuracy of the phase measurement is limited by the aberrations of the reference flat mirror and by the differential aberrations in the arms of the interferometer. However, the performance obtained on the measurement of the mechanical deformation of the DM is subnanometric. [START_REF] Liotard | Static and dynamic microdeformable mirror characterization by phase-shifting and time-averaged interferometry[END_REF] We can also retrieve the modulation amplitude on the surface using [4]:
$$M = \frac{3\sqrt{4(I_{-\pi/2} - I_{\pi/2})^2 + (2I_0 - I_{-\pi} - I_{\pi})^2}}{2(I_{-\pi} + I_{-\pi/2} + I_0 + I_{\pi/2} + I_{\pi})}. \tag{3}$$
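As an illustration, a minimal numerical sketch of this five-frame phase and modulation retrieval is given below. It follows Eqs. (1)-(3) exactly as written above; the array names and the mention of a generic path-following unwrapping step are our assumptions, not part of the original setup.

```python
import numpy as np

def hariharan_phase(I):
    """Wrapped phase and modulation from five frames I[0..4] recorded with
    reference-arm shifts {-pi, -pi/2, 0, pi/2, pi}, following Eqs. (2)-(3) of the text."""
    Im_pi, Im_half, I0, Ip_half, Ip_pi = I
    num = Im_half - Ip_half
    den = 2.0 * I0 - Im_pi - Ip_pi
    phi = np.arctan2(num, den)                               # wrapped phase
    M = 3.0 * np.sqrt(4.0 * num**2 + den**2) / (
        2.0 * (Im_pi + Im_half + I0 + Ip_half + Ip_pi))      # modulation amplitude
    return phi, M

# phi would then be unwrapped with a path-following algorithm and converted to
# mechanical surface deformation: surface = unwrapped_phi * wavelength / (2*np.pi) / 2
```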
We now present the results of this analysis.
GENERAL FORM OF THE DM
Figure 3 (left) shows, as a black solid curve, a cross section of the DM over the entire surface (in one of the main directions). We applied the same voltage of 70 % to all the actuators. The x-axis is measured in inter-actuator pitch and the mechanical deformation on the y-axis is in nanometers. The first observation is that a uniform voltage on all the actuators does not correspond to a flat surface of the DM. The general shape is a defocus over the entire surface of approximately 500 nm (peak-to-valley, PV). The position of the 27x27 actuator pupil on the THD bench before March 2013 is drawn in red vertical lines. The brown vertical lines indicate a pupil of the same size, centered on the DM. The "natural" defocus of the DM in a 27-actuator pupil is about 350 nm (PV).
Figure 3 (right) represents the same cross section when different uniform voltages are applied to the DM (from 0% to 90%). Piston was removed and we superimposed these curves, which shows that this defocus shape is present in the same proportions at all voltages. Due to slightly different gains between the actuators, there are small variations of the central actuators between the various applied voltages.
The theoretical stroke of an actuator is 1.5 µm and can normally compensate for this defocus by pulling the center actuators by 500 nm while leaving the ones on the edges at low voltages. However, this correction would come at the cost of a third of the theoretical maximum stroke on the center actuators. The solution chosen on our bench is to place the coronagraph mask outside the focal plane. Indeed, at a distance d from the focal plane, the defocus introduced is:
$$\mathrm{Defoc}_{PV} = \frac{d}{8(F/D)^2}, \tag{4}$$
in phase difference, in PV, where F/D is the focal ratio of our bench. With the specifications of our bench, we chose d = 7 cm, which corresponds to the introduction of a defocus (in phase error) of 700 nm (PV), which exactly compensates the 350 nm (PV) of defocus (in mechanical stroke) in our 27-actuator pupil. We can therefore choose the voltages around a uniform value on the bench. Before the analysis at LAM, we chose a voltage of 70 %, for reasons which are discussed in Section 5.2.
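The trade-off between the mask displacement d and the compensated defocus can be checked with Eq. (4); in the sketch below the focal ratio F/D is an assumed value, chosen only to reproduce the orders of magnitude quoted in the text, since the bench focal ratio is not stated explicitly here.

```python
# Sketch of Eq. (4): defocus (phase, peak-to-valley) introduced by moving the
# coronagraphic mask a distance d out of the focal plane. F/D is an assumption.
f_over_d = 112.0          # assumed focal ratio of the bench
d = 0.07                  # mask displacement in meters (7 cm)

defoc_pv = d / (8.0 * f_over_d**2)            # phase error, peak-to-valley, in meters
print(f"defocus: {defoc_pv*1e9:.0f} nm PV")   # ~700 nm PV, i.e. ~350 nm of mechanical stroke
```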
We also note on the black solid line on Figure 3 (left) the large variation at the edges of the DM (550 nm, PV in only one actuator pitch), when the same voltage of 70 % is applied to all actuators. This variation tends to decrease when lower voltages are applied to the actuators on the edges. However, this edge must not be included in the pupil. Note that the pupil prior to the analysis in Marseille (in red vertical lines) was very close to these edges.
On the edges, we can clearly see the "crenelations" created by the DM actuators. To measure these deformations, we removed the lower frequencies (including all the frequencies accessible to the DM) numerically with a smoothing filter. The result is plotted in Figure 3 (left) as a blue solid line. We clearly see this crenelation effect increase as we approach the edges. Once again, it is better to center the pupil on the DM to avoid the edge actuators. These effects are the main causes of the poor surface quality of MEMS DMs that we discuss in the next section.
SURFACE QUALITY
In Figure 4 we show images of the surface of the DM obtained in large field (about 10 mm by 10 mm) on the left and in small field (right), centered on the 4 by 4 central actuators (i.e. 1.2 mm by 1.2 mm). In both cases, we removed all the frequencies reachable by the DM (below 0.5 (inter-actuator pitch)^-1) with a smoothing filter in post-processing to observe its fine structures. For example, the defocus mentioned in the previous section has been removed. In the large field, we can observe the actuator 769 (in the lower left), which is fixed at the value 0% (see Section 6.3) and had gone unnoticed before this analysis, but there is no noticeable sign of the two known faulty actuators (see Figure 1). We also note the edges and corners, very bright, due to the fixed actuators.
Figure 5: On the left, cross sections on two actuators observed in small field. Each point of these cross sections is an average over a width of 0.1 inter-actuator pitch, either avoiding center and release-etch holes (curve "best case", in red) or on the contrary right in the center of an actuator (curve " worst case" in black). On the right, azimuthally averaged PSD measured on the whole DM (wide field) and on some actuators (narrow field). The frequencies on the horizontal axis are measured in µm -1 and the vertical axis is in nm 2 .µm 2 . The black dotted vertical lines indicate remarkable frequencies: the frequency of the actuators (1/300µm -1 ) and the maximum correctable frequency by the DM, of (2 inter-actuator pitch) -1 , or 1/(2 * 300)µm -1 . Finally, in red, we adjusted asymptotic curves.
Boston Micromachines announces a surface quality of 30 nm (RMS) over the whole DM when it is in a "flat" position. Because of the large defocus defect that we corrected using a defocus of the coronagraphic mask, we have not tried to obtain a flat surface on the DM to verify this number. However, an estimate of the remaining aberrations in a "flat" position can be made by removing in post-processing all the frequencies correctable by our DM. We measured the remaining aberrations without the edges and found 32 nm (RMS). This is slightly higher than the specifications of Boston Micromachines, but at least one of the actuators is broken. The same measurement on our actual offset 27-actuator pupil gives 8 nm (RMS), and 7 nm (RMS) for a centered pupil of the same size.
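These residual figures can be reproduced from a surface map by removing the DM-correctable frequencies and taking the RMS of the remainder. A possible sketch of that operation, assuming a Gaussian smoothing kernel as the low-pass filter and a map sampled at a known inter-actuator pitch in pixels, is shown below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual_rms(surface, pitch_pix):
    """RMS of a surface map (nm) after removing frequencies correctable by the DM.
    The Gaussian low-pass and the choice of sigma near the actuator pitch are assumptions."""
    low_pass = gaussian_filter(surface, sigma=pitch_pix)   # part the DM could reproduce
    high_pass = surface - low_pass                         # uncorrectable fine structure
    return np.sqrt(np.mean(high_pass**2))

# Usage: residual_rms(measured_map_nm, pitch_in_pixels), restricted to the pupil of interest.
```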
In the right image, we observe the details of the actuator. We observed three types of deformations:
• the center of the actuator, in black, with a size of about 25 µm
• the edges, which appear as two parallel lines separated by 45µm and of length one inter-actuator pitch (300µm)
• the release-etch holes of the membrane (4 in the central surface + 2 between the parallel lines of the edges).
In the principal directions of the DM, they appear every 150 µm and are only a few µm wide. A priori, these holes are a consequence of the lithographic manufacturing process.
We measured cross sections along two actuators in the small field, shown in Figure 5. The horizontal axis is in inter-actuator pitch and the vertical axis is in nanometers. Each of the points of these cross sections is an average over a band about 0.1 inter-actuator pitch wide. We placed these bands either right at the center of an actuator (curve "worst case", in black) or so as to avoid both the centers and the release-etch holes (curve "best case", in red). The two bumps at 0.15 and 0.35 and at 1.15 and 1.35 inter-actuator pitch, common to both curves, correspond to the parallel lines at the edges of the actuators. They produce mechanical aberrations of 12 nm (PV). The centers, in the black curve, are located at 0.65 and 1.65 inter-actuator pitch. They introduce mechanical aberrations of 25 nm (PV). It is not certain that the aberrations in the release-etch holes are properly retrieved, for several reasons. First, their size is smaller than 0.1 inter-actuator pitch, so they are averaged in the cross section. We are also not sure that the phase is correctly retrieved in the unwrapping process, as it encounters a strong phase gradient in these holes. They produce aberrations of 20 nm (PV). In total, on one actuator, aberrations of 30 nm (PV) and 6 nm (RMS) are obtained.
Figure 5 (right) shows the azimuthally averaged PSD of the DM. The black curve represents the azimuthally averaged PSD for the whole DM (large field). We clearly observe the peak at the characteristic frequency of the DM (1/300 µm^-1), indicated by a black dotted line. We can also see peaks at other characteristic frequencies (1/(300√2) µm^-1, 2/300 µm^-1, ...). We took an azimuthal average to average over these frequencies and observe a general trend. We repeated the same operation for the PSD calculated on a small field. As shown in red in Figure 5 (right), we plotted the trends of these azimuthal PSDs, which show an asymptotic behavior in f^(-4.4) for the large field and f^(-3.3) for the small field. Indeed, very small defects can come from differential aberrations in the interferometer, deformation of the flat reference mirror, or noise in the measurement. We therefore adopt the large-field value of f^(-4.4) for the asymptotic behavior.
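The azimuthally averaged PSD and its log-log slope can be computed along the following lines; this is a minimal sketch, with the binning and normalization conventions left as assumptions rather than those actually used for Figure 5.

```python
import numpy as np

def azimuthal_psd(surface, pixel_size, n_bins=100):
    """Azimuthally averaged power spectral density of a square surface map.
    surface in nm, pixel_size in microns; returns radial frequencies (1/um) and PSD."""
    n = surface.shape[0]
    psd2d = np.abs(np.fft.fftshift(np.fft.fft2(surface)))**2
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size))
    fr = np.hypot(*np.meshgrid(fx, fx))                    # radial frequency map
    bins = np.linspace(0.0, fr.max(), n_bins + 1)
    idx = np.digitize(fr.ravel(), bins)
    psd1d = np.array([psd2d.ravel()[idx == k].mean() for k in range(1, n_bins + 1)])
    freqs = 0.5 * (bins[1:] + bins[:-1])
    return freqs, psd1d

# The asymptotic exponent is the slope of np.log(psd1d) versus np.log(freqs) over the
# high-frequency range, e.g. via np.polyfit (about -4.4 in large field here).
```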
We now precisely study the behavior of a single actuator (Section 5), i.e. the influence function, the coupling with its neighbors, and the gain when different voltages are applied. For this analysis, we observe the behavior of a central actuator (number 528). We will measure its influence function and the inter-actuator coupling, then study its gain and its maximum and minimum strokes. These measurements were conducted by applying to the actuator 528 several voltages ranging from 10 to 90 % while the rest of the actuators are set at the value 70 %.
BEHAVIOR OF A SINGLE ACTUATOR
Influence function and coupling
We study the influence function IF of an actuator, i.e. the movement of the surface when a voltage is applied. This influence function can be simulated using [5]:
$$IF(\rho) = \exp\left[\ln(\omega)\left(\frac{\rho}{d_0}\right)^{\alpha}\right], \tag{5}$$
where $\omega$ is the inter-actuator coupling and $d_0$ is the inter-actuator pitch. Figure 6 shows the influence function of a central actuator (528). First, we applied a voltage of 40 % to the actuator (the others remaining at a voltage of 70 %), then a voltage of 70 %, and took the difference, shown in the left picture. This is therefore the influence function for a voltage of -30%. We can observe that the influence function has no rotational symmetry. The main shape is a square, surrounded by a small negative halo.
We made cross sections in several directions: one of the main directions of the DM and one of the diagonals. The results are presented on a logarithmic scale in Figure 6 (center). The distance to the center of the actuator is in inter-actuator pitch. We applied an offset to plot negative values on a logarithmic scale and we indicate the zero level by a dotted blue line. In the main direction, a break in the slope is observed at a distance of 1 inter-actuator pitch. The influence of the actuator in this direction is limited to 2 inter-actuator pitches in each direction. In the diagonal direction, the secondary halo is about 3 nm deep, which is 0.5 % of the maximum. Due to this halo, the influence is somewhat greater (however, less than 3 inter-actuator pitches).
In Figure 6 (right), a cross section of the influence function in a principal direction of the DM is plotted. The inter-actuator coupling (height of the function at a distance of 1 inter-actuator pitch) is 12 %. We fitted a curve using the function described in Equation 5 with this coupling and found α = 1.9. This shows that the central part of the influence function is almost Gaussian (α = 2), but this model does not take into account the "wings".
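The fit of Equation 5 to a measured cross section can be reproduced as sketched below; the measured arrays are placeholders, and only the coupling ω = 0.12 and the pitch d0 = 300 µm are taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

D0 = 300.0          # inter-actuator pitch in microns
OMEGA = 0.12        # measured inter-actuator coupling

def influence(rho, alpha):
    """Influence function model of Eq. (5), with the coupling fixed to the measured value."""
    return np.exp(np.log(OMEGA) * (np.abs(rho) / D0)**alpha)

# rho_meas (microns) and if_meas (normalized to 1 at the actuator center) stand in
# for the measured cross section along a principal direction of the DM.
rho_meas = np.linspace(-600.0, 600.0, 41)
if_meas = influence(rho_meas, 1.9) + 0.01 * np.random.randn(rho_meas.size)

alpha_fit, _ = curve_fit(influence, rho_meas, if_meas, p0=[2.0])
print(f"alpha = {alpha_fit[0]:.2f}")   # ~1.9: near-Gaussian core, wings not captured
```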
Gain study
In this section, we measure the maximum of the influence function for different voltages applied to the 528 actuator, the others remaining at 70 %. Figure 7 (left) shows the superposition of cross sections in a principal direction of the DM for voltage values of 20 %, 30 %, 40 %, 50 %, 60 %, 70 %, 80 %, 90 %. We fitted Gaussian curves to these functions and observed that the maximum values of the peak are always located at the same place, and that the width of the Gaussian is constant over the range of applied voltages. This shows that the influence function is identical for all the applied voltages. We plot the maxima of these curves as a function of voltage as red diamonds in Figure 7 (right). The scale of these maxima can be read on the left axis, in nanometers. We then fitted a quadratic gain curve (black solid curve) on this figure. This allows us to extrapolate the values for 0 % and 100 %. From this figure, it can be deduced that:
• the maximum stroke is 1100 nm (1.1 µm), slightly less than the value indicated by Boston Micromachines.
• the value of 70 % is the one that allows the maximum stroke in both directions (545 nm when we push and 560 nm when we pull). If the actuator is at a value of 25 %, we can only use a maximum stroke of 140 nm in one direction. For this reason, we used to operate the DM at values around 70 % before March 2013.
• the gain has a quadratic variation and therefore the minimum voltage step, in volts or in percent (8.93 10^-3 %), corresponds to different minimum strokes in nanometers depending on the location on this curve. We plotted the value of the minimum stroke in blue on the same plot (the scale of this curve, in nanometers, can be read on the right axis). We observe that a variation of 8.93 10^-3 % around 70 % produces a minimum stroke of 0.14 nm, which is twice the movement produced by the same variation around 25 % (0.07 nm).
Applying voltages around 70 % makes sense if we try to make the most of the stroke of the DM, but if we try to correct for small phase aberrations (which is our use of this DM), we should apply the lowest voltages possible.
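The conversion between the local gain of the stroke-voltage curve and the minimum achievable stroke can be illustrated with a short sketch; the local gains below are simply derived from the minimum strokes quoted in the text, and the quadratic curve reconstructed from them is an assumption used only to check orders of magnitude.

```python
import numpy as np

dv_min = 8.93e-3   # smallest commandable voltage step, in percent

# Local gains (nm per percent) implied by the minimum strokes quoted above.
gains = {25.0: 0.07 / dv_min, 70.0: 0.14 / dv_min}
for v, g in gains.items():
    print(f"around {v:.0f} %: gain ~ {g:.1f} nm/%, min stroke ~ {g * dv_min:.2f} nm")

# A quadratic stroke curve s(v) = a*v**2 + b*v whose slope s'(v) = 2*a*v + b matches
# these two local gains extrapolates to roughly 1.2 um of total stroke at 100 %,
# of the same order as the ~1100 nm read off Figure 7.
a, b = np.linalg.solve([[2 * 25.0, 1.0], [2 * 70.0, 1.0]],
                       [0.07 / dv_min, 0.14 / dv_min])
print(f"extrapolated total stroke: {a * 100.0**2 + b * 100.0:.0f} nm")
```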
We observed the positions of all the actuators and verified that they are evenly distributed over the surface. The gains of all actuators are very close over the whole surface (a variation of 20 % between the minimum and the maximum gain).
We finish this study by an inventory of the different failures that we encountered and the solutions that we have fortunately been able to put in place to overcome these failures.
DAMAGED ACTUATORS
Before the analysis in March 2013, the actuators 841 and 197 were not responding correctly to our commands. A specific study on these actuators allowed us to overcome these dysfunctions and include them again in the pupil.
The slow actuator
We found that the actuator 841 responded to our voltage commands but with a very long response time. The interferometric bench in Marseille is not suited for temporal studies (the successive introductions of path differences limit the measurement frequency). Therefore, we used the phase measurement method developed on the THD bench: the self-coherent camera, see Mazoyer et al. (2013). [START_REF] Mazoyer | Estimation and correction of wavefront aberrations using the self-coherent camera: laboratory results[END_REF] We examined the temporal response of the 841 actuator after a command of +5% and compared it with the temporal response of a normal actuator (777). From a starting level of 70% for all of the actuators of the DM, we first sent a command to go to 75% to each of these two actuators, waited for this command to be executed, and then sent a command to return to the initial voltage. Figure 8 shows the results of this operation for a normal actuator (777, left) and for the slow actuator (841, right). The measurement period is on average 105 ms. The horizontal axis is the time (in seconds), with origin at the date at which the +5% command is sent. Our phase measurement method does not give an absolute measurement of the phase, so we normalized the result (0% is the mean level before the command, 100% is the mean level after the +5% command).
For the normal actuator, the response time is shorter than the measurement period (105 ms on average). This result is consistent with the actuator response time announced by Boston Micromachines (< 20 µs), although we cannot verify this value with this method. For the slow actuator, there is a much slower response in the rise as well as in the descent. We measured the response time to 95% of the maximum in the rise (7.5 s) and in the descent (8.1 s). However, as the static gain of this actuator is comparable with the gain of healthy actuators, we deduce that this actuator goes slowly but surely to the right position.
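The 95 % response times quoted here can be extracted from the normalized step responses with a few lines of code; the sampling period mirrors the text, while the data array below is a placeholder standing in for the measured curve.

```python
import numpy as np

def rise_time_95(t, response):
    """Time at which a normalized step response (0 -> 1) first reaches 95 % of its
    final level; t in seconds, response normalized as in Figure 8."""
    above = np.nonzero(response >= 0.95)[0]
    return t[above[0]] if above.size else np.nan

# Placeholder data: a first-order response with ~2.5 s time constant, sampled every
# 105 ms, standing in for the measured curve of the slow actuator 841.
t = np.arange(0.0, 20.0, 0.105)
resp = 1.0 - np.exp(-t / 2.5)
print(f"95% rise time: {rise_time_95(t, resp):.1f} s")   # about 7.5 s for this placeholder
```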
The coupled actuators
We realized that the actuator number 197 responded to the commands applied to the actuator 863, at the other end of the DM. It seems that the actuator 197 has a certain autonomy, but in case of large voltage differences applied to these two actuators, the 197 follows the commands applied to the 863 actuator. We carefully verified that if we apply the same voltage to them, these two actuators respond correctly to the command and have gains comparable to the other actuators. The actuator 863 is fortunately on the edge of the DM, so we can center the pupil with no influence from this actuator. Since then, we have recentered the pupil to include the 197 actuator again (see Figure 9). We systematically apply the same voltages to both actuators simultaneously.
The dead actuator
Finally, we noticed that the 769 actuator does not respond at all to our commands. This actuator is on the far edge of the DM. It is possible that it broke during the transportation to the LAM laboratory, but as it was far outside the pupil, we may have previously missed this failure. This actuator is fixed at the value 0% regardless of the applied voltage. However, we checked that it has no influence beyond 2 inter-actuator pitches.
CONCLUSION AND CONSEQUENCES ON THE BENCH
The identification of the faulty actuators and the solutions to overcome these malfunctions have enabled us to recenter the pupil on our DM. Figure 9 shows the position of the pupil on the DM after the study at LAM. This centering has enabled us to move away from the edges of the DM. We also saw that this centering is preferable to limit the introduction into the pupil of aberrations at high spatial frequencies not reachable by the DM. Finally, we recently lowered the average voltage on the DM from 70% to 25% and improved the minimum stroke reachable by each actuator by a factor of 2. These upgrades played an important role in the improvement of our performance on the THD bench.
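The gain obtained by lowering the operating voltage can be understood from the quadratic stroke-voltage curve of Figure 7: the smallest applicable voltage increment produces a smaller stroke where the local slope of that curve is smaller. The sketch below is a back-of-the-envelope illustration under an idealized, purely quadratic law; the full-stroke value is an assumed number, and the real curve is only approximately quadratic (which is why the measured improvement is a factor of about 2 rather than the 70/25 ≈ 2.8 predicted here).

```python
def minimum_stroke(v_percent, full_stroke_nm=1500.0, dv_percent=8.93e-3):
    """Smallest stroke increment around an operating voltage, assuming a purely
    quadratic stroke-voltage law s(V) = full_stroke * (V/100)**2.
    full_stroke_nm and the exact law are assumptions, not measured values."""
    dsdv = 2.0 * full_stroke_nm * (v_percent / 100.0) / 100.0   # ds/dV in nm per %
    return dsdv * dv_percent

for v in (70.0, 25.0):
    print(f"operating point {v:4.1f}% -> minimum stroke ~ {minimum_stroke(v):.3f} nm")
```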
Figure 1: Numbered actuators and position of the pupil on the DM before March 2013 (in green). The numbering starts at 0 in the bottom right corner and ends at 1023 in the top left corner. Actuators 841 and 197, in red, considered defective, were not used. Therefore, the pupil used is reduced (only 27 actuators along its diameter) and offset on the DM.
Figure 2: The interferometric bench at LAM. Figure from Liotard et al. (2005).
Figure 3: DM cross sections. Left: cross sections of the whole DM surface, in black, when all actuators are at 70% in voltage. Each point on this curve corresponds to an average over a band one actuator wide. The blue line shows the result after removing the frequencies accessible to the DM (in post-processing, with a smoothing filter). Finally, the vertical lines indicate the limits of the 27x27 actuator pupil before March 2013 (red dotted line) and of a centered pupil of the same size (brown dotted line). Right: same cross sections for different voltages applied to all the actuators (from 0% to 90%). The piston was removed and the curves were superimposed. The abscissas are measured in inter-actuator pitch and the y-axis is in nanometers.
Figure 4: Surface of the DM over a large field on the left (the whole DM is about 10 mm by 10 mm) and over a small field on the right, centered on the 4 by 4 central actuators (i.e. 1.2 mm by 1.2 mm). In both cases, all the actuators are set to 70% in voltage. To observe the fine structures of the DM, we removed the low frequencies digitally in post-processing in both cases. In the left image, we can see actuator 769 (bottom left), which is stuck at 0% (see Section 6.3).
Figure 6: Influence function. Left: measurement of the influence function of a central actuator. Center: cross section of the influence function in logarithmic scale along a principal direction of the mirror and along a diagonal direction. Right: cross section of the influence function along a principal direction, on which the cross section of a simulated influence function is superimposed. The abscissas are in inter-actuator pitch and the vertical axes are in nanometers.
Figure 7: Study of one actuator: stroke and gain. Left: influence functions for different applied voltages. Right: maximum values of these influence functions in red and the quadratic gain (black solid curve). The minimum applicable percentage (8.93 10-3 %) produces a different minimum stroke depending on the position on this quadratic curve: we plot the minimum stroke around each voltage in blue (the scale of this curve, in nanometers, can be read on the right axis).
Figure 8: Study of the slow actuator. Temporal response to a +5% voltage command for a normal actuator (left) and for actuator 841 (right). Starting with a voltage of 70%, we send a +5% command at 0 s, wait for this command to be applied and then send a -5% command, at 5.39 s for the normal actuator and at 20.14 s for actuator 841. The vertical axis is in % of the stroke, the abscissa in seconds since the +5% command. The dashed blue line indicates the sending of the -5% command.
Figure 9: This study allowed us to identify precisely the causes of actuator failures and recenter the pupil on the DM, including actuators 197 and 841.
Actuator numbering map: damaged actuators and pupil position before March 2013.
Actuator numbering map: fixed (dead) actuator, coupled actuators, slow actuator, and new pupil position.
To see the latest results on this high-contrast bench, see Galicher et al. (2014) and Delorme et al. (2014).
ACKNOWLEDGMENTS
J. Mazoyer is grateful to the CNES and Astrium (Toulouse, France) for supporting his PhD fellowship. The DM study at LAM was funded by CNES (Toulouse, France). | 34,534 | [
"740223",
"828771"
] | [
"233513",
"541774",
"541774",
"541774",
"179944",
"179944",
"541774"
] |
01763133 | en | [
"spi"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01763133/file/article_final.pdf | A Shanwan
S Allaoui
email: [email protected]
Different experimental ways to minimize the preforming defects of multilayered interlock dry fabric
Keywords: Fabrics/textiles, Lamina/ply, Preform, Defects
Different experimental ways to minimize the preforming defects of multi-layered interlock dry fabric
Anwar Shanwan, Samir Allaoui
INTRODUCTION
Long fiber-reinforced composites are widely used in various industries, especially in transportation, because they make it possible to obtain a lightweight final product. Liquid Composite Molding (LCM) processes are among the most interesting manufacturing routes to produce composite parts with complex geometry, because they offer a very interesting compromise, in particular in terms of repeatability. The first stage of this process (preforming) is delicate because it involves several deformation mechanisms, which are very different from those of sheet metal stamping [START_REF] Allaoui | Experimental tool of woven reinforcement forming International Journal of Material Forming[END_REF].
The quality of preforms with doubly curved geometries depends on several parameters, such as the punch geometry, the relative orientation of the punch and the fabric layers, and the blank-holder pressure. These parameters play a major role in the quality of the final shape in terms of defect appearance [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF].
Predicting the preform quality for a given shape with a given fabric, and subsequently the defects that may appear, can be addressed using finite element simulations [START_REF] Boisse | Modelling the development of defects during composite reinforcements and prepreg forming[END_REF][START_REF] Ten Thije | Large deformation simulation of anisotropic material using an updated lagrangian finite element method[END_REF][START_REF] Nosrat Nezami | Analyses of interaction mechanisms during forming of multilayer carbon woven fabrics for composite applications[END_REF][START_REF] Nosrat Nezami | Active forming manipulation of composite reinforcements for the suppression of forming defects[END_REF][START_REF] Hamila | A meso macro three node finite element for draping of textile composite performs[END_REF][START_REF] Allaoui | Experimental and numerical analyses of textile reinforcement forming of a tetrahedral shape[END_REF] or experimental studies [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Allaoui | Experimental and numerical analyses of textile reinforcement forming of a tetrahedral shape[END_REF][START_REF] Soulat | Experimental device for the performing step of the RTM process[END_REF][START_REF] Vanclooster | Experimental validation of forming simulations of fabric reinforced polymers using an unsymmetrical mould configuration[END_REF][START_REF] Chen | Defect formation during preforming of a bi-axial non-crimp fabric with a pillar stitch pattern[END_REF][START_REF] Lightfoot | Defects in woven preforms: Formation mechanisms and the effects of laminate design and layup protocol[END_REF][START_REF] Shan Liu | Investigation of mechanical properties of tufted composites: Influence of tuft length through the thickness reinforcement[END_REF]. In addition, during the manufacturing of composite parts, several layers of fabric are stacked together. As these layers (plies) are not bonded to each other, they behave differently and can slide relative to one another. In this way, inter-ply friction is generated between them. Several studies have shown that the preform quality depends strongly on the inter-ply friction that takes place between the superposed layers during forming [START_REF] Bel | Finite element model for NCF composite reinforcement preforming: Importance of inter-ply sliding[END_REF][START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF][START_REF] Ten Thije | A multi-layer triangular membrane finite element for the forming simulation of laminated composites[END_REF][START_REF] Vanclooster | Simulation of multi-layered composites forming[END_REF][START_REF] Chen | Intra/inter-ply shear behaviors of continuous fiber reinforced thermoplastic composites in thermoforming processes[END_REF][START_REF] Hamila | Simulations of textile composite reinforcement draping using a new semi-discrete three node finite element[END_REF]. Moreover, the friction effect is more severe in the case of dry woven fabrics, due to shocks between the overhanging yarns of the superposed layers [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behavior[END_REF].
A recent study highlighted the influence and the criticality of inter-ply friction depending on the layer stacking sequence, especially when the inter-ply sliding is greater than the unit cell length of the fabric [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF].
The aim of this study is to improve the quality of preforms of dry woven fabrics by reducing or eliminating the defects via two approaches: defining the best process parameters, and reducing inter-ply friction by improving the interface between the layers.
MATERIAL AND METHODS
Tests presented in this paper are performed on a commercial composite woven reinforcement, which is a powdered interlock fabric, denoted Hexcel G1151®, with a surface weight of 630 g /m². This fabric is composed of around 7.5 yarns / cm in warp and weft directions.
The unit cell of G1151® consists of 6 warp yarns and 15 weft yarns distributed on three levels. In situ, the average yarn width is about 2mm for warp and 3mm for weft. A specific forming device, developed at LaMé laboratory was used to perform the shaping tests [START_REF] Allaoui | Experimental tool of woven reinforcement forming International Journal of Material Forming[END_REF][START_REF] Soulat | Experimental device for the performing step of the RTM process[END_REF]. This device is equipped with two CCD cameras to track the yarns position and measure the plane shear of the reinforcement.
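From the yarn positions tracked by the cameras, the in-plane shear angle is usually obtained as the deviation of the warp/weft angle from its initial 90°. The short sketch below illustrates that post-processing step; the function name and the example yarn directions are hypothetical and not taken from the actual image-processing chain of the device.

```python
import numpy as np

def shear_angle_deg(warp_vec, weft_vec):
    """In-plane shear angle of a woven unit cell: 90 degrees minus the current
    angle between the tracked warp and weft yarn directions."""
    warp = np.asarray(warp_vec, float)
    weft = np.asarray(weft_vec, float)
    cos_a = np.dot(warp, weft) / (np.linalg.norm(warp) * np.linalg.norm(weft))
    return 90.0 - np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Two yarn directions reconstructed from (hypothetical) image coordinates:
print(shear_angle_deg([1.0, 0.0], [0.5, 0.866]))   # ~30 degrees of shear
```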
During the preforming process, there is a complex relationship between three parameters: the fabric mechanical properties, the forming process parameters and the punch (part) shape. This paper aims to improve the preforming quality of a dry interlock fabric and to avoid, as far as possible, the appearance of defects during the preforming of a given shape. To study a wide range of defects with maximum amplitudes, our tests were carried out by means of a highly non-developable, doubly curved shape (prismatic punch) having a triple point and small curvature radii (10 mm). The punch dimensions are shown in Figure 1 [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF].
The different preforming configurations, presented in this study, are illustrated in Figure 2 [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF], where eight blank-holders are used around the preform to apply a pressure of 0.15 bar on the fabric (Figure 2, a). The tests were done with a punch speed of 30 mm/min.
For both monolayer and two-layer preforming tests, the same experimental conditions are used.
In the case of monolayer preforming, several ply/punch orientations are used (α = 0°, 30°, 45°, 60° and 90°). The 0° orientation, which is considered as the reference configuration (Figure 2.a), means that the weft and warp directions of the stacked layers are parallel to the lateral edges of the punch faces. In the case of two-layer preforming, the tests are conducted by stacking one of the layers at 0° and the other at α° (Figure 2.b), with several configurations such as 0°/0°, 0°/90°, 0°/45°, 45°/0°, etc. Herein, α°/0° means that the upper layer is oriented at α° and the lower one at 0°.
RESULTS AND DISCUSSIONS
For the monolayer preforming configuration, the first tests were carried out with a monolayer oriented at 0° (reference configuration) under the same optimal conditions found in a previous study [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF] for the same type of fabric. The preforming tests showed a good preform quality at the macroscopic level (Figure 3.a), since the useful area of the preform does not exhibit wrinkle defects.
Nevertheless, at the mesoscopic level, "buckle" defects occur on the faces and the edges of the prismatic preform, where yarns are subjected to in-plane bending. Subsequently, these yarns undergo out-of-plane buckling, so that the weaving pattern is locally no longer respected. In terms of shear angles, the maximal values are reached at the bottom corners of the preforms (50° and 55°). These values are close to the locking angle of the interlock fabric. On the other hand, no wrinkle defects occurred in the useful area of the preform, owing to the coupling between shear and tension, which can delay the onset of wrinkles when the tension applied to the fabric increases [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Launay | Experimental analysis of the influence of tensions on in plane shear behaviour of woven composite reinforcements[END_REF][START_REF] Harrison | Characterising the shear-tension coupling and wrinkling behaviour of woven engineering fabrics[END_REF].
A comparison between the 0° configuration and the 90°, 0°/0°, 0°/90°, 90°/0° and 90°/90° configurations shows the same results with the same defects (Figure 3.b). In fact, in all of these cases, the relative orientation between the yarn networks and the punch remains unchanged, which confirms the effect of the relative punch/fabric position. The only difference between the 0° and 90° preforms is the inversion of the positions of the weft and warp networks.
Oriented monolayer preforming tests
The preforms obtained with oriented monolayers (α ≠ 0° and α ≠ 90°) show more extensive defects than the above-mentioned cases (0° and 90°), although the shear angle values remain in the same range as those obtained for the reference configurations.
Despite the small shear angles, wrinkles occur in the useful area, as illustrated in zone 1 of Figure 4 (case of a monolayer oriented at 30°). As shown in this figure, wrinkles appear on two opposite corners of the preform, where the observed shear angles are low (22°). Moreover, there are no wrinkle defects on the frontal face (area 3), where high shear angles are observed (49°).
In addition, "buckle" defects are also observed in this preform (areas 2) and are located at different places compared with those obtained for the 0° monolayer orientation. These "buckle" defects are mainly due to bending stresses applied to the yarns during preforming. These observations correspond to those obtained in previous studies [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF] and confirm the significant effect of the relative punch/ply orientation on the preform quality for complex geometries. Consequently, in the case of oriented configurations (0° < α < 90°), the preform quality is not acceptable. In fact, a poor preform quality leads to aesthetic problems and non-compliance with the dimensional specifications. In addition, these defects may affect the mechanical performance of the final part [START_REF] Hörrmann | The effect of fiber waviness on the fatigue life of CFRP materials[END_REF][START_REF] Cruanes | Effect of mesoscopic out-of-plane defect on the fatigue behavior of a GFRP[END_REF]. Therefore, the preform quality needs to be improved.
Improvements can be achieved through different strategies, such as substituting the fabric with another one having better formability, changing the manufacturing process, applying the best manufacturing process parameters, and/or modifying the ply orientations and the part geometry. From an industrial point of view, some of these strategies could be costly and/or time-consuming (change of process, change of reinforcement).
However, some parameters are often set by the technical specifications of the part (such as the ply orientations, the geometry, the type of reinforcement, etc.). Furthermore, modifying such parameters can affect the entire system in which the part is to be used. Thus, the most interesting strategy is to modify the parameters that do not affect the specifications of the part. It is possible, for example, to improve the preform quality by optimizing the process parameters.
Hence, a new strategy based on modifying some process parameters, such as the blank-holder pressure and geometry, was adopted, while the other parameters (the orientation of the layers, the punch geometry and the type of fabric) remained fixed. Preforming tests were performed on oriented layers, with changes applied to two parameters: the blank-holder pressure and the blank-holder geometry. Each change was applied separately in order to analyze the results and determine which parameter has the greatest influence on the preform quality.
The tests showed that increasing the tensile force applied to the yarn networks, obtained by increasing the blank-holder pressure, delays the onset of wrinkles or prevents them [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Launay | Experimental analysis of the influence of tensions on in plane shear behaviour of woven composite reinforcements[END_REF][START_REF] Harrison | Characterising the shear-tension coupling and wrinkling behaviour of woven engineering fabrics[END_REF]. Therefore, the pressure was first increased up to 0.2 bar, only on the two square blank holders located at the opposite corners B and D, where wrinkle defects appear (Figure 4). The maximal pressure value was set by the capacity of the compressed-air system.
The results obtained show that the wrinkles remain on the preform but their amplitude is slightly decreased (Figure 5). In addition, the shear angles over the preform areas did not change compared with the case where the pressure applied to the fabric is 0.15 bar (Figure 4).
The tests showed that the increase in the blank-holder pressure did not completely prevent wrinkles. Hence, an adapted blank-holder geometry was proposed to improve the preform quality [START_REF] Capelle | Complex shape forming of flax woven fabrics: Design of specific blank-holder shapes to prevent defects[END_REF]. A single blank holder surrounding the preform, which eliminates the gaps present in the initial configuration, was used to replace the eight individual ones. New tests were then performed using this geometry with a pressure of 0.15 bar. As shown in Figure 6, the preform obtained has a better quality, with a larger decrease in wrinkle amplitude than that obtained by increasing the pressure. This means that the effect of the blank-holder geometry on the preform quality is more significant than that of the pressure, because the blank holder controls the force application and its distribution over the yarns. However, the change of blank-holder geometry did not completely eliminate the wrinkles either.
For this reason, a third solution, combining the two previous strategies (pressure of 0.2 bar + single blank holder), was used in order to improve the quality of the preforms. In this case, a good quality was obtained without any wrinkle defects, as shown in Figure 7. Consequently, combining several optimized parameters can prevent wrinkle defects.
On the other hand, buckle defects still appear in the useful area in spite of the solutions proposed above. The extent of the region affected by these defects and their amplitude are almost identical. Consequently, "buckle" defects cannot be avoided completely with the strategy used in this study. In fact, this defect is generated by the in-plane bending of yarns, which leads to their out-of-plane buckling, promoted by the fact that the fibers are continuous and not bonded together. To avoid this defect completely, it might be possible to change the nature of the yarns and/or their geometry. This solution is sometimes possible with natural yarns [START_REF] Capelle | Complex shape forming of flax woven fabrics: Design of specific blank-holder shapes to prevent defects[END_REF], which are close to a homogeneous material because the fibers are bonded to each other, but it is difficult or impossible to achieve in the case of carbon and glass yarns.
To conclude this part, the relative punch/layer orientation has a significant influence on the preform quality, whereas optimizing the process parameters (blank-holder geometry and/or applied pressure) can lead to further improvements in preform quality. In addition, the blank-holder geometry has a more significant effect than the tension applied to the yarn networks. Finally, the combination of these two solutions led to better results. However, the two improvements do not have an important influence on the mesoscopic defects (buckles) because they do not act on the mechanisms involved in the appearance of these defects.
Multi-layer preforming tests
The same approach as used in the monolayer tests was applied to the two-layer preforming tests. In this section, we present the results for 45°/0° preforms (45°/0° means that the oriented layer is the external one in the stacking order). The 45°/0° stacking sequence was chosen for two reasons: firstly, because it is more prone to defects than the 0°/45° configuration [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF], and secondly, because it enables observation and measurement on the outer layer (45°), which can then be compared with a monolayer preformed at 45°. The preforming results for the 45°/0° stacking sequence, obtained with the same initial process parameters, show more numerous defects than the 45° monolayer preform, and thus a poor quality is obtained, as shown in Figure 8. The type and location of these defects remain the same as those of the 45° monolayer configuration, but their amplitude and number are significantly higher, whereas the shear angle values remain relatively unchanged (Figure 8). In fact, this poor quality is attributed to inter-ply friction, as highlighted and demonstrated in a previous study [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF].
The inter-ply friction leads to the appearance of additional wrinkles in the center of the frontal face of the two-layer preforms, where the shear angles are highest. Moreover, when compared with the 30° and 30°/0° preforms, the 45° and 45°/0° configurations show more numerous defects as well as additional types of defects, such as weave pattern heterogeneity (Figure 4 and Figure 8). This increase in the type and amplitude of defects is induced by the effect of the relative punch/layer orientation.
To improve the quality of the 45°/0° preforms, the same strategy as used for the oriented monolayers was applied. Draping tests were thus conducted first with an increase in the blank-holder pressure, then with the new blank-holder geometry, and finally by combining the two solutions. The results obtained show an improvement in the quality of the two-ply preforms, with the same trend as observed for the monolayer preforms, i.e. the use of the new blank-holder geometry gives better results than the increase in pressure (Figure 9). When the two solutions were used together, their effects combined and a greater improvement was obtained (Figure 10).
However, defects always remain on the preform in spite of these improvements. Consequently, the combination of these two solutions did not completely prevent wrinkles.
In fact, in the 45°/0° configuration, there is an interface between the two stacked layers, which plays a major role in the preform quality. It has been shown in a previous study that the number and amplitude of defects increase because of the inter-ply friction caused by the relative sliding between layers [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF]. The fabric/fabric friction behavior is governed by a shock phenomenon occurring between the transverse overhanging yarns of each ply, which leads to signal variations with high amplitudes due to the high tangential forces generated by the shocks (Figure 11). These tangential forces hamper the sliding of the plies locally and lead to an increase in the appearance and amplitude of defects. The effect of inter-ply friction is significant when the inter-ply sliding is larger than the fabric's unit cell length. In the case of 45°/0°, the measured sliding distance can reach more than 70 mm while the unit cell length is about 8 mm.
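An order of magnitude of the phenomenon can be obtained directly from these two lengths: each point of the interface sweeps roughly (sliding distance / unit-cell length) unit cells, and the friction signal fluctuates at that spatial period. The toy model below only illustrates this reasoning; the fluctuation amplitude is an assumed value, and only the mean friction level (~0.61), the unit-cell length (8 mm) and the sliding distance (70 mm) come from the text.

```python
import numpy as np

def expected_shock_count(sliding_mm, unit_cell_mm):
    """Rough number of unit cells swept by a point of the interface,
    i.e. how many overhanging-yarn shocks it undergoes during sliding."""
    return sliding_mm / unit_cell_mm

def toy_friction_signal(displacement_mm, mu_mean=0.61, amplitude=0.25, unit_cell_mm=8.0):
    """Illustrative periodic friction signal: mean Coulomb level plus a
    fluctuation at the unit-cell spatial period caused by yarn shocks."""
    return mu_mean + amplitude * np.sin(2.0 * np.pi * displacement_mm / unit_cell_mm)

x = np.linspace(0.0, 70.0, 701)          # sliding distance measured for 45°/0° preforms
print(f"~{expected_shock_count(70.0, 8.0):.0f} unit cells swept")
mu = toy_friction_signal(x)              # signal oscillating around mu ~ 0.61
```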
To avoid shocks between the overhanging yarns, it is necessary to reduce the inter-ply sliding or to decrease the inter-ply friction. Reducing the sliding distance remains difficult to achieve because it depends on both the relative ply/ply and punch/ply positions. However, it is possible to reduce the global ply/ply friction by making the dynamic friction smoother. To this end, the shock phenomenon occurring between yarns must be avoided or reduced. This can be achieved by several solutions that require modifying the crimp, the fabric meso-architecture, the yarn shape and/or material, the surface treatment, etc. These modifications, however, imply a change of reinforcement or of its characteristics, which is sometimes not possible given the technical specifications.
To overcome this problem, we proposed inserting an intermediate mat reinforcement layer between the two preformed plies. This solution does not require changing the fabric or its characteristics (crimp, meso-architecture, ...). As the mat is not a woven fabric, no shocks take place between the different plies of fabric. It is evident, however, that the mat insertion modifies the stack, and therefore the mechanical performance and the subsequent stage of LCM processes (resin injection/infusion), which has to be taken into account.
To verify this assumption, fabric/mat friction tests were conducted in order to compare their results with those of fabric/fabric friction. A commercial glass mat, with an areal weight of 300 g/m², was used for this study. These tests were performed by means of the experimental test bench developed at the LaMé laboratory of Orleans University [START_REF] Hivet | Design and potentiality of an apparatus for measuring yarn/yarn and fabric/fabric friction[END_REF]. The working principle of the bench relies on the sliding of two plane surfaces (Figure 12). A normal force FN is applied to the upper sample, which is fixed and connected to a tensile force sensor. The lower sample can be moved horizontally, generating a tangential force measured by the sensor. Fabric/mat friction tests were carried out in the warp and weft directions according to the experimental conditions given in Table 1.
The fabric/mat friction behaviors obtained are presented in Figure 13, where smoother dynamic friction behaviors are observed in comparison with the interlock/interlock behavior (Figure 11 and Figure 13). This means that there are no yarn shocks during the relative sliding of the plies. In addition, the average values of the dynamic friction coefficient for fabric/mat are 0.25 in the warp direction and 0.35 in the weft direction. These values are at least halved compared with the fabric/fabric case, where the friction coefficient is around 0.61.
These results confirm our hypothesis and could therefore lead to an improvement of the preform quality of two stacked layers. To verify this, preforming tests were carried out after inserting a glass mat between the two layers of the 45°/0° configuration, with a pressure of 0.15 bar applied by the single blank holder surrounding the preform (Figure 14). Figure 15 shows the positive effect of this strategy: the wrinkle defects have decreased significantly in comparison with the case illustrated in Figure 10. The remaining defects have a low amplitude, which can be considered negligible compared with the initial configuration.
Consequently, the defect amplitude is greatly reduced thanks to the use of the mat. The improvements were especially observed at corners B and D (Figure 15). In this case, the global improvement has two causes. First, the mat prevents any direct contact between the overhanging yarns of the preformed G1151® layers, i.e., there are no shocks between the yarns of the two layers, even when the sliding between layers exceeds the fabric's unit cell length. The use of an intermediate layer (glass mat) therefore plays an important role in the stabilization of the friction coefficient. In addition, the stress is also reduced during the sliding between the preformed layers.
Second, the mat allows smooth friction during the inter-ply sliding, and so the friction coefficient is reduced. Hence, the wrinkle and buckle amplitudes are greatly reduced.
These results highlight the importance of the intermediate mat layer in reducing the friction coefficient and consequently the amplitude of the defects. In addition, the use of an intermediate mat layer stabilizes the variation of the friction coefficient during sliding, i.e. the variation amplitude of the friction coefficient in the case of mat/fabric friction was reduced to a quarter of that of fabric/fabric friction.
Finally, it can be concluded that combining the three previous solutions (new blank-holder geometry, pressure increase and mat insertion) allows an enormous reduction in the appearance of defects and in their amplitude. Thanks to the combination of these solutions, it was practically possible to avoid the appearance of defects, especially wrinkles, in the useful area of the preform.
Nevertheless, whatever the level of improvement, defects remain, even with small amplitude. For this reason, we decided to combine all the previous improvement solutions with a last one, which is the use of the compaction effect between layers. Indeed, it has been shown that if the layer subject to defects is at the inner position relative to the punch, the outer ply applies a compaction effect that decreases the defects [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF] (case of the 0°/45° configuration). This configuration is all the more interesting because, in conventional laminates, a ply oriented at 0° or 90° is often placed on the outside of the stacking, which makes this improvement industrially viable. Subsequently, preforming tests were performed by combining the four following solutions: use of the new blank-holder geometry; use of the optimal pressure value (0.15 bar); use of an intermediate mat layer between the preformed fabric layers; and laying the oriented ply below the non-oriented one, i.e., using the 0°/45° stacking sequence. Indeed, as shown in Figure 16, the wrinkle defects completely disappeared thanks to the combination of these four solutions. Only the buckle defect remains on the preform, as shown on the central face of the preform (Figure 16).
Finally, each of these four improvement solutions was also applied alone in this experimental study, in order to classify them according to their influence on the appearance of defects. The defects and the preform quality obtained after applying each solution were compared quantitatively and qualitatively. The results are summarized in Table 2. The sign (+) means that the solution has a positive effect in avoiding the considered defect, while the sign (-) means a negative effect.
According to these results, the improvement solutions can be ranked by importance, from the most significant to the least significant, as follows:
1) Reduction and stabilization of the dynamic friction coefficient (by introducing an intermediate mat between the layers);
2) Adaptation of the blank-holder geometry and number;
3) Laying the oriented layer below the non-oriented one; 4) Applying a tension to the yarn networks through the blank-holder pressure.
CONCLUSION
This study presents a strategy to improve the quality of dry complex preforms. The results showed that the inter-ply friction and the relative orientation between the layers and the punch significantly influence the preform quality by inducing numerous defects of large amplitude and extent. Modifying the blank-holder geometry and increasing the blank-holder pressure improved the quality of the monolayer preforms; these changes made it possible to avoid wrinkles in the monolayer preforms. On the other hand, they did not significantly improve the quality of the two-layer preforms, since the inter-ply friction occurring during the preforming of multiple layers strongly affects the appearance of defects.
The reduction of the inter-ply friction can be achieved by several solutions; most of them imply a change of the reinforcement or of its characteristics, which is sometimes not possible given the technical specifications of a composite part. The best solution proposed here is to insert a mat between the preformed layers, which significantly decreases the number and amplitude of wrinkles. However, this modification of the stack has to be taken into account, as it will modify the mechanical performance of the material and the subsequent stage of LCM processes (resin injection/infusion).
The results obtained showed that the inter-ply friction is the first and most important parameter influencing the appearance of defects. The blank-holder geometry is the second parameter in order of importance, followed by the compaction between layers and finally the tension applied to the yarn networks. In conclusion, suitable technical solutions should be applied to reduce the friction during the shaping of dry reinforcements, in order to improve the quality of the preforms, which is strongly affected by the friction between layers.
Figure 1: Punch dimensions
Measured shear angles: 57°±4 and 48°±2. (a) External layer oriented at 45°; (b) internal layer at 0°. Legend: wrinkle zones at corners B and D and in the internal layer; buckle zones; zone of significant shear without wrinkles.
Figure 9: 45°/0° defects after using the new blank-holder geometry
Figure 10: 45°/0° defects after combining the new blank-holder geometry and the pressure increase
Figure 12: Friction bench principle
Figure 13: Fabric/mat friction behaviors in the warp and weft directions
Figure 14: Configuration of two-layer preforming (45°/0°) with mat insertion.
Figure 16: Complete disappearance of the wrinkle defects when combining the four solutions: new blank-holder geometry, optimal pressure value, use of a mat layer and laying of the oriented layer below the non-oriented one.
Table 1: Experimental conditions of the friction tests
Improvement solution | Wrinkles | Buckles | Global quality
Two layers in initial configuration 45°/0° | --- | - | -4
Pressure increasing | -- | - | -3
New blank-holder geometry (rectangular form) | - | - | -2
Intermediate mat insertion | ++ | + | +3
Pressure decreasing and new blank-holder geometry | +++ | + | +4
Pressure decreasing, new blank-holder geometry and intermediate mat | ++++ | + | +5
Pressure decreasing, new blank-holder geometry, intermediate mat and 0°/45° stacking sequence | ++++ | ++ | +6
Table 2: Classification of the process parameters' effects and their influence on the preform quality. | 33,030 | [
"182129"
] | [
"525494"
] |
01763182 | en | [
"spi"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01763182/file/Mesoscopic%20and%20macroscopic%20friction%20behaviours.pdf | L Montero
S Allaoui
email: [email protected]
G Hivet
Characterisation of the mesoscopic and macroscopic friction behaviours of glass plain weave reinforcement
Keywords: A. Fabrics/textiles, A. Yarn, B. Mechanical properties, E. Preforming
Friction at different levels of the multi-scale structure of textile reinforcements is one of the most significant phenomena in the forming of dry fabric composites. This paper investigates the effect of the test conditions on fabric/fabric and yarn/yarn friction. Friction tests were performed on a glass plain weave and its constitutive yarns, varying the pressure and velocity.
The results showed that the friction behaviours at the two scales were highly sensitive to these two parameters. An increase in pressure led to a decrease in the friction coefficients until steady values were reached, while an increase in velocity led to an increase in the friction coefficients.
At each scale, the frictional behaviour of the material was significantly influenced by the structural reorganisation of the lower scale.
Introduction
Fibre-reinforced composite materials are gaining in popularity in industry because of their high performance, light weight and design flexibility. In addition, textile composites offer sustainable solutions to environmental issues, for instance in the transport sector, where decreasing the weight of structures can reduce fuel consumption and hence polluting emissions. However, even if fibrous composites appear to be a good solution, many issues remain, especially as regards mastering the processes: the predictability of the part quality, the cycle time, the cost price, etc.
Liquid Composite Moulding (LCM) processes are among the most attractive candidates to manufacture complex composite shapes with a high degree of efficiency (cost/time/quality).
The first step in LCM processes consists in forming the fibrous reinforcement. The mechanical behaviour of dry reinforcement with respect to the shape geometry is a key point in order to ensure both a correct final shape and good mechanical properties of the final part. In addition, during a multi-layer forming process, friction between the reinforcement layers and between tools and external layers has a significant effect on the quality of the preform obtained (appearance of defects) [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF][START_REF] Thije | A multi-layer triangular membrane finite element for the forming simulation of laminated composites[END_REF]. However, the mechanisms governing the preforming of dry reinforcements are far from being fully understood [START_REF] Hivet | Analysis of woven reinforcement preforming using an experimental approach[END_REF]. During preforming, the reinforcements are subjected to different loadings such as tension, shear, compression, bending, and friction at different levels of the multi-scale structure of the textile reinforcement. Friction can cause local defects such as wrinkling or yarn breakage, significantly altering the quality of the final product, and can modify the final orientation of the fibres, which is crucial for the mechanical behaviour of the composite part. Friction also plays a significant role in the cohesion and the deformation mechanism of a dry fibrous network. Consequently, understanding the friction behaviour between reinforcements is necessary so as to understand, master and optimize the first forming step in LCM processes. A growing number of studies have therefore been conducted on the friction behaviour between fibrous reinforcements or on the relationship between friction and formability [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF][START_REF] Thije | A multi-layer triangular membrane finite element for the forming simulation of laminated composites[END_REF][START_REF] Hamila | Simulations of textile composite reinforcement draping using a new semi-discrete three node finite element[END_REF][START_REF] Gorczyca-Cole | A friction model for thermostamping commingled glass-polypropylene woven fabrics[END_REF][START_REF] Sachs | Characterization of the dynamic friction of woven fabrics: Experimental methods and Benchmark results[END_REF][START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF][START_REF] Thije | Design of an experimental setup to measure tool-ply and plyply friction in thermoplastic laminates[END_REF]. However, since it is a complex issue due to the multi-scale fibrous nature of the reinforcements, considerable research remains to be done in order to fully understand e this phenomenon.
Different kinds of studies on the frictional behaviours of textile and technical reinforcements have been conducted over the past years. These materials are in general defined in terms of their multi-scale character: macroscopic (fabric), mesoscopic (tow or yarn) and microscopic (fibre). Studies carried out at each scale, using different devices, show that depending on the scale considered, the behaviour obtained appears to be different.
At the microscopic scale, Nowrouzieh et al. evaluated experimentally and with a microscopic model the inter-fibre friction forces of cotton to study the fibre processing and the effect of these forces on the yarn behaviour [START_REF] Nowrouzieh | The investigation of frictional coefficient for different cotton varieties[END_REF][START_REF] Nowrouzieh | Inter fiber frictional model[END_REF]. They found that the friction behaviour was correlated to the yarn strength and its irregularity (variation in the fibre section). The fibre with the highest friction coefficient produced more regular yarns. Analysis of the variance of the modelling results showed that inter-fibre friction was more sensitive to the normal load than to the velocity.
At the mesoscopic scale, the friction between various couples of materials such as tow/tow, tow/metal and tow/fabric has been studied on reinforcements made from different fibres (aramid, carbon and E-glass). The results demonstrated the significance of the relative orientation between the tows (parallel and perpendicular) on inter-tow friction for technical reinforcements [START_REF] Vidal-Sallé | Friction Measurement on Dry Fabric for Forming Simulation of Composite Reinforcement[END_REF][START_REF] Cornelissen | Frictional behaviour of high performance fibrous tows: Friction experiments[END_REF]. The contact model proposed by Cornelissen provides a physical explanation for the experimentally observed orientation dependence in tow friction (tow/metal or inter-tow) [START_REF] Cornelissen | Frictional behaviour of high performance fibrous tows: Friction experiments[END_REF][START_REF] Cornelissen | Frictional behaviour of high performance fibrous tows: A contact mechanics model of tow-metal friction[END_REF]. The mesoscopic frictional behaviour of carbon tows was explained by the microscopic constitution of the tow assuming a close packing of filaments which leads the normal load in a stationary tow to transfer from one layer of filaments to the layer beneath. Some recent papers deal with fabric/fabric and fabric/metal friction at the macroscopic scale.
Many of them deal with textile materials and focus on the effect of test conditions with the aim of improving the manufacturing process or adapting and functionalizing the final product (clothes). Ajayi studied the effect of the textile structure on its frictional properties by varying the yarn sett (number of yarns/cm) and the crimp while keeping the Tex and thickness constant [START_REF] Ajayi | Effects of fabric structure on frictional properties[END_REF]. The frictional properties increased by increasing the crimp (and thus the density), which was attributed to the knuckle effect of the textile. The term knuckle refers to the cross-over points of the warp and weft yarns making up the fabric. During the weaving process, knuckles generate yarn undulations, i.e. an irregular and rough surface of the fabric, because the two sets of yarns interlace with each other. The yarn undulation is characterised by the yarn knuckle, which is defined as the yarn crown. Furthermore, several studies have been conducted to understand the effect of the test conditions, such as atmospheric conditions which are relevant for textiles used for clothing especially as they are often made of natural materials. Several parameters, such as relative humidity, fabric structure, type of fibre material and direction of motion were found to exhibit an effect on the textile/textile friction while temperature (0-50°C) did not significantly influence the frictional parameters [START_REF] Arshi | Modeling and optimizing the frictional behavior of woven fabrics in climatic conditions using response surface methodology[END_REF]. Here again, the most significant parameter was related to the fabric structure. Das and co-workers [START_REF] Das | A study on frictional characteristics of woven fabrics[END_REF] examined the textile/textile and textile/metal frictional characteristics that simulate interaction between clothing items and fabric movement over a hard surface. They performed frictional tests with different normal pressures on commercial fabrics typically used in clothing industries in which some are composed of 100% of the same material while others are blended (made with two materials such as polyester/cotton). It was concluded that fabric friction is affected by the rubbing direction, type of fibre, type of blend, blend proportion, fabric structure and crimp. Fabric/metal friction is less sensitive to the rubbing direction.
A few studies deal with the macroscopic frictional response of technical reinforcements. These materials have many similarities with the textile family but also differences such as material constitution, unit cell size and some of the mechanisms involved during their frictional behaviour. A recent benchmark compared results obtained with different devices developed by teams working on this topic [START_REF] Sachs | Characterization of the dynamic friction of woven fabrics: Experimental methods and Benchmark results[END_REF]. Experimental tests on fabric/metal friction performed by the different teams on Twintex reinforcement exhibited an effect of pressure and velocity on the dynamic friction coefficient. In another study, the fabric/fabric friction behaviour was characterized using a specific device on different glass and carbon fabric architectures [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF]. It was shown that the fabric/fabric friction was highly different and more complex than that of textile or homogeneous materials. The measured values varied by up to a factor of two during the friction test under the same conditions. A period and an amplitude that depend strongly on the relative positioning and shift of the two samples characterize the frictional signal. The period of the signal can be directly related to the unit-cell length (periodic geometry). In addition, the specificity of the fabric/fabric contact behaviour was found to be directly related to the shocks taking place between overhanging yarns. However, no studies were found in the literature addressing the interesting question of the effect of test conditions that are representative of the preforming of dry reinforcements on fabric/fabric friction behaviour.
The study carried out by Cornelissen is undoubtedly useful to build a relationship between the micro and the meso scales as regards friction, but extensive experimental work needs to be performed at the meso and macro scales in order to obtain enough data for the correct definition and identification of a future model. It is therefore necessary to study the variation in friction behaviour with respect to the normal pressure and velocity for different fabric architectures to contribute to a better understanding of fabric/fabric friction behaviour. This is the goal of the present paper.
Materials and Methods
Tested dry fabric
The experiments were conducted on a glass plain weave dry fabric (Figure 1.a). This balanced fabric has a thickness of 0.75 mm and an areal weight of 504 g/m². The yarn width is 3.75 mm and the average spacing between neighbouring yarns (in the weft and warp directions) is around 5 mm, including a 1.25 mm gap because the yarns are not tightened together. The unit cell length is ~10 mm. For the tow samples, the yarns were extracted from the woven fabric.
Description of the device
When undertaking experimental investigations of dry-fabric friction, the various mesoscopic heterogeneities, the different unit cell sizes and the anisotropy should be considered. This requires the use of specific experimental equipment designed to take these properties into account. A specific experimental device at the PRISME laboratory, presented in Figure 1.b, is dedicated to this task [START_REF] Hivet | Design and potentiality of an apparatus for measuring yarn/yarn and fabric/fabric friction[END_REF]. The device consists of two plane surfaces, on which the two samples are fixed, sliding relative to each other. The bottom sample is fixed on a rigidly and accurately guided steel plate that can be moved horizontally in a fixed direction. The imposed velocity can vary from 0 to 100 mm/s. The top sample is fixed on a steel plate which is linked to a load sensor connected to a data acquisition system used to record the tangential forces during the test. A dead weight on the top sample provides a constant normal load F_N. To obtain a uniform pressure distribution on the contact area of the samples, a calibration procedure was performed before testing to determine the optimal position of the dead weight [START_REF] Hivet | Design and potentiality of an apparatus for measuring yarn/yarn and fabric/fabric friction[END_REF]. For fabric/fabric experiments, the position of the dead weight was defined using the mean of the tangential force. This approach gives an average position which limits the effect of specimen misalignment.
Test conditions
Before starting the experiments, the samples were conditioned in standard laboratory conditions (T~23°, RH~50%). To distinguish the different physical phenomena occurring during the friction tests, an acquisition frequency of 50Hz was used. This value was chosen based on tests performed in a previous study on the same material with the same bench. The friction coefficient (µ) was calculated using Coulomb's theory:
µ = F_T / F_N = F_T / (M·g)    (1)
where F T is the tangential load measured by the sensor, F N is the normal load, M is the total mass of the upper specimen with the dead weight and g is gravitational acceleration.
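As an illustration, the conversion from the recorded tangential force to the friction coefficient of Eq. (1) can be scripted in a few lines; the sketch below assumes a force trace already loaded as a numpy array, and the upper-assembly mass is a purely illustrative value.

```python
import numpy as np

def friction_coefficient(f_t, mass_kg, g=9.81):
    """Instantaneous friction coefficient mu = F_T / F_N = F_T / (M * g), Eq. (1)."""
    f_n = mass_kg * g                      # constant normal load from the dead weight
    return np.asarray(f_t) / f_n

# Illustrative tangential force samples (N) for an assumed 3.5 kg upper assembly
f_t = np.array([10.2, 11.8, 9.7, 12.5])
mu = friction_coefficient(f_t, mass_kg=3.5)
```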
To investigate the effect of the shaping process conditions on both the macroscopic and mesoscopic behaviours of the fabric, two kinds of friction tests were performed: fabric/fabric and yarn/yarn. Tests were conducted for four relative positions of the samples and for varying pressure and test speed. For the relative positioning of the samples, four different orientations were tested: 0°/0°, 0°/90°, 90°/90° and 0°/45°. These configurations, commonly used in laminates, exhibit the extreme friction coefficients (maximum and minimum) of two fabric plies [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF]. The 0°/0° position was the reference one and consisted in orienting the weft yarns of the two samples in the stroke direction. For 0°/90° and 0°/45°, the lower sample was kept along the same direction as in the reference configuration, while the upper sample was rotated. For the 90°/90° configuration, the warp yarns of the two samples were oriented in the sliding direction.
For the yarn samples, the 0° orientation corresponds to the tows oriented in the movement direction, while 90° means that they are perpendicular.
Tests were conducted at five different pressures (3, 5, 10, 20 and 50 kPa), which are in the range of values involved during dry fabric preforming, and at a speed of 1 mm/s. This velocity is the one used in a previous study to investigate the effect of pressure on fabric/metal friction behaviour [START_REF] Sachs | Characterization of the dynamic friction of woven fabrics: Experimental methods and Benchmark results[END_REF]. It is of the order of magnitude of inter-ply velocity values during the forming of dry reinforcements [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF]. The pressure is calculated by dividing the normal force by the surface area of the upper specimen. This definition is the same for the fabric/fabric and yarn/yarn tests; indeed, each specimen (upper and lower) of the yarn/yarn tests contains several yarns placed next to each other. Consequently, the calculated pressure for fabric/fabric tests is a theoretical one rather than a real one, because the contact between the two samples does not occur over this whole surface due to crimp and nesting.
The velocity values selected were 0.1, 1, 10 and 50 mm/s. This gives us a factor of 500 between the lowest and highest speed which covers the inter-ply sliding speed range during multi-layer forming whatever the laminate considered [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF]. The tests with different pressures were analysed to determine the pressure value at which the tests with various velocities were performed. This pressure was determined in order to distinguish the effect of the two parameters and make the comparison reliable.
At least five tests were performed for each test case.
Results and discussion
FABRIC/FABRIC FRICTION BEHAVIOUR
Typical layer/layer friction behaviour for dry fabric is illustrated in figure 2. This curve shows very clearly that the fabric/fabric friction behaviour is very different from the Coulomb/Amonton friction behaviour of a homogeneous material. A previous study showed that this behaviour is due to the superposition of two phenomena [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF]: yarn/yarn friction between the yarns of the two dry fabric plies, and shocks between the transverse overhanging yarns of each ply. During the weaving process, warp and weft yarns are manipulated in such a way that the two sets of yarns interlace with each other to create the required pattern of the fabric. The sequence in which they interlace with each other is called the woven structure (meso-structure). The yarns of one direction are bent around their crossing neighbour yarns, generating different crimps of the two networks resulting from the asymmetry of the weaving process. As a result, a height difference is obtained between the weft and neighbouring warp which is defined as depth overhanging or knuckle height (see figure 3) which promotes the shock phenomenon (lateral compression of yarns). These shocks occur periodically and generate high tangential reaction forces (F) leading to a substantial increase in the maximum friction values. The periodicity of the shocks is linked to the fabric meso-architecture and the relative position of the plies since it is difficult to control the relative position of the two samples during the tests, especially for reinforcements that have a weak unit cell length. During the positioning of the two samples on each other before testing, one may obtain a configuration in which the two are perfectly superimposed (figure 4.a) or laterally shifted (figure 4.b). When the two plies are perfectly superimposed the peak period is associated to the length of the sample unit cell, which is ~10 mm for the glass plain weave considered here. Figure 2 illustrates this configuration. On the other hand, when the plies are not perfectly superimposed (shifted samples), the peaks appear at periods equal to a portion (half in the case of the plain weave) of the fabric unit cell length.
In order to analyse the variation in the values of the fabric/fabric friction coefficient, the static frictional coefficient (μ s ) was first taken as the highest peak at the beginning of the motion (e.g. around 4 seconds in figure 2). After an area containing the maximum peak (20 seconds on figure 2), the dynamic friction domain can be considered as established. The dynamic friction coefficient (μ k ) can then be associated to the average of all the measured values. Moreover, the maximum values of the dynamic friction (peaks) and minimum values (valleys) were measured in order to assess the effect of the test conditions on the shock phenomenon. Maximum and minimum friction coefficients are noted respectively μ maxi and μ mini . The mean and standard deviation (σ) of each are calculated. These measurements were only considered for friction tests in which the period was close to the length of the unit cell for the configurations 0°/0°, 0°/90° and 90°/90°. For 0°/45°, as it is difficult to distinguish the unit cell length in the signal, all the peaks and valleys were considered to calculate μ maxi and μ mini.
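A possible implementation of this signal analysis is sketched below, assuming a friction trace sampled at 50 Hz; the window bound separating the static and established dynamic regimes and the minimal peak spacing (related to the unit-cell period) are illustrative assumptions, not values prescribed by the protocol.

```python
import numpy as np
from scipy.signal import find_peaks

def friction_indicators(mu, fs=50.0, t_static_end=20.0, min_peak_spacing_s=5.0):
    """Extract mu_s, mu_k, mu_max and mu_min from a friction-coefficient trace."""
    mu = np.asarray(mu)
    n0 = int(t_static_end * fs)
    mu_s = mu[:n0].max()                          # static coefficient: highest initial peak
    steady = mu[n0:]                              # established dynamic friction domain
    mu_k = steady.mean()                          # dynamic coefficient: average value
    d = max(1, int(min_peak_spacing_s * fs))      # expected samples between shocks
    peaks, _ = find_peaks(steady, distance=d)
    valleys, _ = find_peaks(-steady, distance=d)
    mu_max = steady[peaks].mean() if peaks.size else np.nan
    mu_min = steady[valleys].mean() if valleys.size else np.nan
    return mu_s, mu_k, mu_max, mu_min
```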
Effect of Normal Pressure
The first test parameter considered in this study was the normal pressure. Fabric/fabric friction experiments were conducted at five pressures: 3, 5, 10, 20 and 50 kPa. The results for the static friction coefficients (µs) and the mean values of the dynamic friction coefficients (µk) are presented in Table 1 and illustrated in figure 5 and figure 6. The results for the maximum values of the dynamic friction (measured at the peaks of the signal) and the minimum values (measured at the valleys) are illustrated in figure 7. The error bars represent the standard deviations (σ).

It can be seen that the fabric/fabric static friction coefficients were higher than the dynamic coefficients in all the test configurations (Table 1) and had slightly higher standard deviations. This is a common observation in friction responses, which has also been noticed on textiles [START_REF] Ajayi | Effects of fabric structure on frictional properties[END_REF][START_REF] Das | A study on frictional characteristics of woven fabrics[END_REF]. Furthermore, the relative orientation of the two specimens has an effect on the frictional behaviour. In all cases, the static and dynamic frictional coefficients were higher for the 0°/0° configuration than for the 90°/90° configuration (Figure 5, Figure 6 and figure 7). The same trend has been observed on other fabric architectures, such as carbon interlock [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF], and can be attributed to the weaving effect (difference in crimp between the two yarn networks).

As already mentioned, the fabric/fabric friction behaviour is governed by yarn/yarn friction and shocks between overhanging transverse yarns. The friction coefficient varied hugely (by up to a factor of two) because of the high tangential forces due to the second phenomenon, which dominates the global friction behaviour. According to the measurements obtained, the tangential reaction forces due to shocks between weft yarns (configuration 0°/0°) are higher than those between warp yarns (configuration 90°/90°). As the reinforcement is assumed to be balanced and both networks (weft and warp) are composed of the same yarns, this can be explained by the difference in crimp between the two networks resulting from the asymmetry of the weaving process. To confirm this fact, the crimp of warp and weft yarns was measured according to the ASTM D3883-04 standard [START_REF]and Yarn Take-up in Woven Fabrics[END_REF]. The crimps obtained for warp and weft yarns were respectively 0.35% and 0.43%, confirming that higher crimp leads to higher tangential reaction forces due to yarn shocks. The increase in crimp results in a higher overhang of the more crimped network's yarns and thus in an increase of the friction coefficient. This conclusion is in good agreement with the study by Ajayi [START_REF] Cornelissen | Frictional behaviour of high performance fibrous tows: A contact mechanics model of tow-metal friction[END_REF], which showed that an increase in the weft yarn density generated an increase in the frictional resistance of the textile.
For the 0°/90° configuration, transverse yarns are warp yarns (with high overhang value) for the bottom sample and weft yarns (with a low overhang value) for the upper sample. The shock phenomenon occurs between warp yarns that have high crimp and weft yarns with low crimp.
As a result, the tangential forces obtained in this configuration and thus the friction coefficients remain in between those obtained in the 0°/0° and 90°/90° configurations (figure 5, figure 6 and figure 7). There are still some points for which this trend was not confirmed (e.g. 3 kPa and 5 kPa), especially for maximum and minimum friction coefficients (figure 7), which may be due to the imperfect superimposition of the two samples.
As expected, the lowest friction coefficients were obtained for the configuration 0°/45° (see figures 5, 6 and tables 1 and2). The measured signal of the tangential force was smoother than the other configurations (figure 8). The amplitude of the signal was very weak (figures 7 and figure 8), which means that shocks between the overhanging yarns were not severe. In fact, Shocks occurred between network of yarns, one of which was oriented at 45°, which led to a very weak instantaneous lateral contact width between yarns. As a result, the yarn/yarn friction in this configuration governed the frictional behaviour and consequently the measured dynamic coefficient was closer to that of the yarn/yarn at 0°/90° (see section 3.2). When the normal pressure increased, the static and dynamic friction coefficients decreased (figure 5 and figure 6). This observation is in agreement with the results obtained for fabric/metal tests [START_REF] Sachs | Characterization of the dynamic friction of woven fabrics: Experimental methods and Benchmark results[END_REF]. For the four configurations (0°/0°, 0°/90°, 90°/90° and 0°/45°), friction coefficients significantly decreased in the pressure range of 3-20 kPa before converging to a steady value whatever the test configuration (figure 5, figure 6 and figure 7). The only singular point is the friction behaviour at 0°/90°, for which the maximum coefficient (μ max ) measured at a pressure of 5 kPa was higher than that measured at 3 kPa. This has an impact on the static and dynamic coefficient values, which show the same trend. This can be attributed to the low pressure. At this level of pressure (3 kPa), the shock phenomenon is not the predominant one and the behaviour is dominated by yarn/yarn friction. Thus, at this level of pressure, the friction behaviour tends closer to that of the 90°/90° configuration. This point will be addressed in future work.
Beyond a pressure of 20 kPa, the fabric/fabric friction behaviours levelled off. Thus, the static (µs) and dynamic (µk) coefficients reached stabilized values with a low standard deviation, which denotes the good reproducibility of these behaviours (Table 1). The largest decrease was obtained for the 0°/0° configuration (around 30% for both the static and dynamic coefficients), while in the 0°/45° configuration only a 7% and an 11% decrease were observed for µs and µk, respectively.
This decrease in the friction coefficients can be attributed to the effect of fabric compaction and its consequences at the mesoscopic and macroscopic levels. When the pressure increases, the upper sample exerts a greater compaction on the lower. This leads to a high transverse compression strain of the fabric inducing at the mesoscopic level a reduction of the yarn's overhang height and a spreading of the yarns. At the macroscopic level, the reduction in the thickness of the reinforcement can potentially lead to a lateral spreading of the fabric in the inplane directions, and therefore a decrease in crimp. As the crimp at these stress states decreases, the texture ("roughness") of the fabric related to its meso-architecture decreases, and the contact area increases. However, the real contact area between the two samples driving the effective pressure is difficult to quantify for fabric/fabric friction tests (due to nesting, for example) in contrast to fabric/metal [START_REF] Cornelissen | Dry friction characterisation of carbon fibre tow and satin weave fabric for composite applications[END_REF].
The reduction in yarn's overhang height due to compaction leads to the decrease of the tangential reaction forces between the transverse yarns of the samples. As a result, the maximum friction coefficient (µ max ), measured on the peaks of the curves, decreased (figure 7).
In addition, the yarn spreading associated with the decrease in their overhang height generated a lower fabric "roughness". Therefore, the tangential reaction forces' signal, due to this new "roughness", was also smoother as observed for the 50 kPa pressure, while below 20 kPa the rise and fall in forces at each peak were more pronounced and abrupt (figure 9). It can also be seen that the maximum friction coefficient µ max continued to decrease slightly and was not completely stable at 50 kPa (figure 7), except for the configuration 0°/45 where the shocks phenomenon has a weak effect. Stabilization would probably occur at almost no meso roughness, which might be achieved at very high pressure not encountered during the composite shaping process.
The minimum friction coefficient (µ min ) measured in the curve valleys, which is attributed to yarn/yarn friction [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF], was not affected by this phenomenon and its evolution followed the same trend as the global friction behaviour (figure 7).
Effect of Velocity
In order to evaluate the effect of the sliding velocity on the fabric/fabric frictional behaviour, tests were conducted at four velocities: 0.1, 1, 10 and 50mm/s. These tests were performed at a pressure of 35 kPa as the friction behaviour versus pressure is stabilized at this pressure. It was previously observed that the friction coefficients at 0°/90° were situated between those of the 0°/0° and 90°/90° configurations and generally close to the 0°/0° configuration. For this reason, only the 0°/0°, 0°/45° and 90°/90° configurations were tested here. The static and dynamic friction coefficients obtained are summarized in table 2. The coefficients were measured with a good reproducibility (maximum deviation of 12% for the dynamic coefficient).
The friction coefficients changed only slightly regardless of the configuration tested. The static friction coefficient increased from 0.427 to 0.503 when the velocity increased from 0.1 to 50 mm/s in the 0°/0° orientation, an increase of approximately 17% (figure 10). On the other hand, the 90°/90° configuration was less sensitive to the velocity, with a maximum decrease of 9% at 50 mm/s compared to 0.1 mm/s, which remains within the order of magnitude of the experimental deviations. Once again, the 0°/0° configuration was more sensitive to the test parameters, which is likely due to its high crimp and therefore the predominance of the shock phenomenon.
The same trend was observed for the dynamic friction coefficients but to a lesser extent. The coefficients increased by between 9% and 14% for all the configurations (figure 10). This slight increase in the dynamic friction is essentially due to the contribution of the minimum friction coefficient (µmin), while the maximum (µmax) decreased, as can be observed in figure 11.
Consequently, increasing the speed has more influence on the amplitude variation than on the average value. Increasing the speed leads to an increase in the frequency of shocks, which has two consequences. First, the kinetic energy of the yarns (of the lower sample) enables them to pass over the overhanging transverse yarns in a shorter time and with a lower force; the maximum friction coefficient therefore decreases. Second, this causes an up-and-down movement of the upper sample that can be described as a stick-slip phenomenon.
Between two shocks, a steady sliding between yarns does not exist, consequently the tangential force does not reach the stabilized value tending towards the friction coefficient of the yarns. Accordingly, the minimum coefficient increases.
For the 0°/45° configuration, both coefficients (µmax and µmin) increased. As discussed previously, these values are given for this configuration for information only and are not related to the meso-architecture, in which case the values would have been different. Therefore, they cannot be used for comparison with the other configurations.
To summarize, as the velocity increases, the upper sample does not follow strictly the irregularities of the lower sample which leads to a decrease in the signal variation just as if the irregularities were lower. It can be concluded that at low velocities (0.1 to 1mm/s) the dynamic frictional coefficient can be considered as almost constant whatever the orientation while beyond a velocity of 1 mm/s, its evolution as a function of the velocity should be considered.
YARN/YARN FRICTION BEHAVIOUR
The second aim of this study was to determine the effect of the test parameters on the friction behaviour at the mesoscopic level. Yarn/yarn friction tests were therefore conducted, varying the pressure and the velocity. Only the 0°/0° (parallel case) and 0°/90° (perpendicular case) configurations were performed. Recall that the 0° orientation means that the yarns are oriented in the direction of the stroke and 90° in the transverse direction.
Effect of Normal Pressure
As in the fabric tests, yarn/yarn friction tests were conducted by varying the normal pressure at 1mm/s. The results of the static (μs) and dynamic (μk) friction coefficients are summarized in table 3 and illustrated on figure 12 and figure 13. As expected, the static friction coefficients and their standard deviations were higher than for the dynamic coefficients. Moreover, the friction values were greater at 0°/0° than at 0°/90° which is in good agreement with previous studies carried out on carbon, aramid and glass yarns [START_REF] Vidal-Sallé | Friction Measurement on Dry Fabric for Forming Simulation of Composite Reinforcement[END_REF][START_REF] Cornelissen | Frictional behaviour of high performance fibrous tows: Friction experiments[END_REF].
The friction behaviour for the perpendicular configuration is mainly controlled by inter-fibre friction, while for the parallel case other phenomena are involved, among which: fibre bending, fibre reorganisation in the yarns, transverse compression, fibre damage, intermingling of fibres between yarns, etc. These phenomena are promoted by the spinning process, because fibres do not remain straight and some of them are damaged, as can be seen in figure 14. This generates reaction forces during the tests that increase the friction forces. When the pressure increased, the static and dynamic coefficients decreased before reaching a plateau beyond 10 kPa. The decrease was larger in the parallel case (0°/0° configuration) than in the perpendicular one (0°/90°). At high pressure (50 kPa), the friction increases because the higher compression rate generates more fibre damage. This is illustrated in Figure 15, where the friction coefficient increases again after stabilization (beyond 40 s).
Observations performed using a microscope on this sample after the test showed a large number of broken fibres.
Effect of Velocity
The friction tests according to velocity were performed under 35 kPa as for the fabric/fabric tests. The results are summarized in table 4 and illustrated in figure 16. Once again, it was observed that the static friction coefficient was higher than the dynamic one.
As for fabric/fabric, the tow/tow friction behaviour remained unchanged in the 0.1-1 mm/s velocity range whatever the relative position of the samples. At higher velocities, the friction remained constant for the parallel yarns (0°/0°), while for the perpendicular (0°/90°) case an increase of more than 43% in the friction coefficient values was observed with increasing velocity. It can be concluded that while the velocity does not affect the phenomena (intermingling of fibres between yarns, fibre reorganisation in the yarns, bending, etc.) controlling inter-tow friction in parallel yarns, it significantly affects the response for the 0°/90° configuration, which is mainly controlled by inter-fibre friction. This trend is very different from the one observed on natural cotton fibres, which are more sensitive to pressure than to speed [START_REF] Nowrouzieh | The investigation of frictional coefficient for different cotton varieties[END_REF][START_REF] Nowrouzieh | Inter fiber frictional model[END_REF]. The differences between these fibres and those used in the present study (glass) are mainly related to their mechanical behaviour (brittle vs ductile), their compressibility and their roughness. These characteristics are highly correlated with the friction coefficient [START_REF] Nowrouzieh | The investigation of frictional coefficient for different cotton varieties[END_REF]. In conclusion, the fibre material significantly influences the effect of the test conditions on the microscopic friction behaviour (fibre/fibre), which results in the same effect at the mesoscopic level (yarn/yarn).
CONCLUSIONS
This study has highlighted the influence of test conditions on the frictional behaviour of dry reinforcements at the mesoscopic and macroscopic scales. It was found that the friction behaviour depends strongly on the relative orientation of the samples. Furthermore, experimental tests performed at the macroscopic level (fabric/fabric) showed that, for a given yarn, the friction coefficient is highly related to the yarn crimp because of the shock phenomenon occurring between transverse overhanging yarns. Friction coefficients decrease when the normal pressure increases until reaching steady values, which are almost identical whatever the relative orientation of the specimens. The greater decrease observed for the 0°/0° configuration can be attributed to the effect of fabric compaction and its consequences on the yarn and fabric structure, i.e. a decrease in the yarns' overhang height and yarn spreading leading to a decrease in crimp.
Velocity has the opposite effect on fabric/fabric friction since the coefficients increase with the velocity. The static friction coefficient and the 0°/0° configuration are more sensitive to this parameter. The dynamic friction coefficient remains almost unchanged at low velocities whatever the relative orientation of the two samples while it increases slightly for high speeds.
This increase is due to the contribution of the minimum friction coefficient (µ min ) that increases because the high frequency between two shocks does not permit a stabilization of the tangential force at values tending towards the friction coefficient between yarns. However, the main effect of high speeds is a finite decrease in the amplitude variation of the friction response.
At the mesoscopic level, the results show the same trend as for macroscopic friction as a function of the test parameters. The parallel configuration 0°/0° is more sensitive to pressure while the 0°/90° is more influenced by velocity. This is due to the fact that friction behaviour in the perpendicular configuration is mainly controlled by inter-fibre friction while for the parallel case, other phenomena promoted by the spinning process are involved. It has been shown that the material constituting the fibres mainly influences the effect of the test conditions on the microscopic friction behaviour (fibre/fibre) which results in the same effect at the mesoscopic level (yarn/yarn).
We can conclude that at each scale, the frictional behaviour of the material studied here, which is heterogeneous and multiscale (micro-meso-macro), is governed by friction but is also significantly influenced by the structure of the lower scale. These structures (meso, micro) reorganise when test conditions such as pressure are varied, which leads to a variation in the friction behaviour. Thus, even if the same trends of the effect of test conditions are observed at different scales (meso, macro), they are caused by different mechanisms which are due to the structural reorganization at the lower scale.
Figure 1. Material and equipment of the study.
Figure 4. Relative positioning of the samples.
Figure 16. Evolution of yarn/yarn friction coefficients according to testing velocity at a pressure of 35 kPa. The error bars represent the standard deviations.
Table 1. Fabric/fabric frictional characteristics as a function of normal pressure at 1 mm/s.
Orientation  Normal pressure (kPa)  µs  Standard deviation [σ]  [σ/µs]*100 (%)  µk  Standard deviation [σ]  [σ/µk]*100 (%)
3 0.5590 0.1461 26.13 0.4074 0.0155 3.80
5 0.5224 0.0690 13.21 0.3499 0.0148 4.22
0°/0° 10 0.4543 0.0442 9.72 0.3092 0.0131 4.23
20 0.3911 0.0276 7.06 0.2810 0.0162 5.78
50 0.3915 0.0339 8.65 0.2698 0.0120 4.45
3 0.4041 0.0467 11.55 0.3315 0.0198 5.98
5 0.5012 0.0878 17.51 0.3401 0.0461 13.54
0°/90° 10 0.4327 0.0375 8.66 0.2931 0.0067 2.30
20 0.3566 0.0142 3.97 0.2789 0.0396 14.20
50 0.3558 0.0320 9.00 0.2656 0.0107 4.02
3 0.3950 0.0210 5.33 0.3123 0.0223 7.12
5 0.3928 0.0705 17.94 0.2982 0.0030 1.02
90°/90° 10 0.3479 0.0287 8.25 0.2635 0.0167 6.34
20 0.3504 0.0158 4.51 0.2678 0.0042 1.58
50 0.3661 0.0354 9.67 0.2625 0.0062 2.35
3 0.2256 0.0320 14.18 0.1799 0.0070 3.90
5 0.2093 0.0133 6.35 0.1818 0.0159 8.75
0°/45° 10 0.1985 0.0162 8.14 0.1737 0.0039 2.27
20 0.2014 0.0040 1.99 0.1602 0.0019 1.17
50 0.2102 0.0073 3.49 0.1605 0.0012 0.75
Table 2. Experimental friction coefficients of fabric/fabric at 35 kPa according to velocity.
Orientation Velocity (mm/s) Log [velocity] µs Standard deviation [σ] [σ/µs]*100 (%) µk Standard deviation [σ] [σ/µk]*100 (%)
0.1 -1.0 0.4274 0.0541 12.65 0.2846 0.0269 9.46
1.0 0.0 0.4275 0.0484 11.31 0.2760 0.0032 1.17
0°/0°
10.0 1.0 0.4338 0.0424 9.78 0.2928 0.0031 1.07
50.0 1.7 0.5031 0.0289 5.75 0.3098 0.0149 4.82
0.1 -1.0 0.3993 0.0130 3.25 0.2623 0.0136 5.20
1.0 0.0 0.4186 0.0358 8.55 0.2746 0.0084 3.06
90°/90°
10.0 1.0 0.3809 0.0394 10.33 0.2802 0.0057 2.03
50.0 1.7 0.3652 0.0089 2.42 0.2950 0.0185 6.26
0.1 -1.0 0.2072 0.0095 4.57 0.1687 0.0082 4.8858
0°/45° 1.0 0.0 0.2034 0.0009 0.43 0.1593 0.0074 4.6386
10.0 1.0 0.2267 0.0245 10.79 0.1640 0.0042 2.5559
50.0 1.7 0.2383 0.0146 6.12 0.1925 0.0025 1.2873
Table 3. Yarn/yarn frictional characteristics as a function of normal pressure at 1 mm/s.
Orientation Normal pressure (kPa) µs Standard deviation [σ] [σ/µs]*100 (%) µk Standard deviation [σ] [σ/µk]*100 (%)
3 0.4037 0.1766 43.74 0.3188 0.0126 3.96
5 0.3478 0.0751 21.59 0.2831 0.0084 2.96
0°/0° 10 0.2670 0.0058 2.16 0.2584 0.0068 2.64
20 0.3336 0.0442 13.26 0.2822 0.0090 3.18
50 0.3332 0.0122 3.67 0.3055 0.0059 1.93
3 0.2165 0.0164 7.57 0.1867 0.0091 4.88
0°/90°
5 0.2186 0.0059 2.71 0.1913 0.0051 2.65
Table 4. Experimental friction coefficients of yarn/yarn at 35 kPa according to velocity.
Orientation Velocity (mm/s) Log [velocity] µs Standard deviation [σ] [σ/µs]*100 (%) µk Standard deviation [σ] [σ/µk]*100 (%)
0.10 -1.0 0.3471 0.0017 0.49 0.2963 0.0412 13.89
0°/0° 1.00 0.0 0.3406 0.0500 14.67 0.2914 0.0224 7.69
10.00 1.0 0.3448 0.0320 9.28 0.2947 0.0104 3.53
50.00 1.7 0.3664 0.0564 15.39 0.2997 0.0063 2.10
0.1 -1.0 0.1757 0.0002 0.13 0.1563 0.0068 4.37
0°/90° 1.0 0.0 0.1825 0.0068 3.71 0.1739 0.0048 2.76
10.0 1.0 0.2163 0.0075 3.45 0.1829 0.0026 1.43
50.0 1.7 0.2458 0.0143 5.82 0.2243 0.0214 9.52
Acknowledgments
The research leading to these results received funding from the Mexican National Council of Science and Technology (CONACyT) under grant no I0010-2014-01.
01763203 | en | ["phys"] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01763203/file/3DAHM_HAERING.pdf | D Haering
C Pontonnier
G Nicolas
N Bideau
G Dumont
3D Analysis of Human Movement Science 2018
Introduction
In industrialized countries, musculoskeletal disorders (MSD) represent 80% to 90% of work-related disorders. Ulnar nerve entrapment (UNE) and epicondylitis are the most common elbow MSDs among manual workers (1,2). UNE and epicondylitis are associated with maintained elbow flexion, or near-maximal elbow extension coupled with large loads (3). Awkward shoulder postures while using the elbow increase MSD risk (4). The articular mechanical load can be estimated relative to the maximal isometric torque obtained from dynamometric measurements [START_REF] Haering | Proc XXVI Congress ISB[END_REF]. Most studies focusing on elbow ergonomics considered tasks executed in one (usually natural) shoulder configuration. A comparison of elbow isometric torque characteristics in natural and awkward shoulder configurations could help reduce the risk of elbow MSDs.
Research Question
The study highlights differences in elbow isometric torque characteristics when varying shoulder configurations and implications for ergonomics.
Methods
Dynamometric measurements and personalized torque-angle modelling were performed on a worker population to define elbow isometric torque characteristics during natural or awkward manual tasks.
Twenty-five middle-aged workers (33±6 years, 1.80±.07 m, 79±8 kg) participated in our study. One classical and five awkward shoulder configurations were tested: flexion 0° with external rotation (F0ER), 90° flexion with external rotation (F90ER), 180° flexion with external rotation (F180ER), 90° abduction with external rotation (A90ER), 90° abduction with internal rotation (A90IR), and 90° flexion with internal rotation (F90IR) (Fig. 1). Dynamometric measurements consisted in static calibration, submaximal concentric and eccentric warm-up, and isometric trials. Trials included 5 isometric contractions maintained for 5 s in flexion and extension evenly distributed through the angular range of movement of the participants.
A quadratic torque-angle model (6) was used to fit the isometric torque measurements, where the model parameters, namely the peak isometric torque Γmax, the maximal range of motion RoM, and the optimal angle θopt, were optimized. Isometric torque characteristics for the awkward shoulder configurations were compared to the natural configuration in terms of: optimal model parameters (one-way repeated-measures ANOVA), torque magnitude M and angle phase P (7).
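As an illustration of this fitting step, the sketch below adjusts a concave quadratic torque-angle relation by least squares; the exact parameterization of the model in reference (6) may differ, and the angle and torque values are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def quad_torque(theta, gamma_max, theta_opt, rom):
    """Assumed concave quadratic: torque vanishes at theta_opt +/- rom/2."""
    return gamma_max * (1.0 - ((theta - theta_opt) / (rom / 2.0)) ** 2)

theta_meas = np.array([10.0, 40.0, 70.0, 100.0, 130.0])   # 5 isometric test angles (deg)
torque_meas = np.array([25.0, 48.0, 60.0, 55.0, 32.0])    # measured torques (N.m), illustrative
p0 = [torque_meas.max(), theta_meas[np.argmax(torque_meas)], 140.0]
(gamma_max, theta_opt, rom), _ = curve_fit(quad_torque, theta_meas, torque_meas, p0=p0)
```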
Results
Significant effects of shoulder configuration on elbow peak isometric torque are found (p<.01). In flexion, F0ER displays a larger peak torque than F90ER and F180ER. In extension, F90ER shares the highest peak torque with F0ER, both larger than F180ER (table 1). Magnitude analysis also reveals that the maximal isometric torque over the full range of motion is overall the largest for A90IR in flexion and for F0ER in extension.
No significant differences are found for maximal range of motion 𝑅𝑜𝑀.
An effect of shoulder configuration on the elbow optimal angle θopt is found (p<.01). F0ER, F90ER and F180ER display the smallest optimal angles (closest to the anatomical reference) in flexion. Conversely, F0ER and A90IR show the largest θopt in extension. Phase analysis shows similar correspondences.
Table 1. Awkward versus natural shoulder configuration in terms of average isometric torque parameters, torque magnitude and angle phase.
Torque direction  Shoulder configuration  Γmax [N.m]  RoM [°]  θopt [°]  M [%]  P [%]
FLEXION F0ER (
Discussion
Peak isometric torque and magnitude results give a clear idea of elbow torque available for all shoulder configurations. Results confirm that natural position (F0ER) allows good compromise between peak torque and torque magnitude. For flexion, F90ER and F180ER appear as weakest configurations. Except for F90ER in elbow extension tasks, shoulder flexion should be minimized in strenuous working tasks. Those results agree with common ergonomic recommendations in terms of posture [START_REF] Mcatamney | [END_REF]9).
In flexion, F0ER with smaller optimal angle could also help reduce UNE occurrence by favoring tasks with less flexed elbow. Similarly in extension, F0ER and A90IR could help reduce epicondylitis by favoring less extended elbow work.
While A90IR appears to be a good alternative with respect to the torque and angle criteria, visibility issues might interfere.
Figure 1. Natural and awkward shoulder configurations tested for elbow torque on the dynamometer.
"994",
"864627",
"1387"
] | [
"466360",
"491419",
"414778",
"105160",
"528695",
"105160",
"105160",
"528695",
"491419",
"105160"
] |
01763320 | en | ["phys", "spi"] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01763320/file/Time_Reversal_OFDM.pdf | Wafa Khrouf
email: [email protected]
Zeineb Hraiech
email: [email protected]
Fatma Abdelkefi
email: [email protected]
Mohamed Siala
email: [email protected]
Matthieu Crussière
email: [email protected]
On the Joint Use of Time Reversal and POPS-OFDM for 5G Systems
Keywords: Waveform Optimization, Waveform Design, Pulse Shaping, POPS-OFDM, Signal to Interference plus Noise Ratio (SINR), Time Reversal
This paper investigates the efficiency of the combination of the Ping-pong Optimized Pulse Shaping-Orthogonal Frequency Division Multiplexing (POPS-OFDM) algorithm with the Time Reversal (TR) technique. This algorithm optimizes the transmit and receive OFDM waveforms with a significant reduction in the system Inter-Carrier Interference (ICI)/Inter-Symbol Interference (ISI) and guarantees maximal Signal to Interference plus Noise Ratio (SINR) for realistic mobile radio channels in 5G Systems. To this end, we characterize the scattering function of the TR channel and we derive the closedform expression of the SINR as a Generalized Rayleigh Quotient. Numerical analysis reveals a significant gain in SINR and Out-Of-Band (OOB) emissions, brought by the proposed TR-POPS-OFDM approach.
I. INTRODUCTION
OFDM modulation has witnessed a considerable interest from both academia and industry. This is due to many advantages such as low complexity receiver, simplicity and efficient equalization structure. Indeed, it has been adopted in various wired and wireless standards, such as ADSL, DVB-T, Wimax, WiFi, and LTE. Nevertheless, in its present form, OFDM presents several shortcomings and as such it is not capable of guaranteeing the quality of service of new and innovative applications and services that will be brought by 5G systems. In fact, it has a high spectral leakage and it requires strict frequency synchronization because it uses a rectangular waveform in time, leading to significant sidelobes in frequency [START_REF] Yunzheng | A Survey: Several Technologies of Non-Orthogonal Transmission for 5G[END_REF]. As a consequence, any lack of perfect frequency synchronization, to be expected from most of the innovative Machine Type Communications (MTC) in 5G, causes important Inter-Carrier Interferences (ICI). In addition to that, a variety of services and new applications will be provided by 5G systems, such as high data-rate wireless connectivity, which requires large spectral and energy efficiency, and Internet of Things (IoT), requiring robustness to time synchronization errors [START_REF] Luo | Signal Processing for 5G: Algorithms and Implementations[END_REF].
In order to overcome OFDM limitations and meet 5G requirements, various modulations have been suggested in the literature such as Generalized Frequency Division Multiplexing (GFDM), Universal Filtered Multi-Carrier (UFMC) and Filter Bank Multi-Carrier (FBMC), which are proposed in 5GNOW project [START_REF] Wunder | 5GNOW: Non-Orthogonal, Asynchronous Waveforms for Future Mobile Applications[END_REF]. It is shown at [START_REF] Wunder | 5GNOW: Non-Orthogonal, Asynchronous Waveforms for Future Mobile Applications[END_REF] that GFDM offers high flexibility for access to fragmented spectrum and low Out-Of-Band (OOB) emissions. However, in contrast to UFMC, GFDM has a low robustness to frequency synchronization errors in the presence of Doppler spread. Moreover, like UFMC [START_REF] Schaich | Waveform Contenders for 5G -OFDM vs. FBMC vs. UFMC[END_REF], FBMC has a high spectral efficiency and a good robustness to ICI. Nevertheless, FBMC, because of its long shaping filters, cannot be used in the case of low latency, sporadic traffic and small data packets transmission. Furthermore, authors in [START_REF] Siala | Novel Algorithms for Optimal Waveforms Design in Multicarrier Systems[END_REF] and [START_REF] Hraiech | POPS-OFDM: Ping-pong Optimized Pulse Shaping-OFDM for 5G Systems[END_REF] propose new class of waveforms, namely POPS-OFDM, which iteratively maximize the SINR in order to create optimal waveforms at the transmitter (Tx) and the receiver (Rx) sides. The obtained waveforms are well localized in time and frequency domains and they are able to reduce the ISI and the ICI as they are not sensitive to time and frequency synchronization errors. Another alternative which aims to reduce interference especially in time, caused by highly dispersive channels, is the time reversal (TR) technique which has been recently proposed for wireless communications systems. Its time and space focusing properties make it an attractive candidate for green and multiuser communications [START_REF] Dubois | Performance of Time Reversal Precoding Technique for MISO-OFDM Systems[END_REF]. In fact, it reduces ISI at the Rx side and it mitigates the channel delay spread.
This paper aims to design new waveforms for 5G systems by combining the benefits of the POPS and TR techniques in terms of interference resilience. To this end, we analyze the corresponding system and derive the SINR expression. We also evaluate the performance of the proposed approach in terms of SINR and OOB emissions.
The remainder of this paper is organized as follows. Section II introduces the notations. In Section III, we present the system model. In Section IV, we focus on the derivation of the SINR expression for TR systems and describe the TR-POPS-OFDM algorithm for waveform design. Section V is dedicated to the illustration of the obtained optimization results and sheds light on the efficiency of the proposed TR-POPS-OFDM approach. Finally, Section VI presents conclusions and perspectives for our work.
II. NOTATIONS
Boldface lower- and upper-case letters refer to vectors and matrices, respectively. The superscripts (·)* and (·)^T denote element-wise conjugation and transposition of a vector or matrix, respectively. We denote by v = (..., v_{-2}, v_{-1}, v_0, v_1, v_2, ...)^T = (v_q)_{q∈Z} = (v_q)_q the infinite vector v; in the last notation, (v_q)_q, where the set of values taken by q is not explicitly specified, q spans Z.
Let M = (M_{pq})_{p∈Z,q∈Z} = (M_{pq})_{pq} refer to the infinite matrix M. The matrix shift operator Σ_k(·) shifts all matrix entries by k parallel to the main diagonal of the matrix, i.e. if M = (M_{pq})_{pq} is a matrix with (p,q)-th entry M_{pq}, then Σ_k(M) = (M_{p-k,q-k})_{pq}. The symbol ⊗ is the convolution operator of two vectors and the symbol ⊙ is the component-wise (Hadamard) product of two vectors or matrices. We denote by E the expectation operator and by |·| the absolute value.
III. SYSTEM MODEL
In this section, we first present the TR principle. Then, we describe the channel and system models to which we apply our approach.
A. Time Reversal Principle
The Time Reversal (TR) principle [START_REF] Fink | Time Reversal of Ultrasonic Fields -Part I: Basic Principles[END_REF], [START_REF] Lerosey | Time Reversal of Electromagnetic Waves[END_REF], comes from the acoustic research field and allows a wave to be localized in time and space. Such a technique can be exploited to separate users, addressed simultaneously on the same frequency band, by their different positions in space.
The use of TR in transmission systems has generated particular excitement as it makes it possible, even over a channel with very high temporal dispersion, to obtain an ideal pulse in time and in space. This property has several useful advantages in wireless communications, among which we cite the following:
• Negligible or null ISI, brought by a nearly memoryless ("no memory") equivalent channel.
• Minimum inter-user interference thanks to spatial power localization, with negligible received power outside a focal spot targeted to a given Rx.
• Physical-layer-secured data transmission towards a desired user, as other users located outside the focal spot of the targeted user will receive only little power.
TR integration into a telecommunication system is very simple: it consists in applying a filter to the transmitted signal, as sketched below. We suppose that we have perfect knowledge of the transmission channel and that it is invariant between the instant of its measurement and the application of TR at the Tx side. This filter is made up of the Channel Impulse Response (CIR) reversed in time and conjugated. It has the form of a filter matched to the propagation channel, which guarantees optimal reception in terms of Signal to Noise Ratio (SNR). The transmitted signal then crosses an equivalent filter equal to the convolution between the channel and its time-reversed version.
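A minimal numpy sketch of this TR precoding step is given below; the unit-energy normalization of the prefilter is a common but assumed choice, and the channel taps are arbitrary toy values.

```python
import numpy as np

def tr_prefilter(h):
    """Time-reversal transmit filter: conjugated, time-reversed CIR."""
    g = np.conj(h[::-1])
    return g / np.linalg.norm(g)          # unit-energy normalization (assumption)

h = np.array([0.8 + 0.1j, 0.4 - 0.3j, 0.2 + 0.2j])      # toy 3-tap CIR, assumed known at Tx
g = tr_prefilter(h)
x = np.random.randn(64) + 1j * np.random.randn(64)       # arbitrary baseband samples
x_tr = np.convolve(x, g)                                 # TR-precoded transmit signal
# At the receiver, the signal effectively crosses h convolved with g (channel autocorrelation).
```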
B. Channel Model
We consider a Wide Sense Stationary Uncorrelated Scattering (WSSUS) channel in order to have more insights on the TR-POPS-OFDM performances in the general case. To simplify the derivations, we consider a discrete time system. We denote by T s the sampling period and by R s = 1 Ts the sampling rate. We suppose that the channel is composed of K paths and that the Tx has a perfect knowledge of the channel state at any time. Note that this hypothesis is realistic in the case of low Doppler spread. Let
h^(p) = (h^(p)_0, h^(p)_1, ..., h^(p)_{K-1})^T be the discrete version of the channel at instant p, where h^(p)_l = Σ_{m=0}^{M-1} h_{lm} e^{j2πν_{lm} p T_s} is the path corresponding to a delay l T_s, M is the number of Doppler rays, and h_{lm} and ν_{lm} denote respectively the amplitude and the Doppler frequency of the l-th path and the m-th Doppler ray. The ray amplitudes, h_{lm}, are supposed to be centered, independent and identically distributed complex Gaussian variables with average powers π_{lm} = E[|h_{lm}|²]. We denote by π_l = Σ_{m=0}^{M-1} π_{lm}, with Σ_{l=0}^{K-1} π_l = 1.
The channel time reversed version at the instant p can be written as:
g^(p) = (h^(p)*_{K-1}, ..., h^(p)*_1, h^(p)*_0)^T.    (1)
When we apply the TR technique at the Tx in a Single Input Single Output (SISO) system, the equivalent channel, experienced by the transmission at time instant pT s at the Rx, could be seen as the convolution between the channel and its time reversed version, as follows:
H^(p) = h^(p) ⊗ g^(p) = (H^(p)_{-(K-1)}, ..., H^(p)_{-1}, H^(p)_0, H^(p)_1, ..., H^(p)_{K-1})^T,    (2)
where H^(p)_k = Σ_{l=0}^{K-1-|k|} Σ_{m,m'=0}^{M-1} f(k, l, m, m'), with
f(k, l, m, m') = h*_{lm} h_{l+k,m'} e^{j2π(ν_{l+k,m'} - ν_{lm}) p T_s}, if k ≥ 0,
f(k, l, m, m') = h_{lm} h*_{l-k,m'} e^{-j2π(ν_{l-k,m'} - ν_{lm}) p T_s}, otherwise.
It should be noted that H^(p)_{-k} = H^(p)*_k, which means that the channel is Hermitian symmetric, and that the equivalent aggregate channel coefficients, H^(p)_k, are still decorrelated, as in the actual channel.
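The Hermitian symmetry of the equivalent channel can be checked numerically; the short sketch below uses a static channel snapshot (no Doppler variation within the snapshot, which is an assumption) and the unnormalized prefilter of Eq. (1).

```python
import numpy as np

h = np.array([0.8 + 0.1j, 0.4 - 0.3j, 0.2 + 0.2j])        # channel snapshot h^(p)
g = np.conj(h[::-1])                                       # time-reversed conjugate, Eq. (1)
H = np.convolve(h, g)                                      # equivalent channel, Eq. (2)
K = len(h)
assert np.allclose(H, np.conj(H[::-1]))                    # Hermitian symmetry: H_{-k} = H_k^*
assert np.isclose(H[K - 1].real, np.sum(np.abs(h) ** 2))   # central tap H_0 = ||h||^2 (real)
```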
C. OFDM System
In this paper, we consider a discrete time version of the waveforms to simplify the theoretical derivations that will be investigated.
Let T and F refer to the OFDM symbol duration and the frequency separation between two adjacent subcarriers, respectively. The sampling period is equal to T_s = T/N, where N ∈ N. We denote by δ = 1/(FT) = Q/N the time-frequency lattice density, where Q = 1/(T_s F) ≤ N is the number of subcarriers. We denote by e = (e_q)_q the sampled version of the transmitted signal at time q T_s, with a sampling rate R_s = 1/T_s, expressed as:
e = Σ_{m,n} a_{mn} ϕ_{mn},
where ϕ_{mn} = (ϕ_{q-nN})_q ⊙ (e^{j2πmq/Q})_q is the time and frequency shifted version of the OFDM transmit prototype waveform, ϕ = (ϕ_q)_q, used to transmit the symbol a_{mn}. We suppose that the transmitted symbols are decorrelated, with zero mean and energy equal to E = E[|a_{mn}|²] ||ϕ||².
The received signal is expressed as:
r = Σ_{m,n} a_{mn} ϕ̃_{mn} + n,    (4)
where [ϕ̃_{mn}]_q = Σ_{k=-(K-1)}^{K-1} H^(q)_k [ϕ_{mn}]_{q-k} is the channel-distorted version of ϕ_{mn} and n = (n_q)_q is a discrete complex Additive White Gaussian Noise (AWGN), with zero mean and variance N_0.
The decision variable, denoted Λ_{kl}, on the transmitted symbol a_{kl} is obtained by projecting r on the receive pulse ψ_{kl}, such that:
Λ_{kl} = ⟨ψ_{kl}, r⟩ = ψ^H_{kl} r,    (5)
where ψ_{kl} = (ψ_{q-lN})_q ⊙ (e^{j2πkq/Q})_q is the time and frequency shifted version of the OFDM receive prototype waveform ψ = (ψ_q)_q and ⟨·,·⟩ is the Hermitian scalar product over the space of square-summable vectors.
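To make Eqs. (3)-(5) concrete, the sketch below synthesizes a short multicarrier burst from a Gaussian prototype waveform and recovers one decision variable by projection on the receive pulse; the prototype shape, lattice sizes and BPSK symbols are illustrative choices, and residual ISI/ICI remains since the prototypes are not orthogonalized here.

```python
import numpy as np

Q, N, n_sym = 16, 18, 4                       # subcarriers, time shift (samples), OFDM symbols
L = 3 * N                                     # prototype support (D = 3T, illustrative)
q = np.arange(L)
phi = np.exp(-0.5 * ((q - L / 2) / (N / 2)) ** 2)    # Gaussian transmit prototype
phi /= np.linalg.norm(phi)
psi = phi.copy()                              # receive prototype (same shape, for simplicity)

a = (2 * np.random.randint(0, 2, (Q, n_sym)) - 1).astype(complex)   # BPSK symbols a_{m,n}
e = np.zeros(L + (n_sym - 1) * N, dtype=complex)
for m in range(Q):
    for n in range(n_sym):
        e[n * N:n * N + L] += a[m, n] * phi * np.exp(2j * np.pi * m * (q + n * N) / Q)

k, l = 2, 1                                   # symbol of interest
psi_kl = psi * np.exp(2j * np.pi * k * (q + l * N) / Q)
Lam_kl = np.vdot(psi_kl, e[l * N:l * N + L])  # decision variable, Eq. (5), noiseless case
```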
IV. TR-POPS ALGORITHM
The main objective of this part is to optimize the waveforms at the Tx/Rx sides in our system based on the TR technique. To this end, we adopt the POPS-OFDM principle [START_REF] Siala | Novel Algorithms for Optimal Waveforms Design in Multicarrier Systems[END_REF]. This algorithm consists in maximizing the SINR for fixed synchronization imperfections and propagation channel.
Without loss of generality, we will focus on the SINR evaluation for the symbol a 00 . Referring to (5), the decision variable on a 00 can be written as:
Λ_00 = a_00 ⟨ψ_00, ϕ̃_00⟩ + Σ_{(m,n)≠(0,0)} a_{mn} ⟨ψ_00, ϕ̃_{mn}⟩ + ⟨ψ_00, n⟩,
and it is composed of three terms. The first term is the useful part, the second term is the interference (ISI and ICI) and the last term is the noise term. Their respective average powers are the useful signal, interference and noise powers entering the SINR, whose closed-form expressions we derive in the sequel. This SINR is the same for all other transmitted symbols.
A. Average Useful, Interference and Noise Powers
The useful term is denoted U_00 = a_00 ⟨ψ_00, ϕ̃_00⟩. For a given realization of the channel, the average power of the useful term can be written as:
P^h_S = (E/||ϕ||²) |⟨ψ_00, ϕ̃_00⟩|².
Thus, the useful power average over channel realizations is given by:
P_S = E[P^h_S] = (E/||ϕ||²) E[|⟨ψ_00, ϕ̃_00⟩|²].    (6)
The interference term, I_00 = Σ_{(m,n)≠(0,0)} a_{mn} ⟨ψ_00, ϕ̃_{mn}⟩, results from the contribution of all other transmitted symbols a_{mn}, with (m,n) ≠ (0,0). For a given realization of the channel, the average power of the interference term can be written as:
P^h_I = (E/||ϕ||²) Σ_{(m,n)≠(0,0)} |⟨ψ_00, ϕ̃_{mn}⟩|².
Therefore, the interference power average over channel realizations has the following expression:
P_I = E[P^h_I] = (E/||ϕ||²) Σ_{(m,n)≠(0,0)} E[|⟨ψ_00, ϕ̃_{mn}⟩|²],    (7)
where
E[|⟨ψ_00, ϕ̃_{mn}⟩|²] = ψ^H E[ϕ̃_{mn} ϕ̃^H_{mn}] ψ.    (8)
In the sequel, we consider a diffuse scattering function in the frequency domain, with a classical Doppler spectral density, decoupled from the dispersion in the time domain. So,
E[[ϕ̃_{mn}]_p [ϕ̃_{mn}]*_q] = Σ_{k=-(K-1)}^{K-1} Π_k J²_0(πB_d T_s (p-q)) [ϕ_{mn}]_{p-k} [ϕ_{mn}]*_{q-k},
where B_d is the Doppler spread, J_0(·) is the Bessel function of the first kind of order zero and
Π_k = Σ_{l=0}^{K-1} π²_l + (Σ_{l=0}^{K-1} π_l)², if k = 0,
Π_k = Σ_{l=0}^{K-1-|k|} π_l π_{l+|k|}, otherwise,
is the average power of the global channel. Then the average useful and interference powers have the following expressions:
P_S = (E/||ϕ||²) ψ^H KS_ϕ ψ  and  P_I = (E/||ϕ||²) ψ^H KI_ϕ ψ,    (9)
where KS_ϕ and KI_ϕ are Hermitian, positive semidefinite matrices:
KS_ϕ = Σ_{k=-(K-1)}^{K-1} Π_k Σ_k(ϕϕ^H) ⊙ Λ    (10)
and
KI_ϕ = Σ_n Σ_{nN}( Σ_{k=-(K-1)}^{K-1} Π_k Σ_k(ϕϕ^H) ⊙ Ω ) - KS_ϕ.    (11)
The entries of the matrices Λ and Ω are defined as:
Λ_{pq} = J²_0(πB_d T_s (p-q))
and
Ω_{pq} = Q J²_0(πB_d T_s (p-q)), if (p-q) mod Q = 0, and 0 otherwise, with p, q ∈ Z.
The noise term is given by N_00 = ⟨ψ_00, n⟩. Thus, the noise power average is the following:
P_N = E[|⟨ψ_00, n⟩|²] = ψ^H E[nn^H] ψ.
As the noise is supposed to be white, its covariance matrix is equal to R_nn = E[nn^H] = N_0 I, where I is the identity matrix. Consequently,
P_N = N_0 ||ψ||².    (12)
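For numerical experimentation, the matrices of Eqs. (10)-(12) can be built on a finite support; the sketch below is a truncated implementation (finite waveform support and a finite number of time shifts), which is only an approximation of the infinite-matrix formulation, with parameters left to the caller.

```python
import numpy as np
from scipy.special import j0

def shift_diag(M, k):
    """Sigma_k operator: (Sigma_k M)[p, q] = M[p - k, q - k], zero-padded."""
    out = np.zeros_like(M)
    L = M.shape[0]
    if abs(k) >= L:
        return out
    if k >= 0:
        out[k:, k:] = M[:L - k, :L - k]
    else:
        out[:L + k, :L + k] = M[-k:, -k:]
    return out

def Pi_from_pi(pi):
    """Average tap powers Pi_k of the TR-equivalent channel from the profile pi_l."""
    K = len(pi)
    Pi = np.zeros(2 * K - 1)
    for k in range(-(K - 1), K):
        if k == 0:
            Pi[K - 1] = np.sum(pi ** 2) + np.sum(pi) ** 2
        else:
            Pi[k + K - 1] = np.sum(pi[:K - abs(k)] * pi[abs(k):])
    return Pi

def pops_matrices(phi, Pi, Q, N, Bd_Ts, E_N0):
    """Truncated KS_phi and KIN_phi of Eqs. (10)-(12) on the support of phi."""
    L = len(phi)
    K = (len(Pi) + 1) // 2
    A = np.outer(phi, phi.conj())
    C = sum(Pi[k + K - 1] * shift_diag(A, k) for k in range(-(K - 1), K))
    d = np.subtract.outer(np.arange(L), np.arange(L))          # d[p, q] = p - q
    Lam = j0(np.pi * Bd_Ts * d) ** 2                            # Lambda entries
    Om = np.where(d % Q == 0, Q * Lam, 0.0)                     # Omega entries
    KS = C * Lam
    n_max = L // N + 1
    S = sum(shift_diag(C, n * N) for n in range(-n_max, n_max + 1))
    KI = S * Om - KS
    KIN = KI + (1.0 / E_N0) * np.linalg.norm(phi) ** 2 * np.eye(L)
    return KS, KIN
```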
B. Optimization Technique
The SINR expression is the following:
SINR = P_S / (P_I + P_N) = (ψ^H KS_ϕ ψ) / (ψ^H KIN_ϕ ψ),    (13)
where
KIN_ϕ = KI_ϕ + (N_0/E) ||ϕ||² I.
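As a reminder (a standard linear-algebra result, not taken from the paper), the receive pulse maximizing this ratio is the principal generalized eigenvector:

```latex
\max_{\boldsymbol{\psi}\neq\mathbf{0}}
\frac{\boldsymbol{\psi}^{H}\,\mathbf{KS}_{\boldsymbol{\varphi}}\,\boldsymbol{\psi}}
     {\boldsymbol{\psi}^{H}\,\mathbf{KIN}_{\boldsymbol{\varphi}}\,\boldsymbol{\psi}}
=\lambda_{\max}\!\bigl(\mathbf{KIN}_{\boldsymbol{\varphi}}^{-1}\mathbf{KS}_{\boldsymbol{\varphi}}\bigr),
\qquad
\mathbf{KS}_{\boldsymbol{\varphi}}\,\boldsymbol{\psi}_{\mathrm{opt}}
=\lambda_{\max}\,\mathbf{KIN}_{\boldsymbol{\varphi}}\,\boldsymbol{\psi}_{\mathrm{opt}}.
```

Since KIN_ϕ is Hermitian positive definite, this quotient is a generalized Rayleigh quotient and its maximum is attained at the eigenvector associated with λ_max, which is exactly what the eig(·) operation of the algorithm described below returns.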
Our optimization technique is an iterative algorithm where we maximize alternately over the Rx waveform ψ, for a given Tx waveform ϕ, and over the Tx waveform ϕ, for a given Rx waveform ψ.
Note that (13) can also be written as:
SINR = (ϕ^H KS_ψ ϕ) / (ϕ^H KIN_ψ ϕ),    (14)
where KS_ψ and KIN_ψ are expressed as:
KS_ψ = Σ_{k=-(K-1)}^{K-1} Π_k Σ_k(ψψ^H) ⊙ Λ    (15)
and
KIN_ψ = KI_ψ + (N_0/E) ||ψ||² I    (16)
with
KI_ψ = Σ_n Σ_{nN}( Σ_{k=-(K-1)}^{K-1} Π_k Σ_k(ψψ^H) ⊙ Ω ) - KS_ψ.    (17)
Thus, the optimization problem is equivalent to maximizing a generalized Rayleigh quotient. As (N_0/E) ||ϕ||² > 0, we can affirm that KIN_ϕ is always invertible and relatively well-conditioned.
The main steps of the proposed algorithm, presented by Figure 1, are the following:
• Step 1: We initialize the algorithm with ϕ^(0).
• Step 2: At iteration (i), we compute ψ^(i) as the eigenvector of (KIN_ϕ^(i))^-1 KS_ϕ^(i) with maximum eigenvalue.
• Step 3: For the obtained ψ^(i), we determine ϕ^(i+1) as the eigenvector of (KIN_ψ^(i))^-1 KS_ψ^(i) with maximum eigenvalue.
• Step 4: We proceed to the next iteration, (i+1).
• Step 5: We stop the iterations when we obtain a negligible variation of the SINR. We note that eig, used in Figure 1, is a function that returns the eigenvector of a square matrix associated with the largest eigenvalue.
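A compact sketch of this ping-pong loop is given below; it reuses the pops_matrices and Pi_from_pi helpers sketched earlier (an assumption of this illustration), relies on scipy's generalized Hermitian eigensolver, and uses toy lattice and channel parameters rather than the exact simulation settings of Section V.

```python
import numpy as np
from scipy.linalg import eigh

Q, N, K = 16, 18, 4                                    # toy lattice and channel sizes
L = 3 * N                                              # waveform support D = 3T
pi = np.exp(-np.arange(K) / 2.0)                       # truncated exponential power profile
pi /= pi.sum()
Pi = Pi_from_pi(pi)                                    # from the previous sketch
Bd_Ts, E_N0 = 1e-4, 1e3                                # assumed normalized Doppler and E/N0

q = np.arange(L)
phi = np.exp(-0.5 * ((q - L / 2) / (N / 2)) ** 2)      # Step 1: Gaussian initialization
phi /= np.linalg.norm(phi)

sinr_prev = -np.inf
for it in range(50):
    KS, KIN = pops_matrices(phi, Pi, Q, N, Bd_Ts, E_N0)
    w, V = eigh(KS, KIN)                               # Step 2: generalized eigenproblem
    psi = V[:, -1] / np.linalg.norm(V[:, -1])          # best Rx pulse for current phi
    KS, KIN = pops_matrices(psi, Pi, Q, N, Bd_Ts, E_N0)
    w, V = eigh(KS, KIN)                               # Step 3: best Tx pulse for current psi
    phi = V[:, -1] / np.linalg.norm(V[:, -1])
    sinr = w[-1]                                       # current SINR estimate
    if sinr - sinr_prev < 1e-6:                        # Step 5: negligible SINR variation
        break
    sinr_prev = sinr
```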
V. SIMULATION RESULTS
In this section, the performances of the proposed TR-POPS technique are evaluated. To show the gain in terms of SINR and Power Spectral Density (PSD), a comparison with POPS-OFDM and conventional OFDM with TR is also realized.
The results of the POPS-OFDM algorithm applied to our system based on the TR technique are carried out for a discrete time-frequency lattice. The optimal Tx/Rx waveform couple maximizing the SINR, (ϕ_opt, ψ_opt), is evaluated for a Gaussian initialization waveform ϕ^(0). We presume having a truncated exponential decaying model. Figure 2 presents the evolution of the SINR versus the normalized Doppler spread B_d/F for a normalized channel delay spread T_m/T, where Q = 128, N = 144, the lattice density is equal to 8/9 and the waveform support duration is D = 3T. The obtained results demonstrate that the TR-POPS-OFDM approach improves the SINR with a gain of 2.3 dB for B_d/F = 0.1 compared with POPS-OFDM and a gain that can reach 5.2 dB for B_d/F = 0.02 compared with conventional OFDM with TR. Moreover, this figure is a means to find the adequate couple (T, F) for an envisaged application to ensure the desired transmission quality. Figure 3 illustrates the effect of TR by showing the evolution of the SINR with respect to the time-frequency parameter FT. As in Figure 2, our proposed system outperforms the POPS-OFDM system and conventional OFDM with TR. The presented results reveal an increase in the obtained SINR that can reach 1.45 dB for FT = 1 + 8/128 compared with POPS-OFDM and an increase of 4.5 dB for FT = 1 + 48/128 compared to conventional OFDM with TR. Figure 4 shows that, thanks to the TR technique, the obtained optimal transmit waveform, ϕ_opt, reduces the OOB emissions by about 40 dB compared to the POPS-OFDM system without TR. We present in Figure 5 the Tx/Rx waveforms, ϕ_opt and ψ_opt, corresponding to the optimal SINR for Q = 128, N = 144, FT = 1 + 16/128, B_d T_m = 0.001 and D = 3T. Since the channel is characterized by a Hermitian symmetric response thanks to the TR effect, we obtain identical Tx/Rx waveforms, as illustrated in this figure.
VI. CONCLUSION
In this paper, we studied the association of POPS-OFDM algorithm with TR precoding technique to design novel waveforms for 5G systems. To this end, we presented the corresponding system model and we derived the analytical SINR expression. Despite the additional complexity of applying the combination process, simulation results showed that the proposed approach offers a highly flexible behavior and better performances in terms of maximization of the SINR and reduction of the ISI/ICI. Another possible challenging research axis consists in applying this combination in MIMO-OFDM and FBMC/OQAM systems.
Figure 1: Optimization philosophy.
Figure 2: Optimized SINR as a function of B_d/F for Q = 128, SNR = 30 dB, B_d T_m = 10^-3 and D = 3T.
Figure 3: SINR versus FT for Q = 128, SNR = 30 dB, B_d T_m = 10^-2 and D = 3T.
Figure 4: PSD of the optimized transmit waveform for Q = 128, SNR = 30 dB, B_d T_m = 10^-3, FT = 1.25 and D = 3T.
Figure 5: Tx/Rx optimized waveforms for D = 3T.
"172049",
"13113"
] | [
"23718",
"23718",
"23718",
"117606",
"185974",
"105160"
] |
01763324 | en | ["phys", "sdv"] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01763324/file/Verezhak_ACOM_ActaBiomater_Hal.pdf | M Verezhak
E F Rauch
M Véron
C Lancelon-Pin
J.-L Putaux
M Plazanet
A Gourrier
Ultrafine heat-induced structural perturbations of bone mineral at the individual nanocrystal level
Keywords: Bone, mineral nanocrystals, hydroxyapatite, TEM, electron diffraction, heating effects
The nanoscale characteristics of the mineral phase in bone tissue such as nanocrystal size, organization, structure and composition have been identified as potential markers of bone quality. However, such characterization remains challenging since it requires combining structural analysis and imaging modalities with nanoscale precision. In this paper, we report the first application of automated crystal orientation mapping using transmission electron microscopy (ACOM-TEM) to the structural analysis of bone mineral at the individual nanocrystal level. By controlling the nanocrystal growth of a cortical bovine bone model artificially heated up to 1000 ºC, we highlight the potential of this technique. We thus show that the combination of sample mapping by scanning and the crystallographic information derived from the collected electron diffraction patterns provides a more rigorous analysis of the mineral nanostructure than standard TEM. In particular, we demonstrate that nanocrystal orientation maps provide valuable information for dimensional analysis. Furthermore, we show that ACOM-TEM has sufficient sensitivity to distinguish between phases with close crystal structures and we address unresolved questions regarding the existence of a hexagonal to monoclinic phase transition induced by heating. This first study therefore opens new perspectives in bone characterization at the nanoscale, a daunting challenge in the biomedical and archaeological fields, which could also prove particularly useful to study the mineral characteristics of tissue grown at the interface with biomaterials implants.
Introduction.
Bone tissue is a biological nanocomposite material essentially composed of hydrated collagen fibrils of ~100 nm in diameter and up to several microns in length, reinforced by platelet-shaped nanocrystals of calcium phosphate apatite of ~ 4×25×50 nm 3 in size [START_REF] Weiner | The Material Bone: Structure-Mechanical Function Relations[END_REF]. These mineralized fibrils constitute the building blocks of bone tissue, and their specific arrangement is known to depend primarily on the dynamics of the formation and repair processes. Since these cellular processes can occur asynchronously in space and time, the mineralized fibrils adopt a complex hierarchical organization [START_REF] Weiner | The Material Bone: Structure-Mechanical Function Relations[END_REF], which was shown to be a major determinant of the macroscopic biomechanical properties [START_REF] Zimmermann | Intrinsic mechanical behavior of femoral cortical bone in young, osteoporotic and bisphosphonate-treated individuals in low-and high energy fracture conditions[END_REF]. Extensive research programs are therefore currently focused on bone ultrastructure for biomedical diagnoses or tissue engineering applications.
However, structural studies at the most fundamental scales remain challenging due to the technical difficulties imposed by nanoscale measurements and by the tissue heterogeneity. Nevertheless, as a natural extension of bone mineral density (BMD) analysis, an important marker in current clinical studies, the following key characteristics of the mineral nanocrystals have been identified as potential markers of age and diseases: chemical composition, crystallinity vs disorder, crystal structure, size, shape and orientation [START_REF] Matsushima | Age changes in the crystallinity of bone mineral and in the disorder of its crystal[END_REF][START_REF] Boskey | Variations in bone mineral properties with age and disease[END_REF]. Recent progress in the field showed that in order to obtain a deeper medical insight into the mechanisms of bone function, several such parameters need to be combined and correlated to properties at larger length scales [START_REF] Granke | Microfibril Orientation Dominates the Microelastic Properties of Human Bone Tissue at the Lamellar Length Scale[END_REF]. Interestingly, from a totally different point of view, the archaeological community has drawn very similar conclusions concerning nanoscale studies for the identification, conservation and restoration of bone remains and artifacts [START_REF] Chadefaux | Archaeological Bone from Macro-to Nanoscale: Heat-Induced Modifications at Low Temperatures[END_REF].
From a materials science perspective, this is a well identified challenge in the analysis of heterogeneous nanostructured materials. Yet, technically, a major difficulty stems from the fact that most of the identified nanostructural bone markers require individual measurements on dedicated instruments which are generally difficult to combine in an integrative approach.
One 'gold standard' in nanoscale bone characterization is X-ray diffraction (XRD), which allows determining atomic-scale parameters averaged over the total volume illuminated by the X-ray beam. An important result from XRD studies conducted with laboratory instruments is that the bone mineral phase has, on average, a poorly crystalline apatite structure which, to a certain extent, is induced by a high fraction of carbonate substitutions [START_REF] De | Le substance minerale dans le os[END_REF]. Such studies enabled localization of a substantial number of elements other than calcium and phosphorus present in bone via ionic substitutions [START_REF] Wopenka | A mineralogical perspective on the apatite in bone[END_REF], which can lead to serious pathological conditions, e.g. skeletal fluorosis [START_REF] Boivin | Fluoride content in human iliac bone: Results in controls, patients with fluorosis, and osteoporotic patients treated with fluoride[END_REF]. When an average description of bone properties is insufficient, synchrotron X-ray beams focused to a typical diameter of 0.1 -10 µm [START_REF] Schroer | Hard x-ray nanoprobe based on refractive x-ray lenses[END_REF][START_REF] Fratzl | Position-Resolved Small-Angle X-ray Scattering of Complex Biological Materials[END_REF][START_REF] Paris | From diffraction to imaging: New avenues in studying hierarchical biological tissues with x-ray microbeams (Review)[END_REF] operated in scanning mode allow mapping the microstructural heterogeneities. However, this remains intrinsically an average measurement and the current instrumentation limits prevent any analysis at the single mineral nanocrystal level.
Transmission electron microscopy (TEM) is a second 'gold standard' in bone characterization at the nanoscale. In high resolution mode, it allows reaching sub-angstrom resolution [START_REF] Xin | HRTEM Study of the Mineral Phases in Human Cortical Bone[END_REF] and therefore provides atomic details of the crystals. This increased resolution comes at the cost of the image field of view, which may not provide representative results due to the tissue heterogeneity. This limitation can partly be alleviated in scanning mode, which is more adapted to the collection of a large amount of data for statistical usage. In particular, for the process known as Automated Crystal Orientation Mapping (ACOM-TEM) [START_REF] Rauch | Automated crystal orientation and phase mapping in TEM[END_REF], diffraction patterns are systematically acquired while the electron beam is scanning micron-sized areas, such that the structural parameters of hundreds of individual nanocrystals may be characterized and used to reconstruct orientation maps with nanometer spatial resolution.
To our best knowledge, the present study is the first reported use of the ACOM-TEM method to analyze mineral nanocrystals in bone tissue. To demonstrate the potential of this technique for bone studies, a test object is required which structure should be as close as possible to native bone while offering a wide range of nanocrystal dimensions. Heated bone provides an ideal model for such purposes, ensuring a tight control over the nanocrystal size by adjusting the temperature.
This system was extensively studied in archeological and forensic contexts. Upon heating to 100-150 °C, bone is progressively dehydrated [START_REF] Legeros | Types of 'H2O' in human enamel and in precipitated apatites[END_REF] and collagen is considered to be fully degraded at ~ 400 °C [START_REF] Kubisz | Differential scanning calorimetry and temperature dependence of electric conductivity in studies on denaturation process of bone collagen[END_REF][START_REF] Etok | Structural and chemical changes of thermally treated bone apatite[END_REF]. Most X-ray studies concluded an absence of mineral crystal structure modifications before 400 °C, while a rapid crystal growth has been reported at ~ 750 °C [START_REF] Rogers | An X-ray diffraction study of the effects of heat treatment on bone mineral microstructure[END_REF][START_REF] Hiller | Bone mineral change during experimental heating: An X-ray scattering investigation[END_REF][START_REF] Piga | A new calibration of the XRD technique for the study of archaeological burned human remains[END_REF]. In a recent study we provided evidence that the mineral nanocrystals increase in size and become more disorganized at temperatures as low as 100 °C [START_REF] Gourrier | Nanoscale modifications in the early heating stages of bone are heterogeneous at the microstructural scale[END_REF]. In addition, many debates remain open concerning the nature of a postulated high temperature phase transition, the coexistence of different crystallographic phases, as well as the presence of ionic defects above and below the critical temperature of Tcr = 750 °C [START_REF] Greenwood | Initial observations of dynamically heated bone[END_REF]. The heated bovine cortical bone model therefore presents two main advantages to assess the potential of ACOM-TEM: 1) the possibility to fine-tune the mineral nanocrystal size upon heating and 2) the existence of a phase transition at high temperatures.
Using a set of bovine cortical bone samples in a control state and heated at eight temperatures ranging from 100 to 1000 °C, we show that ACOM-TEM provides enough sensitivity to probe fine crystalline modifications induced by heating; in particular, nanocrystal growth, subtle changes in stoichiometry and space group. Those results provide new insight into the detailed effects of heating on bone and validate the use of ACOM-TEM for fundamental studies of the nanoscale organization of bone tissue in different contexts.
Materials and methods.
Sample preparation: A bovine femur was obtained from the local slaughterhouse (ABAG, Fontanil-Cornillon, France). The medial cortical quadrant of a femoral section from the mid-diaphysis was extracted with a high precision circular diamond saw (Mecatome T210, PRESI) and fixed in ethanol 70 % for 10 days (supplementary information, Fig. S1). Nine 2×2×10 mm 3 blocks were cut in the longitudinal direction and subsequently dehydrated (48 hours in ethanol 70 % and 100 %) and slowly dried in a desiccator. One block was used as a control, while the others were heated to eight temperatures: 100, 200, 300, 400, 600, 700, 800 and 1000 °C for 10 min in vacuum (10 -2 mbar) inside quartz tube and cooled in air. The temperature precision of the thermocouple was ~ 2-3 °C and the heating rate was ~ 30-40 °C/min. The heating process resulted in color change, as shown in Fig. S2 of supplementary information. The samples were then embedded in poly-methyl methacrylate (PMMA) resin following the subsequent steps: impregnation, inclusion and solidification. For impregnation, a solution of methyl methacrylate (MMA) was purified by aluminum oxide and a solution of MMA was prepared with dibutyl phthalate in a 4:1 proportion (MMA1). The samples were kept at 4 °C in MMA1 for 5 days. For inclusion, the samples were stored in MMA1 solution with 1 w% of benzoyl peroxide for 3 days and in MMA1 solution with 2 w% of benzoyl peroxide for 3 days. The solidification took place in PTFE flat embedding molds covered by ACLAR film at 32 °C for 48 h. The resin-embedded blocks were then trimmed and cut with a diamond knife in a Leica UC6 ultramicrotome. The 50-nm-thick transverse sections (i.e., normal to the long axis of the femur) were deposited on 200 mesh Cu TEM grids coated with lacey carbon.
TEM data acquisition:
The measurements were performed using a JEOL 2100F FEG-TEM (Schottky ZrO/W field emission gun) operating at an accelerating voltage of 200 kV and providing an electron beam focused to 2 nm in diameter at the sample position. A camera was positioned in front of the TEM front window to collect diffraction patterns as a function of scanning position with a frame rate of 100 Hz. The regions of interest were first selected in standard bright-field illumination (supplementary Fig. S3). The field of view for ACOM acquisition was chosen to be 400 × 400 nm² with a 10 ms acquisition time and a 2 nm step size. A sample-to-camera distance of 30 cm was chosen for all samples, except for the larger crystals treated at 800 and 1000 °C, for which a camera length of 40 cm was used, applying a precession angle of 1.2° at a frequency of 100 Hz in order to minimize dynamical effects [START_REF] Rauch | Automated crystal orientation and phase mapping in TEM[END_REF]. Following distortion and camera length corrections, a virtual bright-field image was reconstructed numerically by selecting only the transmitted beam intensities.
Radiation damage assessment:
No severe radiation damage was observed during ACOM-TEM data acquisition. This was assessed by independent bright-field acquisitions in the region close to the one scanned by ACOM-TEM for each heat-treatment temperature. These measurements were performed under the same conditions but with a smaller spot size of 0.7 nm (i.e. with a higher radiation dose) and were repeated 25 times to emphasize potential damage. Examples of bright-field images before the first and last (25th) frames are shown in Fig. S5 (supplementary information), showing very limited radiation damage.
ACOM-TEM analysis:
The data analysis relies on the comparison between the electron diffraction patterns collected at every scan position and simulated patterns (templates) calculated for a given crystal structure in all possible orientations [START_REF] Rauch | Automated crystal orientation and phase mapping in TEM[END_REF][START_REF] Rauch | Rapid spot diffraction patterns idendification through template matching[END_REF], thus allowing the reconstruction of crystal orientation maps (Fig. 1). The template matching was performed using the ASTAR software package from NanoMEGAS SPRL. In its native state, bone mineral is calcium phosphate close to a well-known hydroxyapatite, Ca 10 (PO 4 ) 6 (OH) 2 , a subset of the widespread geological apatite minerals [START_REF] Wopenka | A mineralogical perspective on the apatite in bone[END_REF]. Hence, our initial model for the crystal structure is a hexagonal space group (P63/m) with 44 atoms per unit cell and lattice parameters of a = 9.417 Å; c = 6.875 Å [START_REF] Hughes | Structural variations in natural F, OH, and Cl apatites[END_REF]. Every i-th acquired diffraction pattern collected at position (x i ,y i ) was compared to the full set of templates through image correlation (template matching) and the best fit gave the most probable corresponding crystallographic orientation. This first result can thus be represented in the form of a color map representing the crystalline orientation (Fig. 1f). To assess the quality of the fit, a second map of the correlation index Q i can be used, defining Q i as:
Q_i = \frac{\sum_{j=1}^{m} P(x_j, y_j)\, T_i(x_j, y_j)}{\sqrt{\sum_{j=1}^{m} P^2(x_j, y_j)}\; \sqrt{\sum_{j=1}^{m} T_i^2(x_j, y_j)}}
where P(x j ,y j ) is the intensity of measured diffraction patterns and T i (x,y) corresponds to the intensity in every ith template. Q i compares the intensities of the reflection contained in the diffraction pattern, denoted by the function P(x,y), to the corresponding modeled intensities T i (x,y) in every i-th template in order to select the best match [START_REF] Rauch | Rapid spot diffraction patterns idendification through template matching[END_REF]. The degree of matching is represented in an 'index map' that plots the highest matching index at every location (Fig. 1g).
This parameter therefore weights the degree of correlation between the acquired and simulated diffraction patterns. If more than one phase is expected to be present, several sets of templates can simultaneously be fitted to the data in order to identify the best one. This allows constructing 'phase maps' in which each crystallographic phase is associated to a given color.
A critical aspect of the ACOM analysis is to judge the quality of the proposed crystal orientation. Indeed, it is worth emphasizing that the template matching algorithm always provides a solution, which requires evaluating the fidelity of the phase/orientation assignment, especially in the case of overlapping crystals. A reliability parameter R i was proposed to address this point. It is proportional to the ratio of the correlation indices for the two best solutions and is defined by:
R_i = 100 \left( 1 - \frac{Q_{i2}}{Q_{i1}} \right)
where Q i1 is the best solution (represented as a red circle on the stereographic projection in Fig. 1e) and Q i2 is the second best solution (shown with a green circle). Reliability values range between 0 (unsafe/black) and 100 (unique solution/white). In practice, a value above 15 is sufficient to ascertain the validity of the matching (Fig. 1h).
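As an illustration of how the correlation index and reliability parameter defined above can be evaluated, the following sketch computes Q for a bank of simulated templates and derives the reliability from the two best matches. This is only a schematic Python re-implementation, not the ASTAR code; array shapes and variable names are assumptions.

import numpy as np

def correlation_index(pattern, template):
    # Normalised correlation Q between a measured diffraction pattern
    # and one simulated template (both 2D intensity arrays).
    num = np.sum(pattern * template)
    den = np.sqrt(np.sum(pattern ** 2)) * np.sqrt(np.sum(template ** 2))
    return num / den

def match_templates(pattern, templates):
    # Return the index of the best-matching template, its correlation
    # index Q1, and the reliability R = 100 * (1 - Q2/Q1).
    q = np.array([correlation_index(pattern, t) for t in templates])
    order = np.argsort(q)[::-1]          # best match first
    q1, q2 = q[order[0]], q[order[1]]
    return order[0], q1, 100.0 * (1.0 - q2 / q1)

# Hypothetical usage: 'templates' holds one simulated pattern per candidate
# orientation; looping over all scan positions fills the orientation, index
# and reliability maps of the kind shown in Fig. 1f-h.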
Results.
Individual nanocrystal visualization.
Bone nanocrystal orientation maps were derived for the set of heated bone samples (Fig. 2) with the corresponding collective orientations on the 0001 stereographic projection. A moderate increase in crystal size was observed below 800 °C, followed by a rapid growth at higher temperatures with a crystal shape change from platelet to polyhedral.
An important advantage of ACOM over bright-field TEM is that, in the latter, size and geometry measurements such as in Fig. 1a are generally performed on the whole image. In our case, the nanocrystals can exhibit a broad distribution of orientations, such that size estimation is impractical due to the platelet-shaped crystal geometry and leads to overestimated values, as pointed out in earlier studies [START_REF] Ziv | Bone Crystal Sizes: A Comparison of Transmission Electron Microscopic and X-Ray Diffraction Line Width Broadening Techniques[END_REF]. The additional visualization of the nanocrystal orientation therefore allows restricting the dimensional analysis to crystals in the same orientation (the same color-code). For example, crystals displayed in red in Fig. 1f are oriented with their c-axis (longest axis) perpendicular to the scanning plane. The crystals displayed in green and blue have a 90° misorientation from this particular zone axis. While the colors are representative of the crystal orientations, the overall difference in the ratio of red vs green and blue colors at different temperatures reflects the spatial coherence in the phase index and, thus, the level of heterogeneity within a particular sample or between different samples. This fact is confirmed by additional scans with larger fields of view acquired for each sample of the temperature series (supplementary information, Fig. S4).
The crystallographic texture can be inferred from the stereographic projection along the 0001 direction showing that, below 800 °C, most regions consist of crystals mainly aligned along the c-axis, which is in good agreement with other larger scale XRD studies [START_REF] Voltolini | Hydroxylapatite lattice preferred orientation in bone: A study of macaque, human and bovine samples[END_REF]. At higher temperatures, we observed randomly oriented crystals resulting from a phase transition whose nature will be discussed in a later section.

Bone mineral nanocrystal size estimation.
The nanocrystal size was measured assuming two models: platelet (anisotropic) for the low temperature (LT) phase below 750 °C and polyhedral (isotropic) for the high temperature (HT) phase above 750 °C.
In the LT phase, the smallest platelet dimensions are obtained by line profiling (as displayed in Fig. 3a) of crystals having their c-axis aligned with the beam direction (represented by a red color). Crystal overlapping effects are not expected for this orientation as the long axis length of a platelet is comparable to the sample thickness. Therefore, the electron beam is predicted to pass through one crystal only, thus providing reliable size estimation.
For the HT phase, the nanocrystal size was determined using a spherical approximation (crystal diameters) based on the definition of grain boundaries, i.e. the locations where the misorientation between two pixels at the orientation map (Fig. 3c) is higher than a user selected threshold value (5° in the present case). An example of a grain boundary map for the 800 °C sample is shown in Fig. 3d. An average grain size is then estimated using a sphere diameter weighted by the grain's area in order to avoid misindexing from numerous small grains (mainly noise).
The grain size distribution was then obtained for the HT phase (Fig. 3e). According to the two described models for the nanocrystal sizes (summarized in Fig. 3b), we found that the smallest bone mineral particle size rises, on average, from 3.5 nm in the control state to 5.1 nm at 700 °C. Subsequently, the average particle diameter dramatically increases upon further heating: up to 70 nm at 800 °C and 94 nm at 1000 °C.
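A minimal sketch of this grain-size estimation, under simplifying assumptions: the orientation map is reduced to a single angle per pixel (a full treatment would use quaternion misorientations), pixels differing from a neighbour by more than the 5° threshold are marked as boundaries, connected regions are labelled as grains, and an area-weighted equivalent diameter is computed. The pixel size matches the 2 nm scan step; the labelling strategy is an illustrative choice, not the published implementation.

import numpy as np
from scipy import ndimage

def grain_sizes(orientation_deg, pixel_nm=2.0, threshold_deg=5.0):
    # Equivalent grain diameters (nm) and their area-weighted mean,
    # from a per-pixel orientation map given as angles in degrees.
    boundary = np.zeros(orientation_deg.shape, dtype=bool)
    boundary[:, :-1] |= np.abs(np.diff(orientation_deg, axis=1)) > threshold_deg
    boundary[:-1, :] |= np.abs(np.diff(orientation_deg, axis=0)) > threshold_deg

    labels, n = ndimage.label(~boundary)          # grains = connected regions
    areas_px = ndimage.sum(~boundary, labels, index=np.arange(1, n + 1))
    areas_nm2 = areas_px * pixel_nm ** 2
    diameters = 2.0 * np.sqrt(areas_nm2 / np.pi)  # equivalent disc diameter

    # weight by grain area so that tiny (noise) grains contribute little
    d_mean = np.sum(diameters * areas_nm2) / np.sum(areas_nm2)
    return diameters, d_mean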
Identification of the high temperature apatite phase.
While stoichiometric hydroxyapatite has a calcium-to-phosphate ratio of 1.67, bone mineral is known to accommodate ~ 7 wt.% of carbonate and numerous other ionic substitutions [START_REF] Wopenka | A mineralogical perspective on the apatite in bone[END_REF]. To test the sensitivity of ACOM-TEM to changes in stoichiometry and therefore of space group, we used the data set containing the largest grains (1000 °C). We compared the fits obtained with the hydroxyapatite template against five template structures of different minerals occurring in nature with similar chemical composition and stoichiometry: alpha-tricalcium phosphate (α-TCP, α-Ca3(PO4)2) (space group P21/a), beta-Ca pyrophosphate (β-Ca2P2O7) (P41), tetra-calcium phosphate (Ca4(PO4)2O) (P21), CaO (Fm-3m) and whitlockite (R3c), whose chemical compositions and structures are shown in Fig. 4. Those phases were previously encountered in synthetic apatites subjected to heat treatments and could, therefore, be potential candidates for heated bone [START_REF] Berzina-Cimdina | Research of Calcium Phosphates Using Fourier Transform Infrared Spectroscopy[END_REF]. Since these minerals have different space groups, all possible orientations are described by different fractions of the stereographic projection allowed by symmetry, as shown by the color map shapes in Fig. 4.
Two criteria can be used to conclude that the hydroxyapatite template (Fig. 4a) provides the best solution: 1) the highest index value, which characterizes the quality of the solution, is nearly twice as large for hydroxyapatite as for the other apatite structures; in addition, 2) a given particle is expected to be fitted with the same orientation if monocrystalline, resulting in a uniform color, which is only fulfilled for hydroxyapatite. This analysis provides a first proof-of-concept that ACOM-TEM has sufficient sensitivity to identify subtle variations of the crystal lattice that can be expected in highly disordered biological mineral structures, such as bone mineral nanocrystals. Other apatite minerals which are not expected to be found in bone tissue but have a stoichiometry and chemical composition close to hydroxyapatite, such as brushite (space group Ia), monetite (P-1) and tuite (R-3m) [START_REF] Schofield | The role of hydrogen bonding in the thermal expansion and dehydration of brushite, di-calcium phosphate dihydrate[END_REF][START_REF] Catti | Hydrogen bonding in the crystalline state. CaHPO4 (monetite), P1 or P1? A novel neutron diffraction study[END_REF][START_REF] Sugiyama | Structure and crystal chemistry of a dense polymorph of tricalcium phosphate Ca3 (PO4)2: A host to accommodate large lithophile elements in the earth's mantle[END_REF][START_REF] Calvo | The crystal structure of whitlockite from the Palermo quarry[END_REF], were also used to test the ACOM-TEM sensitivity. That is, if ACOM-TEM had resulted in an equal probability of finding these phases, it would clearly have suggested a lack of precision of the method. The analysis shows that this is not the case, i.e. these phases did not allow describing the bone data as well as hydroxyapatite (see Figure S5a-d in supplementary information).
Space group: monoclinic or hexagonal?
A common issue in the identification of a hydroxyapatite phase at different temperatures is the hypothesis of the existence of a hexagonal (P6 3 /m) to monoclinic (P2 1 /b) phase transition above T cr = 750 °C. The corresponding structures are shown in Fig. 5a. However, such a transition was mainly predicted by theoretical models [START_REF] Slepko | Hydroxyapatite: Vibrational spectra and monoclinic to hexagonal phase transition[END_REF][START_REF] Corno | Periodic ab initio study of structural and vibrational features of hexagonal hydroxyapatite Ca10(PO4)6(OH)2[END_REF] and was only observed in artificially synthesized hydroxyapatite [START_REF] Ma | Hydroxyapatite: Hexagonal or monoclinic?[END_REF][START_REF] Ikoma | Phase Transition of Monoclinic Hydroxyapatite[END_REF][START_REF] Suda | Monoclinic -Hexagonal Phase Transition in Hydroxyapatite Studied by X-ray Powder Diffraction and Differential Scanning Calorimeter Techniques[END_REF]. From the theoretical point of view, the monoclinic hydroxyapatite structure is thermodynamically more stable than the hexagonal one. Nevertheless, the hexagonal phase allows an easier exchange of OH-groups with other ions, which is necessary for bone tissue functions.
This issue was therefore addressed by matching the hydroxyapatite templates of the hexagonal [START_REF] Hughes | Structural variations in natural F, OH, and Cl apatites[END_REF] (Fig. 5b) and the monoclinic [START_REF] Elliott | Monoclinic hydroxyapatite[END_REF] (Fig. 5c) structures with the 1000 °C bone data set. The phase map in Fig. 5d represents the structure with the highest index at each scan point. Based on the index values, as well as on the uniform color-code criterion for single crystals, one can conclude that the hexagonal space group in the bone mineral HT phase is more probable than the monoclinic one.

Discussion.
Our results provide a first demonstration that a structural analysis is possible at the single nanocrystal level within bone tissue using ACOM-TEM. This constitutes a valuable improvement combining the advantages of selected area electron diffraction (SAED) and TEM. While SAED produces a global diffraction pattern from all the nanocrystals probed by an electron beam defined by a micron sized aperture, ACOM-TEM allows a detailed structural analysis within a similar field of view.
The use of the artificially heated bone model shows that the phase and orientation can be unambiguously determined for temperatures above ~ 400 o C where the crystal size is > 5 nm (Fig. 2). Even below this temperature, where the situation is less clear, since the scanning resolution given by the beam size (~ 2 nm) is close to the actual nanocrystals size (~ 4-5 nm), the observation of larger fields of view reveals the presence of coherently oriented domains in the control sample with characteristic sizes of ~ 100 -200 nm which are compatible with the diameter of collagen fibrils (supplementary information, Fig. S4). Such crystallographic information is not available from standard bright-field TEM and can be used to obtain more rigorous estimates of the nanocrystals dimensions. An important limitation of a bright-field TEM assessment of the thickness of the mineral platelets is that the nanocrystals can be sectioned in different orientation, hence resulting in an artificial broadening in projection [START_REF] Ziv | Bone Crystal Sizes: A Comparison of Transmission Electron Microscopic and X-Ray Diffraction Line Width Broadening Techniques[END_REF]. These artifacts can be avoided using ACOM-TEM by only considering the platelets oriented on-edge, i.e. with the c-axis normal to the observation plane (Fig. 3). Similarly, depending on the thickness of the sample section, several nanocrystals may partly overlap, which further complicates the analysis. This case can be avoided by discarding the areas corresponding to a low correlation index and reliability parameters, i.e. to a poor structural refinement.
Our analysis reveals three stages of crystal growth upon heating: a first, moderate growth (from ~ 3.5 to 5.1 nm) between ambient temperature and ~ 700 o C, an order of magnitude increase in size (from ~ 5.1 to 70 nm) between 700-800 o C, followed by an additional growth (from 70 to 94 nm) between 800-1000 o C. This is in very good agreement with previous X-ray studies [START_REF] Rogers | An X-ray diffraction study of the effects of heat treatment on bone mineral microstructure[END_REF][START_REF] Piga | A new calibration of the XRD technique for the study of archaeological burned human remains[END_REF][START_REF] Piga | Is X-ray diffraction able to distinguish between animal and human bones?[END_REF]. However, X-ray scattering provides average information from all crystals illuminated by the beam, while ACOM-TEM allows a precise mechanistic interpretation of the heating process. In particular, the appearance of new uncorrelated orientations at temperatures > 700 o C strongly suggests a recrystallization process by fusion-recrystallization of smaller grains. Interestingly, this process can qualitatively be observed from 400 o C onwards (i.e. following total collagen degradation) in Fig. 2, as the structure qualitatively becomes more heterogeneous (larger polydispersity in crystal sizes) and disordered. However, there is a sharp transition from platelets to polyhedral crystals between 700-800 o C, clearly indicating a non-linear crystal growth.
One major difficulty in assessing the crystalline structure of bone is that the crystal chemistry is known to fluctuate, potentially giving rise to modulations of the intensity and breadth of the Bragg peaks which thus tend to overlap in XRD [START_REF] Posner | Crystal chemistry of bone mineral[END_REF][START_REF] Sakae | Historical Review of Biological Apatite Crystallography[END_REF]. For the same reason, the precise interpretation of Raman and FTIR spectra is still a matter of debate after decades of studies [START_REF] Wopenka | A mineralogical perspective on the apatite in bone[END_REF]. Since the crystal structure is used as input in the ACOM-TEM analysis, such fine deviations to an ideal crystal structure could not be assessed reliably. Nevertheless, the analysis conducted with different templates shows that this method permits a reliable distinction between different phases on the basis of intrinsic quality metrics (correlation index) and extrinsic ones, e.g. the spatial coherence (color uniformity) of the phase and orientation determination.
This allowed, in particular, testing the hypothesis proposed in previous XRD studies that diffraction patterns of heated bone could be equally well indexed with a monoclinic space group instead of a hexagonal one. The lattice parameters of the two structures were found to be very close, with a β angle close to 120 o for the monoclinic phase [START_REF] Piga | Is X-ray diffraction able to distinguish between animal and human bones?[END_REF]. The main difference is that the length of the b-axis can fluctuate significantly from the a-axis in the monoclinic case (contrary to the hexagonal case where the a-and b-axis are identical by definition). Thus, the difference between the two structures is relatively subtle but, in principle, a monoclinic structure should better account for a higher degree of crystallinity generated by heating, as demonstrated with synthetic apatites [START_REF] Ma | Hydroxyapatite: Hexagonal or monoclinic?[END_REF].
Our results conclusively show that even for samples treated at 1000 o C, the mineral is better represented by a hexagonal structure. This was further confirmed by a close manual examination of the proposed solutions for a number of representative diffraction patterns (example in supplementary information, Fig. S6). Because the monoclinic phase is more representative of stoichiometric hydroxyapatite, the fact that bone mineral is better indexed by a hexagonal group implies that there is still a substantial degree of disorder in the crystal structure even for bone heated at high temperatures.
It is important to note that the ACOM setup can be readily implemented in standard existing TEM instruments and can therefore provide a close-to routine basis for biological, medical and archeological studies. Given the resolution level, we believe that ACOM-TEM could be advantageously exploited to analyze the interface layer (typically < 1 µm) between biomaterials and bone formed at the surface of implants, a critical aspect of osseointegration [START_REF] Davies | Bone bonding at natural and biomaterial surfaces[END_REF][START_REF] Legeros | Calcium phosphate-based osteoinductive materials[END_REF]. TEM was widely used to investigate the tissue structure at this interface [START_REF] Grandfield | High-resolution three-dimensional probes of biomaterials and their interfaces[END_REF][START_REF] Palmquist | Bone--titanium oxide interface in humans revealed by transmission electron microscopy and electron tomography[END_REF], but the collective mineral nanocrystals structure and organization was never analyzed. Similarly, in the biomedical field, severe pathological perturbations of mineral nanocrystals have been reported in many bone diseases, e.g. osteoporosis [START_REF] Rubin | TEM analysis of the nanostructure of normal and osteoporotic human trabecular bone[END_REF], osteogenesis imperfecta [START_REF] Fratzl-Zelman | Unique micro-and nano-scale mineralization pattern of human osteogenesis imperfecta type VI bone[END_REF] and rickets [START_REF] Karunaratne | Significant deterioration in nanomechanical quality occurs through incomplete extrafibrillar mineralization in rachitic bone: Evidence from in-situ synchrotron X-ray scattering and backscattered electron imaging[END_REF], for which a detailed nanoscale description is still lacking [START_REF] Gourrier | Scanning small-angle X-ray scattering analysis of the size and organization of the mineral nanoparticles in fluorotic bone using a stack of cards model[END_REF]. Additionally, this method could also have a positive impact in the archaeological field, since diagenetic effects associated with long burial time of bone remains are known to affect the mineral ultrastructure in numerous ways, hence impacting the identification and conservation of bone artifacts [START_REF] Reiche | The crystallinity of ancient bone and dentine: new insights by transmission electron microscopy[END_REF]. Finally, it should be mentioned that ACOM-TEM would most likely benefit from more advanced sample preparation methods such as focused ion beam milling coupled to scanning electron microscopy (FIB-SEM) which has been shown to better preserve the tissue ultrastructure [START_REF] Jantou | Focused ion beam milling and ultramicrotomy of mineralised ivory dentine for analytical transmission electron microscopy[END_REF][START_REF] Mcnally | A Model for the Ultrastructure of Bone Based on Electron Microscopy of Ion-Milled Sections[END_REF][START_REF] Reznikov | Three-dimensional structure of human lamellar bone: the presence of two different materials and new insights into the hierarchical organization[END_REF].
Conclusion.
In the present work, we showed that both direct visualization of individual bone nanocrystals and structural information can be accessed simultaneously using ACOM-TEM analysis. The mineral nanocrystal orientation, crystallographic phase and symmetry can be quantified, even in biological samples such as bone tissue that are known to be very heterogeneous down to the nanoscale. Our analysis of a heated bone model points to crystal growth by fusion and recrystallization mechanisms, starting from ~400 °C onwards with a sharp transition between 700 °C and 800 °C. By testing different phases corresponding to deviations from the hydroxyapatite stoichiometry as input for the structural refinement, we were able to assess the sensitivity of ACOM-TEM. We tested the hypothesis of a monoclinic space group attribution to the bone sample heated at 1000 °C and found that a hexagonal structure was more probable, suggesting the presence of crystalline defects even after heating at high temperatures. We therefore believe that ACOM-TEM could have a positive impact on applied research in biomaterials development, biomedical investigations of bone diseases and, possibly, analysis of archaeological bone remains.
Fig. 1: Generalized scheme of ACOM-TEM acquisition and data interpretation. (a) Bright-field (BF) image of bone tissue with the illustration of the scan area (dots diameter is 2 nm, enhanced for visibility), (b) recorded set of diffraction patterns, (c) example of fit using (d) the structure template for hydroxyapatite. (e) Fraction of stereographic projection with inverse pole figure color map, (f) orientation, (g) index and (h) reliability maps (high values appear brighter). Scale bar: 100 nm.
Fig. 2: Mineral crystal orientation distribution in bone tissue as a function of temperature. Orientation maps reconstructed from ACOM-TEM from control to 1000 °C with corresponding inverse pole figure color map (scale bar: 100 nm). Inset: collective crystal orientations on 0001 stereographic projection with the color bar normalized to the total number of crystals per scan.
Fig. 3: Bone mineral crystal size evolution with temperature. (a) an example of size measurement by line profiling along the c-axis of the platelet-shaped crystals for the LT regime (700 °C); (b) average crystal size vs temperature (black - smallest crystal size, red - crystal diameter). Examples of polyhedral-shaped crystal diameter measurement for the HT regime (800 °C): (c, d) indicate orientation and grain boundary maps, respectively, and (e) represents the distribution of crystal diameters.
Fig. 4: Phase sensitivity. (a-f) orientation maps for 1000 °C heated bone data fitted with six apatite structures with corresponding inverse pole figure color maps. Maximum index values (Imax) are given for comparison. The hydroxyapatite structure produces the best fit indicated by the highest Imax value and homogeneous colors within single crystals. Scale bar: 50 nm.
Fig. 5: Hexagonal vs monoclinic symmetry. (a) crystal structures of hexagonal and monoclinic hydroxyapatite (view along c-axis); (b, c) corresponding orientation maps for the 1000 °C sample with color code and maximum index values; (d) phase map showing mainly the presence of hexagonal phase. Scale bar: 50 nm.
Fig. S3: Regions of scans for the set of heat-treated bone samples. Bright field images with corresponding ACOM orientation maps. Scale bar in orientation maps: 100 nm.
Fig. S4: Larger fields of view scan regions for the set of heat-treated bone samples. Bright field images with corresponding ACOM orientation maps.
Acknowledgements.
The authors would like to acknowledge M. Morais from SIMaP for the support with the heating apparatus, D. Waroquy (ABAG, Grenoble, France) for providing the bovine samples, and the NanoBio-ICMG Platform (FR 2607, Grenoble) for granting access to the TEM sample preparation facility.
Author contributions.
M.V. ‡ , C.L.P. and J.L.P. prepared samples; M.V. ‡ , E.F.R., M.V., M.P. and A.G. performed the research; A.G., M.P. and E.F.R. provided the financial support for the project; M.V. ‡ and E.F.R. analyzed data; M.V. ‡ and A.G. wrote the paper with contributions from all authors; A.G. and M.P. designed the research.
Supplementary information | 41,247 | [
"16955",
"174447",
"20829"
] | [
"1041857",
"1041828",
"1041828",
"1041817",
"1041817",
"1041857"
] |
01763341 | en | ["spi"] | 2024/03/05 22:32:13 | 2013 | https://hal.science/hal-01763341/file/MSMP_TI_2013_MEZGHANI.pdf
S. Mezghani (email: [email protected]), I. Demirci, M. El Mansori, H. Zahouani
Energy efficiency optimization of engine by frictional reduction of functional surfaces of cylinder ring-pack system
Keywords: Honing process, Surface roughness, elastohydrodynamic friction, Cylinder engine
Nomenclature (excerpt): penalty term parameter; z_r, pressure viscosity index (Roelands), with z_r = p_r/(ln(η_0) + 9.67)
Introduction
The surface features of a cylinder liner engine are the fingerprint of the successive processes the surface has undergone, and they influence the functional performance of the combustible engine [START_REF] Caciu | Parametric textured surfaces for friction reduction in combustion engine[END_REF][START_REF] Tomanik | Friction and wear bench tests of different engine liner surface finishes[END_REF][START_REF] Pawlus | A study on the functional properties of honed cylinders surface during runningin[END_REF][START_REF] Mcgeehan | A literature review of the effects of piston and ring friction and lubricating oil viscosity and fuel economy[END_REF][START_REF] Srivastava | Effect of liner surface properties on wear and friction in a non-firing engine simulator[END_REF]. Therefore, surfaces and their measurement provide a link between the manufacture of cylinder bores and their functional performances [START_REF] Whitehouse | Surfacesa link between manufacture and function[END_REF]. Hence, the quantitative characterization of surface texture can be applied to production process control and design for functionality [START_REF] De Chiffre | Quantitative Characterisation of Surface Texture[END_REF]. The optimum surface texture of an engine cylinder liner should ensure quick running-in, minimum friction during sliding, low oil consumption, and good motor engine operating parameters in terms of effective power and unitary fuel consumption. Increasingly stringent engine emissions standards and power requirements are driving an evolution in cylinder liner surface finish [START_REF] Lemke | Characteristic parameters of the abbot curve[END_REF]. Unfortunately, the full effect of different cylinder liner finishes on ring-pack performance is not well understood [START_REF] De Chiffre | Quantitative Characterisation of Surface Texture[END_REF].
In mass production of internal combustion engine cylinder liners, the final surface finish on a cylinder bore is created by an interrupted multistage abrasive finishing process, known as the plateau-honing process. In honing, abrasive stones are loaded against the bore and simultaneously rotated and oscillated. Characteristically, the resulting texture consists of a flat smooth surface with two or more bands of parallel deep valleys with stochastic angular position. Figure 1 shows a typical plateau-honed surface texture from an engine cylinder.
To guarantee efficient production at industrial level of a cylinder liner of specific shape with acceptable dimensional accuracy and surface quality, three honing stages are usually required: rough honing, finish honing, and plateau honing. The surface texture is presumably provided by the ″finish honing″ [START_REF] Sabri | Multiscale study of finish-honing process in mass production of cylinder liner[END_REF][START_REF] Sabri | Functional optimisation of production by honing engine cylinder liner[END_REF]. Thus careful control of this operation is central to the production of the structured surface so that the cylinder liner will fulfil its mechanical contact functionalities in piston ring/cylinder liner assemblies (i.e. running-in performance, wear resistance, load-carrying capacity, oil consumption, etc.) [START_REF] Sabri | Multiscale study of finish-honing process in mass production of cylinder liner[END_REF][START_REF] Pawlus | Effects of honed cylinder surface topography on the wear of piston-piston ring-cylinder assemblies under artistically increased dustiness conditions[END_REF]. To achieve piston ring-pack friction reduction through cylinder liner finish optimization, it is necessary to be able to distinguish the effect of each process variable on the roughness of these honed surfaces [START_REF] De Chiffre | Quantitative Characterisation of Surface Texture[END_REF].
In this work, strategies for piston ring-pack friction reduction through cylinder liner finish optimization were analyzed with the goal of improving the efficiency of selection of the honing process variable. The fundamental aim was to find a relation between the honing operating variables and the hydrodynamic friction at the piston rings/cylinder interface. An additional aim was to determine how the cylinder surface micro-geometry of plateau-honed cylinders affects the predicted friction. Thus, an experimental test rig consisting of an industrial honing machine instrumented with sensors to measure spindle power, expansion pressure, and honing head displacement was developed. Honing experiments were carried out using honing stones with varying sizes of abrasive grits and varying expansion speeds, that is, the indentation pulse of the honing stone's surface against the liner wall. Furthermore, a numerical model of lubricated elastohydrodynamic contact was developed to predict the friction performances and lubricant flow of the various liner surface finishes. It uses the real topography of the liner surface as input. In fact previous studies have found that the detailed nature of the surface finish plays an important role in ring friction and oil film thickness predictions [START_REF] Jocsak | The effects of cylinder liner finish on piston ring-pack friction[END_REF]. An appreciation of the limitations of the surface roughness parameters commonly used in automotive industries in providing a link between the honing process and the generated surface performance in the hydrodynamic regime is presented.
Experimental procedure
In this work, honing experiments were carried out on a vertical honing machine with an expansible tool (NAGEL no. 28-8470) (Figure 2). The workpiece consists of four cylinder liners of a lamellar gray cast iron engine crankcase.
The steps involved in the fabrication of the cylinder liners before the finish-honing operation are boring and rough honing, respectively (Table 1). In the finish honing stage, abrasive stones of different grit sizes were used, with a treatment by impregnation with sulfur. Another interesting variation in the feed system is the expansion mechanism in the honing head, where three expansion velocities "V e" (1.5 µm/s, 4 µm/s and 8 µm/s) were considered. All the other working variables were kept constant (Table 1). Note that the rough and finish honing operations use a mechanical expansion system and the plateau honing uses a hydraulic system. For each combination of grit size and expansion velocity, tests were repeated five times. Thus, the sensitivity of the produced surface finish to its generation process was considered.
Negative surface replicas made of a silicon rubber material (Struers, Repliset F5) were used to assess the texture of honed surfaces after the plateau-honing stage at the mid-height of the cylinder bore specimen. Topographical features of replica surfaces were measured in three locations by a three-dimensional white light interferometer, WYKO 3300 NT (WLI). The surface was sampled at 640 × 480 points with the same step scale of 1.94 μm in the x and y directions. The form component was removed from the acquired 3D data using a least-squares method based on a cubic spline function.
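A minimal sketch of this form-removal step, assuming the height map is sampled on a regular grid with the 1.94 µm step quoted above: a least-squares cubic smoothing spline is fitted to the topography and subtracted, leaving the roughness and waviness components. The smoothing factor is an illustrative assumption, not the value used in the study.

import numpy as np
from scipy.interpolate import RectBivariateSpline

def remove_form(z_um, step_um=1.94, smoothing=1e4):
    # Subtract the large-scale form component from a 2D height map
    # z_um (in micrometres) using a least-squares cubic smoothing spline.
    ny, nx = z_um.shape
    y = np.arange(ny) * step_um
    x = np.arange(nx) * step_um
    # s > 0 turns the interpolating spline into a least-squares fit
    form = RectBivariateSpline(y, x, z_um, kx=3, ky=3, s=smoothing)(y, x)
    return z_um - form

# Hypothetical usage on a 480 x 640 WLI height map:
# roughness = remove_form(height_map)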
We can assume that the initial roughness of the cylinder bore has no influence on the obtained surface texture in this study. It affects only the honing cycle and the stone life, that is, the wear of the abrasive grits. In fact, the thickness of the removed material after finish honing (32.17 ± 2.21 µm) is greater than the total height of the original surface, which was about 24.56 ± 6.75 µm. This means that the finish honing operation completely penetrates the original surface topography and generates a new surface texture.
Numerical model for hydrodynamic friction simulation in piston ring-pack system
A numerical model was developed to estimate friction at the ring-liner-piston contact. It takes into account the real topography of the cylinder liner. The purpose of this model is to predict, at least qualitatively, the friction coefficient so that performance can be optimized when the groove characteristics of the cylinder liner surfaces are varied.
Geometry definition
An incompressible viscous fluid occupying, at a given moment, a domain limited by a smooth plane surface P and by a rough surface R is considered. This domain is represented in Figure 3 (the profile in the x2 direction is not represented). It extends from 0 to l1 in the x1 direction, from 0 to l2 in the x2 direction, and its local height is the film thickness h(x1, x2).
Figure 3: The separation field between a smooth surface and a rough one.
EHL Equations
To estimate the pressure distribution, film thickness, and friction coefficient, a full system approach for elastohydrodynamic lubrication (EHL) was developed. The Reynolds equations have been written in dimensionless form using the Hertzian dry contact parameters and the lubricant properties at ambient temperature. To account for the effects of non-Newtonian lubricant behaviour, effective viscosities are introduced.
The boundary condition P = 0 and the cavitation condition (or free boundary condition) P ≥ 0 are used everywhere; a special treatment is applied to the cavitation condition, as explained below. The Reynolds equation involves the effective viscosities in the X and Y directions. For point contact, it is not possible to derive these effective viscosities analytically. The perturbational approach described by Ehret et al. [START_REF] Ehret | On the lubricant transport condition in elastohydrodynamic conjunctions[END_REF] is used.
This analysis is based on the assumption that the shear stresses are only partially coupled and that the mean shear stress is negligible in the y direction [START_REF] Ehret | On the lubricant transport condition in elastohydrodynamic conjunctions[END_REF][START_REF] Greenwood | Two-dimensional flow of a non-Newtonian lubricant[END_REF].
In our model, the Eyring model is used. The perturbational approach leads to the following dimensionless effective viscosities:
where the dimensionless mean shear stress is written as:
where S is the slide-to-roll ratio, S = 2(u_1 - u_2)/(u_1 + u_2), and N is given by:
The constant parameters of Equation (5) are given in the nomenclature.
The lubricant's viscosity and density are considered to depend on the pressure according to the Dowson and Higginson relation [START_REF] Dowson | Elastohydrodynamic lubrication. The fundamentals of roller and gear lubrication[END_REF] (Eq. 6) and Roelands equation [START_REF] Roelands | Correlational aspects of the viscosity-temperature-pressure relationships of lubricant oil[END_REF] (Eq. 7):
where ρ_0 is the density at ambient pressure.
where η_0 is the viscosity at ambient pressure, p_r is a constant equal to 1.96 × 10^8, and z_r is the pressure viscosity index (z_r = 0.65).
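A small sketch of the two pressure laws as they are commonly written in the EHL literature, using the Roelands constants quoted above. The Dowson-Higginson coefficients and the ambient values used as defaults are assumptions taken from standard references, not from Table 2.

import numpy as np

P_R = 1.96e8   # Roelands constant p_r [Pa], as quoted in the text
Z_R = 0.65     # Roelands pressure-viscosity index z_r, as quoted in the text

def density_dowson_higginson(p, rho0=850.0):
    # Dowson-Higginson pressure-density relation (standard coefficients).
    return rho0 * (1.0 + 0.6e-9 * p / (1.0 + 1.7e-9 * p))

def viscosity_roelands(p, eta0=0.01):
    # Roelands pressure-viscosity relation.
    return eta0 * np.exp((np.log(eta0) + 9.67) *
                         (-1.0 + (1.0 + p / P_R) ** Z_R))

# Example: viscosity and density rise at a typical EHL pressure of 0.5 GPa
# eta, rho = viscosity_roelands(0.5e9), density_dowson_higginson(0.5e9)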
The film thickness equation is given in dimensionless form by the following equation:
In this equation, the additional roughness term is the height of the liner surface topography at each position (X, Y). H_0 is a constant determined by the force balance condition:
The normal elastic displacement of contacting bodies is obtained by solving the linear elasticity equations in three-dimensional geometry with appropriate boundary conditions [START_REF] Habchi | A full-system approach of the elastohydrodynamic line/point contact problem[END_REF][START_REF] Habchi | A full-system approach to elastohydrodynamic lubrication problems: application to ultra-low-viscosity fluids[END_REF]. The geometry (Ω) used (figure 4) is large enough compared to contact size (Ω c ) to be considered as semi-infinite structures. The linear elastic equations consist of finding the displacement vector U in the computational domain with the following boundary conditions:
In order to simplify the model, the equivalent problem defined by [START_REF] Habchi | A full-system approach to elastohydrodynamic lubrication problems: application to ultra-low-viscosity fluids[END_REF] is used to replace the elastic deformation computation for both contacting bodies. One of the bodies is assumed to be rigid while the other accommodates the total elastic deformation. The following material properties of the bodies are used in order to have (w is the dimensionless absolute value of the Z-component of the displacement vector):
where E_i and ν_i are the Young's modulus and Poisson's coefficient, respectively, of the material of the contacting bodies (i = 1, 2).
Finally, the friction coefficient is evaluated by the following formula:

Cavitation problem

In some regions of the contact, the Reynolds equation can predict negative pressures. Physically, these negative pressures are not relevant. In such cases, the fluid will evaporate and the pressure is limited by the vapor pressure of the fluid. This process is the cavitation. This problem is usually solved by setting the negative pressure to zero. This ensures that there will be zero pressure and zero pressure gradient on the free boundary. In the full system approach, this solution is not possible and the penalty method is used as an alternative, as explained in [START_REF] Habchi | A full-system approach to elastohydrodynamic lubrication problems: application to ultra-low-viscosity fluids[END_REF].
This method was introduced in EHL by Wu [START_REF] Wu | A penalty formulation and numerical approximation of the Reynolds-Hertz problem of elastohydrodynamic lubrication[END_REF]. An additional penalty term was introduced in the Reynolds equation:
where the penalty parameter is a large positive number multiplying the negative part of the pressure distribution. This penalty term constrains the system to P ≥ 0 and forces the negative pressure to zero.
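A schematic illustration of the penalty idea, not of the actual finite-element implementation: the residual of the discretised Reynolds equation is augmented by a term proportional to the negative part of the pressure, which drives the converged solution towards P ≥ 0. The symbol and value used for the penalty parameter are assumptions.

import numpy as np

XI = 1.0e6   # large positive penalty parameter (illustrative value)

def penalised_residual(p, reynolds_residual):
    # Augment the Reynolds residual with the penalty term XI * P^-,
    # where P^- is the negative part of the pressure (zero where P >= 0).
    p_neg = np.minimum(p, 0.0)
    return reynolds_residual(p) + XI * p_neg

# Within a Newton (or fixed-point) loop, this augmented residual is driven
# to zero: wherever the unpenalised solution would be negative, the penalty
# forces the pressure and its gradient to vanish on the free boundary,
# which is the cavitation condition.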
Numerical procedure
The iterative process is repeated until the maximum relative difference between two consecutive iterations reaches 10^-6. Table 2 summarizes the fluid properties and contact parameters used in our simulation. The difference between our model and the work of Venner and Lubrecht [START_REF] Venner | Multilevel Methods in Lubrication[END_REF] is less than 1%.
This test confirms the validity of the model presented in this paper. Figure 5 shows an example of a pressure distribution and film thickness profiles along the central line in the X direction for a rough surface like the one presented in Figure 6.
Table 3: Comparison of the current model with the Venner & Lubrecht model [START_REF] Venner | Multilevel Methods in Lubrication[END_REF].
Results and discussion
The numerical model presented in Section 3 was used to predict friction in the ring-liner-piston contact and to analyze possible friction reduction strategies in the piston ring-pack.
Table 4 summarizes all the experimental and numerical results. As a result of the simulation of the cylinder ring-pack contact, only the average friction coefficients were compared.
Cylinder liner surface roughness distribution can differ between surfaces with the same root mean square roughness. This difference can have a significant effect on the performance and behavior of the surface within the piston ring-pack system. The surface finish created by the honing process is controlled by the size and dispersion of abrasive particles adhering to the surface of the honing sticks. Figure 6 shows the effect of varying the grit size of abrasive honing stones used in the finish honing stage on the three-dimensional topographical features of the produced surface. As shown in Figure 6, coarse abrasive grits yield deeper and larger lubrication valleys and consequently rougher surfaces. (Working variables: V_e in the finish honing stage = 4 µm/s; all other parameters are kept constant and are given in Table 1.)
Influence of abrasive grit size and expansion velocity on the impregnated surface texture and its friction performance within the piston ring-pack system
As a result of these honing experiments carried out with various sizes of abrasive grits, Figure 7 presents predicted values of the coefficient of friction and mean oil film thickness as a function of the abrasive grit size of the honing stone used at the finish honing stage. It demonstrates clearly that the surface texture achieved with finer abrasive grits yields a lower hydrodynamic friction coefficient in a cylinder ring-pack system than that obtained by coarse abrasive grits. Since the generated honed surfaces have the same honing cross-hatch angle, the differences in predicted hydrodynamic friction observed between these different finishes are mainly a result of surface peak and valley characteristics. Hence, an increase in valley volume may increase the oil flow through the valleys of the surface, yielding a decrease in the oil film thickness, which will in turn induce an increase in hydrodynamic friction.
A reduction of the expansion speed operating condition leads to lower valley depth on the surface texture impregnated during coarse honing as observed in Figure 8. This figure also
shows that the expansion velocity has no effect on the spatial morphology of the generated surface texture and hence the roughness scale, as demonstrated by multiscale surface analysis in [START_REF] Lemke | Characteristic parameters of the abbot curve[END_REF].
Relationship between liner surface friction performance and honing process efficiency
The specific energy is used as a fundamental parameter for characterizing the honing process.
It is defined as the energy expended per unit volume of material removed. The specific honing energy reflects the mechanisms of material removal from the workpiece. It is calculated from the following relationship:
$$E_{sp} = \frac{P_m \, t_{honing}}{Q_w} \qquad (12)$$
where $t_{honing}$ is the effective honing time, $P_m$ is the average power absorbed by the honing process, calculated as the difference between the on-load power recorded during finishing and the average off-load power recorded before and after the test, and $Q_w$ is the volumetric removal given by the following equation:
$$Q_w = \frac{\pi H_c}{4}\left(D^2 - d^2\right) \qquad (13)$$
where $H_c$ is the cylinder height, and D and d are the cylinder diameters before and after the finish honing operation, respectively. This analysis suggests that the optimum coefficient of friction, combined with a good honing efficiency, is reached by using a grit size of 80-100 µm and an expansion velocity equal to 4 µm/s. Furthermore, significantly smoother surfaces are produced by plateau-honing with fine abrasive grit sizes due to the low indentation capacity of the fine abrasive grains. This generated surface texture also yields a low predicted coefficient of friction in the piston ring-liner interface.
However, the use of fine abrasives has the lowest efficiency. In fact, it yields lower material removal and consumes a large specific energy due to the predominance of the plowing abrasion mechanism [START_REF] Sabri | Multiscale study of finish-honing process in mass production of cylinder liner[END_REF][START_REF] Sabri | Functional optimisation of production by honing engine cylinder liner[END_REF]. This yields a shorter stone life and generates undue tool wear.
Thus, to improve the honing efficiency, conventional abrasives can be replaced by superabrasive crystals, which do not wear or break as rapidly.
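For completeness, the specific energy of Eq. (12) and the volumetric removal of Eq. (13) can be evaluated directly from the recorded power and the bore geometry. The sketch below uses hypothetical numerical values chosen only for illustration.

```python
import math

def volumetric_removal(H_c, D, d):
    """Volume of material removed from the bore, Eq. (13) (mm^3 for inputs in mm)."""
    return math.pi * H_c * abs(D**2 - d**2) / 4.0

def specific_honing_energy(P_on_load, P_off_load, t_honing, Q_w):
    """Specific honing energy, Eq. (12): net absorbed power times honing time
    per unit volume removed (J/mm^3 for P in W, t in s, Q_w in mm^3)."""
    P_m = P_on_load - P_off_load           # average net power absorbed by the process
    return P_m * t_honing / Q_w

# hypothetical values, for illustration only
Q_w = volumetric_removal(H_c=120.0, D=76.98, d=77.02)
print(specific_honing_energy(P_on_load=850.0, P_off_load=600.0, t_honing=15.0, Q_w=Q_w))
```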
Roughness characteristics of optimal plateau-honed surface texture
To give a rough estimate of the potential side effects of the surface optimization, surface roughness has been evaluated using the functional roughness parameters R_k (height of the roughness core profile), R_pk (reduced peak height), and R_vk (reduced valley height) given by the ISO 13565-2 standard [START_REF] Sabri | Multiscale study of finish-honing process in mass production of cylinder liner[END_REF][START_REF] Jocsak | The effects of cylinder liner finish on piston ring-pack friction[END_REF]. These parameters are obtained from the analysis of a bearing curve (the Abbott-Firestone curve), which is simply a plot of the cumulative probability distribution of surface roughness height [START_REF] Sabri | Multiscale study of finish-honing process in mass production of cylinder liner[END_REF]. The peak height is an estimate of the peak material protruding above the roughness core.

The bubble plots of Figures 10, 11, and 12 show that all optimal surfaces in the hydrodynamic lubrication regime belong to the domain defined by:

R_pk < 1 µm
R_k < 3 µm
R_vk < 2.5 µm

However, this critical domain does not guarantee the optimal behavior of the honed surface finish. For example, in Figure 12, the criterion R_vk < 2.5 µm cannot exclude honed surfaces that induce a high friction coefficient. The same observation applies to the R_pk and R_k criteria. This suggests that the standard functional roughness parameters commonly used in the automotive industry cannot give a good classification of plateau-honed surfaces according to their functional performance. Table 5 shows the linear correlation coefficients between the roughness parameters and the predicted coefficients of friction.
Table 5 The linear correlation coefficient between roughness parameters and the coefficient of friction
Correlation coefficient between Rpk and µ 0.666
Correlation coefficient between Rk and µ 0.664
Correlation coefficient between Rvk and µ 0.658
Hence, these standard functional parameters are not sufficient to give a precise and complete functional description of "ideal" honed surfaces. This can be attributed to the fact that the bearing curve analysis is one-dimensional and provides no information about the spatial characteristics and scale of the surface roughness.
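For readers less familiar with these parameters, the sketch below shows how a bearing (Abbott-Firestone) curve and rough estimates of R_k, R_pk and R_vk could be derived from a measured height map. It is a simplified illustration only, not the full ISO 13565-2 procedure, which prescribes filtering and an equivalent-triangle construction for the peak and valley heights.

```python
import numpy as np

def bearing_curve(heights):
    """Abbott-Firestone curve: heights sorted in descending order vs. material ratio in %."""
    z = np.sort(np.ravel(heights))[::-1]
    mr = 100.0 * (np.arange(z.size) + 0.5) / z.size
    return mr, z

def core_parameters(heights, window=0.4):
    """Crude Rk, Rpk, Rvk estimates from the minimum-slope 40% secant of the bearing curve."""
    mr, z = bearing_curve(heights)
    n = z.size
    w = max(1, int(window * n))
    drops = z[:n - w] - z[w:]                  # height drop over each 40%-wide window
    i = int(np.argmin(drops))                  # flattest (core) region of the curve
    slope = (z[i + w] - z[i]) / (mr[i + w] - mr[i])
    z_at_0 = z[i] + slope * (0.0 - mr[i])      # core line extrapolated to 0% material ratio
    z_at_100 = z[i] + slope * (100.0 - mr[i])  # core line extrapolated to 100% material ratio
    Rk = z_at_0 - z_at_100
    Rpk = max(0.0, float(z.max() - z_at_0))    # crude: peak height above the core line
    Rvk = max(0.0, float(z_at_100 - z.min()))  # crude: valley depth below the core line
    return Rk, Rpk, Rvk

rng = np.random.default_rng(0)
print(core_parameters(rng.normal(size=(256, 256))))   # synthetic height map, illustration only
```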
Conclusion
This work focused on developing ring-pack friction reduction strategies within the limitations of current production honing processes. First, three-dimensional honed surface topographies were generated under different operating conditions using an instrumented industrial honing machine. Then, the three-dimensional surface topography of each honed cylinder bore is input into a numerical model which allows the friction performance of a cylinder ring-pack system in an EHL regime to be predicted. The strategy developed allows manufacturing to be related to the functional performance of cylinder bores through characterization. The results show that an increase in grit size will lead to an increase in surface roughness, with deeper valleys leading to an increase in hydrodynamic friction. They also show that the standard functional surface roughness parameters which are commonly used in the automotive industries do not provide a link between the honing process and the generated surface performance in the hydrodynamic regime.
Note that the analysis presented in this study does not take into consideration the effects of cylinder surface topography on its ability to maintain oil, that is, the oil consumption level. Experimental studies using a reciprocating bench tester will be carried out to evaluate the effect of honing operating conditions and cylinder surface topography on scuffing and oil consumption.
Nomenclature
Pressure-viscosity coefficient (GPa⁻¹)
Elastic deflection of the contacting bodies (m); dimensionless elastic deflection of the contacting bodies
D, d  Cylinder diameter before and after the finish honing operation, respectively (mm)
E_eq, ν_eq  Equivalent Young's modulus (Pa) and Poisson's coefficient, respectively
E_i, ν_i  Young's modulus (Pa) and Poisson's coefficient of component i, respectively
E_r  Reduced modulus of elasticity
Esp  Specific honing energy (J/mm³)
η₀  Ambient-temperature, zero-pressure viscosity (Pa.s)
Effective viscosities in the X and Y directions
R_x  Radius of curvature in the x direction (m)
Rg  Height of the liner surface topography at each position (m)
ρ  Lubricant density (kg.m⁻³)
Dimensionless lubricant density (= ρ/ρ₀)
ρ₀  Lubricant density under ambient conditions (kg.m⁻³)
S  Slide-to-roll ratio: S = 2(u₁ - u₂)/(u₁ + u₂)
u_i  Surface velocity of body i in the x direction (m.s⁻¹)
u_m  Mean entrainment velocity (m.s⁻¹)
x, y, z  Space coordinates (m)
X, Y  Dimensionless space coordinates (= x/a, y/a)
w  Z-component of the displacement vector (m)
Figure 1 Typical plateau-honed surface texture
Figure 2 (a) Vertical honing machine with expansible tool; (b) schematic representation of the honing head in continuous balanced movement.
respectively along x1, x2 and x3. h(x1, x2) represents the fluid thickness. The smooth body moves at the constant velocity u1 along the Ox1 axis, whereas the rough surface is static.
Figure 4 Scheme of the geometric model for computation of the elastic deformation (1) and of the Reynolds equation (2).
pressure appears in the resolution of the Reynolds equation. Physically,
The Reynolds equation, the linear elastic equations, and the load balance equation are solved simultaneously using a Newton-Raphson procedure. The dimensionless viscosity, density, and film thickness H in the Reynolds equation are replaced by the expressions given above. Except for the load balance equation, a standard Galerkin formulation is used. For the load balance equation, an ordinary integral equation is added directly with the introduction of the unknown H0. Unstructured variable tetrahedral meshing is used for both the Reynolds and the linear elastic equations. A total of 100,000 degrees of freedom is used in the simulation.
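Structurally, the coupled resolution described above amounts to one Newton-Raphson loop applied to a monolithic residual that stacks the discretized Reynolds equation, the elastic deflection and the load balance (with the rigid-body term H0 as an extra unknown). The sketch below shows only that loop; the two-equation residual used in the example is a trivial stand-in, not an EHL discretization.

```python
import numpy as np

def newton_coupled(residual, jacobian, x0, tol=1e-6, max_iter=50):
    """Monolithic Newton-Raphson loop on a residual R(x) = 0, where x would stack
    the nodal pressures, the nodal elastic displacements and the constant H0."""
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        r = residual(x)
        if np.max(np.abs(r)) < tol:
            return x, it
        dx = np.linalg.solve(jacobian(x), -r)
        x = x + dx
    raise RuntimeError("Newton iteration did not converge")

# Stand-in residual: two algebraic equations playing the role of the coupled system;
# the real residual and Jacobian are assembled by the finite element method.
residual = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
jacobian = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(newton_coupled(residual, jacobian, x0=[0.5, 0.5]))
```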
Figure 5 Pressure (P) and film thickness (H) profiles along the central line in the X direction.
Figure 6 Three-dimensional topographies of plateau-honed surfaces produced using different abrasive grit sizes in the finish honing stage: (a) 40 µm, (b) 110 µm, and (c) 180 µm. (Process working variables: Ve in the finish honing stage = 4 µm/s; all other parameters are kept constant and are given in Table 1.)
Figure 7 Evolution of the friction coefficient of the cylinder ring-pack as a function of the abrasive grit size in the finish honing stage.
Figure 8 Three-dimensional topographies of plateau-honed surfaces produced using three different expansion velocities in the finish honing stage: (a) 1.5 µm/s, (b) 4 µm/s, and (c) 8 µm/s. (Process working variables: abrasive grit size in the finish honing stage equal to 110 µm; all other parameters are kept constant and are given in Table 1.)
Figure 9, which presents a plot of the friction coefficient versus specific energy, highlights the link between honing process operating conditions and the functional behavior of plateau-honed surfaces in the hydrodynamic lubrication regime.
Figure 9 Predicted coefficient of friction of surface textures generated with different honing grit sizes and indentation pulses as a function of the specific energy consumed during finish honing. (The size of the circles is proportional to the size of the honing abrasives, which varies from 30 µm to 180 µm.)
Figures 10, 11, and 12 display, for different abrasive grit sizes and at various expansion velocities, the existing correlation between the predicted friction coefficient within the cylinder ring-pack system and the functional roughness parameters of the plateau-honed surfaces of the cylinder bore, R_pk, R_k, and R_vk, respectively.
Figure 10 Predicted coefficient of friction vs. the functional roughness parameter R_pk. (The size of the circles is proportional to the size of the honing abrasives, which varies from 30 µm to 180 µm.)
Figure 11 Predicted coefficient of friction vs. the functional roughness parameter R_k.
Figure 12 Predicted coefficient of friction vs. the functional roughness parameter R_vk.
Table 1 Honing working conditions
Honing process variables Rough honing Finish honing Plateau honing
V a : Axial speed (m/min) 28 28 28
V r : Rotation speed (rpm) 230 230 230
Honing time (sec) 20 15 2
Expansion type Mechanical Mechanical Hydraulic
V e : Expansion velocity (µm/s) 5 1.5, 4, and 8
Number of stones 6 6 6
Abrasive grit type Diamond Silicon carbide Silicon carbide
Grain size (µm) 125 30-180 30
Bond type Metal Vitrified Vitrified
Abrasive stone dimensions (mm × mm × mm) 2 × 5 × 70 6 × 6 × 70 6 × 6 × 70
Table 2 Parameters used in our simulation with rough surfaces
Parameter Value Parameter Value
F N (N) 500 (GPa -1 ) 22.00
u m (m.s -1 ) 10.0 R x (m) 0.04
η 0 (Pa.s) 0.04 E i (GPa) 210
i 0.3 τ 0 (MPa) 0.5
Table 3 gives the dimensionless central and minimum oil film thicknesses for the dimensionless Moes and Venner parameters M = 200 and L = 10.
Table 4 Honing process variables, roughness parameters of the honed surfaces, and their predicted coefficients of friction
Ve (µm/s)  Grit size (µm)  Rpk (µm)  Rk (µm)  Rvk (µm)  µ (%)
1.5 180 0.663 1.859 1.960 2.495
1.5 145 0.777 1.798 2.429 2.466
1.5 110 0.679 1.813 2.025 2.474
1.5 90 0.564 1.954 1.825 2.459
1.5 80 0.566 1.628 1.678 2.426
1.5 50 0.253 0.625 0.598 2.441
1.5 40 0.217 0.659 0.426 2.446
4 180 1.091 2.680 3.650 2.543
4 145 1.091 2.418 3.455 2.468
4 110 0.979 2.521 2.969 2.465
4 90 0.913 2.045 2.720 2.447
4 80 0.838 2.016 2.800 2.434
4 50 0.353 0.622 0.748 2.459
4 40 0.287 0.931 0.530 2.450
8 180 1.292 3.403 4.276 2.462
8 145 1.308 2.922 4.057 2.487
8 110 1.135 2.828 3.469 2.462
8 90 1.096 2.534 3.213 2.488
8 50 0.521 1.007 1.263 2.461
8 40 0.359 0.510 0.644 2.442
| 29,310 | [
"767158",
"999075"
] | [
"211915",
"211915",
"211915",
"698"
] |
01763347 | en | [
"spi"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01763347/file/MSMP_JMPT_2016_FOUILLAND.pdf | K Le Mercier
email: [email protected]
M Watremez
E S Puchi-Cabrera
L Dubar
J D Guérin
L Fouilland-Paillé
Dynamic recrystallization behaviour of spheroidal graphite iron. Application to cutting operations
Keywords: SG iron, Hot cutting, Dynamic recrystallization, Finite element modelling
To increase the competitiveness of manufacturing processes, numerical approaches are unavoidable. Nevertheless, a precise knowledge of the thermo-mechanical behaviour of the materials is necessary to simulate these processes accurately. Previous experimental studies have provided limited information concerning dynamic recrystallization of spheroidal graphite iron under hot cutting operations. The purpose of this paper is to develop a constitutive model able to describe accurately the occurrence of this phenomenon. Compression tests are carried out using a Gleeble 3500 thermo-mechanical simulator to determine the hot deformation behaviour of spheroidal graphite iron at high strains. Once the activation range of the dynamic recrystallization process is assessed, a constitutive model taking into account this phenomenon is developed and implemented in the Abaqus/Explicit software.
Finally, a specific cutting test and its finite element model are introduced. The ability of the numerical model to predict the occurrence of dynamic recrystallization is then compared to experimental observations.
Over the past few years, austempered ductile iron has emerged for applications in several fields such as the automotive and railway industries. This specific spheroidal graphite iron provides an efficient compromise between specific mechanical strength, fracture toughness and resistance to abrasive wear. Therefore, this material is intended to substitute for forged steels in the weight reduction of numerous manufactured components [START_REF] Kovacs | Development of austempered ductile iron (ADI) for automobile crankshafts[END_REF]. To reach these enhanced mechanical properties, austempered ductile iron is obtained by a specific thermo-mechanical treatment (Figure 1). This consists of an austenitization of the cast iron in the temperature range 1123-1223 K, followed by quenching to an austempering temperature of 523 to 623 K, causing the transformation of the austenite phase into ausferrite. A combined casting and forging process prior to this specific quenching is often performed to reduce manufacturing costs [START_REF] Meena | Drilling performance of green austempered ductile iron (ADI) grade produced by novel manufacturing technology[END_REF]. Also, with the aim of increasing competitiveness, the removal of risers and the feeder head is then performed at about 1273 K, just after the casting operation. However, this stage can give rise to severe surface degradation under the cut surface, compromising the viability of the process.
A recent experimental investigation performed by [START_REF] Fouilland | Experimental study of the brittle ductile transition in hot cutting of SG iron specimens[END_REF] studied the brittle-ductile transition in the hot cutting of SG iron specimens. The plastic flow stress evolution of a material which undergoes DRX is shown schematically in Figure 2. At stresses less than the critical stress for the onset of DRX (σc), the material undergoes both WH and DRV. However, once σc is exceeded, DRX becomes operative and the three processes occur simultaneously. As the strain applied to the material increases, the volume fraction recrystallized dynamically (Xv) also increases. In numerical cutting models, the [START_REF] Johnson | A constitutive model and data for metals subjected to large strains, high strain rates and high temperatures[END_REF] constitutive formulation is generally implemented because of its simplicity and numerical robustness [START_REF] Limido | SPH method applied to high speed cutting modelling[END_REF]. However, this empirical law describes the flow stress as a function of the total strain applied to the material, which is not a valid state parameter, by means of a simple parametric power-law relationship. Such a description is incompatible with the evolution of the flow stress mentioned above. Therefore, the present paper deals with the development of a specific constitutive model which not only allows the DRX process to be considered, but also expresses the flow stress in terms of valid state parameters. Thus, the first challenge is to determine the activation range in which DRX occurs, by means of hot compression tests. Then, the selected model is implemented in the finite element analysis of a specific cutting operation. Finally, the prediction of the numerical model concerning the occurrence of DRX is discussed in relation to the experimental observations.
Experimental techniques
Material
The material employed for the present study is an ASTM A536 100-70-03 iron similar to that employed by [START_REF] Fouilland | Experimental study of the brittle ductile transition in hot cutting of SG iron specimens[END_REF]. This spheroidal graphite iron (SGI) exhibits a pearlitic matrix at room temperature and a small amount of ferrite surrounding the graphite nodules, called bullseye ferrite (Figure 3). Its chemical composition is given in Table 1.
Mechanical characterization
The experiments were performed on a Gleeble 3500 thermo-mechanical testing machine.
Compression specimens of 10 mm in diameter and 12 mm in length were tested under constant deformation conditions in a vacuum chamber. The samples were heated at 5 K.s -1 to the testing temperature and then held for one minute at the test temperature. A K-thermocouple was welded at the half height of the specimen to ensure the temperature measurement. The tests were conducted at mean effective strain rates of 0.5, 1 and 5 s -1 , at nominal temperatures of 1073, 1173 and 1273 K. At the end of the tests, the specimens were air cooled. At least two tests were conducted for each deformation condition.
Cutting operation
These experiments were conducted on an orthogonal cutting test bench. The SGI specimens replicated the feeder head obtained just after casting. These were cylindrical
Figure 5: Effective-stress-effective strain curves obtained at different temperatures and strain rates.
The samples tested at 5 s⁻¹ were observed using optical microscopy. Figures 6a and 6b show the optical micrographs of the samples deformed at 1073 K and 1173 K. These microstructures present a significant variation in their pearlitic matrix from the original state (Figure 3). Indeed, ferrite grains are more prominent and their size is finer. The microstructure of the sample deformed at 1273 K, exhibited in Figure 6c, has a pearlitic matrix and the crushed graphite nodules are surrounded by ferrite grains to a lesser extent.
The predominant change is observed with the pearlite grains, which have a finer size than in the original state. Figure 8 shows the experimental cutting forces and also multiple pictures extracted from the recording of the cutting operation, whose main steps are described below.
1. The pin is in full contact with the tool and its base is deformed plastically.
2. A shear band is detected on the pin base; the cutting forces reach a maximum.
Constitutive model employed for the description of the flow stress curves
In the past few years, several constitutive relations including DRX effects have been developed to describe the behaviour of austenite under hot working conditions. Most of these constitutive models are based on the dependence of different variables on the Zener-Hollomon parameter [START_REF] Kim | Study on constitutive relation of AISI 4140 steel subject to large strain at elevated temperatures[END_REF][START_REF] Lin | Constitutive modeling for elevated temperature flow behavior of 42CrMo steel[END_REF][START_REF] Lurdos | Empirical and physically based flow rules relevant to high speed processing of 304L steel[END_REF]Wang et al., 2012).
Description of the constitutive model
In the present work, the description of the flow stress curves corresponding to the SGI samples deformed under hot-working conditions has been carried out on the basis of the models earlier advanced by Puchi-Cabrera et al. for structural steels deformed under hot-working conditions [2011; 2013a; 2013b; 2014a; 2014b; 2015]. Accordingly, the flow stress data is employed for determining the main stress parameters characteristic of the deformation of the material under these conditions, which include: yield stress, critical stress for the onset of DRX, actual or hypothetical saturation stress (depending on deformation conditions) and actual steady-state stress. Additionally, both the Avrami exponent and the time required to achieve 50% DRX are determined from the work-softening transients present in some of the stress-strain curves. However, the model can be simplified on the basis of the experimental results reported by Jonas et al. (2009) and Quelennec et al. (2011), who demonstrated that, for a broad range of steel grades, the steady-state stress is equal to the critical stress for the nucleation of DRX.
$$\frac{d\sigma_\varepsilon}{d\varepsilon} = \frac{\mu(T)}{A}\left[1 - \left(\frac{\sigma_\varepsilon - \sigma_y(T,\dot\varepsilon)}{\sigma_{sat}(T,\dot\varepsilon) - \sigma_y(T,\dot\varepsilon)}\right)^{2}\right]\frac{\sigma_{sat}(T,\dot\varepsilon) - \sigma_y(T,\dot\varepsilon)}{\sigma_\varepsilon - \sigma_y(T,\dot\varepsilon)} \qquad (1)$$
In the above evolution equation, σ_ε represents the current flow stress, σ_y the yield stress, σ_sat the hypothetical saturation stress, µ(T) the temperature-dependent shear modulus, ε̇ the effective strain rate, T the deformation temperature and A a material parameter that could either be a constant or a function of deformation conditions through the Zener-Hollomon parameter, which is defined as
$$Z = \dot\varepsilon\,\exp\!\left(\frac{Q}{RT}\right)$$
where Q is an apparent activation energy for hot deformation and R is the universal gas constant. As shown in the forthcoming, the constant A in the above equation is computed from the experimental stress-strain data corresponding to each stress-strain curve determined under constant deformation conditions.
Regarding the temperature-dependent shear modulus, it can be confidently computed from the equation [START_REF] Kocks | Laws for work-hardening and low-temperature creep[END_REF]:
$$\mu(T) = 88884.6 - 37.3\,T \ \ \text{MPa} \qquad (2)$$
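For convenience, the shear modulus of Eq. (2) and the Zener-Hollomon parameter can be wrapped in two small helper functions; the activation energy value shown below is the one adopted later in this work (283.3 kJ/mol).

```python
import math

R = 8.314        # universal gas constant, J/(mol K)
Q = 283300.0     # apparent activation energy for hot deformation, J/mol

def shear_modulus(T):
    """Temperature-dependent shear modulus, Eq. (2), in MPa (T in K)."""
    return 88884.6 - 37.3 * T

def zener_hollomon(eps_dot, T):
    """Zener-Hollomon parameter Z = strain rate times exp(Q / RT), in 1/s."""
    return eps_dot * math.exp(Q / (R * T))
```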
The two stress parameters present in eq. (1) can also be expressed in terms of T and ε̇ by means of the well-established Sellars-Tegart-Garofalo (STG) model [1966]. For the yield stress, its functional dependence on T and ε̇ is expressed as:
$$\sigma_y(T,\dot\varepsilon) = \delta_y\,\sinh^{-1}\!\left[\left(\frac{\dot\varepsilon\,\exp(Q/RT)}{B_y}\right)^{1/m_y}\right] \qquad (3)$$
Whereas, for the hypothetical saturation stress:
$$\sigma_{sat}(T,\dot\varepsilon) = \delta_s\,\sinh^{-1}\!\left[\left(\frac{\dot\varepsilon\,\exp(Q/RT)}{B_s}\right)^{1/m_s}\right] \qquad (4)$$
In eqs. 3 and 4, δ y , B y and m y , as well as δ s , B s and m s represent material parameters and Q an apparent activation energy for hot deformation.
The actual steady-state stress (σ ss ), which as indicated above is considered to be equal to the critical stress for the onset of DRX (σ c ), can also be correlated with Z by means of the STG model, according to the following expression:
$$\sigma_{ss}(T,\dot\varepsilon) = \delta_{ss}\,\sinh^{-1}\!\left[\left(\frac{\dot\varepsilon\,\exp(Q/RT)}{B_{ss}}\right)^{1/m_{ss}}\right] \qquad (5)$$
As in equation 4, δ ss , B ss and m ss represent material parameters.
From the computational point of view, equation 1 is firstly integrated numerically. If the resulting value of σ ε is less than σ c (σ c = σ ss ), the flow stress is determined by σ ε (σ = σ ε ), otherwise, the flow stress should be computed from a second evolution equation, which includes the description of the work-softening transient associated to DRX. This second evolution law is expressed as:
$$\frac{d\sigma}{d\varepsilon} = \frac{\mu(T)}{A}\left[1 - \left(\frac{\sigma - \sigma_y + \Delta\sigma X_v}{\sigma_{sat} - \sigma_y}\right)^{2}\right]\frac{\sigma_{sat} - \sigma_y}{\sigma - \sigma_y + \Delta\sigma X_v} \;-\; \frac{n_{Av}\,\Delta\sigma\,(1 - X_v)\,\ln 2}{\dot\varepsilon\; t_{0.5}^{\,n_{Av}}}\left[-\,t_{0.5}^{\,n_{Av}}\,\frac{\ln(1 - X_v)}{\ln 2}\right]^{1 - \frac{1}{n_{Av}}} \qquad (6)$$
Thus, the incremental change in the flow stress with strain, once DRX becomes operative, is observed to depend on σ, σ y , σ sat , σ ss , the dynamically recrystallized volume fraction, X v , the Avrami exponent, n Av , the time for 50% DRX, t 0.5 , µ(T ), ε, T and constant A.
Since plastic deformation of the material can occur under transient conditions involving arbitrary changes in temperature and strain rate, X v should also be computed from the Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation expressed in differential form:
$$\frac{dX_v}{dt} = \frac{n_{Av}\,(1 - X_v)\,\ln 2}{t_{0.5}^{\,n_{Av}}}\left[-\,t_{0.5}^{\,n_{Av}}\,\frac{\ln(1 - X_v)}{\ln 2}\right]^{1 - \frac{1}{n_{Av}}} \qquad (7)$$
However, in this case the change in the volume fraction recrystallized with time is also expressed in terms of the time required to achieve 50% DRX, t_0.5, which can be conveniently computed by means of the simple parametric relationship proposed by Jonas et al.
(2009), expressed as:
$$t_{0.5} = D\left[\dot\varepsilon\,\exp\!\left(\frac{Q}{RT}\right)\right]^{-q}\exp\!\left(\frac{Q_{DRX}}{RT}\right) \ \ \text{s} \qquad (8)$$
In the above equation, D represents a material constant weakly dependent on the austenitic grain size, whereas q and Q_DRX represent a material parameter and the apparent activation energy for dynamic recrystallization, respectively. Thus, the constitutive description of the material is represented by equations 1 through 8. Clearly, two important features can be observed. Firstly, the flow stress is absolutely independent of the total strain (ε) applied to the material. Secondly, given that the flow stress is determined from the numerical integration of two evolution laws, such a parameter can be readily evaluated when the deformation of the material occurs under transient conditions, which are characteristic of actual industrial hot deformation processes. The experimental flow stress data determined at different deformation temperatures and strain rates constitute the raw data for the rational computation of the different material parameters involved.

Identification of the different parameters involved in the constitutive model

The precise determination of the different stress parameters involved in the constitutive description of the material, as well as the time required to achieve 50% DRX, can be conducted by means of the individual modelling of each stress-strain curve determined under constant conditions of temperature and strain rate. Figures 10 through 12 illustrate the comparison of the experimental stress-strain curves and the predicted ones employing equations (1) and (6) of the constitutive formulation presented in the previous section. The accurate description of the experimental curves suggests that their individual modelling allows a precise and reliable identification of all the parameters of interest indicated above. Table 2 summarizes the value of the relevant parameters that were determined for each deformation condition. However, in order to formulate a global constitutive equation able to predict the flow stress of the material under arbitrary deformation conditions, the functional dependence of σ_y, σ_sat, σ_ss, t_0.5 and A (if any) on ε̇ and T should also be accurately established.
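As an illustration of how equations (1), (6) and (7) act together numerically: the WH/DRV law is integrated first, the DRX branch takes over once the flow stress reaches the critical stress (taken equal to the steady-state stress), and X_v is tracked through the JMAK kinetics, with t_0.5 supplied by Eq. (8). The sketch below is schematic only, and all parameter values in the example call are placeholders, not the identified ones.

```python
import math

def integrate_flow_stress(eps_end, d_eps, eps_dot, mu, A,
                          sigma_y, sigma_sat, sigma_ss, t05, n_av):
    """Schematic constant-T, constant-strain-rate integration of the model:
    Eq. (1) below the critical stress (= sigma_ss), Eqs. (6)-(7) above it."""
    sigma = sigma_y + 0.05 * (sigma_sat - sigma_y)   # start slightly above yield
    X_v, eps, delta = 0.0, 0.0, sigma_sat - sigma_ss
    curve = []
    while eps < eps_end:
        if sigma < sigma_ss and X_v == 0.0:          # WH + DRV transient, Eq. (1)
            s = (sigma - sigma_y) / (sigma_sat - sigma_y)
            dsigma = mu / A * (1.0 - s**2) / s
        else:                                        # DRX operative, Eqs. (6)-(7)
            if X_v == 0.0:
                X_v = 1e-6                           # seed: the JMAK rate is zero at exactly X_v = 0
            dXv_dt = (n_av * (1.0 - X_v) * math.log(2) / t05**n_av
                      * (-t05**n_av * math.log(1.0 - X_v) / math.log(2)) ** (1.0 - 1.0 / n_av))
            s = (sigma - sigma_y + delta * X_v) / (sigma_sat - sigma_y)
            dsigma = mu / A * (1.0 - s**2) / s - delta * dXv_dt / eps_dot
            X_v = min(1.0 - 1e-12, X_v + dXv_dt * d_eps / eps_dot)
        sigma += dsigma * d_eps
        eps += d_eps
        curve.append((eps, sigma, X_v))
    return curve

# placeholder parameters for illustration only
curve = integrate_flow_stress(eps_end=0.8, d_eps=1e-4, eps_dot=1.0, mu=45000.0, A=150.0,
                              sigma_y=20.0, sigma_sat=90.0, sigma_ss=75.0, t05=1.0, n_av=2.0)
```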
Previous studies conducted on 20MnCr5 steel deformed under hot-working conditions (Puchi-Cabrera et al., 2014a) indicated that a single activation energy value of approximately 283.3 kJ.mol⁻¹ could be satisfactorily employed for the computation of the Zener-Hollomon parameter (Z), as well as for the corresponding description of σ_y, σ_sat, σ_ss and t_0.5 as a function of deformation temperature and strain rate. Therefore, in the present work the same value will be employed for both purposes. Thus, Figure 13 illustrates the change in σ_y, σ_sat and σ_ss as a function of Z, as well as their corresponding description according to the STG model (eqs. 3 through 5). Table 3 summarizes the value of the different material parameters involved.
As can be observed from Figure 13, the predicted change in each parameter with Z, indicated by the solid lines, describes quite satisfactorily the experimental data, which provides a reliable formulation for modelling purposes. An interesting feature that can be observed from this figure is that related to the behaviour exhibited by σ sat and σ ss .
The curve corresponding to the change in σ_sat with Z, in the temperature and strain rate intervals explored in the present work, is always above that corresponding to the change in σ_ss. However, as Z increases above a value of approximately 10^16 s⁻¹, both curves tend to approach each other, which suggests that DRX will occur to a lesser extent and, therefore, that DRV will be the only operative dynamic restoration process. This observation is relevant to the cutting operation considered in this work. However, the figure also shows that, as the temperature decreases and the strain rate increases, the extent to which DRX continues to occur is more limited.
Regarding the temperature and strain rate description of t 0.5 , Figure 14 clearly illustrates that the simple parametric relationship given in eq.( 8) constitutes a quite satisfactory approach for the computation of such a parameter under arbitrary deformation conditions.
Figure 14: Evolution of the time required to achieve 50% DRX as a function of Z. (The fitted relation shown on the plot is of the form t_0.5 = D Z^(-q) exp(283300/RT) s, with D ≈ 0.0032 and q ≈ 0.85.)
As indicated on the plot, this relationship can be simplified further by assuming that the apparent activation energy for DRX has the same magnitude than that for hot deformation, which reduces the number of material parameters in the global constitutive formulation without compromising the accuracy of the model prediction. The values of the different constants involved in eq.( 8) are shown on the plot.
Another important consideration of the proposed constitutive formulation involves the temperature and strain rate dependence of constant A, as can be clearly observed from
Table 2. Previous work conducted both on C-Mn, 20MnCr5 and Fe-Mn23-C0.6 steels [START_REF] Puchi-Cabrera | Constitutive description of a low C-Mn steel deformed under hot-working conditions[END_REF]2013a;2013b;2014a;2014b;[START_REF] Puchi-Cabrera | Constitutive description of FeMn23C0.6 steel deformed under hot-working conditions[END_REF] indicates that this constant in general does not exhibit any significant dependence on T and ε. However, in the present case it can be clearly observed that such a constant exhibits a significant dependence on deformation conditions, which should be taken into consideration into the global constitutive formulation. Thus, Figure 15 highlights the change in A as a function of Z, which clearly indicates that an increase in Zener-Hollomon parameter value leads to a significant and unexpected increase in the athermal work-hardening rate of the material, θ 0 = µ/A.
Finite element modelling of the cutting operation
Finite element model description
Finite element modelling and analysis of the cutting operation were performed with the Abaqus/Explicit software. Figure 20 shows the initial mesh of the workpiece and the cutting tool. The specimen was meshed using 8-node 3D solid elements (C3D8RT). A higher mesh density was applied to the pin, as compared to the rest of the model. In order to reduce the computing time of this analysis, only half of both the specimen and the tool were modelled. This results in 9710 elements. A symmetry condition on the z-axis was applied to both right faces of the workpiece and the tool. Appropriate boundary conditions were applied to constrain the bottom, front and left faces of the specimen. The tool, which is considered as a rigid surface, was constrained to move in the cutting direction at a speed which varies with time, as shown in Figure 7. No other displacements were allowed. The contact between the tool and the workpiece was modelled using Coulomb's friction law. The friction coefficient has been identified by an inverse method and set to an almost constant value of 0.2. The workpiece was modelled as an elastic-plastic material with isotropic hardening.
Implementation of the constitutive model
Since the cutting operation is assumed to be performed at a mean constant temperature and strain rate, the integrated version of the constitutive formulation presented in section 4 was implemented using a VUHARD user subroutine (Figure 21). The variables to be defined in the subroutine are the flow stress, Σ, and its variations with respect to the total effective strain, the effective strain rate and the temperature.
ε_r = (A / (2 µ(T))) (σ_sat(T, ε̇) - σ_y(T, ε̇))
ε_c = -ε_r ln[ 1 - ( (σ_ss(T, ε̇) - σ_y(T, ε̇)) / (σ_sat(T, ε̇) - σ_y(T, ε̇)) )² ]
If ε < ε_c:
    σ_ε = σ_y(T, ε̇) + (σ_sat(T, ε̇) - σ_y(T, ε̇)) [1 - exp(-ε/ε_r)]^(1/2)
    Required variables: Σ = σ_ε, ∂Σ/∂ε, ∂Σ/∂ε̇, ∂Σ/∂T
Else:
    t = (ε - ε_c) / ε̇
    X_v = 1 - exp[-ln(2) (t/t_0.5)^(n_Av)]
    Δσ = X_v (σ_sat(T, ε̇) - σ_ss(T, ε̇))
    Required variables: Σ = σ_ε - Δσ, ∂Σ/∂ε, ∂Σ/∂ε̇, ∂Σ/∂T
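A plain-Python transcription of the same closed-form evaluation is sketched below (the derivatives required by VUHARD are omitted, and the parameter values would come from the identification described above):

```python
import math

def flow_stress(eps, eps_dot, T, A, mu, sigma_y, sigma_sat, sigma_ss, t05, n_av):
    """Integrated form of the constitutive model at constant T and strain rate:
    WH/DRV closed-form solution below eps_c, JMAK softening above it."""
    eps_r = 0.5 * A / mu * (sigma_sat - sigma_y)                 # relaxation strain
    s_c = (sigma_ss - sigma_y) / (sigma_sat - sigma_y)
    eps_c = -eps_r * math.log(1.0 - s_c**2)                      # critical strain for the onset of DRX
    sigma_eps = sigma_y + (sigma_sat - sigma_y) * math.sqrt(1.0 - math.exp(-eps / eps_r))
    if eps < eps_c:
        return sigma_eps
    t = (eps - eps_c) / eps_dot                                  # time spent recrystallizing
    X_v = 1.0 - math.exp(-math.log(2.0) * (t / t05) ** n_av)     # JMAK recrystallized fraction
    delta_sigma = X_v * (sigma_sat - sigma_ss)
    return sigma_eps - delta_sigma
```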
The volume fraction recrystallized, X v , was set as a solution-dependent variable within the subroutine. The critical strain for the onset of DRX (ε c ), the relaxation strain (ε r ) and the time during which DRX occurs (t) were introduced.
As mentioned in the previous section, the cutting operation is conducted at a mean temperature of 1273 K and a mean strain rate of approximately 200 s -1 . Furthermore, the high cutting speed allows no time for heat transfer between the tool and the workpiece material. Thus, the model was assumed to be adiabatic. According to [START_REF] Soo | 3D FE modelling of the cutting of Inconel 718[END_REF],
this assumption is generally used for the simulation of high-speed manufacturing processes.
The initial temperature of the test was applied to the specimen. No fracture criterion is introduced in this simulation as the emphasis is put on the beginning of the hot cutting operation corresponding to the first 4 ms. Since the constitutive law has been implemented in its integrated form, it assumes that during the cutting operation the mean strain rate remains constant and therefore, no effect of crack propagation has been taken into consideration concerning the evolution of the volume fraction recrystallized dynamically. In order to take into account changes in temperature and strain rate during the cutting operation, the constitutive law should be implemented in its differential formulation.
Simulation results
Figure 22 illustrates the comparison between the predicted forces before the crack initiation and the experimental forces. The normal force is predicted quite satisfactorily, whereas the cutting force is clearly overestimated between the steps 1 and 2. However at 4 ms, the prediction errors are respectively about 2.8 percent and 1.8 percent for the cutting and normal forces. The gap between the finite element model and the experimental results can be explained by the fact that the SGI specimen is not perfectly clamped in its insert. The relative velocity between the tool and the pin is then less than expected at the beginning of the test. This, results in a time lag between the predicted cutting force and the actual one.
Abbreviations
DIC  Digital image correlation
DRV  Dynamic recovery
DRX  Dynamic recrystallization
JMAK  Johnson-Mehl-Avrami-Kolmogorov model
SGI  Spheroidal graphite iron
STG  Sellars-Tegart-Garofalo model
WH  Work hardening
Arabic symbols
A  Material parameter
B_s, B_ss, B_y  Material parameters in the STG model, s⁻¹
D  Material parameter, s
m_s, m_ss, m_y  Material parameters in the STG model
n_Av  Avrami exponent
Q  Apparent activation energy for hot-working, kJ.mol⁻¹
q  Material parameter
Q_DRX  Apparent activation energy for dynamic recrystallization, kJ.mol⁻¹
R  Universal gas constant, J.mol⁻¹.K⁻¹
T  Absolute temperature, K
t  Time during which DRX occurs, s
t_0.5  Time for 50 percent recrystallization, s
X_v  Volume fraction recrystallized
Z  Zener-Hollomon parameter, s⁻¹
Greek symbols
δ_s, δ_ss, δ_y  Material parameters in the STG model, MPa
ε  Total effective strain
Figure 1: Heat treatment example for austempered ductile iron.
Figure 2: Typical dynamic recrystallization hardening curve.
effect of WH and DRV. As a consequence, a work-softening transient will occur, leading to the presence of a peak stress (σ_p) on the flow stress curve. As the strain applied to the material continues to increase, the balance among WH, DRV and DRX will lead to the achievement of a steady-state stress (σ_ss), whose magnitude is equal to σ_c.
Figure 3: Optical micrograph of the ASTM A536 100-70-03 iron in its original state (etched with saturated nitric acid).
mm in height and 10 mm in diameter. Also, the fillet radius on the pin base was 2.5 mm. At the beginning of the test, the specimens were heated in a furnace up to the required temperature. Then, they were clamped in a refractory insert bed, which prevented heat losses. Finally, during the cutting operation, the high strength steel cutting tool moves against the cylinder, as shown in Figure 4. This experimental device includes a piezoelectric sensor for measuring cutting loads. A high speed camera records the cutting operation at a frequency of 15000 frames per second with a resolution of 768 x 648 pixels. A speckle pattern covering the tool allows the determination of the effective cutting velocity by means of digital image correlation (DIC). The tests were performed three times at a temperature of 1273 K with a tool rake angle of -10° and an initial cutting speed of 1.2 m.s⁻¹. The choice of the negative rake angle was based on the results of the study conducted by [START_REF] Fouilland | Experimental study of the brittle ductile transition in hot cutting of SG iron specimens[END_REF], which revealed that such a rake angle allows the observation of the brittle-ductile transition.
Figure 4: Clamped specimen and cutting tool.
Figure 5 illustrates the mean effective stress-effective strain curves obtained at different temperatures and strain rates. It was observed that the typical deviation of the flow stress values from the mean curve was about ±2 MPa. The experimental stress-strain curves exhibit the same shape as that portrayed in Figure 2, highlighting the occurrence of DRX during the compression tests.
Figure 6: Optical micrographs of the ASTM A536 100-70-03 iron deformed at 5 s⁻¹ and different temperatures (etched with saturated nitric acid).
Figure 7: Evolution of the cutting speed with time.
Figure 8: Axial and normal forces with the corresponding pictures recorded by the high speed camera.
Figure 10: Comparison of the experimental stress-strain curves and the constitutive formulation at 1073 K.
Figure 11: Comparison of the experimental stress-strain curves and the constitutive formulation at 1173 K.
Figure 12: Comparison of the experimental stress-strain curves and the constitutive formulation at 1273 K.
Figure 13: σ_y, σ_sat and σ_ss as a function of Z.
Figure 15: A as a function of Z.
Figure 16: Comparison between predicted and experimental stress-strain curves at 1073 K.
Figure 17: Comparison between predicted and experimental stress-strain curves at 1173 K.
Figure 18: Comparison between predicted and experimental stress-strain curves at 1273 K.
Figure 19: Maximum relative error between the computed and predicted values of the flow stress.
Figure 20: Isometric view of the initial mesh configuration.
Figure 21: Algorithm defined in the VUHARD subroutine.
Figure 22: Comparison between experimental and predicted forces.
Figure 23 shows the von Mises stress (σ_eq) distribution within the specimen during the cutting operation. The fillet radius on the pin base is an area of stress concentration. The average value of the von Mises stress in the fillet radius at 4 ms is about 250 MPa. Under these deformation conditions, DRX is then operative, as the effective stress associated with the WH and DRV curve is greater than the critical stress for the onset of DRX.
Table 1: Chemical composition of the ASTM A536 100-70-03 iron.
Element C Si Mn S Cu Ni Cr Mo Mg
Composition (wt%) 3.35 2.72 0.16 0.009 0.87 0.71 <0.03 0.21 0.043
Table 2: Relevant parameters involved in the description of the individual stress-strain curves.
δ y , MPa B y , s -1 m y δ s , MPa B s , s -1 m s δ ss , MPa B ss , s -1 m ss
19.0 1.77E+08 3 104.2 3.47E+10 4.96 88.3 1.13E+11 3.74
Table 3: Material parameters involved in the description of σ_y, σ_sat and σ_ss as a function of Z, according to the STG model.
The activation range of dynamic recrystallization has been determined. A typical cutting process has been modelled both from the experimental and numerical points of view. The numerical predictions agree with the experimental results and provide some explanations concerning the occurrence of dynamic recrystallization within the shear zone. Currently, further investigations are being carried out in order to validate the proposed constitutive description by modelling other cutting configurations. Also, a fracture criterion is being characterized for the spheroidal graphite iron, in order to investigate the competition between material fracture and the occurrence of DRX during the cutting operation.
Acknowledgements
The present research work has been supported by the ARTS Carnot Institute and was made possible through the collaboration between MSMP and LAMIH laboratories. The authors gratefully acknowledge the support of this institute. They also express their sincere
| 30,270 | [
"1161480",
"957581",
"1232819",
"957582"
] | [
"1303",
"1303",
"1303",
"1303",
"1303",
"211915"
] |
01763369 | en | [
"info"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01763369/file/MSR18_paper%20%28camera-ready%20version%29.pdf | César Soto-Valero
email: [email protected]
Johann Bourcier
email: [email protected]
Benoit Baudry
email: [email protected]
Detection and Analysis of Behavioral T-patterns in Debugging Activities
Keywords: Debugging interactions, developers' behavior, T-patterns analysis, empirical software engineering
A growing body of research in empirical software engineering applies recurrent patterns analysis in order to make sense of the developers' behavior during their interactions with IDEs. However, the exploration of hidden real-time structures of programming behavior remains a challenging task. In this paper, we investigate the presence of temporal behavioral patterns (T-patterns) in debugging activities using the THEME software. Our preliminary exploratory results show that debugging activities are strongly correlated with code editing, file handling, window interactions and other general types of programming activities. The validation of our T-patterns detection approach demonstrates that debugging activities are performed on the basis of repetitive and well-organized behavioral events. Furthermore, we identify a large set of T-patterns that associate debugging activities with build success, which corroborates the positive impact of debugging practices on software development.
INTRODUCTION
Debugging is a widely used practice in the software industry, which facilitates the comprehension and correction of software failures. When debugging, developers need to understand the pieces of the software system in order to successfully correct specific bugs. Modern Integrated Development Environments (IDEs) incorporate useful tools for facilitating the debugging process, allowing developers to focus only in their urgent needs during the fixing work. However, debugging is still a very challenging task that typically involves the interaction of complex activities through an intense reasoning workflow, demanding a considerable cost in time and effort [START_REF] Perscheid | Studying the advancement in debugging practice of professional software developers[END_REF].
Due to the complex and dynamic nature of the debugging process, the identification and analysis of repetitive patterns can benefit IDE designers, researchers, and developers. For example, IDE designers can build more effective tools to automate frequent debugging activities, suggesting related tasks, or designing more advanced code tools, thus improving the productivity of developers. Furthermore, researchers can better understand how debugging behavior is related to developers' productivity and code quality. Unfortunately, most of existing studies on debugging activities within IDEs do not consider the complex temporal structure of developers' behavior, thus including only information about a small subset of possible events in the form of data streams [START_REF] Parnin | Are Automated Debugging Techniques Actually Helping Programmers?[END_REF].
The detection of temporal behavioral patterns (T-patterns) is a relevant multivariate data analysis technique used in the discovery, analysis and description of temporal structures in behavior and interactions [START_REF] Magnusson | Discovering hidden time patterns in behavior: T-patterns and their detection[END_REF]. This technique allows to determine whether two or more behavioral events occur sequentially, within statistically significant time intervals.
In this paper, we perform a T-patterns analysis to study debugging behavior. More specifically, we examine the relations of debugging events with other developers' activities. Through the analysis of the MSR 2018 Challenge Dataset, consisting of enriched event streams of developers' interactions on Visual Studio, we guide our work by the following research questions:
• RQ 1 : What developing events are the most correlated with debugging activities? • RQ 2 : Can we detect behavioral T-patterns in debugging activities? • RQ 3 : Is the analysis of T-patterns a suitable approach to show the effect of systematic debugging activities on software development?
We aim to answer these questions by analyzing a set of 300 debugging sessions filtered from the MSR 2018 Challenge Dataset of event interactions. The objective of our analysis is twofold: (1) to provide researchers with useful information concerning the application of T-patterns analysis in the study of developers' behavior; and (2) to present empirical evidence about the influence of debugging on software development.
Previous studies analyzed debugging behavior using patterns detection methods. For example, in the development of automated debugging techniques for IDE tools improvement [START_REF] Parnin | Are Automated Debugging Techniques Actually Helping Programmers?[END_REF]. However, to the best of our knowledge, this is the first attempt of using T-patterns analysis to investigate debugging session data.
DATA MANAGEMENT
The dataset for the 2018 MSR Challenge, released on March 2017 by the KaVE Project1 , contains over 11M enriched events that correspond to 15K hours of working time, originating from a diverse group of 81 developers [START_REF] Proksch | Enriched Event Streams: A General Dataset For Empirical Studies On In-IDE Activities Of Software Developers[END_REF]. The data was collected using FeedBaG, an interaction tracker for Visual Studio, which was designed with the purpose of capturing a large set of different in-IDE interactions during software developing in the shape of enriched event streams [START_REF] Amann | FeedBaG: An interaction tracker for Visual Studio[END_REF].
The THEME software2 supports the detection, visualization and analysis of T-patterns. It has been successfully applied in many different areas, from behavioral interaction between human subjects and animals to neural interactions within living brains [START_REF] Magnusson | Discovering hidden temporal patterns in behavior and interaction: T-pattern detection and analysis with THEME[END_REF]. Since the data transferred by contributors is anonymous, we base our T-patterns analysis on the session Id that identifies developers' work during each calendar day. Our filtering routine removes duplicate events and generates individual session files with a structure appropriate for THEME. Date-time information of triggered events is converted to epoch-second values, which are integers representing the number of elapsed seconds from 1970-01-01T00:00:00Z. Only sessions with debugging interactions were retained for further analysis. Our resulting dataset contains 300 sessions and more than 662K events. Figure 1 shows an example of the data inputs: the variable vs. value correspondence table with the debugging-related event types filtered ("vvt.vvt") and a data file of debugging interactions ("DebuggingSession.txt").
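A minimal sketch of this preprocessing step (deduplication, epoch-second conversion, and retention of sessions containing debugging events) is shown below; the field names, the "Debug." prefix test and the exact THEME input layout are simplified placeholders.

```python
from datetime import datetime, timezone

def to_epoch_seconds(timestamp_iso):
    """Convert an ISO-8601 event timestamp to elapsed seconds since 1970-01-01T00:00:00Z."""
    dt = datetime.fromisoformat(timestamp_iso)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)   # assume UTC when no offset is given
    return int(dt.timestamp())

def filter_debugging_sessions(events):
    """Group (session_id, timestamp_iso, event_type) records into per-session lists,
    dropping duplicates and keeping only sessions that contain debugging events."""
    sessions = {}
    for session_id, ts, event_type in set(events):          # set() removes exact duplicates
        sessions.setdefault(session_id, []).append((to_epoch_seconds(ts), event_type))
    kept = {}
    for sid, rows in sessions.items():
        if any(ev.startswith("Debug.") for _, ev in rows):   # retain sessions with debugging
            kept[sid] = sorted(rows)                         # THEME expects time-ordered events
    return kept
```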
Our analysis goes beyond the discovery of event associations. We are more interested in explaining those connections in terms of developers' behaviour by means of T-patterns analysis. In the following, we perform the event analysis using the THEME software. First, we show how interesting T-patterns can be detected and visualized through the fine-grained inspection of interactions in individual debugging sessions. Next, we aim to find general behavioral patterns that occur within statistically significant time thresholds for all the debugging sessions studied.
T-PATTERNS ANALYSIS
In this section, we summarize the main concepts regarding the detection and analysis of T-patterns [START_REF] Magnusson | Discovering hidden time patterns in behavior: T-patterns and their detection[END_REF]. Through the use of an active debugging session as a case study, we illustrate the benefits of using the THEME software as a tool for exploring hidden real-time structures of programming behaviour in IDEs. Our general approach consists of 3 phases: (1) visualization of debugging interactions in the form of T-data; (2) detection of T-patterns in debugging sessions; and (3) validation and analysis of the detected T-patterns. THEME provides statistical validation features, global and per pattern, using randomization or repeated Monte Carlo runs [START_REF] Magnusson | Discovering hidden temporal patterns in behavior and interaction: T-pattern detection and analysis with THEME[END_REF].
T-patterns visualization. A T-pattern can be viewed as a hierarchical and self-similar pseudo fractal pattern, characterized by significant translation symmetry between their occurrences. Figure 2b shows the binary detection tree of a complex T-pattern of length 7 found in the debugging session of Figure 2a. The large vertical lines connecting event points indicate the occurrence time of the T-pattern. The node marked in green indicates an event that can be predicted from the earlier parts of the pattern (also called T-retrodictor).
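To make the critical-interval idea concrete, the naive check below tests whether occurrences of one event type tend to be followed by occurrences of another within a fixed window [d1, d2] more often than a Poisson null model would predict. It only illustrates the concept and is not THEME's actual detection algorithm.

```python
import math

def critical_interval_significant(a_times, b_times, d1, d2, T, alpha=0.0005):
    """Test whether events in `b_times` tend to follow events in `a_times`
    within [d1, d2]; null model: B occurrences form a Poisson process of rate len(b)/T."""
    rate_b = len(b_times) / T
    p_window = 1.0 - math.exp(-rate_b * (d2 - d1))   # chance of at least one B in the window
    hits = sum(any(t + d1 <= b <= t + d2 for b in b_times) for t in a_times)
    n = len(a_times)
    # one-sided binomial tail: P[X >= hits] with X ~ Bin(n, p_window)
    p_value = sum(math.comb(n, k) * p_window**k * (1 - p_window)**(n - k)
                  for k in range(hits, n + 1))
    return hits, p_value, p_value < alpha
```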
GENERAL FINDINGS
We perform an exploratory data analysis to examine the association among events. We use the phi coefficient of correlation, a common measure for binary correlation, and the tidytext R package in order to visualize how often events appear together relative to how often they appear separately [START_REF] Silge | tidytext: Text Mining and Analysis Using Tidy Data Principles in R[END_REF]. Figure 3 shows the 10 developers' activities that we find more correlated with debugging (𝜑 > 0.5). From the figure, we observe that debugging activities are strongly correlated with code editing, window interactions, document saving, and activity events. In addition, we found that code completion, keyboard navigation and short code editing events are not directly correlated with debugging activities. Based on the observation of Figure 3, we derive the answer to the RQ 1 as follows:
Answer to RQ 1 : Debugging activities are more correlated with editing, file handling, window interactions and activity events than with other general commands or event types.
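For reference, the phi coefficient used above is the Pearson correlation of two binary indicators, here the presence of each event type in a session. A minimal sketch, independent of the tidytext/R pipeline used in the study, is given below.

```python
import math

def phi_coefficient(sessions, event_a, event_b):
    """Phi coefficient between the per-session presence of two event types."""
    n11 = n10 = n01 = n00 = 0
    for events in sessions:                     # `sessions` is an iterable of sets of event types
        a, b = event_a in events, event_b in events
        n11 += a and b
        n10 += a and not b
        n01 += (not a) and b
        n00 += (not a) and (not b)
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

# toy example with hypothetical event types
sessions = [{"Debug.Start", "EditEvent.Large"}, {"EditEvent.Large"}, {"Debug.Start"}, set()]
print(phi_coefficient(sessions, "Debug.Start", "EditEvent.Large"))
```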
We are mostly interested in analyzing general patterns of events that occur within the debugging workflow. Such patterns allow for insights into the dynamic nature of developer's behavior while debugging software. Accordingly, all debugging sessions were ordered and concatenated in time to conform a single dataset for global analysis with THEME. Thus, the 300 debugging sessions were merged, resulting in a dataset with 263 different event types and more than 460K events' occurrences.
The following search parameters were fit in THEME via grid search: (a) detection algorithm = FREE; (b) minimum number of occurrences of pattern = 10; (c) significance level = 0.0005 (0.05% probability of any CI relationship to occur by chance); (d) maximum number of hierarchical search levels = 10; (e) exclusion of frequent event types occurring above the mean number of occurrences of ±1 standard deviations.
For the above parameters, more than 12K T-patterns were detected. We ran the algorithm on 10 randomized versions of the data, using the same search parameters, to check whether the set of detected T-patterns differed significantly from those obtained randomly. Figure 4 shows the comparison between the distributions of the detected patterns on the original data and the average number of patterns detected after the randomization procedure. The incidence of T-patterns in real data was significantly greater than in its randomized versions. Accordingly, it is clear that the T-patterns detected in the original dataset were not obtained by chance. This result demonstrates that debugging activities are organized on the basis of behavioral events, which occur sequentially and within significant constraints on the time intervals that separate them. Based on this result, we derive the answer to the RQ 2 as follows:
Answer to RQ 2 : The validation of the T-patterns detected using THEME provides meaningful evidence about the presence of behavioral patterns in debugging activities. Once T-patterns have been detected, the next challenge is to select relevant T-patterns for subsequent analysis. We are interested in studying T-patterns that associate debugging activities with build results. To this end, we used the filters available in THEME, which allow searching for the presence of desired event types in patterns. We found a total of 735 T-patterns that directly associate debugging activities with successful builds, whereas only 67 T-patterns were found for unsuccessful builds. This result shows that, after a methodical sequence of debugging activities, developers generally have much better chances of achieving successful builds.
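As an aside, the randomization check used above to validate the detections can be approximated by a simple shuffle test: event labels are permuted within each session while the time stamps are kept, the detector is re-run, and the pattern counts are compared with those obtained on the real data. In the sketch below, `detect_patterns` is a placeholder standing in for a THEME run.

```python
import random

def shuffle_session(events):
    """Permute event labels within a session while keeping the original time stamps."""
    times = [t for t, _ in events]
    labels = [label for _, label in events]
    random.shuffle(labels)
    return list(zip(times, labels))

def randomization_check(sessions, detect_patterns, repetitions=10):
    """Compare the number of detected patterns on real data against shuffled copies."""
    real_count = len(detect_patterns(sessions))
    shuffled_counts = []
    for _ in range(repetitions):
        shuffled = {sid: shuffle_session(ev) for sid, ev in sessions.items()}
        shuffled_counts.append(len(detect_patterns(shuffled)))
    return real_count, sum(shuffled_counts) / repetitions
```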
Table 1 presents a global comparison between the T-patterns found in debugging sessions that are directly related to successful and unsuccessful build results. From the table, we can see that T-patterns related to successful builds occur more frequently and have a more complex structure, with higher values of pattern length and duration. On the other hand, T-patterns associated with unsuccessful builds present a simpler structure, with a mean length value of nearly 2 events only and a duration that is almost five times smaller than that of T-patterns associated with successful builds. This result shows that more complex debugging sessions (e.g., those in which developers utilize more specialized debugging tools or invest more time to complete) are more likely to pass the builds and correct software failures.
By analyzing the T-patterns of sessions with unsuccessful builds, we find that they contain mostly events that introduce minor changes in code (e.g., "Edit.Delete", "Edit.Paste"). We hypothesize that these debugging sessions were used to quickly trace the effect of such changes. Table 1 also shows representative examples of T-pattern occurrences for both types of build results. Based on the T-patterns analysis performed, we derive the answer to the RQ 3 as follows:
Answer to RQ 3 : The quantitative analysis of detected T-patterns in debugging sessions shows that, in general, complex debugging activities achieve successful builds.
CONCLUSION
In this paper, we introduced T-patterns analysis as a useful approach to better understand developers' behavior during in-IDE activities. Through the analysis of 300 sessions with debugging interactions, the results obtained using the THEME software bring evidence about the presence of common T-patterns during debugging. In particular, our analysis shows a strong connection between debugging activities and successful builds. We believe that the study of developers' activities using T-patterns analysis can advance the understanding of the complex behavioral mechanisms involved in the process of software development, which can benefit both practitioners and IDE designers. In order to aid future replication of our results, we make our THEME project, filtered dataset and R scripts publicly available online3 .
Figure 1: Data input structure for THEME software.
(1) visualization of debugging interactions in the form of T-data; (2) detection of T-patterns in debugging sessions; and (3) validation and analysis of the detected T-patterns.
T-data. A T-data consists of a collection of one or more T-series, where each T-series represents the occurrence points p_1, ..., p_i, ..., p_n of a specific type of event during some observation interval [1, T]. Figure 2a shows an example of T-data coded from a debugging session with 166 squared data points (event occurrences), 25 T-series (event types), and a duration of 823 units. Each T-series on the Y-axis represents an event activity triggered in the IDE during the session, while the X-axis is the time at which each specific event was invoked. For the search parameters used, the blue squares belong to detected T-patterns, while the red ones do not.
T-pattern. A T-pattern is composed of m ordered components X_1 ... X_i ... X_m, any of which may be occurrence points or T-patterns, on a single dimension (time in this case), such that, over the occurrences of the pattern, each distance X_i X_{i+1}, with i = 1 ... m-1, varies within a significantly small interval [d_1, d_2]_i, called a critical interval (CI). Hence, a T-pattern Q can be expressed as
Q = X_1 [d_1, d_2]_1 ... X_i [d_1, d_2]_i X_{i+1} ... X_{m-1} [d_1, d_2]_{m-1} X_m,
where m is the length of Q and X_i [d_1, d_2]_i X_{i+1} means that, within all occurrences of the pattern in the T-data, after an occurrence of X_i at instant t there is a time window [t + d_1, t + d_2]_i within which X_{i+1} will occur. Any T-pattern Q can be divided into at least one pair of shorter ones related by a corresponding CI: Q_left [d_1, d_2] Q_right. Recursively, Q_left and Q_right can thus each be split until the pattern X_1 ... X_m is expressed as the 1 to m terminals (occurrence points or event types) of a binary tree.
T-pattern detection. The T-pattern detection algorithm consists of a set of routines for CI detection, pattern construction and pattern completeness competition. The algorithm works bottom-up, level by level, and uses competition among detected patterns to retain only the most complete ones.
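As an illustration of the critical-interval idea (not THEME's actual statistical test), the following sketch checks, for two event-type occurrence series, how often an occurrence of the first is followed by the second within a candidate window [d_1, d_2], and keeps the tightest window with sufficient support; the support threshold and integer time discretization are assumptions made here for readability.

```python
def support_within(a_times, b_times, d1, d2):
    """Fraction of occurrences of A followed by at least one B in [t+d1, t+d2]."""
    if not a_times:
        return 0.0
    hits = sum(any(t + d1 <= s <= t + d2 for s in b_times) for t in a_times)
    return hits / len(a_times)

def tightest_interval(a_times, b_times, horizon, min_support=0.8):
    """Scan integer windows up to `horizon` and return the narrowest [d1, d2]
    whose support exceeds min_support (a crude stand-in for the CI test)."""
    best = None
    for d1 in range(horizon):
        for d2 in range(d1 + 1, horizon + 1):
            if support_within(a_times, b_times, d1, d2) >= min_support:
                if best is None or (d2 - d1) < (best[1] - best[0]):
                    best = (d1, d2)
    return best
```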
(a) T-data representation. (b) T-pattern visualization.
Figure 2: T-pattern analysis of a debugging session; both subfigures were created with THEME.
Figure 3: Pairwise correlation between events related to debugging activities.
Figure 4: Distribution of T-pattern lengths detected in real and randomized data.
Table 1: Summary of T-patterns detected which reflect the relation of debugging activities with build results.

| Build Result | Occurrence | Length    | Duration      | T-pattern Example |
| Successful   | 735        | 4.87±0.72 | 580.09±232.51 | (Debug.Start((Debug.StepOver Debug.StopDebugging)BuildEvent.Successful)) |
| Unsuccessful | 67         |           | 120.71±35.91  | (Debug.Start(Edit.Delete(DocumentEvent.Saved BuildEvent.Unsuccessful))) |
Available at http://www.kave.cc/datasets
For more information see http://patternvision.com
https://github.com/cesarsotovalero/msr-challenge2018
"1030619",
"927183"
] | [
"452096",
"491189",
"366312"
] |
https://inria.hal.science/hal-01763373/file/VSS18-v4.pdf
Tonametl Sanchez
email: [email protected]
Andrey Polyakov
email: [email protected]
Jean-Pierre Richard
email: [email protected]
Denis Efimov
email: [email protected]
A robust Sliding Mode Controller for a class of bilinear delayed systems
In this paper we propose a Sliding Mode Controller for a class of scalar bilinear systems with delay in both the input and the state. Such a class is considered since it has been shown to be suitable for modelling and control of a class of turbulent flow systems. The stability and robustness analysis for the reaching phase of the controlled system is Lyapunov-based. However, since the sliding dynamics is infinite dimensional and described by an integral equation, we show that the stability and robustness analysis is simplified by using Volterra operator theory.
I. INTRODUCTION
Turbulent Flow Control is a fundamental problem in several areas of science and technology, and improvements in addressing it can have a very favourable effect on, for example, cost reduction, energy consumption, and environmental impact [START_REF] Brunton | Closed-Loop Turbulence Control: Progress and Challenges[END_REF]. Unfortunately, in general, model-based control techniques face several obstacles when applied to the problem of Flow Control. One of the main difficulties is that the model par excellence for flow is the set of Navier-Stokes equations, which is very complicated to use for simulation and control design [START_REF] Brunton | Closed-Loop Turbulence Control: Progress and Challenges[END_REF]. On the other hand, when the model is very simple it is hard to adequately represent the behaviour of the physical flow. In [1] the authors say that the remaining missing ingredient for turning flow control into a practical tool is control algorithms with provable performance guarantees. Hence, adequate models (a trade-off between simplicity, efficiency, and accuracy) and algorithms for Flow Control are required.
In [START_REF] Feingesicht | A bilinear input-output model with state-dependent delay for separated flow control[END_REF] a model for a flow system was proposed; such a model consists of a bilinear differential equation with delays in the input and in the state. An attractive feature of the model is that, according to the experimental results, with a few parameters the model reproduces the behaviour of the physical flow with good precision. The justification for using this kind of equation as a model for flow systems was presented in [START_REF] Feingesicht | Nonlinear active control of turbulent separated flows: Theory and experiments[END_REF]. We reproduce that reasoning as a motivational example in Section II.
For a particular case of the model introduced in [START_REF] Feingesicht | A bilinear input-output model with state-dependent delay for separated flow control[END_REF], a sliding mode controller was proposed in [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF]. That control technique was chosen due to the switching features of the actuators. Good experimental performance was obtained with such a controller1 . Hence, it is worth continuing the study of this class of bilinear delayed systems and developing general schemes for analysis and control design. In this paper we design a Sliding Mode Controller for a subclass of such systems by following the idea proposed in [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF]. Nonetheless, the result of this paper differs from [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF] in the following points.
• The dynamics on the sliding surface is infinite dimensional and is described by an integral equation. In [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF] the asymptotic stability of the sliding motion was analysed in the frequency domain. In this paper we propose, as one of the main contributions, to analyse the stability properties of such dynamics by considering it as a Volterra integral equation. This allows us to simplify the analysis and to give simple conditions to guarantee asymptotic stability of the solutions. Hence we avoid the necessity of making a frequency domain analysis to determine the stability properties of an infinite dimensional system. • The analysis of the reaching phase is Lyapunov-based, this is important because it is not only useful to establish stability properties, but also robustness, whose analysis is performed applying Volterra operator theory. • Although the systems considered in this paper and those in [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF] are similar, the assumptions on the parameters are different. This allows us to enlarge the class of systems which can be considered for the application of the proposed methodology. Paper organization: In Section III a brief description of the control problem is given. Some properties of the system's solutions are studied in Section IV. The design and analysis of the proposed controller are explained in Section V. A robustness analysis is given in Section VI. A numerical example is shown in Section VII. Some final remarks are stated in Section VIII.
Notation: $\mathbb{R}$ denotes the set of real numbers. For any $a \in \mathbb{R}$, $\mathbb{R}_{\geq a}$ denotes the set $\{x \in \mathbb{R} : x \geq a\}$, and analogously for $\mathbb{R}_{>a}$. For any $p \in \mathbb{R}_{\geq 1}$, $L^p(J)$ denotes the set of measurable functions $x : J \subset \mathbb{R} \to \mathbb{R}$ with finite norm $\|x\|_{L^p(J)} = \left(\int_J |x(s)|^p\, ds\right)^{1/p}$, and $L^\infty(J)$ denotes the set of measurable functions with finite norm $\|x\|_{L^\infty(J)} = \operatorname{ess\,sup}_{t\in J} |x(t)|$.
II. MOTIVATIONAL EXAMPLE
In this section we reproduce the example given in [START_REF] Feingesicht | Nonlinear active control of turbulent separated flows: Theory and experiments[END_REF] on how a bilinear delayed differential equation can be obtained as a model for a flow system.
A unidimensional approximation to the Navier-Stokes equations is the Burgers' equation given by
$$\frac{\partial v(t,x)}{\partial t} + v(t,x)\,\frac{\partial v(t,x)}{\partial x} = \nu\,\frac{\partial^2 v(t,x)}{\partial x^2}, \qquad (1)$$
where $v : \mathbb{R}^2 \to \mathbb{R}$ is the flow velocity field, $x \in \mathbb{R}$ is the spatial coordinate, and $\nu \in \mathbb{R}_{\geq 0}$ is the kinematic viscosity. Assume that $x \in [0, F]$ for some $F \in \mathbb{R}_{>0}$. Suppose that $v(t,x) = v(x - ct)$, i.e. the solution of (1) is a travelling wave with velocity $c \in \mathbb{R}_{\geq 0}$; it has been proven that (1) admits this kind of solution [START_REF] Debnath | Nonlinear Partial Differential Equations for Scientists and Engineers[END_REF]. A model approximation of (1) can be obtained by discretizing it in the spatial coordinate. Here, we use central finite differences for the spatial derivatives, with a mesh of three points (and step $h = F/2$). Thus
$$\frac{\partial v(t,F/2)}{\partial t} + \frac{v(t,F/2)}{F}\left[v(t,F) - v(t,0)\right] = \frac{4\nu}{F^2}\left[v(t,F) - 2v(t,F/2) + v(t,0)\right]. \qquad (2)$$
Since $v$ is assumed to be a travelling wave, it has a periodic pattern in space and time. In particular, note that $v(t, F/2) = v(F/2 - ct) = v(t + F/(2c), F) = v(t - F/(2c), 0)$. Now, define $y(t) = v(t, F)$ and $u(t) = v(t, 0)$; thus, (2) can be rewritten as
$$\dot y(t) = -\tfrac{1}{F}\, y(t-\varsigma)\, u(t-2\varsigma) + \tfrac{1}{F}\, y(t)\, u(t-\varsigma) + \tfrac{4\nu}{F^2}\left[y(t-\varsigma) - 2y(t) + u(t-\varsigma)\right],$$
where $\varsigma = F/(2c)$. Hence, in [START_REF] Feingesicht | Nonlinear active control of turbulent separated flows: Theory and experiments[END_REF], [START_REF] Feingesicht | A bilinear input-output model with state-dependent delay for separated flow control[END_REF], [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF] the authors propose a more general model for separated flow control:
$$\dot y = \sum_{i=1}^{N_1} a_i\, y(t-\tau_i) + \sum_{j=1}^{N_2} \sum_{k=1}^{N_3} \left[\bar a_k\, y(t-\bar\tau_k) + b_j\right] u(t-\varsigma_k). \qquad (3)$$
Observe that this approximating model still recovers two main features of the original flow model: first, it is nonlinear; and second, it is infinite dimensional.
III. PROBLEM STATEMENT
Consider the system
$$\dot x(t) = a_1 x(t-\tau_1) - a_2 x(t-\tau_2) + \left[c_1 x(t-\tilde\tau_1) - c_2 x(t-\tilde\tau_2) + b\right] u(t-\varsigma), \qquad (4)$$
where $a_1, a_2, c_1, c_2, b, \tau_1, \tau_2, \tilde\tau_1, \tilde\tau_2 \in \mathbb{R}_{\geq 0}$. We assume that all the delays are bounded and constant. We also assume that the initial conditions of (4) are $x(t) = 0$ for all $t < 0$ and $x(0) = x_0$ for some $x_0 \geq 0$.
The control objective is to drive the state of the system to a constant reference x * ∈ R >0 . Such an objective must be achieved under the following general restrictions:
• Since the equation is used to model a positive physical system, some conditions on the model parameters have to be given to guarantee that the solutions of (4) can only take nonnegative values. • Due to the physical nature of the on/off actuator, the control input is restricted to take values from the set {0, 1}.
IV. SYSTEM'S PROPERTIES
As stated in Section III we require some features of the solutions of (4) to guarantee that it constitutes a suitable model for the physical system. In this section we study the conditions on the parameters of ( 4) that guarantee nonnegativeness and boundedness of the solutions. Of course, existence and uniqueness of solutions must be guaranteed. To this aim we rewrite (4) as
$$\dot x(t) = a_1 x(t-\tau_1) + c_1 u(t-\varsigma)\, x(t-\tilde\tau_1) - a_2 x(t-\tau_2) - c_2 u(t-\varsigma)\, x(t-\tilde\tau_2) + b\, u(t-\varsigma), \qquad (5)$$
which can be seen as a linear delayed system with time-varying coefficients. The term $b\,u(t-\varsigma)$ is considered as the input. First, we consider the general case $u : \mathbb{R}_{\geq 0} \to \mathbb{R}$ and then restrict it to $u : \mathbb{R}_{\geq 0} \to \{0, 1\}$.
A locally absolutely continuous function that satisfies (5) for almost all $t \in [0, \infty)$, and its initial conditions for all $t \leq 0$, is called a solution of (5) [START_REF] Agarwal | Nonoscillation Theory of Functional Differential Equations with Applications[END_REF]. Hence, if in addition to the assumptions in the previous section we assume that $u : [0, \infty) \to \mathbb{R}$ is a Lebesgue-measurable locally essentially bounded function, then the solution of (5) exists and is unique, see Appendix A. Such a definition of solution is adequate for the analysis made in this section; however, for the closed-loop behaviour analysis, we will also consider another framework, see Remark 1 in Section V.
A. Nonnegative solutions
We have said that the model has to be guaranteed to provide nonnegative solutions. Thus, we first search for some conditions that guarantee that the solutions of (5) are nonoscillatory 2 . Consider (5) and define
$P(t) = a_1 + c_1 u(t-\varsigma)$ and $N(t) = a_2 + c_2 u(t-\varsigma)$.
Lemma 1 ([2], Corollary 3.13): Consider (5) with $b = 0$. If $\min(\tau_2, \tilde\tau_2) \geq \max(\tau_1, \tilde\tau_1)$, $N(t) \geq P(t)$ for all $t \geq t_0$, and there exists $\lambda \in (0,1)$ such that
$$\limsup_{t \to \infty} \int_{t-\max(\tau_2,\tilde\tau_2)}^{t-\min(\tau_1,\tilde\tau_1)} \left(N(s) - \lambda P(s)\right) ds < \frac{\ln(1/\lambda)}{e}, \qquad \limsup_{t \to \infty} \int_{t-\max(\tau_2,\tilde\tau_2)}^{t} \left(N(s) - \lambda P(s)\right) ds < \frac{1}{e},$$
then the fundamental solution of (5) is such that $X(t,s) > 0$, $t \geq s \geq t_0$, and (5) has an eventually positive solution with an eventually nonpositive derivative.
Now, having nonoscillation conditions for (5), we can state the following.
Corollary 1: Consider (5) with b ≥ 0. Suppose that the assumptions of Lemma 1 hold. Assume that x(t) = 0, u(t) = 0 for all t < 0 and x(0) = x 0 for some x 0 ≥ 0. If u(t) ≥ 0 for all t ≥ 0, then x(t) ≥ 0 for all t ≥ 0. The proof is straightforward through the solution representation by using the fundamental function, see Lemma 3. Note that, in particular, the integral conditions of Lemma 1 are satisfied if
$$\left(a_2 - \frac{1}{e}(a_1 + c_1)\right) \max(\tau_2, \tilde\tau_2) < \frac{1}{e}.$$
Although this is only sufficient, it constitutes a simple formula to verify the integral conditions of Lemma 1.
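For concreteness, these parameter conditions can be checked numerically; the helper below verifies the delay ordering of Lemma 1, the simple sufficient bound above, and the gap condition (6) evaluated for $u \equiv 1$, using the parameter values later given in Section VII. This is only an illustration of the check, not part of the proofs.

```python
import math

def check_conditions(a1, a2, c1, c2, tau1, tau2, tau1t, tau2t):
    """Return (ordering, bound, gap): the delay ordering of Lemma 1, the simple
    sufficient bound replacing its integral conditions, and condition (6) with
    u(t) = 1, i.e. N(t) - P(t) = a2 + c2 - a1 - c1 > 0."""
    ordering = min(tau2, tau2t) >= max(tau1, tau1t)
    bound = (a2 - (a1 + c1) / math.e) * max(tau2, tau2t) < 1 / math.e
    gap = (a2 + c2) - (a1 + c1)
    return ordering, bound, gap > 0

print(check_conditions(a1=0.2, a2=1.0, c1=0.1, c2=0.4,
                       tau1=0.05, tau2=0.11, tau1t=0.07, tau2t=0.09))
# -> (True, True, True) for the example parameters
```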
B. Boundedness of solutions
Observe that the nonoscillation conditions of Lemma 1 also guarantee the boundedness of the system's trajectories for $b = 0$. For the case $b \neq 0$ we have the following result.
Lemma 2: Consider (4) with its parameters satisfying Lemma 1, and with the initial conditions $x(t) = 0$, $u(t) = 0$ for all $t \leq 0$. If $b \neq 0$,
$$N(t) - P(t) \geq \alpha, \quad \forall\, t \geq 0, \qquad (6)$$
for a strictly positive $\alpha$, and $u(t) = 1$ for all $t \geq 0$, then the solution of (5) is such that $x(t) \leq \bar x$ for all $t \geq 0$ and
$$\lim_{t \to \infty} x(t) = \bar x, \qquad \bar x = \frac{b}{a_2 + c_2 - a_1 - c_1}. \qquad (7)$$
Proof: According to Lemma 1, if $b = 0$, then we can ensure that there exists $t_1$ such that $x(t) > 0$ and $\dot x(t) \leq 0$ for all $t \geq t_1$. Hence, there exists $t_2 \geq t_1$ such that for all $t \geq t_2$
$$\dot x \leq a_1 x(t - \max(\tau_1, \tilde\tau_1)) + c_1 u(t-\varsigma)\, x(t - \max(\tau_1, \tilde\tau_1)) - a_2 x(t - \min(\tau_2, \tilde\tau_2)) - c_2 u(t-\varsigma)\, x(t - \min(\tau_2, \tilde\tau_2)) \leq P(t)\, x(t - \max(\tau_1, \tilde\tau_1)) - N(t)\, x(t - \min(\tau_2, \tilde\tau_2)),$$
thus, since $N(t) - P(t) \geq \alpha$, we can ensure that $\lim_{t\to\infty} x(t) = 0$, see e.g. [2, Theorem 3.4]. Now, for the particular case $u(t) = 1$ and $b = 0$, (4) is time-invariant and the asymptotic behaviour of $x(t)$ guarantees that $x = 0$ is asymptotically stable; therefore, it is exponentially stable and its fundamental solution $X(t,s)$ is exponentially bounded (see e.g. [START_REF] Györi | Oscillation Theory of Delay Differential Equations With Applications[END_REF], [START_REF] Fridman | Introduction to Time-Delay Systems[END_REF]). Hence, for the case $b \neq 0$, $u(t) = 1$, the solution of (4) can be expressed as (see Lemma 3 in Appendix A)
$$x(t) = X(t, t_0)\, x(0) + \int_{t_0}^{t} X(t, s)\, b \, ds.$$
Since $X(t,s)$ decreases exponentially in $t$, $x(t)$ is bounded; moreover, $x(t)$ increases monotonically due to the input term. Thus $\lim_{t\to\infty} x(t)$ exists and equals some constant $\bar x$; therefore, $\lim_{t\to\infty} \dot x(t) = 0 = -(a_2 + c_2 - a_1 - c_1)\bar x + b$. This equality gives the limit value (7).
V. SLIDING MODE CONTROLLER
In this section we present the Sliding Mode Controller for (4), but first, define k : R → R given by
$$k(r) = k_{a_1}(r) - k_{a_2}(r) + k_{c_1}(r), \qquad (8)$$
where
$$k_{a_1}(r) = \begin{cases} a_1, & r \in [\min(\varsigma, \tau_1), \max(\varsigma, \tau_1)], \\ 0, & \text{otherwise}, \end{cases} \qquad k_{a_2}(r) = \begin{cases} a_2, & r \in [\varsigma, \tau_2], \\ 0, & \text{otherwise}, \end{cases} \qquad k_{c_1}(r) = \begin{cases} c_1, & r \in [\varsigma, \tilde\tau_1], \\ 0, & \text{otherwise}. \end{cases}$$
Theorem 1: If system (4) satisfies the conditions of Lemma 1, the condition (6), $\varsigma \leq \tilde\tau_1$, and
$$\int_{\min(\varsigma, \tau_1)}^{\tau_2} |k(r)| \, dr < 1, \qquad (9)$$
then, for any $x^* \in (0, \bar x)$ (where $\bar x$ is given by (7)), the solution of the closed loop of (4) with the controller
$$u(t) = \tfrac{1}{2}\left(1 - \operatorname{sign}(\sigma_0(t) - \sigma^*)\right), \qquad (10)$$
$$\sigma_0(t) = x(t) + a_1 \int_{t-\tau_1}^{t} x(s)\,ds - a_2 \int_{t-\tau_2}^{t} x(s)\,ds + c_1 \int_{t-\tilde\tau_1+\varsigma}^{t} x(s)\,ds + \int_{t-\varsigma}^{t} \left[c_1 x(s-\tilde\tau_1+\varsigma) - c_2 x(s-\tilde\tau_2+\varsigma) + b\right] u(s)\,ds, \qquad (11)$$
where
$$\sigma^* = x^* \left[1 - a_2(\tau_2 - \varsigma) + a_1(\tau_1 - \varsigma) + c_1(\tilde\tau_1 - \varsigma)\right],$$
establishes a sliding motion in finite-time on the surface σ 0 (t) = σ * , and the sliding motion converges exponentially to x * . The design procedure is explained through the proof of the theorem given in the following sections. Note that for implementation, the following equivalent formula can also be used
$$\sigma_0(t) = x(t) + \int_0^t \Big\{ (a_1 + c_1 - a_2)\, x(s) - a_1 x(s-\tau_1) + a_2 x(s-\tau_2) - c_1 x(s-\tilde\tau_1+\varsigma)\,(1-u(s)) + \left[-c_2 x(s-\tilde\tau_2+\varsigma) + b\right] u(s) - \left[c_1 x(s-\tilde\tau_1) - c_2 x(s-\tilde\tau_2) + b\right] u(s-\varsigma) \Big\}\, ds.$$
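For implementation, the equivalent formula above can be evaluated recursively in discrete time. The sketch below is a simplified illustration under assumptions made here: the delays are integer multiples of the step, the parameter dictionary keys (e.g. tau1t for the tilded delay and ci for the input delay) are naming choices of this sketch, and the integral is advanced with a left-endpoint Riemann sum before the relay law (10) is applied.

```python
def relay_control(k, x, u, I, p, dt, sigma_star):
    """One controller update at step k (time t = k*dt): advance the running
    integral I of the equivalent formula using the already-known values at
    step k-1, then return (u_k, I). x and u are the stored trajectories;
    indices below 0 read as 0 (zero pre-history)."""
    def past(arr, j, d):
        i = j - int(round(d / dt))
        return arr[i] if i >= 0 else 0.0
    if k > 0:
        j = k - 1
        integrand = ((p["a1"] + p["c1"] - p["a2"]) * x[j]
                     - p["a1"] * past(x, j, p["tau1"]) + p["a2"] * past(x, j, p["tau2"])
                     - p["c1"] * past(x, j, p["tau1t"] - p["ci"]) * (1.0 - u[j])
                     + (-p["c2"] * past(x, j, p["tau2t"] - p["ci"]) + p["b"]) * u[j]
                     - (p["c1"] * past(x, j, p["tau1t"])
                        - p["c2"] * past(x, j, p["tau2t"]) + p["b"]) * past(u, j, p["ci"]))
        I += integrand * dt
    sigma0 = x[k] + I
    u_k = 1.0 if sigma0 < sigma_star else 0.0   # u = (1 - sign(sigma0 - sigma*)) / 2
    return u_k, I
```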
A. Sliding variable
Following (10), define the sliding variable as $\sigma(t) = \sigma_0(t) - \sigma^*$. The time derivative of $\sigma$ is
$$\dot\sigma(t) = -(a_2 - a_1 - c_1)\, x(t) - c_1 x(t-\tilde\tau_1+\varsigma) + \left[c_1 x(t-\tilde\tau_1+\varsigma) - c_2 x(t-\tilde\tau_2+\varsigma) + b\right] u(t). \qquad (12)$$
Observe that $\sigma_0$ acts as a kind of predictor, since it allows us to have $u$ without delay in (12). Now, let us verify that the trajectories of (4) in closed loop with (10) reach and remain on the sliding surface $\sigma = 0$ in finite time. To this end, we substitute (10) in (12) to obtain the differential equation
$$\dot\sigma(t) = -\tfrac{1}{2}\, g_1(t) \operatorname{sign}(\sigma(t)) + g_2(t), \qquad (13)$$
where
$$g_1(t) = c_1 x(t-\tilde\tau_1+\varsigma) - c_2 x(t-\tilde\tau_2+\varsigma) + b, \qquad g_2(t) = \tfrac{1}{2} g_1(t) + (a_1 + c_1 - a_2)\, x(t) - c_1 x(t-\tilde\tau_1+\varsigma). \qquad (14)$$
Before we proceed to prove the establishment of a sliding regime on $\sigma = 0$, we have to guarantee the existence of solutions of (13).
Remark 1: Note that (13) can be seen as a nonautonomous differential equation with discontinuous right-hand side; therefore, we can use the definition of solutions given by Filippov in [8, p. 50]3 . But $g_1$ and $g_2$ in (13) depend on $x$, which is the solution of the functional differential equation (4) and in turn depends on $\sigma$ through the input $u$. However, if we study the system (4), (13) recursively on the intervals $[n\varsigma, (n+1)\varsigma)$, $n = 0, 1, 2, \ldots$, we can see that the Filippov approach still works. Indeed, from the assumptions on the initial conditions of (4), $u(t) = 0$ for $t \in [0, \varsigma)$; hence, on that interval the solutions of (4) do not depend on $\sigma$, and (13) can be seen as a simple differential equation with discontinuous right-hand side. Next, the solutions of (4) on $[\varsigma, 2\varsigma)$ are not affected by the values of $\sigma(t)$ for $t \in [\varsigma, 2\varsigma)$; thus, on that interval, (13) is again a simple differential equation with discontinuous right-hand side, and so forth.
Consider the Lyapunov function candidate $V(\sigma) = \tfrac{1}{2}\sigma^2$, whose derivative along (13) is given by
$$\dot V = \sigma\left(g_2(t) - \tfrac{1}{2} g_1(t)\operatorname{sign}(\sigma)\right) = -\tfrac{1}{2}\left(g_1(t) - 2 g_2(t)\operatorname{sign}(\sigma)\right)|\sigma|.$$
Hence, $V$ is a Lyapunov function for (13) if $g_1(t) - 2g_2(t)\operatorname{sign}(\sigma) \geq 0$. Let us start with the case $\sigma > 0$. In this case we have $g_1(t) - 2g_2(t) = (a_2 - a_1 - c_1)x(t) + c_1 x(t-\tilde\tau_1+\varsigma)$. Therefore, since the solutions of (4) are guaranteed to be nonnegative, $g_1(t) - 2g_2(t) \geq 0$. For the case $\sigma < 0$ we have $g_1(t) - 2g_2(t)\operatorname{sign}(\sigma) = 2\left(b - (a_2 - a_1 - c_1)x(t) - c_2 x(t-\tilde\tau_2+\varsigma)\right)$.
Note that since $\sigma(0) < \sigma^*$ then $x(0) < \sigma^*$, and we know that in this case $x(t)$ is bounded from above by $\bar x$. This clearly implies that $b - (a_2 - a_1 - c_1)x(t) - c_2 x(t-\tilde\tau_2+\varsigma) \geq 0$.
Up to now, we have proven that $\sigma = 0$ is Lyapunov stable; however, to guarantee finite-time convergence of $\sigma(t)$ to the origin, we have to verify that $g_1(t) - 2g_2(t)\operatorname{sign}(\sigma)$ is bounded from below by a strictly positive constant. The condition $\sigma > 0$ implies that $x_0 > \sigma^*$. If $x(t)$ is increasing this is convenient for the analysis; the critical situation is when $x(t)$ is decreasing and $x(t) < \bar x$. Note that, in such a case, $u = 0$ necessarily. Now, suppose that for some $t_1$ we have $x(t_1) = x^*$; then
$$\sigma(t_1) = -a_2 \int_{t_1-\tau_2}^{t_1} x(s)\,ds + a_1 \int_{t_1-\tau_1}^{t_1} x(s)\,ds + c_1 \int_{t_1-\tilde\tau_1+\varsigma}^{t_1} x(s)\,ds - a_2(\tau_2-\varsigma) + a_1(\tau_1-\varsigma) + c_1(\tilde\tau_1-\varsigma),$$
which is clearly negative. Hence, we can guarantee that, for the case $\sigma > 0$, $x(t)$ is bounded from below by $x^*$, and therefore $g_1(t) - 2g_2(t) \geq (a_2 - a_1)x^*$. Now, for practical purposes, let us define
$$S(t) = x(t) - a_2 \int_{t-\tau_2}^{t-\varsigma} x(s)\,ds + a_1 \int_{t-\tau_1}^{t-\varsigma} x(s)\,ds + c_1 \int_{t-\tilde\tau_1}^{t-\varsigma} x(s)\,ds. \qquad (15)$$
Observe that the sliding variable $\sigma$ can be rewritten as $\sigma(t) = S(t) - \sigma^* + R(t)$, where
$$R(t) = -\int_{t-\varsigma}^{t} \left[(a_2 - a_1 - c_1)\, x(s) + c_1 x(s-\tilde\tau_1+\varsigma)\right] ds + \int_{t-\varsigma}^{t} \left[c_1 x(s-\tilde\tau_1+\varsigma) - c_2 x(s-\tilde\tau_2+\varsigma) + b\right] u(s)\,ds.$$
Now we want to prove that $b - (a_2-a_1-c_1)x(t) - c_2 x(t-\tilde\tau_2+\varsigma)$ is strictly positive when $\sigma < 0$. In this case the critical situation is when $x$ is monotonically increasing, which happens only if $u = 1$. Note that in such a situation
$$\sigma(t) = S(t) - \sigma^* + \int_{t-\varsigma}^{t} \left[b - (a_2-a_1-c_1)\,x(s) - c_2 x(s-\tilde\tau_2+\varsigma)\right] ds,$$
where the integral term is strictly positive. Note also that $S(t) - \sigma^* \geq x(t) - x^*$.
Hence, if for some $t_1$ we have $x(t_1) = x^*$, then $\sigma(t_1) \geq 0$. Thus, we can conclude that $b - (a_2-a_1-c_1)x(t) - c_2 x(t-\tilde\tau_2+\varsigma)$ is bounded from below by $b - (a_2+c_2-a_1-c_1)x^*$ when $\sigma < 0$. Therefore, we have proven that the sliding mode is established in finite time.
B. Sliding dynamics
To obtain the dynamics on the sliding surface $\sigma = 0$, we use the Equivalent Control method [START_REF] Utkin | Sliding Modes in Control and Optimization[END_REF], see also [START_REF] Utkin | Sliding Mode Control in Electro-Mechanical Systems[END_REF], [START_REF] Fedorovich | Differential equations with discontinuous right-hand side[END_REF], [START_REF] Polyakov | Stability notions and lyapunov functions for sliding mode control systems[END_REF]. To compute the equivalent control, we set $\dot\sigma(t) = 0$ and obtain that
$$\left[c_1 x(t-\tilde\tau_1+\varsigma) - c_2 x(t-\tilde\tau_2+\varsigma) + b\right] u(t) = -(a_1+c_1-a_2)\,x(t) - c_1 x(t-\tilde\tau_1+\varsigma).$$
By substituting this expression in the equation $\sigma(t) = 0$ we obtain that the sliding dynamics is given by the integral equation
$$S(t) - \sigma^* = 0, \qquad (16)$$
where $S$ is given by (15). Hence, our objective is to prove that the solution $x(t)$ of (16) converges exponentially to $x^*$.
Here we are going to use the results provided in Appendix B. First, let us rewrite (16) in a more suitable form. Define the change of variable $z(t) = x(t) - x^*$; thus, from the dynamics on the sliding surface, we obtain the integral equation
$$z(t) - a_2 \int_{t-\tau_2}^{t-\varsigma} z(s)\,ds + a_1 \int_{t-\tau_1}^{t-\varsigma} z(s)\,ds + c_1 \int_{t-\tilde\tau_1}^{t-\varsigma} z(s)\,ds = 0.$$
Note that this equation can be rewritten as follows:
$$z(t) + \int_{t^*}^{t} k(t-s)\, z(s)\,ds = f(t), \qquad t \geq t^*, \qquad (17)$$
where $t^*$ is the reaching time to the sliding surface (i.e. the minimum $t$ such that $\sigma(t) = 0$), $k$ is given by (8) with the parameter $r$ replaced by $t-s$, and
$$f(t) = -\int_{t^*-\tau_2}^{t^*} k(t-s)\,\varphi(s)\,ds, \qquad \varphi(t) = z(t), \quad \forall t \leq t^*.$$
Observe that (17) is a Volterra integral equation of the second type and the kernel k of the integral is a convolution kernel. Now, we can state directly the following result.
Theorem 2: If $k : \mathbb{R}_{\geq t^*} \times \mathbb{R}_{\geq t^*} \to \mathbb{R}$ is a measurable kernel with $\|k\|_{L^p(\mathbb{R}_{\geq t^*})} < 1$, then for any $f \in L^1(\mathbb{R}_{\geq t^*})$ there exists a unique solution of (17), and it is such that $z \in L^1(\mathbb{R}_{\geq t^*})$. Moreover, $z(t) \to 0$ exponentially as $t \to \infty$.
Proof: First we claim that $f$ is in $L^1(\mathbb{R}_{\geq t^*})$; see the verification in Appendix C. According to Lemma 5, the assumptions of Theorem 1 guarantee that $\|k\|_{L^p(\mathbb{R}_{\geq t^*})} < 1$. Thus, we can use Lemma 4 to guarantee existence and uniqueness of solutions of (17). Now, Lemmas 6 and 7 guarantee the exponential stability of $z = 0$.
Since z = 0 is exponentially stable, x(t) → x * exponentially on the sliding surface.
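The small-gain condition (9) on the kernel (8) can also be verified numerically for given parameters. The sketch below integrates |k(r)| over its support with a simple Riemann sum; note that the input delay ς is not among the example parameters of Section VII, so any value passed as the `ci` argument is an assumption for illustration only.

```python
def kernel_l1_norm(a1, a2, c1, tau1, tau2, tau1t, ci, dr=1e-5):
    """Riemann-sum approximation of the integral of |k(r)| in condition (9),
    with k built from the indicator intervals defined in (8)."""
    lo, hi = min(ci, tau1), tau2
    total, r = 0.0, lo
    while r < hi:
        k = 0.0
        if min(ci, tau1) <= r <= max(ci, tau1):
            k += a1
        if ci <= r <= tau2:
            k -= a2
        if ci <= r <= tau1t:
            k += c1
        total += abs(k) * dr
        r += dr
    return total   # condition (9) requires this value to be < 1
```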
VI. ROBUSTNESS
In this section we analyse the robustness of the closed loop of (4) with [START_REF] Gripenberg | Volterra Integral and Functional Equations[END_REF], [START_REF] Györi | Oscillation Theory of Delay Differential Equations With Applications[END_REF]. For this, consider the system
$$\dot x(t) = a_1 x(t-\tau_1) - a_2 x(t-\tau_2) + \left[c_1 x(t-\tilde\tau_1) - c_2 x(t-\tilde\tau_2) + b\right] u(t-\varsigma) + \delta(t), \qquad (18)$$
where $\delta : \mathbb{R} \to \mathbb{R}$ is an external disturbance. We assume that $\|\delta\|_{L^\infty(\mathbb{R}_{\geq 0})} = \Delta$ for some finite $\Delta \in \mathbb{R}_{\geq 0}$. Considering (18), the time derivative of the sliding variable $\sigma$ is
$$\dot\sigma(t) = -\tfrac{1}{2}\, g_1(t) \operatorname{sign}(\sigma(t)) + g_2(t) + \delta(t), \qquad (19)$$
where $g_1$ and $g_2$ are given by (14). Consider $V(\sigma) = \tfrac{1}{2}\sigma^2$ as a Lyapunov function candidate for (19). The derivative of $V$ along (19) satisfies
$$\dot V \leq -\tfrac{1}{2}\left[g_1(t) - 2 g_2(t)\operatorname{sign}(\sigma) - |\delta(t)|\right] |\sigma|.$$
In Section V we proved that there exists a strictly positive $\epsilon$ such that $g_1(t) - 2g_2(t)\operatorname{sign}(\sigma) \geq \epsilon$ for all $t$ along the reaching phase; thus, if $\Delta < \epsilon$, then $\dot V \leq -\tfrac{1}{2}\left[\epsilon - \Delta\right]|\sigma|$ and the sliding regime is established in finite time. Nevertheless, since the sliding variable contains delayed terms of the control, the establishment of the sliding mode does not guarantee complete disturbance rejection, see e.g. [START_REF] Kiong | Comments on "robust stabilization of uncertain input-delay systems by sliding mode control with delay compensation[END_REF], [START_REF] Polyakov | Minimization of disturbances effects in time delay predictor-based sliding mode control systems[END_REF]. Thus, let us analyse the behaviour of the sliding motion in the presence of the disturbance $\delta$. Using again the equivalent control method (taking the disturbance into account), we obtain the sliding dynamics $S(t) - \sigma^* - \delta(t) = 0$. If we use again the change of variable $z(t) = x(t) - x^*$, then the sliding dynamics can be rewritten as
$$z(t) + \int_{t^*}^{t} k(t-s)\, z(s)\,ds = f(t) + \delta(t), \qquad t \geq t^*, \qquad (20)$$
or equivalently $z(t) + (k \star z)(t) = f(t) + \delta(t)$. We have proven that the solution of (20) is given by
$$z(t) = [f + \delta](t) - \left(r \star [f + \delta]\right)(t),$$
where $r$ is a convolution operator of type $L^1(\mathbb{R}_{\geq t^*})$, see Theorem 2 and Appendix B. Now, since $f$ is bounded (see Appendix C) and $\delta \in L^\infty(\mathbb{R}_{\geq 0})$, then $[f+\delta] \in L^\infty(\mathbb{R}_{\geq t^*})$. Hence, according to Lemma 8, we can ensure that $z \in L^\infty(\mathbb{R}_{\geq 0})$.
VII. NUMERICAL EXAMPLE
Consider (4) with the parameters $a_1 = 0.2$, $a_2 = 1$, $c_1 = 0.1$, $c_2 = 0.4$, $b = 1$, $\tau_1 = 0.05$, $\tau_2 = 0.11$, $\tilde\tau_1 = 0.07$, $\tilde\tau_2 = 0.09$. The values of these parameters were chosen in the same order as those obtained in [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF]. Of course, they satisfy all the conditions of Theorem 1. The simulations were made with Matlab by using an explicit Euler integration method with a step of 1 ms. In Fig. 1 we can observe the system's state for a simulation with initial condition $x_0 = 0$ in the nominal case. Fig. 2 shows the control signal. In Fig. 3 we can see a simulation considering a disturbance $\delta(t) = \sin(10t)/10$. Note in Fig. 4 that, for this example, the amplitude in steady state is less than the amplitude of the disturbance.
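A closed-loop simulation of this kind can be reproduced along the following lines (a hedged sketch, not the authors' Matlab code): an explicit Euler step for (4) with zero pre-history, the relay law (10) evaluated through a σ0 update such as the `relay_control` sketch given after the equivalent formula for (11), and an input delay ς that is not reported in the text and is therefore assumed here.

```python
import numpy as np

p = dict(a1=0.2, a2=1.0, c1=0.1, c2=0.4, b=1.0,
         tau1=0.05, tau2=0.11, tau1t=0.07, tau2t=0.09,
         ci=0.03)                      # varsigma assumed: not given in Section VII
dt, T, x_star = 1e-3, 10.0, 0.5        # x_star must lie in (0, x_bar) = (0, b/1.1)
sigma_star = x_star * (1 - p["a2"] * (p["tau2"] - p["ci"])
                       + p["a1"] * (p["tau1"] - p["ci"])
                       + p["c1"] * (p["tau1t"] - p["ci"]))
N = int(T / dt)
x, u = np.zeros(N + 1), np.zeros(N + 1)
I = 0.0
def delayed(arr, k, d):                # delayed sample with zero initial history
    i = k - int(round(d / dt))
    return arr[i] if i >= 0 else 0.0
for k in range(N):
    u[k], I = relay_control(k, x, u, I, p, dt, sigma_star)
    dx = (p["a1"] * delayed(x, k, p["tau1"]) - p["a2"] * delayed(x, k, p["tau2"])
          + (p["c1"] * delayed(x, k, p["tau1t"])
             - p["c2"] * delayed(x, k, p["tau2t"]) + p["b"]) * delayed(u, k, p["ci"]))
    x[k + 1] = x[k] + dt * dx          # explicit Euler step for (4)
```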
VIII. CONCLUSIONS
We proposed a Sliding Mode Controller for a class of scalar bilinear systems with delays. We have shown that the combination of Lyapunov function and Volterra operator theory provides a very useful tool to study the stability and robustness properties of the proposed control scheme. Naturally, a future direction in this research is to try to extend the control scheme to higher order systems.
APPENDIX
A. Solutions of delayed differential equations
The theory recalled in this section was taken from [START_REF] Agarwal | Nonoscillation Theory of Functional Differential Equations with Applications[END_REF], see also [START_REF] Györi | Oscillation Theory of Delay Differential Equations With Applications[END_REF] and [START_REF] Fridman | Introduction to Time-Delay Systems[END_REF]. Consider the system
$$\dot x = \sum_{i=1}^{N} a_i(t)\, x(t-\tau_i), \qquad (21)$$
where each τ i ∈ R ≥0 , and each a i is a Lebesgue-measurable locally essentially bounded function. Definition 1: The function X(t, s) that satisfies, for each s ≥ 0, the problem
$$\dot x = \sum_{i=1}^{N} a_i(t)\, x(t-\tau_i),$$
x(t) = 0 for t < s, x(s) = 1, is called the fundamental function of (21).
It is assumed that X(t, s) = 0 when 0 ≤ t < s. Now consider the system
$$\dot x = \sum_{i=1}^{N} a_i(t)\, x(t-\tau_i) + f(t), \qquad (22)$$
with initial conditions x(t) = 0 for all t < 0 and x(0) = x 0 for some x 0 ∈ R. Lemma 3: Assume that a i , τ i are as above and f is a Lebesgue-measurable locally essentially bounded function, then there exists a unique solution of ( 22) and it can be written as
$$x(t) = X(t, 0)\, x_0 + \int_0^t X(t, s)\, f(s)\,ds.$$
B. Volterra equations
Most of the results recalled in this section were taken from [START_REF] Gripenberg | Volterra Integral and Functional Equations[END_REF]. For $z : \mathbb{R} \to \mathbb{R}$ consider the integral equation
$$z(t) + \int_{t^*}^{t} k(t,s)\, z(s)\,ds = f(t), \qquad t \geq t^*. \qquad (23)$$
Define the map $t \mapsto \int_{t^*}^{t} k(t,s)\, z(s)\,ds$ as $k \star z$. Hence, we rewrite (23) as
$$z(t) + (k \star z)(t) = f(t). \qquad (24)$$
A function $r$ is called a resolvent of (24) if $z(t) = f(t) - (r \star f)(t)$. We say that the kernel $k : J \times J \to \mathbb{R}$ is of type $L^p$ on the interval $J$ if $\|k\|_{L^p(J)} < \infty$, where
$$\|k\|_{L^p(J)} = \sup_{\|g_1\|_{L^p(J)} \leq 1,\ \|g_2\|_{L^p(J)} \leq 1} \int_J \int_J |g_1(t)\, k(t,s)\, g_2(s)|^p \, ds\, dt.$$
The question about the existence and uniqueness of solutions of (24) is answered by the following lemma.
Lemma 4 ([10], Theorem 9-3.6): If $k$ is a kernel of type $L^p$ on $J$ that has a resolvent $r$ of type $L^p$ on $J$, and if $f \in L^p(J)$, then (24) has a unique solution $z \in L^p(J)$, and such a solution is given by $z(t) = f(t) - (r \star f)(t)$.
Now we have two problems: to verify whether $k$ is a kernel of type $L^p$ and whether it has a resolvent $r$ of type $L^p$. For the particular case $p = 1$ we have the following lemma.
Lemma 5 ([10], Proposition 9-2.7): Let $k : J \times J \to \mathbb{R}$ be a measurable kernel. Then $k$ is of type $L^1$ on $J$ if and only if $N(k) = \operatorname{ess\,sup}_{s \in J} \int_J |k(t,s)|\,dt < \infty$. Moreover, $N(k) = \|k\|_{L^1(J)}$.
And finally:
Lemma 6 ([10], Corollary 9-3.10): If $k$ is a kernel of type $L^p$ on $J$ and $\|k\|_{L^p(J)} < 1$, then $k$ has a resolvent $r$ of type $L^p$ on $J$.
Now we can guarantee some asymptotic behaviour of $z(t)$ according to the asymptotic behaviour of $f(t)$ for a Volterra kernel $k$. Nonetheless, if such a kernel is of convolution kind, i.e. $k(t,s) = k(t-s)$, we can say something more. For the following lemma let us denote the Laplace transform of $k(t)$ by $K(z)$, $z \in \mathbb{C}$.
Lemma 7 ([10], Theorem 2-4.1): Let $k$ be a Volterra kernel of convolution kind and of type $L^1$ on $\mathbb{R}_{\geq 0}$. Then the resolvent $r$ is of type $L^1$ on $\mathbb{R}_{\geq 0}$ if and only if $\det(I + K(z)) \neq 0$ for all $z \in \mathbb{C}$ such that $\operatorname{Re}\{z\} \geq 0$.
To finalise this section we recall the following lemma, which is useful for the robustness analysis.
Lemma 8 ([10], Theorem 2-2.2): Let $r$ be a convolution Volterra kernel of type $L^1(\mathbb{R}_{\geq 0})$, and let $b \in L^p(\mathbb{R}_{\geq 0})$ for some $p \in [1, \infty]$. Then $r \star b \in L^p(\mathbb{R}_{\geq 0})$, and $\|r \star b\|_{L^p(\mathbb{R}_{\geq 0})} \leq \|r\|_{L^1(\mathbb{R}_{\geq 0})} \|b\|_{L^p(\mathbb{R}_{\geq 0})}$.

C. Function $f$ is in $L^1$

Here we verify that $f$ is in $L^1$. First note that the integral in $f$ is restricted to $t^* - \tau_2 \leq s \leq t^*$; therefore, the argument of $k(t-s)$ is restricted to $t - t^* \leq t - s \leq t - t^* + \tau_2$. Recall that $k(t-s)$ is different from zero only on the interval $[\min(\varsigma, \tau_1), \tau_2]$. Hence, under the integral in $f$, $k(t-s)$ can be different from zero only for $t^* + \min(\varsigma, \tau_1) - \tau_2 \leq t \leq t^* + \tau_2$. Thus
$$\|f\|_{L^1(\mathbb{R}_{\geq 0})} = \int_0^\infty |f(t)|\,dt = \int_{t^* + \min(\varsigma, \tau_1) - \tau_2}^{t^* + \tau_2} |f(t)|\,dt.$$
Now, since $\varphi(t) = x(t) - x^*$ and $x(t)$ was guaranteed to be bounded, there exists a finite $\varphi^* \in \mathbb{R}_{\geq 0}$ such that $|\varphi(t)| \leq \varphi^*$ for all $t \in [t^* - \tau_2, t^*]$. Note also that $k$ is bounded by some finite $k^*$; thus
$$\|f\|_{L^1(\mathbb{R}_{\geq 0})} \leq \int_{t^* + \min(\varsigma, \tau_1) - \tau_2}^{t^* + \tau_2} \int_{t^* - \tau_2}^{t^*} k^* \varphi^* \, ds \, dt,$$
and therefore $\|f\|_{L^1(\mathbb{R}_{\geq 0})} \leq k^* \varphi^* \tau_2 \left(2\tau_2 - \min(\varsigma, \tau_1)\right)$.
Fig. 1. State of the system in the nominal case.
Fig. 2. Control signal.
Fig. 3. State of the system in the presence of the disturbance.
Fig. 4. Steady-state behaviour of the state in the presence of the disturbance.
A video with some experiments, reported in[START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF], can be seen at https://www.youtube.com/watch?v=b5NnAV2qeno.
For the definition of a nonoscillatory solution see e.g.[START_REF] Györi | Oscillation Theory of Delay Differential Equations With Applications[END_REF],[START_REF] Agarwal | Nonoscillation Theory of Functional Differential Equations with Applications[END_REF].
For the particular case of (13), the three methods given in [8, p. 50-56] to construct the differential inclusion coincide, see also [START_REF] Polyakov | Stability notions and lyapunov functions for sliding mode control systems[END_REF].
"735471",
"17232",
"20438"
] | [
"525219",
"525219",
"374570",
"120930",
"410272",
"525219",
"525219"
] |
https://inria.hal.science/hal-01763410/file/sttt2018.pdf
Zheng Cheng
email: [email protected]
Massimo Tisi
email: [email protected]
Slicing ATL Model Transformations for Scalable Deductive Verification and Fault Localization
Model-driven engineering (MDE) is increasingly accepted in industry as an effective approach for managing the full life cycle of software development. In MDE, software models are manipulated, evolved and translated by model transformations (MT), up to code generation. Automatic deductive verification techniques have been proposed to guarantee that transformations satisfy correctness requirements (encoded as transformation contracts). However, to be transferable to industry, these techniques need to be scalable and provide the user with easily accessible feedback.
In MT-specific languages like ATL, we are able to infer static trace information (i.e. mappings among types of generated elements and rules that potentially generate these types). In this paper we show that this information can be used to decompose the MT contract and, for each sub-contract, slice the MT to the only rules that may be responsible for fulfilling it. Based on this contribution, we design a fault localization approach for MT, and a technique to significantly enhance scalability when verifying large MTs against a large number of contracts. We implement both these algorithms as extensions of the VeriATL verification system, and we show by experimentation that they increase its industry-readiness.
1 A deductive approach for fault localization in ATL MTs (Online). https://github.com/veriatl/VeriATL/tree/FaultLoc.
2 On scalability of deductive verification for ATL MTs (Online).
Introduction
Model-Driven Engineering (MDE), i.e. software engineering centered on software models, is widely recognized as an effective way to manage the complexity of software development. In MDE, software models are manipulated, evolved and translated by model transformations (MTs), up to code generation. An incorrect MT would generate faulty models, whose effect could be unpredictably propagated into subsequent MDE steps (e.g. code generation) and compromise the reliability of the whole software development process.
Deductive verification emphasizes the use of logic (e.g. Hoare logic [START_REF] Hoare | An axiomatic basis for computer programming[END_REF]) to formally specify and prove program correctness. Due to the advancements in the last couple of decades in the performance of constraint solvers (especially satisfiability modulo theory -SMT), many researchers are interested in developing techniques that can partially or fully automate the deductive verification for the correctness of MTs (we refer the reader to [START_REF] Ab | A survey of approaches for verifying model transformations[END_REF] for an overview).
While industrial MTs are increasing in size and complexity (e.g. automotive industry [START_REF] Selim | Model transformations for migrating legacy models: An industrial case study[END_REF], medical data processing [START_REF] Wagelaar | Using ATL/EMFTVM for import/export of medical data[END_REF], aviation [START_REF] Berry | Synchronous design and verification of critical embedded systems using SCADE and Esterel[END_REF]), existing deductive verification approaches and tools show limitations that hinder their practical application.
Scalability is one of the major limitations. Current deductive verification tools do not provide clear evidence of their efficiency for large-scale MTs with a large number of rules and contracts [START_REF] Ab | A survey of approaches for verifying model transformations[END_REF]. Consequently, users may suffer from unbearably slow response when verification tasks scale. For example, as we show in our evaluation, the verification of a realistic refactoring MT with about 200 rules against 50 invariants takes hours (Section 6). In [START_REF] Briand | Making model-driven verification practical and scalable -experiences and lessons learned[END_REF], the author argues that this lack of scalable techniques is one of the major reasons hampering the usage of verification in industrial MDE.
Another key issue is that, when the verification fails, the output of verification tools is often not easily exploitable for identifying and fixing the fault. In particular, industrial MDE users do not have the necessary background to be able to exploit the verifier feedback. Ideally, one of the most user-friendly solutions would be the introduction of fault localization techniques [START_REF] Roychoudhury | Formulabased software debugging[END_REF][START_REF] Wong | A survey on software fault localization[END_REF], in order to directly point to the part of MT code that is responsible for the fault. Current deductive verification systems for MT have no support for fault localization. Consequently, manually examining the full MT and its contracts, and reasoning on the implicit rule interactions remains a complex and time-consuming routine to debug MTs.
In [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF], we developed the VeriATL verification system to deductively verify the correctness of MTs written in the ATL language [START_REF] Jouault | ATL: A model transformation tool[END_REF], w.r.t. given contracts (in terms of pre-/postconditions). Like several other MT languages, ATL has a relational nature, i.e. its core aspect is a set of so-called matched rules that describe the mappings between the elements in the source and target model. VeriATL automatically translates the axiomatic semantics of a given ATL transformation into the Boogie intermediate verification language [START_REF] Barnett | Boogie: A modular reusable verifier for object-oriented programs[END_REF], combined with a formal encoding of EMF metamodels [START_REF] Steinberg | EMF: Eclipse modeling framework[END_REF] and OCL contracts. The Z3 automatic theorem prover [START_REF] De Moura | Z3: An efficient SMT solver[END_REF] is then used by Boogie to verify the correctness of the ATL transformation. While the usefulness of VeriATL has been shown by experimentation [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF], its original design suffers from the two mentioned limitations, i.e. it does not scale well, and does not provide accessible feedback to identify and fix the fault.
In this article, we argue that the relational nature of ATL can be exploited to address both the identified limitations. Thanks to the relational structure, we are able to deduce static trace information (i.e. inferred information among types of generated target elements and the rules that potentially generate these types) from ATL MTs. Then, we use this information to propose a slicing approach that first decomposes the postcondition of the MT into subgoals, and for each sub-goal, slices out of the MT all the rules that do not impact the subgoal. Specifically, -First, we propose a set of sound natural deduction rules. The set includes 4 rules that are specific to the ATL language (based on the concept of static trace information), and 16 ordinary natural deduction rules for propositional and predicate logic [START_REF] Huth | Logic in Computer Science: Modelling and Reasoning About Systems[END_REF]. Then, we propose an automated proof strategy that applies these deduction rules on the input OCL postcondition to generate sub-goals. Each sub-goal contains a list of newly deduced hypotheses, and aims to prove a sub-case of the input postcondition. -Second, we exploit the hypotheses of each sub-goal to slice the ATL MT into a simpler transformation context that is specific to each sub-goal.
Finally we propose two solutions that apply our MT slicing technique to the tasks of enabling fault localization and enhancing scalability:
-Fault Localization. We apply our natural deduction rules to decompose each unverified postcondition in sub-goals, and generate several verification conditions (VCs), i.e. one for each generated sub-goal and corresponding MT slice. Then, we verify these new VCs, and present the user with the unverified ones. The unverified sub-goals help the user pinpoint the fault in two ways: (a) the failing slice is underlined in the original MT code to help localizing the bug; (b) a set of debugging clues, deduced from the input postcondition are presented to alleviate the cognitive load for dealing with unverified sub-cases. The approach is evaluated by mutation analysis. -Scalability. Before verifying each postcondition, we apply our slicing approach to slice the ATL MT into a simpler transformation context, thereby reducing the verification complexity/time of each postcondition (Section 5.1). We prove the correctness of the approach. Then we design and prove a grouping algorithm, to identify the postconditions that have high probability of sharing proofs when verified in a single verification task (Section 5.2). Our evaluation confirms that our approach improves verification performance up to an order of magnitude (79% in our use case) when the verification tasks of a MT are scaling up (Section 6).
These two solutions are implemented by extending VeriATL. The source code of our implementations and complete artifacts used in our evaluation are publicly available 12 .
This paper extends an article contributed to the FASE 2017 conference [START_REF] Cheng | A deductive approach for fault localization in ATL model transformations[END_REF] by the same authors. While the conference article introduced the fault localization approach, this paper recognizes that the applicability of our slicing approach is more general and can benefit other requirements for industry transfer, such as scalability. Paper organization. Section 2 motivates by example the need for fault localization and scalability in MT verification. Section 3 presents our solution for fault localization in the deductive verification of MTs. Section 4 illustrates in detail the key component of this first application, the deductive decomposition and slicing approach. Section 5 applies this slicing approach to our second task, i.e. enhancing general scalability in deductive MT verification. The practical applicability and performance of our solutions are shown by evaluation in Section 6. Finally, Section 7 compares our work with related research, and Section 8 presents our conclusions and proposed future work.
Fig. 1. The hierarchical and flattened state machine metamodel
Motivating Example
We consider as running case a MT that transforms hierarchical state machine (HSM) models to flattened state machine (FSM) models, namely the HSM2FSM transformation. Both models conform to the same simplified state machine metamodel (Fig. 1). For clarity, classifiers in the two metamodels are distinguished by the HSM and FSM prefix. In detail, a named StateMachine contains a set of labelled Transitions and named AbstractStates. Each AbstractState has a concrete type, which is either RegularState, InitialState or CompositeState. A Transition links a source to a target AbstractState. Moreover, CompositeStates are only allowed in the models of HSM, and optionally contain a set of AbstractStates.
Fig. 2 depicts a HSM model that includes a composite state3 . Fig. 3 demonstrates how the HSM2FSM transformation is expected to flatten it: (a) composite states need to be removed, the initial state within needs to become a regular state, and all the other states need to be preserved; (b) transitions targeting a composite state need to redirect to the initial state of such composite state, transitions outgoing from a composite state need to be duplicated for the states within such composite state, and all the other transitions need to be preserved.
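To make the intended flattening semantics concrete (independently of ATL), the sketch below applies rules (a) and (b) to a small in-memory representation; the data encoding (tuples and dictionaries) and the assumption that each composite has exactly one initial state are choices made here for illustration, not part of the case study.

```python
def flatten(states, transitions, children, initial):
    """states: list of (name, kind), kind in {'regular', 'initial', 'composite'};
    children: composite name -> list of contained state names;
    initial: composite name -> its initial state name;
    transitions: list of (source, target) pairs."""
    composites = {n for n, kind in states if kind == "composite"}
    nested = {s for kids in children.values() for s in kids}
    flat_states = []
    for n, kind in states:
        if kind == "composite":
            continue                              # (a) drop composite states
        new_kind = "regular" if (kind == "initial" and n in nested) else kind
        flat_states.append((n, new_kind))         # (a) nested initials become regular
    flat_transitions = []
    for src, tgt in transitions:
        targets = [initial[tgt]] if tgt in composites else [tgt]   # (b) redirect
        sources = children[src] if src in composites else [src]    # (b) duplicate
        flat_transitions.extend((s, t) for s in sources for t in targets)
    return flat_states, flat_transitions
```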
Specifying OCL contracts. We consider a contract-based development scenario where the developer first specifies correctness conditions for the to-be-developed ATL transformation by using OCL contracts. For example, let us consider the contract shown in Listing 1. The precondition Pre1 specifies that in the input model, each Transition has at least one source. The postcondition Post1 specifies that in the output model, each Transition has at least one source.
While the pre-/post-conditions in Listing 1 are generic well-formedness properties for state machines, the user could specify transformation-specific properties in the same way; for instance, the complete version of this use case also contains further transformation-specific contracts.
Developing the ATL transformation. Then, the developer implements the ATL transformation HSM2FSM (a snippet is shown in Listing 2 4 ). The transformation is defined via a list of ATL matched rules in a mapping style. The first rule maps each StateMachine element to the output model (SM2SM). Then, we have two rules to transform AbstractStates: regular states are preserved (RS2RS), and initial states are transformed into regular states when they are within a composite state (IS2RS). Notice here that initial states are deliberately transformed only partially to demonstrate our problem, i.e. we miss a rule that specifies how to transform initial states when they are not within a composite state. The remaining three rules are responsible for mapping the Transitions of the input state machine.
1 context HSM!Transition inv Pre1:
2   HSM!Transition.allInstances()->forAll(t | not t.source.oclIsUndefined())
3 --------------------------------
4 context FSM!Transition inv Post1:
5   FSM!Transition.allInstances()->forAll(t | not t.source.oclIsUndefined())
Each ATL matched rule has a from section where the source pattern to be matched in the source model is specified. An optional OCL constraint may be added as the guard, and a rule is applicable only if the guard evaluates to true on the source pattern. Each rule also has a to section which specifies the elements to be created in the target model. The rule initializes the attributes/associations of a generated target element via the binding operator (<-). An important feature of ATL is the use of an implicit resolution algorithm during the target property initialization. Here we illustrate the algorithm by an example: 1) considering the binding stateMachine <-rs1.stateMachine in the RS2RS rule (line 13 of Listing 2), its righthand side is evaluated to be a source element of type HSM!StateMachine; 2) the resolution algorithm then resolves such source element to its corresponding target element of type FSM!StateMachine (generated by the SM2SM rule); 3) the resolved result is assigned to the left-hand side of the binding. While not strictly needed for understanding this paper, we refer the reader to [START_REF] Jouault | ATL: A model transformation tool[END_REF] for a full description of the ATL language.
Formally verifying the ATL transformation by VeriATL. The source and target EMF metamodels and OCL contracts combined with the developed ATL transformation form a VC which can be used to verify the correctness of the ATL transformation for all possible inputs, i.e. MM, Pre, Exec Post. The VC semantically means that, assuming the axiomatic semantics of the involved EMF metamodels (MM ) and OCL preconditions (Pre), by executing the developed ATL transformation (Exec), the specified OCL postcondition has to hold (Post).
In previous work, Cheng et al. have developed the VeriATL verification system that allows such VCs to be soundly verified [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF]. Specifically, the VeriATL system describes in Boogie what correctness means for the ATL language in terms of structural VCs. Then, VeriATL delegates the task of interacting with Z3 for proving these VCs to Boogie. In particular, VeriATL encodes: 1) MM using axiomatized Boogie constants to capture the semantics of metamodel classifiers and structural features, 2) Pre and Post using first-order logic Boogie assumption and assertion statements respectively to capture the pre-/post-conditions of MTs, 3) Exec using Boogie procedures to capture the matching and applying semantics of ATL MTs. We refer the reader to our previous work [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF] for the technical description of how to map a VC to its corresponding Boogie program.
Problem 1: Debugging. In our example, VeriATL successfully reports that the OCL postcondition Post1 is not verified by the MT in Listing 2. This means that the transformation does not guarantee that each Transition has at least one source in the output model. Without any capability of fault localization, the developer needs to manually inspect the full transformation and contracts to understand that the transformation is incorrect because of the absence of an ATL rule to transform InitialStates that are not within a CompositeState.
To address problem 1, our aim is to design a fault localization approach that automatically presents users with the information in Listing 3 (described in detail in the following Section 3.1). The output includes: (a) the slice of the MT code containing the bug (that in this case involves only three rules), (b) a set of debugging clues, deduced from the original postcondition (in this case pointing to the fact that T2TC can generate transitions without source). We argue that this information is a valuable help in identifying the cause of the bug.
Problem 2: Scalability. While for illustrative purposes in this paper we consider a very small transformation, it is not difficult to extend it to a realistically sized scenario. For instance we can imagine Listing 2 to be part (up to renaming) of a refactoring transformation for the full UML (e.g. including statecharts, but also class diagrams, sequence diagrams, activity diagrams etc.). Since the UML v2.5 [START_REF]Object Management Group: Unified modeling language (ver. 2.5)[END_REF] metamodel contains 194 concrete classifiers (plus 70 abstract classifiers), even the basic task of simply copying all the elements not involved in the refactoring of Listing 2 would require at least 194 rules. Such a large transformation would need to be verified against the full set of UML invariants, which describe the well-formedness of UML artifacts according to the specification 5 . While standard VeriATL is successfully used for contract-based development of smaller transformations [START_REF] Cheng | A deductive approach for fault localization in ATL model transformations[END_REF], in our experimentation we show that it needs hours to verify a refactoring on the full UML against 50 invariants.
To address problem 2, we design a scalable verification approach aiming at 1) reducing the verification complexity/time of each postcondition (Section 5.1) and 2) grouping postconditions that have a high probability of sharing proofs when verified in a single verification task (Section 5.2). Thanks to these techniques, the verification time of our UML refactoring use case is reduced by about 79%.
5 OCL invariants for UML. http://bit.ly/UMLContracts
Listing 3 (excerpt): 1 context HSM!Transition inv Pre1: ...
Fault localization in the running case
We propose a fault localization approach that, in our running example, presents the user with two problematic transformation scenarios. One of them is shown in Listing 3. The scenario consists of the input preconditions (abbreviated at line 1), a slice of the transformation (abbreviated at lines 3 -5), and a sub-goal derived from the input postcondition. The subgoal contains a list of hypotheses (lines 7 -12) with a conclusion (line 13).
The scenario in Listing 3 contains the following information, that we believe to be valuable in identifying and fixing the fault:
-Transformation slice. The only relevant rules for the fault captured by this problematic transformation scenario are RS2RS, IS2RS and T2TC (lines 3 -5). They can be directly highlighted in the source code editor. -Debugging clues. The error occurs when a transition t0 is generated by the rule T2TC (lines 8 -10), and when the source state of the transition is not generated (line 11). In addition, the absence of the source for t0 is due to the fact that none of the RS2RS and IS2RS rules is invoked to generate it (line 12).
From this information, the user could find a counter-example in the source models that falsifies Post1 (shown in the top of Fig. 4): a transition t_c between an initial state i_c (which is not within a composite state) and a composite state c_c, where c_c contains another initial state. This counter-example matches the source pattern of the T2TC rule (as shown in the bottom of Fig. 4). However, when the T2TC rule tries to initialize the source of the generated transition t2 (line 41 in Listing 2), i_c cannot be resolved because there is no rule to match it. In this case, i_c (of type HSM!InitialState) is directly used to initialize the source of t2 (t2.source is expected to be a sub-type of FSM!AbstractState). This causes a type-mismatch exception, thus falsifying Post1. The other problematic transformation scenario pinpoints the same fault, showing that Post1 is also not verified when t0 is generated by T2TA.
In the next sections, we describe in detail how we automatically generate problematic transformation scenarios like the one shown in Listing 3.
Solution overview
The flowchart in Fig. 5 shows a bird's eye view of our approach to enable fault localization for VeriATL. The process takes the involved metamodels, all the OCL preconditions, the ATL transformation and one of the OCL postconditions as inputs. We require all inputs to be syntactically correct. If VeriATL successfully verifies the input ATL transformation, we directly report a confirmation message to indicate its correctness (w.r.t. the given postcondition) and the process ends. Otherwise, we generate a set of problematic transformation scenarios, and a proof tree to the transformation developer.
To generate problematic transformation scenarios, we first perform a systematic approach to generate sub-goals for the input OCL postcondition. Our approach is based on a set of sound natural deduction rules (Section 4.1). The set contains 16 rules for propositional and predicate logic (such as introduction/elimination rules for ∧ and ∨ [START_REF] Huth | Logic in Computer Science: Modelling and Reasoning About Systems[END_REF]), but also 4 rules specifically designed for ATL expressions (e.g. rewriting single-valued navigation expression).
Then, we design an automated proof strategy that applies these natural deduction rules on the input OCL postcondition (Section 4.2). Executing our proof strategy generates a proof tree. The non-leaf nodes are intermediate results of deduction rule applications. The leaves of the tree are the sub-goals to prove. Each sub-goal consists of a list of hypotheses and a conclusion to be verified. The aim of our automated proof strategy is to simplify the original postcondition as much as possible to obtain a set of sub-conclusions to prove. As a byproduct, we also deduce new hypotheses from the input postcondition and the transformation, as debugging clues.
Next, we use the trace information in the hypotheses of each sub-goal to slice the input MT into simpler transformation contexts (Section 4.3). We then form a new VC for each subgoal consisting of the semantics of metamodels, input OCL preconditions, sliced transformation context, its hypotheses and its conclusion.
We send these new VCs to the VeriATL verification system to check. Notice that successfully proving these new VCs implies the satisfaction of the input OCL postcondition. If any of these new VCs is not verified by VeriATL, the input OCL preconditions, the corresponding sliced transformation context, hypotheses and conclusion of the VC are presented to the user as a problematic transformation scenario for fault localization. The VCs that were automatically proved by VeriATL are pruned away, and are not presented to the transformation developer. This deductive verification step by VeriATL makes the whole process practical, since the user is presented with a limited number of meaningful scenarios. Then, the transformation developer consults the generated problematic transformation scenarios and the proof tree to debug the ATL transformation. If modifications are made to the inputs to fix the bug, the generation of sub-goals needs to start over. The whole process keeps iterating until the input ATL transformation is correct w.r.t. the input OCL postcondition.
A Deductive Approach to Transformation Slicing
The key step in the solution for fault localization that we described in the previous section is a general technique for: 1) decomposing the postcondition into sub-goals by applying MT-specific natural deduction rules, and 2) for each sub-goal, slicing the MT down to only the rules that may be responsible for fulfilling that sub-goal.
In this section we describe this algorithm in detail, and in the next section we show that its usefulness goes beyond fault localization, by applying it for enhancing the general scalability of VeriATL.
Natural Deduction Rules for ATL
Our approach relies on 20 natural deduction rules (7 introduction rules and 13 elimination rules). The 4 elimination rules (abbreviated by X_e) that specifically involve ATL are shown in Fig. 6. The other rules are common natural deduction rules for propositional and predicate logic [START_REF] Huth | Logic in Computer Science: Modelling and Reasoning About Systems[END_REF]. Regarding the notations in our natural deduction rules:
-Each rule has a list of hypotheses and a conclusion, separated by a line. We use standard notation for typing (:) and set operations.
- Some special notations in the rules are T for a type, MM_T for the target metamodel, R_n for a rule n in the input ATL transformation, x.a for a navigation expression, and i for a fresh variable / model element.
In addition, we introduce the following auxiliary functions: cl returns the classifier types of the given metamodel, trace returns the ATL rules that generate the input type (i.e. the static trace information) 6 , genBy(i,R) is a predicate to indicate that a model element i is generated by the rule R, unDef(i) abbreviates i.oclIsUndefined(), and All(T) abbreviates T.allInstances().
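On the running example of Listing 2 these functions instantiate, for instance, as follows (derived from the to sections of the rules): trace(FSM!RegularState) = {RS2RS, IS2RS}, since both rules create FSM!RegularState elements; trace(FSM!Transition) = {T2TA, T2TB, T2TC}, since the three transition rules create FSM!Transition elements; and trace(FSM!StateMachine) = {SM2SM}.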
Some explanation is in order for the natural deduction rules that are specific to ATL:
- First, we have two type elimination rules (TP_e1, TP_e2). TP_e1 states that every single-valued navigation expression of a type T in the target metamodel is either a member of all generated instances of type T or undefined. TP_e2 states that the cardinality of every multi-valued navigation expression of a type T in the target metamodel is either greater than zero (and every element i in the multi-valued navigation expression is either a member of all generated instances of type T or undefined), or equal to zero. The set of natural deduction rules is sound, as we show in the rest of this section. However, it is not complete, and we expect to extend it in future work. As detailed in Section 6.3, when the bug affects a postcondition that we don't support because of this incompleteness, we report to the user our inability to perform fault localization for that postcondition.
Fig. 6. Natural deduction rules specific to ATL:
- TP_e1: from x.a : T and T ∈ cl(MM_T), conclude x.a ∈ All(T) ∨ unDef(x.a).
- TP_e2: from x.a : Seq T and T ∈ cl(MM_T), conclude (|x.a| > 0 ∧ ∀i • (i ∈ x.a ⇒ i ∈ All(T) ∨ unDef(i))) ∨ |x.a| = 0.
- TR_e1: from T ∈ cl(MM_T), trace(T) = {R_1, ..., R_n} and i ∈ All(T), conclude genBy(i, R_1) ∨ ... ∨ genBy(i, R_n).
- TR_e2: from T ∈ cl(MM_T), trace(T) = {R_1, ..., R_n}, i : T and unDef(i), conclude ¬(genBy(i, R_1) ∨ ... ∨ genBy(i, R_n)).
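For example, when decomposing Post1 of Listing 1, TR_e1 instantiated with trace(FSM!Transition) = {T2TA, T2TB, T2TC} turns a hypothesis t0 ∈ All(FSM!Transition) into the case split genBy(t0, T2TA) ∨ genBy(t0, T2TB) ∨ genBy(t0, T2TC), while TP_e1 applied to the navigation expression t0.source (of type FSM!AbstractState) yields t0.source ∈ All(FSM!AbstractState) ∨ unDef(t0.source); these are precisely the case analyses behind the hypotheses of Listing 3.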
Soundness of natural deduction rules. The soundness of our natural deduction rules is based on the operational semantics of the ATL language. The soundness of the type elimination rules TP_e1 and TP_e2 is straightforward: we prove it by enumerating the possible states of initialized navigation expressions for target elements. Assuming that the state of a navigation expression x.a is initialized in the form x.a <- exp, where x.a is of a non-primitive type T:
- If exp is not a collection type and cannot be resolved (i.e. exp cannot match the source pattern of any ATL rule), then x.a is undefined 7 .
- If exp is not a collection type and can be resolved, then the generated target element of the ATL rule that matches exp is assigned to x.a. Consequently, x.a could be either a member of All(T) (when the resolution result is of type T) or undefined (when it is not).
- If exp is of a collection type, then all of the elements in exp are resolved individually, the resolved results are put together into a pre-allocated collection col, and col is assigned to x.a.
The first two cases explain the two possible states of every single-valued navigation expression (TP_e1). The third case explains the two possible states of every multi-valued navigation expression (TP_e2). The soundness of the trace elimination rule TR_e1 is based on the surjectivity between each ATL rule and the type of its created target elements [START_REF] Büttner | On verifying ATL transformations using 'off-the-shelf' SMT solvers[END_REF]: elements of target-metamodel types exist only if they have been created by an ATL rule, since standard ATL transformations are always executed on an initially empty target model. When a type can be generated by executing more than one rule, a disjunction considering all these possibilities is made for every generated element of this type.
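To relate these cases to the running example of Listing 2: the binding source <- src in rule T2TC is single-valued, so if src is matched by RS2RS or IS2RS, the FSM!RegularState they generate is assigned to t2.source (second case); if src matches no rule, as for the initial state i_c outside any composite state in Fig. 4, resolution fails and t2.source ends up undefined (first case).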
About the soundness of the TR_e2 rule, we observe that if a target element of type T is undefined, then clearly it does not belong to All(T). In addition, the operational semantics of the ATL language specifies that if a rule R is specified to generate elements of type T, then every target element of type T generated by that rule belongs to All(T) (i.e. R ∈ trace(T) ⇒ ∀i • (genBy(i, R) ⇒ i ∈ All(T))) [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF]. Thus, TR_e2 is sound as a logical consequence of the operational semantics of the ATL language (i.e. R ∈ trace(T) ⇒ ∀i • (i ∉ All(T) ⇒ ¬genBy(i, R))).
Automated Proof Strategy
A proof strategy is a sequence of proof steps. Each step defines the consequences of applying a natural deduction rule on a proof tree. A proof tree consists of a set of nodes. Each node is composed of a set of OCL expressions as hypotheses, an OCL expression as the conclusion, and another node as its parent node.
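For illustration, such a node could be represented by a small data class like the following sketch; the class and field names are ours and not necessarily those of the VeriATL implementation.

import java.util.List;

// Illustrative sketch of a proof-tree node: OCL hypotheses, one OCL
// conclusion, and a link to the parent node (null for the root).
final class ProofNode {
    final List<String> hypotheses; // OCL expressions, kept as text here
    final String conclusion;       // the OCL expression to prove
    final ProofNode parent;        // null for the root node

    ProofNode(List<String> hypotheses, String conclusion, ProofNode parent) {
        this.hypotheses = hypotheses;
        this.conclusion = conclusion;
        this.parent = parent;
    }

    boolean isRoot() { return parent == null; }
}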
Next, we illustrate a proof strategy (Algorithm 1) that automatically applies our natural deduction rules on the input OCL postcondition. The goal is to automate the derivation of information from the postcondition as hypotheses, and simplify the postcondition as much as possible.
Algorithm 1 An automated proof strategy for VeriATL
1: Tree ← {createNode({}, Post, null)}
2: do
3:   leafs ← size(getLeafs(Tree))
4:   for each node leaf ∈ getLeafs(Tree) do
5:     Tree ← intro(leaf) ∪ Tree
6:   end for
7: while leafs ≠ size(getLeafs(Tree))
8: do
9:   leafs ← size(getLeafs(Tree))
10:  for each node leaf ∈ getLeafs(Tree) do
11:    Tree ← elimin(leaf) ∪ Tree
12:  end for
13: while leafs ≠ size(getLeafs(Tree))
Our proof strategy takes one argument, which is one of the input postconditions. It first initializes the proof tree by constructing a root node with the input postcondition as conclusion, no hypotheses and no parent node (line 1). Next, the strategy executes two sequences of proof steps. The first sequence applies the introduction rules on the leaf nodes of the proof tree to generate new leaves (lines 2 - 7); it terminates when no new leaves are yielded (line 7). The second sequence applies the elimination rules on the leaf nodes of the proof tree (lines 8 - 13). We only apply type elimination rules on a leaf when: (a) a free variable occurs in its hypotheses, and (b) a navigation expression of that free variable is referred to by its hypotheses. Furthermore, to ensure termination, if applying a rule on a node does not yield new descendants (i.e. nodes whose hypotheses or conclusion differ from their parent's), we do not attach new nodes to the proof tree.
Transformation Slicing
Executing our proof strategy generates a proof tree. The leaves of the tree are the sub-goals to be proved by VeriATL. Next, we use the rules referred to by the genBy predicates in the hypotheses of each sub-goal to slice the input MT into a simpler transformation context. We then form a new VC for each sub-goal consisting of the axiomatic semantics of the metamodels, the input OCL preconditions, the sliced transformation context (Exec_sliced), its hypotheses and its conclusion, i.e. MM, Pre, Exec_sliced, Hypotheses ⊢ Conclusion.
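For instance, for the sub-goal of Listing 3 the genBy predicates in its hypotheses refer only to the rules RS2RS, IS2RS and T2TC, so the sliced context keeps just these three rules: the VC for this sub-goal is MM, Pre, Exec_sliced, Hypotheses ⊢ Conclusion with Exec_sliced covering only RS2RS, IS2RS and T2TC.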
If any of these new VCs is not verified by VeriATL, the input OCL preconditions, the corresponding sliced transformation context, hypotheses and conclusion of the VC are constructed as a problematic transformation scenario to report back to the user for fault localization (as shown in Listing 3).
Correctness. The correctness of our transformation slicing is based on the concept of rule irrelevance (Theorem 1). That is, the axiomatic semantics of the rule(s) being sliced away (Exec_irrelevant) has no effect on the verification outcome of the sub-goal.
Theorem 1 (Rule Irrelevance - Sub-goals). MM, Pre, Exec_sliced, Hypotheses ⊢ Conclusion ⇐⇒ MM, Pre, Exec_sliced∪irrelevant, Hypotheses ⊢ Conclusion, where Exec_sliced∪irrelevant ⇐⇒ Exec_sliced ∧ Exec_irrelevant.
Proof. Each ATL rule is exclusively responsible for the generation of its output elements (i.e. no aliasing) [START_REF] Hidaka | On additivity in transformation languages[END_REF][START_REF] Tisi | Parallel execution of ATL transformation rules[END_REF]. Hence, when a sub-goal specifies a condition that a set of target elements should satisfy, the rules that do not generate these elements have no effect on the verification outcome of the sub-goal. These rules can hence be safely sliced away.
Scalability by Transformation Slicing
Being able to decompose contracts and slice the transformation, as described in the previous section, can also be exploited internally to enhance the scalability of the verification process.
Typically, verification tools like VeriATL will first formulate VCs to pass to the theorem prover. Then, they may try to enhance performance by decomposing and/or composing these VCs:
- VCs can be decomposed, creating smaller VCs that may be more manageable for the theorem prover. For instance, Leino et al. introduce a VC optimization in Boogie (hereby referred to as VC splitting) to automatically split VCs based on the control-flow information of programs [START_REF] Leino | Verification condition splitting[END_REF]. The idea is to align each postcondition to its corresponding path(s) in the control flow, then to form smaller VCs to be verified in parallel.
- VCs can be composed, e.g. by constructing a single VC to prove the conjunction of all postconditions (hereby referred to as VC conjunction). This has the benefit of enabling the sharing of parts of the proofs of different postconditions (i.e. the theorem prover might discover that lemmas for proving a conjunct are also useful for proving other terms).
However, domain-agnostic composition or decomposition does not provide significant speedups in our running case. For instance, the Boogie-level VC splitting has no measurable effect. Once the transformation is translated into an imperative Boogie program, transformation rules, even if independent from each other, become part of a single path in the control flow [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF]. Hence, each postcondition is always aligned to the whole set of transformation rules. We argue that a similar behavior would also have been observed if the transformation had been directly developed in an imperative language (Boogie or a general-purpose language): a domain-agnostic VC optimization does not have enough information to identify the independent computation units within the transformation (i.e. the rules).
In what follows, we propose a two-step method to construct more efficient VCs for verifying large MTs. In the first step, we apply our MT-specific slicing technique (Section 4) on top of the Boogie-level VC splitting (Section 5.1): thanks to the abstraction level of the ATL language, we can align each postcondition to the ATL rules it depends on, thereby greatly reducing the size of each constructed VC. In the second step, we propose an ATL-specific algorithm to decide when to conjoin or split VCs (Section 5.2), improving on domain-agnostic VC conjunction.
Applying the Slicing Approach
Our first ATL-level optimization aims to verify each postcondition only against the rules that may impact it (instead of verifying it against the full MT), thus reducing the burden on the SMT solver. This is achieved by a transformation slicing approach for postconditions: first applying the decomposition into sub-goals and the slicing technique from Section 4, and then merging the slices of the generated sub-goals. The MT rules that lie outside the union are sliced away, and the VC for each postcondition becomes: MM, Pre, Exec_slice ⊢ Post, where Exec_slice stands for the axiomatic semantics of the sliced transformation, and the sliced transformation is the union of the rules that affect the sub-goals of each postcondition.
Correctness. We first define a complete application of the automated proof strategy in Definition 1.
Definition 1 (Complete application of the automated proof strategy). The automated proof strategy is completely applied to a postcondition if it correctly identifies every element of the target types referred to by each sub-goal and every rule that may generate them.
Clearly, if not detected, an incomplete application of our automated proof strategy could cause our transformation slicing to erroneously slice away rules that a postcondition might depend on, and invalidate our slicing approach to verify postconditions. We will discuss how we currently handle and can improve the completeness of the automated proof strategy in Section 6.3. One of the keys to handling incomplete cases is that we defensively construct the slice to be the full MT. Thus, the VCs of the incomplete cases become MM, Pre, Exec ⊢ Post. This key point is used to establish the correctness of our slicing approach to verify postconditions (Theorem 2).
Theorem 2 (Rule Irrelevance - Postconditions). MM, Pre, Exec_sliced ⊢ Post ⇐⇒ MM, Pre, Exec_sliced∪irrelevant ⊢ Post
Proof. We prove this theorem by a case analysis on whether the application of our automated proof strategy is complete:
- Assume our automated proof strategy is completely applied. First, because of the soundness of our natural deduction rules, the generated sub-goals are a sound abstraction of their corresponding original postcondition. Second, based on the assumption that our automated proof strategy is completely applied, we can ensure that the union of the static trace information of the sub-goals of a postcondition contains all the rules that might affect the verification result of that postcondition. Based on these two points, we can conclude that slicing away its irrelevant rules has no effect on the verification outcome of a postcondition, following the same proof strategy as in Theorem 1.
- Assume our automated proof strategy is not completely applied. In this case, we defensively use the full transformation as the slice, so our theorem becomes MM, Pre, Exec ⊢ Post ⇐⇒ MM, Pre, Exec ⊢ Post, which is trivially proved.
Listing 4 (excerpt): 1 context HSM!Transition inv Pre1: ...
For example, Listing 4 shows the constructed VC for Post1 of Listing 1 by using our program slicing technique. It concisely aligns Post1 to 4 responsible rules in the UML refactoring transformation. Note that the same slice is obtained when the rules in Listing 2 are a part of a full UML refactoring. Its verification in our experimental setup (Section 6) requires less than 15 seconds, whereas verifying the same postcondition on the full transformation would exceed the 180 s timeout.
Grouping VCs for Proof Sharing
After transformation slicing, we obtain a simpler VC for each postcondition. Now we aim to group the VCs obtained from the previous step in order to further improve performance. In particular, by detecting VCs that are related and grouping them in a conjunction, we encourage the underlying SMT solver to reuse sub-proofs while solving them. We propose a heuristic to identify the postconditions that should be compositionally verified, by leveraging again the results from our deductive slicing approach.
In our context, grouping two VCs A and B means that MM, Pre, Exec_A∪B ⊢ Post_A ∧ Post_B. That is, taking into account the axiomatic semantics of the metamodels, the preconditions, and the rules impacting A or B, the VC proves the conjunction of postconditions A and B.
It is difficult to precisely identify the cases in which grouping two VCs will improve efficiency. Our main idea is to prioritize groups that have a high probability of sharing sub-proofs. Conservatively, we also want to avoid grouping an already complex VC with any other one, but this requires being able to estimate verification complexity. Moreover, we want to base our algorithm exclusively on static information from the VCs, because obtaining dynamic information is usually expensive in large-scale MT settings.
We propose an algorithm based on two properties that are obtained by applying the natural deduction rules of our slicing approach: the number of static traces and the number of sub-goals of each postcondition. Intuitively, each of the two properties is representative of a different cause of complexity: 1) when a postcondition is associated with a large number of static traces, its verification is challenging because it needs to consider a large part of the transformation, i.e. a large set of semantic axioms generated in Boogie by VeriATL; 2) a postcondition that results in a large number of sub-goals indicates a large number of combinations that the theorem prover will have to consider in a case analysis step.
We present our grouping approach in Algorithm 2. Its inputs are a set of postconditions P and two other parameters: the maximum number of traces per group (MAX_t) and the maximum number of sub-goals per group (MAX_s). The result is a set of VC groups (G).
The algorithm starts by sorting the input postconditions according to their trace set size (in ascending order). Then, for each postcondition p, it tries to pick from G the candidate groups (C) that may be grouped with p (lines 5 to 10). A group is considered a candidate to host the given postcondition if the inclusion of the postcondition in the candidate group (trail) does not yield a group whose traces and sub-goals exceed MAX_t and MAX_s.
If there are no candidate groups to host the given postcondition, a new group is created (lines 11 to 12). Otherwise, we rank the suitability of the candidate groups to host the postcondition by using the auxiliary function rank (lines 13 to 15). A group A has a higher rank than another group B to host a given postcondition p if grouping A and p yields a smaller trace set than grouping B and p. When two groups have the same ranking in terms of traces, we give a higher rank to the group that yields the smaller total number of sub-goals when including the input postcondition.
This ranking is a key aspect of the grouping approach: (a) postconditions with overlapping trace sets are prioritized (since the union of their trace sets will be smaller). This raises the probability of proof sharing, since overlapping trace sets indicate that the proofs of the two postconditions have to consider the logic of some common transformation rules. (b) postconditions with shared sub-goals are prioritized (since the union of the total number of sub-goals will be smaller). This also raises the probability of proof sharing, since the same sub-goals do not need to be analyzed again during case analysis.
Finally, after each postcondition has found a group in G that can host it, we generate VCs for each group in G and return them. Note that the verification of a group of VCs yields a single result for the group. If the user wants to know exactly which postconditions have failed, they will need to verify the postconditions in the failed group separately.
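For concreteness, the candidate selection and ranking described above can be sketched in Java as follows; the class names, fields and helper structure are illustrative only and do not mirror the actual implementation.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of the grouping heuristic (Algorithm 2).
// Each postcondition carries the static information derived by the
// slicing approach: its trace set and its number of sub-goals.
final class Post {
    final String name;
    final Set<String> traces;   // rules the postcondition depends on
    final int subGoals;         // number of sub-goals generated for it
    Post(String name, Set<String> traces, int subGoals) {
        this.name = name; this.traces = traces; this.subGoals = subGoals;
    }
}

final class Group {
    final List<Post> members = new ArrayList<>();
    final Set<String> traces = new HashSet<>();
    int subGoals = 0;
    void add(Post p) { members.add(p); traces.addAll(p.traces); subGoals += p.subGoals; }
}

final class Grouping {
    static List<Group> group(List<Post> posts, int maxTraces, int maxSubGoals) {
        // sort postconditions by ascending trace-set size
        posts.sort(Comparator.comparingInt((Post p) -> p.traces.size()));
        List<Group> groups = new ArrayList<>();
        for (Post p : posts) {
            Group best = null;
            int bestTraces = Integer.MAX_VALUE;
            int bestSubGoals = Integer.MAX_VALUE;
            for (Group g : groups) {
                // trial inclusion of p in g
                Set<String> union = new HashSet<>(g.traces);
                union.addAll(p.traces);
                int subGoals = g.subGoals + p.subGoals;
                // g is a candidate only if the trial group stays within both bounds
                if (union.size() >= maxTraces || subGoals >= maxSubGoals) continue;
                // rank: smaller merged trace set first, then fewer total sub-goals
                if (union.size() < bestTraces
                        || (union.size() == bestTraces && subGoals < bestSubGoals)) {
                    best = g; bestTraces = union.size(); bestSubGoals = subGoals;
                }
            }
            if (best == null) { best = new Group(); groups.add(best); }
            best.add(p);
        }
        return groups; // one conjoined VC is then generated per group
    }
}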
Correctness. The correctness of our grouping algorithm is shown by its soundness as stated in Theorem 3.
Theorem 3 (Soundness of Grouping). MM, Pre, Exec_A∪B ⊢ Post_A ∧ Post_B =⇒ (MM, Pre, Exec_A ⊢ Post_A) ∧ (MM, Pre, Exec_B ⊢ Post_B)
Proof. Following the consequences of logical conjunction and Theorem 2.
Evaluation
In this section, we first evaluate the practical applicability of our fault localization approach (Section 6.1), then we assess the scalability of our performance optimizations (Section 6.2). Last, we conclude this section with a discussion of the obtained results and lessons learned (Section 6.3).
Our evaluation uses the VeriATL verification system [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF], which is based on the Boogie verifier (version 2.3) and Z3 (version 4.5). The evaluation is performed on an Intel 3 GHz machine with 16 GB of memory running the Windows operating system. VeriATL encodes the axiomatic semantics of the ATL language (version 3.7). The automated proof strategy and its corresponding natural deduction rules are currently implemented in Java. We configure Boogie with the following arguments for fine-grained performance metrics: timeout:180 (using a verification timeout of 180 seconds) -traceTimes (using the internal Boogie API to calculate verification time).
Fault Localization Evaluation
Before diving into the details of evaluation results and analysis, we first formulate our research questions and describe the evaluation setup.
Research questions
We formulate two research questions to evaluate our fault localization approach: (RQ1) Can our approach correctly pinpoint the faults in the given MT? (RQ2) Can our approach efficiently pinpoint the faults in the given MT?
Evaluation Setup
To answer our research questions, we use the HSM2FSM transformation as our case study, and apply mutation analysis [START_REF] Jia | An analysis and survey of the development of mutation testing[END_REF] to systematically inject faults. In particular, we specify 14 preconditions and 5 postconditions on the original HSM transformation from [START_REF] Büttner | On verifying ATL transformations using 'off-the-shelf' SMT solvers[END_REF]. Then, we inject faults by applying a list of mutation operators defined in [START_REF] Burgueño | Static fault localization in model transformations[END_REF] on the transformation. We apply mutations only to the transformation because we focus on contract-based development, where the contract guides the development of the transformation. Our mutants are proved against the specified postconditions, and we apply our fault localization approach in case of unverified postconditions. We kindly refer to our online repository for the complete artifacts used in our evaluation 9 .
Evaluation Results
Table 1 summarizes the evaluation results for our fault localization approach on the chosen case study. The first column lists the identity of the mutants 10 . The second and third columns record the unverified OCL postconditions and their corresponding verification time.
The fourth, fifth, sixth and seventh columns record information about the verification of sub-goals, i.e. the number of unverified sub-goals / total number of sub-goals (4th), the average verification time of sub-goals (5th), the maximum verification time among sub-goals (6th), and the total verification time of sub-goals (7th), respectively. The last column records whether the faulty lines (L_faulty, i.e. the lines that the mutation operators operated on) are presented in the problematic transformation scenarios (PTS) of the unverified sub-goals.
9 A deductive approach for fault localization in ATL MTs (Online). https://github.com/veriatl/VeriATL/tree/FaultLoc
10 The naming convention for mutants is: mutation operator Add(A) / Del(D) / Modify(M), followed by the mutation operand Rule(R) / Filter(F) / TargetElement(T) / Binding(B), followed by the position of the operand in the original transformation. For example, MB1 stands for the mutant which modifies the binding in the first rule.
First, we confirm that there are no inconclusive verification results among the generated sub-goals, i.e. if VeriATL reports that the verification result of a sub-goal is unverified, then it presents a fault in the transformation. Our confirmation is based on the manual inspection of each unverified sub-goal to see whether there is a counter-example that falsifies the sub-goal. This supports the correctness of our fault localization approach. We find that the deduced hypotheses of the sub-goals are useful for the elaboration of a counter-example (e.g. when they imply that the fault is caused by missing code, as in the case of Listing 3).
Second, as we inject faults by mutation, identifying whether the faulty line is presented in the problematic transformation scenarios of unverified sub-goals is also a strong indication of the correctness of our approach. As shown by the last column, all cases satisfy the faulty-line inclusion criterion. 3 out of 10 cases are special cases (dashed cells) where the faulty lines are deleted by the mutation operator (thus there are no faulty lines). In the case of MF6#2, there are no problematic transformation scenarios generated since all the sub-goals are verified. By inspection, we report that our approach improves the completeness of VeriATL. That is, the postcondition (#2) is correct under MF6 but cannot be verified by VeriATL, whereas all its generated sub-goals are verified.
Third, as shown by the fourth column, in 5 out of 10 cases the developer is presented with at most one problematic transformation scenario to pinpoint the fault. This positively supports the efficiency of our approach. The other 5 cases produce more sub-goals to examine. However, we find that in these cases each unverified sub-goal gives a unique phenomenon of the fault, which we believe is valuable for fixing the bug. We also report that in rare cases more than one sub-goal could point to the same phenomenon of the fault. This is because the hypotheses of these sub-goals contain a semantically equivalent set of genBy predicates. Although they are easy to identify, we would like to investigate how to systematically filter these cases out in the future.
Fourth, from the third and fifth columns, we can see that each of the sub-goals is faster to verify than its corresponding postcondition by a factor of about 2. This is because we send a simpler task to the verifier than the input postcondition: e.g. because of our transformation slicing, the VC for each sub-goal encodes a simpler interaction of transformation rules compared to the VC for its corresponding postcondition. From the third and sixth columns, we can further report that all sub-goals are verified in less time than their corresponding postcondition.
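For example, for mutant MB6 in Table 1, the unverified postcondition #4 takes 3239 ms to verify, while its 12 sub-goals take on average 1764 ms each (2550 ms at most), i.e. each individual sub-goal is verified roughly twice as fast as the original postcondition.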
Scalability Evaluation
To evaluate the two steps we proposed for scalable MT verification, we first describe our research questions and the evaluation setup. Then, we detail the results of our evaluation.
Research questions
We formulate two research questions to evaluate the scalability of our verification approach:
(RQ1) Can a MT-specific slicing approach significantly increase verification efficiency w.r.t. domain-agnostic Boogie-level optimization when a MT is scaling up? (RQ2) Can our proposed grouping algorithm improve over the slicing approach for large-scale MT verification?
Evaluation Setup
To answer our research questions, we first focus on verifying a perfect UML copier transformation w.r.t. the full set of 50 invariants (naturally we expect the copier to satisfy all the invariants). These invariants specify the well-formedness of UML constructs, similar to the ones defined in Listing 1. We implement the copier as an ATL MT that copies each classifier of the source metamodel into the target and preserves their structural features (i.e. 194 copy rules). Note that while the copier MT has little usefulness in practice, it shares a clear structural similarity with real-world refactoring transformations. Hence, in terms of scalability analysis for deductive verification, we consider it to be a representative example for the class of refactoring transformations. We support this statement in Section 6.3, where we discuss the generalizability of our scalable approach by extending the experimentation to a set of real-world refactoring transformations.
Our evaluation consists of two settings, one for each research question. In the first setting, we investigate RQ1 by simulating a monotonically growing verification problem. We first sort the set of postconditions according to their verification time (obtained by verifying each postcondition separately before the experimentation). Then we construct an initial problem by taking the first (simplest) postcondition and the set of rules (extracted from the UML copier) that copy all the elements affecting the postcondition. Then we expand the problem by adding the next simplest postcondition and its corresponding rules, arriving after 50 steps at the full set of postconditions and the full UML copier transformation.
At each of the 50 steps, we evaluate the performance of 2 verification approaches:
- ORG_b. The original VeriATL verification system: each postcondition is separately verified using Boogie-level VC splitting.
- SLICE. Our MT slicing technique applied on top of the ORG_b approach: each postcondition is separately verified over the transformation slice impacting that specific postcondition (as described in Section 5.1).
Furthermore, we also applied our SLICE approach to a set of real-world transformations, to assess to which extent the previous results on the UML copier transformation are generalizable: we replaced the UML copier transformation in the previous experiment with 12 UML refactoring transformations from the ATL transformations zoo 11 , and verified them against the same 50 OCL invariants. When the original UML refactorings contain currently non-supported constructs (please refer to language support in Section 6.3 for details), we use our result on rule irrelevance (Theorem 2) to determine whether each invariant would produce the same VCs when applied to the copier transformation and to the refactorings. If not, we automatically issue a timeout verification result for such an invariant on the refactoring under study, which corresponds to the worst-case situation for our approach. By doing so, we ensure the fairness of the performance analysis over the whole corpus.
For answering RQ2, we focus on the verification problem for the UML copier transformation, and compare two verification approaches, i.e. SLICE and GROUP, where GROUP applies the grouping algorithm on top of SLICE (as described in Section 5.2). In particular, we vary the pair of arguments MAX_t and MAX_s (i.e. maximum traces and sub-goals per group) to investigate their correlation with the algorithm's performance.
Our scalability evaluation is performed on an Intel 3 GHz machine with 16 GB of memory running the Linux operating system. We refer to our online repository for the complete artifacts used in our evaluation 12
Evaluation Result
The two charts in Fig. 7 summarize the evaluation results of the first setting. In Fig. 7-(a) we record for each step the longest time taken for verifying a single postcondition at that step. In Fig. 7-(b) we record the total time taken to verify all the postconditions for each step. The two figures bear the same format. Their x-axis shows each of the steps in the first setting and the y-axis is the recorded time (in seconds) to verify each step by using the ORG b and SLICE approaches. The grey horizontal line in Fig. 7-(a) shows the verifier timeout (180s).
We learn from Fig. 7-(a) that the SLICE approach is more resilient to the increasing complexity of the problem than the ORG b approach. The figure shows that already at the 18th step the ORG b approach is not able to verify the most complex postcondition (the highest verification time reaches the timeout). The SLICE technique is able to verify all postconditions in much bigger problems, and only at the 46th step one VC exceeds the timeout.
Moreover, the results in Fig. 7-(b) support a positive answer to RQ1. The SLICE approach consistently verifies postconditions more efficiently than the ORG_b approach. In our scenario the difference is significant. Up to step 18, both approaches verify within the timeout, but the verification time for ORG_b shows exponential growth while SLICE is quasi-linear. At the 18th step, SLICE takes 11.8% of the time of ORG_b for the same verification result (171 s against 1445 s). For the rest of the experimentation ORG_b hits the timeout for most postconditions, while SLICE loses linearity only when the most complex postconditions are taken into account (step 30).
In our opinion, the major reason for the different shapes in Fig. 7 is that the ORG_b approach always aligns postconditions to the whole set of transformation rules, whereas the SLICE approach aligns each postcondition only to the ATL rules it depends on, thereby greatly reducing the size of each constructed VC.
Table 2 shows to which extent the previous results on the UML copier transformation are generalizable to other MTs. For each transformation the table shows the verification time (in seconds) spent by the ORG b and SLICE approaches respectively. The fourth column shows the improvement of the SLICE approach over ORG b .
From Table 2, we learn that when using the SLICE approach on the corpus, on average 43 (50 - 7) out of 50 postconditions can expect a verification performance similar to the one observed when verifying the UML copier transformation. The reason is that our SLICE approach does not depend on the degree of supported features to align postconditions to the corresponding ATL rules. This gives more confidence that our approach can efficiently perform large-scale verification tasks, as shown in the previous experimentation, as support for the currently unsupported features is added.
We report that for the 12 transformations studied, the SLICE approach 1) is consistently faster than the ORG_b approach and 2) is consistently able to verify more postconditions than the ORG_b approach within the given timeout. On the full verification SLICE gains on average 71% of time w.r.t. ORG_b. The largest gain is in the UML2Profiles case, where we observe a 78% speed-up over ORG_b. The smallest gain is in the UML2Java case (68% speed-up w.r.t. ORG_b), caused by 9 timeouts issued because of currently non-supported constructs (e.g. imperative calls to helpers and certain iterators on OCL sequences). All in all, these results confirm the behavior observed in verifying the UML copier transformation.
Table 3 shows the evaluation result of the second setting. The first two columns record the two arguments sent to our grouping algorithm. In the 3rd column, we calculate the group ratio (GR), which measures how many of the 50 postconditions under verification are grouped with at least one other by our algorithm. In the 4th column (success rate), we calculate how many of the grouped VCs are in groups that decrease the global verification time. Precisely, if a VC P is the grouping result of VCs P_1 to P_n, T_1 is the verification time of P using the GROUP approach, and T_2 is the sum of the verification times of P_1 to P_n using the SLICE approach, then we consider P_1 to P_n to be successfully grouped if T_1 does not reach the timeout and T_1 is less than T_2. In the 5th column, we record the speedup ratio (SR), i.e. the difference of global verification time between the two approaches divided by the global verification time of the SLICE approach. In the 6th column, we record the time saved (TS) by the GROUP approach, computed as the difference of global verification time (in seconds) between the two approaches.
The second setting indicates that our grouping algorithm can contribute to performance on top of the slicing approach when the parameters are correctly identified. In our evaluation, the highest gain in verification time (134 seconds) is achieved when limiting groups to a maximum of 6 traces and 18 sub-goals. In this case, 25 VCs participate in grouping, all of them successfully grouped. Moreover, we report that these 25 VCs would take 265 seconds to verify using the SLICE approach, more than twice the time taken by the GROUP approach. Consequently, the GROUP approach takes 1931 seconds to verify all the 50 VCs, 10% faster than the SLICE approach (2065 seconds), and 79% faster than the ORG_b approach (9047 seconds).
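These figures correspond to the row MAX_t = 6, MAX_s = 18 of Table 3: half of the 50 postconditions are grouped (GR = 50%), every group is successful (success rate 100%), and the time saved is TS = 2065 - 1931 = 134 seconds, i.e. the grouped VCs are verified in roughly half of the 265 seconds they require under SLICE.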
Table 3 also shows that the two parameters chosen as arguments have a clear correlation with the grouping ratio and success rate of grouping. When the input arguments are gradually increased, the grouping ratio increases (more groups can be formed), whereas the success rate of grouping generally decreases (as the grouped VCs tend to become more and more complex). The effect on verification time is the combination of these two opposite behaviors, resulting in a global maximum gain point (MAX t =6, MAX s =18).
Finally, Table 3 shows that the best case for grouping is obtained by parameter values that extend the group ratio as much as possible without incurring a loss of success rate. However, the optimal arguments for the grouping algorithm may depend on the structure of the transformation and constraints. Their precise estimation from statically derived information is an open problem, which we consider for future work. Table 3 and our experience have shown that small values for the parameters (like in the first 5 rows) are safe pragmatic choices.
Discussions
In summary, our evaluations give a positive answer to all of our four research questions. They confirm that our fault localization approach can correctly and efficiently pinpoint the faults in the given MT: (a) faulty constructs are presented in the sliced transformation; (b) deduced clues assist developers in various debugging tasks (e.g. the elaboration of a counter-example); (c) the number of sub-goals that need to be examined to pinpoint a fault is usually small. Moreover, our scalability evaluation shows that our slicing and algorithmic VC grouping approaches improve verification performance by up to 79% when a MT is scaling up. However, there are also lessons we learned from the two evaluations. Completeness. We identify three sources of incompleteness w.r.t. our proposed approaches.
First, incomplete application of the automated proof strategy (defined in Definition 1). Clearly, if not detected, an incomplete application of our automated proof strategy could cause our transformation slicing to erroneously slice away rules that a postcondition might depend on. In our current solution we are able to detect incomplete cases, report them to the user, and defensively verify them. We detect incomplete cases by checking whether every element of the target types referred to by each postcondition is accompanied by a genBy predicate (this indicates full derivation). While this situation was not observed during our experimentation, we plan to improve the completeness of the automated proof strategy in the future by extending the set of natural deduction rules for ATL and designing smarter proof strategies. By defensive verification, we mean that we construct the slice to be the full MT for the incomplete cases. Thus, the VCs of the incomplete cases become MM, Pre, Exec ⊢ Post, and fault localization is automatically disabled in these cases.
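For instance, a sub-goal that refers to an element t0 of target type FSM!Transition without any genBy(t0, ·) hypothesis among its hypotheses would be flagged as incomplete; its postcondition would then be verified against the full transformation, with fault localization disabled.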
Second, incomplete verification. The Boogie verifier may report inconclusive results in general due to the underlying SMT solver. We hope the simplicity offered by our fault localization approach would facilitate the user in making the distinction between incorrect and inconclusive results. In addition, if the verification result is inconclusive, our fault localization approach can help users in eliminating verified cases and find the source of its inconclusiveness. In the long run, we plan to improve completeness of verification by integrating our approaches to interactive theorem provers such as Coq [START_REF] Bertot | Interactive Theorem Proving and Program Development: Coq'Art The Calculus of Inductive Constructions[END_REF] and Rodin [START_REF] Abrial | Rodin: An open toolset for modelling and reasoning in Event-B[END_REF] (e.g. drawing on recursive inductive reasoning). One of the easiest paths is exploiting the Why3 language [START_REF] Filliâtre | Why3 -where programs meet provers[END_REF], which targets multiple theorem provers as its back-ends.
Third, incomplete grouping. The major limitation of our grouping algorithm is that we have not yet proposed any reliable deductive estimation of the optimal parameters MAX_t and MAX_s for a given transformation. Our evaluation suggests that conservatively choosing these parameters is a safe pragmatic choice. Our future work will move toward a more precise estimation by integrating more statically derived information. Generalization of the experimentation. While evaluating our fault localization approach, we take a popular assumption in the fault localization community that multiple faults act independently [START_REF] Wong | A survey on software fault localization[END_REF]. This assumption allows us to evaluate our fault localization approach in a one-postcondition-at-a-time manner. However, we cannot guarantee that this generalizes to realistic and industrial MTs. We think classifying contracts into related groups could improve these situations.
To further improve the generalization of our proposed approaches, we also plan to use synthesis techniques to automatically create more comprehensive contract-based MT settings. For example, using metamodels or OCL constraints to synthesize consistency-preserving MT rules [START_REF] Kehrer | Automatically deriving the specification of model editing operations from meta-models[END_REF][START_REF] Radke | Translating essential OCL invariants to nested graph constraints focusing on set operations[END_REF], or using a MT with OCL postconditions to synthesize OCL preconditions [START_REF] Cuadrado | Translating target to source constraints in model-to-model transformations[END_REF].
Language Support. Our implementation supports a core subset of the ATL and OCL languages: (a) declarative ATL (matched rules) in non-refining mode, many-to-many mappings of (possibly abstract) classifiers with the default resolution algorithm of ATL; (b) first-order OCL contracts, i.e. OCL-Type, OCLAny, Primitives (OCLBool, OCLInteger, OCLString), Collection data types (i.e. Set, OrderedSet, Sequence, Bag), and 78 OCL operations on data types, including the forAll, collect, select, and reject iterators on collections. Refining mode (which uses in-place scheduling) is supported by integrating our previous work [START_REF] Cheng | Formalised EMFTVM bytecode language for sound verification of model transformations[END_REF]. The imperative and recursive aspects of ATL are currently not considered.
Usability. Currently, our fault localization approach relies on the experience of the transformation developer to interpret the deduced debugging clues. We think that counter-example generation would make this process more user-friendly, e.g. like quickcheck in Haskell [START_REF] Claessen | QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs[END_REF], or random testing in Isabelle/HOL [START_REF] Berghofer | Random Testing in Isabelle/HOL[END_REF]. In [START_REF] Cuadrado | Uncovering errors in ATL model transformations using static analysis and constraint solving[END_REF], the authors show how to combine derived constraints with a model finder to generate counter-examples that uncover type errors in MTs. In the future, we plan to investigate how to use this idea to combine our debugging clues with model finders to ease counter-example generation in our context. Finally, in case of large slices, we plan to automatically prioritize which unverified sub-goals the user needs to examine first (e.g. by giving higher priority to groups of unverified sub-goals within the same branch of the proof tree). We are also working on eliminating sub-goals that are logically equivalent (as discussed in Section 6.1.3).
Related Work
Scalable Verification of MT. There is a large body of work on the topic of ensuring MT correctness [START_REF] Ab | A survey of approaches for verifying model transformations[END_REF], or program correctness in general [START_REF] Hatcliff | Behavioral interface specification languages[END_REF][START_REF] Prasad | A survey of recent advances in SAT-based formal verification[END_REF].
Poernomo outlines a general proofs-as-model-transformations methodology to develop correct MTs [START_REF] Poernomo | Proofs-as-modeltransformations[END_REF]. The MT and its contracts are first encoded in a theorem prover. Then, upon proving them, a functional program can be extracted to represent the MT, based on the Curry-Howard correspondence [START_REF] Howard | The formulae-as-types notion of construction[END_REF].
UML-RSDS is a tool-set for developing correct MTs by construction [START_REF] Lano | A framework for model transformation verification[END_REF]. It chooses well-accepted concepts in MDE to make the approach more accessible to developers, i.e. it uses a combination of UML and OCL to create a MT design and contracts.
Calegari et al. encode the ATL MT and its metamodels into inductive types [START_REF] Calegari | A type-theoretic framework for certi-fied model transformations[END_REF]. The contracts for semantic correctness are given in OCL and are translated into logical predicates. As a result, they can use the Coq proof assistant to interactively verify that the MT is able to produce target models that satisfy the given contracts. Büttner et al. use Z3 to verify a declarative subset of ATL and OCL contracts [START_REF] Büttner | On verifying ATL transformations using 'off-the-shelf' SMT solvers[END_REF]. Their approach aims at providing minimal axioms that can verify the given OCL contracts.
Our work complements these works by focusing on scalability to make the verification more practical. To our knowledge our proposal is the first applying transformation slicing to increase the scalability of MT verification. Our work is close to Leino et al. [START_REF] Leino | Verification condition splitting[END_REF]. They introduce a Boogie-level VC splitting approach based on control-flow information. For example, the then and else blocks of an if statement branch the execution path, and can be hints for splitting VCs. This optimization does not have significant results in our context because the control-flow of ATL transformations is simple, yielding a single execution path with no potential to be split. This motivated us to investigate language-specific VC optimizations based on static information of ATL transformations. Our evaluation shows the integration of the two approaches is successful.
Fault Localization. Being one of the most user-friendly solutions for providing users with easily accessible feedback, partially or fully automated fault localization has drawn great attention from researchers in recent years [START_REF] Roychoudhury | Formulabased software debugging[END_REF][START_REF] Wong | A survey on software fault localization[END_REF]. Program slicing refers to the identification of a set of program statements which could affect the values of interest [START_REF] Tip | A survey of program slicing techniques[END_REF][START_REF] Weiser | Program slicing[END_REF], and is often used for fault localization in general programming languages. W.r.t. other program slicing techniques, our work is more akin to traditional statement-deletion-style slicing than to the family of amorphous slicing [START_REF] Harman | Amorphous program slicing[END_REF], since our approach does not alter the syntax of the MT to obtain smaller slices. While amorphous slicing could potentially produce thinner slices for large MTs (which is important for the practicability of verification), we do not consider it in this work because: (a) the syntax-preserving slices constructed by the traditional approach are more intuitive information for debugging the original MT; (b) the construction of an amorphous slice is more difficult, since to ensure correctness each altered part has to preserve the semantics of its counterpart.
Few works have adapted the idea of program slicing to localize faults in MTs. Aranega et al. define a framework to record the runtime traces between rules and the target elements these rules generated [START_REF] Aranega | Traceability mechanism for error localization in model transformation[END_REF]. When a target element is generated with an unexpected value, the transformation slices generated from the run-time traces are used for fault localization. While Aranega et al. focus on dynamic slicing, our work focuses on static slicing which does not require test suites to exercise the transformation.
To find the root of the unverified contracts, Büttner et al. demonstrate the UML2Alloy tool that draws on the Alloy model finder to generate counter-examples [START_REF] Büttner | Verification of ATL transformations using transformation models and model finders[END_REF]. However, their tool does not guarantee that a newly generated counter-example gives additional information compared to the previous ones. Oakes et al. statically verify ATL MTs by symbolic execution using DSLTrans [START_REF] Oakes | Fully verifying transformation contracts for declarative ATL[END_REF]. This approach enumerates all the possible states of the ATL transformation. If a rule is the root of a fault, all the states that involve the rule are reported.
Sánchez Cuadrado et al. present a static approach to uncover various typing errors in ATL MTs [START_REF] Cuadrado | Uncovering errors in ATL model transformations using static analysis and constraint solving[END_REF], and use the USE constraint solver to compute an input model as a witness for each error. Compared to their work, we focus on contract errors, and provide the user with sliced MTs and modularized contracts to debug the incorrect MTs.
The most similar approach to ours is the work of Burgueño et al. on syntactically calculating the intersection of the constructs used by the rules and the contracts [START_REF] Burgueño | Static fault localization in model transformations[END_REF]. To our knowledge our proposal is the first to apply natural deduction with program slicing to increase the precision of fault localization in MT. W.r.t. the approach of Burgueño et al., we aim at improving the localization precision by also considering the semantic relations between rules and contracts. This allows us to produce smaller slices by semantically eliminating unrelated rules from each scenario. Moreover, we provide debugging clues to help the user better understand why the sliced transformation causes the fault. However, their work considers a larger subset of ATL. We believe that the two approaches complement each other and that integrating them is useful and necessary.
Conclusion and Future Work
In summary, in this work we confronted the fault localization and scalability problems for deductive verification of MT. In terms of the fault localization problem, we developed an automated proof strategy to apply a set of designed natural deduction rules on the input OCL postcondition to generate sub-goals. Each unverified sub-goal yields a sliced transformation context and debugging clues to help the transformation developer pinpoint the fault in the input MT. Our evaluation with mutation analysis positively supports the correctness and efficiency of our fault localization approach. The results showed that: (a) faulty constructs are presented in the sliced transformation, (b) deduced clues assist developers in various debugging tasks (e.g. to derive counter-examples), (c) the number of sub-goals that need to be examined to pinpoint a fault is usually small.
In terms of scalability, we lift our slicing approach to postconditions to manage large-scale MTs by aligning each postcondition to the ATL rules it depends on, thereby reducing the verification complexity/time of each individual postcondition. Moreover, we propose a grouping algorithm, and prove its soundness, to identify the postconditions that should be compositionally verified to improve the global verification performance. Our evaluation confirms that our approach improves verification performance by up to 79% when a MT is scaling up.
Our future work includes facing the limitations identified during the evaluation (Section 6.3). We also plan to extend our slicing approach to metamodels and preconditions, i.e. slicing away metamodel constraints or preconditions that are irrelevant to each sub-goal. This would allow us to further reduce the size of problematic transformation scenario for the users to debug faulty MTs.
In addition, we plan to investigate how our decomposition can help us in reusing proof efforts. Specifically, due to requirements evolution, the MT and contracts are under unpredictable changes during the development. These changes can invalidate all of the previous proof efforts and cause long proofs to be recomputed. We think that our decomposition of sub-goals would increase the chances of reusing verification results, i.e. sub-goals that are not affected by the changes.
Fig. 2. Example of HSM. Abstract (top) and concrete graphical syntax (bottom)
Fig. 3.
module HSM2FSM;
create OUT : FSM from IN : HSM;

rule SM2SM {
  from sm1 : HSM!StateMachine
  to sm2 : FSM!StateMachine ( name <- sm1.name )
}

rule RS2RS {
  from rs1 : HSM!RegularState
  to rs2 : FSM!RegularState ( stateMachine <- rs1.stateMachine, name <- rs1.name )
}

rule IS2RS {
  from is1 : HSM!InitialState (not is1.compositeState.oclIsUndefined())
  to rs2 : FSM!RegularState ( stateMachine <- is1.stateMachine, name <- is1.name )
}

-- mapping each transition between two non-composite states
rule T2TA { ... }

-- mapping each transition whose source is a composite state
rule T2TB { ... }

-- mapping each transition whose target is a composite state
rule T2TC {
  from t1 : HSM!Transition, src : HSM!AbstractState, trg : HSM!CompositeState, c : HSM!InitialState (
    t1.source = src and t1.target = trg and c.compositeState = trg and not src.oclIsTypeOf(HSM!CompositeState) )
  to t2 : FSM!Transition (
    label <- t1.label, stateMachine <- t1.stateMachine, source <- src, target <- c )
}

Listing 2. Snippet of the HSM2FSM MT in ATL
Fig. 4. Counter-example derived from Listing 3 that falsifies Post1
Fig. 5. Overview of providing fault localization for VeriATL
Algorithm 2: Algorithm for grouping VCs (P, MAX_t, MAX_s) (fragment)
1:  P ← sort_t(P)
2:  G ← {}
3:  for each p ∈ P do
      ...
7:    if trail_t < MAX_t ∧ trail_s < MAX_s then ...
      ...
17:   return generate(G)
Fig. 7. The evaluation result of the first setting
Transition inv Post1:
  FSM!Transition.allInstances()->forAll(t | not t.source.oclIsUndefined())
Listing 1. The OCL contracts for HSM and FSM

... contract: if states have unique names within any source model, states will have unique names also in the generated target model. In general, there are no restrictions on what kind of correctness conditions could be expressed, as long as they are expressed in the subset of OCL we considered in this work (see language support in Section 6.3 for more details).
Theorem 1 (Rule Irrelevance - Sub-goals). MM, Pre, Exec_sliced, Hypotheses ⊢ Conclusion ⇐⇒ MM, Pre, Exec_{sliced ∪ irrelevant}, Hypotheses ⊢ Conclusion
Table 1. Evaluation metrics for the HSM2FSM case study

ID  | Unveri. Post. | Veri. Time (ms) | Unveri. / Total Sub-goals | Avg. Time (ms) | Max Time (ms) | Total Time (ms) | L_faulty ∈ PTS
MT2 | #5 | 3116 | 3 / 4  | 1616 | 1644 | 6464  | True
DB1 | #5 | 2934 | 1 / 1  | 1546 | 1546 | 1546  | -
MB6 | #4 | 3239 | 1 / 12 | 1764 | 2550 | 21168 | True
AF2 | #4 | 3409 | 2 / 12 | 1793 | 2552 | 21516 | True
MF6 | #2 | 3779 | 0 / 6  | 1777 | 2093 | 10662 | N/A
MF6 | #4 | 3790 | 1 / 12 | 1774 | 2549 | 21288 | True
DR1 | #1 | 2161 | 3 / 6  | 1547 | 1589 | 9282  | -
DR1 | #2 | 2230 | 3 / 6  | 1642 | 1780 | 9852  | -
AR  | #1 | 3890 | 1 / 8  | 1612 | 1812 | 12896 | True
AR  | #3 | 4057 | 6 / 16 | 1769 | 1920 | 28304 | True
Table 2. The generalization evaluation of the first setting

ID  TimeORG  TimeSLICE  Time Gained
UMLCopier 9047 2065 77%
UML2Accessors 9094 2610 71%
UML2MIDlet 9084 2755 70%
UML2Profiles 9047 2118 77%
UML2Observer 9084 2755 70%
UML2Singleton 9094 2610 71%
UML2AsyncMethods 9084 2755 70%
UML2SWTApplication 9084 2755 70%
UML2Java 9076 2923 68%
UML2Applet 9094 2610 71%
UML2DataTypes 9014 2581 71%
UML2JavaObserver 9084 2755 70%
UML2AbstractFactory 9094 2610 71%
Average 9078 2653 71%
Table 3. The evaluation result of the second setting

Max_t  Max_s  GR  Succ. Rate  SR  TS
3 10 8% 100% 48% 16
4 13 22% 100% 49% 51
5 15 44% 100% 47% 108
6 18 50% 100% 51% 134
7 20 56% 93% 23% 73
8 23 62% 81% 11% 41
9 25 64% 72% -108% -400
10 28 62% 55% -158% -565
11 30 64% 31% -213% -789
12 33 68% 0% -212% -1119
13 35 68% 18% -274% -1445
14 38 72% 17% -433% -3211
15 40 72% 0% -438% -3251
16 43 76% 0% -547% -4400
17 45 76% 0% -620% -4988
We name the initial states in the concrete syntax of HSM and FSM models for readability.
Our HSM2FSM transformation is adapted from[START_REF] Büttner | On verifying ATL transformations using 'off-the-shelf' SMT solvers[END_REF]. The full version can be accessed at: https://goo.gl/MbwiJC.
In practice, we fill in the trace function by examining the output element types of each ATL rule, i.e. the to section of each rule.
In fact, the value of exp is assigned to x.a because of resolution failure. This causes a type mismatch exception and results in the value of x.a becoming undefined (we consider ATL transformations in non-refinement mode where the source and target metamodels are different).
The ATL transformations zoo. http://www.eclipse.org/atl/atlTransformations/
On scalability of deductive verification for ATL MTs (Online). https://github.com/veriatl/VeriATL/tree/Scalability
https://github.com/veriatl/ VeriATL/tree/Scalability. | 82,403 | [
"11715",
"5451"
] | [
"419153",
"525283",
"489559",
"489559",
"525283"
] |
01763422 | en | [
"math",
"info"
] | 2024/03/05 22:32:13 | 2020 | https://hal.science/hal-01763422/file/2017_LegrainOmerRosat_DynamicNurseRostering.pdf | Antoine Legrain
email: [email protected]
Jérémy Omer
email: [email protected]
Samuel Rosat
email: [email protected]
An Online Stochastic Algorithm for a Dynamic Nurse Scheduling Problem
Keywords: Stochastic Programming, Nurse Rostering, Dynamic problem, Sample Average Approximation, Primal-dual Algorithm, Scheduling
Introduction
In western countries, hospitals are facing a major shortage of nurses that is mainly due to the overall aging of the population. In the United Kingdom, nurses went on strike for the first time in history in May 2017. Nagesh [START_REF] Nagesh | Nurses could go on strike for the first time in british history[END_REF] says that "It's a message to all parties that the crisis in nursing recruitment must be put center stage in this election". In the United States, "Inadequate staffing is a nationwide problem, and with the exception of California, not a single state sets a minimum standard for hospital-wide nurse-to-patient ratios." [START_REF] Robbins | We need more nurses[END_REF]. In this context, the attrition rate of nurses is extremely high, and hospitals are now desperate to retain them. Furthermore, nurses often change positions, because of the tough work conditions and because newly hired nurses are often awarded undesired schedules (mostly due to seniority-based priority in collective agreements). Consequently, providing high-quality schedules for all the nurses is a major challenge for the hospitals, which are also bound to provide expected levels of service.
The nurse scheduling problem (NSP) has been widely studied for more than two decades (refer to [START_REF] Burke | The state of the art of nurse rostering[END_REF] for a literature review). The NSP aims at building a schedule for a set of nurses over a certain period of time (typically two weeks or one month) while ensuring a certain level of service and respecting collective agreements. However, in practice, nurses often know their wishes of days-off no more than one week ahead of time. Managers therefore often update already-computed monthly schedules to maximize the number of granted wishes. If they were able to compute the schedules on a weekly basis while ensuring the respect of monthly constraints (e.g., individual monthly workload), the wishes could be taken into account when building the schedules. It would increase the number of wishes awarded, improve the quality of the schedules proposed to the nurses, and thus augment the retention rate.
The version of the NSP that we tackle here is that of the second International Nurse Rostering Competition of 2015 (INRC-II) [START_REF] Ceschia | The second international nurse rostering competition[END_REF], where it is stated in a dynamic fashion. The problem features a wide variety of constraints that are close to the ones faced by nursing services in most hospitals. In this paper, we present the work that we submitted to the competition and which was awarded second prize.
Literature review
Dynamic problems are solved iteratively without comprehensive knowledge of the future. At each stage, new information is revealed and one needs to compute a solution based on the solutions of the previous stages that are irrevocably fixed. The optimal solution of the problem is the same as that of its static (i.e., offline) counterpart, where all the information is known beforehand, and the challenge is to approach this solution although information is revealed dynamically (i.e., online).
Four main techniques have been developed to do this: computing an offline policy (Markov decision processes [START_REF] Puterman | Markov Decision Processes: Discrete Stochastic Dynamic Programming[END_REF] are mainly used), following a simple online policy (Online optimization [START_REF] Buchbinder | Designing Competitive Online Algorithms via a Primal-Dual Approach[END_REF] studies these algorithms), optimizing the current and future decisions (Stochastic optimization [START_REF] Birge | Introduction to Stochastic Programming[END_REF] handles the remaining uncertainty), or reoptimizing the system at each stage (Online stochastic optimization [START_REF] Van Hentenryck | Online Stochastic Combinatorial Optimization[END_REF] provides a general framework for designing these algorithms).
Markov decision processes decompose the problem into two different sets (states and actions) and two functions (transition and reward). A static policy is pre-computed for each state and used dynamically at each stage depending on the current state. Such techniques are overwhelmed by the combinatorial explosion of problems such as the NSP, and approximate dynamic programming [START_REF] Powell | Approximate Dynamic Programming: Solving the Curses of Dimensionality[END_REF] provides ways to deal with the exponential growth of the size of the state space. This technique has been successfully applied to financial optimization [START_REF] Bäuerle | Markov Decision Processes with Applications to Finance[END_REF], booking [START_REF] Patrick | Dynamic multipriority patient scheduling for a diagnostic resource[END_REF], and routing [START_REF] Novoa | An approximate dynamic programming approach for the vehicle routing problem with stochastic demands[END_REF] problems. In Markov decision processes, most computations are performed before the stage solution process; this technique therefore relies essentially on the probability model that infers the future events.
Online algorithms aim at solving problems where decisions are made in real-time, such as online advertisement, revenue management or online routing. As nearly no computation time is available, researchers have studied these algorithms to ensure a worst case or expected bound on the final solution compared to the static optimal one. For instance, Buchbinder [START_REF] Buchbinder | Designing Competitive Online Algorithms via a Primal-Dual Approach[END_REF] designs a primal-dual algorithm for a wide range of problems such as set covering, routing, and resource allocation problems, and provides a competitive-ratio (i.e., a bound on the worst-case scenario) for each of these applications. Although these techniques can solve very large instances, they cannot solve rich scheduling problems as they do not provide the tools for handling complex constraints.
Stochastic optimization [START_REF] Birge | Introduction to Stochastic Programming[END_REF] tackles various optimization problems from the scheduling of operating rooms [START_REF] Denton | Optimal allocation of surgery blocks to operating rooms under uncertainty[END_REF] to the optimization of electricity production [START_REF] Fleten | Short-term hydropower production planning by stochastic programming[END_REF]. This field studies the minimization of a statistical function (e.g., the expected value), assuming that the probability distribution of the uncertain data is given. This framework typically handles multi-stage problems with recourse, where first-level decisions must be taken right away and recourse actions can be executed when uncertain data is revealed. The value of the recourse function is often approximated with cuts that are dynamically computed from the dual solutions of some subproblems obtained with Benders' decomposition. However these Benders-based decomposition methods converge slowly for combinatorial problems. Namely, the dual solutions do not always provide the needed information and the solution process therefore may require more computational time than is available. To overcome this difficulty, one can use the sample average approximation (SAA) [START_REF] Kleywegt | The sample average approximation method for stochastic discrete optimization[END_REF] to approximate the uncertainty (using a small set of sample scenarios) during the solution and also to evaluate the solution (using a larger number of scenarios).
Finally, online stochastic optimization [START_REF] Van Hentenryck | Online Stochastic Combinatorial Optimization[END_REF] is a framework oriented towards the solution of industrial problems. The idea is to decompose the solution process in three steps: sampling scenarios of the future, solving each one of them, and finally computing the decisions of the current stage based on the solution of each scenario. Such techniques have been successfully applied to solve large scale problems as on-demand transportation system design [START_REF] Bent | Scenario-based planning for partially dynamic vehicle routing with stochastic customers[END_REF] or online scheduling of radiotherapy centers [START_REF] Legrain | Online stochastic optimization of radiotherapy patient scheduling[END_REF]. Their main strength is that any algorithm can be used to solve the scenarios.
Contributions
The INRC-II challenges the candidates to compute a weekly schedule in a very limited computational time (less than 5 minutes), with a wide variety of rich constraints, and with important correlations between the stages. Due to the complexity of this dynamic NSP, none of the tools presented in the literature review can solve this problem directly. We therefore introduce an online stochastic algorithm that draws inspiration from primal-dual algorithms and the SAA. In that method,
• the online stochastic algorithm offers a framework to solve rich combinatorial problems;
• the primal-dual algorithm speeds up the solution by inferring quickly the impact of some decisions;
• the SAA efficiently handles the important correlations between weeks without increasing tremendously the computational time.
Finally, the algorithm uses free and open-source software as a subroutine to solve static versions of the NSP. It is described in detail in [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF] and summarized in Section 3.
We emphasize that the algorithm described in this article has been developed in a time-constrained environment, thus forcing the authors to balance their efforts between the different modules of the software.
The resulting code is shared in a public Git repository [START_REF] Legrain | Dynamic nurse scheduler[END_REF] for reproduction of the results, future comparisons, improvements and extensions. The remainder of the article is organized as follows. In Section 2, we give a detailed description of the NSP as well as the dynamic features of the competition. In Section 3, we state a static formulation and summarize the algorithm that we use to solve it. In Section 4, we present the dynamic formulation of the NSP, the design of the algorithm, and the articulation of its components.
In Section 5, we give some details on the implementation of the algorithm, study the performance of our method on the instances of the competition, and compare them to those obtained by the other finalist teams.
Our concluding remarks appear in Section 6.
The Nurse Scheduling Problem
The formulation of the NSP that we consider is the one proposed by Ceschia et al. [START_REF] Ceschia | The second international nurse rostering competition[END_REF] in the INRC-II, and the description that we recall here is similar to theirs. First, we describe the constraints and the objective of the scheduling problem. Then, we discuss the challenges brought in by the uncertainty over future stages.
The NSP aims at computing the schedule of a group of nurses over a given horizon while respecting a set of soft and hard constraints. The soft constraints may be violated at the expense of a penalty in the objective, whereas hard constraints cannot be violated in a feasible solution. The dynamic version of the problem considers that the planning horizon is divided into one-week-long stages and that the demand for nurses at each stage is known only after the solution of the previous stage is computed. The solution of each stage must therefore be computed without knowledge of the future demand.
The schedule of a nurse is decomposed into work and rest periods and the complete schedules of all the nurses must satisfy the set of constraints presented in Table 1. Each nurse can perform different skills (e.g., Head Nurse, Nurse) and each day is divided into shifts (e.g., Day, Night). Furthermore, each nurse has signed a contract with their employers that determines their work status (e.g., Full-time, Part-time) and work agreements regulate the number of days and weekends worked within a month as well as the minimum and maximum duration of work and rest periods. For the sake of nurses' health and personal life and to ensure a sufficient level of awareness, some successions of shifts are forbidden. For instance, a night shift cannot be followed by a day shift without being separated by at least one resting day. The employers also need to ensure a certain quality of service by scheduling a minimum number of nurses with the right skills for each shift and day. Finally, the length of the schedules (i.e., the planning horizon) can be four or eight weeks.
Hard constraints
H1 Single assignment per day: A nurse can be assigned at most one shift per day.
H2 Under-staffing: The number of nurses performing a skill on a shift must be at least equal to the minimum demand for this shift.
H3 Shift type successions: A nurse cannot work certain successions of shifts on two consecutive days.
H4 Missing required skill: A nurse can only cover the demand of a skill that he/she can perform.
Soft constraints
S1 Insufficient staffing for optimal coverage: The number of nurses performing a skill on a shift must be at least equal to an optimal demand. Each missing nurse is penalized according to a unit weight, but extra nurses above the optimal value are not considered in the cost.
S2 Consecutive assignments: For each nurse, the number of consecutive assignments should be within a certain range, and the number of consecutive assignments to the same shift should also be within another certain range. Each extra or missing assignment is penalized by a unit weight.
S3 Consecutive resting days: For each nurse, the number of consecutive resting days should be within a certain range. Each extra or missing resting day is penalized by a unit weight.
S4 Preferences: Each assignment of a nurse to an undesired shift is penalized by a unit weight.
S5 Complete week-end: A given subset of nurses must work both days of the week-end or none of them. If one of them works only one of the two days Saturday or Sunday, it is penalized by a unit weight.
S6 Total assignments: For each nurse, the total number of assignments (worked days) scheduled in the planning horizon must be within a given range. Each extra or missing assignment is penalized by a unit weight.
S7 Total working week-ends: For each nurse, the number of week-ends with at least one assignment must be less than or equal to a given limit. Each worked weekend over that limit is penalized by a unit weight.
The hard constraints (Table 1, H1-H4) are typical for workforce scheduling problems: each worker is assigned an assignment or day-off every day, the demand in terms of number of employees is fulfilled, particular shift successions are forbidden, and a minimum level of qualification of the workers is guaranteed.
Soft constraints S1-S7 translate into a cost function that enhances the quality of service and helps retain the nurses within the unit. The quality of the schedules (alternation of work and rest periods, numbers of worked days and weekends, respect of nurses' preferences) is indeed paramount in order to retain the most qualified employees. These specificities make the NSP one of the most difficult workforce scheduling problems in the literature, because a personalized roster must be computed for each nurse. The fact that most constraints are soft eases the search for a feasible solution but makes the pursuit of optimality more difficult.
The goal of the dynamic NSP is to sequentially build weekly schedules so as to minimize the total cost of the aggregated schedule and ensure feasibility over the complete planning horizon. The main difficulty is to reach a feasible (i.e., managing the global hard constraints H3) and near-optimal (i.e., managing the global soft constraints S6 -S7 as well as consecutive constraints S2 -S3) schedule without knowing the future demands and nurses' preferences. Indeed, the hard constraints H1, H2, and H4 handle local features that do not impact the following days. Each of these constraints concern either one single day (i.e., one assignment per day H1) or one single shift (i.e., the demand for a shift H2 and the requirement that a nurse must possess a required skill H4). In the same way, soft constraints S1, and S4 -S5 are included in the objective with local costs that depend on one shift, day or weekend. To summarize, the proposed algorithm must simultaneously handle global requirements and border effects between weeks that are induced by the dynamic process. These effects are propagated to the following week/stage through the initial state or the number of worked days and weekends in the current stage.
The static nurse scheduling problem
We describe here the algorithm introduced in [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF] to solve the static version of the NSP. This description is important for the purpose of this paper since parts of the dynamic method described in the subsequent sections make use of certain of its specificities. This method solves the NSP with a branch-and-price algorithm [START_REF] Desaulniers | Column generation[END_REF]. The main idea is to generate a roster for each nurse, i.e., a sequence of work and rest periods covering the planning horizon. Each individual roster satisfies constraints H1, H3 and H4, and the rosters of all the nurses satisfy H2. A rotation is a list of shifts from the roster that are performed on consecutive days, and preceded and followed by a resting day; it does not contain any information about the skills performed on its shifts. A rotation is called feasible (or legal) if it respects the single assignment and succession constraints H1 and H3. A roster is therefore a sequence of rotations, separated by nonempty rest periods, to which skills are added (see Example 1). The MIP described in [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF] is based on the enumeration of possible rotations by column generation. As in most column-generation algorithms, a restricted master problem is solved to find the best fractional roster using a small set of rotations, and subproblems output rotations that could be added to improve the current solution or prove optimality. These subproblems are modeled as shortest path problems with resource constraints whose underlying networks are described in [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF]. To obtain an integer solution, this process is embedded within a branch-and-bound scheme. The remainder of the section focuses on the master problem. For the sake of clarity, we assume that, for every nurse, the set of all legal rotations is available, which conceals the role of the subproblem. It is also worth mentioning that the software is based only on open-source libraries from the COIN-OR project (BCP framework for branch-and-cut-and-price and the linear solver CLP), and is thus both free and open-source.
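To make the column-generation loop concrete, the following C++-style sketch shows how a restricted master problem and a rotation-pricing subproblem could interact. The class and function names (RestrictedMaster, RotationPricer, priceOut, etc.) are illustrative assumptions for this sketch and are not the actual interface of the authors' BCP-based code.

// Illustrative sketch of the column-generation loop at one node of the
// branch-and-bound tree (hypothetical interface, bodies left as stubs).
#include <vector>

struct Rotation { int nurse; double cost; /* days, shifts, ... */ };

struct RestrictedMaster {
    void addColumn(const Rotation&) { /* add a rotation variable to the LP */ }
    double solveLP() { return 0.0; }                       // value of the restricted LP
    std::vector<double> dualValues() const { return {}; }  // duals of coverage/flow rows
};

struct RotationPricer {
    // Shortest path with resource constraints on the nurse's rostering network.
    std::vector<Rotation> priceOut(int nurse, const std::vector<double>& duals) { return {}; }
};

double columnGeneration(RestrictedMaster& master, RotationPricer& pricer, int nbNurses) {
    bool newColumns = true;
    double lpValue = 0.0;
    while (newColumns) {
        lpValue = master.solveLP();                        // 1. solve the restricted master
        const std::vector<double> duals = master.dualValues();
        newColumns = false;
        for (int i = 0; i < nbNurses; ++i) {               // 2. price rotations nurse by nurse
            for (const Rotation& r : pricer.priceOut(i, duals)) {
                master.addColumn(r);                       // only negative reduced-cost rotations
                newColumns = true;
            }
        }
    }
    return lpValue;  // lower bound used inside the branch-and-bound scheme
}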
Example 1. Consider the following single-week roster:
We consider a set N of nurses over a planning horizon of M weeks (or K = 7M days). The sets of all shifts and skills are respectively denoted as S and Σ. The nurse's type corresponds to the set of skills he or she can use. For instance, most head nurses can fill Head Nurse demand, but they can also fill Nurse demand in most cases. All nurses of type t ∈ T (e.g., nurse or head nurse) are gathered within the subset N_t. For the sake of readability, indices are standardized in the following way: nurses are denoted as i ∈ N, weeks as m ∈ {1…M}, days as k ∈ {1…K}, shifts as s ∈ S and skills as σ ∈ Σ. We use (k, s) to denote the shift s of day k. All other data is summarized in Table 2.
Table 2. Notation (fragment): L^-_i, L^+_i denote the minimum/maximum total number of assignments of nurse i over the planning horizon.
Remark (Initial state). Obviously, if CR^0_i > 0, then CD^0_i = CS^0_i = 0, and vice-versa, because the nurse was either working or resting on the last day before the planning horizon. Moreover, s^0_i only matters if the nurse was working on that day. The total number of worked days and worked week-ends of a nurse is set to zero at the beginning of the planning horizon.
The master problem described in Formulation (1) assigns a set of rotations to each nurse while ensuring at the same time that the rotations are compatible and the demand is filled. The cost function is shaped by the penalties of the soft constraints as no other cost is taken into account in the problem proposed by the competition. For any soft constraint SX, its associated unit weight in the objective function is denoted as c X .
Let R_i be the set of all feasible rotations for nurse i. The rotation j of nurse i has a cost c_{ij} (i.e., the sum of the soft penalties S2, S4 and S5) and is described by the following parameters: a^{sk}_{ij}, a^k_{ij}, …
min ∑_{i∈N} ∑_{j∈R_i} c_{ij} x_{ij}  [S2,S4,S5]
    + ∑_{i∈N} ( ∑_{k=1}^{CR^+_i} c_3 r_{ik} + ∑_{l=k+1}^{min(K+1, k+CR^+_i)} c^{ikl}_3 r_{ikl} )  [S3]
    + c_6 ∑_{i∈N} (w^+_i + w^-_i)  [S6]
    + c_7 ∑_{i∈N} v_i  [S7]
    + c_1 ∑_{k=1}^{K} ∑_{s∈S} ∑_{σ∈Σ} z^{sk}_σ  [S1]      (1a)

subject to:

[H1, H3]: ∑_{l=k+1}^{min(K+1, k+CR^+_i)} r_{ikl} − ∑_{j∈R_i: f^+_{ij}=k−1} x_{ij} = 0,  ∀i ∈ N, ∀k = 2…K      (1b)
[H1, H3]: r_{ik} − r_{i(k−1)} + ∑_{j∈R_i: f^-_{ij}=k} x_{ij} − ∑_{l=max(1, k−CR^+_i)}^{k−1} r_{ilk} = 0,  ∀i ∈ N, ∀k = 2…K      (1c)
[H1, H3]: ∑_{l=max(1, K+1−CR^+_i)}^{K} r_{ilK} + r_{iK} + ∑_{j: f^+_{ij}=K} x_{ij} = 1,  ∀i ∈ N      (1d)
[S6]: ∑_{j∈R_i} ∑_{k=1}^{K} a^k_{ij} x_{ij} + w^-_i ≥ L^-_i,  ∀i ∈ N      (1e)
[S6]: ∑_{j∈R_i} ∑_{k=1}^{K} a^k_{ij} x_{ij} − w^+_i ≤ L^+_i,  ∀i ∈ N      (1f)
[S7]: ∑_{j∈R_i} ∑_{m=1}^{M} b^m_{ij} x_{ij} − v_i ≤ B_i,  ∀i ∈ N      (1g)
[H2]: ∑_{t∈T_σ} n^{sk}_{tσ} ≥ D^{sk}_σ,  ∀s ∈ S, k ∈ {1…K}, σ ∈ Σ      (1h)
[S1]: ∑_{t∈T_σ} n^{sk}_{tσ} + z^{sk}_σ ≥ O^{sk}_σ,  ∀s ∈ S, k ∈ {1…K}, σ ∈ Σ      (1i)
[H4]: ∑_{i∈N_t, j} a^{sk}_{ij} x_{ij} − ∑_{σ∈Σ_t} n^{sk}_{tσ} = 0,  ∀s ∈ S, k ∈ {1…K}, t ∈ T      (1j)
x_{ij} ∈ ℕ,  z^{sk}_σ, n^{sk}_{tσ} ∈ ℝ,  ∀i ∈ N, j ∈ R_i, s ∈ S, k ∈ {1…K}, t ∈ T, σ ∈ Σ      (1k)
r_{ikl}, r_{ik}, w^+_i, w^-_i, v_i ≥ 0,  ∀i ∈ N, k ∈ {1…K}, l = k+1…min(K+1, k+CR^+_i)      (1l)

where Σ_t is the set of skills mastered by a nurse of type t (e.g., head nurses have the skills Head Nurse and Nurse), and T_σ is the set of nurse types that master skill σ (e.g., the Head Nurse skill can only be provided by head nurses).
The objective function (1a) is composed of five parts: the cost of the chosen rotations in terms of consecutive assignments and preferences (S2, S4, S5), the violations of the minimum and maximum numbers of consecutive resting days (S3), the violation of the total number of working days (S6), the violation of the total number of worked week-ends (S7), and the insufficient staffing for optimal coverage (S1). Constraints (1b)-(1d) are the flow constraints of the rostering graph (presented in Figure 1) of each nurse i ∈ N. Constraints (1e) and (1f) measure the distance between the number of worked days and the authorized number of assignments: variable w^-_i counts the number of missing days when the minimum number of assignments, L^-_i, is not reached, and w^+_i is the number of assignments over the maximum allowed when the total number of assignments exceeds L^+_i. Constraints (1g) measure the number of worked weekends exceeding the maximum B_i. Constraints (1h) ensure that enough nurses with the right skill are scheduled on each shift to meet the minimal demand. Constraints (1i) measure the number of missing nurses to reach the optimal demand. Constraints (1j) ensure a valid allocation of the skills among nurses of the same type for each shift. Constraints (1k) and (1l) ensure the integrality and the nonnegativity of the decision variables.
A valid sequence of rotations and rest periods can also be represented in a rostering graph whose arcs correspond to rotations and rest periods and whose vertices correspond to the starting days of these rotations and rest periods. Figure 1 shows an illustration of a rostering graph for some nurse i and highlights the border effects. Nurse i has been resting for one day in her/his initial state, so the binary variable r_{i14} has a cost c_3 instead of zero, but the binary variable r_{i67} has a zero cost, because nurse i could continue to rest on the first days of the following week. If variable r_{i67} is set to one, nurse i will then start the following week with one resting day as initial state. Finally, if nurse i was working in her/his initial state, the penalties associated with this border effect would be included in the cost of either the first rotation, if the nurse continues to work, or the first resting arcs r_{i1k}, if the nurse starts by resting.
[Figure 1: rostering graph of nurse i over one week, with nodes R_{i1}…R_{i7} and W_{i1}…W_{i7}]
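As a small illustration of the rest arcs used in this graph, the sketch below enumerates, for one nurse, the pairs (k, l) for which the model would contain a rest arc r_{ikl}. The value of CRmax stands in for the nurse-specific bound CR^+_i and is an assumption made for the example; the cost attached to each arc (the S3 penalties and the border effects discussed above) is not computed here.

// Hypothetical enumeration of the rest arcs of one nurse's rostering graph.
#include <algorithm>
#include <cstdio>

int main() {
    const int K = 7;        // days of the week
    const int CRmax = 3;    // assumed maximum number of consecutive resting days (CR^+_i)
    for (int k = 1; k <= K; ++k) {
        // a rest period starting on day k may end on any day l up to k + CRmax,
        // or be truncated at K + 1 when the rest continues into the next week
        for (int l = k + 1; l <= std::min(K + 1, k + CRmax); ++l) {
            std::printf("rest arc r_{i,%d,%d}\n", k, l);
        }
    }
    return 0;
}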
Handling the uncertain demand
This section concentrates on the dynamic model used for the NSP, and on the design of an efficient algorithm to compute near-optimal schedules in a very limited amount of computational time. We propose a dynamic math-heuristic based on a primal-dual algorithm [START_REF] Buchbinder | Designing Competitive Online Algorithms via a Primal-Dual Approach[END_REF] and embedded into a SAA [START_REF] Kleywegt | The sample average approximation method for stochastic discrete optimization[END_REF]. As previously stated, the dynamic algorithm should focus on the global constraints (i.e., H3, S6, and S7) to reach a feasible and near-optimal global solution.
The dynamic NSP
For the sake of clarity and because we want to focus on border effects, we introduce another model for the NSP, equivalent to Formulation (1).
The constraints of that model describe border effects. Although this formulation is not solved in practice, it is better-suited to lay out our online stochastic algorithm.
Binary variable y^m_j takes value 1 if schedule j is chosen for week m, and 0 otherwise. As for rotations, a global weekly schedule j ∈ R is described by a weekly cost c_j and by parameters a_{ij} and b_{ij} that respectively count the number of days and weekends worked by nurse i. The variables w^+_i, w^-_i, and v_i are defined as in Formulation (1).
min ∑_{j∈R} ∑_{m=1}^{M} c_j y^m_j  [S1-S5]  + c_6 ∑_{i∈N} (w^+_i + w^-_i)  [S6]  + c_7 ∑_{i∈N} v_i  [S7]      (2a)

subject to:

[H1-H4, S1-S5]: ∑_{j∈R} y^m_j = 1,  ∀m ∈ {1…M}      [α^m]      (2b)
[H3, S2, S3]: ∑_{j'∈C_j} y^{m+1}_{j'} ≥ y^m_j,  ∀j ∈ R, m = 1…M−1      [δ^m_j]      (2c)
[S6]: ∑_{j∈R} ∑_{m=1}^{M} a_{ij} y^m_j + w^-_i ≥ L^-_i,  ∀i ∈ N      [β^-_i]      (2d)
[S6]: ∑_{j∈R} ∑_{m=1}^{M} a_{ij} y^m_j − w^+_i ≤ L^+_i,  ∀i ∈ N      [β^+_i]      (2e)
[S7]: ∑_{j∈R} ∑_{m=1}^{M} b_{ij} y^m_j − v_i ≤ B_i,  ∀i ∈ N      [γ_i]      (2f)
y^m_j ∈ {0,1},  ∀j ∈ R, m ∈ {1…M}      (2g)
w^+_i, w^-_i, v_i ≥ 0,  ∀i ∈ N      (2h)
The objective (2a) is decomposed into the weekly cost of the schedule and global penalties. Constraints (2b) ensure that exactly one schedule is chosen for each week. Constraints (2c) hide the succession constraints by summarizing them into a filtering constraint between consecutive schedules. These constraints simplify the resulting formulation, but will not be used in practice as their number is not tractable (see below). Constraints (2d)-(2f) measure the penalties associated with the number of worked days and weekends.
Constraints (2g)-(2h) are respectively integrality and nonnegativity constraints. The greek letters indicated between brackets (α, β, δ and γ) denote the dual variables associated with these constraints.
Constraints (2c) model the sequential aspect of the problem. This formulation is indeed solved stage by stage in practice, and thus the solution of stage m is fixed when solving stage m+1. Therefore, when computing the schedule of stage m+1, all binary variables y^m_j take value zero except one of them, denoted as y^m_{j_m}, which corresponds to the chosen schedule for week m and takes value 1. All constraints (2c) corresponding to y^m_j = 0 can be removed, and only one is kept:
∑_{j'∈C_{j_m}} y^{m+1}_{j'} ≥ 1,
where C_{j_m} is the set of all schedules compatible with j_m, i.e., those feasible and correctly priced when schedule j_m is used for setting the initial state of stage m+1. Constraints (2c) can thus be seen as filtering constraints that hide the difficulties associated with the border effects induced by constraints H3, S2, and S3.
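In practice, membership in C_{j_m} can be checked with a simple border test: a candidate schedule for week m+1 is compatible with j_m only if, for every nurse, the pair (last shift of week m, first shift of week m+1) is not a forbidden succession. The sketch below illustrates such a test; the list of forbidden successions is a placeholder, since the actual successions are instance data, and the function is an assumption made for illustration rather than part of the authors' code.

// Hypothetical border-compatibility test between two consecutive weekly schedules (H3).
#include <set>
#include <utility>
#include <vector>

enum Shift { REST, EARLY, DAY, LATE, NIGHT };

bool compatible(const std::vector<Shift>& lastShiftOfWeek,    // one entry per nurse, week m
                const std::vector<Shift>& firstShiftOfNext) { // one entry per nurse, week m+1
    // Example data only: e.g. a Night shift may not be followed by an Early or Day shift.
    static const std::set<std::pair<Shift, Shift>> forbidden = {
        {NIGHT, EARLY}, {NIGHT, DAY}, {LATE, EARLY}};
    for (size_t i = 0; i < lastShiftOfWeek.size(); ++i)
        if (forbidden.count({lastShiftOfWeek[i], firstShiftOfNext[i]}))
            return false;  // the candidate schedule is not in C_{j_m}
    return true;
}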
The main challenge of the dynamic NSP is to correctly handle constraints (2c)-(2h) to maximize the chance of building a feasible and near-optimal solution at the end of the horizon. Our dynamic procedure for generating and evaluating the computed schedules at each stage is based on the SAA. Algorithm 1 summarizes the whole iterative process over all stages.

Algorithm 1
for each stage m = 1…M−1 do
  Sample a set Ω^m of future demands for the evaluation
  while there is enough computational time do
    Generate a candidate weekly schedule j for stage m
    Initialize the evaluation algorithm with schedule j (i.e., set the initial state)
    for each scenario ω ∈ Ω^m do
      Evaluate schedule j over scenario ω
    end for
    Store the schedule (S^m := S^m ∪ {j}) and its score (e.g., its average evaluation cost)
  end while
  Choose the schedule j_m ∈ S^m with the best score
end for
Compute the best schedule for the last stage M with the given computational time

Each candidate schedule is evaluated before generating another new one. The available amount of time being short, we should not take the risk of generating several schedules without having evaluated them. This generation-evaluation step is repeated until the time limit is reached. Note that the last stage M is solved by an offline algorithm (e.g., the one described in Section 3), because the demand is totally known at this time.
The two following subsections describe each of the main steps:
1. the generation of a schedule with an offline procedure that takes into account a rough approximation of the uncertainty;
2. the evaluation of that schedule on a demand scenario, which measures the impact on the remaining weeks (this step also computes an evaluation score of the schedule based on the sampled scenarios).
Generating a candidate schedule
In a first attempt to generate a schedule, a primal-dual algorithm inspired from [START_REF] Buchbinder | Designing Competitive Online Algorithms via a Primal-Dual Approach[END_REF] is proposed. However, this procedure does not handle all correlations between the weekly schedules (i.e., Constraints (2c)). This primal-dual algorithm is then adapted to better take into account the border effects between weeks and make use of every available insight on the following weeks.
A primal-dual algorithm
Primal-dual algorithms for online optimization aim at building pairs of primal and dual solutions dynamically. At each stage, primal decisions are irrevocably made and the dual solution is updated so as to remain feasible. The current dual solution drives the algorithm to better primal decisions by using those dual values as multipliers in a Lagrangian relaxation. The goal is to obtain a pair of feasible primal and dual solutions that satisfy the complementary slackness property at the end of the process.
We use a similar primal-dual algorithm to solve the online problem associated with Formulation (2). In this dynamic process, we wish to sequentially solve a restriction of Formulation (2) to week m for all stages m ∈ {1, …, M} with a view to reaching an optimal solution of the complete formulation. This process raises an issue though: how can constraints (4c)-(4e) be taken into account in a restriction to a single week? To achieve that goal, the primal-dual algorithm uses dual information from stage m to compute the schedule of stage m+1 by solving the following Lagrangian relaxation of Formulation (2):
min ∑_{j∈R} [ c_j  (S1-S5)  + ∑_{i∈N} (β^+_i − β^-_i) a_{ij}  (S6)  + ∑_{i∈N} γ_i b_{ij}  (S7) ] y^{m+1}_j      (3a)
s.t.: [H1-H3]: ∑_{j∈C_{j_m}} y^{m+1}_j = 1      (3b)
y^{m+1}_j ∈ {0,1},  ∀j ∈ R      (3c)
where β^-_i, β^+_i, γ_i ≥ 0 are multipliers respectively associated with constraints (2d)-(2f), and both constraints (2b) and (2c), which guarantee the feasibility of the weekly schedules, are aggregated under Constraint (3b). More specifically, any new assignment for nurse i will be penalized with β^+_i − β^-_i and worked week-ends will cost an additional γ_i. It is thus essential to set these multipliers to values that will drive the computation of weekly schedules towards efficient schedules over the complete horizon. For this, we consider the dual of the linear relaxation of Formulation (2):
max ∑_{m=1}^{M} α^m + ∑_{i∈N} (L^-_i β^-_i − L^+_i β^+_i − B_i γ_i)      (4a)
s.t.: α^m + ∑_{i∈N} (a_{ij} β^-_i − a_{ij} β^+_i − b_{ij} γ_i) − δ^m_j + ∑_{j'∈C^{-1}_j} δ^{m-1}_{j'} ≤ c_j,  ∀j ∈ R, ∀m      [y^m_j]      (4b)
β^-_i ≤ c_6,  ∀i ∈ N      [w^-_i]      (4c)
β^+_i ≤ c_6,  ∀i ∈ N      [w^+_i]      (4d)
γ_i ≤ c_7,  ∀i ∈ N      [v_i]      (4e)
β^+_i, β^-_i, γ_i, δ^m_j ≥ 0,  ∀j ∈ R, m ∈ {1…M}, ∀i ∈ N      (4f)
where set C^{-1}_j contains all the schedules with which schedule j is compatible. Dual variables α^m, δ^m_j, β^-_i, β^+_i and γ_i are respectively associated with Constraints (2b), (2c), (2d), (2e), and (2f), and the variables δ^0_j are set to zero to obtain a unified formulation. The variables in brackets denote the primal variables associated with these dual constraints. At each stage, the primal-dual algorithm sets the values of the multipliers so that they correspond to a feasible and locally-optimal dual solution, and uses this solution as Lagrangian multipliers for Formulation (2). Another point of view is to consider the current primal solution at stage m as a basis of the simplex algorithm for the linear relaxation of Formulation (2). The resolution of stage m+1 corresponds to the creation of a new basis: Formulation (3) seeks a candidate pivot with a minimum reduced cost according to the associated dual solution.
Not only does the choice of dual variables drive the solution towards dual feasibility, but it also guarantees that the complementarity conditions between the current primal solution at stage m and the dual solution computed for stage m+1 are satisfied. In the computation of a dual solution, the variables α^m and δ^m_j do not need to be explicitly considered, because they will not be used in Formulation (3). What is more, focusing on stage m, the only dual constraints that involve α^m and δ^m_j, namely (4b), can be satisfied for any value of β^-_i, β^+_i and γ_i by setting δ^m_j = 0, ∀j ∈ R, and
α^m = min_{j∈R} { c_j − ∑_{i∈N} (a_{ij} β^-_i − a_{ij} β^+_i − b_{ij} γ_i) }.
Observe that the expression of the objective function of Formulation (2) ensures that the only schedule variable satisfying y^m_{j_m} > 0 will be such that j_m ∈ argmin_{j∈R} { c_j − ∑_{i∈N} (a_{ij} β^-_i − a_{ij} β^+_i − b_{ij} γ_i) }, so complementarity is achieved.
To set the values of β^-_i, β^+_i and γ_i, we first observe that the complementarity conditions are satisfied if
β^-_i = { c_6 if ∑_{j,m} a_{ij} y^m_j < L^-_i ; 0 otherwise },
β^+_i = { c_6 if ∑_{j,m} a_{ij} y^m_j ≥ L^+_i ; 0 otherwise },
γ_i = { c_7 if ∑_{j,m} b_{ij} y^m_j ≥ B_i ; 0 otherwise }.
Since the histories of the nurses are initialized with zero assignments and zero worked week-ends, we initially set β^-_i = c_6 and β^+_i = γ_i = 0 to satisfy complementarity. We then perform linear updates at each stage m, using the characteristics of the schedule j_m chosen for the corresponding week:
β^-_i = max( 0, β^-_i − c_6 a^m_{i j_m} / L^-_i ),   β^+_i = min( c_6, β^+_i + c_6 a^m_{i j_m} / L^+_i ),   γ_i = min( c_7, γ_i + c_7 b^m_{i j_m} / B_i ).
These updates do not maintain complementarity at each stage, but they allow for a more balanced penalization of the number of assignments and worked week-ends. The variations of β^-_i, β^+_i and γ_i ensure that constraints (2b) remain feasible for the previous stage, even though complementarity may be lost. In the online primal-dual literature, more involved update rules are used in order to be able to derive a competitive ratio. However, no competitive ratio is sought by this approach and linear updates are easier to design. Non-linear updates could be investigated in the future.
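These linear updates amount to a few arithmetic operations per nurse. The sketch below (with hypothetical field and parameter names) shows the update performed once the schedule j_m of week m has been fixed; it is only an illustration of the rule above, not the authors' implementation.

// Hypothetical update of the multipliers after stage m (linear rule of Algorithm 2).
#include <algorithm>
#include <vector>

struct NurseDuals {
    double betaMinus, betaPlus, gamma;  // multipliers for S6 (min/max assignments) and S7
};

void updateDuals(std::vector<NurseDuals>& duals,
                 const std::vector<int>& daysWorked,     // a^m_{i,j_m}: days worked in week m
                 const std::vector<int>& weekendsWorked, // b^m_{i,j_m}: weekends worked in week m
                 const std::vector<int>& Lmin, const std::vector<int>& Lmax,
                 const std::vector<int>& B,
                 double c6, double c7) {
    for (size_t i = 0; i < duals.size(); ++i) {
        duals[i].betaMinus = std::max(0.0, duals[i].betaMinus - c6 * daysWorked[i] / Lmin[i]);
        duals[i].betaPlus  = std::min(c6,  duals[i].betaPlus  + c6 * daysWorked[i] / Lmax[i]);
        duals[i].gamma     = std::min(c7,  duals[i].gamma     + c7 * weekendsWorked[i] / B[i]);
    }
}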
Algorithm 2 summarizes the primal-dual algorithm. It estimates the impact of a chosen schedule on the global soft constraints through their dual variables. As it is, it gives mixed results in practice. The reason is that the information obtained through the dual variables does not describe precisely the real problem. At the beginning of the algorithm, the value of the dual variables drives the nurses to work as much as possible.
Consequently, the nurses work too much at the beginning and cannot cover all the necessary shifts at the end of the horizon. Furthermore, the expected impact of the filtering constraints (2c) is totally ignored in that version. Namely, the shift type succession constraints H3 imply many feasibility issues at the border between two weeks when Formulation (2) is solved sequentially with this primal-dual algorithm. The following two sections describe how this initial implementation is adapted to cope with these issues.
Algorithm 2: Primal-dual algorithm
β^-_i = c_6, β^+_i = γ_i = 0, ∀i ∈ N
for each stage m do
  Solve Formulation (3) with a deterministic algorithm
  Update β^-_i = max( 0, β^-_i − c_6 a^m_{i j_m} / L^-_i ), ∀i ∈ N
  Update β^+_i = min( c_6, β^+_i + c_6 a^m_{i j_m} / L^+_i ), ∀i ∈ N
  Update γ_i = min( c_7, γ_i + c_7 b^m_{i j_m} / B_i ), ∀i ∈ N
end for
Sampling a second week demand for feasibility issues
Preliminary results have shown that Algorithm 2 raises feasibility issues due to constraints H3 on forbidden shift successions between the last day of one week and the first day of the following one. In other words, there should be some way to capture border effects during the computation of a weekly schedule. Instead of solving each stage over one week, we solve Formulation (3) over two weeks and keep only the first week as a solution of the current stage. The compatibility constraints (2c) between stages m and m + 1 are now included in this two-weeks model. In this approach, the data of the first week is available but no data of future stages is available. The demand relative to the next week is thus sampled as described in Section 4.4.
The fact that the schedule is generated for stages m and m + 1 ensures that the restriction to stage m ends with assignments that are at least compatible in this scenario, thus increasing the probability of building a feasible schedule over the complete horizon.
Furthermore, for two different samples of following week demand, the two-weeks version of Formulation (3) should lead to two different solutions for the current week. As a consequence, we can solve the model several times to generate different candidate schedules for stage m. As described in Algorithm 1, we use this property to generate new candidates until time limit is reached.
Global bounds to reduce staff shortages
Preliminary results have also shown that Algorithm 2 creates many staff shortages in the last weeks.
Our intent is thus to bound the number of assignments and worked weekends in the early stages to avoid the later shortages. The naive approach is to resize constraints (2d)-(2f) proportionally to the length of the demand considered in Formulation (3) (i.e., two weeks in our case). However, it can be desirable to allow for important variations in the number of assignments to a given nurse from one week to another, and even from one pair of weeks to another. Stated otherwise, it is not optimal to build a schedule that simply repeats one- or two-week-long patterns, as would be the case for less constrained environments. A simple illustration arises by considering the constraints on the maximum number of worked weekends. To comply with these constraints, no nurse should be working every weekend and, because of restricted staff availability, it is unlikely that a nurse is off every weekend. Coupled with the other constraints, this necessarily results in complex and irregular schedules. Consequently, bounding the number of assignments individually would discard valuable schedules.
Instead, we propose to bound the number of assignments and worked weekends for sets of similar nurses, in order to both stabilize the total number of worked days within each set and allow irregularities in the individual schedules. We choose to cluster nurses working under the same work contract, because they share the same minimum and maximum bounds on their soft constraints. Hence, for each stage m, we add one set of constraints similar to (2d)-(2f) for each contract. In the constraints associated with contract κ ∈ Γ, the left-hand sides are resized proportionally to the number of nurses with contract κ and the number of weeks in the demand horizon. Let L^{m-}_κ, L^{m+}_κ, and B^m_κ be respectively the minimum and maximum total number of assignments, and the maximum total number of worked weekends, over the two-week demand horizon for the nurses with contract κ. We define these global bounds as follows (an illustrative computation is sketched after the definitions):
• L^{m-}_κ = (7 · 2)/(M − m + 1) · ∑_{i: κ_i = κ} max( 0, L^-_{κ_i} − ∑_{m'=1}^{m-1} ∑_j a_{ij} y^{m'}_j ),
• L^{m+}_κ = (7 · 2)/(M − m + 1) · ∑_{i: κ_i = κ} max( 0, L^+_{κ_i} − ∑_{m'=1}^{m-1} ∑_j a_{ij} y^{m'}_j ),
• B^m_κ = 2/(M − m + 1) · ∑_{i: κ_i = κ} max( 0, B_{κ_i} − ∑_{m'=1}^{m-1} ∑_j b_{ij} y^{m'}_j ),
where κ_i is the contract of nurse i.
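The sketch below shows one possible way to compute the lower bound L^{m-}_κ from per-nurse data, following the definition above; the container layout, field names and the assumption that the remaining workload is stored per nurse are illustrative only and are not the authors' data structures.

// Hypothetical computation of the contract-level bound L^{m-}_kappa over a two-week window.
#include <algorithm>
#include <map>
#include <string>
#include <vector>

struct Nurse {
    std::string contract;  // kappa_i
    int Lmin;              // L^-_i over the full horizon
    int daysWorkedSoFar;   // days already scheduled in stages m' < m
};

std::map<std::string, double> contractLowerBounds(const std::vector<Nurse>& nurses,
                                                  int M, int m) {
    std::map<std::string, double> LminKappa;
    for (const Nurse& n : nurses) {
        const double remaining = std::max(0, n.Lmin - n.daysWorkedSoFar);
        // proportional resizing of the remaining workload to the two-week demand horizon
        LminKappa[n.contract] += 7.0 * 2.0 / (M - m + 1) * remaining;
    }
    return LminKappa;
}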
Finally, the objective (3a) is modified to take into account the new slack variables w^{m-}_κ, w^{m+}_κ, v^m_κ associated with the new soft constraints. The costs of these slack variables are set to make sure that violations of the soft constraints are not penalized more than once for an individual nurse. For instance, instead of counting the full cost c_6 for variable w^{m+}_κ, we compute its cost as (c_6 − max_{i: κ_i=κ}(β^+_i)). This guarantees that an extra assignment is never penalized with more than c_6 for any individual nurse. The costs of the variables w^{m-}_κ and v^m_κ have been modified in the same way for analogous reasons. Formulation (5) summarizes the final model used for the generation of the schedules. We recall that the variables y^m_j now select a schedule j that covers a two-week demand, and that this formulation is in fact solved by a branch-and-price algorithm that selects rotations instead of weekly schedules.
min ∑_{j∈R} [ c_j + ∑_{i∈N} ( (β^+_i − β^-_i) a_{ij} + γ_i b_{ij} ) ] y^m_j
    + ∑_{κ∈Γ} [ (c_6 − max_{i: κ_i=κ}(β^-_i)) w^{m-}_κ + (c_6 − max_{i: κ_i=κ}(β^+_i)) w^{m+}_κ + (c_7 − max_{i: κ_i=κ}(γ_i)) v^m_κ ]      (5a)

s.t.: [H1, H2, H3, H4]: ∑_{j∈R} y^m_j = 1      (5b)
[S6]: ∑_{i: κ_i=κ} ∑_{j∈R} a_{ij} y^m_j + w^{m-}_κ ≥ L^{m-}_κ,  ∀κ ∈ Γ      (5c)
[S6]: ∑_{i: κ_i=κ} ∑_{j∈R} a_{ij} y^m_j − w^{m+}_κ ≤ L^{m+}_κ,  ∀κ ∈ Γ      (5d)
[S7]: ∑_{i: κ_i=κ} ∑_{j∈R} b_{ij} y^m_j − v^m_κ ≤ B^m_κ,  ∀κ ∈ Γ      (5e)
y^m_j ∈ {0,1},  ∀j ∈ R      (5f)
w^{m-}_κ, w^{m+}_κ, v^m_κ ≥ 0,  ∀κ ∈ Γ      (5g)
To conclude, Formulation (5) makes it possible to anticipate the impact of a schedule on the future through two mechanisms: the problem is solved over two weeks to diminish the border effects that may lead to infeasibility, and the costs are modified to globally limit the penalties due to constraints S6 and S7. Furthermore, this formulation can generate different schedules for the first week by considering different samples of the second-week demand.
Evaluating candidate schedules
In the spirit of the SAA, the first-week schedules generated by Formulation (5) are evaluated in order to be ranked. The evaluation should measure the expected impact of each schedule on the global solution (i.e., over M weeks). This impact can be measured by solving an NSP several times over different sampled demands for the remaining weeks.
Let Ω^m be the set of scenarios of future demands for weeks m+1, …, M, and assume that a schedule j has been computed for week m. To evaluate schedule j, we wish to solve the NSP for each sample of future demand ω ∈ Ω^m by using j to set the initial history of the NSP. Denoting V^m_{jω} the value of the solution, we can infer that the future cost c^m_{jω} of schedule j in scenario ω is equal to c_j + V^m_{jω}: the actual cost of the schedule plus the resulting cost for scenario ω. Then, a score that takes into account all the future costs (c^m_{jω})_{ω∈Ω^m} of a given schedule j is computed. Several functions have been tested, and preliminary results have shown that the expected value produced the best results. Finally, the schedule j_m with the best score is retained.
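In code, the scoring step reduces to averaging the future costs over the evaluation scenarios and keeping the candidate with the smallest average. The sketch below assumes that the per-scenario values V^m_{jω} have already been computed by the evaluator; the function signature and container layout are assumptions made for the example.

// Hypothetical selection of the best candidate schedule from its evaluation scores.
#include <limits>
#include <vector>

// scheduleCosts[j] = c_j ; futureValues[j][w] = V^m_{j,w} for evaluation scenario w.
int bestCandidate(const std::vector<double>& scheduleCosts,
                  const std::vector<std::vector<double>>& futureValues) {
    int best = -1;
    double bestScore = std::numeric_limits<double>::max();
    for (size_t j = 0; j < scheduleCosts.size(); ++j) {
        if (futureValues[j].empty()) continue;  // candidate not evaluated yet
        double score = 0.0;
        for (double V : futureValues[j]) score += scheduleCosts[j] + V;  // c^m_{j,w}
        score /= futureValues[j].size();        // expected (average) future cost
        if (score < bestScore) { bestScore = score; best = static_cast<int>(j); }
    }
    return best;  // index j_m of the retained schedule
}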
However, computing the value V^m_{jω} raises two main issues. First, the NSP is an integer program for which it can be time-consuming to even find a feasible solution. We thus use the linear relaxation of this problem as an estimation of the future cost. This simplification drastically decreases the computational time, but can still detect feasibility issues at the border between weeks m and m+1. The second issue is that, over a long time horizon, even the linear relaxation of the NSP cannot be solved in a sufficiently small computational time. We thus restrict the evaluation to scenarios of future demands that are at most two weeks long. More specifically, the scenarios are one week long for the penultimate stage (M−1) and two weeks long for the previous stages. We observed that this restriction keeps the solution time short enough while giving a good measure of the impact of the schedule j on the future.
To summarize, the value V^m_{jω} is computed by solving the linear relaxation of Formulation (1) for a two-week demand ω, and the initial state is set by using the schedule j. Finally, the parameters L^-_i, L^+_i, and B_i are proportionally resized over two weeks, as follows.
• L^{(m+1)-}_i = (7 · 2)/(M − m) · max( 0, L^-_i − ∑_{m'=1}^{m} ∑_j a^{m'}_{ij} y^{m'}_j );
• L^{(m+1)+}_i = (7 · 2)/(M − m) · max( 0, L^+_i − ∑_{m'=1}^{m} ∑_j a^{m'}_{ij} y^{m'}_j );
• B^{m+1}_i = 2/(M − m) · max( 0, B_i − ∑_{m'=1}^{m} ∑_j b^{m'}_{ij} y^{m'}_j ).
As already stated, the number of evaluation scenarios included in Ω^m is kept low (e.g., |Ω^m| = 5) to meet the requirements in computational time. These scenarios are sampled as described in the next section.
Sampling of the scenarios
The competition data does not provide any knowledge about past demands, potential probability distributions of the demand, nor any other type of information that could help for sampling scenarios of demand.
It is thus impossible to build complex and accurate prediction models for the future demand. At a given stage m, the algorithm has absolutely no knowledge about the future realizations of the demand, so the sampling can only be based on the current and past observations of the weekly demands on stages 1 to m.
To build scenarios of future demand, we simply perturb these observations with some noise that is uniformly distributed within a small range (typically one or two nurses) and randomly mix these observations (e.g., pick the Monday of one observation and the Tuesday from another one). The future preferences are not sampled in the scenarios, because they cannot lead to an infeasible solution, they do not induce border effects, and they have small costs when compared to the other soft constraints. The goal of the sampling method is only to obtain some diversity in the scenarios used to generate different candidate schedules and in those used to evaluate the candidate schedules. Assuming that the demands will not change dramatically from one week to another, this allows for additional robustness and efficiency in many situations.
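A possible implementation of this perturbation-and-mixing scheme is sketched below: for every day of the scenario, one past weekly demand is picked at random and its staffing levels are shifted by a small uniform noise. The data layout (demands indexed by week, day, and coverage requirement) and the function name are assumptions made for the example, not the authors' code.

// Hypothetical sampling of a future weekly demand from past observations.
#include <algorithm>
#include <random>
#include <vector>

// pastDemands[w][d][c]: observed demand of week w, day d, for coverage requirement c.
using WeekDemand = std::vector<std::vector<int>>;

WeekDemand sampleWeek(const std::vector<WeekDemand>& pastDemands, std::mt19937& rng) {
    std::uniform_int_distribution<int> pickWeek(0, static_cast<int>(pastDemands.size()) - 1);
    std::uniform_int_distribution<int> noise(-1, 1);  // perturbation of one nurse at most
    WeekDemand scenario(7);
    for (int d = 0; d < 7; ++d) {
        const WeekDemand& source = pastDemands[pickWeek(rng)];  // mix days of different weeks
        scenario[d] = source[d];                                 // assumes 7 days per observation
        for (int& level : scenario[d])
            level = std::max(0, level + noise(rng));             // keep demands non-negative
    }
    return scenario;
}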
Summary of the primal-dual-based sample average approximation
Algorithm 3 provides a detailed description of the overall algorithm we submitted to the INRC-II.
It generates several schedules with a primal-dual algorithm and evaluates them over a set Ω^m of future demands. The evaluation step increases the probability of selecting a globally feasible schedule, one that has already been feasible for several resolutions of the linear relaxation of Formulation (1). The performance of this algorithm is discussed in Section 5.
Algorithm 3: A primal-dual-based sample average approximation
β^-_i = c_6, β^+_i = γ_i = 0, ∀i ∈ N
for each stage m = 1…M−1 do
  Initialize the set of candidate schedules of stage m: S^m = ∅
  Initialize the generation model using the chosen schedule of the previous stage m−1 (i.e., set the initial state)
  Sample a set Ω^m of future demands for the evaluation
  while there is enough computational time do
    Sample a second-week demand for the generation
    Solve Formulation (5) with a deterministic algorithm to build a two-week schedule (j_1, j_2)
    Store the schedule of the first week: S^m := S^m ∪ {j_1}
    Initialize the evaluation model with schedule j_1 (i.e., set the initial state)
    for each demand ω ∈ Ω^m do
      Compute the value V^m_{jω} as the optimal value of the linear relaxation of Formulation (1), over two weeks if m < M−1, and one week otherwise
    end for
  end while
  Choose the schedule j_m with the best average evaluation cost
  Update the dual variables as in Algorithm 2
end for
Compute the best schedule for the last stage M with the given computational time
Experimentations
This section presents the results obtained at the INRC-II. The competition was organized in two rounds.
In the selection round, each team had to submit their best results on a benchmark of 28 instances that were available to the participants before submitting the codes. The organizers then retained the best eight teams for the final round where they tested the algorithms against a new set of 60 instances. The algorithm described above ranked second in both rounds.
The instances used during each round are summarized in Tables 3 and 4. They range from relatively small instances (35 nurses over 4 weeks) to really big ones (120 nurses over 8 weeks) that are very difficult to solve even in a static setting [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF]. The algorithms of the participants all had the same limited computational time to solve each stage (depending on the number of nurses and on the computer used for the tests). The solution obtained at each stage was used as an initial state for the schedule of the following week. If an algorithm reached an infeasible schedule, the iterative process was stopped. The final rank of each team was computed as the average rank over all the instances. In this section, we will focus our discussion on the quality of the results obtained with our algorithm.
More details about the competition and the results can be found on the competition website: http://mobiz.vives.be/inrc2/.
Algorithm implementation
Algorithm 3 depends on how future demands are sampled, on the number of scenarios used for the evaluation, and last but not least, on the scheduling software.
The algorithm uses only five scenarios of future demands for the evaluation. It must indeed divide the short available computational time between the generation and the evaluation of the schedules. The first step aims at computing the best schedule according to the current demand while the second step seeks a robust planning that yields promising results for the following stages (high probability to remain feasible and near-optimal). In order to generate several schedules (at least two) within the granted computational time, the number of demand scenarios must remain small. Moreover, since the demand scenarios we generate are not based on accurate data, but only on a learned distribution, there is no guarantee that a larger number of scenarios would provide a better evaluation. In fact, we tested different configurations (3 to 10 scenarios used for the evaluation), and they all gave approximately the same results (the best results were obtained for 4 to 6 scenarios).
The code is publicly shared on a Git repository [START_REF] Legrain | Dynamic nurse scheduler[END_REF]. The scheduling software is implemented in C++ and is based on the branch-cut-and-price framework of the COIN-OR project, BCP. The choice of this framework is motivated by the competition requirement of using free and open-source libraries. The pricing problems are modeled as shortest paths with resource constraints and solved with the Boost library. The solution algorithm is not parallelized and it uses the branching strategy 'two dives' described in [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF]. This strategy 'dives' two times in the branching tree and keeps the best feasible solution. If no solution is found after two dives (which was never the case), the algorithm continues until it reaches either the time limit or a feasible solution.
Selection instances
The winning team ranks first on every instance (but for one third position). Their algorithm is also based on a mixed integer programming approach that computes weekly schedules, but they directly model the problem using a large flow network where a state expansion is used to take the soft constraints into account. For the time being, only a brief description of the algorithm is available in [START_REF] Römer | A direct MILP approach based on state-expanded network flows and anticipation for multistage nurse rostering under uncertainty[END_REF]. Algorithm 3 obtains a fair second position and competes with the best algorithm, since it also ranks first or second on every instance but two, for which it ranks third. Finally, the third team is significantly behind the first two. As highlighted by Figure 2, the solutions found by their algorithms exhibit at least a 9% relative gap with respect to the best solution.
It is also important to note that these algorithms are randomized, because they are all based on random sampling of future demands. For the first phase of the competition, the participants had to provide the random seeds they used to obtain their results. During the second phase, the organizers executed the code of each team ten times with different arbitrary random seeds on each instance. Because of these variations, we run the algorithm many times on each of the instances used for the selection to submit only the best ones and increase our chance of qualification. Most teams must have used the same technique, since the ranking between the selection and the final rounds did not really change.
Final instances
The final results follow the same ranking as the one obtained after the selection. However, these comparisons are fairer, since the results were computed by the organizers, so that the teams were not able to select the solutions they submitted. This configuration evaluates the proposed algorithms, and especially their robustness, in a better way. The organizers even ran the algorithms 10 times on each of the 60 final instances, and thus compared the proposed software over 600 tests. The winning team's algorithm builds the best schedules for about 65% of the instances, but our algorithm appears to be more robust. We were indeed able to produce a feasible schedule in every test but one, while the winners could not build a feasible schedule in 5% of the cases (i.e., 34 tests). This comparison also highlights the balance that needs to be found between the time spent generating the best possible schedules and the time spent evaluating them, since this second phase provides a measure of their robustness to future demands.
Figure 5 shows the cumulative distribution of the relative gap between the winning team's solutions and ours as a function of the number of nurses. It is clear that once the instances exceed a certain size (i.e., 110 nurses), the quality of the solutions of the winning team decreases. Indeed, in [START_REF] Römer | A direct MILP approach based on state-expanded network flows and anticipation for multistage nurse rostering under uncertainty[END_REF], the winning team comments that their algorithm was simply unable to find feasible integer solutions for some weekly demands of these instances, showing that the method has difficulties scaling up. Furthermore, this algorithm was also not able to find feasible solutions for a significant number of the small instances (i.e., 35 nurses). As a possible explanation, we have observed that it is more difficult to find feasible solutions for these instances, because they leave less flexibility for creating the schedules, i.e., because the hard constraints of the MIP are tighter. Stated otherwise, the proportion of feasible schedules that meet the minimum demand is much smaller for the smallest instances used in the competition.
Conclusions
This article deals with the nurse scheduling problem as described in the context of the international competition INRC-II. The objective is to sequentially build, week by week, the schedule of a set of nurses over a planning horizon of several weeks. In this dynamic process, the schedule computed for a given week is irrevocably fixed before the demand and the preferences for the next week are revealed. The main difficulty is to correctly handle the border effects between weeks and the global soft constraints to compute a feasible and near-optimal schedule for the whole horizon.
Our main contribution is the design of a robust online stochastic algorithm that performs very well over a wide range of instances (from 30 to 120 nurses over a four- or eight-week horizon). The proposed algorithm embeds a primal-dual algorithm within a sample average approximation. The primal-dual procedure generates candidate schedules for the current week, and the sample average approximation is used to evaluate each of them and retain the best one. The resulting implementation is shared on a public repository [START_REF] Legrain | Dynamic nurse scheduler[END_REF] and builds upon an open source static nurse scheduling software.
The designed algorithm won the second prize in the INRC-II competition. The results show that, although this procedure does not compute the best schedules for a majority of instances, it is the most robust one.
Indeed, it finds feasible solutions for almost every instance of the competition while providing high-quality schedules.
Figure 1: Example of a rostering graph for nurse i ∈ N over a horizon of K = 7 days, where the minimum and maximum numbers of consecutive resting days are respectively CR - i = 2 and CR + i = 3, and the initial number of consecutive resting days is CR 0 i = 1. The rotation arcs (x ij ) are the plain arrows, the rest arcs (r ikl and r ik ) are the dotted arcs, and the artificial flow arcs are the dashed arrows. The bold rest arcs have a cost of c 3 and the others are free.
Algorithm 1: A sample average approximation based algorithm
for each stage m = 1 ... M - 1 do
  Initialize the set of candidate schedules of stage m: S m = ∅
  Initialize the generation algorithm with the chosen schedule j m-1 of the previous stage m - 1 (i.e., set the initial state)
Figure 2: Cumulative distribution of the relative gap on the selection instances
Figure 3: Distribution of the objective value for the selection instances
Figure 3 shows the distribution of the objective value for 180 observations of the solution of an instance with 80 nurses over 8 weeks using Algorithm 3. The values of the solutions are within a [-6%, +7%] range.
Figure 4: Cumulative distribution of the relative gap on the final instances
Figure 4 presents the relative gaps obtained on the final instances. The two first teams are really close and their algorithms highlight two distinct features of the competition: the winning team's algorithm builds the best schedules on most instances, while ours is the most robust.
Figure 5: Cumulative distribution of the relative gap as a function of the number of nurses
Table 1: Constraints handled by the software.
Table 2: Summary of the input data.
Demand
D sk σ : minimum demand in nurses performing skill σ on shift (k, s)
O sk σ : optimal demand in nurses performing skill σ on shift (k, s)
Initial state
CD 0 i : initial number of ongoing consecutive worked days for nurse i
CS 0 i : initial number of ongoing consecutive worked days on the same shift for nurse i
s 0 i : shift worked on the last day before the planning horizon for nurse i
CR 0 i : initial number of ongoing consecutive resting days for nurse i
L - i , L + i : min/max total number of worked days over the planning horizon for nurse i
CR - i , CR + i : min/max number of consecutive days-off for nurse i
B i : max number of worked week-ends over the planning horizon for nurse i
Each rotation j of nurse i is also described by binary parameters, among them b m ij , which are equal to 1 if nurse i works respectively on shift (k, s), on day k, and on weekend m, and 0 otherwise. Finally, f - ij and f + ij represent the first and last worked days of this rotation. Let x ij be a binary decision variable which takes value 1 if rotation j is part of the schedule of nurse i and zero otherwise. The binary variables r ikl and r ik measure if constraint S3 is violated: they are respectively equal to 1 if nurse i has a rest period from day k to l - 1 including at most CR + i consecutive days (cost: c ikl 3 ), and if nurse i rests on day k and has already rested for at least CR + i consecutive days before k, and to zero otherwise. The integer variables w + i and w - i count the number of days worked respectively above L + i and below L - i by nurse i. The integer variable v i counts the number of weekends worked above B i by nurse i. Finally, the integer variables n sk σ , n sk tσ , and z sk σ respectively measure the number of nurses performing skill σ, the number of nurses of type t performing skill σ, and the undercoverage of skill σ on shift (k, s).
Table 3: Instances used for the selection
Number of nurses: 35, 70, 110
Table 4: Instances used for the final
The computational times are given for a Linux computer with an Intel(R) Xeon(R) X5675 @ 3.07 GHz processor and 8 GB of available memory.
Despite the limits of this algorithm, our intent with this article is to present the exact implementation that was submitted to the competition. There is room for improvements that could be developed in the future. For instance, the primal-dual algorithm could be enhanced with non-linear updates, new features recently developed in the static nurse scheduling software could be tested, or the bounding constraints added in the primal-dual algorithm could be refined. | 63,020 | [
"9143"
] | [
"95464",
"75",
"117606",
"238023"
] |
01761384 | en | [
"phys",
"spi",
"stat"
] | 2024/03/05 22:32:13 | 2018 | https://imt-mines-albi.hal.science/hal-01761384/file/Mondiere-Controlling.pdf | A Mondiere
V Déneux
N Binot
D Delagnes
Controlling the MC and M 2 C carbide precipitation in Ferrium® M54® steel to achieve optimum ultimate tensile strength/fracture toughness balance
Aurélien Mondière, Valentine Déneux, Nicolas Binot, Denis Delagnes
Introduction
Aircraft applications, particularly for landing gear, require steels with high mechanical resistance, fracture toughness and stress corrosion cracking resistance [START_REF] Flower | High Performance Materials in Aerospace[END_REF]. Additionally, the aerospace industry is looking for different ways to reduce the weight of landing gear parts, as the landing gear assembly can represent up to 7% of the total weight of the aircraft [START_REF] Kundu | Aircraft Design[END_REF]. The search for metal alloys with a better balance of mechanical properties while maintaining a constant production cost is stimulating research activities. For several decades, 300 M steel has been widely used for landing gear applications. However, its fracture toughness and stress corrosion cracking resistance need to be improved and aeronautical equipment suppliers are searching for new grades. As shown in Fig. 1, AerMet® 100 and Ferrium® M54® (M54®) grades are excellent candidates to replace the 300 M steels without any reduction in strength or increase in weight. Other grades do not present a high enough fracture toughness, or are not resistant enough.
The recent development of M54® steel since 2010 [START_REF] Jou | Lower-Cost, Ultra-High-Strength, High-Toughness Steel[END_REF] has led to a higher stress corrosion cracking resistance and lower cost due to its lower cobalt content (see Table 1), as compared to the equivalent properties of the AerMet® 100 grade. These two steels belong to the UHS Co-Ni steel family.
UHS Co-Ni steels were developed at the end of the 1960s with the HP9-4-X [START_REF] Garrison | Ultrahigh-strength steels for aerospace applications[END_REF] and HY-180 [START_REF] Dabkowski | Nickel, Cobalt, Chromium Steel[END_REF] grades, with the main goal being to achieve a higher fracture toughness than 300 M or 4340 steels. The main idea was first to replace cementite by M 2 C alloy carbide precipitation during tempering to avoid brittle fracture without too large a reduction in mechanical strength. A better balance of UTS/K 1C was achieved with AF1410 [START_REF] Little | High Strength Fracture Resistant Weldable Steels[END_REF] by increasing the content of carbide-forming elements. In addition, an improvement in fracture toughness was also requested and finally achieved by the accurate control of reverted austenite precipitation during tempering [START_REF] Haidemenopoulos | Dispersed-Phase Transformation Toughening in UltraHigh-Strength Steels[END_REF] and the addition of rare earth elements to change the sulfide type [START_REF] Handerhan | A comparison of the fracture behavior of two heats of the secondary hardening steel AF1410[END_REF][START_REF] Handerhan | Effects of rare earth additions on the mechanical properties of the secondary hardening steel AF1410[END_REF], resulting in an increase in inclusion spacing [START_REF] Garrison | Lanthanum additions and the toughness of ultra-high strength steels and the determination of appropriate lanthanum additions[END_REF]. Thus, AerMet® 100 was patented in 1993 [START_REF] Hemphill | High Strength, High Fracture Toughness Alloy[END_REF], incorporating these scientific advances to achieve the same strength level as 300 M but with a higher fracture toughness. Then, from the 1990s to the 2000s, scientists sought to improve grain boundary cohesion to further increase the fracture toughness through W, Re and B additions [START_REF] Kantner | Designing Strength, Toughness, and Hydrogen Resistance: Quantum Steel[END_REF]. Thus, Ferrium® S53® steel, developed in 2007 [START_REF] Kuehmann | Nanocarbide Precipitation Strengthened Ultrahigh-Strength[END_REF], was the first steel of the family containing W. Seven years ago, Ferrium® M54® steel was designed, offering roughly the same mechanical properties as AerMet® 100, but at a lower price thanks to a lower cobalt content.
UHS Co-Ni steels all exhibit an excellent UTS/K 1C balance due to a M 2 C carbide precipitation during tempering in a highly dislocated lathmartensitic matrix [START_REF] Jou | Lower-Cost, Ultra-High-Strength, High-Toughness Steel[END_REF][START_REF] Little | High Strength Fracture Resistant Weldable Steels[END_REF][START_REF] Hemphill | High Strength, High Fracture Toughness Alloy[END_REF][START_REF] Ayer | Transmission electron microscopy examination of hardening and toughening phenomena in Aermet 100[END_REF][START_REF] Olson | APFIM study of multicomponent M2C carbide precipitation in AF1410 steel[END_REF][START_REF] Machmeier | Development of a strong (1650MNm -2 tensile strength) martensitic steel having good fracture toughness[END_REF]. However, there is limited literature on the recently developed M54® steel [START_REF] Wang | Austenite layer and precipitation in high Co-Ni maraging steel[END_REF][START_REF] Wang | Analysis of fracture toughness in high Co-Ni secondary hardening steel using FEM[END_REF][START_REF] Lee | Ferrium M54 Steel[END_REF][START_REF] Pioszak | Hydrogen Assisted Cracking of Ultra-High Strength Steels[END_REF].
The addition of alloying elements in UHS Co-Ni steels also forms stable carbides like M 6 C or M 23 C 6 during the heat treatment process.
The size of these stable carbides can easily reach several hundred nanometers, resulting in a significant decrease in fracture toughness, as they act as microvoid nucleation sites during mechanical loading [START_REF] Schmidt | Solution treatment effects in AF1410 steel[END_REF]. These particles can be dissolved by increasing the austenitizing temperature, but the prior austenite grain size then rapidly increases, with a detrimental effect on the mechanical properties [START_REF] Sankaran | Metallurgy and Design of Alloys with Hierarchical Microstructures[END_REF]. The new challenge for these steels is thus to dissolve the coarse stable carbides without excessive grain growth.
This challenge is also well-known in other kinds of martensitic steels for other applications, such as hot work tool steels. Michaud [START_REF] Michaud | The effect of the addition of alloying elements on carbide precipitation and mechanical properties in 5% chromium martensitic steels[END_REF] showed that V-rich carbide precipitation during tempering achieves high mechanical properties at room temperature as well as at high temperature. However, the precipitation remained heterogeneously distributed in the matrix, regardless of the austenitizing and tempering conditions, so that fracture toughness and Charpy impact energy were limited. Indeed, the same V-rich precipitates (MC type) were found to control both the austenitic grain size during austenitizing and the strength during tempering. The incomplete solutionizing of V-rich carbides during austenitizing does not permit a homogeneous concentration of alloying elements in the martensitic matrix after quench, which explains why the strength/fracture toughness balance is limited. The generic idea would be to introduce two different precipitate populations, each with a single and precise role: to control the austenitic grain size OR to control the mechanical strength. In H11-type tool steels, the addition of Mo slightly improved the balance of properties [START_REF] Michaud | The effect of the addition of alloying elements on carbide precipitation and mechanical properties in 5% chromium martensitic steels[END_REF].
In steels for aircraft applications, Olson [START_REF] Olson | Overview: Science of Steel[END_REF] and Gore et al. [START_REF] Gore | Grain-refining dispersions and properties in ultrahigh-strength steels[END_REF] succeeded in introducing another type of homogeneous small particles which pin the grain even for elevated austenitization temperatures (T = 1200 °C) in AF1410: (Ti,Mo)(C,N). These carbides avoid grain coarsening between 815 °C and 885 °C at austenitization leading to an increase in fracture toughness due to coarse carbides dissolution [START_REF] Schmidt | Solution treatment effects in AF1410 steel[END_REF]. The patent of Ferrium® S53® steel also describes a nanoscale MC precipitation which pins the grain boundary and avoids grain coarsening by the dissolution of the coarse carbides [START_REF] Kuehmann | Nanocarbide Precipitation Strengthened Ultrahigh-Strength[END_REF].
Stable carbide dissolution in Ferrium® M54® seems to be particularly challenging due to the formation of both M 2 C and M 6 C Mo-rich carbides during the heat treatment process (see Fig. 2). Indeed, as Mo-rich M 2 C carbides precipitate during tempering, the full dissolution of Mo-rich carbides is needed to achieve a homogeneous distribution of Mo within the matrix.
More specifically, the particles that control the austenitic grain size need to remain stable at the high temperatures required to dissolve the whole population of M 2 C and M 6 C carbides, so that this dissolution can proceed without grain coarsening. The aim of this article is to investigate carbide precipitation in M54® after a cryogenic treatment following the quench as well as after tempering. Carbide distribution, size and composition are carefully described for both states.
Experiments
Materials and Heat Treatment
Specimens were taken at mid-radius of a single bar of diameter 10.25 cm in the longitudinal direction.
The performed heat treatments were in agreement with the QuesTek recommendations [27] and consisted of a preheating treatment at 315 °C/1 h, a solutionizing at 1060 °C/1 h, followed by an oil quench, cold treatment at -76 °C/2 h and tempering at 516 °C/10 h.
Experimental Techniques
Austenite grain size was measured after the quench. Precipitation in the quenched state, after cryogenic treatment, was observed to identify undissolved carbides. Secondary carbides were characterized after tempering, at the end of the whole heat treatment process.
Chemical composition of the alloy was measured with a Q4 Tasman Spark Optical Emission Spectrometer from Bruker.
Dilatometry was performed using a Netzsch apparatus, DIL402C. Samples for dilatometry were in the form of a cylinder of diameter 3.7 mm with a length of 25 mm. Samples were heated at 7 °C/min and cooled at 5 °C/min under an argon atmosphere.
For the as-quenched state, carbides were extracted by chemical dissolution of the matrix with a modified Berzelius solution at room temperature [START_REF] Burke | Chemical extraction of refractory inclusions from iron-and nickel-base alloys[END_REF] as already developed by Cabrol et al. [START_REF] Cabrol | Experimental investigation and thermodynamic modeling of molybdenum and vanadium-containing carbide hardened iron-based alloys[END_REF]. At the end of the dissolution, the solution was centrifuged to collect nanoscale precipitates. A Beckman Coulter Avanti J-30I centrifugal machine equipped with a JA-30.50Ti rotor was used to centrifuge the solution. The experimental method is described precisely in [START_REF] Cabrol | Experimental investigation and thermodynamic modeling of molybdenum and vanadium-containing carbide hardened iron-based alloys[END_REF].
XRD characterizations of the powder obtained after the chemical dissolution and of the bulk sample were performed using a Panalytical X'Pert PRO diffractometer equipped respectively with a Cu or Co radiation source. Phase identification was achieved by comparing the diffraction pattern of the experimental samples with reference JCPDS patterns.
Prior austenite grain size measurement is difficult because of the very low impurity content in the grade M54®. An oxidation etching was conducted by heating polished samples in a furnace at a temperature of 900 °C and 1100 °C under room atmosphere for 1 h and slightly polishing them after quenching to remove the oxide layer inside the grains and keeping the oxide only at the grain boundary.
Transmission Electron Microscopy (TEM) observations were performed using a JEOL JEM 2100F. Thin foils for TEM were cut from the specimens and the thickness was reduced to approximately 150 μm. Then, they were cut into disks and polished to a thickness of about 60 μm. The thin foils were then electropolished in a perchloric acidmethanol solution at -15 °C with a TenuPol device.
Chemical composition at the nanometer scale was determined using atom probe tomography (APT) at the Northwestern University Center for Atom-Probe Tomography (NUCAPT). Samples were prepared into rods with a cross section of 1 × 1 mm 2 and electro-polished using a two-step process at room temperature [START_REF] Krakauer | Systematic procedures for atom-probe field-ion microscopy studies of grain boundary segregation[END_REF][START_REF] Krakauer | A system for systematically preparing atom-probe field-ion-microscope specimens for the study of internal interfaces[END_REF]. The APT analyses were conducted with a LEAP 4000X-Si from Cameca at a base temperature of -220 °C, a pulse energy of 30 pJ, a pulse repetition rate of 250 kHz, and an ion detection rate of 0.3% to 2%. This instrument uses a local electrode and laser pulsing with a picosecond 355 nm wavelength ultraviolet laser, which minimizes specimen fracture [START_REF] Bunton | Advances in pulsed-laser atom probe: instrument and specimen design for optimum performance[END_REF].
For the prediction of the different types and molar fraction of each phase according to temperatures, thermodynamics calculations were performed using ThermoCalc® software. This software and database were developed at the Royal Institute of Technology (KTH) in Stockholm [START_REF] Sundman | The Thermo-Calc databank system[END_REF]. ThermoCalc® calculations were performed using the TCFE3 database.
Results and Discussions
Discussion of Optimized Mechanical Properties With Finely Dispersed Nanometer Size Precipitation
Research activities on UHS steels for aircraft applications focus on maximizing mechanical strength without decreasing the fracture toughness and stress corrosion cracking resistance. To improve strength, dislocation mobility must be reduced. Consequently, increasing the number density of secondary particles (Np) is a well-known method, and the resulting hardening is given by the following equation [START_REF] Sankaran | Metallurgy and Design of Alloys with Hierarchical Microstructures[END_REF]:
Δσ_P ≈ Gb (f/d)^0.5    (1)
where Δσ P is the particle contribution to the yield strength, G is the shear modulus, d is the particle diameter, f the volume fraction of particles and b the Burgers vector of dislocations. Indeed, for the same volume fraction, a distribution of smaller particles leads to a higher yield strength, due to the decrease in dislocation mobility.
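As a quick numerical reading of Eq. (1), the short script below evaluates the predicted particle strengthening relative to a 50 nm reference diameter at a fixed volume fraction; the prefactor Gb cancels out in the ratio. It is only meant to illustrate the size effect implied by the equation as reproduced above, not to give quantitative predictions for M54®.

def particle_strengthening_ratio(d, d_ref, f=0.02):
    # Relative particle strengthening from Eq. (1), Δσ_P ∝ (f/d)^0.5,
    # at the same volume fraction f for two particle diameters.
    return (f / d) ** 0.5 / (f / d_ref) ** 0.5

d_ref = 50e-9  # reference particle diameter: 50 nm
for d in (50e-9, 20e-9, 10e-9, 5e-9):
    ratio = particle_strengthening_ratio(d, d_ref)
    print(f"d = {d*1e9:4.0f} nm -> strengthening relative to 50 nm particles: {ratio:.2f}")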
To obtain this fine and dispersed precipitation, two different types of nucleation are generally observed to occur in UHS steels:
- Numerous preferential nucleation sites leading to heterogeneous nucleation;
- Homogeneous supersaturation of carbide-forming elements.
For the first condition, the heterogeneous nucleation of M 2 C carbides on dislocations has already been observed in previous works [START_REF] Speich | Strength and toughness of Fe-10ni alloys containing C, Cr[END_REF][START_REF] Kuehmann | Thermal Processing Optimization of Nickel-Cobalt Ultrahigh-Strength Steels[END_REF]. Indeed, dislocation sites are energetically favorable due to atom segregation and the short diffusion path offered to the diffusing element (pipe diffusion). It is therefore important to maintain a high dislocation density during tempering. Consequently, cobalt is added to these alloys to keep a high dislocation density during tempering. As previously described in the literature [START_REF] Kantner | Designing Strength, Toughness, and Hydrogen Resistance: Quantum Steel[END_REF][START_REF] Olson | Overview: Science of Steel[END_REF], Co delays the dislocation recovery through the creation of short-range ordering (SRO) in the matrix. Co also decreases the solubility of Mo in ferrite and increases the carbon activity inside ferrite [START_REF] Speich | Strength and toughness of Fe-10ni alloys containing C, Cr[END_REF][START_REF] Rhoads | High strength, high fatigue structural steel[END_REF][START_REF] Honeycombe | Steels microstructure and properties[END_REF][START_REF] Speich | Tempering of steel[END_REF][START_REF] Delagnes | Cementite-free martensitic steels: a new route to develop high strength/high toughness grades by modifying the conventional precipitation sequence during tempering[END_REF], leading to a more intensive precipitation of M 2 C carbides.
The main criterion for achieving the second condition is related to the dissolution of carbides during austenitizing. If carbides are not totally solutionized, the precipitation during tempering will be heterogeneously dispersed, with a higher density of clusters in the areas of high concentration of the carbide-forming elements. To avoid a heterogeneous concentration, remaining carbides from the previous stage of heat treatment should be totally dissolved and enough time should be spent at a temperature above the carbide solvus to obtain a homogeneous composition of the carbide-forming elements in austenite. Moreover, in order to obtain a fine and dispersed precipitation during tempering, the driving force must be increased by increasing the supersaturation, resulting in a higher nucleation rate [START_REF] Kuehmann | Thermal Processing Optimization of Nickel-Cobalt Ultrahigh-Strength Steels[END_REF]. Furthermore, undissolved carbides also reduce the potential volume fraction of particles that may precipitate during tempering [START_REF] Sato | Improving the Toughness of Ultrahigh Strength Steel[END_REF], so that an almost total dissolution is needed. Thus, the austenitizing conditions should be rationalized based on the carbide dissolution kinetics and the diffusion coefficients of the alloying elements in the matrix, to obtain a homogeneous chemical composition of the carbide-forming elements in the martensitic matrix in the as-quenched state.
Identification of Carbide Solutionizing Temperature
The temperatures of phase transformation were determined by dilatometry experiments. According to the relative length change shown in Fig. 3(a), Ac 1 , Ac 3 and M s temperatures are clearly detected. To detect the solutionizing of carbides, the derivative of the relative length change was calculated. Carbide dissolution takes place at a temperature ranging from 970 °C to 1020 °C, as shown in Fig. 3(b).
If the austenitizing temperature is not high enough, undissolved carbides are clearly observed (see Fig. 4) and slightly decrease UTS from 1997 MPa at 1060 °C to 1982 MPa at 1020 °C, which is probably due to the carbon trapped inside those undissolved particles.
These coarse carbides can also be observed after polishing and a Nital 2% etch using SEM (Fig. 5). The volume fraction seems to be particularly high.
According to ThermoCalc® calculations, these undissolved carbides obtained after 1 h at 1020 °C are M 6 C carbides (see Fig. 2) containing a significant amount of W (see Table 2).
The high solutionizing temperature of the M54® steel as compared to other steels of the same family (free of W, see Table 3) is due to the tungsten addition which stabilizes the M 6 C carbides. If the austenitizing temperature is not high enough, undissolved carbides still remain (see Fig. 4 and Fig. 5) and the tensile properties (yield strength, UTS, elongation at rupture), as well as fatigue resistance are reduced. However, if the austenitizing temperature is too high and no carbides remain, a huge grain size coarsening can be observed also leading to a decrease in the usual mechanical properties.
According to Naylor and Blondeau [START_REF] Naylor | The respective roles of the packet size and the lath width on toughness[END_REF], thinner laths and lath packets, directly dependent on austenite grain size [START_REF] Sankaran | Metallurgy and Design of Alloys with Hierarchical Microstructures[END_REF], can improve fracture toughness by giving a long and winding route to the crack during rupture. Białobrzeska et al. [START_REF] Białobrzeska | The influence of austenite grain size on the mechanical properties of low-alloy steel with boron[END_REF] have clearly shown that at room temperature, strength, yield strength, fatigue resistance and impact energy increase when the average austenite grain size decreases. Thus, any coarsening of austenite grains should be avoided.
Pinning of the Grain Boundary and Chemical Homogenization of the Austenitic Matrix at 1060 °C
As previously mentioned in the introduction, to control the grain size during austenitizing without any impact on precipitation during tempering, the precipitation of two types of particles is needed: one type to control the grain size during solutionizing and the second type of particles which precipitates during tempering.
To achieve this goal, one way is to add MC type precipitation to avoid quick coarsening of austenitic grains. However, according to ThermoCalc® calculations, the MC solvus temperature is not sufficiently high to allow the total dissolution of M 6 C carbides (see Fig. 2). Thus, Olson [START_REF] Olson | Overview: Science of Steel[END_REF] and Gore [START_REF] Gore | Grain-refining dispersions and properties in ultrahigh-strength steels[END_REF] added some Ti to form more stable MC carbides and dissolve other coarse stable carbides. A little addition of Titanium is sufficient to obtain a significant effect on the grain size, as described by Kantner who adds 0.04%mass [START_REF] Kantner | Designing Strength, Toughness, and Hydrogen Resistance: Quantum Steel[END_REF] of titanium in Fe- 15Co-6Ni-3Cr-1.7Mo-2 W-0.25C and Fe-15Co-5Ni-3Cr-2.7Re-1.2 W-0.18C steels, or Lippard who adds only 0.01%mass [START_REF] Lippard | Microanalytical Investigations of Transformation[END_REF] in alloys AF1410, AerMet® 100, MTL2 and MTL3. A low volume fraction of thin particles seems to be efficient in preventing austenitic grain growth [START_REF] Gore | Grain-refining dispersions and properties in ultrahigh-strength steels[END_REF]. Indeed, an addition of 0.01%mass of Ti in the M54® grade is enough to shift the MC solvus temperature by approximately 100 °C above the MC solvus temperature of the M54® grade free of Titanium according to ThermoCalc® calculation (see Fig. 6).
Moreover, MC carbides contain a large amount of Ti (see Fig. 7) which is not the case for M 2 C precipitation during tempering. Consequently, Ti-rich MC carbides seem relevant, to be a solution to control the grain size without any impact on precipitation during tempering. The purpose of the following paragraph is to compare the experimental results with the above-mentioned theoretical prediction.
After austenitizing for 1 h at 1060 °C, fine undissolved carbides were found in the as-quenched state (after cryogenic treatment) in M54® steel. These carbides are thinner than the undissolved carbides observed at lower austenitizing temperatures. In addition, a lower volume fraction is measured after an austenitization at 1060 °C than after a 1020 °C or 920 °C austenitization (see Fig. 8). The average size of these carbides is around 70 nm, measured on a sample of 23 carbides. In addition, no coarse undissolved carbides are observed, indicating that the optimal austenitization conditions are close to being reached.
Chemical extraction of carbides in the as-quenched state was performed to determine the type of the undissolved carbides still remaining after a 1060 °C austenitizing. As predicted by the ThermoCalc® calculations, an FCC structure (MC type) was clearly identified from the XRD patterns (see Fig. 9). Moreover, the chemical composition measured by EDX (Energy Dispersive X-ray spectroscopy) is (Ti 0.44 Mo 0.27 W 0.13 V 0.16 )C. This composition is in quite good agreement with the ThermoCalc® calculated composition (Ti 0.55 V 0.25 Mo 0.17 W 0.08 )C 0.95 .
According to Spark Optical Emission Spectrometer measurements, the average Ti concentration measured is about 0.013 wt% in M54® steel. Considering that all the Ti atoms precipitate and taking into account the chemical composition of the MC measured by EDX, the volume fraction of Ti-rich MC carbide is found to be nearly 0.06%.
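The order of magnitude of this volume fraction can be checked with a short calculation, sketched below. It assumes that all the Ti is tied up in MC carbides with the EDX stoichiometry given above; the densities of the steel and of the mixed MC carbide are rough assumed values, so the result should only be read as an order-of-magnitude estimate.

# Rough estimate of the Ti-rich MC carbide volume fraction from the bulk Ti content.
M = {"Ti": 47.87, "Mo": 95.95, "W": 183.84, "V": 50.94, "C": 12.01}   # g/mol
x = {"Ti": 0.44, "Mo": 0.27, "W": 0.13, "V": 0.16}                     # EDX metal fractions
rho_steel, rho_carbide = 7.8, 6.5      # g/cm^3, assumed order-of-magnitude densities

wt_Ti = 0.013 / 100                                  # bulk Ti mass fraction
m_MC = sum(x[e] * M[e] for e in x) + M["C"]          # mass of one MC formula unit
wt_carbide = wt_Ti * m_MC / (x["Ti"] * M["Ti"])      # carbide mass fraction
f_v = wt_carbide * rho_steel / rho_carbide           # carbide volume fraction
print(f"estimated MC volume fraction ~ {f_v*100:.3f} %")   # ~0.06-0.07 %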
The intercarbide distance can be estimated using the equation given by Daigne et al. [START_REF] Daigne | The influence of lath boundaries and carbide distribution on the yield strength of 0.4% C tempered martensitic steels[END_REF]:
d = 1.18 × r particle × (2π / (3 f v))^(1/2)    (2)
where d is the distance between particles, r is the radius of the particle and f v the volume fraction of particles. According to Eq. ( 2), the distance between the MC carbides with an addition of 0.013 wt% of Ti is about a micrometer. This value is in very good agreement with SEM observations (see Fig. 8) indicating that most of the titanium carbides remain undissolved after the austenitization at 1060 °C.
Furthermore, a relation has been developed to describe grain refinement by a particle dispersion in tool steels. Bate [START_REF] Bate | The effect of deformation on grain growth in Zener pinned systems[END_REF] suggested the following equation between the limiting grain size diameter D, the mean radius r, and the volume fraction F v of the pinning particles:
D = 4r / (3 F v)    (3)
The calculated limiting grain size diameter in M54® is 78 μm according to Bate's Eq. (3).
This value is in very good agreement with the measured average grain sizes of 81 ± 39 μm at 900 °C and 79 ± 38 μm at 1100 °C (see Fig. 10). Approximately 300 grains were measured for each austenitizing temperature. According to Bate's work, the estimated 0.06% volume fraction of undissolved MC carbides is thus sufficient to control the grain size of austenite.
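For reference, the limiting grain size from Eq. (3) can be reproduced directly from the values reported above (mean carbide radius of 35 nm, i.e. a 70 nm average diameter, and a 0.06% pinned-particle volume fraction), as in the minimal sketch below.

# Zener-type pinning limit from Bate's relation, Eq. (3): D = 4 r / (3 F_v)
r = 70e-9 / 2        # mean MC carbide radius, m
F_v = 0.06 / 100     # pinned-particle volume fraction

D = 4 * r / (3 * F_v)
print(f"limiting austenite grain diameter ~ {D*1e6:.0f} um")   # ~78 um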
Consequently, the MC particles consume only a very small quantity of the carbide-forming elements required for M 2 C precipitation during tempering. In addition, the calculated diffusion lengths of the different carbide-forming elements Mo, Cr and W are significantly larger than the respective distances between first neighbors of Mo, Cr and W atoms in the austenitic matrix at the end of austenitization (1060 °C/1 h) (see Table 4). As a consequence, a homogeneous composition of the austenite is quickly obtained before quenching.
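The diffusion distances of Table 4 can be recovered from the Arrhenius parameters given there, as in the sketch below. The exact definition of the diffusion length used by the authors is not stated; the conventional estimate L ≈ 2·sqrt(D·t) is assumed here and reproduces the reported ~2-4 µm values.

import math

R = 8.314            # J/(mol K)
T = 1060 + 273.15    # K
t = 3600.0           # s (1 h austenitization)

arrhenius = {        # D0 in cm^2/s, activation energy Q in J/mol, from Table 4
    "Mo": (0.036, 239_800.0),
    "Cr": (0.063, 252_300.0),
    "W":  (0.13, 267_400.0),
}

for element, (D0, Q) in arrhenius.items():
    D = D0 * math.exp(-Q / (R * T))        # diffusivity in austenite, cm^2/s
    L = 2.0 * math.sqrt(D * t)             # diffusion distance, cm
    print(f"{element}: D = {D:.2e} cm^2/s, diffusion distance ~ {L*1e4:.1f} um")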
By way of conclusion, a small amount of Ti-rich MC carbides controls the austenitic grain size and, above all, allows the complete dissolution of the molybdenum-rich M 6 C carbides, which leads to a homogeneous distribution of the M 2 C carbide-forming elements before quenching.
Precipitation During Tempering
The particles that precipitate during tempering are totally different from the carbides controlling the austenitic grain size. According to XRD results, M 2 C-type carbides are identified after a tempering of 500 h at 516 °C (see Fig. 11). This long tempering duration is necessary to detect the diffraction peaks of M 2 C carbides. For the standard tempering of 10 h, the volume fraction and the size of the carbides might be too low to be detected by XRD, or the long-range ordering of M 2 C carbides (hexagonal structure) might not be achieved, as already suggested by Machmeier et al. [START_REF] Ayer | On the characteristics of M2C carbides in the peak hardening regime of AerMet 100 steel[END_REF].
Consequently, the same carbide type is identified in the M54®, AerMet® 100 and AF1410 steels [START_REF] Ayer | Transmission electron microscopy examination of hardening and toughening phenomena in Aermet 100[END_REF][START_REF] Ayer | Microstructural basis for the effect of chromium on the strength and toughness of AF1410-based high performance steels[END_REF]. Atom probe analyses were performed to determine the distribution of M 2 C carbides within the martensitic matrix and to estimate their chemical composition. To define the particle/matrix interface in the analyzed box, the adopted criterion is an isoconcentration of 36 at% of molybdenum + carbon. Carbides seem to be homogeneously distributed within the matrix according to the (limited) volume analyzed by APT (see Fig. 12). According to TEM observations, the precipitation of M 2 C carbides during tempering is very fine, with an average size of 9.6 × 1.2 nm measured on 130 carbides (see Fig. 13), and seems to be homogeneously distributed within the matrix, as already shown by APT. The shape of the M 2 C particles is very elongated, with an aspect ratio near 10. The main conclusion can be summarized as follows: the 1060 °C austenitizing temperature contributes to a fine and dispersed precipitation of M 2 C carbides after tempering, thanks to a high supersaturation as well as a homogeneous distribution of carbide-forming elements.
The average chemical composition of the M 2 C carbides measured by atom probe is Mo-rich with a significant content of Cr, W and V (see Fig. 14).
The chemical composition of M 2 C measured by atom probe is in quite good agreement with the ThermoCalc® calculations (see Fig. 15). The M 2 C carbides contain mainly Mo and Cr with approx. 10% W and a small amount of Fe and V, as shown in Fig. 15 and Table 5.
However, the chemical composition of the carbides in M54® is quite different from the composition measured in AerMet® 100 and AF1410 steels (see Table 5). Indeed, the main difference comes from the W content in the M 2 C carbides of the M54® steel. W has a slower diffusivity than the other carbide-forming elements and stabilizes the M 2 C carbides during long tempering treatments [START_REF] Lee | Stability and coarsening resistance of M2C carbides in secondary hardening steels[END_REF], which guarantees the mechanical properties over a wide range of tempering conditions. Moreover, very few cementite precipitates are observed in the M54® steel. This fact also contributes to the high fracture toughness value measured after tempering. Indeed, cementite is well known to strongly reduce the fracture toughness of high strength steels [START_REF] Speich | Strength and toughness of Fe-10ni alloys containing C, Cr[END_REF], particularly if the iron carbide is located at interlath sites. The W in M 2 C carbides allows a long tempering treatment, resulting in the total dissolution of cementite without coarsening of the M 2 C carbides.
Conclusion
Ferrium® M54® steel was developed by QuesTek using intensive thermodynamic calculations [START_REF] Olson | Materials genomics: from CALPHAD to flight[END_REF]. An excellent strength/fracture toughness balance is achieved, with a UTS reaching 1965 MPa and K 1C values up to 110 MPa√m. The main goal of this work is to provide experimental evidence and arguments explaining this outstanding UTS/K 1C balance of properties; the work is focused on the identification of the precipitation occurring during the heat treatment, through a multi-scale microstructural study using advanced experimental tools (XRD, TEM, APT). To this end, the optimization of austenitizing conditions is of primary importance, in conjunction with the solutionizing of the alloying elements needed for precipitation during tempering. The main results can be summarized as follows:
▪ Microstructure in the as-quenched state (after cryogenic treatment) can be defined as Ti-rich MC carbide precipitation, with sizes from 50 nm to 120 nm, in a martensitic matrix which is highly supersaturated in carbide-forming elements. In addition, those elements are homogeneously distributed within the matrix, according to diffusion-length calculations.
▪ The addition of a small amount of titanium has led to the full dissolution of the Mo- and W-rich carbides. The types of precipitates which control the grain size during austenitization and which strengthen the steel during tempering are then totally different.
▪ This final microstructure is obtained thanks to the proper solutionizing of alloying elements during austenitizing at high temperature (1060 °C), which results in:
o A high supersaturation before tempering.
o A homogeneously distributed nucleation of carbides.
▪ Microstructure in the tempered state (516 °C/10 h) is characterized by a homogeneously distributed precipitation of nanometer-sized M 2 C carbides. These carbides contain W, which reduces their coarsening rate.
Table 5: Comparison of the carbide composition of different UHS steels hardened by M 2 C carbide precipitation, according to ThermoCalc® calculations and experimental measurements.
Fig. 1. Comparison of different grades of steel according to their fracture toughness, ultimate tensile strength and stress corrosion cracking resistance (adapted from [3]).
Fig. 3. Relative length change curve (a) and derivative of the relative length change curve (b) obtained from dilatometer heating experiments.
Fig. 4. SEM image of a fracture surface of a tensile specimen (austenitization performed at 1020 °C).
Fig. 5. SEM image of an as-quenched sample austenitized at 920 °C after nital etch.
Fig. 6. Mole fraction of phases according to austenitizing temperature in M54® with and without 0.01%mass Ti, calculated with the TCFE3 ThermoCalc® database.
Fig. 7. Composition of the MC carbide according to temperature, calculated with the TCFE3 ThermoCalc® database.
Fig. 8. SEM observations of undissolved carbides after 1060 °C austenitizing and Nital etch (as-quenched structure).
Fig. 9. Reference pattern and experimental XRD profiles (relative intensities) of precipitates extracted from the as-quenched M54® steel.
Fig. 10. Prior austenitic grain size in the as-quenched state after 1 h austenitizing at 900 °C (a) and 1100 °C (b).
Fig. 11. Reference JCPDS pattern and experimental XRD profiles (relative intensities) of samples tempered at 516 °C for 10 h and 500 h.
Fig. 12. Three-dimensional APT reconstruction of Ni atoms (green) of a sample tempered at 516 °C for 10 h. Carbides are represented as violet isoconcentration surfaces (total concentration of Mo and C is 36 at. pct). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 14. Proximity histogram of the 98 precipitate/matrix interfaces.
Fig. 15. Chemical composition of M 2 C in M54® calculated with the TCFE3 ThermoCalc® database.
Table 1: Chemical composition (wt%) of UHS Co-Ni steels.
C Cr Ni Co Mo W V Ti Mn Si
M54® 0.3 1 10 7 2 1.3 0.1 0.02max / /
Aermet® 100 0.23 3.1 11.1 13.4 1.2 / / 0.05max / /
AF1410 0.15 2 10 14 1 / / 0.015 0.1 0.1
HP9-4-20 0.2 0.8 9 4 1 / 0.08 / 0.2 0.2
HY-180 0.13 2 10 8 1 / / / 0.1 0.05
S53® 0.21 10 5.5 14 2 1 0.3 0.2max / /
Fig. 2. Mole fraction of phase according to austenitizing temperature in M54® calculated with TCFE3 ThermoCalc® database (Ti-free).
Table 2: Composition of M 6 C carbides predicted by ThermoCalc® calculations.
Carbide M 6 C
Composition (860 °C) (Fe 2.8 Mo 2.05 W 0.96 Cr 0.12 V 0.07 )C
Table 3: Austenitization temperature of different UHS steels hardened by M 2 C carbide precipitation.
Steel M54® AerMet® 100 AF1410
T aust (°C) 1060 885 843
Table 4: Diffusivity in γ-iron and diffusion distance during solutionizing of the carbide-forming elements.
Element: Mo / Cr / W
Diffusivity in γ-iron (D, cm 2 /s) [46]: 0.036 exp(-239.8/RT) / 0.063 exp(-252.3/RT) / 0.13 exp(-267.4/RT) (activation energies in kJ/mol)
Diffusion distance during austenitization (1 h at 1060 °C) (μm): ~4 / ~3 / ~2
Acknowledgements Atom-probe tomography was performed at the Northwestern University Center for Atom-Probe Tomography (NUCAPT). The LEAP tomograph at NUCAPT was purchased and upgraded with grants from the NSF-MRI (DMR-0420532) and ONR-DURIP (N00014-0400798, N00014-0610539, N00014-0910781, N00014-1712870) programs. NUCAPT received support from the MRSEC program (NSF DMR-1121262) at the Materials Research Center, the SHyNE Resource (NSF ECCS-1542205), and the Initiative for Sustainability and Energy (ISEN) at Northwestern University. Special thanks to Dr. Dieter Isheim for his analyses and invaluable help.
Assistance provided by QuesTek Innovations LLC through Chris Kern and Ricardo K. Komai.
Data availability
The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study. | 36,605 | [
"19253"
] | [
"110103",
"469296",
"469296",
"110103"
] |
01762573 | en | [
"info",
"scco"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01762573v2/file/chapterBCISigProcHumans.pdf | Fabien Lotte
Camille Jeunet
Jelena Mladenovic
Bernard N'kaoua
Léa Pillette
A BCI challenge for the signal processing community: considering the user in the loop
Introduction
ElectroEncephaloGraphy (EEG)-based Brain-Computer Interfaces (BCIs) have proven promising for a wide range of applications, from communication and control for severely motor-impaired users, to gaming targeted at the general public, real-time mental state monitoring and stroke rehabilitation, to name a few [START_REF] Clerc | Brain-Computer Interfaces 2: Technology and Applications[END_REF][START_REF] Lotte | Electroencephalography Brain-Computer Interfaces[END_REF]. Despite this promising potential, BCIs are still scarcely used outside laboratories for practical applications. The main reason preventing EEG-based BCIs from being widely used is arguably their poor usability, which is notably due to their low robustness and reliability. To operate a BCI, the user has to encode commands in his/her EEG signals, typically using mental imagery tasks, such as imagining hand movement or mental calculations. The execution of these tasks leads to specific EEG patterns, which the machine has to decode by using signal processing and machine learning. So far, to address the reliability issue of BCI, most research efforts have been focused on command decoding only. This present book contains numerous examples of advanced machine learning and signal processing techniques to robustly decode EEG signals, despite their low spatial resolution, their noisy and non-stationary nature. Such algorithms contributed a lot to make BCI systems more efficient and effective, and thus more usable.
However, if users are unable to encode commands in their EEG patterns, no signal processing or machine learning algorithm would be able to decode them. Therefore, we argue in this chapter that BCI design is not only a decoding challenge (i.e., translating EEG signals into control commands), but also a human-computer interaction challenge, which aims at ensuring the user can control the BCI. Indeed, BCI control has been shown to be a skill that needs to be learned and mastered [START_REF] Neuper | Neurofeedback Training for BCI Control[END_REF][START_REF] Jeunet | Human Learning for Brain-Computer Interfaces[END_REF]. Recent research results have actually shown that the way BCI users are currently trained is suboptimal, both theoretically [START_REF] Lotte | Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design[END_REF][START_REF] Lotte | Towards Improved BCI based on Human Learning Principles[END_REF] and practically [START_REF] Jeunet | Why Standard Brain-Computer Interface (BCI) Training Protocols Should be Changed: An Experimental Study[END_REF]. Moreover, the user is known to be one of the main causes of EEG signal variability in BCI, due to his/her changes in mood, fatigue, attention, etc. [START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF][START_REF] Shenoy | Towards adaptive classification for BCI[END_REF].
Therefore, there are a number of open challenges to take the user into account during BCI design and training, for which signal processing and machine learning methods could provide solutions. These challenges notably concern 1) the modeling of the user and 2) understanding and improving how and what the user is learning.
More precisely, the BCI community should first work on user modeling, i.e., modeling and updating the user's mental states and skills over time from their EEG signals, behavior, BCI performances, and possibly other sensors. This would enable us to design individualized BCIs, tailored for each user, and thus maximally efficient for each user. The community should also identify new performance metrics -beyond classification accuracy -that could better describe users' skills at BCI control.
Second, the BCI community has to understand how and what the user learns when learning to control the BCI. This includes thoroughly identifying the features to be extracted and the classifier to be used to ensure the user's understanding of the resulting feedback, as well as how to present this feedback. Being able to update machine learning parameters in a specific manner and at a precise moment, so as to favor learning without confusing the user with ever-changing feedback, is another challenge. Finally, it is necessary to gain a clearer understanding of the reasons why mental commands are sometimes correctly decoded and sometimes not, i.e., what makes people sometimes fail at BCI control, in order to be able to guide them to do better.
Altogether, solving these challenges could have a substantial impact in improving BCI efficiency, effectiveness and user-experience, i.e., BCI usability. Therefore, this chapter aims at identifying and describing these various open and important challenges for the BCI community, at the user level, to which experts in machine learning and signal processing could contribute. It is organized as follows: Section 1.2 addresses challenges in BCI user modeling, while Section 1.3 targets the understanding and improvement of BCI user learning. For each section, we identify the corresponding challenges, the possible impact of solving them, and first research directions to do so. Finally the chapter summarizes these open challenges and possible solutions in Section 1.4.
Modeling the User
In order to fully take the user into account in BCI design and training, the ideal solution would be to have a full model of the users, and in particular of the users' traits, e.g., cognitive abilities or personality, and states, e.g., current attention level or BCI skills at that stage of training. Signal processing and machine learning tools and research can contribute to these aspects by developing algorithms to estimate the users' mental states (e.g., workload) from EEG and other physiological signals, by estimating how well users can self-modulate their EEG signals, i.e., their BCI skills, and by dynamically modeling, using machine learning, all these aspects together. We detail these points below.
Estimating and tracking the user's mental states from multimodal sensors
The increase in the number of available low-cost sensors [START_REF] Swan | Sensor mania! the internet of things, wearable computing, objective metrics, and the quantified self 2.0[END_REF] and developments in machine learning enable real-time assessment of some cognitive, affective and motivational processes influencing learning, such as attention for instance. Numerous types of applications already take advantage of these pieces of information, such as health [START_REF] Jovanov | A wireless body area network intelligent motion sensors for computer assisted physical rehabilitation[END_REF], sport [START_REF] Baca | Rapid feedback systems for elite sports training[END_REF] or intelligent tutoring systems [START_REF] Woolf | Affective tutors: Automatic detection of and response to student emotion[END_REF]. Such states could thus be relevant to improve BCI learning as well. Among the cognitive states influencing learning, attention deserves particular care since it is necessary for memorization to occur [START_REF] Fisk | Memory as a function of attention, level of processing, and automatization[END_REF]. It is a key factor in several models of instructional design, e.g., in the ARCS model where A stands for Attention [START_REF] Keller | The Arcs model of motivational design[END_REF]. Attention levels can be estimated in several ways. Based on the resource theory of Wickens, task performance is linked to the amount of attentional resources needed [START_REF] Wickens | Multiple resources and performance prediction[END_REF]. Therefore, performance can provide a first estimation of the level of attentional resources the user dedicates to the task. However, this metric also reflects several other mental processes, and should thus be considered with care. Moreover, attention is a broad term that encompasses several types of concepts [START_REF] Posner | Components of attention[END_REF][START_REF] Cohen | The neuropsychology of attention[END_REF]. For example, focused attention refers to the amount of information that can be processed at a given time, whereas vigilance refers to the ability to pay attention to the appearance of an infrequent stimulus over a long period of time. Each type of attention can be monitored in specific ways; for example, vigilance can be detected using blood flow velocity measured by transcranial Doppler sonography (TCD) [START_REF] Shaw | Effects of sensory modality on cerebral blood flow velocity during vigilance[END_REF]. Focused visual attention, which refers to the selection of visual information to process, can be assessed by measuring eye movements [START_REF] Glaholt | Eye tracking in the cockpit: a review of the relationships between eye movements and the aviators cognitive state[END_REF]. While physiological sensors provide information about the physiological reactions associated with processes taking place in the central nervous system, neuroimaging has the advantage of recording information directly from the source [START_REF] Frey | Review of the use of electroencephalography as an evaluation method for human-computer interaction[END_REF]. EEG recordings enable the discrimination of some types of attention, with various levels of reliability depending on the method used.
For instance, the alpha band (7.5 to 12.5 Hz) can be used for the discrimination of several levels of attention [START_REF] Klimesch | Induced alpha band power changes in the human EEG and attention[END_REF], while the amplitude of event related potentials (ERPs) is modulated by visual selective attention [START_REF] Saavedra | Processing stages of visual stimuli and event-related potentials[END_REF]. While specific experiments need to be carried out to specify the exact nature of the type(s) of attention involved in BCI training, a relationship between gamma power (30 to 70 Hz) in attentional networks and mu rhythm-based BCI performance has already been shown by Grosse-Wentrup et al. [START_REF] Grosse-Wentrup | Causal influence of gamma oscillations on the sensorimotor rhythm[END_REF][START_REF] Grosse-Wentrup | High gamma-power predicts performance in sensorimotor-rhythm braincomputer interfaces[END_REF]. Such a linear correlation suggests the involvement of focused attention and working memory [START_REF] Grosse-Wentrup | High gamma-power predicts performance in sensorimotor-rhythm braincomputer interfaces[END_REF] in BCI learning.
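As an illustration of how such band-power markers can be extracted in practice, the snippet below computes alpha- and gamma-band power from a single EEG channel using Welch's method. It is a minimal sketch on simulated data; electrode choice, artifact handling and the exact frequency bands would of course have to follow the studies cited above.

import numpy as np
from scipy.signal import welch

fs = 250                                   # sampling rate (Hz), illustrative
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 60)         # 60 s of simulated single-channel EEG

def band_power(signal, fs, fmin, fmax):
    # Average power spectral density in [fmin, fmax] Hz (Welch estimate).
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

alpha = band_power(eeg, fs, 7.5, 12.5)     # alpha band, as in the study cited above
gamma = band_power(eeg, fs, 30, 70)        # gamma band, as in Grosse-Wentrup et al.
print(f"alpha power: {alpha:.3e}, gamma power: {gamma:.3e}")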
The working memory (WM) load, or workload, is another cognitive factor influencing learning [START_REF] Baddeley | Working memory. Psychology of learning and motivation[END_REF][START_REF] Mayer | Multimedia learning (2nd)[END_REF]. It is related to the difficulty of the task and to the quantity of information given to the user, and depends on the user's available resources.
An optimal amount of load is reached when the user is challenged enough not to get bored, but not too much compared with his or her abilities [START_REF] Gerjets | Cognitive state monitoring and the design of adaptive instruction in digital environments: lessons learned from cognitive workload assessment using a passive braincomputer interface approach[END_REF]. Behavioral measures of workload include accuracy and response time, while physiological measures comprise eye movements [START_REF] Sr | Analytical techniques of pilot scanning behavior and their application[END_REF], eye blinks [START_REF] Ahlstrom | Using eye movement activity as a correlate of cognitive workload[END_REF], pupil dilation [START_REF] De Greef | Eye movement as indicators of mental workload to trigger adaptive automation[END_REF] or galvanic skin response [START_REF] Verwey | Detecting short periods of elevated workload: A comparison of nine workload assessment techniques[END_REF]. However, like most behavioral measures, these measures change due to WM load, but not only due to it, making them unreliable for measuring WM load alone. EEG is a more reliable measure of workload [START_REF] Wobrock | Continuous Mental Effort Evaluation during 3D Object Manipulation Tasks based on Brain and Physiological Signals[END_REF]. Gevins et al. [START_REF] Gevins | Monitoring working memory load during computer-based tasks with EEG pattern recognition methods[END_REF] showed that WM load could be monitored using the theta (4 to 7 Hz), alpha (8 to 12 Hz) and beta (13 to 30 Hz) bands from EEG data. Low workload could be discriminated from high workload in 27 s long epochs of EEG with a 98% accuracy using Joseph-Viglione's neural network algorithm [START_REF] Joseph | Contributions to perceptron theory[END_REF][START_REF] Viglione | Applications of pattern recognition technology[END_REF]. Interestingly, they also obtained significant classification accuracies when training their network using data from another day (i.e., 95%), another person (i.e., 83%) and another task (i.e., 94%) than the data used for classification. Several experiments have since reported online (i.e., real-time) classification rates ranging from 70% to 99% to distinguish between two types of workload [START_REF] Blankertz | The Berlin braincomputer interface: non-medical uses of BCI technology[END_REF][START_REF] Grimes | Feasibility and pragmatics of classifying working memory load with an electroencephalograph[END_REF]. Results depend greatly on the length of the signal epoch used: the longer the epoch, the better the performance [START_REF] Grimes | Feasibility and pragmatics of classifying working memory load with an electroencephalograph[END_REF][START_REF] Mühl | EEG-based Workload Estimation Across Affective Contexts[END_REF]. Monitoring working memory in BCI applications is all the more important because BCI illiteracy is associated with high theta waves [START_REF] Ahn | High theta and low alpha powers may be indicative of BCI-illiteracy in motor imagery[END_REF], which are an indicator of cognitive overload [START_REF] Yamamoto | Topographic EEG study of visual display terminal (VDT) performance with special reference to frontal midline theta waves[END_REF]. Finally, another brain imaging modality can be used to estimate mental workload: functional Near Infrared Spectroscopy (fNIRS).
Indeed, it was shown that hemodynamic activity in the prefrontal cortex, as measured using fNIRS, could be used to discriminate various workload levels [START_REF] Herff | Mental workload during N-back taskquantified in the prefrontal cortex using fNIRS[END_REF][START_REF] Peck | Using fNIRS to measure mental workload in the real world[END_REF][START_REF] Durantin | Using near infrared spectroscopy and heart rate variability to detect mental overload[END_REF].
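To make the typical processing pipeline behind such EEG-based workload monitoring concrete, the sketch below extracts theta, alpha and beta band-power features from EEG epochs and trains a linear classifier to separate low from high workload. It is a minimal illustration rather than the pipeline of any of the studies cited above; the sampling rate, epoch segmentation and classifier choice are assumptions made for the example.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

BANDS = {"theta": (4, 7), "alpha": (8, 12), "beta": (13, 30)}  # Hz

def band_power_features(epochs, fs=250):
    # epochs: array of shape (n_epochs, n_channels, n_samples)
    # Returns one log band-power feature per channel and per frequency band.
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for low, high in BANDS.values():
        mask = (freqs >= low) & (freqs <= high)
        feats.append(np.log(psd[..., mask].mean(axis=-1)))
    return np.concatenate(feats, axis=1)  # (n_epochs, n_channels * n_bands)

# Hypothetical usage, with X_low / X_high being epochs recorded under low / high workload:
# X = np.vstack([band_power_features(X_low), band_power_features(X_high)])
# y = np.hstack([np.zeros(len(X_low)), np.ones(len(X_high))])
# clf = LinearDiscriminantAnalysis().fit(X, y)
```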
Learner state assessment has mostly focused on cognitive components, such as the ones presented above, because learning has often been considered as information processing. However, affects also play a central role in learning [START_REF] Philippot | Emotion and memory[END_REF]. For example, Isen [START_REF] Isen | Positive Affect and Decision Making, Handbook of emotions[END_REF] has shown that positive affective states facilitate problem solving. Emotions are often inferred using contextual data, performances and models describing the succession of affective states the learner goes through while learning; the model of Kort et al. [START_REF] Kort | An affective model of interplay between emotions and learning: Reengineering educational pedagogy-building a learning companion[END_REF] is an example of such a model. Physiological signals can also be used, such as the electromyogram, electrocardiogram, skin conductance and blood volume pressure [START_REF] Picard | Affective wearables[END_REF][START_REF] Picard | Toward computers that recognize and respond to user emotion[END_REF]. Arroyo et al. [START_REF] Arroyo | Emotion Sensors Go To School[END_REF] developed a system composed of four different types of physiological sensors. Their results show that the facial recognition system was the most efficient and could predict more than 60% of the variance of the four emotional states. Several classification methods have been applied to EEG data to infer the emotional state of the subject. Methods such as the multilayer perceptron [START_REF] Lin | Multilayer perceptron for EEG signal classification during listening to emotional music[END_REF], K Nearest Neighbor (KNN), Linear Discriminant Analysis (LDA), Fuzzy K-Means (FKM) or Fuzzy C-Means (FCM) have been explored [START_REF] Murugappan | Timefrequency analysis of EEG signals for human emotion detection[END_REF][START_REF] Murugappan | Classification of human emotion from EEG using discrete wavelet transform[END_REF], using alpha, beta and gamma frequency band power as input. Results are promising, with accuracies around 75% for two to five types of emotions. Note, however, that the use of gamma band power features probably means that the classifiers were also using EMG activity due to different facial expressions. For emotion monitoring as well, fNIRS can prove useful. For instance, in [START_REF] Heger | Continuous affective states recognition using functional near infrared spectroscopy[END_REF], fNIRS was shown to be able to distinguish two classes of affective stimuli with different valence levels, with average classification accuracies around 65%. Recognizing emotion remains a challenge because most studies rely on the assumptions that people are accurate in recognizing their own emotional state and that the emotional cues used have a similar, intended effect on all subjects. Moreover, many brain structures involved in emotion lie deep in the brain, e.g., the amygdala, and as such the activity from these areas is often very weak or even invisible in EEG and fNIRS.
Motivation is interrelated with emotions [START_REF] Harter | A new self-report scale of intrinsic versus extrinsic orientation in the classroom: Motivational and informational components[END_REF][START_REF] Stipek | Motivation to learn: From theory to practice[END_REF]. It is often approximated using performance [START_REF] Blankertz | The Berlin braincomputer interface: non-medical uses of BCI technology[END_REF]. Several EEG characteristics are modulated by the level of motivation. For example, this is the case for the delta rhythm (0.5 to 4 Hz), which could originate from the brain's reward system [START_REF] Knyazev | EEG delta oscillations as a correlate of basic homeostatic and motivational processes[END_REF]. Motivation is also known to modulate the amplitude of the P300 event-related potential (ERP) and therefore to increase performance with ERP-based BCIs [START_REF] Kleih | Motivation modulates the P300 amplitude during braincomputer interface use[END_REF]. Both motivation and emotions play a major role in biofeedback learning [START_REF] Miller | Some directions for clinical and experimental research on biofeedback. Clinical biofeedback: Efficacy and mechanisms[END_REF][START_REF] Yates | Biofeedback and the modification of behavior[END_REF][START_REF] Kübler | Braincomputer communication: Unlocking the locked in[END_REF][START_REF] Kübler | Brain-computer communication: self-regulation of slow cortical potentials for verbal communication[END_REF][START_REF] Hernandez | Low motivational incongruence predicts successful EEG resting-state neurofeedback performance in healthy adults[END_REF] and in BCI performance [START_REF] Hammer | Psychological predictors of SMR-BCI performance[END_REF][START_REF] Neumann | Predictors of successful self control during braincomputer communication[END_REF].
Cognitive, affective and motivational states have a great impact on learning outcome, and machine learning plays a key role in monitoring them. Challenges nonetheless remain to be overcome, such as detecting and removing artifacts in real time. For example, facial expressions often accompany changes in mental states and may create artifacts that pollute EEG data, and whose real-time removal remains an open issue. Limitations also arise from the number of different states we are able to differentiate, since the quantity of data needed to train the classifier increases with the number of classes to differentiate. Future studies should also focus on the reliability and stability of the classification within and across individuals [START_REF] Christensen | The effects of day-today variability of physiological data on operator functional state classification[END_REF]. Indeed, classification accuracy, particularly online, still needs to be improved. Furthermore, calibration of classifiers is often needed for each new subject or session, which is time consuming and might impede the use of such technology on a larger scale. Finally, while several emotional states can be recognized from a user's behavior, there is usually very limited overt behavior, e.g., movements or speech, during BCI use. Thus, future studies should try to differentiate more diverse emotional states, e.g., frustration, directly from EEG and physiological data.
Quantifying users' skills
As mentioned above, part of the user modeling consists in measuring the users' skills at BCI control. Performance measurement in BCI is an active research topic, and various metrics have been proposed [START_REF] Thompson | Performance measurement for brain-computer or brain-machine interfaces: a tutorial[END_REF][START_REF] Hill | A general method for assessing braincomputer interface performance and its limitations[END_REF]. However, so far, the performance considered and measured was that of the whole BCI system. Such performance metrics therefore reflect the combined performances of the signal processing pipeline, the sensors, the user, the BCI interface and application, etc. The standard performance metrics used cannot quantify specifically and uniquely the BCI users' skills, i.e., how well the user can self-modulate their brain activity to control the BCI. This would be necessary to estimate how well the user is doing and what their strengths and weaknesses are, in order to provide optimal instructions, feedback, application interface and training exercises.
We recently proposed some new metrics to go in that direction, i.e., to estimate specifically users' skills at BCI control, independently of a given classifier [START_REF] Lotte | Online classification accuracy is a poor metric to study mental imagery-based BCI user learning: an experimental demonstration and new metrics[END_REF]. In particular, we proposed to quantify the users' skills at BCI control by estimating the distinctiveness of their EEG patterns between commands, and the stability of these patterns. We notably used Riemannian geometry to quantify how far apart from each other the EEG patterns of each command are, as represented using EEG spatial covariance matrices, and how variable over trials these patterns are. We showed that such metrics could reveal clear user learning effects, i.e., improvements of the metrics over training runs, whereas classical metrics such as online classification accuracy often failed to do so [START_REF] Lotte | Online classification accuracy is a poor metric to study mental imagery-based BCI user learning: an experimental demonstration and new metrics[END_REF].
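To illustrate the kind of classifier-independent metric we have in mind, the sketch below computes a simple distinctiveness-over-variability score from per-trial EEG spatial covariance matrices, using the affine-invariant Riemannian distance. It is only a simplified illustration of the idea (it uses arithmetic rather than geodesic means, for instance) and not the exact metrics defined in the work cited above.

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemannian_distance(A, B):
    # Affine-invariant Riemannian distance between SPD matrices A and B:
    # d(A, B) = sqrt(sum_i log(l_i)^2), with l_i the generalized eigenvalues of (B, A).
    return np.sqrt(np.sum(np.log(eigvalsh(B, A)) ** 2))

def class_distinctiveness(covs_a, covs_b):
    # covs_a, covs_b: arrays of shape (n_trials, n_channels, n_channels) holding the
    # per-trial spatial covariance matrices of two mental commands.
    mean_a, mean_b = covs_a.mean(axis=0), covs_b.mean(axis=0)  # arithmetic means as a simplification
    between = riemannian_distance(mean_a, mean_b)              # how far apart the two commands are
    within = (np.mean([riemannian_distance(mean_a, c) for c in covs_a]) +
              np.mean([riemannian_distance(mean_b, c) for c in covs_b]))  # trial-to-trial variability
    return between / within  # higher = more distinct and more stable EEG patterns
```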
This work thus stressed the need for new and dedicated measures of user skills and learning. The metrics we proposed are, however, only a first attempt at doing so, and more refined and specific metrics are still needed. For instance, our metrics can mostly quantify control over spatial EEG activity (EEG being represented using spatial covariance matrices). We also need metrics to quantify how much control the user has over their spectral EEG activity, as well as over their EEG temporal dynamics. Notably, it would seem useful to be able to quantify how fast, how long and how precisely a user can self-modulate their EEG activity, i.e., produce a specific EEG pattern at a given time and for a given duration. Moreover, such new metrics should be able to estimate successful voluntary self-regulation of EEG signals amidst noise and natural EEG variabilities, and independently of a given EEG classifier. We also need metrics that are specific to a given mental task, to quantify how well the user can master this mental command, but also a single holistic measure summarizing their control abilities over multiple mental tasks (i.e., multiclass metrics), to easily compare users and give them adapted training and BCI systems. The signal processing and machine learning community should thus address all these open and difficult research problems by developing new tools to quantify the multiple aspects of BCI control skills.
Creating a dynamic model of the users' states and skills
A conceptual model of Mental Imagery BCI performance
In order to reach a better understanding of the user-training process, a model of the factors impacting Mental Imagery (MI)-BCI skill acquisition is required. In other words, we need to understand which of the users' traits and states impact BCI performance, how these factors interact and how to influence them through the experimental design or specific cognitive training procedures. We call such a model a Cognitive Model. Busemeyer and Diederich describe cognitive models as models which aim to scientifically explain one or more cognitive processes or how these processes interact [START_REF] Busemeyer | Cognitive modeling[END_REF]. Three main features characterize cognitive models: (1) their goal: they aim at explaining cognitive processes scientifically, (2) their format: they are described in a formal language, (3) their background: they are derived from basic principles of cognition [START_REF] Busemeyer | Cognitive modeling[END_REF]. Cognitive models guarantee the production of logically valid predictions, they allow precise quantitative predictions to be made and they enable generalization [START_REF] Busemeyer | Cognitive modeling[END_REF].
In the context of BCIs, developing a cognitive model is a huge challenge due to the complexity and imperfection of BCI systems. Indeed, BCIs suffer from many limitations, independent of human learning aspects, that could explain users' modest performance. For instance, the sensors are often very sensitive to noise and do not enable the recording of high quality brain signals, while the signal processing algorithms sometimes fail to recognize the encoded mental command. But it is also a huge challenge due to the lack of literature on the topic and to the complexity and cost of the BCI experiments that would be necessary to gather the quantity of experimental data required to implement a complete and precise model [START_REF] Jeunet | Towards a cognitive model of MI-BCI user training[END_REF].
Still, a cognitive model would enable us to reach a better understanding of the MI-BCI user-training process, and consequently to design adapted and adaptive training protocols. Additionally, it would enable BCI scientists to guide neurophysiological analyses by targeting the cognitive and neurophysiological processes involved in the task. Finally, it would make it possible to design classifiers robust to variabilities, i.e., able to adapt to the neurophysiological correlates of the factors included in the model. To summarize, building such a model, by gathering the research done by the whole BCI community, could potentially lead to substantial improvements in MI-BCI reliability and acceptability.
Different steps are required to build a cognitive model [START_REF] Busemeyer | Cognitive modeling[END_REF]. First, it requires a formal description, based on conceptual theories, of the cognitive process(es)/factors to be described. Next, since the conceptual theories are most likely incomplete, ad hoc assumptions should be made to complete the formal description of the targeted factors. Third, the parameters of the model, e.g., the probabilities associated with each factor included in the model, should be determined. Then, the predictions made by the model should be compared to empirical data. Finally, this process should be iterated to constrain and improve the relevance of the model.
By gathering the results of our experimental studies and of a review of the literature, we proposed a first formal description of the factors influencing MI-BCI performance [START_REF] Jeunet | Towards a cognitive model of MI-BCI user training[END_REF]. We grouped these factors into 3 categories [START_REF] Jeunet | Advances in user-training for mental-imagerybased BCI control: Psychological and cognitive factors and their neural correlates[END_REF]. The first category is "task-specific", i.e., it includes factors related to the BCI paradigm considered. Here, as we focused on Mental-Imagery based BCIs, this category gathers factors related to Spatial Abilities (SA), i.e., the ability to produce, transform and manipulate mental images [START_REF] Poltrock | Individual differences in visual imagery and spatial ability[END_REF]. Both the second and third categories include "task-unspecific" factors, or, in other words, factors that could potentially impact performance whatever the paradigm considered. More precisely, the second category includes motivational and cognitive factors, such as attention (state and trait) or engagement. These factors are likely to be modulated by the factors of the third category, which are related to technology acceptance, i.e., to the way users perceive the BCI system. This last category includes different states such as the level of anxiety, self-efficacy, mastery confidence, perceived difficulty or the sense of agency.
The challenge is thus to modulate these factors to optimize the user's states and traits, and thus increase the probability of good BCI performance and/or of efficient learning. In order to modulate these factors, which can be either states (e.g., motivation) or malleable traits (e.g., spatial abilities), one can act on specific effectors: design artefacts or cognitive activities/training.
The effectors we will introduce hereafter are mainly based on theoretical hypotheses. Their impact on the users' states, traits and performance has yet to be quantified. Thus, although these links make sense from a theoretical point of view, they should still be considered with caution. We determined three types of links between the factors and effectors. "Direct influence on user state": these effectors are suggested to influence the user's state and, consequently, are likely to have a direct impact on performance. For instance, proposing positively biased feedback, i.e., making users believe they are doing better than they really are, has been suggested to improve (novice) users' sense of agency (i.e., the feeling of being in control, see Section 1.3.3.1 for more details) [START_REF] Kübler | Brain-computer communication: self-regulation of slow cortical potentials for verbal communication[END_REF]. "Help for users with a specific profile": these effectors could help users who have a specific profile and consequently improve their performance. For instance, providing emotional support has been suggested to benefit highly tense/anxious users [START_REF] N'kambou | Advances in intelligent tutoring systems[END_REF] (see Section 1.3.3.2 for more details). "Improved abilities": this link connects effectors of the cognitive activities/exercises type to abilities (malleable traits) that could be improved thanks to these activities. For instance, attentional neurofeedback has been suggested to improve attentional abilities [START_REF] Zander | Towards neurofeedback for improving visual attention[END_REF]. For more details, see [START_REF] Jeunet | Towards a cognitive model of MI-BCI user training[END_REF].
This model has been built based on the literature related to mental-imagery based BCIs (and mainly to motor-imagery based BCIs). It would be interesting to investigate the relevance of this model for other BCI paradigms, such as BCIs based on Steady-State Visual Evoked Potentials (SSVEP) or BCIs based on the P300. It is noteworthy that, for instance, motivation has already been shown to modulate P300 amplitude and performance [START_REF] Kleih | Motivation modulates the P300 amplitude during braincomputer interface use[END_REF]. The effect of mastery confidence (which is included in the "technology-acceptance factors" in our model) on P300-based BCI performance has also been investigated [START_REF] Kleih | Does mastery confidence influence P300 based braincomputer interface (BCI) performance? In: Systems, Man, and Cybernetics[END_REF]. The results of this study were not conclusive, which led the authors to hypothesize either that this variable had no effect on performance or that they may not have succeeded in manipulating participants' mastery confidence. Further investigation is now required. Besides, the same authors proposed a model of BCI performance [START_REF] Kleih | Psychological factors influencing brain-computer interface (BCI) performance[END_REF]. This model gathers physiological, anatomical and psychological factors. Once again, it is interesting to see that, while organized differently, similar factors were included in the model. To summarize, it would be relevant to further investigate the factors influencing performance in different BCI paradigms, and then investigate to which extent some of these factors are common to all paradigms (i.e., task-unspecific), while determining which factors are specific to the paradigm/task. The next step would then be to propose a full and adaptive model of BCI performance. Now, from a signal processing and machine learning point of view, many challenges remain. We should aim at determining physiological or neurophysiological correlates of the factors included in this model, in order to be able to estimate, in real time, the state of the BCI user. Therefore, the signal processing community should design tools to recognize these neural correlates in real time, from noisy signals. Besides, the model itself requires machine learning expertise to be implemented, as detailed in the next section, i.e., Section 1.2.3.2. Then, one of the main challenges will be to determine, for each user, based on the recorded signals and performance, when the training procedure should be adapted in order to optimize the performance and the learning process. Machine learning techniques could be used to determine, based on a pool of previous data (e.g., using case-based reasoning) and on theoretical knowledge (e.g., using rule-based reasoning), when to make the training procedure evolve. In the field of Intelligent Tutoring Systems (ITS), where the objective is to adapt the training protocol dynamically to the state (e.g., level of skills) of the learner, a popular approach is to use multi-armed bandit algorithms [START_REF] Clement | Multi-arm bandits for intelligent tutoring systems[END_REF]. Such an approach could be adapted for BCI training.
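As a concrete illustration of this idea, the sketch below implements a basic UCB1 bandit that selects, before each training run, the exercise expected to bring the largest benefit given the rewards observed so far (e.g., the improvement of a BCI skill metric after the run). The exercise names and reward definition are hypothetical; this is a minimal adaptation of the bandit idea, not the algorithm of the cited ITS work.

```python
import math

class UCBTrainingSelector:
    # Minimal UCB1 bandit: each "arm" is a candidate training exercise.
    def __init__(self, exercises):
        self.exercises = list(exercises)
        self.counts = {e: 0 for e in self.exercises}
        self.mean_reward = {e: 0.0 for e in self.exercises}
        self.t = 0

    def select(self):
        self.t += 1
        for e in self.exercises:          # try every exercise once first
            if self.counts[e] == 0:
                return e
        return max(self.exercises,        # then pick the best upper confidence bound
                   key=lambda e: self.mean_reward[e]
                   + math.sqrt(2 * math.log(self.t) / self.counts[e]))

    def update(self, exercise, reward):
        # reward: e.g., normalized improvement of the user's skill metric after the run
        self.counts[exercise] += 1
        self.mean_reward[exercise] += (reward - self.mean_reward[exercise]) / self.counts[exercise]

# Hypothetical usage:
# selector = UCBTrainingSelector(["left-hand MI", "feet MI", "mental subtraction"])
# exercise = selector.select()   # ...run the training block, measure the gain...
# selector.update(exercise, observed_gain)
```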
The evolution of the training procedure could be either continuous or divided into different steps, in which case it would be necessary to determine relevant thresholds on the users' state values, beyond which the training procedure should evolve, e.g., become more complex, change the context and propose a variation of the training tasks, go back to a previous step that may not have been assimilated correctly, etc.
A computational model for BCI adaptation
As discussed in previous sections, it is necessary to identify the psychological factors, user skills and traits which determine a successful BCI performance. Co-adaptive BCIs, i.e., dynamically adaptive systems which adjust to signal variabilities during a BCI task and in this way adapt to the user, while the user adapts to the machine via learning, have shown tremendous improvements in system performance ([START_REF] Schwarz | A co-adaptive sensory motor rhythms Brain-Computer Interface based on common spatial patterns and Random Forest[END_REF] for MI; [START_REF] Thomas | CoAdapt P300 speller: optimized flashing sequences and online learning[END_REF] for P300). However, these techniques dwell mostly within the signal variabilities, only adjusting to them, without acknowledging and possibly influencing the causes of such variabilities, namely human factors. These factors, once acknowledged, should be structured in a conceptual framework, as in [START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF], in order to be properly influenced or adapted to. In this framework for adaptive BCI methods, the human psychological factors are grouped by their degree of stability or changeability in time, e.g., skills could take multiple sessions (months) to change, while attention drops operate within short time periods. All these changes might have certain EEG signatures; thus, considering the time necessary for these factors to change, the machine could be notified to adapt accordingly, and could predict and prevent negative behavior. To influence user behavior, the framework contains a BCI task model, arranged within the same time scales as the user's factors. Consequently, if the user does not reach a certain minimal threshold of performance for one BCI task, the system would switch to another, e.g., if kinesthetic imagination of hand movements works worse than that of the tongue, it would switch to tongue MI. Additionally, if the user shows MI illiteracy after a session, the system would switch to other paradigms, and so on. Hence, the task model represents the possible BCI tasks, managed by the exploration/exploitation ratio, to adapt to the users and optimally influence them within the corresponding time scales. Once these factors are identified and modeled theoretically, we need to search for computational models generic enough to encompass such complex and unstable behavior, and enable us to design adaptive BCIs whose signal processing, training tasks and feedback are dynamically adapted to these factors.
Several behavioral sciences and neuroscience theories strive to explain the brain's cognitive abilities based on statistical principles. They assume that the nervous system maintains internal probabilistic models that are updated by neural processing of sensory information using Bayesian probability methods. Kenneth Craik suggested in 1943 that the mind constructs "small-scale models" of reality, later named Mental Models [START_REF] Pn | Mental models: Towards a cognitive science of language, inference, and consciousness[END_REF], that it uses to anticipate events. Using a similar principle, Active Inference, a generic framework based on Bayesian inference, models any adaptive system, such as the brain, in a perception/action context [START_REF] Friston | The anatomy of choice: active inference and agency[END_REF]. Active Inference describes the world as being in a true state which can never be completely revealed, as the only information the adaptive system has consists of observations obtained through sensory input. The true state of the world is in fact hidden from the observers, and as such is represented in their internal, generative model of the world as hidden states. The true state of the world is inferred through sensory input, or observations, and is updated in a generative model, i.e., an internal representation of the world containing empirical priors and prior beliefs. The empirical priors are the hidden states and the possible actions to be made when an observation occurs. Event anticipation and action selection are defined by the free energy minimization principle, or minimization of surprise, and by a utility function, i.e., a measure of preferences over some set of outcomes. In other words, a set of possible actions which were previously generated in the internal model as empirical priors are favored in order to reach a desired outcome. For instance, if a stranger (A) asks a person (B) to lend him a phone in the street, the outcome of this event, i.e., B's decision, would depend on B's model of the world, or empirical priors relating to such an event. B's decision will also depend on his prior beliefs; for instance, a religious man might hold the principle that one should always help those in need. B can never reveal the absolute truth about A's intentions. So, if B's experience, i.e., his empirical priors, was negative, and no prior beliefs or higher values govern his actions, he will be likely to refuse. However, if it was positive, B will be likely to agree to help A. Additionally, B's reaction time will depend on a specific prior which encodes the exploration/exploitation ratio. Hence, B anticipates an outcome, and acts in such a way and within a certain time as to reach that event imagined in the future. He inferred the true state (the stranger's intentions) using his empirical priors, and he acted to achieve a desired outcome or to comply with his prior beliefs. The promotion of certain outcomes is encoded in the utility function and set as prior beliefs. The free energy minimization principle relies on minimizing the Kullback-Leibler divergence, or relative entropy, between two probability distributions: the current state and the desired state. It can be thought of as a prediction error that reports the difference between what can be attained from the current state and the goals encoded by prior beliefs. So, by favoring a certain action, one can reduce the prediction error, and in this way the action becomes the cause of future sensory input.
This computational framework enables us to model the causes of sensory input in order to better anticipate and favor certain outcomes, which is indeed what we are looking for in BCI systems.
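The sketch below reduces this principle to its simplest possible form: an agent scores each candidate action by the Kullback-Leibler divergence between the outcome distribution its generative model predicts for that action and the distribution of desired outcomes (prior beliefs), and favors the action with the lowest divergence. This is a drastic simplification of the expected free energy (which also contains an epistemic, information-gain term); the distributions used here are purely illustrative.

```python
import numpy as np

def kl_divergence(p, q):
    # D_KL(p || q) for discrete distributions; q is assumed strictly positive where p > 0.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def select_action(predicted_outcomes, desired_outcome):
    # predicted_outcomes: dict mapping each candidate action to the outcome distribution
    # the internal generative model predicts for it.
    # desired_outcome: prior beliefs over preferred outcomes (the "utility").
    scores = {a: kl_divergence(desired_outcome, p) for a, p in predicted_outcomes.items()}
    return min(scores, key=scores.get)  # the action expected to bring us closest to the goal

# Hypothetical usage with two actions and two possible outcomes:
# select_action({"help": [0.8, 0.2], "refuse": [0.1, 0.9]}, desired_outcome=[0.95, 0.05])
```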
A P300-speller is a communication BCI device which relies on a neurophysiological phenomenon, called the oddball effect, that triggers a peak in the EEG signal around 300 ms after a rare and unexpected event: the P300. This is why this type of BCI is also called a reactive BCI, as the machine elicits and detects event-related potentials (ERPs), i.e., the brain's reactions to stimuli. In the case of the P300-speller, a set of letters is randomly flashed and the users need to focus their visual attention on the letter they wish to spell. Once the target letter is flashed (a rare and unexpected event), the brain reacts, enabling the machine to detect the ERP and spell the desired letter.
Bayesian inference has been successfully used, for instance, in designing adaptive P300-spellers [START_REF] Mattout | Improving BCI performance through co-adaptation: applications to the P300-speller[END_REF]. In this example, the outcome of a probabilistic classifier (a mixture of two multivariate Gaussians) is updated online. In this way, the machine spells a letter once it attains a certain confidence level, i.e., the decision speed or reaction time depends on the reliability of the accumulated evidence. This permits the machine to stop at an optimal moment, maximizing both speed and accuracy. However, as we mentioned earlier, this example is user-dependent and adaptive, but it does not go further by considering the cause of such EEG variability in order to reduce or anticipate it. To achieve this, we could endow the machine with a certain intelligence, using Active Inference [START_REF] Mladenović | Endowing the Machine with Active Inference: A Generic Framework to Implement Adaptive BCI[END_REF]. As we explained, Active Inference is used to model cognitive behavior and decision making processes. However, in our case, we wish to equip the machine with such generative models, in order to achieve a fully symbiotic user-machine co-adaptation. The true states, in this case, belong to the user's characteristics and intentions, and are in fact hidden from the machine. Concretely, the hidden states are the letters or words the user intends to spell with the BCI. In the beginning, all the letters have an equal probability of being spelled, but the more the machine flashes letters, the more it accumulates empirical priors and becomes confident about the target letter. In this way, the user's intentions are represented as empirical priors (hidden states) which the machine has to update through the accumulation of observations, namely the classifier output. Furthermore, the machine will act (flash) in such a way as to achieve the desired outcome: revealing the target letter in minimal time. Hence, by using these principles, we could achieve not only optimal stopping [START_REF] Mattout | Improving BCI performance through co-adaptation: applications to the P300-speller[END_REF] but also optimal flashing [START_REF] Mladenović | Endowing the Machine with Active Inference: A Generic Framework to Implement Adaptive BCI[END_REF], i.e., flashing the groups of letters that maximize the P300 effect. The flashing would be in an intelligent order, yet appear to the user to be in a random order, so that the oddball effect stays uncompromised.
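A minimal sketch of such online Bayesian evidence accumulation with a confidence-based stopping rule is given below. The likelihood values are assumed to come from a previously calibrated probabilistic classifier (e.g., the two-Gaussian mixture mentioned above) evaluated at the observed score; the threshold value is an arbitrary example, and this is not the exact model of the cited work.

```python
import numpy as np

def update_letter_posterior(prior, flashed, lik_target, lik_nontarget, confidence=0.95):
    # prior: current probability of each letter being the target (sums to 1)
    # flashed: boolean mask indicating which letters were in the flashed group
    # lik_target / lik_nontarget: likelihood of the observed classifier score under the
    #   hypotheses "the target was flashed" / "the target was not flashed"
    posterior = prior * np.where(flashed, lik_target, lik_nontarget)
    posterior /= posterior.sum()
    spell_now = posterior.max() >= confidence  # optimal stopping: decide once evidence is strong
    return posterior, spell_now
```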
The optimization criterion, i.e., whether one favors the user's subjective experience or the system's performance, depends on the purpose of the BCI system [START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF]. For example, for entertainment or rehabilitation purposes, it is important to motivate the user to keep playing or to keep making an effort. This can be achieved by using positively biased feedback. On the other hand, when controlling a wheelchair with a BCI, the system's accuracy is of essential importance. Active Inference could provide such adaptive power, setting the BCI goals within an intelligent artificial agent which would encode the utility function and would manipulate the exploration/exploitation factor, see Fig. 1.
The remaining challenges comprise using Active Inference to adapt the tasks of other BCI paradigms such as Motor Imagery. The ultimate goal would be to use Active Inference to create a fully adaptive and user-customizable BCI. In this case, the hidden states which the machine needs to infer and learn would be more than trial-wise user intentions, but also the user's states, skills and traits (measured with a passive BCI for instance), provided to the machine as additional (neuro)physiological observations. The convenient aspect of Active Inference is that it is applicable to any adaptive system. So, we can use any information as input (higher-level user observations) and tune the parameters (priors) to each user, in order to provide them with optimal tasks.
Fig. 1: The machine "observes" one or several (neuro)physiological measurements which serve to infer the user's immediate intentions, or their states, skills and traits over longer periods of time. Depending on the purpose of the BCI, its paradigm and exercise, and considering the information the machine has learned about the user, it provides the optimal action (feedback or instructions in different modalities or levels of difficulty). An intelligent agent encodes the priors (utility and exploration/exploitation ratio), which are regulated for each user and specific context, favoring the optimal paradigm, exercise and action within specific time scales of adaptation.
The optimal tasks would be governed by the BCI purpose (control, communication, neuro-rehabilitation, etc.), paradigm (P300, MI, SSEP) and exercise (MI of hands, feet, counting the number of flashes, etc.).
Regarding signal processing, the adaptive processes which arise, such as adapting spatial or temporal filters, should not only adjust to signal variabilities but also be guided by the context and purpose of the BCI. This way, signal processing techniques could extend their adaptive power and be more applicable and flexible across contexts and users. Furthermore, the signal processing pipeline would need to expand and include other possible (neuro)physiological measurements in order to measure high-level user factors. The machine learning techniques will have to accommodate more dimensions: not only the features extracted from EEG, but also the variable states of the user, should be taken into account. Active Inference would fit this landscape and add such a layer through an internal model of the various causes of signal variability and through its single cost function, the free energy.
Improving BCI user training
Machine learning and signal processing tools can also be used to deepen our understanding of BCI user learning as well as to improve this learning. Notably, such tools can be used to design features and classifiers that are not only good to discriminate the different BCI commands, but also good to ensure that the user can understand and learn from the feedback resulting for this classifier/features. This feedback can also be further improved by using signal processing tools to preprocess it, in order to design an optimal display for this feedback, maximizing learning. Finally, rather than designing adaptive BCI algorithms solely to increase BCI command decoding accuracy, it seems also promising to adapt BCI algorithms in a way and at a rate that favor user learning. Altogether, current signal processing and machine learning algorithms should not be designed solely for the machine, but also with the user in mind, to ensure that the resulting feedback and training enable the user to learn efficiently. We detail these aspects below.
Designing features and classifiers that the user can understand and learn from
So far, the features, e.g., the power in some frequency bands and channels, and the classifiers, e.g., LDA or Support Vector Machines (SVM), used to design EEG-based BCIs are optimized solely on the basis of their discriminative power [START_REF] Lotte | A Review of classification algorithms for EEG-based Brain-Computer Interfaces[END_REF][START_REF] Blankertz | Optimizing spatial filters for robust EEG single-trial analysis[END_REF][START_REF]A Tutorial on EEG Signal-processing Techniques for Mental-state Recognition in Brain-Computer Interfaces[END_REF]. In other words, features and classifiers are built solely to maximize the separation between the classes/mental imagery tasks used to control the BCI, e.g., left versus right hand imagined movement. Thus, a purely machine-oriented criterion, namely data separation, is used to optimize features and classifiers, without any consideration for whether such features and classifiers lead to a feedback that 1) is understandable by the user and 2) can enable the user to learn to self-regulate those features. In the algorithms used so far, while the features are by design as separable as possible, there is no guarantee that they can become more separable with training. Actually, it is theoretically possible that some features with an initially lower discriminative power are easier to learn to self-regulate. As such, while in the short term selecting features that are initially as discriminant as possible makes sense, in the longer term, if the user can learn EEG self-regulation successfully, it may make more sense to select features that will lead to a possibly even better discrimination after user learning. Similarly, while the classifier output, e.g., the distance between the input feature vector and the LDA/SVM discriminant hyperplane [START_REF] Pfurtscheller | Motor Imagery and Direct Brain-Computer Communication[END_REF], is typically used as feedback to the user, it is also unknown whether such feedback signal variations can be understood by, or make sense to, the user. Maybe a different feedback signal, possibly less discriminant, would be easier for the user to understand and learn to control. Interestingly enough, there are very relevant research results from neuroscience, psychology and human-computer interaction suggesting that there are constraints and principles that need to be respected in order to favor user learning of self-regulation, or to enable users to understand a given visualization and feedback as well as possible. In particular, it was shown with motor-related invasive BCIs in monkeys that using features that lie in the natural subspace of their motor-related activity, i.e., in their natural motor repertoire, leads to much more efficient learning of BCI control than using features that lie outside this subspace/natural repertoire [START_REF] Sadtler | Neural constraints on learning[END_REF][START_REF] Hwang | Volitional control of neural activity relies on the natural motor repertoire[END_REF]. This suggests that not all features have the same user-learning potential, and thus that features should be designed with such considerations in mind. Similarly, regarding feedback and visualization, humans perceive variations of a visual stimulus with more or less ease depending on the spatial and temporal characteristics of these variations, e.g., how fast the stimulus changes and what the amplitude of this change is; see, e.g., [START_REF] Ware | Information visualization: perception for design[END_REF] for an overview.
For instance, it is recommended to provide visualizations that are consistent over time, i.e., whose meaning should be interpreted in the same way from one trial to the next, and that vary smoothly over time [START_REF] Ware | Information visualization: perception for design[END_REF]. This as well suggests that the feedback should ideally be designed while taking such principles into consideration. There are also many other human learning principles that are in general not respected by current BCI designs, see notably [START_REF] Lotte | Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design[END_REF] as well as section 1.3.3. There is thus a lot of room for improvement.
The learning and feedback principles mentioned above could be used as constraints in the objective functions of the machine learning and signal processing algorithms used in BCI. For instance, to respect human perception principles [START_REF] Ware | Information visualization: perception for design[END_REF], we could add these perception properties as regularization terms in regularized machine learning algorithms such as regularized spatial filters [START_REF] Lotte | Regularizing Common Spatial Patterns to Improve BCI Designs: Unified Theory and New Algorithms[END_REF] or classifiers [START_REF] Lotte | A Review of classification algorithms for EEG-based Brain-Computer Interfaces[END_REF][START_REF] Lotte | A Review of Classification Algorithms for EEG-based Brain-Computer Interfaces: A 10-year Update[END_REF]. Similarly, regularization terms could be added to ensure that the features/classifier lie in the natural motor repertoire of the user, to promote efficient learning with motor-related BCIs. This could be achieved, for instance, by transferring data between users [START_REF] Lotte | Learning from other Subjects Helps Reducing Brain-Computer Interface Calibration Time[END_REF][START_REF]Signal Processing Approaches to Minimize or Suppress Calibration Time in Oscillatory Activity-Based Brain-Computer Interfaces[END_REF], to promote features that were shown to lead to efficient learning in other users. In other words, rather than designing features/classifiers using objective functions that reflect only discrimination, such objective functions should consider both discrimination and human learning/perception principles. This would ensure the design of features that are both discriminative and learnable/understandable.
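As an example of how such constraints can enter an objective function, the sketch below adds a generic penalty matrix to the denominator of the Common Spatial Patterns (CSP) criterion, in the spirit of the regularized CSP framework cited above. The penalty matrix P is a placeholder: in practice it would encode, e.g., deviation from filters learned on other users or from perceptually desirable feedback properties, which remains an assumption here.

```python
import numpy as np
from scipy.linalg import eigh

def regularized_csp(C1, C2, P, lam=0.1, n_filters=3):
    # C1, C2: class-average spatial covariance matrices (n_channels x n_channels)
    # P: symmetric positive semi-definite penalty matrix encoding prior knowledge
    # Maximizes w' C1 w / (w' (C1 + C2 + lam * P) w) via a generalized eigenproblem.
    eigvals, eigvecs = eigh(C1, C1 + C2 + lam * P)
    order = np.argsort(eigvals)[::-1]           # largest ratios first
    # Filters emphasizing the other class are obtained analogously by swapping C1 and C2.
    return eigvecs[:, order[:n_filters]].T      # one spatial filter per row
```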
It could also be interesting to explore the extraction and possibly simultaneous use of two types of features: features that will be used for visualization and feedback only (and thus that may not be optimal from a classification point of view), and features that will be used by the machine to recognize the EEG patterns produced by the user (but not used as user training feedback). To ensure that such features are related, and thus that learning to modulate them is also relevant to send mental commands, they could be optimized and extracted jointly, e.g., using multi-task learning [START_REF] Caruana | Multitask learning[END_REF].
Identifying when to update classifiers to enhance learning
It is already well accepted that, in order to obtain better performances, adaptive BCI systems should be used [START_REF] Millán | Asynchronous BCI and Local Neural Classifiers: An Overview of the Adaptive Brain Interface Project[END_REF][START_REF] Shenoy | Towards adaptive classification for BCI[END_REF][START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF]. Due to the inherent variability of EEG signals, as well as to changes in users' states, e.g., fatigue or attention, it was indeed shown that adaptive classifiers and features generally give higher classification accuracy than fixed ones [START_REF] Shenoy | Towards adaptive classification for BCI[END_REF][START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF]. Typically, this adaptation consists in re-estimating the parameters of the classifiers/features during online BCI use, in order to keep track of the changing feature distribution. However, again, such adaptation is typically performed only from a machine perspective, to maximize data discriminability, without considering the user in the loop. The user relies on the classifier output as feedback to learn and to use the BCI. If the classifier is continuously adapted, the feedback changes continuously, which can be very confusing for the user, or even prevent them from learning properly. Indeed, both the user and the machine need to adapt to each other, the so-called co-adaptation in BCI [START_REF] Vidaurre | Machine-learning-based coadaptive calibration for brain-computer interfaces[END_REF]. A very recent and interesting work proposed a simple computational model to represent this interplay between user learning and machine learning, and how this co-adaptation takes place [START_REF] Müller | A mathematical model for the two-learners problem[END_REF]. While such work is only a simulation, it nonetheless suggested that an adaptation speed that is either too fast or too slow prevents this co-adaptation from converging, and leads to decreased learning and performance.
Therefore, when to adapt, e.g., how often, and how to adapt, e.g., how much, should be decided with the user in mind. Ideally, the adaptation should be performed at a rate and strength that suit each specific user, to ensure that it does not confuse users but rather helps them to learn. Doing so stresses once more the need for a model of the user (discussed in Section 1.2). Such a model would infer from the data, among other things, how much change the user can deal with, in order to adapt the classifier accordingly. Within this model, being able to measure the users' BCI skills (see also Section 1.2.2) would also help in that regard. It would indeed make it possible to know when the classifier should be updated because the user has improved and thus their EEG patterns have changed. It would also be interesting to quantify which variations in the EEG feature distribution require an adaptation that may be confusing to the user, e.g., those changing the EEG source used, and which ones do not, e.g., those merely tracking changes in feature amplitude. This would make it possible to perform only adaptations that are as harmless as possible for the user. A point that would need to be explored is whether classifiers and features should only be adapted when the user actually changes strategy, e.g., when the user has learned a better mental imagery task; this indeed requires the classifier to be able to recognize such new or improved mental tasks, whereas other adaptations may just add feedback noise and confuse the user.
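The sketch below illustrates one simple way to expose such a choice explicitly: an adaptive LDA whose class means are updated with an exponential forgetting factor, so that the adaptation rate becomes a single parameter that a user model could set per user (or reduce to zero when adaptation would be confusing). This is a toy supervised-adaptation example, not a recommendation of a specific adaptation rule.

```python
import numpy as np

class AdaptiveLDA:
    # Two-class LDA with a shared covariance matrix; only the class means are adapted online,
    # at a speed controlled by `rate` (0 = fixed classifier, larger = faster-changing feedback).
    def __init__(self, mean0, mean1, cov, rate=0.05):
        self.means = [np.asarray(mean0, float), np.asarray(mean1, float)]
        self.cov_inv = np.linalg.inv(cov)
        self.rate = rate

    def score(self, x):
        # Signed distance to the separating hyperplane, usable as continuous feedback.
        w = self.cov_inv @ (self.means[1] - self.means[0])
        b = -0.5 * w @ (self.means[0] + self.means[1])
        return float(w @ x + b)

    def update(self, x, label):
        # Supervised adaptation after each trial (unsupervised variants exist as well).
        self.means[label] = (1 - self.rate) * self.means[label] + self.rate * np.asarray(x, float)
```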
Designing BCI feedbacks ensuring learning
Feedback is generally considered as an important facilitator of learning and skill acquisition [START_REF] Azevedo | A meta-analysis of the effects of feedback in computer-based instruction[END_REF][START_REF] Bangert-Drowns | The instructional effect of feedback in test-like events[END_REF] with a specific effect on the motivation to learn, see e.g., [START_REF] Narciss | How to design informative tutoring feedback for multimedia learning. Instructional design for multimedia learning[END_REF].
Black and William [START_REF] Black | Assessment and classroom learning. Assessment in Education: principles, policy & practice[END_REF] proposed that, to be effective, feedback must be directive (indicating what needs to be revised) and facilitative (providing suggestions to guide learners). In the same way, Kulhavy and Stock proposed that effective feedback must allow verification, i.e., specifying whether the answer is correct or incorrect, and elaboration, i.e., providing relevant cues to guide the learner [START_REF] Kulhavy | Feedback in written instruction: The place of response certitude[END_REF].
In addition to guidance, informative feedback has to be goal-directed, providing learners with information about their progress toward the goal to be achieved. The feeling that the goal can be met is an important way to enhance the motivation and engagement of learners [START_REF] Fisher | Differential effects of learner effort and goal orientation on two learning outcomes[END_REF].
Feedback should also be specific to avoid being considered as useless or frustrating [START_REF] Williams | Teachers' Written Comments and Students' Responses: A Socially Constructed Interaction[END_REF]. It needs to be clear, purposeful, meaningful [START_REF] Hattie | The Power of Feedback[END_REF] and to lead to a feeling of competence in order to increase motivation [START_REF] Ryan | Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being[END_REF].
Another consideration that requires much deeper exploration is that feedback must be adapted to the characteristics of the learners. For example, [START_REF] Hanna | Effects of total and partial feedback in multiple-choice testing upon learning[END_REF] showed that elaborated feedback enhances the performance of low-ability students, while a verification condition enhances the performance of high-ability students. In the BCI field, Kübler et al. showed that positive feedback (provided only for a correct response) was beneficial for new or inexperienced BCI users, but harmful for advanced BCI users [START_REF] Kübler | Brain-computer communication: self-regulation of slow cortical potentials for verbal communication[END_REF].
As underlined in [START_REF] Lotte | Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design[END_REF], classical BCI feedback satisfies few of these requirements. Feedback typically specifies whether the answer is correct or not, i.e., the feedback is corrective, but does not aim at providing suggestions to guide the learner, i.e., it is not explanatory. Feedback is also usually not goal-directed and does not provide details about how to improve the answer. Moreover, the feedback may often be unclear and meaningless, since it is based on a classifier built using calibration data recorded at the beginning of the session, during which the user does not yet master the mental imagery tasks they must perform.
In [START_REF] Lotte | Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design[END_REF], we discussed the limits of the feedback used in BCI and proposed solutions, some of which have already yielded positive results [START_REF] Hwang | Neurofeedback-based motor imagery training for brain-computer interface (BCI)[END_REF][START_REF] Pfurtscheller | Motor Imagery and Direct Brain-Computer Communication[END_REF]. A possibility would be to provide the user with richer and more informative feedback by using, for example, a global picture of their brain activity, e.g., a 2D or 3D topography of cortical activation obtained using inverse solutions. Another proposal is to collect better information on the mental task performed by the subject (for example by measuring Event-Related Desynchronisation/Synchronisation activity) to evaluate users' progress and give them relevant insights about how to perform the mental task. Finally, it would be relevant to use more attractive feedback, such as game-like, 3D or Virtual Reality environments, thus increasing user engagement and motivation [START_REF] Lécuyer | Brain-Computer Interfaces, Virtual Reality and Videogames[END_REF][START_REF] Leeb | Brain-Computer Communication: Motivation, aim and impact of exploring a virtual apartment[END_REF].
In a recent study, [START_REF] Jeunet | Continuous Tactile Feedback for Motor-Imagery based Brain-Computer Interaction in a Multitasking Context[END_REF] tested a continuous tactile feedback by comparing it to an equivalent visual feedback. Performance was higher with the tactile feedback, indicating that this modality can be a promising way to enhance BCI performance.
To conclude, the feedback used in BCI is typically simple and often poorly informative, which may explain some of the learning difficulties encountered by many users. Based on the literature identifying the parameters that maximize the effectiveness of feedback in general, BCI studies have already identified possible theoretical improvements. However, further investigations will be necessary to explore new research directions in order to make BCI accessible to a greater number of people. In particular, the machine learning and signal processing communities have the skills and tools necessary to design BCI feedback that is clearer, adaptive and adapted to the user, more informative and explanatory. In the following, we provide more details on some of these aspects. In particular, we discuss the importance of designing adaptive biased feedback as well as emotional and explanatory feedback, and provide related research directions to which the machine learning and signal processing communities can contribute.
Designing adaptive biased feedback
As stated earlier in this chapter, it is essential to compute and understand the user's emotional, motivational and cognitive states in order to provide them with appropriate, adapted and adaptive feedback that will favor the acquisition of skills, especially during the user's primary training phases [START_REF] Mcfarland | EEG-based communication and control: short-term role of feedback[END_REF]. Indeed, in the first stages, the fact that the technology and the interaction paradigm (through MI tasks) are both new for the users is likely to induce a pronounced computer anxiety associated with a low sense of agency. Yet, given the strong impact that the sense of agency (i.e., the feeling of being in control) has on performance (see Section 1.2.3.1), it seems important to increase it as much as possible. Providing the users with sensory feedback informing them about the outcome of their action (MI task) seems to be necessary in order to trigger a certain sense of agency at the beginning of their training. This sense of agency will in turn unconsciously encourage users to persevere, increase their motivation, and thus promote the acquisition of MI-BCI related skills, which is likely to lead to better performance [START_REF] Achim | Computer usage: the impact of computer anxiety and computer self-efficacy[END_REF][START_REF] Saadé | Computer anxiety in e-learning: The effect of computer self-efficacy[END_REF][START_REF] Simsek | The relationship between computer anxiety and computer selfefficacy[END_REF]. This process could underlie the (experimentally proven) efficiency of positively biased feedback for MI-BCI user-training.
Positively biased feedback consists in leading users to believe that their performance was better than it actually was. The literature [START_REF] Barbero | Biased feedback in brain-computer interfaces[END_REF][START_REF] Kübler | Brain-computer communication: self-regulation of slow cortical potentials for verbal communication[END_REF] reports that providing MI-BCI users with biased (only positive) feedback is associated with improved performance while they are novices. However, this is no longer the case once they have progressed to the level of expert users. This result could be due to the fact that positive feedback provides users with an illusion of control, which increases their motivation and will to succeed. As explained by [START_REF] Achim | Computer usage: the impact of computer anxiety and computer self-efficacy[END_REF], once users reach a higher level of performance, they also experience a high level of self-efficacy, which leads them to consider failure no longer as a threat [START_REF] Kleih | Motivation and SMR-BCI: fear of failure affects BCI performance[END_REF] but as a challenge, and facing these challenges leads to improvement. Another explanation is that experts develop the ability to generate a precise predicted outcome that usually matches the actual outcome (when the feedback is not biased). This could explain why, when the feedback is biased and therefore the predicted and actual outcomes do not match, expert users attribute the discrepancy to external causes more easily. In other words, it can be hypothesized that experts might be disturbed by biased feedback because they can perceive that it does not truly reflect their actions, thus decreasing their sense of being in control.
To summarize, it is noteworthy that the experience level of the user needs to be taken into account when designing the optimal feedback system and, more specifically, the bias level. As discussed before, the user's experience level is nonetheless difficult to assess (see also Section 1.2.2). For instance, when using an LDA to discriminate two classes, the LDA will typically always output a class, even if it is uncertain about it. This might lead to one class seemingly always being recognized, even if the user does not do much. Hence, if both classes are equally biased, the user would most likely not gain motivation for the one always recognized: they seem to be performing well, but could feel bored. Note that even if one class is always recognized (seemingly giving higher performance than the other class), that does not mean that the user is actually performing well when imagining that class; it can be due to the classifier being unbalanced and outputting this class more often (e.g., due to a faulty electrode). On the other hand, if the biased feedback is applied to the class which is not well recognized, the user would probably gain motivation. Thus, in [START_REF] Mladenović | The Impact of Flow in an EEG-based Brain Computer Interface[END_REF] the task was adaptively biased depending on the user's performance in real time, e.g., positively for the class which was recognized less often, and negatively for the one recognized more often, in order to keep the user engaged. This idea comes from Flow theory [START_REF] Csikszentmihalyi | Toward a psychology of optimal experience[END_REF], which explains that intrinsic motivation, full immersion in the task and concentration can be attained if the task is adapted to the user's skills. Following the requirements of Flow theory, in [START_REF] Mladenović | The Impact of Flow in an EEG-based Brain Computer Interface[END_REF] the environment was designed to be engaging and entertaining, the goals clear, with immediate visual and audio feedback, and the task difficulty adapted to user performance in real time. It was shown that users feel more in control and more in flow when the task is adapted. Additionally, offline performance and flow level were correlated. This suggests that adapting the task may create a virtuous loop, potentially increasing flow together with performance.
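A crude illustration of such adaptive biasing is sketched below: the feedback shown to the user is shifted according to how far the recent recognition rate of the attempted command is from a target level, so that poorly recognized commands are boosted and over-recognized ones are attenuated. The target level and gain are arbitrary example values, and this is not the exact scheme used in the cited study.

```python
def adapt_feedback_bias(raw_output, command, recent_success, target=0.7, gain=0.5):
    # raw_output: signed classifier output for the current trial (positive = command recognized)
    # command: index of the mental command the user attempted
    # recent_success: running recognition rate of each command over the last trials
    # Commands recognized less often than `target` receive a positive bias (feedback looks
    # better than it is); commands recognized more often receive a negative one, so that
    # the perceived difficulty stays matched to the user's skill, as suggested by Flow theory.
    bias = gain * (target - recent_success[command])
    return raw_output + bias
```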
The approach of providing an adapted and adaptive feedback, obtained by modulating the bias level, sounds very promising in order to maintain BCI users in a flow state, with a high sense of agency. Nonetheless, many challenges remain in order to optimize the efficiency of this approach. First, once more, it is necessary to be able to infer the state of the user, and especially their skill level, from their performance and physiological data. Second, we will have to determine the bias to be applied to the BCI output as a function of the evolution of the users' skills, but also as a function of their profile. Indeed, the basic level of sense of agency is not the same for everybody. Also, as shown in our models [START_REF] Jeunet | Towards a cognitive model of MI-BCI user training[END_REF][START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF], both the sense of agency and the flow are influenced by several factors: they do not depend only upon the performance. Thus, many parameters -related to users' states and traits -should be taken into account to know how to adapt the bias.
Designing adaptive emotional feedback
The functioning of the brain has often been compared to that of a computer, which is probably why the social and emotional components of learning have long been ignored. However, emotional and social contexts play an important role in learning [START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF][START_REF] Salancik | A social information processing approach to job attitudes and task design[END_REF]. The learner's affective state has an influence on problem solving strategies [START_REF] Isen | Positive Affect and Decision Making, Handbook of emotions[END_REF], and motivational outcome [START_REF] Stipek | Motivation to learn: From theory to practice[END_REF]. Expert teachers can detect such emotional states and react accordingly to positively impact learning [START_REF] Goleman | Emotional Intelligence[END_REF]. However, BCI users typically do not benefit from adaptive social and emotional feedback during BCI training. In [START_REF] Bonnet | Two brains, one game: design and evaluation of a multiuser BCI video game based on motor imagery[END_REF], we added some social context to BCI training by creating a game where BCI users had to compete against or collaborate with each other, which resulted in improved motivation and better BCI performances for some of the participants. Other studies tried to provide non-adaptive emotional feedback, in the form of smileys indicating whether the mental command was successfully recognized [START_REF] Kübler | Brain-computer communication: self-regulation of slow cortical potentials for verbal communication[END_REF][START_REF] Leeb | Brain-Computer Communication: Motivation, aim and impact of exploring a virtual apartment[END_REF]. No formal comparison with and without such emotional feedback was performed though, so the efficiency of such feedback remains unknown. Intelligent Tutoring Systems (ITS) providing emotional and motivational support can be considered as a substitute: they have been used in distance learning protocols where such feedback components were also missing, and have proven successful in improving learning, self-confidence and affective outcome [START_REF] Woolf | Affective tutors: Automatic detection of and response to student emotion[END_REF]. We tested such a method for BCI in [START_REF] Pillette | PEANUT: Personalised Emotional Agent for Neurotechnology User-Training[END_REF], where we implemented a learning companion for BCI training purposes. The companion provided both adapted emotional support and social presence. Its interventions were composed of spoken sentences and facial expressions adapted based on the performance and progress of the user. Results show that emotional support and social presence have a beneficial impact on users' experience. Indeed, users who trained with the learning companion felt it was easier to learn and memorize than the group that only trained with the usual training protocol (i.e., with no emotional support or social presence). This learning companion did not lead to any significant increase in online classification performance so far, though, which suggests that it should be further improved.
It could for example consider the user's profile, which influences BCI performances [START_REF] Jeunet | Predicting Mental Imagery-Based BCI Performance from Personality, Cognitive Profile and Neurophysiological Patterns[END_REF], and monitor the user's emotional state and learning phase [START_REF] Kort | An affective model of interplay between emotions and learning: Reengineering educational pedagogy-building a learning companion[END_REF]. Indeed, both social and emotional feedback can have a positive, neutral or negative influence on learning depending on the task design, the type of feedback provided and the variables taken into account to provide the feedback [START_REF] Kennedy | The robot who tried too hard: Social behaviour of a robot tutor can negatively affect child learning[END_REF][START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF]. In this context, machine learning could have a substantial impact on future applications of ITS in BCI training. In particular, it seems promising to use machine learning to learn, from the student's EEG and reactions and from the system's own previous experience, what is the most appropriate emotional feedback to provide to the user.
Designing explanatory feedback
As mentioned above, in many learning tasks -BCI included -the role of the feedback has been found to be essential in supporting learning, and in making this learning efficient [START_REF] Shute | Focus on Formative Feedback[END_REF][START_REF] Hattie | The Power of Feedback[END_REF]. While feedback can be of several types, for BCI training it is almost always corrective only [START_REF] Lotte | Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design[END_REF]. Corrective feedback is feedback that tells the user whether the task they just performed was correct or incorrect. Indeed, in most BCIs, the feedback is typically a bar or a cursor indicating whether the mental task performed by the user was correctly recognized. Unfortunately, human learning theories and instructional design principles all recommend providing feedback that is explanatory, i.e., feedback that does not only indicate correctness, but also why the response was correct or not. Indeed, across many learning tasks, explanatory feedback, which explains the reasons behind the feedback, was shown to be superior to corrective feedback [START_REF] Shute | Focus on Formative Feedback[END_REF][START_REF] Hattie | The Power of Feedback[END_REF].
Consequently, it would be promising to try to design explanatory feedback for BCI. This is nonetheless a substantial challenge. Indeed, being able to provide explanatory feedback means being able to understand the cause of success or failure of a given mental command. So far, the BCI community has very little knowledge about these possible causes. Some works did identify predictors of BCI performances [START_REF] Jeunet | Advances in user-training for mental-imagerybased BCI control: Psychological and cognitive factors and their neural correlates[END_REF][START_REF] Ahn | Performance variation in motor imagery brain-computer interface: A brief review[END_REF][START_REF] Grosse-Wentrup | What are the Causes of Performance Variation in Brain-Computer Interfacing[END_REF][START_REF] Blankertz | Neurophysiological predictor of SMR-based BCI performance[END_REF]. However, most of these works identified predictors of performance variations across many trials, and possibly many runs or sessions. Exceptions are [START_REF] Grosse-Wentrup | Causal Influence of Gamma Oscillations on the Sensorimotor Rhythm[END_REF] and [START_REF] Schumacher | Towards explanatory feedback for user training in brain-computer interfaces[END_REF], who showed respectively that cortical gamma activity in attentional networks and that tension in forehead and neck muscles were correlated with single-trial performance. In [START_REF] Schumacher | Towards explanatory feedback for user training in brain-computer interfaces[END_REF] we designed a first explanatory feedback for BCI, informing users of their forehead and neck muscle tension and identifying when it was too strong, to guide them to relax. Unfortunately, this did not lead to a significant increase in online BCI performance. Such work was however only a preliminary attempt that should be explored further, to identify new predictors of single-trial performance and use them as feedback.
We denote features measuring causes of success or failure of a trial or group of trials as feedback features. We thus encourage the community to design and explore new feedback features. This is another machine learning and signal processing problem, in which rather than classifying EEG signals as corresponding to one mental command or another, we should classify them as predicting a successful or a failed trial. Thus, with different labels than before, machine learners can explore and design various tools to identify the most predictive feedback features. Such features could then be used as additional feedback during online BCI experiments, possibly supporting efficient BCI skill learning.
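To make this reframing concrete, the following minimal sketch (assuming Python with scikit-learn; the feature names and data are hypothetical placeholders) ranks candidate feedback features by how well each one alone predicts whether a trial will be successful:

```python
# Sketch: rank candidate "feedback features" by their ability to predict trial success.
# Data and feature names are hypothetical; labels are success (1) vs failure (0).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200
candidate_features = {
    "forehead_neck_muscle_tension": rng.normal(size=(n_trials, 1)),
    "gamma_power_attentional_network": rng.normal(size=(n_trials, 1)),
    "occipital_alpha_power": rng.normal(size=(n_trials, 1)),
}
trial_success = rng.integers(0, 2, size=n_trials)  # placeholder labels

for name, x in candidate_features.items():
    acc = cross_val_score(LinearDiscriminantAnalysis(), x, trial_success, cv=5).mean()
    print(f"{name}: cross-validated accuracy for predicting success = {acc:.2f}")
```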
Conclusion
In this chapter, we tried to highlight to our readers that when designing Brain-Computer Interfaces, both the machine (EEG signal decoding) and the user (BCI skill learning and performance) should be taken into account. Actually, in order to really enable BCIs to reach their full potential, both aspects should be explored and improved. So far, the vast majority of the machine learning community has worked on improving and robustifying the EEG signal decoding, without considering the human in the loop. Here, we hope we convinced our readers that considering the human user is necessary -notably to guide and boost BCI user training and performance -and that machine learning and signal processing can bring useful and innovative solutions to do so. In particular, throughout the chapter we identified 9 challenges that would need to be solved to enable users to use, and to learn to use, BCIs efficiently, and for each we suggested potential machine learning and signal processing research directions to address them. These various challenges and solutions are summarized in Table 1.1.
We hope this summary of open research problems in BCI will inspire the machine learning and signal processing communities, and will motivate their scientists to explore these less traveled but essential research directions. In the end, BCI research does need contributions from these communities to improve the user experience and learnability of BCI, and enable them to become finally usable and useful in practice, outside laboratories.
Figure 1.1: A concept of how Active Inference could be used to implement a fully adap-
Table 1.1: Summary of signal processing and machine learning challenges to BCI user training and experience, and potential solutions to be explored.

Challenges | Potential solutions
Modelling the BCI user:
Robust recognition of users' mental states from physiological signals | Exploring features, denoising and classification algorithms for each mental state
Quantifying the many aspects of users' BCI skills | Riemannian geometry to go beyond classification accuracy
Determining when to adapt the training procedure, based on the user's state, to optimise performance and learning | Case-based / rule-based reasoning algorithms; multi-arm bandits to adapt automatically the training procedure
Computationally modeling the users' states and traits and adaptation tools | Exploiting Active Inference tools
Understanding and improving BCI user learning:
Designing features and classifiers resulting in feedback favoring learning | Regularizers incorporating human learning and perception principles
Adapting classifiers with a way and timing favoring learning | Triggering adaptation based on a user's model
Adapting the bias based on the user's level of skills to maintain their flow and agency | Triggering adaptation based on a model of the bias*skill relationship
Adapting feedback to include emotional support and social presence | Build on the existing work of the ITS field
Identifying/Designing explanatory feedback features | Designing features to classify correct vs incorrect commands
Acknowledgments
This work was supported by the French National Research Agency with the REBEL project (grant ANR-15-CE23-0013-01), the European Research Council with the BrainConquest project (grant ERC-2016-STG-714567), the Inria Project-Lab BCI-LIFT and the EPFL/Inria International lab. | 85,339 | [
"4180",
"1453",
"20740",
"1007743",
"18798"
] | [
"179935",
"302851",
"491414",
"179935",
"519434",
"179935"
] |
01763807 | en | [
"sdv"
] | 2024/03/05 22:32:13 | 2018 | https://hal.sorbonne-universite.fr/hal-01763807/file/SegonzaciaSubmission_DSR_Revised4HAL.pdf | Stéphane Hourdez
email: [email protected]
Cardiac response of the hydrothermal vent crab Segonzacia mesatlantica to variable temperature and oxygen levels
Keywords: Hypoxia, Oxyregulation, Critical temperature, Critical oxygen concentration
Segonzacia mesatlantica inhabits different hydrothermal vent sites of the Mid-Atlantic Ridge where it experiences chronic environmental hypoxia, and highly variable temperatures. Experimental animals in aquaria at in situ pressure were exposed to varying oxygen concentrations and temperature, and their cardiac response was studied. S. mesatlantica is well adapted to these challenging conditions and capable of regulating its oxygen uptake down to very low concentrations (7.3-14.2 µmol.l -1 ). In S. mesatlantica, this capacity most likely relies on an increased ventilation rate, while the heart rate remains stable down to this critical oxygen tension. When not exposed to temperature increase, hypoxia corresponds to metabolic hypoxia and the response likely only involves ventilation modulation, as in shallow-water relatives. For S. mesatlantica however, an environmental temperature increase is usually correlated with more pronounced hypoxia. Although the response to hypoxia is similar at 10 and 20˚C, temperature itself has a strong effect on the heart rate and EKG signal amplitude. As in shallow water species, the heart rate increases with temperature. Our study revealed that the range of thermal tolerance for S. mesatlantica ranges from 6 through 21˚C for specimens from the shallow site Menez Gwen (800 m), and from 3 through 19˚C for specimens from the deeper sites explored (2700 -3000 m).
Introduction
Environmental exposure to hypoxia in aquatic habitats can be common [START_REF] Hourdez | Hypoxic environments[END_REF]. Near hydrothermal vents, oxygen levels are often low and highly variable both in space and in time. These conditions result from the chaotic mixing of the hydrothermal vent fluid, which is hot, anoxic, and often rich in sulfide, with the deepsea water, cold and usually slightly hypoxic. Oxygen and sulfide spontaneously react, decreasing further the amount of available oxygen in the resulting sea water. The presence of reduced compounds in the hydrothermal fluid is paramount to the local primary production by autotrophic bacteria at the base of the food chain in these environments. To reap the benefits of this high local production in an otherwise seemingly barren deep-sea at similar depths, metazoans must possess specific adaptations to deal with the challenging conditions, among which chronic hypoxia is probably one of the most limiting. All metazoans that have been studied to date indeed exhibit oxygen requirements comparable to those of close relatives that live in well-oxygenated environments [START_REF] Childress | Metabolic rates of animals from hydrothermal vents and other deep-sea habitats[END_REF][START_REF] Hourdez | Adaptations to hypoxia in hydrothermal vent and cold-seep invertebrates[END_REF].
A study of morphological adaptations in decapodan crustaceans revealed that, contrary to annelid polychaetes [START_REF] Hourdez | Adaptations to hypoxia in hydrothermal vent and cold-seep invertebrates[END_REF], there is usually no increase in gill surface areas in vent decapods compared to their shallow-water relatives [START_REF] Decelle | Morphological adaptations to chronic hypoxia in deep-sea decapod crustaceans from hydrothermal vents and cold-seeps[END_REF]. In the vent decapods however, the scaphognathite is greatly enlarged, suggesting an increased ventilatory capacity. In situ observations of vent shrimp in settings typified by different oxygen concentrations also indicated that these animals increased their ventilation rates under lower oxygen conditions [START_REF] Decelle | Morphological adaptations to chronic hypoxia in deep-sea decapod crustaceans from hydrothermal vents and cold-seeps[END_REF]. This behavioral change is consistent with other decapods in which both the frequency and amplitude of scaphognathite beating are increased in response to hypoxia (see [START_REF] Mcmahon | Respiratory and circulatory compensation to hypoxia in crustaceans[END_REF][START_REF] Whiteley | Responses to environmental stresses: oxygen, temperature, and pH. Chapter 10. In: The Natural History of Crustaceans[END_REF][START_REF] Whiteley | Responses to environmental stresses: oxygen, temperature, and pH. Chapter 10. In: The Natural History of Crustaceans[END_REF] for reviews).
The vent crab Bythograea thermydron Williams 1980 is able to maintain its oxygen consumption relatively constant over a wide range of oxygen concentrations (i.e. oxyregulation capability), down to much lower concentrations than the shallow water species for which this ability was studied [START_REF] Gorodezky | Effects of sulfide exposure history and hemolymph thiosulfate on oxygen-consumption rates and regulation in the hydrothermal vent crab Bythograea thermydron[END_REF]. The capacity to oxyregulate can involve different levels of regulation. At the molecular level, the functional properties of the hemocyanins (in particular their oxygen affinity) play a central role. Hemocyanins from decapods that inhabit deep-sea hydrothermal vents exhibit very high oxygen affinities, allowing the extraction of oxygen from hypoxic conditions (see [START_REF] Hourdez | Adaptations to hypoxia in hydrothermal vent and cold-seep invertebrates[END_REF] for a review). The properties of these blood oxygen carriers can also be affected by allosteric effectors contained in the hemolymph. [START_REF] Gorodezky | Effects of sulfide exposure history and hemolymph thiosulfate on oxygen-consumption rates and regulation in the hydrothermal vent crab Bythograea thermydron[END_REF] showed that animals injected with thiosulfate, a byproduct of sulfide detoxification in the animals that increases hemocyanin affinity, allowed the crab to oxyregulate down to ever lower environmental oxygen concentrations. At the physiological level, adaptation to lower oxygen concentrations can involve ventilatory and cardio-circulatory responses (for a review, see [START_REF] Mcmahon | Respiratory and circulatory compensation to hypoxia in crustaceans[END_REF]. In the shallow water species studied to date, the circulatory response can be quite complex, involving modifications of the heart rate, stroke volume and peripheral resistance. Typically, decapods increase their ventilation (scaphognathite beating frequency and power), decrease their heart rate (bradycardia), and adjust the circulation of their hemolymph, decreasing its flow to digestive organs in favor of the ventral structures [START_REF] Mcmahon | Respiratory and circulatory compensation to hypoxia in crustaceans[END_REF].
The responses to variable oxygen levels involving modifications of heart parameters (contraction rate) and ventilation have however so far not experimentally been studied in hydrothermal vent species of crabs. We studied the cardiac response of the Mid-Atlantic Ridge (MAR) vent crab Segonzacia mesatlantica Williams 1988. This species has been collected at different sites from the MAR, with depths ranging from 850 m (Menez Gwen site) to 4080 m (Ashadze 1 site). To study the cardiac response to varying levels of environmental oxygen in S. mesatlantica, experimental animals were equipped with electrodes and their electrocardiograms (EKG) under different oxygen concentrations were recorded. As temperature affects oxygen demand as well, its effect on the EKG of the experimental crabs was also investigated.
Materials and methods
Animal collection
Specimens of the crab Segonzacia mesatlantica were collected on the hydrothermal vent sites Menez Gwen (37˚50'N, 31˚31'W, 855 m water depth), Logatchev (14˚45.12'N, 44˚58.71'W, 3050 m water depth), and Irinovskoe (14˚20.01'N, 44˚55.36'W, 2700 m water depth), on the Mid-Atlantic Ridge. They were captured with the remotely operated vehicle (ROV) MARUM-Quest, deployed from the Research Vessel Meteor (Menez MAR M82/3 and M126 cruises). The animals were brought back to the surface in a thermally insulated box attached to the ROV and quickly transferred to a cold room (5-8˚C) before they were used for experiments. The few specimens from the shallow site (Menez Gwen) that were not used in the experimental system and were maintained at atmospheric pressure all survived for at least two weeks.
Experimental system
The experimental animals were fitted with three stainless-steel thin wire electrodes: two inserted on either side of the heart and the third in the general body cavity as a reference (Fig. 1A). The flexible leads where then glued directly onto the carapace to prevent the movements of the crabs from affecting the position of the implanted electrodes. Attempts to use the less-invasive infrared sensor simply attached to the shell [START_REF] Depledge | A computer-aided physiological monitoring system for continuous, long-term recording of cardiac activity in selected invertebrates[END_REF]Andersen, 1990, Robinson et al., 2009) proved unsuccessful on this species and on another vent crab, Bythograea thermydron (Hourdez, unpub. failure). The same sensor type also proved unsuccessful to measure scaphognathite beating frequency (possible penetration of the seawater into the sensor). The equipped animals were maintained in an anodized aluminum, custom-built, 500-ml pressure vessel (inside diameter 10 cm and 6.4 cm height) with a Plexiglas window that allowed regular visual inspection of the animals. The smaller specimens (LI-2 through LI-6 and LI-10, see Table 1) were maintained in a smaller pressure vessel (inside diameter 3.5 cm and 5 cm height). The crabs were free to move inside the pressure vessel. During the experiment, the crabs usually remained calm, with some short activity periods. The water flow was provided by a HPLC pump (Waters 515), set to 3-5 ml min -1 , depending on the size of the specimen to yield an oxygen concentration decrease (compared to the inlet concentration) that was measurable with confidence. The pressure was regulated by a pressurerelief valve (Swagelok SS-4R3A) (Fig. 1B). Oxygen concentration in the inlet water was modulated by bubbling nitrogen and/or air with various flows.
Oxygen concentrations are reported in µmol.l -1 rather than partial pressures as these latter depend on the total pressure and are therefore difficult to compare between experiments run at different pressures. Oxygen concentration was measured directly after the pressure relief valve with an oxygen optode (Neofox, Ocean Optics). Three-way valves allowed water to flow either through the vessel containing the animal or through a bypass without affecting the pressure in the system. Oxygen consumption rate (in µmol.h -1 ) was then simply calculated as the difference between these two values, taking the flow rate into consideration: Oxygen consumption rate= (O2 in-O2 out) * WFR where 'O2 in' is the inlet oxygen concentration (in µmol.l -1 ) measured with the bypass in place, 'O2 out' the oxygen concentration (in µmol.l -1 ) measured when the water was flowing through the pressure vessel, and WFR the water flow rate (in l.h -1 ) controlled by the HPLC pump.
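As a worked example of this calculation (with hypothetical inlet and outlet values, not measurements from the study), the computation reduces to a one-line function:

```python
# Oxygen consumption rate = (O2_in - O2_out) * water flow rate
def oxygen_consumption_rate(o2_in, o2_out, flow_l_per_h):
    """o2_in and o2_out in µmol/l, flow in l/h; result in µmol/h."""
    return (o2_in - o2_out) * flow_l_per_h

# e.g., a pump flow of 3 ml/min corresponds to 0.18 l/h
print(oxygen_consumption_rate(100.0, 60.0, 0.18))  # -> 7.2 µmol/h
```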
Temperature in the pressure vessel was controlled by immersion in a temperature-controlled water bath (10 or 20 ± 0.2 ˚C). Temperature ramping to study the effect of temperature on the heart rate was obtained by progressively increasing temperature in the water bath. We were interested in the crabs' response to rapid temperature variation and therefore chose a rate of about 1˚C every 15 minutes. All experiments were run at a pressure equivalent to in situ pressure for the two sites (80 bars (8 MPa), equivalent to 800 m water depth for Menez Gwen and 270 bars (27 MPa) for the Logatchev and Irinovskoe sites).
Recording of electrocardiograms (EKG)
An electrical feed-through in the pressure vessel wall allowed the recording of the EKG of animals under pressure. We worked on a total of twelve specimens, including ten from Semyenov/Irinovskoe, and two from Menez Gwen (Table 1).
Out of these twelve specimens, six were equipped with electrodes to monitor the electrical activity of their heart, including both Menez Gwen specimens. The voltage variations were recorded with a LabPro (Vernier) interface equipped with an EKG sensor (Vernier) for 30 s for each of the conditions. Voltage values were recorded every 1/100 s. Recordings were made every 2-15 minutes, depending on the rate of change of the studied parameters. Specifically, temperature change was fast and recordings were made every 2-3 minutes, while changes in oxygen concentration were slower and recordings were made every 10-15 minutes. As the animals live in a highly variable environment, we were interested in the response to rapidly-changing conditions and did not give animals time to acclimate to various oxygen levels or temperature values.
The crabs sometimes went through transient cardiac arrests (some as long as 20 seconds), a phenomenon also reported in shallow-water crabs, in particular in response to tactile and visual stimuli (e.g. [START_REF] Stiffler | A comparison of in situ and in vitro responses of crustacean hearts to hypoxia[END_REF][START_REF] Florey | The effects of temperature, anoxia and sensory stimulation on the heart rate of unrestrained crabs[END_REF][START_REF] Defur | The effects of environmental variables on the heart rates of invertebrates[END_REF]. Recordings comprising such arrests were not used for the calculation of heart rate. There was no apparent correlation between the conditions and the occurrence of the arrests, although they seemed to occur less at lower oxygen tensions (pers. obs.).
Changes in the parameters of the EKG (amplitude, shape) could reflect important modifications of cardiac output. We studied the effect of both temperature and oxygen concentration on the shape and amplitude of the EKG.
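From each 30 s voltage trace (sampled every 1/100 s), heart rate and peak amplitude can be derived by simple peak detection. The sketch below is only an illustration of such post-processing in Python; the thresholds and function names are assumptions, not the analysis code used in the study:

```python
# Sketch: estimate heart rate (b.p.m.) and mean EKG peak amplitude from a 30 s trace
# sampled at 100 Hz. Illustrative only; the detection thresholds are hypothetical.
import numpy as np
from scipy.signal import find_peaks

def heart_rate_and_amplitude(ekg, fs=100.0):
    ekg = np.asarray(ekg, dtype=float) - np.mean(ekg)    # remove baseline offset
    peaks, props = find_peaks(ekg,
                              height=0.5 * np.max(ekg),  # keep only the large peaks
                              distance=int(0.3 * fs))    # refractory period ~0.3 s
    duration_min = len(ekg) / fs / 60.0
    rate_bpm = len(peaks) / duration_min
    mean_amplitude = float(np.mean(props["peak_heights"]))
    return rate_bpm, mean_amplitude
```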
Electrode implantation, recovery and effect of pressure
Shortly after electrode implantation, the EKG was directly recordable at atmospheric pressure, although a bit erratic while manipulating the animal. For one of the shallower site crabs, the EKG was recorded for 4 hours at atmospheric pressure and 10˚C in the closed experimental vessel. Once under stable conditions, the EKG quickly became regular, and its shape resembled that under pressure (data not shown). From an initial heart rate oscillating between 30 and 45 beats per minute (b.p.m.), the heart rate increased to 55 b.p.m. between 3.5 and 4 hours after implantation. After pressure was applied, a bradycardia appeared (heart rate down to 35 b.p.m.), which lasted for about 1.5 hours before the heart rate returned to typical values for 10˚C at 80 bars (8 MPa; 60-70 b.p.m., see below). Similarly, animals from the Irinovskoe or Logatchev sites (2700-3050 m depth) and acclimated to their in situ pressure exhibit a transient bradycardia when exposed to lowered pressure. Within a few minutes, the animals stabilized their heart rate to values greater than at simulated in situ pressure. At pressure values lower than 150 bars (15 MPa), the heart rate remained relatively stable (see supplementary material S1). Upon return to the in situ pressure value, the heart rate rapidly returned to the initial value. Consequently, all experiments were run at pressures equivalent to that of the depth at which they were captured and, after re-pressurization, the animals were given 8-12 hours of recovery before experiments were initiated.
Determination of curve parameters
We used curve fitting to determine key values for the heart rate and oxygen consumption as a function of oxygen concentration. In particular, the critical oxygen concentration at which oxygen consumption or heart rate drops can be relatively subjective or its determination strongly dependent on the relatively small number of data points below that value. When the critical oxygen concentration is small (as it is the case for S. mesatlantica, see results), very few data points can be obtained below that value. Instead, we used the equation:
Y = a (X - c) / (1 + b (X - c))
where X is the oxygen concentration, Y is the physiological parameter (heart rate or oxygen consumption rate), b is a steepness coefficient, the ratio a/b is the value at plateau (X infinite), and c is the intercept of the curve with the x-axis.
The curve fitting parameters a, b, and c were obtained with the software JMP11, based on an exploration of possible values for the parameters a, b, and c, and the best values were determined by minimizing the difference between the observed (experimental) and expected values (based on the curve equation). Because of the very steep drop of both heart rate and oxygen consumption rate (see Fig. 3), the intercept c is hereafter referred to as the critical oxygen concentration.
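Although the fits were performed in JMP11, the same procedure can be reproduced with a standard nonlinear least-squares routine. The following minimal sketch (Python with SciPy, illustrative data only) recovers the plateau value a/b and the critical oxygen concentration c:

```python
# Sketch: fit Y = a*(X - c) / (1 + b*(X - c)) to (oxygen, rate) measurements and
# extract the plateau value (a/b) and the critical oxygen concentration (c).
# The data points below are illustrative, not values from the study.
import numpy as np
from scipy.optimize import curve_fit

def saturating(x, a, b, c):
    return a * (x - c) / (1.0 + b * (x - c))

oxygen = np.array([10.0, 15.0, 25.0, 50.0, 100.0, 150.0, 200.0])  # µmol/l
rate   = np.array([5.0, 40.0, 70.0, 78.0, 80.0, 81.0, 80.0])      # e.g., b.p.m.

(a, b, c), _ = curve_fit(saturating, oxygen, rate, p0=[10.0, 0.1, 8.0])
print(f"plateau value a/b = {a / b:.1f}")
print(f"critical oxygen concentration c = {c:.1f} µmol/l")
```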
Results
Oxygen consumption rates
The oxygen consumption rates (in µmole O2 per hour) were measured for all 12 specimens (Table 1, Fig. 2). With a size range of 0.4-41.5 g wet weight, the oxygen consumption rate increases with an allometry coefficient of 0.48 (p=2.8 10 -8 ). The sex of the animals does not have a significant effect on the regression (ANCOVA, p=0.1784). The oxygen consumption rates for the two specimens from Menez Gwen (800 m depth) do not differ markedly from the specimens from the other sites (2700-3050 m depth), and fall within the 95% confidence interval established for the ten specimens from the deeper sites.
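The allometry coefficient corresponds to the slope of a log-log regression of consumption rate against wet weight. A minimal sketch (Python with SciPy, using the wet weights and respiration rates listed in Table 1) of this calculation:

```python
# Sketch: allometric exponent from a log-log regression of oxygen consumption rate
# (µmol/h) against wet weight (g), using the values reported in Table 1.
import numpy as np
from scipy.stats import linregress

wet_weight = np.array([41.5, 24.2, 7.7, 0.9, 3.0, 1.7, 0.2, 0.6, 3.6, 39.1, 10.1, 0.4])
o2_rate    = np.array([41.1, 35.4, 16.6, 5.3, 11.8, 8.8, 3.3, 5.5, 12.3, 47.2, 31.4, 6.3])

fit = linregress(np.log10(wet_weight), np.log10(o2_rate))
print(f"allometry coefficient (slope) = {fit.slope:.2f}, r^2 = {fit.rvalue ** 2:.3f}")
```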
Effect of oxygen concentration on heart rate and oxygen consumption
In all investigated specimens, both the oxygen consumption rates and heart rates follow the same pattern. For all specimens equipped with electrodes (n=6), oxygen concentration does not affect the heart rate over most of the range of concentrations, until a critical low concentration is reached (Fig. 3). Below that concentration, the heart rate and the oxygen consumption both drop sharply.
The oxygen consumption reaches zero at oxygen concentrations ranging from 7.3 to 9.9 µmole.l -1 for the deeper sites and 11.3-14.2 µmole.l -1 for the shallower site (Table 1). At 10˚C, the heart rate typically oscillates between 61 and 68 beats per minute (b.p.m.) while it usually varies between 90 and 108 b.p.m. at 20˚C for the specimens from the shallower site (Supplementary data Fig. S2). For the specimens from the deeper sites, the heart rate at 10˚C is higher (72.3-81.5 b.p.m.; Table 1) than that of the specimens from the shallower site (62.5 and 69.0; Table 1). Although the heart rate tends to decrease with increasing wet weight, the correlation is not significant (log/log transform linear correlation p=0.15).
Effect of temperature
Temperature affects the beating frequency of the heart for the three specimens tested (Fig. 4). The Arrhenius plot for the two LI individuals shows a biphasic curve between 3˚C and the Arrhenius break point at 19˚C. At a temperature higher than 19˚C or lower than 3˚C, the heart rate is more variable and drops sharply in warmer water, indicating that 19˚C is the upper temperature limit for this species. Below 3˚C (normal deep-sea temperature in the area), the heartbeat is also irregular, possibly indicating a lower temperature limit for this species. This phenomenon is also observed at 6˚C for the specimen from the shallower site (normal deep-sea water temperature 8˚C for this area). In addition to the upper and lower breakpoints, there is an inflection point for the two deeper specimens at 10.7˚C (Fig. 4). The colder part of the curve has a slope of ca. 4, while the upper part of the curve has a slope of 1.7-2. This inflection point could also be present for the shallower specimen but the temperature range below that value is too short to allow a proper estimate of the slope.
Modifications of the EKG characteristics
In addition to the beating frequency, temperature also affects the overall shape of the EKG (Fig. 5). It is characterized by two large peaks at 12˚C, in addition to a smaller one preceding the large peaks. The second large peak increases in height in respect to the first one as temperature increases. At 16˚C, the two peaks have approximately the same amplitude, and fuse completely at higher temperatures.
The height of the second peak increases with temperature while that of the first peak remains unchanged up to 16˚C. Beyond that temperature, the height of the fused peaks keeps increasing at a rate similar to that of the second peak, suggesting it is the contribution of that second peak that is responsible for the changes in amplitude. Beyond 20˚C, the amplitude tends to level off or decrease slightly.
Over most of the range tested, oxygen concentration does not seem to affect the amplitude of the EKG (Fig. 6). Below 25 µmole.l -1 of oxygen, however, the amplitude of the EKG drops sharply. This phenomenon occurs at lower oxygen concentrations than the drop in heart rate (32 µmole.l -1 at this temperature for the same animal). At values below 25 µmole.l -1 of oxygen, the shape of the EKG is also significantly affected (Fig. 7), with a drastic decrease of the second peak height, to the point it may completely disappear (Fig. 7, 17 µmole.l -1 oxygen inset). Upon return to oxygen concentrations greater than 25 µmole.l -1 , the EKG returns to its pre-hypoxia characteristics (Fig. 7, 62 µmole.l -1 oxygen inset).
Discussion
Oxygen consumption rate
As expected, the oxygen consumption rates increase with increasing size, and this increase follows an allometry with coefficient 0.48. This coefficient is at the low end of the range reported for other marine crustaceans [START_REF] Vidal | Rates of metabolism of planktonic crustaceans as related to body weight and temperature of habitat[END_REF]. This value is lower than that reported for the shore crab Carcinus maenas (0.598; [START_REF] Wallace | Activity and metabolic rate in the shore crab Carcinus maenas (L.)[END_REF]). There is evidence that the allometry of metabolism is linked to activity, metabolic rate, and habitat [START_REF] Carey | Economies of scaling: More evidence that allometry of metabolism is linked to activity, metabolic rate and habitat[END_REF].
Compared to the other hydrothermal vent species studied to date, Bythograea thermydron, the rate is very similar for the large animals (Mickel and Childress, 1982b). Mickel and Childress (1982) however report an allometry coefficient not significantly different from 1.0, although the total size range in their study was reportedly not sufficient (20.0-111.4 g wet weight) to obtain reliable data. The wet weights in our study cover two orders of magnitude (0.4-41.5 g wet weight), yielding a more reliable allometry coefficient. The oxygen consumption rates are also similar to other deep-sea and shallow-water crustaceans [START_REF] Childress | Metabolic rates of animals from hydrothermal vents and other deep-sea habitats[END_REF]. Deciphering the meaning of the small allometry coefficient will require a comparative study of crabs closely related to Segonzacia and inhabiting different habitats.
Changes in the EKG shape characteristics
In decapod crustaceans, the heartbeat is initiated within the cardiac ganglion, where a small number of pacemaker neurons control this heartbeat (for a review, see [START_REF] Mcmahon | Intrinsic and extrinsic influences on cardiac rhythms in crustaceans[END_REF]. [START_REF] Wilkens | Re-evaluation of the stretch sensitivity hypothesis of crustacean hearts: hypoxia, not lack of stretch, causes reduction in the heart rate of isolated[END_REF] reports that it is maintained in isolated hearts, provided that the partial pressure of oxygen is sufficient. The overall shape of the EKG resembles that recorded for Bythograea thermydron, another hydrothermal vent crab studied by Mickel and Childress (1982) and other, shallow-water, crabs [START_REF] Burnovicz | The cardiac response of the crab Chasmagnathus granulatus as an index of sensory perception[END_REF]. For B. thermydron, the authors did not consider changes in amplitude as a function of pressure because the amplitude seemed to be affected by the time elapsed since electrode implantation and pressure affected the electrical connectors in the vessel. In the present study however, pressure remained unchanged and changes in amplitude of the EKG were accompanied by modifications of the shape of the EKG, suggesting the amplitude changes were not artifacts.
In our recordings, the EKG pattern clearly comprises two major peaks that fuse at temperatures greater than 16˚C. The relationship between each of the peaks and its potential physiological role (pacemaker, cardiac output control) would be an interesting avenue to explore, but the need to work under pressure for S. mesatlantica renders this line of study difficult in this species.
Effect of temperature
For the Menez Gwen animals, the Arrhenius plot of the heart rate revealed that the normal range of functioning for this species lies between 6˚C and 21˚C at 80 bars (8 MPa). The temperature of the deep-sea water at these depths is close to 8˚C, and the animals are then unlikely to encounter limiting conditions at the colder end of the range. Hydrothermal fluid, mixing with the deep-sea water, can however yield temperatures far greater than the upper end of the range, likely limiting the distribution of the crabs in their natural environment, in combination with other limiting factors (e.g. oxygen, sulfide). The upper temperature is however lower than that reported for B. thermydron, the eastern Pacific relative of S. mesatlantica (Mickel and Childress, 1982), although pressure is very likely to affect the physiology of the animals. These authors report that at 238 atm (23.8 MPa, corresponding to their typical environmental pressure) B. thermydron is capable of surviving 1 h at 35˚C but died when exposed for the same duration at 37.5 or 40˚C. Animals maintained at 238 atm and 30˚C however exhibited a very disrupted EKG, and three of the five experimental animals died within 2 hours. Contrary to the East Pacific Rise species B. thermydron, the Mid-Atlantic Ridge species S. mesatlantica is found living as shallow as 800 m.
Animals collected from this shallow site can survive at least 2 weeks at atmospheric pressure (provided they are kept in a cold room at 5-8˚C), suggesting that they do not experience disruptions of the heart function as severe as those observed in B. thermydron at 1 atm (Mickel and Childress, 1982). This hypothesis is supported by the observations performed at 1 atm on freshly collected animals that showed a normal aspect of the EKG (see 'Materials and Methods' section). S. mesatlantica is also found at greater depths on other sites of the Mid-Atlantic Ridge (down to at least 3000 m depth). Our work on specimens from these deeper sites did not reveal an extended upper temperature tolerance; on the contrary, the Arrhenius breakpoint is 2˚C lower for these animals (19˚C for the two specimens tested instead of 21˚C for the shallower site specimen). The same shift is observed for the low temperatures: the lower thermal tolerance is about 6˚C for the shallow water specimen and about 3˚C for the deeper ones.
Overall, it seems the total thermal range is about 16˚C, with a shift towards colder temperature in deeper specimens. The absolute heart rates recorded for our species do not differ greatly from other crabs of similar sizes for comparable temperatures. They are very similar to those obtained for the shore crab Carcinus maenas [START_REF] Ahsanullah | Factors affecting the heart rate of the shore crab Carcinus maenas (L.)[END_REF][START_REF] Giomi | A role for haemolymph oxygen capacity in heat tolerance of eurythermal crabs[END_REF], the mud crab Panopeus herbsti, and the blue crab Callinectes sapidus [START_REF] Defur | The effects of environmental variables on the heart rates of invertebrates[END_REF]. Some other species have much greater heart rates (235 b.p.m. for Hemigrapsus nudus at 18˚C) or much lower values (82 b.p.m. for Libinia emarginata at 25˚C; [START_REF] Defur | The effects of environmental variables on the heart rates of invertebrates[END_REF].
The characteristics of the EKG pattern also changed with temperature. Although the amplitude of the signal for the first large peak remained unchanged, that of the second large peak increased linearly with temperature up to the maximal temperature. This could reflect modifications of the cardiac output in S. mesatlantica. In Cancer magister and C. productus, the cardiac output declines but an increased oxygen delivery to the organs is possible through a concomitant decrease of hemocyanin oxygen affinity [START_REF] Florey | The effects of temperature, anoxia and sensory stimulation on the heart rate of unrestrained crabs[END_REF]. There is however no study linking EKG parameters to cardiac output in crustaceans.
Oxygen and capacity limited thermal tolerance (OCLTT)
The concept of oxygen and capacity limited thermal tolerance (OCLTT) was developed to explain the observations on temperature tolerance [START_REF] Frederich | Oxygen limitation of thermal tolerance defined by cardiac and ventilatory performance in spider crab, Maja squinado[END_REF][START_REF] Pörtner | Climate change and temperature-dependent biogeography: oxygen limitation of thermal tolerance in animals[END_REF]. The authors hypothesized that a mismatch between oxygen demand and oxygen supply results from limited capacity of the ventilatory and the circulatory systems at temperature extremes. They argue that limitations in aerobic performance are the first parameters that will affect thermal tolerance. In Segonzacia mesatlantica, the Arrhenius plot of the heart rate exhibits a biphasic profile for these animals from the deeper sites, with an inflection point at 10.7˚C. In the temperate eurythermal crab Carcinus maenas, [START_REF] Giomi | A role for haemolymph oxygen capacity in heat tolerance of eurythermal crabs[END_REF] report a similar observation. The authors interpret this inflection as the pejus temperature, beyond which hypoxemia sets in until the critical temperature (onset of anaerobic metabolism). This would then indicate an optimal temperature range of 3-10.7˚C from S. mesatlantica, beyond which the exploitation of hemocyanin-bound oxygen reserve delays the onset of hypoxemia. Contrary to the C. maenas hemocyanin, however, the hemocyanin from S. mesatlantica does not release oxygen in response to increased temperature, a lack of temperature sensitivity that is found in other hydrothermal vent crustacea hemocyanins (Chausson et al., 2004, Hourdez and[START_REF] Hourdez | Adaptations to hypoxia in hydrothermal vent and cold-seep invertebrates[END_REF].
Effect of oxygen concentration
At hydrothermal vents, temperature and oxygen concentration are negatively correlated (Johnson et al., 1986). The animals therefore need to extract even more oxygen to meet their metabolic demand when it is less abundant in their environment. The conditions also fluctuate rapidly and animals need to respond quickly to the chronic hypoxia they experience.
In most crustaceans, exposure to hypoxia below the critical oxygen tension induces a bradycardia, coupled with a redirection of the hemolymph from the digestive organs towards ventral structures [START_REF] Mcmahon | Respiratory and circulatory compensation to hypoxia in crustaceans[END_REF]. In our species, the heart rate remained relatively stable over a very wide range of oxygen concentrations and only dropped below a critical oxygen concentration similar to that of the oxygen consumption. This critical oxygen concentration ranges from 7.3 to 14.2 µmol.l -1 . These values are greater than the half-saturation oxygen tension of the hemocyanin (P50=3.7 µmol.l -1 at 15˚C, [START_REF] Chausson | Respiratory adaptations to the deep-sea hydrothermal vent environment: the case of Segonzacia mesatlantica, a crab from the Mid-Atlantic Ridge[END_REF], suggesting that hemocyanin is not the limiting factor in the failure of oxyregulation at lesser environmental oxygen concentrations. Diffusive and convective (ventilation) processes are likely to limit oxygen uptake. However, the ability to maintain a stable heart rate down to low environmental tensions, along with the high affinity hemocyanin, likely accounts for the very low critical oxygen concentration observed for the vent crabs (this study; Mickel and Childress, 1982) compared to their shallow water relatives (e.g. 100-130 µmol.l -1 in Carcinus maenas; [START_REF] Taylor | The respiratory responses of Carcinus maenas to declining oxygen tension[END_REF].
The EKG amplitude and shape do not change in response to oxygen concentration variations over most of the tested range. Below the critical concentration however, both the amplitude and the presence of the second large peak are affected. This suggests that, although the pacemaker activity remains, the heart either contracts less strongly or does not contract at all. As a result, the hemolymph does not circulate and the animal is unable to regulate its oxygen uptake. As for shallow-water species, the animals are able to survive anoxia. In the crabs Cancer magister and C. productus, this can be tolerated for up to 1 hr [START_REF] Florey | The effects of temperature, anoxia and sensory stimulation on the heart rate of unrestrained crabs[END_REF]. Our experimental crabs were also maintained for the same duration below their ability to oxyregulate, a time during which, once oxygen reserves were depleted, they had to rely on anaerobiosis. During that time, the heart rate varied greatly (Fig. 7), possibly indicating attempts to reestablish oxygen uptake.
Measuring the modifications of blood flow to different parts of the body was unfortunately not feasible inside the pressure vessels. Similarly, we were not able to measure ventilation rate or ventilation flow in our set-up. However, the ability to oxyregulate while maintaining a stable heart rate (neither tachycardia nor bradycardia) strongly suggests that ventilation increases under hypoxic conditions, as it does, for example, in C. maenas [START_REF] Taylor | The respiratory responses of Carcinus maenas to declining oxygen tension[END_REF][START_REF] Giomi | A role for haemolymph oxygen capacity in heat tolerance of eurythermal crabs[END_REF]. In the hydrothermal vent shrimp Alvinocaris komaii, animals observed in situ in environments characterized by lower oxygen tensions exhibited a higher ventilation rate [START_REF] Decelle | Morphological adaptations to chronic hypoxia in deep-sea decapod crustaceans from hydrothermal vents and cold-seeps[END_REF], as typical of other decapods (see [START_REF] Mcmahon | Respiratory and circulatory compensation to hypoxia in crustaceans[END_REF]). Although not directly observed in S. mesatlantica, a similar response is very likely.
Conclusion
As all marine invertebrates, Segonzacia mesatlantica can experience hypoxia through both environmental exposure and as a consequence of increased metabolic consumption during exercise. Unlike most marine invertebrates however, environmental hypoxia is chronic -and possibly continuous- for this species. S. mesatlantica, like its East Pacific Rise congener Bythograea thermydron, is well adapted to these challenging conditions and capable of regulating its oxygen uptake down to very low environmental tensions. In S. mesatlantica, this capacity most likely relies on an increased ventilation rate, while the heart rate remains stable. This is probably helped by the increased ventilatory capacity found in the vent species compared to their shallow water relatives [START_REF] Decelle | Morphological adaptations to chronic hypoxia in deep-sea decapod crustaceans from hydrothermal vents and cold-seeps[END_REF]. When not exposed to temperature increase, hypoxia corresponds to metabolic hypoxia and the response only involves ventilation modulation (and possibly circulatory adjustments). For this species, however, a temperature increase is usually correlated with more pronounced hypoxia. Although the response to hypoxia is similar at 10 and 20˚C, temperature itself has a strong effect on the heart rate and the characteristics of the EKG. It would be interesting to investigate whether the lack of temperature sensitivity [START_REF] Chausson | Respiratory adaptations to the deep-sea hydrothermal vent environment: the case of Segonzacia mesatlantica, a crab from the Mid-Atlantic Ridge[END_REF] impacts the cardiac output response to temperature in comparison to non-hydrothermal vent endemic species.

Figure 1: Experimental set-up. (A) Electrode implantation on an experimental animal (cephalothorax width about 50 mm). (B) Flow-through pressure vessel, control of oxygen concentration, and position of the oxygen optode. The bypass line allows the isolation of the vessel and the measurement of oxygen concentration in the inlet water.

Figure 2: Oxygen consumption rates (oxygen cons.; in µmol.l-1.h-1) as a function of wet weight in grams (WW) for all specimens (n=12, see Table 1 for specimen characteristics). The linear regression has the equation log(Oxygen cons.) = 0.48 * log(WW) + 0.87, and a correlation coefficient r2 = 0.959 (p<0.001).

Figure 3: Oxygen consumption rate (open diamonds) and heart rate (black diamonds) in response to oxygen levels in the pressure vessel. Conditions: 270 bars (27 MPa) of pressure at 10˚C for specimen LI-1 (see Table 1 for specimen characteristics). The curves were fit to the datapoints as described in the Materials and methods section.

Figure 4: Arrhenius plot of temperature-induced changes of the heart rate (HR) of S. mesatlantica for three specimens under in situ pressure. 1000/K: reciprocal temperature in Kelvin (multiplied by 1000 for ease of reading). See Table 1 for specimen characteristics. The Arrhenius breakpoint, inflection point in the relationship, and slope values on either side of this point are also indicated.

Figure 5: Modifications of EKG characteristics in response to temperature under 80 bars (8 MPa) of pressure and at non-limiting oxygen concentrations (50-100 µmol.l-1). Open diamonds: amplitude of the first peak in the EKG; black squares: amplitude of the second peak. Note that at temperature values greater than 16˚C, the two peaks fuse, and only the black symbols are used. Each datapoint corresponds to a mean of 30-40 measurements (depending on the temperature) and its standard deviation.
Table 1: Collection, morphological and physiological characteristics of the experimental animals at 10˚C and under in situ pressure. Animals LI-2 through LI-6 and LI-10 were too small for adequate electrode implantation. MG: individuals from the Menez Gwen site; LI: individuals from the Logatchev or Irinovskoe sites; a: cephalothorax width; b: average value for the plateau area of the graph; nd: not determined.

Specimen ID | Depth of capture (m) | Sex | Size a (mm) | Wet weight (g) | Heart rate b (b.p.m.) | Resp. rate (µmol.h-1) | Critical O2 conc. (µmol.l-1)
MG-1 | 800 | M | 51 | 41.5 | 69.0 | 41.1 | 14.2
MG-2 | 800 | F | 34 | 24.2 | 62.5 | 35.4 | 11.3
LI-1 | 3050 | M | 27 | 7.7 | 81.1 | 16.6 | 9.8
LI-2 | 3050 | M | 13 | 0.9 | No data | 5.3 | 8.8
LI-3 | 3050 | M | 21 | 3.0 | No data | 11.8 | 9.9
LI-4 | 3050 | M | 17 | 1.7 | No data | 8.8 | 8.9
LI-5 | 3050 | M | 7.5 | 0.2 | No data | 3.3 | 8.9
LI-6 | 3050 | M | 12 | 0.6 | No data | 5.5 | 9.2
LI-7 | 3050 | F | 23 | 3.6 | 81.5 | 12.3 | 8.9
LI-8 | 2700 | F | 53 | 39.1 | 74.8 | 47.2 | nd
LI-9 | 2700 | F | 31 | 10.1 | 72.3 | 31.4 | 7.9
LI-10 | 3050 | M | 12 | 0.4 | No data | 6.3 | 7.3
Acknowledgements
All the work described here would not have been possible without the skills and help of the ROV MARUM-Quest crew, not only for animal collections but also for fixing a broken fiber optics cable used in my system: many thanks to a great crew. The crew of the RV Meteor has also been very helpful on board. I would also like to thank Nicole Dubilier, chief scientist, for inviting me on this cruise and for exciting scientific discussions. The modular pressure vessels used in this study were based with permission on a design by Raymond Lee. I would like to thank Jim Childress for very insightful discussions. This research was supported in part by the European Union FP7 Hermione programme (Hotspot Ecosystem Research and Man's Impact on European Seas; grant agreement no. 226354), and by the Region Bretagne HYPOXEVO grant. The German Research Foundation (DFG) and the DFG Cluster of Excellence "The Ocean in the Earth System" at MARUM, Bremen (Germany) are acknowledged for funding and support of the research cruise with the RV Meteor (MenezMar M82/3 and M126) and ROV MARUM-Quest.
The author declares that he has no competing interests. | 41,054 | [
"736019"
] | [
"541812"
] |
01763827 | en | [
"info"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01763827/file/1570414966.pdf | Dan Radu
email: [email protected]
Adrian Cretu
email: [email protected]
Benoît Parrein
Jiazi Yi
email: [email protected]
Camelia Avram
email: [email protected]
Adina As
Benoit Parrein
email: [email protected]
Adina Astilean
email: [email protected]
Flying Ad Hoc Network for Emergency Applications connected to a Fog System
The main objective of this paper is to improve the efficiency of vegetation fire emergency interventions by using the MP-OLSR routing protocol for data transmission in Flying Ad Hoc NETwork (FANET) applications. The presented conceptual system design could potentially increase the chances of rescuing people caught up in natural disaster environments, the final goal being to provide public safety services to interested parties. The proposed system architecture model relies on emerging technologies (Internet of Things & Fog, Smart Cities, Mobile Ad Hoc Networks) and current concepts available in the scientific literature. The two main components of the system consist of a FANET, capable of collecting fire detection data from GPS- and video-enabled drones, and a Fog/Edge node that allows data collection and analysis, but also provides public safety services for interested parties. The sensing nodes forward data packets through multiple mobile hops until they reach the central management system. A proof of concept based on the MP-OLSR routing protocol for efficient data transmission in FANET scenarios and possible public safety rescuing services is given.
Introduction
The main objective of this paper is to introduce the MP-OLSR routing protocol, which has already proved to be efficient in MANET and VANET scenarios Yi et al (2011a), [START_REF] Radu | Acoustic noise pollution monitoring in an urban environment using a vanet network[END_REF], into FANET applications. Furthermore, as a proof of concept, this work presents a promising smart system architecture that can improve the chances of saving people caught in wildfires by providing real-time rescuing services and a temporary communication infrastructure. The proposed system could locate the wildfire and track the dynamics of its boundaries by deploying a FANET composed of GPS- and video-enabled drones to monitor the target areas. The video data collected from the FANET is sent to a central management system that processes the information, localizes the wildfire and provides rescuing services to the people (fire fighters) trapped inside wildfires. The QoS (Quality of Service) of data transmission in the proposed FANET scenario is evaluated to prove the efficiency of MP-OLSR, a multipath routing protocol based on OLSR, in these types of applications.
Wildfires are unplanned events that usually occur in natural areas (forests, prairies) but they can also reach urban areas (buildings, homes). Many such events occurred in recent years (e.g. Portugal and Spain 2017, Australia 2011). The forest fires in the north of Portugal and in Spain killed more than 60 people. During the Kimberley Ultramarathon held in Australia in 2011, multiple persons were trapped in a bush fire that started during the sports competition.
The rest of this paper is structured as follows. Section 2 presents the related works in the research field. Section 3 introduces the proposed system design. Section 4 shows and discusses the QoS performance evaluation results. Finally, Section 5 concludes the paper.
Related works
Currently there is a well-known and increasing interest in providing Public Safety services in case of emergency/disaster situations. The US Geospatial Multi Agency Coordination1 provides a web service that displays fire dynamics on a map by using data gathered from different sources (GPS, infrared imagery from satellites). A new method for detecting forest fires based on the color index was proposed in [START_REF] Cruz | Efficient forest fire detection index for application in unmanned aerial systems (uass)[END_REF]. The authors suggest the benefits of a video surveillance system installed on drones. Another system, composed of unmanned aerial vehicles, used for dynamic wildfire tracking is discussed in [START_REF] Pham | A distributed control framework for a team of unmanned aerial vehicles for dynamic wildfire tracking[END_REF].
This section presents the state of the art of the concepts and technologies used for the proposed system design, current trends, applications and open issues.
Internet of Things and Fog Computing
Internet of Things (IoT), Fog Computing, Smart Cities, Unmanned Aerial Vehicle Networks, Mobile Ad Hoc Networks, Image Processing Techniques and Algorithms, and Web Services are only some of the most promising current and emerging technologies. They all share a great potential to be used together in a large variety of practical applications that could improve, sustain and support people's lives.
There are many comprehensive surveys in the literature that analyse the challenges of IoT and provide insights over the enabling technologies, protocols and possible applications [START_REF] Al-Fuqaha | Internet of things: A survey on enabling technologies, protocols, and applications[END_REF]. In the near future, traditional cloud computing based architectures will not be able to sustain the IoT exponential growth leading to latency, bandwidth and inconsistent network challenges. Fog computing could unlock the potential of such IoT systems.
Fog computing refers to a computing infrastructure that allows data, computational and business logic resources and storage to be distributed between the data source and the cloud services in the most efficient way. The architecture could have a great impact in the emerging IoT context, in which billions of devices will transmit data to remote servers, because its main purpose is to extend cloud infrastructure by bringing the advantages of the cloud closer to the edge of the network where the data is collected and pre-processed. In other words, fog computing is a paradigm that aims to efficiently distribute computational and networking resources between the IoT devices and the cloud by:
• allowing resources and services to be located closer to the edge or anywhere in between the cloud and the IoT devices;
• supporting and delivering services to users, possibly in an offline mode, for example when the network is partitioned;
• extending the connectivity between devices and the cloud across multiple protocol layers.
Currently the use cases and the challenges of the edge computing paradigm are discussed in various scientific works [START_REF] Lin | A survey on internet of things: Architecture, enabling technologies, security and privacy, and applications[END_REF][START_REF] Al-Fuqaha | Internet of things: A survey on enabling technologies, protocols, and applications[END_REF], [START_REF] Ang | Big sensor data systems for smart cities[END_REF]. Some of the well known application domains are: energy, logistics, transportation, healthcare, industrial automation, education and emergency services in case of natural or man made disasters. Some of the challenging Fog computing research topics are: crowd based network measurement and interference, client side network control and configuration, over the top content management, distributed data centers and local storage/computing, physical layer resource pooling among clients, Fog architecture for IoT, edge analytics sensing, stream mining and augmented reality, security and privacy.
There are numerous studies that connect video cameras to Fog & IoT applications. The authors of Shi et al (2016) discuss a couple of practical usages for Fog computing: cloud offloading, video analytics, smart home and city, and collaborative edge. Also, some of the research concepts and opportunities are introduced: computing stream, naming schemes, data abstraction, service management, privacy and security, and optimization metrics. Authors of [START_REF] Shi | The promise of edge computing[END_REF] present a practical use case in which video cameras are deployed in public areas or on vehicles and they could be used to identify a missing persons image. In this case, the data processing and identification could be done at the edge without the need of uploading all the video sources to the cloud. A method that distributes the computing workload between the edge nodes and the cloud was introduced Zhang et al (2016). Authors try to optimize data transmission and ultimately increase the life of edge devices such as video cameras.
Fog computing could be the solution to some of the most challenging problems that arise in the Public Safety domain. Based on the most recent research studies and previous works concerning public safety Radu et al (2012), [START_REF] Yi | Multipath routing protocol for manet: Application to h.264/svc video content delivery[END_REF], it can be stated that real-time image & video analysis at the edge of a FANET network could be successfully implemented in the public safety domain, more specifically for fire detection and for the provisioning of rescue services.
One of the most important advantages of Fog computing is the distributed architecture that promises better Quality of Experience and Quality of Service in terms of response, network delays and fault tolerance. This aspect is crucial in many Public Safety applications where data processing should be done at the edge of the system and the response times have hard real-time constraints.
Flying Ad Hoc Networks
Unmanned Aerial Vehicles (UAVs, commonly known as drones) are becoming more and more present in our daily lives thanks to their ease of deployment in areas of interest. The high mobility of the drones, together with their enhanced hardware and software capabilities, makes them suitable for a large variety of applications including transportation, farming and disaster management services. FANETs are considered a sub-type of Mobile Ad Hoc Networks with a greater degree of mobility and usually a greater distance between nodes, as stated in [START_REF] Bekmezci | Flying ad-hoc networks (fanets): A survey[END_REF].
A practical FANET testbed, built on top of Raspberry Pi, that uses two WiFi connections on each drone (one for ad hoc network forwarding and the other for broadcasted control instructions) is described in [START_REF] Bekmezci | Flying ad hoc networks (fanet) test bed implementation[END_REF]. Another FANET implementation that consists of quadcopters for disaster assistance, search and rescue and aerial monitoring, as well as its design challenges, is presented in [START_REF] Yanmaz | Drone networks: Communications, coordination, and sensing[END_REF].
Routing protocols
OLSR (Optimized Link State Routing) protocol, proposed in Jacquet et al (2001), is an optimization of the link state protocol. This single path routing approach presents the advantage of having shortest path routes immediately available when needed (proactive routing). OLSR protocol has low latency and performs best in large and dense networks.
In [START_REF] Haerri | Performance comparison of aodv and olsr in vanets urban environments under realistic mobility patterns[END_REF] OLSR and AODV are tested against node density and data traffic rate. Results show that OLSR outperforms AODV in VANETs, providing smaller overhead, end-to-end delay and route lengths. Furthermore there are extensive studies in the literature regarding packets routing in FANET's. Authors of [START_REF] Oubbati | A survey on position-based routing protocols for flying ad hoc networks (fanets)[END_REF] give a classification and taxonomy of existing protocols as well as a complete description of the routing mechanisms for each considered protocol. An example of a FANET specific routing protocol is an adaptation of OLSR protocol that uses GPS information and computes routes based on the direction and relative speed between the UAV's is proposed in [START_REF] Rosati | Dynamic routing for flying ad hoc networks[END_REF].
In this paper the authors use the MP-OLSR (Multiple Paths OLSR) routing protocol, based on OLSR and proposed in Yi et al (2011a), which allows packet forwarding in FANET and MANET networks through spatially separated multiple paths. MP-OLSR simultaneously exploits all the available and valuable paths between a source and a destination to balance the traffic load and to reduce congestion and packet loss. It also provides a flexible degree of spatial separation between the multiple paths by penalizing the edges of previously found paths in successive runs of a modified Dijkstra algorithm.
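To make the multipath computation concrete, the following is a minimal sketch (in Python) of the idea behind MP-OLSR's route construction: paths are computed one after another with Dijkstra's algorithm on a copy of the topology, and the cost of edges already used is increased so that subsequent paths tend to be spatially separated. The topology, link costs and penalty factor below are illustrative and do not reproduce the exact MP-OLSR penalty functions.

```python
import heapq

def dijkstra(graph, src, dst):
    """Standard Dijkstra returning the cheapest path from src to dst.

    graph: dict mapping node -> dict of neighbour -> edge cost.
    """
    dist, prev, visited = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst != src and dst not in prev:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def multipath_dijkstra(graph, src, dst, n_paths=3, penalty=2.0):
    """Compute several spatially separated paths by penalizing reused edges."""
    g = {u: dict(nbrs) for u, nbrs in graph.items()}  # local copy of the topology
    paths = []
    for _ in range(n_paths):
        path = dijkstra(g, src, dst)
        if path is None:
            break
        paths.append(path)
        for a, b in zip(path, path[1:]):   # make the edges of this path more expensive
            g[a][b] *= penalty
            if a in g[b]:                  # links assumed bidirectional
                g[b][a] *= penalty
    return paths

# Tiny illustrative topology (costs are hypothetical link metrics).
topo = {
    "S": {"A": 1, "B": 1},
    "A": {"S": 1, "C": 1, "B": 2},
    "B": {"S": 1, "C": 2, "A": 2},
    "C": {"A": 1, "B": 2, "D": 1},
    "D": {"C": 1},
}
print(multipath_dijkstra(topo, "S", "D"))
```

Running the example returns the cheapest path first and then alternative routes that avoid the already penalized links whenever the topology allows it.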
Based on the above considerations, a system architecture that can improve the chances of saving people caught in wildfires by providing real-time rescuing services and a temporary communication infrastructure is proposed.
System Design
One of the main objectives of this work is to design and develop a smart system architecture, based on FANET networks, which integrates with the numerous emerging applications offered by the Internet of Things and which is:
• extensible: the system architecture should allow any new modules to be easily plugged in;
• reliable: the system should support different levels of priority and quality of service for the modules that will be plugged in. For example, the public safety and emergency services that usually have real-time hard constraints should have a higher priority than other services that are not critical;
• scalable: the architecture should support the connection of additional new Fog components, features and high node density scenarios;
• resilient: the system should be able to provide and maintain an acceptable level of service whenever there are faults in its normal operation.
The overview of the proposed model, in the context of Internet of Things & Fog Computing, is given in Figure 1. The system could locate the wildfire and track the dynamics of its boundaries by deploying a flying ad hoc network composed of GPS and video enabled drones to monitor the target areas. The fire identification data collected from the FANET is sent to a central management system that processes the data, localizes the wildfire and provides rescuing services to the people (fire fighters) trapped inside the wildfires. The proposed system intends to support and improve emergency intervention services by integrating, based on the real-time data collected from the Fog network, multiple practical services and modules such as:
• affected area surveillance;
• establishing the communication network between the disaster survivors and rescue teams;
• person in danger identification and broadcasting of urgent notifications;
• supporting the mobility of the first responders through escape directions;
• rescuing vehicle navigation.
Our FEA (FANET Emergency Application) network topology is presented in Figure 2 and it is composed of three main components:
• A MANET of mobile users' phones;
• FANET - video and GPS equipped drones that also provide sufficient computational power capabilities for fire pattern recognition;
• Fog infrastructure that supports FANET data collection at the sink node located at the edge of the network. This provides data storage, computational power and supports different communication technologies for the interconnection with other edge systems.
This last component can be implemented through an object store as proposed in Confais et al (2017b), where a traditional BitTorrent P2P network can be used for storage purposes. Combined with a Scale-out NAS as in Confais et al (2017a), the Fog system avoids costly metadata management (even for local accesses) and gains computing capacity thanks to an I/O-intensive distributed file system. Moreover, the global Fog system is able to work in a disconnected mode in case of network partitioning from the backbone.
FEA uses a FANET network to collect fire identification data from drones (GPS and video enabled), and a MANET network composed of users' smartphones. Sensing nodes periodically transmit data to the central management system, where the fire dynamics is determined for monitoring purposes. If a fire has been detected by a sensing drone, rescuing information will be computed, based on the dynamics of the fire, and broadcast back into the FANET and MANET so that the people trapped in the fire are able to receive the safety information on their smartphones in real time. We make the following assumptions, which will be taken into account for the simulation scenario modelling, regarding the FEA message forwarding:
• when fire is detected by sensing drones, they start to periodically forward data packets with the information regarding fire dynamics over multiple hops in the mesh network towards the sink node;
• the central management system processes the fire detection data received from FANET nodes and computes the fire dynamics using the GPS coordinates that are included in the received data messages;
• the central management system sends back into the mobile network (FANET and MANET) rescuing information that will be received by people in danger on their smartphones.
In the FEA system, FANET nodes are responsible for: fire identification based on video recording, forwarding of the processed information (along with GPS coordinates) towards the collector node, and forwarding of rescuing information to the MANET nodes. The proposed network architecture could also serve as a temporary communication infrastructure between rescuing teams and people in danger.
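As an illustration of the kind of payload assumed in this design, the sketch below shows one possible structure for the fire-detection messages forwarded by the sensing drones towards the sink node; the field names and the JSON serialization are hypothetical and only meant to fix ideas.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class FireReport:
    """Hypothetical payload sent periodically by a sensing drone (illustrative only)."""
    drone_id: str      # FANET node identifier
    latitude: float    # GPS position of the detection
    longitude: float
    confidence: float  # output of the on-board fire recognition, in [0, 1]
    timestamp: float   # detection time (seconds since epoch)

    def to_bytes(self) -> bytes:
        """Serialize the report for a CBR-like periodic transmission."""
        return json.dumps(asdict(self)).encode("utf-8")

# A drone that detects fire would emit one such report per sending interval.
report = FireReport("uav-07", 46.3512, -0.4578, 0.93, time.time())
packet = report.to_bytes()  # payload handed to the routing layer (MP-OLSR)
print(len(packet), "bytes:", packet)
```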
One of the many advantages of FEA is its ease of deployment: all the technologies and components of the system are widely available, inexpensive and easy to provide. Also, the Quality of Service in the FANET network, which is essential in emergency services where delays and packet delivery ratios are very important, is enhanced by using the MP-OLSR routing protocol, which chooses the best multiple paths available between source and destination.
System Evaluation
The simulations are performed to evaluate MP-OLSR in the proposed FANET scenario. This section is organized as follows. The simulation environment configuration and scenario assumptions are given in Section 4.1, and the Quality of Service performance of OLSR and MP-OLSR is then compared in Section 4.2.
Simulation Scenario
For the simulations we designed an 81-node FANET & MANET hybrid topology placed on a 1480 m x 1480 m grid. The Random Waypoint mobility model was used with different maximal speeds suitable for the high mobility of drones: 1-15 m/s (3.6-54 km/h). We make the assumption that only a subset of nodes (possibly the ones that detect fire or the smartphones of people in danger) need to communicate with the Fog edge node through the mesh network, so the data traffic is provided by 4 Constant Bit Rate (CBR) sources. Qualnet 5 was used as the discrete event network simulator. The detailed Qualnet network scenario and routing protocol configuration parameters are listed in Table 1. The terrain altitude profile is shown in Figure 3.
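For completeness, the following is a minimal sketch of the Random Waypoint model used for node mobility: each node repeatedly picks a random destination in the simulation area and moves towards it at a speed drawn uniformly from the configured range. The fixed time step and the absence of pause time are simplifications and do not reproduce Qualnet's internal implementation.

```python
import math
import random

def random_waypoint(area=1480.0, v_min=1.0, v_max=15.0, duration=100.0, dt=0.1):
    """Yield (t, x, y) positions of one node moving by the Random Waypoint model."""
    x, y = random.uniform(0, area), random.uniform(0, area)
    t = 0.0
    while t < duration:
        # pick a new waypoint and a speed in the configured range
        wx, wy = random.uniform(0, area), random.uniform(0, area)
        speed = random.uniform(v_min, v_max)
        dist = math.hypot(wx - x, wy - y)
        steps = max(1, int(dist / (speed * dt)))
        for i in range(steps):
            # move a constant fraction of the remaining distance each step
            x += (wx - x) / (steps - i)
            y += (wy - y) / (steps - i)
            t += dt
            yield t, x, y
            if t >= duration:
                return

# Example: trace one drone for the first second of simulated time.
for t, x, y in random_waypoint():
    if t > 1.0:
        break
    print(f"t={t:.1f}s  position=({x:.1f}, {y:.1f}) m")
```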
Simulation Results
For each routing protocol a number of 80 simulations were executed (10 different seeds for each speed range). To compare the performance of the protocols, the following metrics are used (a minimal computation sketch is given after the list):
• Packet delivery ratio (PDR): the ratio of the data packets successfully delivered at all destinations.
• Average end-to-end delay: averaged over all received data packets from sources to destinations as depicted in [START_REF] Schulzrinne | Rfc 1889: Rtp: A transport protocol for real-time applications[END_REF].
• Jitter: average jitter is computed as the variation in the time between packets received at the destination, caused by network congestion and topology changes.
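The sketch below illustrates how these three metrics can be computed from per-packet send and receive timestamps. The trace format is an assumption made for illustration (it is not Qualnet's output format), and jitter is computed here simply as the mean delay variation between consecutive deliveries rather than with the smoothed estimator of RFC 1889.

```python
def qos_metrics(sent, received):
    """Compute PDR, average end-to-end delay and average jitter.

    sent:     dict packet_id -> send time (s) at the source
    received: dict packet_id -> receive time (s) at the destination
    """
    pdr = len(received) / len(sent) if sent else 0.0

    # End-to-end delay of each delivered packet.
    delays = [received[p] - sent[p] for p in sorted(received)]
    avg_delay = sum(delays) / len(delays) if delays else 0.0

    # Jitter as the mean variation of delay between consecutive deliveries.
    diffs = [abs(d2 - d1) for d1, d2 in zip(delays, delays[1:])]
    avg_jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return pdr, avg_delay, avg_jitter

# Toy trace: 4 packets sent every 50 ms, one of them lost in the network.
sent = {0: 0.00, 1: 0.05, 2: 0.10, 3: 0.15}
received = {0: 0.03, 1: 0.09, 3: 0.20}
print(qos_metrics(sent, received))  # -> (0.75, 0.04, 0.01)
```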
Figures 4, 5 and 6 show the QoS performance of MP-OLSR and OLSR in terms of PDR, end-to-end delay and jitter, with the standard deviation shown for each point.
From the obtained results it can be seen that PDR decreases slightly with mobility, as expected. For the proposed FANET scenario, MP-OLSR delivers on average a 10% higher PDR than the OLSR protocol. As expected, when the speed increases to values closer to the high mobility of FANET scenarios, the links become more unstable, so OLSR performance decreases while MP-OLSR provides a much better overall delivery ratio than OLSR (around 9% higher on average at higher speeds). MP-OLSR also performs much better than OLSR in terms of end-to-end delay and jitter. The delay of OLSR is around 2 times higher at the highest speed, while its jitter is 50% higher. This aspect is very important for the proposed emergency application, where the response time must be kept as low as possible. Furthermore, the MP-OLSR standard deviation for all the results is smaller than for OLSR.
Conclusion and Future Work
We described the FEA system as a possible emergency application for the MP-OLSR routing protocol, which uses a FANET network to collect fire dynamics data from drones and, through a central management system, provides safety instructions back to the people in danger. The performance evaluation results show that MP-OLSR is suitable for FANET scenarios, most specifically emergency applications, where mobility is high and response times have hard real-time constraints.
The following are some of our future works: system deployment on a real testbed, analysis of the cooperation between MANET & FANET, and data analysis based on thermal cameras.
Fig. 1 System overview
Fig. 2 Emergency system architecture
Fig. 3 Qualnet altitude profile pattern for 100 m²
Fig. 4 Delivery ratio
Fig. 5 End-to-end delay
Fig. 6 Jitter
Table 1 Simulation parameters.
Simulation Parameter    | Value                  | Routing Parameter        | Value
Simulator               | Qualnet 5              | TC Interval              | 5 s
Routing protocols       | OLSRv2 and MP-OLSR     | HELLO Interval           | 2 s
Area                    | 1480 x 1480 x 34.85 m³ | Refresh Timeout Interval | 2 s
Number of nodes         | 81                     | Neighbor hold time       | 6 s
Initial nodes placement | Grid                   | Topology hold time       | 15 s
Mobility model          | Random Waypoint        | Duplicate hold time      | 30 s
Speeds                  | 1-15 m/s               | Link Layer Notification  | Yes
Number of seeds         | 10                     | No. of paths in MP-OLSR  | 3
Transport protocol      | UDP                    |                          |
IP                      | IPv4                   |                          |
IP fragmentation unit   | 2048 bytes             |                          |
Physical layer model    | PHY 802.11b            |                          |
Link layer data rate    | 11 Mbits/s             |                          |
Number of CBR sources   | 4                      |                          |
Sim duration            | 100 s                  |                          |
CBR start-end           | 15-95 s                |                          |
Transmission interval   | 0.05 s                 |                          |
Application packet size | 512 bytes              |                          |
https://www.geomac.gov | 22,795 | [
"3931",
"858423"
] | [
"189574",
"189574",
"473973",
"2071",
"189574",
"189574"
] |
01763828 | en | [
"shs"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01763828/file/Quality%201920-30_HES.pdf | Jean-Sébastien Lenfant
Early Debates on Quality, Market Coordination and Welfare in the U.S. in the 1930s
Introduction
The concept of quality in economics, as a relevant aspect of economic coordination, has gone through ups and downs since it was identified as a decision variable of the producer in a monopolistic competition environment in Chamberlin's The Theory of Monopolistic Competition (1933, first ed.; second ed.). To my knowledge, the concept of quality, how it has been defined and integrated into economic thinking, has not attracted the attention of historians of economic thought so far. Anyone looking for milestones in the history of quality in economics will find some accounts in works done in the fields of economic sociology [START_REF] Karpik | L'économie des singularités[END_REF], management (Garvin, 1984) and the economics of conventions [START_REF] Eymard-Duvernay | Conventions de qualité et formes de coordination[END_REF][START_REF] Eymard-Duvernay | La qualification des biens[END_REF]. Through the rare articles providing some historical sketch of the concept, the reader will get the idea that it was lurking in the theory of monopolistic competition (Chamberlin, 1933), that it was implicitly accounted for in Lancaster's "new approach to consumer behavior" [START_REF] Lancaster | A New Aproach to Consumer Behavior[END_REF], that it was instrumental to Akerlof's "lemons" market story [START_REF] Akerlof | The Market for 'Lemons'. Quality Uncertainty and the Market Mechanism[END_REF] and, eventually, that it became a relevant variable for the study of macroeconomic issues [START_REF] Stiglitz | The Causes and Consequences of the Dependence of Quality on Price[END_REF]. The purpose of the present article is to provide a starting point for a systematic history of quality in economics, going back some years before Chamberlin's staging of the concept. Indeed, it is our view that quality deserves more attention than it has received so far in economics and that it should not be relegated to marketing or socio-economic studies. A history of economics perspective on this concept is a prerequisite to help us understand the fundamental difficulties that accompany any attempt at discussing quality in standard economic thought, as well as the fruitfulness of the concept for our thinking about market failures and about welfare and regulatory issues in economics. The way it has been addressed is likely to tell us a lot about how standard views on rational behavior and market coordination have overshadowed cognitive and informational aspects of consumers' and producers' behavior, which were central in many discussions about branding, grading, and labeling of goods as well as about educating the consumer. Those cognitive and informational aspects are now resurfacing within behavioral perspectives on consumption and decision (nudge being one among many possible offspring of this) and consequently reflections on quality in economics may be a subject for renewed enquiry. Actually, many aspects of economic life related to the quality of goods (information, grading, standardizing, consumers' representations of quality) have been at the core of much research and discussion, mainly in the 1930s in the U.S.
This research did not proceed at the outset from broad theoretical views on competition or price coordination; instead it stemmed, on the one hand, from practical and sectoral accounts of specific impediments experienced by producers of agricultural products and, on the other hand, from the progressive setting up of institutions and organizations devoted to consumers' protection, with a particular intensity during Roosevelt's New Deal program. The present article will focus on this body of literature, with a view to weighing the arguments for and against the need for institutions protecting the consumer and improving the marketing of goods. First, contributions to this literature are to be situated within a historical account of the development of institutions in charge of doing research or of implementing standards to improve the functioning of markets and to protect consumers in this period. Research on the subject of quality was undoubtedly driven by a specific historical set of events, notably the development of mass production, marketing and branding in the 1920s and later on by the decrease of prices and losses of quality that accompanied the Great Depression. The first section deals with the institutional context. It focuses on the prominent role of the Bureau of Agricultural Economics and discusses the motivations and successes of the New Deal institutions linked with consumers' protection. Section 2 deals with the way agricultural economists discussed the issue of quality in relation with coordination failures and their consequences on welfare. As will be shown, early debates on quality in the 20th century led to two opposite ways of thinking about quality in relation with market coordination. Two points are in order before we embark.
First, a noteworthy aspect of the literature under study here (notably the one linked with farm economics) is that it followed its own agenda, independently of theoretical developments in academia. For this very reason, the research on quality and market coordination barely echoes the publication of Chamberlin's book and the follow-up literature, and it is likewise ignored in Chamberlin's book.1 Most of the research dealing with quality in the field of farm economics was done during the years 1928-1939 and continued slowly after WWII until around 1950 or so, then almost vanishing.
Second, in those years, the concept of quality not having been introduced into standard economics, we cannot expect to confront ideas of the 1930s with an even idealized theoretical account of quality. It has to be approached as something that was in need of being defined and constructed against the marginalist school of economics.2
Promoting the interest of producers ... and consumers
The development of grades, brands, informative labeling, and more broadly of quality indicators as a means to improving the marketing of goods and the allocation of resources was addressed slowly in the 1910s and gained momentum in the 1930s. Our goal in this section is to present the main institutions and organizations that have been involved in the promotion of standards and grades as a means for improving the functioning of markets and the welfare of producers and/or consumers.
The main features of this overview are, first, that standards and grading practices were promoted slowly due to pressure on the part of industrialists against them and, second, that the structuring of a policy devoted to protecting consumers was addressed late compared with policies focused on producers' protection on wholesale markets. We shall first highlight the role of the Bureau of Agricultural Economics as one of the agencies in charge of promoting quality standards. We then present the role of consumer associations and the development of Home Economics as a field of research and teaching linked with the promotion of the consumer as a legitimate figure whose welfare should be protected and promoted by government policies. We then move to a general view of the development of standards in the 1930s.
The Bureau of Agricultural Economics and the Grading of Agricultural Products
The history of the Bureau of Agricultural Economics is inseparable from the history of farm economics in the U.S. (1928). It soon appeared that growers and shippers could not know whether price variations resulted from variations in supply and demand or whether they reflected differences in the quality of products. Thus, the need for standardizing the goods exchanged was established as a condition for market expansion.3 It was expected that standards be based on scientific inquiry regarding the factors influencing quality (and then price), in relation with the use of the goods. As Tenny (1946, 1020), a former researcher at the Bureau of Markets, puts it, "other problems seemed to be getting themselves always to the front but none more so than cotton standards and grain standards." It is most interesting that the issue of quality and standards should be, in a sense, a foundational issue for the BAE. The BAE was then progressively involved in establishing standards for many agricultural products. The issue of standards concerned first and foremost cotton and grain, but "in fact, there was scarcely an agricultural product for which grades were not established, and these all were based largely not only on trade practices but also on scientific studies" (Tenny, 1946, 1022). In the 1920s, one major contribution of the BAE was the Outlook program, whose aim was to provide forecasts about supply, demand and prices of several agricultural products to farmers and to educate them to deal with the information contained in the outlook and take decisions [START_REF] Kunze | The Bureau of Agricultural Economics' Outlook Program in the 1920s as Pedagogical Device[END_REF]. For certain categories of products, it was conceived that forecasts could be published in time to allow growers to adapt their choice of plantations. As an elite institution producing research and policy recommendations, the BAE has been associated with a progressive view on economic policy, whose purpose was to devise policies that would secure fair prices to growers [START_REF] Mcdean | Professionalism, policy, and farm economists in the early Bureau of Agricultural Economics[END_REF] and that would deliver well organized and intelligible statistical information about prices. Contrary to farmers' Congressmen and group leaders, most economists at the BAE favored equality of opportunity for farmers and a government policy-making role to improve on market outcomes [START_REF] Hardin | The Bureau of Agricultural Economics under fire: A study in valuation conflicts[END_REF]. Historically, it has been considered convenient to construct official standards for the quality of cotton. Under the Cotton Futures Act, the US Department of Agriculture (USDA) controlled the standardization of cotton offered on exchange contracts. Until then, cotton grown and delivered in the US was classified according to US standards, while it was subjected to Liverpool standards on international markets. There was then a move towards acceptance of American standards throughout the world in 1924. Since its creation in 1922, the BAE's activities have often been criticized for interfering with market outcomes (through market information given to farmers and through predictions regarding futures). However, the BAE's power was limited. In 1938, the BAE was given more influence as a policy-making agency at the Department of Agriculture (rather than being centered on research).
This increase of power was short-lived due to power struggles within the Department between different action agencies, and would be even further reduced in 1947, after publication of a report on the effects of the return of veterans to agriculture (see [START_REF] Hardin | The Bureau of Agricultural Economics under fire: A study in valuation conflicts[END_REF]).4 Progressively, other kinds of products benefited from grading. This could be on a voluntary basis, as for instance with potatoes. For instance, [START_REF] Hopper | Discussion of [Urgent Needs for Research in Marketing Fruits and Vegetables[END_REF] notes that graded potatoes from other states tended to replace non-graded products. Then, compulsory grading was beneficial to Ontario growers, demonstrating to potato buyers in Ontario that the locally grown product, when properly graded, was equal to that produced in other areas (Hopper, 1936, 418). The BAE was at the forefront of developing standards for many farm products (fruits, vegetables, meat). Its main concern, however, was standards for wholesale markets. Concerns about consumers' welfare and protection through standards and grades of quality would develop later. For instance, the Food and Drug Administration (FDA) had already passed laws protecting consumers against dangerous ingredients and materials in food and in pharmaceutical products, but still the protection of the consumer was minimal and most often incidental. Also, the Federal Trade Commission regulations against unfair advertising were conceived of as a protection to producers and not to consumers. Overall, until the late 1920s, the BAE could appear as the main agency able to provide careful analysis and reflections on quality. Things would change to some extent in the 1930s, under Roosevelt's administration.
A Progressive Move Towards Consumers' Protection
The following gives an overview of the institutions (private or governmental) involved in promoting standards for consumer goods and protecting consumers. It points to the lack of coordination between the different institutions and their inability to develop an analytically sound basis for action. It is beyond the scope of this article to provide an exhaustive overview of the associations, agencies, groups, clubs, laboratories and publications involved in consumers' protection. Rather, it is to illustrate that their helplessness in promoting significant changes in legislation reflects disagreements as to the abilities of consumers and the efficiency of markets in promoting the appropriate level of qualities to consumers. This period, mainly associated with Roosevelt's New Deal programs, shows the limits and shortcomings of its accomplishments as regards consumers' protection. According to [START_REF] Cohen | A Consumers' Republic. The Politics of Mass Consumption in Postwar America[END_REF][START_REF] Cohen | Is it Time for Another Round of Consumer Protection? The Lessons of Twentieth-Century U.S. History[END_REF], one can distinguish two waves of consumer mobilization before WWII, the first one during the Progressive Era and the second one during the New Deal (1930s-1940s), two periods when reformers sought to organize movements to obtain more responsible and socially equitable legislation, notably regarding the protection and safety of consumers. While the reformers of the Progressive Era focused on safety laws and better prices (Pure Food and Drug Act, Meat Inspection Act, anti-trust Federal Trade Commission Act), reformers of the 1930s would insist on the promotion of consumers' welfare in general in the context of the Great Depression. The rapid development of private initiatives aimed at defending and educating consumers has been associated with the violent upheavals of the 1920s and 1930s. The 1920s witnessed an acceleration of mass production and the development of modern ways of marketing goods and advertising brands, while the Great Depression induced sudden variations in the price and quality of goods. Movements active in promoting the protection of the consumer were very diverse in their motivations and philosophy. Consumer associations and clubs were active in organizing consumer education, and many buying cooperatives across the country were created with a view to rebalancing bargaining power and offering goods that suited consumers' needs. Incidentally, consumer movements were prominently organized and activated by women (e.g. the American Association of University Women) and many consumer education programs were targeted towards women, who most often were in charge of the budget of the family [START_REF] Cohen | Is it Time for Another Round of Consumer Protection? The Lessons of Twentieth-Century U.S. History[END_REF]. Among private initiatives, let us mention Consumers' Research, the American Home Economics Association, the National Consumers League (founded 1891), the General Federation of Women's Clubs, the League of Women Voters, and the American Association of University Women. Several different associations were engaged in consumers' education and protection during the 1920s-1930s. This can be looked at within the context of the "Battle of Brands", that is, an overwhelming number of brands selling similar products, with important quality differences.
As Ruth O'Brien, a researcher at the Bureau of Home Economics, would sum up: "Never before have we had so many consumer organizations, so much written and spoken about consumers' problems." (O'Brien, 1935, 104). Numerous college-trained homemakers led consumers' study groups on advertising, informative labels and price and quality variations among branded goods [START_REF] O'brien | Sound Buying Methods for Consumers[END_REF]. They were trained through programs of professional organizations such as the American Association of University Women, the American Home Economics Association, the League of Women Voters and the National Congress of Parents and Teachers. Education is important above all for durable goods, for which it is not possible to benefit from experience. As bodies active in the promotion of standards and labels, they were asking for facts, that is, characteristics of goods: "If consumer education is to be really effective as ammunition, it must consist of carefully directed shrapnel made of good hard facts." (O'Brien, 1935, 105) Before the New Deal, there were already some governmental institutions in charge of consumers' protection, at least indirectly. This is the case of the BAE as well as of the Bureau of Standards and the Food and Drug Administration. Special mention has to be made of the Office of Home Economics (1915), which became a fully-fledged Bureau of Home Economics (BHE) in 1923; it was an agency of the Department of Agriculture whose function was to help households adopt good practices and routines of everyday life (cooking, nutrition, clothing, efficient time-use). Among other things the BHE studied the consumption value of many kinds of goods: the nutritive value of various foods, the utility of different fabrics, and the performance results of household equipment. The New Deal period appeared as a favorable period for the expression of consumers' interest. In the framework of the National Recovery Act, the Roosevelt administration set up two important agencies in this respect: the Consumers' Advisory Board, in the industrial administration, and the Consumers' Counsel in the agricultural administration. Both were supposedly in charge of voicing the consumers' interests. Several authors in the 1930s expressed hopes and doubts that those institutions would be given enough power to sustain a balanced development of the economy (e.g. [START_REF] Means | The Consumer and the New Deal[END_REF][START_REF] Douglas | The Rôle of the Consumer in the New Deal[END_REF]Walker, 1934;Blaisdell, 1935, Sussman and[START_REF] Sussman | Standards and Grades of Quality for Foods and Drugs[END_REF]. Gardiner C. [START_REF] Means | The Consumer and the New Deal[END_REF] offers the more dramatic account of such hopes:5
The Consumers' Advisory Board, The Consumers' Counsel-these are names which point to a new development in American economic policy-a development which offers tremendous opportunities for social well-being. Whether these opportunities will be realized and developed to the full or will be allowed to lapse is a matter of crucial importance to every member of the community. It may well be the key that will open the way to a truly American solution of the problem which is leading other countries in the direction of either fascism or communism. (Means, 1934, 7) The move towards recognizing the importance of the consumer in the economic process and its full representation in institutions is thus a crucial stake of the 1930s. To Means and other progressives, laissez-faire doctrines, entrusting the consumer as the rational arbitrator of the economy, are based on an ideal of strong and transparent market coordination, which, it is said, was more satisfactorily met in small-enterprise capitalism. Modern capitalism, on the contrary, is characterized by a shift of coordination from markets to big administrative units, and the prices of most commodities are now administered instead of being bargained. The consumer has lost much of his bargaining power and he is no longer in a position to know as much as the seller about the quality of goods. To avoid the hazards of an institutionalization of administered prices, the consumer is essential.6 To Means, as to other commentators, it is high time that consumers be given proper institutional recognition in government administration.7 During the New Deal, some associations and prominent figures were active in urging a true recognition of consumers' interests through the creation of a Department of the Consumer (asked for by Consumers' Research) or at least a consumers' standards board. This project was notably supported by the report of the Consumers' Advisory Board known as the "Lynd Report" after the name of Robert Lynd, head of the Consumers' Advisory Board.8 The Consumers' Advisory Board was a much weaker organization than the Industrial Advisory Board or the Labor Advisory Board (Agnew, 1934). Despite efforts to promote some kind of independent recognition of the interests of consumers, achieving parity with the Departments of Labor, Commerce and Agriculture, it was never adopted.9 The literature offers some testimonies and reflections pointing out the relative inefficiency and limited power of the agencies in charge of standards. On the one hand, there seems to be "general agreement that the interests of the consumer are not adequately represented in the process of government", while on the other hand "there is wide diversity of opinion as to the most effective method of remedying this deficiency. Concepts of the appropriate functions to be assigned to a Consumers' Bureau vary widely" (Nelson, 1939, 151).10 This state of institutional blockage stems from a fundamental difficulty in recognizing the Consumer as a powerful and pivotal entity in the functioning of a market economy.11 Clearly, the consumer represents a specific function (not a subgroup in the economy) and its interest "is diffuse compared with the other interests mentioned; it is harder to segregate and far more difficult to organize for its own protection" (Nelson, 1939, 152). More or less similar ideas are expressed by [START_REF] Douglas | The Rôle of the Consumer in the New Deal[END_REF] and [START_REF] Means | The Consumer and the New Deal[END_REF]. To Paul H.
Douglas, chief of the Bureau of Economic Education of the National Recovery Administration, a genuinely balanced organization of powers in a modern economy, searching for a harmony of interests, would imply the creation of a Department of the Consumer, a necessary complement to the Department of Commerce and the Department of Labor [START_REF] Douglas | The Rôle of the Consumer in the New Deal[END_REF].12
Helplessness of institutions regarding quality standards
If we now look more precisely at the issue of standards and grades, it turns out that the different agencies in charge of setting standards (Bureau of Standards, Bureau of Agricultural Economics, Federal Trade Commission) acted in a disorganized fashion and were hampered from providing specific protection to consumers by the dilution of their power ([START_REF] Nelson | Representation of the Consumer Interest in the Federal Government[END_REF]; see also Auerbach, 1949). By the end of the 1930s, the prevailing comments pointed out that the development of standards and grades for different kinds of consumer goods was not satisfactorily enforced. Ardent advocates for a new Food and Drug Act, by which a system of standards and grades of quality would be established, Sussman and Gamer (1935, 581) paint a dark picture of the situation, with only very few products being controlled by standards of quality (tea, butter), whose aim is essentially to define what the product is. The situation is one where government administrations are entitled to set standards of quality in a very loose way. Actually, Sussman and Gamer point out that committees in charge of standards have but an advisory status in general, and that the setting of standards may be purely formal, making the enforcement of laws costly and uncertain:
The Food and Drugs Act, which merely prohibits the sale of adulterated products, presents insuperable obstacles to proper enforcement because it contains no indication of the standard, a deviation from which constitutes adulteration. . . . In consequence, no standards are provided by which a court may judge whether a product is in fact adulterated or misbranded. The result is that each case must stand upon its own facts and the government is obliged to use numerous experts and scientific data to indicate the proper standard and to prove that there was a departure therefrom. (Sussman and Gamer, 1935, 583)
Lack of coordination, power struggles and pressures from industry explain why the outcome of the New Deal was eventually seen as unsatisfactory in terms of consumers' protection.13 At the same time, the NRA imposed informative labeling and NRA tags on many products. It extended the power to establish standards of quality to cover all food products. It proscribed unfair or deceptive advertising practices. But only on rare occasions were rules or standards promulgated that went beyond the proposals made by industrialists.14 What comes out of the debates on the proper scope of consumers' interests in governmental policy is that they help identify the terms in which the status of quality can be handled in economics from a theoretical standpoint. Many issues were set out on this occasion. First, there is the objective vs subjective account of quality; second, the impossibility of grading goods adequately according to a single scale; third, the scientific vs subjective way of grading; fourth, the allocative effect of the absence of grades or standards on the market vs the disturbing effect of standards on business and innovation; fifth, the effect of standards and grades on branding (and incidentally on the range of qualities available to consumers). The general trend towards more standardization and labeling was forcefully opposed by many industries and pro-market advocates. One good illustration of such views is George Burton Hotchkiss [START_REF] Hotchkiss | Milestones in the History of Standardizing Consumers' Goods[END_REF]. The main fear of Hotchkiss is that standardization will eradicate branding and will lower the average quality of goods.15 He brushes aside criticisms that the number of trade-marks bewilders consumers or that trade-marks achieve monopoly through advertising (Hotchkiss, 1936, 74). This is not to deny the usefulness of grades on some products when a scientific grading is possible.16
13 Nelson (1939, 160) notes that statutory provisions prohibited executive departments from lobbying. "Their only permissible activity, which some are exploiting fully, is to assist independent consumer organizations to present their points of view by furnishing them with information and advice. It has proved impossible, however, to maintain any coordinated consumer lobby to offset activities of business pressure groups."
14 One such example is the promulgation of rules regarding the rayon industry by the FTC.
15 "Trade-marks have developed when consumers came to accept some marks of makers 'as better guides' in purchasing than the hall mark of the Gild, or the seal of the town or crown officer. ... The trade-mark acquired value only through the experience of satisfied consumers, and when consumers found a mark they could trust, they did not care particularly whether it was the mark of a manufacturer or of a merchant. Even though the modern factory system has made it possible for manufacturers in many fields to dispose of their whole output under their own trade-mark, many of them still supply merchants, wholesalers, and large-unit retailers with equivalent merchandise to be marketed under their private trade-marks. Only a small proportion of consumers know or care that one trade-mark is a mark of origin and the other of sponsorship." (Hotchkiss, 1936, 73-74)
However, Hotchkiss questions the use of standards imposed by official regulation, because the benefits are counterbalanced by more disadvantages and because limits to the sellers' initiative in the end limit the freedom of buyers. Also, buying by specifications (intermediate goods) is not perfect and cannot satisfy all departments within an organization. Moreover, there is no absolute uniformity in testing articles, and fallible human judgment leads to approximations. Hotchkiss's assessment is typical of a pro-market bias by which, in the end, consumers, on an equal footing with producers, participate through their choices in fostering the production of an adequate range of qualities for different products. Administrative intervention would but corrupt such a mechanism:
The whole history of official regulation of quality can be summed up as follows. No form of it (that I have been able to discover) has, over any long period, been honestly or efficiently administered. No form of consumers' standards has continued to represent the wants and desires of consumers. No form of regulation has ever succeeded in protecting the consumers against fraud. No form of it has failed to prove oppressive and irksome to consumers themselves. Few business men have any confidence that a trial of official regulation of quality in America now would work out any more successfully. (Hotchkiss, 1936, 77) Besides, it would prevent the sound regulation through consumers' sovereign judgment:
The marketing of consumer goods is still accompanied by many abuses, but they cannot be ascribed to helplessness on the part of buyers. On the contrary, the marketing system in twentieth-century America puts greater power in the hands of consumers than any similar group has ever known. The power they exercise in daily over-the-counter buying can dictate standards of quality far better than can be done by delegated authority. They can force the use of more informative labels and advertising. (Hotchkiss, 1936, 77-78)
This process can be achieved through private initiatives to diffuse information to consumers. The modern housewife has to search for information by herself, with the help of domestic science experts, dietitians and testing laboratories, always keeping the power of final decision.17 Hotchkiss' stand is in line with a tradition of strong opposition to establishing standards and grades of quality.18 This running idea of a well-informed consumer, having at his disposal, if he wants to, sufficient information to make his choice, is precisely what is challenged by consumers' protection movements. Ruth O'Brien, a researcher at the Bureau of Home Economics, notes precisely that consumers' organizations feel resentment "at the fog which baffles and bewilders anyone trying to compare the myriad of brands on the present market." (O'Brien, 1935, 104) What then should be the right extent of legislation on standards? Going beyond traditional oppositions, the question was addressed from different standpoints, which laid the foundations for making quality a subject of inquiry for economists. A minimal conception would call for the provision of definitions of identity, eliminating the problem for courts of determining whether a product is or is not what it is supposed to be.19 This preliminary step is notably insufficient if it is to protect consumers, who have no way of ascertaining the quality of a product and its ability to satisfy their needs. To this aim, minimum standards of quality are required. Even those would not be enough most of the time if legislation is to be oriented towards protecting consumers; therefore "a comprehensive scheme of consumer protection must embrace definitions of identity, minimum standards of quality, and grades" (Sussman and Gamer, 1935, 587). Beyond that, there are questions left to scientists and technicians, to economists and psychologists, regarding the adequate basis, factors, attributes, properties and characteristics by which to grade a product. Here, we arrive at the difficulty of deciding with reference to what standards should be determined and of putting such standards into an Act, because they may prove out-of-date or faulty, and because revisions have to be made in due time owing to constant innovations of products and new uses. Manufacturers must innovate and develop their business within a context of constancy of standards. Consequently, "the legislators' function is limited to providing that mechanism which will best serve the purpose of the scientist or technician. Any attempt to set out standards in the act itself might seriously limit the effectiveness of a system of standards and grades."
(Sussman and Gamer, 1935, 589) Fundamentally, even authors who are definitely aware of the necessity to protect consumers tend to recognize the complexity of erecting such a set of standards if it is to be neither purely formal nor in conflict with some principles of market mechanism and individual liberty: "it is doubtful if government can do more than establish certain minimum standards of physical quality for that limited class of products, the use of which is intimately related to public health and safety. It is also doubtful if its sphere can be much extended without public regulation and control on a scale incompatible with our ideals of economic liberty." (Walker, 1934, 105) This overview of the motives and debates on the protection of consumers and of farmers as a specific category of agents in the 1930s shows that there is no consensus as to the proper scope of government intervention nor as regards the kind of standards and grades to be promoted. At the end of this descriptive overview of the stakes of introducing quality indicators on goods, it turns out that quality is identified as a complex subject for economics which has definite consequences on market outcomes, and which for that very reason deserves to be analyzed in a scientific way (through statistical evaluations, through experiments, through theoretical modeling). Even though the absence of standards or grades is identified as a source of coordination failure and of net losses to producers and consumers, the best way of intervening, through purely decentralized and private certifying bodies or through governmental agencies, is open to debate and in need of economic inquiry. However, in the 1930s, the stage was set for a serious discussion of quality issues in economics. The economists involved would certainly share the view that the subject of quality is an essential element of market coordination, with potentially important welfare effects on both consumers and producers. 35 years before [START_REF] Akerlof | The Market for 'Lemons'. Quality Uncertainty and the Market Mechanism[END_REF], Ruth O'Brien could summarize the situation in a clear-cut manner:
Grade labeling will affect the brand which has been selling a C grade for an A price. It should. It will affect unethical advertising. It should. But it will help rather than hinder the reputable manufacturer and distributor who now are obliged to meet such kinds of competition. We are all familiar with instances in which a very poor quality of a commodity has completely forced a higher quality off the market because there was no grading or definite means of informing the public of the differences between the products. Only superlatives were available to describe both. It is not to the consumer's interest that all low quality be taken off the market. But it is to her interest that she know what she is buying, that she pay a price which corresponds to this quality, and that she have a basis for comparing different qualities. (O'Brien, 1935, 108, emphasis mine)
Quality as a supplementary datum for economics
This section aims at understanding how the concept of quality could make its way into economic analysis proper. What was seen in the first section is that there were debates about quality and standards in the 1920s and 1930s that were driven by the identification of inefficiencies on markets, specifically on farm products markets. The view that what is being sold on markets is as important as the price at which it is sold was firmly established. From a history of economic thought perspective, what needs to be addressed is how economists would engage in giving this idea a scientific content, that is, how they would endeavor to adapt the framework of the marginalist theory of value in order to make room for quality as a supplementary datum of economic analysis. In the following, I regard work in the field of farm economics as a set of seminal contributions growing out of the context of inefficient market coordination on agricultural products described above. One cannot expect to see a whole new theory coming out well packed from those reflections. On the contrary, the best that we can expect is a set of ideas about analytical and theoretical issues linked with quality that suggest the intricacies of the subject and the methods to be followed to analyze quality issues. As a first step, I present Frederick Waugh's 1928 seminal statistical work and the follow-up literature, which serve as a starting point to identify themes and the lineaments of their theoretical treatment.
Price-quality relationships for farm products
Frederick Waugh pioneered work on quality and its influence on market equilibrium. "Quality Factors Influencing Vegetable Prices" appeared in 1928 in the Journal of Farm Economics, and was to be mentioned quite often as a reference article on this topic, giving an impetus to much research on the quality of farm products. Waugh's contribution is of prime importance, and it probably went unsurpassed in terms of method, setting up a standard of analysis for the next decade. His goal is to focus on quality as a factor influencing price differentials among goods at a microeconomic level, and this is, as he puts it, "an important difference" (Waugh, 1928, 185). The originality of Waugh's study is to concentrate exclusively on the "causes of variation in prices received for individual lots of a commodity at a given time" (Waugh, 1928, 185). The motivation behind it is not purely theoretical; it is that variations between the prices of different lots affect the returns to individual producers. One central point that needs considering is how Waugh and his followers would define quality. Quality, in the end, is any physical characteristic likely to affect the relative price of two lots of goods at the same time on a market. For instance, regarding farm products, it can be shape, color, maturity, uniformity, length, diameter, etc.; and it is the task of the economist to discover those that are relevant from a market-value perspective. One can note that quality factors are those that play a significant role on the market, at the aggregate level; consequently, it is assumed that even though some consumers may not be sensitive to one characteristic or another, quality factors are those that are commonly accepted as relevant to construct a hierarchy of values on the market and to affect the relative prices of goods. Here a first comment is worth making. Even though individuals may give different relative importance to different sets of characteristics, only those characteristics that are relevant enough in the aggregate are kept in the list of quality characteristics. In some sense, we can say that from this perspective, the market prices observed on different lots reveal quality characteristics. The aim of Waugh is not to question quality per se, but rather to identify that each market for an agricultural product, say cotton, is actually the aggregate of different sub-markets on which different qualities of cotton are supplied and demanded. But at the same time, Waugh is also aware that information and marketing processes can be misleading and ineffective if they do not correspond to the quality differentials that make sense to market participants. We are thus here, at the very beginning, at the crossroads of two potentially different ways of analyzing quality and the coordination aspects linked to it: one that relies on the forces of the market (a balance between producer and buyer behavior) to construct quality scales and reveal what counts as a determinant of quality; another that recognizes that participants in the market are active (perhaps in an asymmetric way) in constructing quality differentials (through marketing and signaling devices) and in making prices reflect those differences. The motivation for the whole analysis is clearly to help farmers adapt their production plan and marketing behavior to take advantage of as much as the market can offer them:
The farmer must adopt a production program which will not only result in a crop of the size most suited to market conditions, but he must produce varieties and types of each commodity which the market wants and for which it is willing to pay. His marketing methods, also, should be based on an understanding of the market demand for particular qualities. Especially, his grading and packaging policies should be based on demand if they are to be successful. Such terms as 'No.1,' 'A grade,' or 'Fancy' are meaningless unless they represent grades which reflect in their requirements those qualities which are important in the market. (Waugh, 1928, 186) On the one hand, it is assumed that we can be confident that the market will deliver information both on what counts as quality and on how much it counts (as explaining a price differential). Let us note that this may be demanding too much of markets, which are first supposed to indicate the equilibrium conditions for each well-identified product, for given conditions on supply and demand. 20 On the other hand, if a relevant quality differential is assumed to exist on a market and this quality differential is not reflected enough - or not at all - in price differentials, then the market is deemed inefficient, and this is interpreted as the result of an inability of participants to discriminate between the qualities of different lots. The fundamental tension lies here, in the fact that too much is expected of market data: first, to indicate the price premium paid for the best quality; second, to identify the relevant characteristics that explain it; and third, to reveal market failures to value quality. From the last quotation, we would tend to understand that, to Waugh, objectivity about quality is structured on the demand side and that farmers should adapt their production and the information associated with each lot to the kind of information that is relevant for buyers/consumers, to the exclusion of other kinds of information. 21 Waugh (1928) reports the results of a study of different products at the Boston wholesale market, recording the price and quality of lots sold and analyzing the influence on price of various factors through multiple correlation methods. We shall retain his analysis of the markets for asparagus and cucumbers. The asparagus market reveals that green color is the most important factor in Boston, explaining 41% of the price variation, while the size factor explains only 15% of the price variation. As regards cucumbers, two factors were measured, length and diameter 20 We will not digress on this issue in a purely theoretical fashion, which would lead us much too far.
21 Recall that Waugh focuses mainly on wholesale markets, where buyers are middlemen. Waugh relies on statistical studies, already completed or under way, intended to elicit a quantitative measurement of the effect of quality on price. One concerns the influence of the protein content of wheat on prices; another concerns egg quality and prices in Wilmington. The goal of those studies is quite practical: it is to identify a possible discrepancy between the structure of demand in terms of quality requirements and the actual structure of supply, and to discuss the possibility for farmers to adjust their production in terms of quality. Of course, most of the reasoning can apply to markets with ultimate consumers: "If it can be demonstrated that there is premium for certain qualities and types of products, and if that premium is more than large enough to pay the increased cost of growing a superior product, the individual can and will adapt his production and marketing policies to the market demand." (Waugh, 1928, 187). Occasionally, Waugh criticizes existing surveys that aim at discovering which qualities consumers find desirable, because they give no idea of their relative importance, because they are often biased by the methods used, and finally because the choice of consumption will depend not only on quality but also on price (see [START_REF] Waugh | Urgent Needs for Research in Marketing Fruits and Vegetables[END_REF]) (expressed as a percentage of length), and length explains 59% of the price variation. 22 The main conclusion is that there is a discrepancy between what counts for consumers and what serves as official characteristics used for grading the goods:
This type of study gives a practical and much-needed check on official grades and on market reports. It is interesting to note that U.S. grade No1 for asparagus does not require any green color, and the U.S. grades for cucumbers do not specify any particular length nor diameter. It is true that the length of green color on asparagus and length of cucumbers may be marked on the box in addition to the statement of grade, but if these factors are the most important ones, should not some minimum be required before the use of the name? (Waugh, 1928, 195) To sum up, Waugh considers that on a specific market - here the Boston market - there is an objective hierarchy of the lots sold according to some qualities that buyers consider the most relevant, thus ignoring others. It is not clear, however, to what extent producers are aware of those relevant qualities (in their sorting of lots). This hierarchy is manifestly at odds with the characteristics that make up the official grades used by sellers on the market. There is thus a likely discrepancy between the required characteristics of official grades (which are constructed outside the market) and the ones that are important on the market. Waugh calls for a better overlap between official grades and the preferences of consumers (or retailers) on the market. 23 One year later, in 1929, the Journal of Farm Economics would publish a symposium on this very same topic of price-quality relationships, later followed by many studies on different farm products. Clearly, the analysis leads to a challenge to the theory of value, which relies on scarcity and utility, because at best it does not deal with the influence of quality on utility (Youngblood, 1929, 525). 24 Farm economists working on quality agree that producers, especially farmers, do not care about producing better quality and improving their revenue. They care predominantly about increasing the yield. Moreover, marketing practices on certain markets show that cotton is sold on the basis of an average price corresponding to an average quality. Here we touch on the issue of a performative effect of the use or non-use of standards on markets. Because high-quality cotton is not rewarded on local markets, producers are not inclined to plant high-quality cotton and are creating the conditions for driving even more of the better qualities out of the market: "While the individual farmer may feel that he is profiting by the production of lowgrade or short-staple cotton, he is obviously lowering the average of the quality of cotton in his market and, therefore, the average price level not only for himself but for all his neighbors. From a community standpoint, therefore, the higher the quality of the cotton, the higher the price level." (Youngblood, 1929, 531) Clearly, Youngblood anticipates very important issues: "It need not be expected that the cotton growers will appreciate the importance of quality so long as they have no adequate incentive to grow better cotton" (Youngblood, 1929, 531). Actually, this is often the case on unorganized markets (mainly local markets), contrary to big regional or national markets, where trading is based on quality. 25 If it is recognized that quality differentials are not systematically accounted for, how can we expect to provide some objectivity to quality as a relevant economic variable? Implicitly, the answer is that some markets, particularly the biggest and best organized, can be taken as a yardstick, as providing an objective scale for quality and price differentials.
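To make the logic of these multiple-correlation studies concrete, the following is a minimal illustrative sketch, not a reconstruction of Waugh's actual computations: it regresses lot prices on measured quality factors and reports each factor's share of explained price variation. The variable names, the simulated data, and the use of ordinary least squares are assumptions made only for illustration.

```python
# Illustrative sketch only: a hedonic-style regression of lot prices on quality
# factors, in the spirit of Waugh's Boston wholesale-market study. The data are
# simulated; factor names and coefficients are purely hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_lots = 200

# Hypothetical quality measurements for each lot (e.g., inches of green color,
# stalk size); these stand in for the characteristics recorded on each lot.
green_color = rng.normal(5.0, 1.5, n_lots)
stalk_size = rng.normal(2.0, 0.5, n_lots)

# Simulated lot prices: quality factors plus unexplained variation.
price = 1.0 + 0.8 * green_color + 0.4 * stalk_size + rng.normal(0.0, 1.0, n_lots)

# Ordinary least squares on all factors jointly.
X = np.column_stack([np.ones(n_lots), green_color, stalk_size])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
fitted = X @ beta
r2_total = 1.0 - np.var(price - fitted) / np.var(price)

# Share of price variation associated with each factor taken alone
# (squared simple correlation), echoing statements like "x% of the price variation".
for name, factor in [("green color", green_color), ("stalk size", stalk_size)]:
    r = np.corrcoef(factor, price)[0, 1]
    print(f"{name}: {100 * r**2:.0f}% of price variation")
print(f"all factors jointly: {100 * r2_total:.0f}%")
```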
What we would like to point out is that in this body of literature, markets are credited with the power to bring out positive valuations of quality differentials, provided that participants receive adequate incentives. Clear-cut facts are supposed to be enough to make things work and to increase the wealth of growers. This contention is backed by the principle that a market failure can be established by comparing the market's outcome with the outcome of another market on which similar goods are exchanged.
An important point in our story is that, more or less, all the economists involved in those years in the field of agricultural economics seem to say that the system of grading is recognized by some participants in the markets, but not by all. This lack of information or knowledge hampers the working of markets, from central markets down to local ones. In the case of cotton, the impossibility of correctly assessing and valuing quality leads to careless harvesting and to the breeding of high-yielding short-staple varieties, thus leading to a sub-optimal equilibrium on markets [START_REF] Cox | Relation of the Price and Quality of Cotton[END_REF]. 26 The remedies to this situation are to concentrate markets by eliminating the smaller ones so as to create enough business, and to develop community production and cooperative marketing. Within a very short time span, a great number of studies on quality as related to price were conducted along the methodological lines set out by [START_REF] Waugh | Quality factors influencing vegetable prices[END_REF] (see [START_REF] Tolley | Recent Developments in Research Method and Procedure in Agricultural Economics[END_REF][START_REF] Waite | Consumer Grades and Standards[END_REF][START_REF] Norton | Differentiation in marketing farm products[END_REF]). Again, the common view is that those studies should serve as guides to production and marketing methods and to the establishment of standard grades representing variations in quality corresponding with market price differentials. Most of those studies concern the wholesale markets and are supposed to help improve coordination between producers and middlemen (shippers, merchants). Some studies point to the fact that markets can be particularly biased, giving no reward to quality differentials. For instance, [START_REF] Allred | Farm price of cotton in relation to quality[END_REF] show that on spot markets, growers are not rewarded for better quality above Middling. For those qualities, the hierarchy is not reflected in prices. 25 A number of experiments carried out by the BAE on cotton have explored the link between price and quality. They confirm "that staple length is of greater significance than grade" [START_REF] Crawford | Analysis of the Relation of Quality to Price of Cotton: Discussion by G.L. Crawford[END_REF] on organized markets, but not on unorganized ones: 'The unorganized local cotton markets rather effectively kill all incentive that a farmer may have to produce cotton of superior spinning utility. The question of the proper recognition of quality in our local markets is one of the fundamental problems with which we have to deal in cotton production and marketing.' (Crawford, 1929, 541). From statistics on trading on those markets, it turns out that staple length rather than grade is important for price differentials. The need is to provide clear-cut facts about the respective values of different grades on the markets (notably for exporting) and to adapt cotton growing to international demand.
26 "the farmers are not able to class their cotton accurately and a large percentage of the local buyers are not able to do so. Bargaining is done in horse-trading fashion on price and not on quality." (Cox, 1929, 548) 27 To improve the coordination on those markets, it is thus necessary to improve the bargaining power of sellers (growers) and to improve their knowledge of quality and to develop a good system of classing.28 From this overview of studies done by farm economists, a first blind spot can be identified. In some cases, it is said, sellers and buyers are able to bypass usual grades and prices are established according to some quality characteristics that seem to be reasonably shared by both parties on the market. In other cases, notably in the case of cotton, it is deplored that even though some quality characteristics could be identified as relevant for price differentials, it can happen that some participants do not make efforts to improve the quality of their crop or to sort it out in a proper way, thus making up the conditions for a market on which high quality will not be rewarded, in which traders expect that the relevant variable for dealing will be price and no one expects much from quality differentials. Everything happens as if because they fear that the quality differentials will not be rewarded enough to cover the cost, it is not necessary to grow high quality cotton. What needs to be understood then is how far markets are deemed efficient enough by themselves to make quality differentials be valued; and if not, what is the proper scope of government intervention.
2.2 Making quality objective: markets do not lie, but implementing a common language on quality is no easy task.
Following Waugh's and other farm economists' contributions, we can identify a first set of works whose aim is to discuss the discrepancy between actual systems of grades and the factors that actually explain price differentials. The economist's point of view on grades, as we have seen, is that consumer grades are a means of securing the competitive conditions on the market. Grades are systems of classification of goods aimed at facilitating economic processes, providing information to market participants (producers, growers, middlemen, cooperatives, wholesale buyers, consumers) and improving the formation of prices and consumers' choices. Grades were sometimes adopted on organized markets, but not so much regarding the sale of commodities to consumers. Thus, if grades are expected to play a coordination function on markets, it is necessary that the meaning of each grade be relevant to market participants, notably to buyers. It has often been remarked, after [START_REF] Waugh | Quality factors influencing vegetable prices[END_REF], that the construction of standards does not necessarily fit with the consumer/buyer view of quality. This is in itself a subject of passionate debates, which has many dimensions. It has to do with measurement issues, with consumers' preferences, with the multidimensionality of quality, and with the variety of uses that a given good can serve.
The data of quality measurement The most common explanation is that factors affecting quality are not easily measured. As [START_REF] Tenny | Standardization of farm products[END_REF] puts it, "It must not be supposed that Federal standards for farm products necessarily reflect the true market value of the product. There are several reasons why they may not. For instance, there are frequently certain factors which strongly influence market quality for which no practical method of measurement has been devised for use in commercial operations. Until comparatively recently the important factor of protein determination in wheat was ignored in commercial operations although it was given indirect recognition by paying a premium for wheat from sections where the average protein content was high." (Tenny, 1928, 207) Here there is a tension arising from the tendency to privilege those characteristics of goods that can be measured in a scientific way, without relying on subjective judgment, such as moisture or protein content for different kinds of cereals. The question is to what extent such characteristics are likely to allow the establishment of grades adapted to the functioning of markets. 29 However, grading cannot always result from scientific measurement. It is often a matter of judgment, appealing to the senses of sight, taste and smell (for butter).
Dealing with multidimensionality Another difficulty with grading is that the multidimensionality of quality makes it unfit for measurement along a single dimension.
Lots of examples are discussed in the literature. Probably the agricultural product most studied in the 1930s is cotton. For the case of cotton, the usual grades used for standards are color and freedom from trash. 30 It is known that the length of the staple is an important factor too, but it is dealt with separately [START_REF] Tenny | Standardization of farm products[END_REF].
Apart from standards on the quality of the bale, there are seven basic grades of cotton linters, based on the length of the fiber. Grades, if they are to synthesize a set of properties not correlated in the good, must be constructed on the basis of an idealized good. For instance, quality grades of cotton have been constructed on the basis of a cotton having perfectly uniform fibers, characterized by its strength and brightness. According to Youngblood there has been a development of "the art of classing" (Youngblood, 1929, 527; see also Palmer, 1934), first through private standards built by spinners, and later through official standards. However, "within reasonable limits, adjacent grades, staple lengths, and characters of cotton may substitute for each other in a given use." (Youngblood, 1929, 528) and no synthetic indicator has been devised. The simplest way of establishing a one-dimensional synthetic grade is to calculate a weighted average of the scores obtained for the different factors of grade, as has been done for canned products [START_REF] Hauck | Research as a Basis for Grading Fruits and Vegetables[END_REF]. If it seems impossible to merge two or more quality properties into one, then it may be enough to grade different characteristics independently and then to let the consumer choose the combination he prefers. 31 It may be difficult to obtain a useful grading system if it is based on too many factors, because those factors can vary independently. If a given lot scores high on some factors but poorly on another, it may end up ranked low. This can lead participants in the market to trade without taking account of the official grade (Jesness, 1933, 710). Also, the factors relevant for grading can change according to the final use of the goods. The color of apples is important for eating but not for cider purposes.
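As a purely illustrative sketch of the weighted-average approach mentioned above (the factor names, scores, weights, and cutoffs are invented for the example, not taken from Hauck or from any official grade), a one-dimensional synthetic grade could be computed as follows:

```python
# Illustrative sketch: combining several factor scores into one synthetic grade
# by a weighted average. Factors, scores (0-100), and weights are hypothetical.
scores = {"color": 85, "uniformity": 70, "freedom_from_defects": 90}
weights = {"color": 0.5, "uniformity": 0.2, "freedom_from_defects": 0.3}

synthetic_grade = sum(scores[f] * weights[f] for f in scores)
print(f"synthetic grade: {synthetic_grade:.1f}")  # 83.5 with these numbers

# A simple cutoff then maps the continuous score onto a discrete grade label.
label = "A" if synthetic_grade >= 80 else "B" if synthetic_grade >= 60 else "C"
print(f"grade label: {label}")
```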
Grades and consumers' preferences There seems to be a large consensus that grades should be implemented, as much as possible, with reference to consumers' preferences:
No useful purpose is served in attempting to judge the flavor of butter unless flavor affects the demand for butter. If color has no influence on demand, why be concerned with an attempt to measure it? The problem of defining market grades in reality is one of determining the considerations which are of economic importance in influencing demand and then to find technical factors which are susceptible of measurement as a means of assigning proper weights to each of them. The economic basis of grades is found in factors affecting the utility of goods and hence the demand for them. Grades are concerned with the want-satisfying qualities of products. (Jesness, 1933, 708-709).
Actually, this leads to the recognition that very little is known about consumers' preferences and their willingness to pay a premium for a given quality differential (Hauck, 1936, 397). In dealing with this aspect of the construction of official grades and standards, we arrive at a limit point. Shall we assume that consumers' preferences are given and that all - or at least the majority - agree with the list of relevant characteristics that make up quality? For instance, Jesness, chief of the Federal Bureau of Agricultural Economics, recognizes that if consumers are not aware of the meaning of a grade, they will soon learn to adopt the grading system provided to them by experts: That the consumer may not always appear to exercise the best judgment in his preference is beside the point. The primary purpose in grades is to recognize preferences as they are, not as the developer of grades may think they should be. This, however, is not a denial of the possibility of using established grades as a means of educating consumers in their preferences. (Jesness, 1933, 709) Thus, in the end, if it turns out that no grading system is self-evident and easily recognized as useful to consumers, at least it can function as a focal point and become common knowledge. There must be some expert way of indicating a hierarchy of product qualities. At the same time, there is much to be learned about consumers' behavior. Certainly, there may be different demand schedules according to grades, and it would be useful to obtain information about the influence of grades over one another and to know precisely what consumers use the product for (Jesness, 1933, 716; [START_REF] Waite | Consumer Grades and Standards[END_REF]). Even though economists are well aware of this, they can but acknowledge that no satisfactory methods for eliciting preferences have been devised. Eliciting preferences from observation does not deliver information about what the consumer is aware of and what, if he is aware of it, he takes as relevant for his choice. 32 Waite, for instance, advocates giving consumers information that they may not at first consider relevant to their choices and to the quality of the goods they consume:
It is a valuable thing to indicate to consumers specifications of essential qualities even though these do not become reflected in price, but this is more or less of a social problem since it involves the should aspects of the problem. Economics demands that we proceed somewhat differently and endeavor to indicate groups that are significantly price different both from demand and supply aspects. (Waite, 1934, 253) Articulating preferences and income Agricultural economists did not go very far into the study of preferences. The main idea is that there should be some representative preferences and buying practices for different strata of income. A better knowledge of preferences by income group would help determine what percentage of a crop must reach a top grade for it to pay to sort the lots out and sell them separately [START_REF] Hauck | Research as a Basis for Grading Fruits and Vegetables[END_REF]. To Norton, for instance, different markets are actually to be related to different strata of income: "What is needed for accurate analysis of retail price differentiation is an accurate measure of how different strata of demand respond to different price policies. Certainly the theoretical reactions of groups will vary. At high-income levels a minor change in the price of a food item will not affect purchases; at low-income levels, it may have a decided effect." (Norton, 1939, 590) 33 As [START_REF] Froker | Consumers' Incomes and Demand for Certain Perishable Farm Products]: Discussion[END_REF] would point out, most preference studies merge preferences and demand behavior [START_REF] Froker | Consumers' Incomes and Demand for Certain Perishable Farm Products]: Discussion[END_REF]. Notably, providing incentives to farmers to increase quality should not lead them to think that the best quality can be sold without limit. 34 As [START_REF] Rasmussen | Consumers' Incomes and Demand for Certain Perishable Farm Products[END_REF] would make clear, only a small proportion of American families have sufficient incomes to buy the best quality of food. In the end, only a comprehensive study combining knowledge of preferences, incomes and uses of products would make it possible to develop a system of grades that improves market coordination: "If grades are to be the means both of increasing net farm income and of consumer satisfaction, it seems obvious that such grades must 32 The methods of questionnaires or statistical studies on choice are criticized. They will not disclose what the consumer would do if granted the opportunity to buy the good. Besides the usual difficulties, one must get an idea of whether "failure [of a characteristic] to be price significant is due to inability of consumers under present marketing methods to differentiate these qualities, or consumer ignorance of their importance, or simply indifference of consumers" (Waite, 1934, 252).
33 [START_REF] Norton | Differentiation in marketing farm products[END_REF] is probably one of the first to link his analysis with Chamberlin's theory of monopolistic competition. To Norton, different factors used to differentiate food products to reach income groups might be classified as follows: A. service differentials: delivery vs carrying; cash vs credit; packages vs bulk; B. Product differentials: "quality: a range of choices", size, price of cut; "style: up-to-the-minute or out-of-date"; C. Advertising differentials : branded vs unbranded; featured characteristics vs standard grades; presumed uniformity or necessity for expert knowledge. The factors put to the fore differ according to the farm products. Illustration with the milk market of New York City and the automobile industry (proposing different lines of cars at different prices, including second-hand cars)
34 Also, Secretary Henry Wallace would state in 1938 in his annual report: "We need to avoid too much insistence on only first-quality foods. All foods should meet basic health requirements; but thousands of families would rather have grade C food at a low price than grade A food at a high price, and thousands of farmers have grade C food to sell. Our marketing system must efficiently meet the needs of the poor as well as of the rich." (quoted in Rasmussen, 1939, 154) bear definite and clear-cut relationships to the economic desires of both dealers and consumers, and must recognize (first) differences in levels of consumer purchasing power; (second) differences in preferences of individuals; and (third) differences in the purposes for which products may be used and the qualities needed for each purpose." (Rasmussen, 1939, 149) The above considerations lead us to the idea that markets alone may not be enough to understand coordination outcomes, once it is recognized that information (or the absence of information) about grading influences market outcomes.
Toward a cognitive theory of quality: protecting consumers from market failures
If grades are not used on markets, or if they do not play a role in establishing price differentials, it does not prove that grade specifications are wrong: "It does indicate either that consumers don't recognize quality (at any rate they reward it by paying a premium to get it) or that retailers base their prices upon factors other than quality, or that our standards of quality differ substantially from those which consumers consider important." (Hauck, 1936, 399) Here we arrive at a quite different view of the use of government intervention through quality standards. The goal is not to help sellers and buyers share a common language based on quality characteristics that all recognize as relevant for improving their coordination on markets. It is more ambitious and contains a normative account of quality and of the nature of government intervention. It is contemplated that grading necessarily contains an educational dimension, and that grading contributes to the formation of preferences instead of just revealing them and making them expressible on markets. We mean here that some authors have pointed out that, contrary to the market-driven coordination point of view, market failures indicate a need to develop grades as a means of repairing the failures of market coordination and their causes, which are made possible by the exploitation of consumers' cognitive deficiencies. According to Waite, "Moreover, the grades tend to protect consumers from certain obvious abuses arising from the profit making motive of the economic order. For example, there is a tendency for businessmen in a competitive society to secure protection for their sales by building around their product thru [sic] brands or other distinguishing devices semi-monopolistic situations. Grades break down these protective devices by expanding similarity of essential characteristics to a broader group." (Waite, 1934, 248) This clearly points to a normative role for grades, understood as a protective device to help consumers improve their bargaining power, not merely to improve coordination. There is a counterbalancing effect of grades. Gilbert Sussman and Saul Richard Gamer, two members of the Agricultural Adjustment Administration, take as a starting point "that the consumer has no practical way of knowing or discovering at present the quality of any food and drug he buys, much less whether any particular brand of a product he purchases is good, bad or indifferent as compared to any other particular brand which he might have chosen." (Sussman and Gamer, 1935, 578). This fact is well recognized, and consumers are frequently misled or deceived by such a situation, notably since "it has been indisputably established that the price at which a particular article may sell is not a satisfactory, if any, index to the quality of the product. Nor does the use of brand or trade names supply an adequate guide." (Sussman and Gamer, 1935, 578). The great number of brands for particular articles forbids rational buying. This view is radically at odds with the starting point of Waugh's reflections on the quality-price relationship, and thus it rejects the market point of view on quality. If markets are to work as coordination devices, it presupposes that buyers are helped to make well-informed decisions. The cognitive limits of the consumer are recognized as a basis for producers' resistance to mandatory grades.
It may be readily admitted that the fact that consumers frequently are not rational in their decisions is a limitation which is encountered in this field. . . . The irrationality of the consumer itself may well be worth studying in connection with determining upon the economic basis of market grades. (Jesness, 1933, 716) Quality per se is not something given to consumers, not something evident. On the contrary, the grading of goods or the rating of commodities is said to acquaint consumers with the characteristics of a good that the expert deems essential [START_REF] Waite | Consumer Grades and Standards[END_REF]. Here we touch on what is probably the most delicate issue from a theoretical and policy point of view. There is clearly the idea that grades, and more specifically any system of rating of consumer goods, influence the preferences of consumers by shaping their system of preferences and the way they assess goods, emphasizing some characteristics over others. Waite identifies that producers are usually reluctant to adopt standards for consumer goods. The adoption of grades stems from a necessity to bypass the cognitive limitations of consumers: "Where such grades have been accepted by the industry it has been usually because qualities were indistinguishable by consumers and misrepresentation so rampant that consumers were utterly bewildered and hesitated to purchase with a consequent great decline in sales and individual profits." (Waite, 1934, 249) Otherwise stated, producers are willing to accept standards when doing so reduces an information asymmetry that is the cause of a low level of transactions. This analysis does not make clear what comes from a pure absence of knowledge on the part of the consumer and what comes from difficulties in coping with too much information. Hence, for lack of government intervention to constrain producers, grade labeling is often permissive (and not compulsory). "But where the market is not demoralized there is strong opposition to the adoption of consumer grades. Here those with a reputation for consistently superior products may secure enhanced prices because of that reputation, and those with shoddy products may secure higher prices than they could with labeling. It is unlikely that many products will find their markets sufficiently demoralized by bad trade practices to accept readily mandatory grades. This has forced us to make grades largely permissive in character. We have had sufficient experience with these permissive grades to demonstrate that in the majority of cases opposition of important trade groups will preclude their widespread adoption. With permissive grades the only hope is to educate consumers to purchase products so labeled, but with the inertia of consumers and determined resistance of a considerable part of the trade practical results are remote." (Waite, 1934, 249) In any case, there is consumer inertia, and permissive labels have only a weak effect on behavior. Clearly, Waite identifies that there is an opportunity for strengthening rules about standards and labels and that this may make it possible to reduce the overall exploitation of the consumer's ignorance about qualities: "The participation of the government as a party in these agreements [about codes and marketing between producers in many industries], charges it with the duty of a broad social viewpoint, which includes among other things insistence of protection of consumers from the exploitation which is widespread under competitive system.
This opportunity is passing rapidly and it is pathetic that we are failing in the use of it." (Waite, 1934, 249) The interpretation of the need for grades also determines the kind of policy recommendation. To Waite, it is a sort of mix of given consumers' preferences and expert analysis that should be constitutive of the definition of grades:
The grades may specify simply the characteristics which are now judged important with respect to products by consumers themselves as reflected for example in the price they are willing to pay. The grades may specify, however, characteristics which consumers would judge important and for which they would be willing to pay if they were able to distinguish them or were provided with the opportunity. These characteristics may be unassociated with present easily observable external characteristics known to consumers, or may be observable but due to other associated undesirable characteristics from which they have not been separated the consumer may be unable to register a preference. Finally the grades may specify characteristics which are judged important by expert opinion. They may designate qualities which should be important to consumers. Grades in this sense contain an element of propaganda in the direction of consumption in desirable channels, the full force of which we do not know, as yet. Consumers may feel the higher grades more valuable, particularly in the cases where the specifications are not readily distinguishable and in those cases they will probably react with a willingness to pay somewhat higher prices, thus widening the spread between the better and lower qualities. This is a form of the time honored device now used by business men to differentiate their product and sell to consumers at a higher price because the consumer is made to think the product superior and it may be turned to the advantage of consumers by designation by disinterested agencies. The second advantage of consumer grades is that the designation of these qualities may assist early subdivision of the product into groups possessing these characteristics. Early subdivision will facilitate economical handling of the product and will tend to reflect back to producers characteristics desired by consumers. This should lead to higher prices for these types of products possessing these characteristics and a subsequent larger production wherever these qualities are subject to control. This, in turn, should result in greater consumer satisfaction and enhanced incomes to the more effective producers. (Waite, 1934, 250)
The rise of the Office takes place in a context of rapid expansion of markets from a local to a national and international scale. Under the leadership of Taylor, the Office of Farm Management took control over the Bureau of Markets and the Bureau of Crop and Livestock Estimates. The BAE was officially established as an agency of the U.S. Department of Agriculture (headed by Secretary Wallace) by Congress on July 1st, 1921. The Bureau of Markets, created in 1915, was in charge of helping farmers to market their crops. Notably, it organized a telegraphic market news service for fruits and vegetables. According to Lloyd
S.A. It gathered pioneers of farm economics soon after World War I who developed sophisticated methods to estimate demand and supply functions on different agricultural markets. The BAE emerged little by little, as an extension of the Office of Farm Management, under the leadership of Henry C. Taylor from 1919 onwards. He recruited new personnel to high training standards in economics and organized the Office into different committees, each focused on one aspect of farm economics. His goal was to promote new methods of management and a reorganization of farms adapted to market conditions
[START_REF] Mcdean | Professionalism, policy, and farm economists in the early Bureau of Agricultural Economics[END_REF]
Of course, the effects of monopolistic competition on the concept of quality in economics will be a subject for future study. Suffice it to mention that reflections on quality based on monopolistic competition tools did not actually blossom until after WWII. However, this does not contradict the fact that many arguments used by agricultural economists do have a monopolistic-competitive flavor.
This is not to deny that the issue of quality has been a relevant issue in economics since the 18th century, and that it has been part of the legal-economic nexus since the Middle Ages [START_REF] Lupton | Quality Uncertainty in Early Economic Thought[END_REF]
Among other justifications for standardization is the need for credit. Farm products being used as collateral for loans, lenders need to appraise the quality of the products. More generally, it is reducing transaction costs.
Notably, appropriations for economic investigations were severely reduced between 1941 and 1947, while more appropriations were given to crop and livestock estimates, thus reducing the ability of the BAE to sustain research and policy recommendations (see Hardin, 1946, 641)
Gardiner C. Means, a member of the Consumers' Advisory Board in the National Recovery Administration, was called to act as Economic Adviser on Finance to the Secretary of Agriculture.
The main thesis in [START_REF] Means | The Consumer and the New Deal[END_REF], that administrative prices end in control over production and increases in prices that are detrimental to overall welfare, needs no further comment here.
To Means, "First in importance among such organizations would come those which are in no way committed to the producer point of view-teachers' societies, organizations of Government employees, churches, women's organizations, engineering societies, and, of course, the consumer cooperatives. These organizations could in a clear-cut manner carry the banner of the consumer and act as channels through which consumers' action could be taken. They are in a position not only to educate their members but also through their representatives to exert definite pressure to counterbalance moves on the part of producer interests which would otherwise jeopardize the operations of the economy."(Means, 1934, 16)
On Lynd's personal record as a theologian and social scientist, see McKellar (1983).
Such a Department would be entrusted with the development of commodity grades and standards, acquainting consumers with established rules and standards, crystallizing consumer sentiment and urging business and government agencies to cooperate in the effort. To Nelson, this is a dead-born project, "Conceivably this proposal may constitute an ultimate goal; it is not an immediate practical possibility."(Nelson, 1939, 162)
Regarding consumer education, various federal agencies render available to the consumer information which will permit him to buy more efficiently. But official publications had too limited a circulation. The best known is the Consumer's guide, published for five years by the Consumer's Counsel of Agricultural Adjustment Administration, with a maximum permissible circulation of 135 000.
11 The original and "natural" organization of powers is done first along functional lines in the U.S. (there are departments of State, of War, of Navy, of Treasury, of Justice), each representing the citizens as a whole. When new Departments have been established, they were representing specific interests of major economic groups (Department of Agriculture, Commerce, Labor). "Thus far, the consumer has not been accorded similar recognition. This is not at all surprising. It is only recently that the distinctive nature of the consumer interest has come to be clearly understood and that its representatives have become articulate."[START_REF] Nelson | Representation of the Consumer Interest in the Federal Government[END_REF]
151) 12 Even under the NRA, when the Consumers' Advisory Board was accorded parity with the Advisory Boards representing Industry and Labor, it faced constant opposition and seldom succeeded in "achieving any effective voice in NRA Policy." (Nelson, 1939, 156)
In a few fields, such as dairy products, considerable progress has been made toward setting up grades that are useful to the ultimate consumer. This is easier with milk, where the degree of freedom from bacteria may be the basis for distinguishing between Grades A, B, C, than with butter or cheese, where relative desirability rests on a composite of characteristics. 'Scored' butter has been available for some time, but only a small percentage of housewives have shown a disposition to use the 'score' of butter as their guide in purchasing. Even in buying milk the Grade is only one of many factors that determine the consumers' choice.(Hotchkiss, 1936, 76)
The only thing that could induce consumers to partly forgo their freedom of choice is the offer of a financial saving, through buying cooperatives (like book clubs), thus abiding by the choice of books made by their committee. (Hotchkiss, 1936, 78)
The opposition to giving the government authority on standards, for instance, was successful in the Food and Drug Act, senators arguing that "each case should stand upon its own facts" (quoted by Sussman and Gamer, 1935, 585)
"No longer will a court, in a prosecution for adulteration or misbranding, be compelled in the first instance to determine whether a particular article is or is not a macaroon"(Sussman and Gamer, 1935, 585)
Waugh also identifies factors affecting the price of tomatoes. The main factors affecting the price of tomatoes are firmness (30%) and absence of cracks
But Waugh also wonders whether this hierarchy on the Boston market is the same on other markets.
The standard theory assumes that goods on a market are homogeneous and that there is not the slightest difference in quality between two units of the same good consumed. Otherwise, this would cause a change in the preference for goods and a consequential change in the ratio of exchange (Jevons, 1871; Clark, 1899).
Regarding lower grades (inferior to Middling), the prices paid and the discounts more or less reflect the discounts observed on spot markets. Most studies confirm those results, confirming the weak relationship between quality and price (see [START_REF] Cox | Factors influencing Corn Prices[END_REF], [START_REF] Kapadia | A Statistical Study of Cotton Prices in Relation to Quality and Yeld[END_REF], Hauck (1936, 399), Garver (1937, 807)). Among the factors explaining this situation is the fact that those local markets are not as liquid as spot markets.
This may imply promoting the use of single-variety communities and of good practices for harvesting and ginning, and introducing licensed classing of samples offered by associations of growers. There were some cotton classing schools. On the art of classing cotton, see Palmer (1933)
A connected issue is that some standards that can be adapted to the wholesale market will be useless on the retail market.
Grading cotton consists in appraising the cotton by observation of the color, the bloom, and the amount of waste appearing in a sample of cotton taken from the bale, while stapling is the method of valuing the cotton by measuring the length, strength and fineness properties of the fibers. These estimates are subject to considerable errors of judgment.
e.g. a blanket can be graded according to warmth and according to durability. | 85,744 | [
"745282"
] | [
"1188"
] |
01763857 | en | [
"sdv",
"scco"
] | 2024/03/05 22:32:13 | 2018 | https://amu.hal.science/hal-01763857/file/Ramdani%20et%20al.%20DOI-1.pdf | Céline Ramdani
email: [email protected]
Franck Vidal
Alain Dagher
Laurence Carbonnell
Thierry Hasbroucq
Dopamine and response selection: an Acute Phenylalanine/Tyrosine Depletion study
Keywords: Dopamine, Supplementary motor areas, Simon task, Electroencephalography, Response selection, Acute phenylalanine/tyrosine depletion: APTD
The role of the dopaminergic system in decision-making is well documented, and evidence suggests that it could play a significant role in response selection processes. The N-40 is a fronto-central event-related potential, generated by the supplementary motor areas (SMAs), and a physiological index of response selection processes. The aim of the present study was to determine whether infraclinical effects of dopamine depletion on response selection processes could be evidenced via alterations of the N-40. We obtained dopamine depletion in healthy volunteers with the acute phenylalanine and tyrosine depletion (APTD) method, which consists in decreasing the availability of dopamine precursors. Subjects performed a Simon task in the APTD condition and in the control condition. When the stimulus was presented on the same side as the required response, the stimulus-response association was congruent, and when the stimulus was presented on the side opposite the required response, the stimulus-response association was incongruent. The N-40 was smaller for congruent associations than for incongruent associations. Moreover, the N-40 was sensitive to the level of dopaminergic activity, with a decrease in the APTD condition compared to the control condition. This modulation of the N-40 by dopaminergic level could not be explained by a global decrease of cerebral electrogenesis, since the negativities and positivities indexing the recruitment of the primary motor cortex (anatomically adjacent to the SMA) were unaffected by APTD. The specific sensitivity of the N-40 to APTD supports the model of Keeler et al. (Neuroscience 282:156-175, 2014) according to which the dopaminergic system is involved in response selection.
Introduction
Decision-making can be regarded as a set of cognitive processes that contribute to the production of the optimal alternative among a set of concurrently possible actions. The role of the dopaminergic system in human decision-making is well documented (e.g., [START_REF] Montague | A framework for mesencephalic dopamine systems based on predictive Hebbian learning[END_REF][START_REF] Montague | Computational roles for dopamine in behavioural control[END_REF][START_REF] Rogers | The roles of dopamine and serotonin in decision making: evidence from pharmacological experiments in humans[END_REF].
Response selection (i.e., the association of a specific action to a specific sensation) can be considered as the core process of decision-making [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF], and one can wonder whether the dopaminergic system is directly involved in this process. According to [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF], the striatal direct pathway (D1 receptor subtype) would allow preparation for response selection, while the striatal indirect pathway (D2 receptor subtypes) would allow selection of the appropriate response within the prepared set of all possible responses. [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF] called this system a "prepare and select" architecture.
The implication of the dopaminergic system in preparatory processes has been widely acknowledged in animals. Response preparation is impaired in rats after dopamine depletion [START_REF] Brown | Simple and choice reaction time performance following unilateral striatal dopamine depletion in the rat[END_REF]. Now, [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] went a step further showing that, during preparation, the activation of the dopaminergic system adjusts to the difficulty of the response selection to be performed after this preparation.
In humans, taking advantage of the high iron concentration in the substantia nigra (SN), which reveals this structure as a relatively hypodense zone on T2*-weighted images (including EPI volumes), [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] accurately examined the activation of the SN during the 7.5-s preparatory period of a between-hand choice reaction time (RT) task. At the beginning of the preparatory period, a precue indicated which one of two (easy or difficult) stimulus-response associations should be applied when the response signal (RS) would be delivered, at the end of the preparatory period. The SN BOLD signal increased after the precue in both cases. However, whereas the BOLD signal returned to baseline towards the end of the preparatory period in the easiest of the two possible response selection conditions, this signal remained at high levels until the end of the preparatory period in the most difficult condition. Interestingly, no such interaction could be evidenced in the neighboring subthalamic nucleus (STN); given the close functional relationships between the STN and the SN pars reticulata, the authors convincingly argued that the BOLD signal sensitivity to the difficulty of the selection process resulted from a sensitivity of the dopaminergic neurons of the SN pars compacta to this manipulation. This interpretation is highly consistent with Keeler et al.'s (2014) model, which assumes that the dopaminergic system plays a prominent role in response selection processes. Now, after the RS, that is, in the period when response selection itself occurs, no differential effect could be evidenced, but this might easily be explained by the poor temporal resolution of the fMRI method (RTs were about 550 and 600 ms only, in the easy and difficult conditions, respectively).
Evidencing a direct effect of the dopaminergic system on response selection processes themselves (which take place during the RT period) would lend direct additional support to Keeler et al.'s view that the dopaminergic system plays an essential role in response selection, not only in preparing for its difficulty [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] but also in carrying out response selection processes themselves; this was the aim of the present study which will be made explicit as follows.
Among the main targets of the basal ganglia (via the thalamus) are the supplementary motor areas (SMAs). Recent fMRI data demonstrated that acute diet-induced dopamine depletion (APTD) impairs timing in humans by decreasing activity not only in the putamen but also in the SMAs [START_REF] Coull | Dopamine precursor depletion impairs timing in healthy volunteers by attenuating activity in putamen and supplementary motor area[END_REF], whose role in motor as well as sensory timing is well documented (e.g., [START_REF] Coull | Neuroanatomical and neurochemical substrates of timing[END_REF]).
SMAs are often assumed to play a prominent role not only in timing but also in response selection (e.g., Mostofsky and Simmonds 2008, for a review). Taking into account the sensitivity of the SMAs to dopamine depletion [START_REF] Coull | Dopamine precursor depletion impairs timing in healthy volunteers by attenuating activity in putamen and supplementary motor area[END_REF], one might therefore wonder, in the frame of Keeler et al.'s (2014) model, whether their activities would also be impaired by APTD during the reaction period of an RT task in which a response selection is required. Given the short time range of RTs, a method with high temporal resolution is needed to address this question. EEG seems particularly well suited, since it is classically considered to have excellent temporal resolution [START_REF] Sejnowski | Brain and cognition[END_REF].
During the reaction time of a between-hand choice RT task, an electroencephalographic (EEG) component has been evidenced in humans (the N-40; [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF]) right over the SMAs. Given that the N-40 peaks about 50 ms before the peak activation of the (contralateral) primary motor cortex involved in the response, it has been proposed that this component is an index of response selection which might arise from the SMAs. In accordance with this view, it has been shown (1) that the N-40, although present in choice conditions, was absent in a go/no-go task, a task in which no response selection is required [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF], and (2) that the amplitude of the N-40 was modulated by the difficulty of the selection process, being smaller for easier selections [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF]. Finally, tentative source localization performed with two independent methods (sLORETA and BESA) pointed to quite superficial medio-frontal generators corresponding to the SMAs [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF].
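To make the measurement idea concrete, here is a minimal illustrative sketch of how a response-locked component such as the N-40 is typically quantified: epochs are aligned on the response, averaged across trials, and the mean amplitude is taken in a window just before the response. The array shapes, sampling rate, channel choice, simulated signal, and time window are assumptions chosen only for the example; they are not the parameters actually used in this study.

```python
# Illustrative sketch: response-locked averaging of EEG epochs and quantification
# of a pre-response component at a fronto-central site. All parameters are
# hypothetical and chosen only to show the logic of the analysis.
import numpy as np

fs = 500                      # sampling rate in Hz (assumed)
pre, post = 0.5, 0.2          # epoch from -500 ms to +200 ms around the response
times = np.arange(-pre, post, 1.0 / fs)

# Simulated data: n_trials x n_samples for one fronto-central channel (e.g., FCz).
rng = np.random.default_rng(1)
n_trials = 120
epochs = rng.normal(0.0, 5.0, (n_trials, times.size))
# Inject a small negative deflection peaking ~40 ms before the response.
epochs += -2.0 * np.exp(-((times + 0.04) ** 2) / (2 * 0.015 ** 2))

# Response-locked average across trials.
erp = epochs.mean(axis=0)

# Mean amplitude in a pre-response window (e.g., -60 to -20 ms), a common way
# of quantifying a component such as the N-40.
window = (times >= -0.06) & (times <= -0.02)
n40_amplitude = erp[window].mean()
print(f"mean amplitude in the -60 to -20 ms window: {n40_amplitude:.2f} µV")
```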
If we admit that the N-40 is generated by the SMAs and indexes response selection, a convenient way to address the question of the involvement of the dopaminergic system in response selection consists in examining the sensitivity of the N-40 to APTD in a between-hand choice RT task quite similar to the one used by [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF]: we chose a Simon task (see Fig. 1; see [START_REF] Simon | The effects of an irrelevant directional cue on human information processing[END_REF] for a review). In between-hand choice RT tasks, the N-40 is followed by a transient (negative) motor potential [START_REF] Deecke | Voluntary finger movement in man: cerebral potentials and theory[END_REF] revealing the build-up of the motor command in the (contralateral) primary motor areas (M1) controlling the responding hand [START_REF] Arezzo | Intracortical sources and surface topography of the motor potential and somatosensory evoked potential in the monkey[END_REF]. Concurrently, a transient positive wave, reflecting motor inhibition and related to error prevention [START_REF] Meckler | Motor inhibition and response expectancy: a Laplacian ERP study[END_REF], develops over the (ipsilateral) M1 controlling the non-responding hand [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF]. If one admits that these activities are not directly related to response selection, examining their (in)sensitivity to dopamine depletion allows assessing the selectivity of the effects of this depletion (if present) on response selection processes.
In a previous study, we submitted subjects to APTD. Although we did evidence subtle behavioral effects that can be attributed to action monitoring impairments [START_REF] Ramdani | Dopamine precursors depletion impairs impulse control in healthy volunteers[END_REF], we did not find any clear behavioral evidence of APTD-induced response selection impairment.
To probe the role of the dopaminergic system in response selection, the present study was aimed at assessing whether infraclinical effects of dopamine depletion on response selection processes can be detected through selective alterations of the N-40.
Material and method
The experimental procedure has been described in detail elsewhere [START_REF] Ramdani | Dopamine precursors depletion impairs impulse control in healthy volunteers[END_REF], and only essential information is provided here.
Twelve healthy subjects participated in this experiment.
Dopamine depletion
Dopamine availability was decreased using the APTD method ([START_REF] Mctavish | Effect of a tyrosine-free amino acid mixture on regional brain catecholamine synthesis and release[END_REF][START_REF] Leyton | Effects on mood of acute phenylalanine/tyrosine depletion in healthy women[END_REF][START_REF] Leyton | Decreasing amphetamine-induced dopamine release by acute phenylalanine/tyrosine depletion: a PET/ [ 11 C] raclopride study in healthy men[END_REF][START_REF] Nagano-Saito | Dopamine depletion impairs frontostriatal functional connectivity during a set-shifting task[END_REF], 2012). The present experiment comprised two experimental sessions differing by the level of tyrosine and phenylalanine in the amino acid mixture: (i) the "placebo session", in which the subject performed the task after ingestion of a mixture containing 16 essential amino acids (including tyrosine and phenylalanine), and (ii) the "depleted session", in which the subject performed the task after ingestion of the same mixture without tyrosine and phenylalanine. Plasma concentrations of phenylalanine, tyrosine, and other large neutral amino acids (LNAAs; leucine, isoleucine, methionine, valine, and tryptophan) were measured by HPLC with fluorometric detection on an Ultrasphere ODS reverse-phase column (Beckman Coulter) with o-phthalaldehyde precolumn derivatization and aminoadipic acid as an internal standard. Plasma concentrations of tryptophan were measured by HPLC-FD on a Bondpak reverse-phase column (Phenomenex).
Task and design
Each subject performed both sessions, on separate days, at least 3 days apart. Subjects were not taking any medication at the time of the experiment. None of them had a history of mental or neurologic illness. They were asked not to take stimulating substances (e.g., caffeine or stimulant drugs) or alcohol during the day and night before each session. The day before each session, subjects ate a low-protein diet provided by the investigators and fasted after midnight. On the test days, subjects arrived at the laboratory at 8:30 a.m. and had a blood sample drawn to measure plasma amino acid concentrations. They ingested one of the two amino acid mixtures at 9:00 a.m. in a randomized, double-blind manner. Peak dopamine reduction occurs 4-6 h after ingestion of the amino acid mixtures [START_REF] Leyton | Decreasing amphetamine-induced dopamine release by acute phenylalanine/tyrosine depletion: a PET/ [ 11 C] raclopride study in healthy men[END_REF]. Testing started at 1:30 p.m. At 3:00 p.m., subjects had a second blood sample drawn to measure plasma amino acid concentrations.
The order of depleted and placebo sessions was counterbalanced between subjects.
Subjects performed a between-hand choice RT task.
A trial began with the presentation of a stimulus. The subjects' responses turned off the stimulus, and 500 ms later, the next stimulus was presented. If subjects had not responded within 800 ms after stimulus onset, the stimulus was turned off and the next stimulus was displayed 500 ms later.
At the beginning of an experimental session, subjects had one training block of 129 trials. They were then required to complete 16 blocks of 129 trials each. A block lasted about 2 min. There was a 1-min break between blocks and a 5-min break every four blocks. The training block was discarded from the statistical analyses.
The structure of this between-hand choice RT task implemented a Simon task [START_REF] Simon | The effects of an irrelevant directional cue on human information processing[END_REF]. The stimuli were the digits three, four, six, and seven, presented either to the right or to the left of a central fixation point. Half of the subjects responded with the right thumb on the right force sensor for even digits and with the left thumb on the left force sensor for odd digits; the other half performed the reverse mapping. When the stimulus was presented on the same side as the required response, the stimulus-response association was congruent. When the stimulus was presented on the side opposite to the required response, the stimulus-response association was incongruent. A block contained 50% congruent and 50% incongruent trials. [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] (see Fig. 1) manipulated the complexity of the stimulus-response association by varying the spatial correspondence between the direction (right or left) indicated by a centrally presented arrow and the position (right or left) of the required response: on congruent associations, responses had to be given on the side indicated by the arrow, while on incongruent associations, responses had to be given on the opposite side. In the present Simon task, congruency was manipulated by varying the spatial correspondence between the position of the stimulus and the position of the required response, given that (1) as in the [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] study, congruency affects response selection processes ([START_REF] Hommel | A feature-integration account of sequential effects in the Simon task[END_REF][START_REF] Kornblum | The effects of irrelevant stimuli: the time course of S-S and S-R consistency effects with Stroop-like stimuli, Simon-like tasks, and their factorial combinations[END_REF] Proctor and Reeve 1990) and (2) congruency affects the amplitude of the N-40 in a Simon task [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF]. Subjects were asked to respond as fast and as accurately as possible.
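For illustration, the congruency structure of this design can be sketched as follows; this is a hypothetical trial-list generator written for this description (it is not the stimulation software used in the experiment), and the block length of 128 trials is chosen for exact balance rather than matching the 129 trials reported above.

```python
import itertools
import random

DIGITS = [3, 4, 6, 7]        # stimuli of the Simon task
SIDES = ["left", "right"]    # stimulus position relative to the fixation point

def required_response(digit, mapping="even_right"):
    """Map digit parity to the responding hand; half of the subjects used the reverse mapping."""
    even = digit % 2 == 0
    if mapping == "even_right":
        return "right" if even else "left"
    return "left" if even else "right"

def make_block(reps=16, mapping="even_right", seed=0):
    """Fully crossed digit x side design, shuffled; because parity and side are balanced,
    exactly half of the trials are congruent (stimulus side = response side)."""
    cells = list(itertools.product(DIGITS, SIDES)) * reps
    random.Random(seed).shuffle(cells)
    trials = []
    for digit, side in cells:
        response = required_response(digit, mapping)
        trials.append({"digit": digit, "stimulus_side": side,
                       "response_side": response, "congruent": side == response})
    return trials

block = make_block()
print(sum(t["congruent"] for t in block), "congruent trials out of", len(block))
```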
RT was defined as the time interval between stimulus onset and the mechanical response.
Electrophysiological recordings and processing
The electromyographic (EMG) activity of the flexor pollicis brevis (thenar eminence, inside base of the thumb) was recorded bipolarly by means of surface Ag-AgCl electrodes, 6 mm in diameter, fixed about 20 mm apart on the skin of each thenar eminence. The recorded EMG signals were digitized online (bandwidth 0-268 Hz, 3 dB/octave, sampling rate 1024 Hz), filtered off-line (high-pass = 10 Hz), and then inspected visually [START_REF] Van Boxtel | Detection of EMG onset in ERP research[END_REF]. The EMG onsets were hand-scored because human pattern recognition processes are superior to automated algorithms (see [START_REF] Staude | Precise onset detection of human motor responses using a whitening filter and the log-likelihood-ratio test[END_REF]). To avoid any subjective influence on the scoring, the experimenter who processed the signals was unaware of the type of association (congruent, incongruent) and of the session (placebo, APTD) to which the traces corresponded.
The electroencephalogram (EEG) and electro-oculogram (EOG) were recorded continuously from preamplified Ag/AgCl electrodes (BIOSEMI Active-Two electrodes, Amsterdam). For the EEG, 64 recording electrodes were positioned according to the 10/20 system with CMS-DRL as reference and ground (specific to the Biosemi acquisition system). A 65th electrode on the left mastoid served to reference the signal offline. Electrodes for the vertical EOG were at Fp1 and below the left eye, and electrodes for the horizontal EOG were at the outer canthi of the left and right eyes. The signal was filtered and digitized online (bandwidth 0-268 Hz, 3 dB/octave, sampling rate 1024 Hz). EEG and EOG data were numerically filtered offline (high-pass = 0.02 Hz). No additional filtering was performed. Bipolar derivations were calculated offline for the vertical and horizontal EOGs. Then, ocular artifacts were subtracted [START_REF] Gratton | A new method for off-line removal of ocular artifact[END_REF]. A trial-by-trial visual inspection of the monopolar recordings allowed us to reject unsatisfactory subtractions and other artifacts.
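As a rough illustration of the logic of regression-based ocular correction, a minimal sketch is given below. The full Gratton et al. procedure additionally estimates separate propagation factors for blinks and saccades after removing event-related activity, which is omitted here; the channel arrays and the simulated mixing coefficients are hypothetical.

```python
import numpy as np

def correct_eog(eeg, veog, heog):
    """Subtract ocular activity from each EEG channel by ordinary least-squares
    regression of the channel onto the two bipolar EOG derivations.
    eeg: (n_channels, n_samples); veog, heog: (n_samples,)."""
    design = np.column_stack([veog, heog, np.ones_like(veog)])
    corrected = np.empty_like(eeg)
    for ch in range(eeg.shape[0]):
        beta, *_ = np.linalg.lstsq(design, eeg[ch], rcond=None)   # propagation factors
        corrected[ch] = eeg[ch] - design[:, :2] @ beta[:2]        # remove the ocular part only
    return corrected

# Hypothetical usage with simulated signals
rng = np.random.default_rng(1)
veog = rng.standard_normal(1024)
heog = rng.standard_normal(1024)
eeg = 0.3 * veog + 0.1 * heog + rng.standard_normal((64, 1024))
clean = correct_eog(eeg, veog, heog)
```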
The scalp potential data were segmented from -500 to +500 ms, with EMG onset as time zero. Then, for each individual, the scalp potential data were averaged time-locked to EMG onset. However, on scalp potential data, because of volume conduction, the N-40 is overlapped by large components generated by remote sources and by closer ones in the primary motor areas; it therefore hardly shows up on scalp potential recordings [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF].
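The response-locked segmentation and averaging described above amount to the following minimal sketch (illustrative Python with simulated data, not the software actually used for the analyses):

```python
import numpy as np

def epoch_and_average(data, emg_onsets, sfreq=1024, tmin=-0.5, tmax=0.5):
    """Cut epochs from tmin to tmax (in s) around each EMG onset and average them.
    data: (n_channels, n_samples); emg_onsets: onset indices in samples."""
    pre, post = int(-tmin * sfreq), int(tmax * sfreq)
    epochs = [data[:, s - pre:s + post] for s in emg_onsets
              if s - pre >= 0 and s + post <= data.shape[1]]
    return np.mean(epochs, axis=0)     # (n_channels, pre + post) response-locked average

# Hypothetical usage: one minute of 64-channel EEG and 40 scored EMG onsets
rng = np.random.default_rng(0)
data = rng.standard_normal((64, 60 * 1024))
onsets = rng.integers(1024, 59 * 1024, size=40)
erp = epoch_and_average(data, onsets)
```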
The surface Laplacian (SL) transformation (see [START_REF] Carvalhaes | The surface Laplacian technique in EEG: theory and methods[END_REF] for theory and methods), acting as a high-pass spatial filter [START_REF] Nuñez | Estimation of large scale neocortical source activity with EEG surface Laplacians[END_REF], is very efficient in attenuating volume conduction effects [START_REF] Giard | Scalp current density mapping in the analysis of mismatch negativity paradigms[END_REF][START_REF] Kayser | On the benefits of using surface Laplacian (current source density) methodology in electrophysiology[END_REF]. Because of this property, the SL transformation unmasks the N-40 by removing the spatial overlap between this component and other ones. The Laplacian transformation was therefore applied to each individual scalp potential average obtained for each subject in each condition (congruent and incongruent) and each session (placebo and APTD); the surface Laplacian was estimated after spherical spline interpolation with a spline degree of 4 and a maximum of 10 for the degree of the Legendre polynomial, according to the method of [START_REF] Perrin | Scalp current density mapping: value and estimation from potential data[END_REF].
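The spherical-spline estimator of Perrin et al. is too long to reproduce here; as a conceptual stand-in only, the much cruder nearest-neighbour (Hjorth-type) Laplacian below conveys the spatial high-pass idea, each channel minus the mean of its neighbours. The neighbour table is purely illustrative and this is not the method actually applied to the data.

```python
import numpy as np

# Purely illustrative neighbour table for a few 10/20 sites
NEIGHBOURS = {
    "FCz": ["Fz", "Cz", "FC1", "FC2"],
    "C3":  ["FC3", "CP3", "C1", "C5"],
    "C4":  ["FC4", "CP4", "C2", "C6"],
}

def hjorth_laplacian(signals, neighbours=NEIGHBOURS):
    """signals: dict channel -> 1-D array. Return channel minus the mean of its
    neighbours, a crude local approximation of the surface Laplacian."""
    out = {}
    for channel, neigh in neighbours.items():
        reference = np.mean([signals[n] for n in neigh], axis=0)
        out[channel] = signals[channel] - reference
    return out

# Hypothetical usage with simulated traces
rng = np.random.default_rng(2)
channels = ["FCz", "Fz", "Cz", "FC1", "FC2", "C3", "FC3", "CP3", "C1", "C5",
            "C4", "FC4", "CP4", "C2", "C6"]
signals = {ch: rng.standard_normal(1024) for ch in channels}
laplacian = hjorth_laplacian(signals)
```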
N-40 At the FCz electrode, the N-40 begins to develop about 70 ms before EMG onset and peaks about 20 ms before EMG onset. The slopes of the linear regression of this wave were computed for each subject from -70 to -20 ms, that is, within a 50-ms time window. The slopes of the N-40 were determined for "pure" correct trials only. These values were then submitted to repeated-measures analyses of variance (ANOVA). The ANOVA involved two within-subjects factors: session (placebo, APTD) and congruency (congruent, incongruent). Contrary to stimulus-locked data, when studying response-locked averages the choice of an appropriate baseline is always problematic. To circumvent this problem, peak-to-peak measures are often used, as they are baseline-free.
They may be conceived of as a crude slope measure. However, mean slopes can be estimated by computing the linear regression line by the least squares method, in an interval of interest. In this case, slope analysis has certain advantages over amplitude analysis: (i) they are also independent of the baseline and (ii) they give morphological information on the polarity of the curves in delimited time windows and are less variable than amplitude measures [START_REF] Meckler | Motor inhibition and response expectancy: a Laplacian ERP study[END_REF].
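In practice, such a slope is simply the first-order coefficient of a least-squares fit within the window of interest. A minimal sketch follows (illustrative code with a simulated trace and the -70 to -20 ms window used for the N-40):

```python
import numpy as np

def window_slope(trace, times, t_start=-0.070, t_end=-0.020):
    """Least-squares slope of an averaged trace within a time window,
    expressed in amplitude units per ms. times is in seconds."""
    mask = (times >= t_start) & (times <= t_end)
    slope, _intercept = np.polyfit(times[mask] * 1e3, trace[mask], deg=1)
    return slope

# Hypothetical usage: a simulated FCz trace sampled at 1024 Hz
sfreq = 1024
times = np.arange(-0.5, 0.5, 1.0 / sfreq)
trace = -0.5 * np.clip((times + 0.070) * 1e3, 0, None)   # ramps down at 0.5 units/ms from -70 ms
print(round(window_slope(trace, times), 3))               # close to -0.5
```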
Activation/inhibition pattern For correct trials, we analyzed EEG activities over the primary sensory-motor areas (SM1) contralateral and ipsilateral to the response (C3 and C4 electrodes): we measured the slopes computed by linear regression in a 50-ms time window (-30 to +20 ms) (e.g., [START_REF] Meckler | Motor inhibition and response expectancy: a Laplacian ERP study[END_REF]). These mean slope values were submitted to an ANOVA involving two within-subjects factors: session (placebo, APTD) and congruency (congruent, incongruent).
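Because both within-subject factors have only two levels, each F(1, 11) reported in the Results below is numerically the square of a paired t statistic computed on per-subject condition means: main effects on the marginal means, the interaction on the difference of the congruency effects between sessions. A minimal sketch with hypothetical data (an equivalent re-computation, not the software used by the authors):

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject mean slopes: (n_subjects, 2 sessions, 2 congruency levels)
rng = np.random.default_rng(3)
y = rng.normal(-40.0, 10.0, size=(12, 2, 2))

def paired_F(a, b):
    """F(1, n-1) of a two-level within-subject contrast equals the squared paired t."""
    t, p = stats.ttest_rel(a, b)
    return t ** 2, p

F_session, p_session = paired_F(y[:, 0, :].mean(axis=1), y[:, 1, :].mean(axis=1))
F_congruency, p_congruency = paired_F(y[:, :, 0].mean(axis=1), y[:, :, 1].mean(axis=1))
# Interaction: does the congruency effect differ between sessions?
F_interaction, p_interaction = paired_F(y[:, 0, 0] - y[:, 0, 1], y[:, 1, 0] - y[:, 1, 1])
print(F_session, F_congruency, F_interaction)
```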
Results
Amino acid plasmatic concentrations
For phenylalanine (see Table 1), the ANOVA revealed an effect of session (F (1, 11) = 37.49; p = 0.000075) and no effect of the time of the blood draw (F (1, 11) = 0.13; p = 0.73). These two factors interacted (F (1, 11) = 119.81; p = 0.000000), indicating that the session affected the samples drawn at the end of the testing session, 6 h after ingestion (F (1, 11) = 65.84; p = 0.000006), but not the samples drawn before ingestion (F (1, 11) = 0.016; p = 0.902). For tyrosine (see Table 1), the ANOVA revealed a main effect of session (F (1, 11) = 68.91; p = 0.000005) and a main effect of time (F (1, 11) = 17.95; p = 0.0014). These two factors interacted (F (1, 11) = 68.83; p = 0.000005), indicating that the session affected tyrosine levels at the end of the session (F (1, 11) = 71.16; p = 0.000004) but not prior to ingestion (F (1, 11) = 0.24; p = 0.631).
In sum, plasma concentrations of tyrosine and phenylalanine were significantly lower for the depleted session than for the placebo session.
Reaction time of correct responses
There was a main effect of congruency of 13 ms (congruent 408 ms, incongruent 421 ms, F (1, 11) = 57.58; p = 0.00001) but no effect of session (placebo 414 ms; depleted 416 ms, F (1, 11) = 0.056; p = 0.817). These two factors did not interact on mean RT (F (1, 11) = 0.116; p = 0.739).
Error rate
There was a non-significant trend for an increase of error rate on incongruent stimulus-response associations (7.28%) compared to congruent stimulus-response associations (5.79%) (F = 3.70; p = 0.081). The error rate was not statistically different for the placebo (6.62%) and depleted (6.45%) sessions (F (1, 11) = 0.123; p = 0.732). There was no interaction between congruency and session (F (1, 11) = 0.026; p = 0.874).
N-40 (Fig. 2)
Only data obtained on correct trials have been analyzed.
As expected from [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF], the slope of the N-40 was steeper in the incongruent (-57.42 μV/cm²/ms) than in the congruent condition (-24.50 μV/cm²/ms) (effect size on steepness -32.92 μV/cm²/ms, F (1, 11) = 5.41, p = 0.040). This congruency effect on the slopes of the N-40 was attributed to the more demanding selection on incongruent stimulus-response associations as compared to congruent ones. Moreover, the slope of the N-40 was also steeper in the placebo (-59.39 μV/cm²/ms) than in the APTD session (-22.54 μV/cm²/ms) (effect size on steepness -36.85 μV/cm²/ms, F (1, 11) = 5.50, p = 0.039).
These two factors (congruency and sessions) did not interact (F (1, 11) = 0.030; p = 0.865).
Activation/inhibition pattern (Fig. 3) Inspection of the Laplacian traces reveals that, in all conditions, a negativity/positivity pattern developed before EMG onset over the contralateral and ipsilateral M1, respectively.
As expected from [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF], over the contralateral M1 (contralateral electrode) we observed a negative wave peaking at about EMG onset, while over the ipsilateral M1 (ipsilateral electrode) we observed a positive wave.
Regarding the contralateral negativity, there was neither an effect of congruency (congruent condition -210.37 μV/cm²/ms, incongruent condition -224.19 μV/cm²/ms; F (1, 11) = 0.67; p = 0.429) nor a main effect of session (placebo session -213.10 μV/cm²/ms, APTD session -221.47 μV/cm²/ms; F (1, 11) = 0.066; p = 0.802). These two factors did not interact (F (1, 11) = 1.18, p = 0.299).
Regarding ipsilateral positivity, there was neither an effect of congruency (congruent condition 102.51 μV/ms and incongruent condition 110.26 μV/ms; F (1, 11) = 0.349; p = 0.566) nor an effect of session (placebo session 109.53 μV/ms, APTD session 103.25 μV/ms; F (1, 11) = 0.182; p = 0.677). These two factors did not interact (F (1, 11) = 0.098; p = 0.759).
Discussion
The present study reproduces already available data: (1) RTs were longer on incongruent than on congruent trials, revealing the existence of a Simon effect; (2) an activation/inhibition pattern developed over the M1s before the response [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF][START_REF] Van De Laar | Lifespan changes in motor activation and inhibition during choice reactions: a Laplacian ERP study[END_REF][START_REF] Alexander | Linking motor-related brain potentials and velocity profiles in multi-joint arm reaching movements[END_REF]; (3) this activation/inhibition pattern was preceded by an N-40 [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF][START_REF] Alexander | Linking motor-related brain potentials and velocity profiles in multi-joint arm reaching movements[END_REF], which is in line with the notion that the N-40 indexes response selection processes, upstream of response execution as manifested by contralateral M1 activation; (4) the amplitude of the N-40 depended on congruency, being larger on incongruent than on congruent associations [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF], which is also in line with the notion that the N-40 indexes response selection processes; (5) the procedure used here was efficient in inducing a clear APTD, known to induce a secondary dopamine depletion (McTavish et al. [START_REF] Leyton | Effects on mood of acute phenylalanine/tyrosine depletion in healthy women[END_REF][START_REF] Leyton | Decreasing amphetamine-induced dopamine release by acute phenylalanine/tyrosine depletion: a PET/ [ 11 C] raclopride study in healthy men[END_REF][START_REF] Nagano-Saito | Dopamine depletion impairs frontostriatal functional connectivity during a set-shifting task[END_REF], 2012); (6) acute dopamine depletion had no effect on RT, error rate, or the size of the congruency effect, in line with the results of Larson et al. (2015), who did not evidence any APTD effect on RT, error rates, or the size of the congruency effect in another conflict task. Therefore, given that the procedure used here induced a clear APTD, and that subjects exhibited the behavioral and EEG patterns expected from the previous literature, we can be quite confident that subjects were tested under classical, appropriate experimental conditions.
In these conditions, (1) over M1s, neither contralateral activation nor ipsilateral inhibition were sensitive to congruency, suggesting that congruency has little or no effect on execution processes (contralateral M1) or error prevention (ipsilateral M1); (2) over M1s, neither contralateral activation nor ipsilateral inhibition were sensitive to APTD, suggesting that the dopamine depletion has little or no effect on execution processes (contralateral M1) or error prevention (ipsilateral M1);
(3) over the SMAs, the N-40 was reduced after APTD, suggesting that the dopamine depletion affects response selection processes.
A first comment is in order. The sensitivity of the N-40 to APTD cannot be attributed to a general effect on cerebral electrogenesis: first, because the ERPs recorded over the M1s were unaffected by APTD; second, because Larson and colleagues (2015) extensively examined the sensitivity to APTD of several ERPs assumed to reveal action monitoring processes, namely the N450 [START_REF] West | Effects of task context and fluctuations of attention on neural activity supporting performance of the Stroop task[END_REF], the conflict slow potential [START_REF] West | Effects of task context and fluctuations of attention on neural activity supporting performance of the Stroop task[END_REF][START_REF] Mcneely | Neurophysiological evidence for disturbances of conflict processing in patients with schizophrenia[END_REF], the Error Negativity [START_REF] Falkenstein | Effects of crossmodal divided attention on late ERP components. II. Error processing in choice reaction tasks[END_REF][START_REF] Gehring | A neural system for error detection and compensation[END_REF], and the Error Positivity [START_REF] Falkenstein | Effects of crossmodal divided attention on late ERP components. II. Error processing in choice reaction tasks[END_REF], and none of these activities were sensitive to APTD, confirming that the present effects observed on the N-40 cannot result from a general decrease of electrogenesis. One can therefore conclude that the sensitivity of the N-40 to APTD is specific.
Second, because dopamine depletion has little or no effect on execution processes (contralateral M1) or on the proactive control of errors (ipsilateral M1), one can conclude that the effect of APTD observed on the N-40 over the SMAs reflects a selective influence of dopamine depletion on response selection processes (as proposed in the Introduction), without noticeable effects on processes occurring downstream, i.e., response execution. Note that the selective influence of APTD, but also of congruency, on upstream processes, both leaving contingent downstream activities unaffected, suggests the existence of separate modules in information processing operations [START_REF] Sternberg | Separate modifiability, mental modules, and the use of pure and composite measures to reveal them[END_REF][START_REF] Sternberg | Modular processes in mind and brain[END_REF] and of "functionally specialized [neural] processing modules" (Sternberg 2011, page 158). Now, because APTD had no influence on RT, error rate, or the size of the congruency effect [START_REF] Ramdani | Dopamine precursors depletion impairs impulse control in healthy volunteers[END_REF][START_REF] Larson | The effects of acute dopamine precursor depletion on the cognitive control functions of performance monitoring and conflict processing: an event-related potential (ERP) study[END_REF], we must conclude that the effect of an approximately 30% dopamine depletion [START_REF] Leyton | Decreasing amphetamine-induced dopamine release by acute phenylalanine/tyrosine depletion: a PET/ [ 11 C] raclopride study in healthy men[END_REF][START_REF] Montgomery | Reduction of brain dopamine concentration with dietary tyrosine plus phenylalanine depletion: an [11C] raclopride PET study[END_REF] observed here on the N-40 reveals a weak infraclinical functional deficit in response selection operations. One can imagine that a stronger depletion would have a behavioral expression on RTs.

Fig. 2 N-40: in black, the placebo session; in gray, the APTD session; dashed, the congruent condition; solid, the incongruent condition. Maps have the same scale and are dated at -30 ms. The zero of time corresponds to the onset of the EMG burst.
According to the model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF], the fMRI data reported by [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] indicate that, during the preparatory period of a choice RT task, the dopaminergic system is involved in preparing for response selection; the SN pars compacta BOLD signal increased after the precue but returned to baseline before the end of the preparatory period in the easier of the two response selection conditions. However, due to the low temporal resolution of fMRI, no evidence could be provided regarding response selection per se.
If one admits that the N-40 is a physiological index of response selection processes [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF][START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF], the selective sensitivity of the N-40 to ATPD, with spared M1s activation/inhibition pattern, lends support to the model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF] which assumes that the dopaminergic system is involved in response selection per se. In the motor loop between the basal ganglia and the cortex [START_REF] Alexander | Linking motor-related brain potentials and velocity profiles in multi-joint arm reaching movements[END_REF], the SMAs constitute a major cortical target of the basal ganglia, via the thalamus. If we assume that the N-40 is generated by the SMAs [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF], it is likely that the final effect of dopaminergic depletion on response selection takes place in the SMAs because of a decrease in thalamic glutamatergic output to this area, due to dopamine depletion-induced striatal impairment. Of course, it cannot be excluded that APTD influenced SMAs activity through direct dopaminergic projections to the cortex; however, this seems unlikely since PET studies show that most of the APTD-induced dopamine depletion involves striatal structures [START_REF] Leyton | Decreasing amphetamine-induced dopamine release by acute phenylalanine/tyrosine depletion: a PET/ [ 11 C] raclopride study in healthy men[END_REF][START_REF] Montgomery | Reduction of brain dopamine concentration with dietary tyrosine plus phenylalanine depletion: an [11C] raclopride PET study[END_REF].
Fig. 3 Activation/inhibition pattern. In green, the placebo session: light green, activation of the (contralateral) primary motor cortex involved in the response; dark green, inhibition of the (ipsilateral) primary motor cortex controlling the non-responding hand. In red, the APTD session: dark red, activation of the (contralateral) primary motor cortex involved in the response; light red, inhibition of the (ipsilateral) primary motor cortex controlling the non-responding hand. The zero of time corresponds to the onset of the EMG burst. Maps have the same scale and are dated at 0 ms. On the left side of the maps, in blue, the activity of the contralateral primary motor cortex involved in the response; on the right side, in red, the activity of the ipsilateral primary motor cortex.

According to Grillner and his colleagues (2005, 2013), the basal ganglia are strongly involved in the selection of basic motor programs (e.g., locomotion, chewing, swallowing, eye movements), through a basic organization that has been conserved throughout vertebrate phylogeny, from lamprey to primates. The model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF], the data of [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF], and the present results suggest that the implication of the basal ganglia in action selection might also concern more flexible, experience-dependent motor programs.
A final comment is in order. Although the Laplacian transformation largely increases the spatial resolution of EEG data, it is not possible to spatially separate the subregions of the SMAs, i.e., the pre-SMA and the SMA proper ([START_REF] Luppino | Multiple representations of body movements in mesial area 6 and the adjacent cingulate cortex: an intracortical microstimulation study in the macaque monkey[END_REF][START_REF] Matsuzaka | A motor area rostral to the supplementary motor area (presupplementary motor area) in the monkey: neuronal activity during a learned motor task[END_REF] Picard and Strick 1996, 2001). [START_REF] Larson | The effects of acute dopamine precursor depletion on the cognitive control functions of performance monitoring and conflict processing: an event-related potential (ERP) study[END_REF] reported that APTD does not affect the Error Negativity (note that, in the present experiment, we did not observe any effect of APTD on the Error Negativity either [data not shown]). Now, it has been demonstrated with intracerebral electroencephalography in human subjects that the Error Negativity is primarily generated in the SMA proper but not in the pre-SMA [START_REF] Bonini | Action monitoring and medial frontal cortex: leading role of supplementary motor area[END_REF]. This suggests that SMA proper activity is not noticeably impaired by APTD. As a consequence, it seems likely that the N-40 is generated in the pre-SMA. Although both areas receive disynaptic inputs from the basal ganglia via the thalamus, a differential sensitivity of the SMA proper and the pre-SMA to dopamine depletion is not necessarily surprising if one considers that the pre-SMA and the SMA proper are targeted by neurons located in neurochemically and spatially distinct regions of the internal segment of the globus pallidus [START_REF] Akkal | Supplementary motor area and presupplementary motor area: targets of basal ganglia and cerebellar output[END_REF], a major output structure of the basal ganglia.
Two limitations of the present study must be noted. First, considering that the model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF] not only assumes that the dopaminergic system is involved in response selection but also supposes that selection of the appropriate response is achieved via the D2 system, the present study is unable to determine whether the effects of dopamine depletion are due to D1 receptors, D2 receptors, or both; the same remark also holds for the results of [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF]. Second, our APTD manipulation is all-or-none, with no possibility of evidencing dose-dependent effects. Future pharmacological manipulations would allow dose-dependent designs, with possibly no behavioral effects at lower doses and behavioral impairments at higher doses. Such manipulations would also allow separating, at least in part, D2 from D1 effects to test further the model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF].
To conclude, our results extend those of Yoon et al. (2015), who showed in an fMRI study that the dopaminergic system is finely sensitive to the complexity or simplicity of response selection during preparation. Thanks to the N-40, and in accordance with the model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF], the present results directly indicate that the dopaminergic system is selectively involved in response selection per se, with little or no effect on response execution processes or on the proactive control of errors.
Fig. 1 Schematic representation of the task used by Yoon et al. (2015) and the present task. Congruency in the Yoon et al. (2015) task was defined between the arrow direction and the response side, and in the present task between the stimulus position and the response side.
Table 1 Amino acid plasmatic concentrations of phenylalanine and tyrosine
Amino acid    Before the absorption of the mixture    After the absorption of the mixture
Phenylalanine (μmol/l) placebo 60.3 ± 3.2 106.8 ± 6.1*
Phenylalanine (μmol/l) depleted 59.8 ± 3.5 17.4 ± 3.6*
Tyrosine (μmol/l) placebo 74.4 ± 4.1 272.7 ± 10.8*
Tyrosine (μmol/l) depleted 71.7 ± 4.4 20.5 ± 4.4*
Laboratoire de Neurosciences Cognitives, Aix-Marseille Univ/CNRS, Marseille, France
Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
Institut de Médecine Navale du Service de Santé des Armées, Toulon, France
Acknowledgements We thank Dominique Reybaud, Bruno Schmid, and the pharmacy personnel of the hospital Sainte Anne for their helpful technical contribution.
Funding information
The authors also gratefully acknowledge the financial support from the Institut de Recherches Biomédicales des Armées, France.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest. | 44,744 | [
"741516"
] | [
"507058",
"199398",
"549850",
"37965",
"199398"
] |
01760614 | en | [
"sde"
] | 2024/03/05 22:32:13 | 2017 | https://amu.hal.science/hal-01760614/file/art_10.1007_s11306-017-1169-z.pdf | Stephane Greff
email: [email protected]
Mayalen Zubia
email: [email protected]
Claude Payri
email: [email protected]
Olivier P Thomas
email: [email protected]
Thierry Perez
email: [email protected]
Stéphane Greff
Chemogeography of the red macroalgae Asparagopsis: metabolomics, bioactivity, and relation to invasiveness
Keywords: Asparagopsis taxiformis, Macroalgal proliferations, Metabolomics, Transoceanic comparisons, Microtox®, UHPLC-HRMS
Introduction
Ecologists generally assume that biotic interactions are more prominent in the tropics [START_REF] Schemske | Is there a latitudinal gradient in the importance of biotic interactions?[END_REF], where species richness and biomass are considered higher [START_REF] Brown | Why are there so many species in the tropics?[END_REF][START_REF] Mannion | The latitudinal biodiversity gradient through deep time[END_REF]. The latitudinal gradient hypothesis (LGH) states that tropical plants, subjected to higher pressures of competition, herbivory and parasitism, have inherited more defensive traits than their temperate counterparts [START_REF] Coley | Comparison of herbivory and plant defenses in temperate and tropical broad-leaved forests[END_REF][START_REF] Coley | Herbivory and plant defenses in tropical forests[END_REF][START_REF] Schemske | Is there a latitudinal gradient in the importance of biotic interactions?[END_REF]. The same trend exists in marine ecosystems, as temperate macroalgae are consumed overall about twice as much as the better-defended tropical ones [START_REF] Bolser | Are tropical plants better defended? Palatability and defenses of temperate vs. tropical seaweeds[END_REF]. However, the assumption that both biotic interactions and defense metabolism are strongly related to the latitudinal gradient, as a result of co-evolutionary processes, still requires additional evidence [START_REF] Moles | Dogmatic is problematic: Interpreting evidence for latitudinal gradients in herbivory and defense[END_REF][START_REF] Moles | Is the notion that species interactions are stronger and more specialized in the tropics a zombie idea?[END_REF]. Some studies did not find any relationship between herbivory pressure and latitude [START_REF] Adams | A test of the latitudinal defense hypothesis: Herbivory, tannins and total phenolics in four North American tree species[END_REF][START_REF] Andrew | Herbivore damage along a latitudinal gradient: relative impacts of different feeding guilds[END_REF], and an opposite trend has even been demonstrated in some cases [START_REF] Del-Val | Seedling mortality and herbivory damage in subtropical and temperate populations: Testing the hypothesis of higher herbivore pressure toward the tropics[END_REF]. For instance, phenolic compounds in terrestrial [START_REF] Adams | A test of the latitudinal defense hypothesis: Herbivory, tannins and total phenolics in four North American tree species[END_REF] and marine ecosystems seem to be equally present at low and high latitudes [START_REF] Targett | Biogeographic comparisons of marine algal polyphenolics: evidence against a latitudinal trend[END_REF][START_REF] Van Alstyne | The biogeography of polyphenolic compounds in marine macroalgae: Temperate brown algal defenses deter feeding by tropical herbivorous fishes[END_REF].
Chemical traits may also respond to changes in environmental conditions and/or biotic interactions [START_REF] Nylund | Metabolomic assessment of induced and activated chemical defence in the invasive red alga Gracilaria vermiculophylla[END_REF]. Several ecosystems are affected by the introduction of non-indigenous species (NIS), which may disrupt biotic interactions [START_REF] Schaffelke | Impacts of introduced seaweeds[END_REF][START_REF] Simberloff | Impacts of biological invasions: What's what and the way forward[END_REF]. After the loss of their specific natural enemies, NIS may reallocate the energy originally dedicated to defenses (specialized metabolism) into reproduction and growth (primary metabolism), and thus succeed in the colonized environments [START_REF] Keane | Exotic plant invasions and the enemy release hypothesis[END_REF]. Interactions between NIS and native species may also modify chemical traits, as argued by the novel weapons hypothesis (NWH) [START_REF] Callaway | Novel weapons: invasive success and the evolution of increased competitive ability[END_REF]. In addition, the production of defensive compounds may also be influenced by several abiotic factors such as temperature [START_REF] Ivanisevic | Biochemical trade-offs: evidence for ecologically linked secondary metabolism of the sponge Oscarella balibaloi[END_REF][START_REF] Reverter | Secondary metabolome variability and inducible chemical defenses in the Mediterranean sponge Aplysina cavernicola[END_REF], light [START_REF] Cronin | Effects of light and nutrient availability on the growth, secondary chemistry, and resistance to herbivory of two brown seaweeds[END_REF][START_REF] Deneb | Chemical defenses of marine organisms against solar radiation exposure Marine Chemical Ecology[END_REF][START_REF] Paul | The ecology of chemical defence in a filamentous marine red alga[END_REF], and nutrient availability [START_REF] Cronin | Effects of light and nutrient availability on the growth, secondary chemistry, and resistance to herbivory of two brown seaweeds[END_REF]. Moreover, internal factors such as reproductive stage (Ivanisevic et al. 2011a; [START_REF] Vergés | Sex and life-history stage alter herbivore responses to a chemically defended red alga[END_REF]) and ontogeny [START_REF] Paul | Simple growth patterns can create complex trajectories for the ontogeny of constitutive chemical defences in seaweeds[END_REF] are globally subject to seasonal variation and may consequently affect the specialized metabolism (Ivanisevic et al. 2011a).
The genus Asparagopsis (Rhodophyta, Bonnemaisoniaceae) is currently represented by two species, A. taxiformis (Delile) Trévisan de Saint-Léon and A. armata (Harvey) [START_REF] Andreakis | Asparagopsis taxiformis and Asparagopsis armata (Bonnemaisoniales, Rhodophyta): Genetic and morphological identification of Mediterranean populations[END_REF][START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF]. Asparagopsis taxiformis is widespread in temperate, subtropical and tropical areas and, so far, six cryptic lineages with distinct geographic distributions have been described for this species [START_REF] Andreakis | Phylogeography of the invasive seaweed Asparagopsis (Bonnemaisoniales, Rhodophyta) reveals cryptic diversity[END_REF][START_REF] Andreakis | Endemic or introduced? Phylogeography of Asparagopsis (Florideophyceae) in Australia reveals multiple introductions and a new mitochondrial lineage[END_REF][START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF]). Among them, the worldwide fragmented distribution pattern of A. taxiformis lineage two is explained by multiple introduction events, and in some places of the Southwestern Mediterranean Sea for instance, it is clearly invasive and outcompeting indigenous benthic organisms [START_REF] Altamirano | The invasive species Asparagopsis taxiformis (Bonnemaisoniales, Rhodophyta) on Andalusian coasts (Southern Spain): Reproductive stages, new records and invaded communities[END_REF][START_REF] Zanolla | The seasonal cycle of Asparagopsis taxiformis (Rhodophyta, Bonnemeaisoniaceae): key aspects of the ecology and physiology of the most invasive macroalga in Southern Spain[END_REF].
The genus Asparagopsis is known to biosynthesize about one hundred halogenated volatile hydrocarbons containing one to four carbons, with antimicrobial, antifeedant and cytotoxic properties ([START_REF] Genovese | In vitro evaluation of antibacterial activity of Asparagopsis taxiformis from the Straits of Messina against pathogens relevant in aquaculture[END_REF][START_REF] Kladi | Volatile halogenated metabolites from marine red algae[END_REF] Paul et al. 2006b). The investment in defense traits can be assessed through the analysis of the specialized metabolism using metabolomics. Another way to evaluate the resources allocated to defense traits is to measure the bioactivity of an organismal extract as a proxy of the biosynthesis of defense-related compounds. The Microtox® assay is a simple, efficient and rapid method that correlates well with other biological tests [START_REF] Botsford | A comparison of ecotoxicological tests[END_REF]. The trade-off between the specialized metabolism and the primary metabolism dedicated to essential biochemical processes such as growth and reproduction can be assessed by this approach [START_REF] Ivanisevic | Biochemical trade-offs: evidence for ecologically linked secondary metabolism of the sponge Oscarella balibaloi[END_REF]. Bioactivities of extracts can be directly correlated with the expression level of targeted metabolites [START_REF] Cachet | Metabolomic profiling reveals deep chemical divergence between two morphotypes of the zoanthid Parazoanthus axinellae[END_REF][START_REF] Martí | Quantitative assessment of natural toxicity in sponges: toxicity bioassay versus compound quantification[END_REF][START_REF] Reverter | Secondary metabolome variability and inducible chemical defenses in the Mediterranean sponge Aplysina cavernicola[END_REF], and metabotypes were shown to explain bioactivity patterns [START_REF] Ivanisevic | Biochemical trade-offs: evidence for ecologically linked secondary metabolism of the sponge Oscarella balibaloi[END_REF]. However, metabolomics does not necessarily match bioactivity assessment. Indeed, metabolomics provides an overall picture of the chemical complexity of a biological matrix, this picture being dependent on the selected technique (MS or NMR), but in any case without any indication of putative synergistic or antagonistic effects of the detected compounds. On the other hand, an assay such as the Microtox® integrates all putative synergistic or antagonistic effects of the extracted compounds, but the value obtained is only a proxy depending on the specificity of the response of the model bacterial strain.
The first objective of our study was to assess the macroalgal investment in defensive traits using two non-equivalent approaches: UHPLC-HRMS metabolic fingerprinting and the biogeographic variation of macroalgal bioactivities assessed with the Microtox® assay. The second objective was to understand how environmental factors (temperature, light) may influence macroalgal defensive traits. Finally, we also evaluated the relationship between the bioactivities and the status of the macroalga, considering its origin (introduced vs. native) and its cover together as an indicator of invasiveness, in order to assess the involvement of macroalgal chemical defenses in its proliferation.
Methods
Biological Material
Among the six different lineages of A. taxiformis (Delile) Trevisan de Saint-Léon (Rhodophyta, Bonnemaisoniaceae) [START_REF] Andreakis | Phylogeography of the invasive seaweed Asparagopsis (Bonnemaisoniales, Rhodophyta) reveals cryptic diversity[END_REF][START_REF] Andreakis | Endemic or introduced? Phylogeography of Asparagopsis (Florideophyceae) in Australia reveals multiple introductions and a new mitochondrial lineage[END_REF][START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF], only five were considered in this study. This alga can cover hard and soft substrates from 0 to 45 m depth in both temperate and tropical waters. Asparagopsis armata (Harvey), a species distributed worldwide and currently composed of two distinct genetic clades [START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF], was also considered in this study; only one lineage, mostly growing on hard substrates at shallow depth, was investigated. The genus is dioecious: the gametophyte stage presents distinct male and female individuals and alternates with a heteromorphic tetrasporophyte, the "Falkenbergia" stage. In this study, we focused on the gametophyte stage of the macroalgae.
Sampling
A total of 289 individuals of the A. taxiformis gametophytic stage were collected at 21 stations distributed over 10 sites in two zones (temperate and tropical), from October 2012 to April 2015 (Table 1). The sampled stations presented highly variable A. taxiformis cover. Three classes of Asparagopsis cover were determined by visual assessment: low (0-35%), medium (35-65%), and high (65-100%). Asparagopsis armata was sampled in the south of Spain, where it lives in sympatry with A. taxiformis. Two temporal samplings of A. taxiformis were performed in Réunion (Saint Leu, four dates from October 2012 to July 2013) and in France (La Ciotat, six dates from November 2013 to April 2015).
Metabolite extraction
After collection, samples were transported in a cooler and stored at -20 °C before freeze-drying. Dried samples were preserved in silica gel and sent to Marseille (France). Each sample was then individually ground into a fine powder using a mixer mill (Retsch® MM400, 30 Hz for 30 s). One hundred milligrams of each sample were extracted three times with 2 mL of MeOH/CH2Cl2 1:1 (v/v) in an ultrasonic bath (1 min) at room temperature. The filtrates (PTFE, 0.22 µm, Restek®) were pooled and concentrated to dryness, adsorbing the extracts on C18 silica particles (100 mg, non-end-capped C18 Polygoprep 60-50, Macherey-Nagel®). The extracts were then subjected to SPE (Strata C18-E, 500 mg, 6 mL, Phenomenex®), eluting with H2O, MeOH, and CH2Cl2 (5 mL each) after cartridge cleaning (10 mL MeOH) and conditioning (10 mL H2O). The MeOH fractions were evaporated to dryness and resuspended in 2 mL of MeOH prior to metabolomic analyses by UHPLC-QqTOF. After this first analysis, the same macroalgal extracts were concentrated to dryness to be used for bioactivity assessment with the Microtox® assay.
Metabolomic analyses
Chemicals
Methanol, dichloromethane and acetonitrile of analytical quality were purchased from Sigma-Aldrich (Chroma-solv®, gradient grade). Formic acid and ammonium formate (LC-MS additives, Ultra grade) were provided by Fluka.
LC-MS analyses
Analyses were performed on a UHPLC-QqTOF instrument: the UHPLC, equipped with an RS pump, an autosampler, a thermostated column compartment and a UV diode array detector (Dionex Ultimate 3000, Thermo Scientific®), was coupled to a high-resolution mass spectrometer (MS) equipped with an ESI source (Impact II, Bruker Daltonics®). Mass spectra were acquired consecutively in positive and negative mode. The flow rate was set to 0.5 mL min -1 at a constant temperature of 40 °C, and the injection volume was 10 µL. Chromatographic solvents were A: water with 0.1% formic acid (positive mode) or 10 mM ammonium formate (negative mode), and B: acetonitrile/water (95:5) with the same respective additives. UHPLC separation was performed on an Acclaim RSLC C18 column (2.1 × 100 mm, 2.2 µm, Thermo Scientific®). For the studies of spatial and temporal patterns, the chromatographic elution gradients were adjusted to improve peak resolution using a pooled sample, and two elution gradients were thus applied. For the study of spatial patterns, the program started at 40% B for 2 min, followed by a linear gradient up to 100% B in 8 min, then maintained for 4 min in isocratic mode; the analysis was followed by a return to initial conditions for column equilibration during 3 min, for a total runtime of 17 min. For the study of temporal patterns, the program started at 2% B for 1 min, followed by a linear gradient up to 80% B in 5 min, then maintained for 6 min in isocratic mode at 80% B; the analysis was followed by a phase at 100% B for 4 min and equilibration for 4 min, for a total runtime of 20 min. Analyses were processed in separate batches for the studies of spatial and temporal variation of the metabotypes. Macroalgal extracts were injected in a random order with respect to sampling sites or dates, with the pooled sample injected every six samples to allow inter- and intra-batch calibration of the MS drift over time. MS parameters were set as follows for positive mode (negative mode in parentheses): nebulizer gas, N2 at 31 psi (51 psi); dry gas, N2 at 8 L min -1 (12 L min -1 ); capillary temperature at 200 °C and voltage at 2500 V (3000 V). Data were acquired at 2 Hz in full-scan mode from 50 to 1200 amu. The mass spectrometer was systematically calibrated with a formate/acetate solution forming clusters over the studied mass range before each full set of analyses. The same calibration solution was automatically injected before each sample for internal mass calibration. Data-dependent acquisition MS2 experiments were also conducted (precursor list renewed every three major precursors) on some samples from each location.
Data analyses
Raw analyses were automatically calibrated using the internal calibrant before being exported as netCDF files (centroid mode) with Bruker Compass DataAnalysis 4.3. All converted analyses were then processed with the XCMS software [START_REF] Smith | XCMS: processing mass spectrometry data for metabolite profiling using nonlinear peak alignment, matching, and identification[END_REF] under R (R_Core_Team 2013), following the steps necessary to generate the final data matrix: (1) peak picking (peakwidth = c(…, 20), ppm = 2) without threshold prefilter [START_REF] Patti | Meta-analysis of untargeted metabolomic data from multiple profiling experiments[END_REF], (2) retention time correction (method = "obiwarp"), (3) grouping (bw = 10, minfrac = 0.3, minsamp = 1), (4) fillPeaks, and finally (5) report and data matrix generation, with the matrix transferred to Excel. Each individual ion was finally normalized (when necessary) according to the drift of the equivalent ion in the pooled samples and to the injection order (van der [START_REF] Van Der Kloet | Analytical error reduction using single point calibration for accurate and precise metabolomic phenotyping[END_REF]). Data were calibrated between batches (inter-batch calibration for the study of spatial patterns) by dividing each ion by the intra-batch mean value of the pooled-sample ions and multiplying by the total mean value over all batches [START_REF] Ejigu | Evaluation of normalization methods to pave the way towards large-scale LC-MS-based metabolomics profiling experiments[END_REF]. Metabolites were annotated with the manufacturer's software (Bruker Compass DataAnalysis 4.3).
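The pooled-sample (QC) batch calibration described above can be sketched as follows. This is an illustrative Python re-implementation of the stated logic, not the authors' R code; it omits the within-batch drift correction as a function of injection order, and the feature table and QC flags are simulated.

```python
import numpy as np

def qc_normalise(X, is_qc, batch):
    """X: (n_samples, n_features) intensities; is_qc: boolean mask of pooled-QC injections;
    batch: integer batch label per sample.  Each feature is divided by the mean intensity
    of the pooled QC samples of its batch, then rescaled by the grand QC mean over all batches."""
    Xn = X.astype(float).copy()
    grand_qc_mean = X[is_qc].mean(axis=0)
    for b in np.unique(batch):
        in_batch = batch == b
        batch_qc_mean = X[is_qc & in_batch].mean(axis=0)
        scale = np.where(batch_qc_mean > 0, batch_qc_mean, 1.0)   # avoid division by zero
        Xn[in_batch] = X[in_batch] / scale * grand_qc_mean
    return Xn

# Hypothetical usage: 30 injections, 500 features, 2 batches, a pooled QC every 6th run
rng = np.random.default_rng(4)
X = rng.lognormal(mean=5.0, sigma=1.0, size=(30, 500))
batch = np.repeat([0, 1], 15)
is_qc = np.arange(30) % 6 == 0
X_corrected = qc_normalise(X, is_qc, batch)
```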
Bioactivity assays
Bioactivities of the macroalgal extracts were assessed using the standardized Microtox® assay (Microbics, USA). This ecotoxicological method measures the effect of compounds on the respiratory metabolism of Aliivibrio fischeri, which is correlated with the intensity of its natural bioluminescence [START_REF] Johnson | Microtox® acute toxicity test[END_REF]. Extracts were initially prepared at 2 mg mL -1 in artificial seawater with 2% acetone to facilitate dissolution, and then diluted (twofold) three times in order to test their effect on the bacteria and to draw EC 50 curves. The EC 50 , expressed in µg mL -1 , represents the concentration that decreases the initial luminescence by half after 5 min of exposure to the extracts.

Notes to Table 1: (a) According to the definition of an "introduction" by [START_REF] Boudouresque | Les espèces introduites et invasives en milieu marin, 3 edn[END_REF]: transportation favored directly or indirectly by humans, biogeographical discontinuity with the native range, and establishment (self-sustaining population). (b) Cover: low (up to 35%), medium (from 35 to 65%), high (over 65%). In summary, a high probability of introduction together with a high cover is considered equivalent to high invasiveness.
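For illustration, an EC50 of the kind described above can be read off a two-fold dilution series by fitting a log-logistic dose-response curve; the luminescence fractions below are hypothetical, and the commercial Microtox software uses its own fitting routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def remaining_luminescence(conc, ec50, hill):
    """Fraction of the initial luminescence remaining after 5 min of exposure."""
    return 1.0 / (1.0 + (conc / ec50) ** hill)

conc = 2000.0 / 2 ** np.arange(4)                # 2000, 1000, 500, 250 µg mL-1 (nominal)
fraction = np.array([0.05, 0.12, 0.33, 0.62])    # hypothetical 5-min readings

(ec50, hill), _ = curve_fit(remaining_luminescence, conc, fraction, p0=(300.0, 1.0))
print(f"EC50 about {ec50:.0f} µg mL-1")
```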
Environmental factors
Sea surface temperature (SST, in °C) and photosynthetically available radiation (PAR, in mol m -2 day -1 ) were obtained from NASA GES DISC for all sites (http://giovanni.gsfc.nasa.gov). In France, supplementary abiotic factors related to water chemistry, such as ammonium (NH 4 + ), nitrate (NO 3 - ), and phosphate (PO 4 3- ) concentrations (in µmol L -1 ), were provided by SOMLIT (http://somlit.epoc.u-bordeaux1.fr/fr/).
Statistical analyses
Principal component analyses (PCA) were performed using the "ade4" package [START_REF] Dray | The ade4 package: implementing the duality diagram for ecologists[END_REF]. PCAs were centered (the mean ion intensity over all samples was subtracted from each sample ion intensity) and normalized (each ion intensity was then divided by the relative standard deviation of that ion over all samples). PERMANOVA (adonis function, 1e5 permutations) and ANalysis Of SIMilarity (anosim function, using Euclidean distances) were performed with the "vegan" package [START_REF] Oksanen | vegan: community ecology package. R package version 2[END_REF]. PLS-DA were performed using the "RVAideMemoire" package [START_REF] Hervé | RVAideMemoire: Diverse Basic Statistical and Graphical Functions[END_REF] on scaled raw data according to zones and on unscaled log-transformed data according to sites. Permutational tests based on cross model validation procedures (MVA.test and pairwise.MVA.test) were used to test differences between groups: outer loop fivefold cross-validation, inner loop fourfold cross-validation, according to zones and sites [START_REF] Szymanska | Double-check: validation of diagnostic statistics for PLS-DA models in metabolomics studies[END_REF]. Very important peaks (VIPs) were determined according to the PLS-DA loading plots. Non-parametric analyses (Kruskal-Wallis test followed by the Steel-Dwass-Critchlow-Fligner post-hoc test, and Mann-Whitney test) were performed with XLSTAT version 2015.4.01.20575 to test differences in macroalgal bioactivities according to zones and sites. The relationships of macroalgal bioactivities with A. taxiformis cover, latitude or environmental factors were assessed using Spearman's rank correlation test (Rs) in XLSTAT.
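As an illustration of two of these computations, a centred and scaled PCA of the feature table and a Spearman rank correlation of bioactivity against an environmental variable, here is a minimal Python sketch with simulated inputs; it is not the R/ade4/XLSTAT workflow actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
X = rng.lognormal(size=(60, 200))            # 60 samples x 200 ion intensities (simulated)

# Centred, scaled PCA via singular value decomposition
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * s                                # sample coordinates on the principal components
explained = s ** 2 / np.sum(s ** 2)
print("inertia of PC1 + PC2:", round(100 * explained[:2].sum(), 1), "%")

# Spearman rank correlation of EC50 against sea surface temperature (simulated vectors)
ec50 = rng.normal(60.0, 25.0, size=60)
sst = rng.normal(22.0, 4.0, size=60)
rs, p = stats.spearmanr(ec50, sst)
print(f"Rs = {rs:.3f}, p = {p:.3f}")
```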
Results
Spatial variation of the macroalgal chemical profiles and bioactivities
All the macroalgal metabotypes were plotted on a principal component analysis (PCA) taking into account 2683 negative and 2182 positive ions. The PCA shows that the global inertia remains low, with only 11.9% of the variability explained (Fig. 1). The variance explained on axes 1-3 and 2-3 of the PCA is similar, with 11.2% and 9.3%, respectively, but the divergence between groups (zones and sites) is more evident along axes 2-3. Whereas the difference between macroalgal metabotypes from tropical and temperate zones is not statistically supported (PERMANOVA, F = 2.7, p = 0.064), a significant difference between sites is recorded (PERMANOVA, F = 2.9, p = 0.003), with a weak Pearson correlation factor (R 2 = 0.13). A similarity test between sites confirmed that macroalgae sampled in the Azores and France are significantly different from all other sites, as are macroalgae sampled in Martinique and New Caledonia (ANOSIM, R = 0.281, p < 0.001) (Fig. S1
and Table S1 in Supporting Information). Eight metabolite features were selected as chemomarkers with regard to the congruence of ions detected in both negative and positive modes on the PPLS-DA loading plots. They partly explain the dispersion of groups that differentiates temperate metabotypes from tropical ones (PPLS-DA, NMC = 2.5%, p = 0.001) (Fig. S2). Differentiation of groups was also effective for metabotypes of algae sampled at the different sites (PPLS-DA, NMC = 31.8%, p = 0.001) (Fig. S3
and Table S2). The most probable raw formulae of the biomarkers did not match any known compound from the genus Asparagopsis (Table S3).
Macroalgal bioactivities are negatively correlated with latitude (Rs = -0.148, R 2 = 0.02, p = 0.041). Overall, A. taxiformis from the temperate zone shows higher bioactivities (EC 50 = 32 ± 3 µg mL -1, mean ± SE) than A. taxiformis from the tropical zone (85 ± 5 µg mL -1; Mann-Whitney, U = 4256, p < 0.001; Figs. 2, 3a). Macroalgal bioactivities also differ significantly according to sampling site (Kruskal-Wallis, K = 90, p < 0.001, Fig. 2b). EC 50 values ranged from 20 ± 4 µg mL -1 (Azores) to 117 ± 15 µg mL -1 (French Polynesia). Similarly to the macroalgal bioactivities found in the Azores, high values were recorded in France (31 ± 4 µg mL -1), Algeria (33 ± 5 µg mL -1) and Spain (52 ± 13 µg mL -1). In comparison, A. armata sampled in southern Spain did not show EC 50 values significantly different from those of A. taxiformis sampled in France, Algeria and Spain (ESP arm, 24 ± 4 µg mL -1; Steel-Dwass-Critchlow-Fligner post hoc test, p > 0.05).
Macroalgae from Martinique (114 ± 24 µg mL -1 ) and French Polynesia (117 ± 15 µg mL -1 ) showed the lowest bioactivities, whereas macroalgae from Mayotte showed the highest values among the tropical macroalgae (57 ± 8 µg mL -1 ). Asparagopsis taxiformis from other tropical sites (New Caledonia, Réunion and Guadeloupe) exhibited intermediate bioactivities (respectively 72 ± 7 µg mL -1 , 80 ± 10 µg mL -1 , 80 ± 22 µg mL -1 ).
No relationship between the spatial pattern of variability in the macroalgal bioactivity and the macroalgal cover has been established (Rs = 0.061, R 2 = 0.004, p = 0.336). This spatial pattern in macroalgal bioactivities is actually negatively correlated with SST (Rs = -0.428, R 2 = 0.18, p < 0.001) and PAR (Rs = -0.37, R 2 = 0.14, p < 0.001), which explain respectively 18% and 14% of the overall variability (Fig. 1; Table 2).
Temporal variation of the macroalgal chemical profiles and bioactivities
Macroalgae from France and Réunion displayed distinct metabotypes that varied over time, with a much higher variability recorded in Réunion and no clear pattern of seasonal variation in France (Fig. 3a, b). The PCAs show that inertia is globally higher than when assessing the spatial variability, with about 19% explained in France and 24% in Réunion. Although we were able to distinguish several metabotypes in these time series, no clear chemomarkers were identified to explain this variability; the divergence likely relies on several minor ions.
The EC 50 values for A. taxiformis from France range from 13 ± 3 to 37 ± 5 µg mL -1 (mean ± SE), revealing a high bioactivity throughout the year (Fig. 4a), with individuals sampled in January 2015 exhibiting the lowest bioactivity recorded for this site (EC 50 = 37 ± 5 µg mL -1). This temporal pattern of variability is positively correlated with SST variations (Rs = 0.287, R 2 = 0.08, p = 0.02; PAR, p > 0.05), and negatively correlated with variations in ammonium and nitrate concentrations (Rs = -0.373, R 2 = 0.14, p = 0.003 for NH4+; Rs = -0.435, R 2 = 0.19, p < 0.001 for NO3-; Table 2). In Réunion, macroalgal bioactivities show a higher variability than recorded for the temperate site (Kruskal-Wallis, K = 29, p < 0.001; Fig. 4b). The lowest values (EC 50 = 204 ± 13 µg mL -1) were recorded in January, when the seawater temperature is the highest (monthly mean SST of 27.1 °C, Table S5), whereas the highest values (EC 50 = 14 ± 3 µg mL -1) were recorded in July, when the seawater temperature is lower (monthly mean SST of 23.5 °C). There is thus a negative correlation between the macroalgal bioactivity and the SST and PAR variability (Rs = -0.729, R 2 = 0.53, p < 0.001 and Rs = -0.532, R 2 = 0.28, p < 0.001, for SST and PAR respectively).
Discussion
Applying LC-MS-based metabolomics to halogenated metabolites
The genus Asparagopsis is known to biosynthesize about one hundred halogenated volatile organic compounds [START_REF] Kladi | Volatile halogenated metabolites from marine red algae[END_REF]. Whereas the major metabolites are assumed to be low-molecular-weight brominated compounds (Paul et al. 2006a), the metabolomic approach used in this study mostly detected non-halogenated metabolites. This might be explained by the volatility of these small compounds, which are mainly detected using GC-MS analysis. Higher-molecular-weight metabolites with six carbons, named mahorones, were previously reported from this species [START_REF] Greff | Mahorones, highly brominated cyclopentenones from the red alga Asparagopsis taxiformis[END_REF]. A targeted search for the mahorones in the collected gametophytes revealed the presence of 5-bromomahorone in almost all samples, without any clear pattern of distribution between samples. The second mahorone was not detected, possibly because of difficulties in the ionization process of these molecules, as described by [START_REF] Greff | Mahorones, highly brominated cyclopentenones from the red alga Asparagopsis taxiformis[END_REF].
In this study, brominated and iodinated metabolites were only evidenced by the release of bromide and iodide in the negative mode. Electrospray ionization is strongly dependent on the physical and chemical properties of the metabolites. In negative mode, the detection of halogenated metabolites is not favored, as the electrons may be trapped by halogens, rendering halogenated metabolites unstable and undetectable by the mass spectrometer (except for halide ions). A metabolomic approach using HRMS is thus suitable for the detection of easily ionizable metabolites present in the macroalgae, but it reaches a limit when the major specialized metabolites are highly halogenated.
Relationship between metabotypes and bioactivities
Although various metabotypes were clearly discriminated, the divergence is due to a high number of minor ions. In this study, the phenotypes of temperate macroalgae, especially A. armata sampled in Spain and A. taxiformis sampled in Spain and Algeria, are distinguished mostly by the presence of some metabolite features, named MF1-MF8, not previously described for these species. Macroalgae sampled in temperate environments showed higher bioactivities than those sampled in tropical environments, indicating that the macroalgal investment in defense was greater at higher latitudes. So far, temperate A. taxiformis (France, Azores, Spain and Algeria) is mainly represented by the introduced lineage 2 (L2) [START_REF] Andreakis | Phylogeography of the invasive seaweed Asparagopsis (Bonnemaisoniales, Rhodophyta) reveals cryptic diversity[END_REF][START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF], suggesting that this NIS can modify its investment in chemical traits. Both species, A. taxiformis and A. armata, sampled in southern Spain showed closer phenotypes and bioactivities than A. taxiformis sampled at a larger geographic scale. This outcome suggests that macroalgal phenotypes are driven more by environmental factors, at least partly related to microbial communities, than by genetic factors. This phenotypic variability related to the exposome was already known at the morphological level, as a given genetic lineage or population can include various morphotypes [START_REF] Dijoux | La diversité des algues rouges du genre Asparagopsis en Nouvelle-Calédonie : approches in situ et moléculaires[END_REF][START_REF] Monro | The evolvability of growth form in a clonal seaweed[END_REF], and the morphotype variability could never be explained by genetics [START_REF] Dijoux | La diversité des algues rouges du genre Asparagopsis en Nouvelle-Calédonie : approches in situ et moléculaires[END_REF]. Although MS metabolomics has been applied successfully in a fair number of chemotaxonomic or chemosystematic studies, this study did not allow discrimination of the macroalgal lineages. The unusual ionization behavior of the major, highly halogenated specialized metabolites produced by these species might be one of the main explanations, thus calling for other technical approaches. Besides metabolomics, bioactivity assessment using the Microtox® assay appeared as a relevant complement to our MS approach in order to detect putative shifts in macroalgal chemical diversity and its related bioactivity.
In addition, the results of the Microtox® analyses, used as a proxy of the production of chemical defenses, are not in accordance with the Latitudinal Gradient Hypothesis (LGH) described on land, where plants allocate more to defensive traits at lower latitudes. It also shows that environmental factors are driving forces that can strongly influence the specialized metabolism and its related bioactivity or putative ecological function [START_REF] Pelletreau | New perspectives for addressing patterns of secondary metabolites in marine macroalgae[END_REF][START_REF] Puglisi | Marine chemical ecology in benthic environments[END_REF][START_REF] Putz | Chemical defence in marine ecosystems[END_REF]. A higher herbivory pressure in tropical ecosystems than in temperate ones can be related to the species richness and biomass of tropical ecosystems [START_REF] Brown | Why are there so many species in the tropics?[END_REF][START_REF] González-Bergonzoni | Meta-analysis shows a consistent and strong latitudinal pattern in fish omnivory across ecosystems[END_REF], but also to a stronger resistance of herbivores to plant metabolites [START_REF] Craft | Biogeographic and phylogenetic effects on feeding resistance of generalist herbivores toward plant chemical defenses[END_REF]. A previous study conducted with A. armata demonstrated that an increase in toxicity towards bacteria was related to the amount of bioactive halogenated compounds (Paul et al. 2006a). The halogenation process was also shown to be determinant for the deterrence of non-specialized herbivores (Paul et al. 2006b; [START_REF] Rogers | Ecology of the sea hare Aplysia parvula (Opisthobranchia) in New South Wales, Australia[END_REF]; [START_REF] Vergés | Sex and life-history stage alter herbivore responses to a chemically defended red alga[END_REF]). However, only few grazers are recognized to feed on Asparagopsis: the sea hare Aplysia parvula (Paul et al. 2006b) and the abalone Haliotis rubra (Paul et al. 2006b; Shepherd and Steinberg 1992 in Paul 2006) are known to graze A. armata, and only the sea hare Aplysia fasciata was reported to feed on A. taxiformis [START_REF] Altamirano | The invasive species Asparagopsis taxiformis (Bonnemaisoniales, Rhodophyta) on Andalusian coasts (Southern Spain): Reproductive stages, new records and invaded communities[END_REF].
Competition for space might also promote defensive traits, as macroalgae can be abundant in temperate infralittoral zones [START_REF] Mineur | European seaweeds under pressure: Consequences for communities and ecosystem functioning[END_REF][START_REF] Vermeij | Biogeography and adaptation: Patterns of marine life[END_REF]. For A. taxiformis in the Mediterranean Sea, the pressure of competition might be expected to be rather high in spring, when productivity is the highest [START_REF] Pinedo | Seasonal dynamics of upper sublittoral assemblages on Mediterranean rocky shores along a eutrophication gradient[END_REF], but our temporal survey showed rather similar bioactivities throughout the year. If macroalgal-macroalgal interactions can induce the biosynthesis of defensive metabolites, it remains difficult to explain why A. taxiformis maintains such a high level of defensive traits when these interactions are supposed to decrease. Competition is closely related to light availability. At temperate latitudes, the photophilic community is generally more bioactive than the hemisciaphilic communities, indicating that light plays a key role in bioactivity and in the biosynthesis of defense-related metabolites [START_REF] Martí | Seasonal and spatial variation of species toxicity in Mediterranean seaweed communities: correlation to biotic and abiotic factors[END_REF][START_REF] Mtolera | Stress-induced production of volatile halogenated organic compounds in Eucheuma denticulatum (Rhodophyta) caused by elevated pH and high light intensities[END_REF]. [START_REF] Paul | The ecology of chemical defence in a filamentous marine red alga[END_REF] demonstrated that the production of specialized metabolites was not costly for A. armata when light is not limited, as biosynthesis was positively correlated with growth. Yet, light is scarcely limiting except when competition with fleshy macroalgae reaches a maximum. In the tropics, high irradiance should lead to the synthesis of defense metabolites, as revealed for the rhodophyte Eucheuma denticulatum [START_REF] Mtolera | Stress-induced production of volatile halogenated organic compounds in Eucheuma denticulatum (Rhodophyta) caused by elevated pH and high light intensities[END_REF], but excessive irradiance may also stress macroalgae, leading to a biosynthetic switch [START_REF] Cronin | Effects of light and nutrient availability on the growth, secondary chemistry, and resistance to herbivory of two brown seaweeds[END_REF].
Taking all these factors together, the seasonal variation in bioactivity can give some clues. Surprisingly, the highest variation in macroalgal bioactivities was displayed by A. taxiformis in the tropical region (Réunion), although seasonality there is rather weak, with a small temperature range (5 °C) and high irradiance all year round. During the austral winter (SST of 23-24 °C), A. taxiformis from Réunion showed bioactivities equivalent to those from temperate zones, whereas the lowest bioactivities were displayed in the austral summer, when the water temperature was higher (26-27 °C). Asparagopsis taxiformis thermal tolerance has been tested up to 30 °C (Padilla-Gamino and Carpenter 2007). However, the high temperatures coupled to high irradiance and low nutrient levels that characterize tropical environments [START_REF] Vermeij | Biogeography and adaptation: Patterns of marine life[END_REF] may lead to metabolic alterations, as suggested by [START_REF] Cronin | Effects of light and nutrient availability on the growth, secondary chemistry, and resistance to herbivory of two brown seaweeds[END_REF]. Thus, a way to explain the higher defensive traits in temperate environments is to consider that maintaining a high level of defensive traits may not be so costly as long as light and nutrients are available and the temperature is physiologically adequate.
Relationship between macroalgal bioactivities and invasiveness
In temperate regions, A. taxiformis was reported to have been recently introduced in many places. In the Azores, A. taxiformis spread all around the islands up to the late 1990s and it is now well established [START_REF] Cardigos | Non-indigenous marine species of the Azores[END_REF][START_REF] Chainho | Non-indigenous species in Portuguese coastal areas, coastal lagoons, estuaries and islands[END_REF][START_REF] Micael | Tracking macroalgae introductions in North Atlantic oceanic islands[END_REF]. The last report on the worldwide distribution of A. taxiformis genetic lineages confirmed the presence of the introduced L2 on two Azorean islands [START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF]. In the Western Mediterranean Sea, only L2 has been recorded so far [START_REF] Andreakis | Asparagopsis taxiformis and Asparagopsis armata (Bonnemaisoniales, Rhodophyta): Genetic and morphological identification of Mediterranean populations[END_REF][START_REF] Andreakis | Phylogeography of the invasive seaweed Asparagopsis (Bonnemaisoniales, Rhodophyta) reveals cryptic diversity[END_REF][START_REF] Andreakis | High genetic diversity and connectivity in the polyploid invasive seaweed Asparagopsis taxiformis (Bonnemaisoniales) in the Mediterranean, explored with microsatellite alleles and multilocus genotypes[END_REF][START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF], but an invasive behavior is not recorded everywhere [START_REF] Zenetos | Alien species in the Mediterranean Sea by 2012. A contribution to the application of European Union's Marine Strategy Framework Directive (MSFD). Part 2[END_REF]. In the Alboran Sea, where this species has spread quickly since the late 20th century, A. taxiformis can form monospecific stands in several places along the Iberian coast [START_REF] Altamirano | The invasive species Asparagopsis taxiformis (Bonnemaisoniales, Rhodophyta) on Andalusian coasts (Southern Spain): Reproductive stages, new records and invaded communities[END_REF][START_REF] Altamirano | New records for the benthic marine flora of Chafarinas Islands (Alboran Sea, Western Mediterranean)[END_REF]. In the same biogeographic region, a high cover of A. taxiformis was also recorded off the Algerian coast, while the macroalga is poorly distributed at Ceuta and the Strait of Gibraltar. Thus, for the widespread lineage 2, which is considered invasive in some regions of the Mediterranean Sea and the North Atlantic [START_REF] Altamirano | The invasive species Asparagopsis taxiformis (Bonnemaisoniales, Rhodophyta) on Andalusian coasts (Southern Spain): Reproductive stages, new records and invaded communities[END_REF][START_REF] Micael | Tracking macroalgae introductions in North Atlantic oceanic islands[END_REF][START_REF] Streftaris | Alien marine species in the Mediterranean-the 100 'Worst Invasives' and their impact[END_REF], we observed a highly variable fate within the indigenous benthic communities, with the alga proliferating in some places and remaining rather discreet in others. This observation can be extended to the different lineages present in other geographic contexts, which tends to indicate no link between macroalgal bioactivities, their metabotypes and their invasiveness.
It is likely that other physiological traits, compared with those of indigenous sessile organisms, may explain its success in certain habitats, such as a particular efficiency in taking up nutrients, in dispersing a specific life-cycle stage, or in resisting environmental stress thanks to the polyploid status of the thalli.
Fig. 1 Principal component analysis (PCA) of methanolic macroalgal extracts analyzed in UHPLC-QqToF (positive and negative modes) according to zones (temperate versus tropical) and sites. PYF: French Polynesia, MTQ: Martinique, GUA: Guadeloupe, MYT: Mayotte, REU: Réunion, NCL: New Caledonia, AZO: Azores, FRA: France, DZA: Algeria; ESP: Spain with A. taxiformis (ESP tax) and A. armata (ESP arm)
Fig. 2 Mean bioactivities (± SE) of methanolic macroalgal extracts measured with the Microtox® ecotoxicological assay according to (a) sampling zones (tropical vs. temperate) and (b) sites. PYF: French Polynesia, NCL: New Caledonia, REU: Réunion, MYT: Mayotte, MTQ: Martinique, GUA: Guadeloupe, AZO: Azores, FRA: France, DZA: Algeria, ESP: Spain. Numbers of samples tested are written in the bars. Comparisons between zones were achieved with the Mann-Whitney test. Comparisons between sites were achieved with the Kruskal-Wallis test followed by the Steel-Dwass-Critchlow-Fligner post hoc test. Letters indicate differences between groups
Fig. 3 Principal component analysis (PCA) of methanolic macroalgal extracts analyzed in UHPLC-QqToF according to temporal variation for two sites/zones: (a) France for the temperate zone and (b) Réunion for the tropical zone. EC 50 (in µg mL -1) of A. taxiformis is given opposite to each sampling date (see Table S4 for details according to sites)
Fig. 4 Mean bioactivities (± SE) of methanolic macroalgal extracts measured with the Microtox® assay according to season at (a) La Ciotat (France) and (b) Saint Leu (Réunion). Numbers of samples tested are written in the bars. Comparisons were achieved using the Kruskal-Wallis test followed by the Steel-Dwass-Critchlow-Fligner post hoc test. Letters indicate differences between groups
Table 1 Sampling sites of Asparagopsis spp. around the world for the study of spatial and temporal variation of macroalgal bioactivities and chemical phenotypes

Zone | Site | Station | Most probable lineage | Probability of introduction a | Cover b | Sampling date | Latitude | Longitude | Depth (m) | Sampling effort (A. taxiformis / A. armata)
Spatial variation
Temperate | Azores | - | L2 | high | high | 07/11/2012 | 38°31.309′N | 28°38.315′W | - | 9
Temperate | Algeria | Ilôt de Bounettah | L2 | high | high | 04/09/2013 | 36°47.587′N | 3°21.078′E | - | 12
Temperate | Spain | La Herradura | L2 | high | high | 25/04/2013 | 36°43.276′N | 3°44.119′O | 3-25 | 11 / 5
Temperate | Ceuta | Ciclon de Tierra | L2 | high | low | 26/04/2013 | 35°53.848′N | 5°18.479′O | - | 1
Temperate | Ceuta | Ciclon de Fuera | L2 | high | low | 27/04/2013 | 35°53.936′N | 5°18.495′O | - | 1
Temperate | France | La Ciotat | L2 | high | low | 21/05/2013 | 43°9.957′N | 5°36.539′E | 5-8 | 9
Tropical | Guadeloupe | Caye a Dupont | L3 | uncertain | low | 14/04/2014 | 16°9.603′N | 61°32.74′W | 10-12 | 10
Tropical | Martinique | Anses d'Arlet | L3 | uncertain | low | 07/03/2014 | 14°28.86′N | 61°5.095′W | 24-24 | 7
Tropical | Mayotte | Aéroport | L4 | uncertain | high | 06/04/2013 | 12°49.277′S | 45°17.406′E | - | 10
Tropical | Mayotte | Kani kéli | L4 | uncertain | high | 06/04/2013 | 12°59.987′S | 45°6.543′E | - | 9
Tropical | Réunion | Saint Leu Ravine | L4 | uncertain | low | 29/10/2012 | 21°10.157′S | 55°17.102′E | <1 | 20
Tropical | New Caledonia | Ilot Canard | L5 | low | low | 31/01/2013 | 22°18.888′S | 166°26.085′E | 0.5-2 | 2
Tropical | New Caledonia | Dumbéa | L5 | low | medium | 01/02/2013 | 22°20.816′S | 166°13.954′E | 6-14 | 11
Tropical | New Caledonia | Dumbéa | L5 | low | medium | 01/02/2013 | 22°20.836′S | 166°13.906′E | 10-35 | 5
Tropical | New Caledonia | Touho | L5 | low | low | 4-5/02/2013 | 20°13.23′S | 165°17.122′E | 6-12 | 8
Tropical | New Caledonia | Koumac Kendec | L5 | low | low | 06/02/2013 | 20°40.403′S | 164°15.431′E | - | 3
Tropical | New Caledonia | Bourail | L5 | low | medium | 07/02/2013 | 21°41.376′S | 165°27.735′E | 5-35 | 26
Tropical | French Polynesia | Paea | L4 | low | low | 22/11/2012 | 17°43.681′S | 149°35.399′W | - | 6
Tropical | French Polynesia | Faaa1 | L4 | low | low | 22/11/2012 | 17°32.974′S | 149°37.852′W | - | 6
Tropical | French Polynesia | Faaa2 | L4 | low | low | 23/11/2012 | 17°32.179′S | 149°35.703′W | - | 5
Tropical | French Polynesia | Taapuna | L4 | low | low | 26/11/2012 | 17°36.661′S | 149°37.296′W | 4-35 | 10
Tropical | French Polynesia | Mangareva | L4 (L5) | low | low | 1-8/02/2013 | 23°10.051′S | 134°55.839′W | 3-5 | 16
Temporal variation
Temperate | France | La Ciotat-Mugel | L2 | | | 08/11/2013 | 43°9.957′N | 5°36.539′E | 5-8 | 8
Temperate | France | La Ciotat-Mugel | L2 | | | 08/01/2014 | 43°9.957′N | 5°36.539′E | 5-8 | 10
Temperate | France | La Ciotat-Mugel | L2 | | | 06/05/2014 | 43°9.957′N | 5°36.539′E | 5-8 | 7
Temperate | France | La Ciotat-Mugel | L2 | | | 01/07/2014 | 43°9.957′N | 5°36.526′E | 5-8 | 10
Temperate | France | La Ciotat-Mugel | L2 | | | 30/09/2014 | 43°9.957′N | 5°36.526′E | 5-8 | 10
Temperate | France | La Ciotat-Mugel | L2 | | | 01/08/2015 | 43°9.957′N | 5°36.526′E | 5-8 | 9
Temperate | France | La Ciotat-Mugel | L2 | | | 15/04/2015 | 43°9.957′N | 5°36.526′E | 5-8 | 5
Tropical | Réunion | Saint Leu Ravine | L4 | | | 04/10/2012 | 21°10.157′S | 55°17.102′E | <1 | 10
Tropical | Réunion | Saint Leu Ravine | L4 | | | 30/01/2013 | 21°10.157′S | 55°17.102′E | <1 | 10
Tropical | Réunion | Saint Leu Ravine | L4 | | | 25/04/2013 | 21°10.157′S | 55°17.102′E | <1 | 9
Tropical | Réunion | Saint Leu Ravine | L4 | | | 03/07/2013 | 21°10.157′S | 55°17.102′E | <1 | -
Total sampling effort: 289

a According to the definition of an "introduction" by [START_REF] Boudouresque | Les espèces introduites et invasives en milieu marin, 3 edn[END_REF]: transportation favored directly or indirectly by humans, biogeographical discontinuity with the native range, and established (self-sustaining population).
b Cover: low (up to 35%), medium (from 35 to 65%), high (over 65%). In summary, high probability of introduction together with high cover is considered equivalent to high invasiveness.
Table 2 Spearman's matrix of correlations between the dependent variable (bioactivity) and the independent variables (SST: sea surface temperature, PAR: photosynthetically active radiation, NH4+: ammonium concentration, NO3-: nitrate concentration, PO43-: phosphate concentration)

Pattern | Sites | Variables | Bioactivity | SST | PAR | NH4+ | NO3-
Spatial | | SST | -0.428 (0.18) | | | |
Spatial | | PAR | -0.370 (0.14) | 0.377 | | |
Temporal | France | SST | 0.287 (0.08) | | | |
Temporal | France | PAR | 0.175 (0.03) | 0.661 | | |
Temporal | France | NH4+ | -0.373 (0.14) | -0.617 | -0.622 | |
Temporal | France | NO3- | -0.435 (0.19) | -0.654 | -0.233 | 0.811 |
Temporal | France | PO43- | 0.079 (0.06) | -0.450 | -0.294 | 0.640 | 0.602
Temporal | Réunion | SST | -0.729 (0.53) | | | |
Temporal | Réunion | PAR | -0.532 (0.28) | 0.913 | | |

Bold numbers show significant values at the level α ≤ 0.05. Coefficients of determination (Spearman R 2) are given in brackets.
47,345 | ["18764", "960475", "863262", "18399"] | ["188653", "443873", "209113", "188653", "188653"]
01764115 | en | ["shs"] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01764115/file/substituability_EJHET_7March2018_pour%20HAL.pdf | Jean-Sébastien Lenfant
Substitutability and the Quest for Stability; Some Reflexions on the Methodology of General Equilibrium in Historical Perspective
Keywords: stability, general equilibrium, gross substitutability, substitutability, complementarity, Hicks (John Richard), law of demand, Sonnenschein-Mantel-Debreu, methodology B21, B23, B41, C62
1 The heuristic value of the substitutability assumption
It is a common view that the aims of general equilibrium theory were seriously disrupted and reoriented after the famous Sonnenschein-Mantel-Debreu theorems.
The hopes for finding general sufficient conditions under which a tâtonnement process is stable for a competitive economy have turned into dark pessimism, and even into disinterest. The story of the stability issue as a specific research program within GET is rather well known. [START_REF] Ingrao | The invisible hand: economic equilibrium in the history of science[END_REF] have identified the steps and boundaries (see also [START_REF] Kirman | Demand theory and general equilibrium: from explanation to introspection, a journey down the wrong road[END_REF]). Some scholars have dealt more specifically with the issue of dynamics, establishing connections with the history of general equilibrium theory [START_REF] Weintraub | Appraising general equilibrium analysis[END_REF], [START_REF] Weintraub | Stabilizing Dynamics: constructing economic knowledge[END_REF], [START_REF] Hands | Restabilizing dynamics: Construction and constraint in the history of walrasian stability theory[END_REF].
In contrast, the methodological appraisal of this story has not been pushed very far. Hands (2010 and [START_REF] Hands | Derivational robustness, credible substitute systems and mathematical economic models: the case of stability analysis in walrasian general equilibrium theory[END_REF]) provides insights regarding the notion of stability of consumer's choice and revealed preference theory in relation with the stability of general equilibrium, but his aim is not to provide a comprehensive analysis of the stability issue. Hence, as a shortcut to the history of stability, the most common opinion on the subject [START_REF] Guerrien | Concurrence, flexibilité et stabilité: des fondements théoriques de la notion de flexibilité[END_REF], [START_REF] Rizvi | Responses to arbitrariness in contemporary economics[END_REF], [START_REF] Bliss | Hicks on general equilibrium and stability[END_REF] credits the famous Sonnenschein-Mantel-Debreu theorems for having discarded any serious reference to the invisible hand mechanism to reach a competitive market equilibrium. However, one can find a slightly different position regarding the stability literature in [START_REF] Ingrao | The invisible hand: economic equilibrium in the history of science[END_REF]. According to them, from the very beginning of the 1960s, mathematical knowledge on dynamical systems and some well-known instability results [START_REF] Scarf | Some examples of global instability of the competitive equilibrium[END_REF][START_REF] Gale | A note on global instability of competitive equilibrium[END_REF] had already made research on stability a vain task.
The gap between those two positions is not anecdotal. Firstly, according to the stance adopted, the place of the SMD results is not the same, both theoretically and from a symbolic point of view. Secondly, methodological consequences are at stake in the way we represent the development of general equilibrium theory, and, more specifically, in the kind of methodological principles at work in a field of research characterized first of all by strong mathematical standards.
The aim of this paper is to identify some methodological principles at stake in the history of the stability of a competitive general equilibrium. More precisely, I would like to identify some criteria, other than mere analytical rigor, that were used to direct research strategies and to evaluate and interpret the theorems obtained in this field. This methodological look at the stability literature may lead to a more progressive view of the history, in which results modify, step by step, mathematical economists' perception of the successes and failures of a research program.
My aim in this article is to provide a first step into the history of the stability of a Walrasian exchange economy, taking Hicks's Value and Capital (1939) as a starting point. To this end, I will put in the foreground the concept of substitutability. Indeed, substitutability has been a structuring concept for thinking about stability. It is my contention here that the concept of substitutability helps to provide some methodological thickness to the history of general equilibrium theory, not captured by purely mathematical considerations. Hence, I uphold that it allows one to identify some methodological and heuristic constraints that framed the interpretation of the successes and failures in this field.
It is well known that a sufficient condition for local and global stability of the Walrasian tâtonnement in a pure exchange economy is the gross-substitutability assumption (GS), i.e., that the excess demand for each good increases when the price of any other good increases. By reconstructing the history of stability analysis through the concept of substitutability, I uphold that the representations attached to substitutability constituted a positive heuristic for the research program on stability. Therefore, tracking the ups and downs of this concept within general equilibrium provides some clues to appraise the methodology of general equilibrium in historical perspective and to account for the rise and fall of stability analysis in general equilibrium theory.
The research program on stability of a competitive general equilibrium is by itself rather specific within GET, and bears on other subfields of GET such as uniqueness and comparative statics. It is also grounded on some views about the meaning of the price adjustment process. Stability theorems are for the most part not systematically microfounded: they are formulated at first as properties of the aggregate excess demands (such as gross substitutability, diagonal dominance, the weak axiom of revealed preference), and their theoretical value is then assessed against their descriptive likelihood and heuristic potential, and not against their compatibility with the most general hypotheses of individual rationality. The paper upholds that the concept of substitutability, as a tool for expressing market interdependencies, was seen as a common language by mathematical economists, rich enough to develop a research program on stability and to appraise its progress and failures.
An ever-recurring question behind different narratives on GET revolves around the principles explaining the logic of its development, the fundamental reasons why GET was a developing area of research in the 1950s-1960s while it became depreciated in the 1970s. Adopting a Lakatosian perspective, one would say that the research program on GET was progressive in the 1960s and became regressive in the 1970s. Even this question assumes that we (methodologists, theorists, historians) agree upon the idea that GET functions as a research program and that it went through two different periods, one during which new knowledge accumulated and one that made new "positive" results hopeless, even devaluing older results in view of new ones.
Recent trends in economic methodology have left behind the search for such normative and comprehensive systems of interpretation of the developments of economic theories. They focus instead on economics as a complex or intricate system of theories, models, fields of research, each (potentially) using a variety of methods as rationalizing and exploratory tools (econometrics and statistical methods applied to various data, simulations, experiments). The first goal of methodological inquiries is then to bring some order into the ways those various tools and methods are applied in practice, how they are connected (or not) through specific discourses, what are the rationales of the practitioners themselves when they apply them.
As far as we are concerned here, the question arises of how GET can be grasped as an object of inquiry in itself, and more precisely how a field of questionings and research within this field-the stability of a competitive system-can be analyzed both as an autonomous field and in connection with other parts of GET. The present contribution does not claim to provide the structuring principle that explains the ups and downs of the research on the stability of general competitive equilibrium. It is too evident that various aspects of this research are connected with what is taking place in other parts of the field. First, the kind of mathematical object which is likely to serve as a support for discussing stability is not independent of the choice of the price-adjustment process that is used to describe the dynamics of the system when it is out of equilibrium. Hence, the explanatory power of a set of assumptions (about demand properties) is not disconnected from the explanatory power of another set of assumptions (the price adjustment process), which itself has to be connected with the assumptions about agents' behavior, motivations and perception of their institutional environment (e.g. price-taking behavior, utility-maximizing and profit-maximizing assumptions). In a sense, while it is useful to analyze the proper historical path of research on the stability of a Walrasian tâtonnement with a methodological questioning in mind, the historian-methodologist should be aware that various rationales are likely to play a role in its valuation as a relevant or anecdotal result. Second, the kind of assumptions made on a system of interdependent markets will have simultaneous consequences on different subfields of GET. An all too obvious example is that gross substitutability is both sufficient for uniqueness and global stability of a competitive equilibrium and allows for some comparative statics theorems. Third, the simple fact of identifying an autonomous subfield of research and of claiming that it is stable enough through time to be analyzed independently of some internal issues that surface here and there, is something that needs questioning. I have in mind the fact that it is not quite justified to take the stability of a competitive equilibrium as a historically stable object to which we may confidently apply various methodological hypotheses. There is first the question of delimiting the kind of tools used to describe such a competitive process. Certainly the Walrasian tâtonnement (WT) has been acclaimed as the main tool for this, but the methodological rationale for it needs to be considered in detail to account for the way theorists interpret the theorems of stability. One set of questions could be: What about similar theorems when non-tâtonnement processes are considered? Why discard processes with exchanges out of equilibrium? Another set would be: Why not consider that the auctioneer takes into account some interdependencies on the market to calculate new prices? Should we search for stability theorems that are independent of the speed of adjustment on markets? Should stability be independent of the choice of the numéraire?
It is my contention here that the methodology of economics cannot hope to find one regulatory principle suited to describing and rationalizing the evolution of a field of research when the studied object is itself subject to a set of various forces, from inside and outside, that make it rather unstable. If I do not claim to offer an explanatory principle of the research on stability theorems, what does this historical piece of research claim to add to the existing literature? It provides a principle that is in tune with the most recent research on the methodology of GET, as exemplified in [START_REF] Hands | Derivational robustness, credible substitute systems and mathematical economic models: the case of stability analysis in walrasian general equilibrium theory[END_REF]. It argues that the mathematical economists involved in the search for stability theorems adopted a strategy that focused on the ability to provide an interpretative content to their theorems, which by itself was necessary to formulate ways of improvement and generalization. In this respect, the concept of substitutability offered a way to connect the properties of individual behaviors with system-wide assumptions (such as GS) and to appraise those assumptions as more or less satisfactory or promising, in consideration of the kind of interpretable modifications that can be elaborated upon, using the language of substitutability. In so doing, using economically interpretable and comparable sets of assumptions is presented as a criterion for evaluating theorems, confronting them and fostering new research strategies; at the same time, it does not pretend to exhaust the reasons for interpreting those results with respect to the developments in other fields of GET. Even though the language of substitutability would lead to some new results (in the 1970s-1990s), their valuation would become too weak in comparison with what was expected of a satisfactory assumption after the critical results obtained by Sonnenschein, Mantel and Debreu. The paper aims at putting some historical perspective on how the concept of substitutability failed to convey enough economically interpretable and fruitful content.
The paper is organised as follows. Section 2 deals with Hicks's Value and Capital (1939) and its subsequent influence on stability issues until the middle of the 1950s. During this time span, stability is linked intimately with the search for comparative static results. It is a founding time for the heuristic of substitutability, and more precisely for the idea of a relation between substitutability and stability (2. Stability and Substitutability: A Hicksian Tradition). With the axiomatic turn of GET, there are hopes for finding relevant conditions of stability. On the one hand, substitutability remains a good guiding principle, while on the other hand, the first examples of instability are presented, making the finding of reasonable stability conditions more urgent (3. From Gross Substitutability to instability examples). The last time period in this story is much more uneasy and agitated. It is characterised by hidden pessimism and by difficulties in making substitutability a fruitful concept for stability theorems. Among other results, the SMD theorems came as a confirmation that the search for stability of the Walrasian tâtonnement is a dead-end. But as we will see, it is not the only result that played a role in the neglect of stability analysis (4. The end of a research program). As a conclusion, I provide an evaluation of the SMD results and of their consequences within the context of many other results (5. Concluding comments).
Stability and substitutability: a Hicksian tradition
In Value and Capital (1939), Hicks makes a systematic use of substitutes and complements to express stability conditions. He upholds a narrow link between stability and substitutability, giving the concept of substitutability an explanatory value for the stability of market systems and praising its ability to describe the main features of market interdependencies. This view would imprint the future of the search for stability conditions. I will first present Hicks's ideas on stability and substitutability (2.1 Stability and substitutability according to Hicks). Then, I show how a Hicksian tradition in GET was established in the 1940s and 1950s (2.2 A Hicksian tradition).
Stability and substitutability according to Hicks
Let us first recall some technical definitions. In Value and Capital, Hicks provides a definition of substitutes and complements on the basis of the [START_REF] Slutsky | Sulla teoria del bilancio del consumatore[END_REF] fundamental equation of value. The Hicks-Slutsky decomposition of the derivative of the demand for good i, x_i, with respect to the price of good j, p_j, is:
\frac{\partial x_i(p,r)}{\partial p_j} = \frac{\partial h_i(p,u)}{\partial p_j} - x_j \frac{\partial x_i(p,r)}{\partial r} \qquad (1)
with r the income of the agent, x_i(p, r) the Marshallian demand for good i, and h_i(p, u) the compensated (or Hicksian) demand for good i, where u is the (indirect) level of utility attainable with (p, r), denoted v(p, r).
From (1) we say that i and j are net substitutes, independent or net complements if the change in the compensated demand for i due to a change in p_j is positive, null, or negative:

\frac{\partial h_i(p, v(p,r))}{\partial p_j} \gtrless 0 \qquad (2)
From equation (1) we say that i and j are gross substitutes, independent or gross complements if the change in the Marshallian demand for i due to a change in p_j verifies

\frac{\partial x_i(p,r)}{\partial p_j} \gtrless 0 \qquad (3)
At an aggregate level, definitions (2) and (3) can be used for a general description of substitution between different markets, and equation (1) can serve to discuss the direction and strength of income effects.
As can be inferred from (1), two goods may be locally net substitutes (resp. net complements) and gross complements (resp. gross substitutes), depending on the direction and magnitude of the income effects in the Slutsky-Hicks decomposition. What is true at the individual level is also true at the aggregate level. Hence, as is well known, the symmetry property of Hicksian demand functions, \partial h_i/\partial p_j = \partial h_j/\partial p_i, does not carry over to Marshallian demands, except of course when income effects can be neglected.
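A standard textbook case, added here purely as an illustration (it is not discussed by Hicks), makes the distinction concrete. With Cobb-Douglas preferences $u(x)=\sum_i a_i \ln x_i$, $\sum_i a_i = 1$, the Marshallian demands are $x_i(p,r)=a_i r/p_i$, so $\partial x_i/\partial p_j = 0$ for $j \neq i$: all goods are gross independent. The Hicksian demands are $h_i(p,u)=(a_i/p_i)\,e(p,u)$, so $\partial h_i/\partial p_j = a_i a_j e(p,u)/(p_i p_j) > 0$: all goods are net substitutes. Equation (1) shows why: at $e(p,u)=r$ the income term $x_j\,\partial x_i/\partial r = a_i a_j r/(p_i p_j)$ exactly offsets the substitution term.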
In 1874, Walras had launched the idea of a sequential and iterative process-a tâtonnement-to model the price dynamics on competitive markets and to establish the possibility for such idealized markets to "discover" by groping the equilibrium, whose existence was theoretically assumed by the equality of equations and unknowns in the model. Walras would also connect the tâtonnement with some comparative static results. 1 In Value and Capital, Hicks reinstates Walrasian general equilibrium analysis, which had been deemed fruitless by Marshall. 2 This renewal of interest for general equilibrium, it is worth noting, arises precisely from the availability of new tools to analyze choice and demand, notably the Slutsky equation, hence also the distinction between income and substitution effects and the new definition of substitutes and complements built from it. 3 In Hicks's view, even more certainly than for Walras, there is no doubt that the law of supply and demand leads the economy to an equilibrium. Hicks follows Walras's reasoning on stability, with the aim of providing a precise mathematical account of it and of discussing with much more attention the effects of interdependencies between markets. Since the first part of the analysis proceeds in an exchange economy, the Slutsky equation then becomes:
\frac{\partial z_i(p,r)}{\partial p_j} = \frac{\partial h_i(p,u)}{\partial p_j} - z_j \frac{\partial x_i(p,r)}{\partial r} \qquad (4)
This leads to the well-known distinction between perfect and imperfect stability and to its mathematical treatment in the Appendix of Value and Capital. Consider the Jacobian matrix of the normalized system of n goods, that is, the matrix JZ = [z_ij(p*)], i, j ∈ {1, ..., n}, containing all the cross derivatives of the excess demand functions with respect to all prices (the price of good n + 1 being set equal to 1). Stability is perfect if the principal minors of JZ, calculated at the equilibrium p*, alternate in sign, the first one being negative. The system is imperfectly stable if only the last of these determinants respects the sign condition.
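For concreteness, the alternating-sign condition on the leading principal minors can be checked mechanically. The sketch below (Python/NumPy; the 3×3 Jacobian is an arbitrary illustrative matrix, not one taken from Hicks or from any economy discussed here) tests whether a normalized Jacobian satisfies Hicksian perfect stability in this sense:

import numpy as np

def is_hicksian(J, tol=1e-12):
    # True if the k-th leading principal minor of J has sign (-1)^k for every k,
    # i.e. the minors alternate in sign starting with a negative determinant.
    J = np.asarray(J, dtype=float)
    for k in range(1, J.shape[0] + 1):
        if (-1) ** k * np.linalg.det(J[:k, :k]) <= tol:
            return False
    return True

J = np.array([[-2.0, 0.5, 0.3],
              [0.4, -1.5, 0.2],
              [0.3, 0.6, -1.8]])
print(is_hicksian(J))   # True for this illustrative matrix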
Hicks's analysis proceeds from the generalization of the results obtained in a two-good economy. He thinks that, except for particular cases, income effects to buyers and sellers on each market should tend to compensate each other: Therefore, when dealing with problems of the stability of exchange, it is a reasonable method of approach to begin by assuming that income effects do cancel out, and then to inquire what difference it makes if there is a net income effect in one direction or the other. (Hicks, 1939, 64-65) Thus, actually, through this thought experiment, the Jacobian of the system is identical to the matrix of substitution effects (the Slutsky matrix), since the income effects on each market-i.e. associated with each price variation-are assumed to cancel out. And after a rather clumsy discussion about introducing income effects into the reasoning, Hicks comes to the following conclusion:
To sum up the negative but reassuring conclusions which we have derived from our discussion of stability. There is no doubt that the existence of stable systems of multiple exchange is entirely consistent with the laws of demand. It cannot, indeed, be proved a priori that a system of multiple exchange is necessarily stable. But the conditions of stability are quite easy conditions, so that it is quite reasonable to assume that they will be satisfied in almost any system with which we are likely to be concerned. The only possible ultimate source of instability is strong asymmetry in the income effects. A moderate degree of substitutability among the bulk of commodities will be sufficient to prevent this cause being effective. (Hicks, 1939, 72-73, emphasis mine)
What kind of substitutability is referred to here? That the goods are net substitutes to one another. Consequently, the argument goes, symmetrical income effects at the aggregate level will have only a weak effect compared with the aggregate substitution effect, so that the Jacobian matrix is approximately symmetric. In so doing, Hicks develops a descriptive and explicative point of view on stability, and substitutability is given a prominent role. Substitutability is entrusted with producing a stylised representation of the interdependencies between markets, likely to receive a validation a priori. Thus, the idea that substitutes are dominating over the system is regarded as a natural and virtuous property of the economic system.
… them as a relationship between two goods as regards a third one (or money). [START_REF] Slutsky | Sulla teoria del bilancio del consumatore[END_REF] did not provide a new definition, which, he suggested, would have been disconnected from human feelings.
In the wake of Samuelson's discarding of Hicks's mathematical treatment of stability, there has been a tendency to evaluate Hicks's analysis of stability exclusively from the standpoint of the mathematical apparatus of Value and Capital, i.e. from the perfect/imperfect stability distinction, with a view to pinpointing its wrong mathematical conclusions. Instead, for our story, it is worth insisting that Hicks's reasoning in the text provides insights about the importance of substitutability as a structuring device. The heart of Hicks's reasoning, actually, is a discussion of interdependencies in a three-good case.
The gist of the discursive argument about the stability of multiple markets in Value and Capital is in Chapter V, §§4-5, after Hicks's introduction of the distinction between perfect and imperfect stability. It is also to be noted that the perfect/imperfect stability distinction is meant to be powerful enough to consider cases of intertemporal non-clearing Keynesian equilibria.
Here, the main question is whether an intrinsically stable market (say, of good X) can be made unstable through reactions of price adjustments on other markets (themselves being out of equilibrium following the initial variation of the price p_X and the subsequent reallocations of budgets). The interactions between the markets for X and Y (T being the third, composite commodity) are first studied under the assumption that net income effects can be neglected. Hicks discusses the effects of the price elasticities of the excess demand for Y on the excess demands for X and T. He ends with the reassuring idea that in the three-good case, and even more so when the number of goods widens, cases of strong complementarity are rare and X will most of the time be "mildly substitutable" with most of the goods constitutive of the composite commodity T. 4 Hence, the whole discussion of the mathematical apparatus is conducted through the idea of neglecting asymmetric income effects, focusing on complementarity relations to deal with instability and on a reasonable spread of substitutability to ensure stability. This latter argument would be corrected in the second edition of Value and Capital. Our point, here, is that from a heuristic or interpretative standpoint, Hicks's overall discussion is biased not specifically by its mathematical treatment, which is constrained by focusing on symmetrical systems; it is also biased by the strong separation between the discussion of net complementarity and substitution on the one side and the discussion of income effects on the other side. Hicks's discursive focus on complementarity has two opposite effects. First, it introduces the language of substitutability as the prominent device to discuss stability issues (as will be the case again when discussing intertemporal equilibrium). Second, it isolates the analysis of substitutability from that of income effects, thus introducing a strong separation between arguments in terms of income effects and arguments in terms of substitutability to deal with stability analysis.
The establishment of a Hicksian tradition on stability analysis
Hicks's analysis of system stability was first challenged by Samuelson ([START_REF] Samuelson | The stability of equilibrium: comparative statics and dynamics[END_REF]; 1942; 1944), 5 who rejected his method and results. Samuelson's criticism was the starting point for a series of restatements of Hicks's intuitions by [START_REF] Lange | Price flexibility and employment[END_REF][START_REF] Lange | Complementarity and interrelations of shifts in demand[END_REF], [START_REF] Mosak | General-equilibrium theory in international trade[END_REF], [START_REF] Smithies | The stability of competitive equilibrium[END_REF] and [START_REF] Metzler | Stability of multiple markets: the hicks conditions[END_REF]. Hicks's views and intuitions were partially saved, and their usefulness for thinking about stability issues was pointed out. This led to establishing the language of substitutability as a heuristically fruitful concept for thinking about the stability of general equilibrium. However, Hicks's narrow view, which concentrated on net substitutability, was abandoned, and the analysis would now be conducted in terms of gross substitutes and complements.
[START_REF] Smithies | The stability of competitive equilibrium[END_REF], apparently independently of Samuelson, discussed the case of stability of a monopolistic price competition economy and arrived at different necessary and sufficient properties of the roots of the characteristic polynomial of his system, (which, by the way, followed a sequential process of adjustment): those roots should be less than unity in absolute value. Interestingly, he noted that the advantage of his method over Samuelson's is that his result "leads more readily to general economic interpretation than Mr. Samuelson's method" (Smithies, 1942, 266) As for Mosak, Lange and Metzler, their investigations on the stability of economic equilibrium circa 1942-1945 can be interpreted as a series of work aiming at developping a Hicksian method in the analysis of Keynesian economic ideas, focussing on the general equilibrium framework and promoting the idea that unemployment can result from intertemporal durable underemployment of some ressources. Those various contributions will deal with international trade, imperfect competition, monetary theory and financial behaviors to build on Hicks's intuitions. Passages dealing with the stability of general static equilibrium are occasions to adapt Hick's results to the modern treatment of the price adjustment process provided by Samuelson. Lange's (1944) Price Flexibility and Employment is probably the most representative account of those various attempts. He had already identified that the theory of complementarity would be a debated topic [START_REF] Lange | Complementarity and interrelations of shifts in demand[END_REF]. The main point here is that the interplay of markets depends partly on individual's behaviors towards money. Indeed, [START_REF] Lange | Complementarity and interrelations of shifts in demand[END_REF] provided a systematic analysis of complementarity relationships at the market level. He refrains from Hicks's tendency to identifiy complementarity as a possible cause of instability. Lange also introduces a notion of partial stability of order m, expressing the fact that a system can be stable for a given subset of m prices that are adjusted (m < n). He discusses Hick's dynamic stability conditions and notes that since Samuelson leaves out the derivative of the function H in the characteristic determinant, it is tantamount to assuming that the flexibility of all prices is the same. He highlights that Hicksian stability makes sense in case when the Jacobian (the characteristic determinant) is symmetric: then, all roots are real, and the Hicksian conditions are necessary and sufficient for perfect stability.6 . The meaning of symmetry, he goes on, is that "the marginal effect of a change in the price p s upon the speed of adjustment of the price p r equals the marginal effect of a change in the price p r upon the speed of adjustment of the price p s " (Lange, 1944, 98). Thus, stability analysis does not require an equal speed of adjustment on each market, but that the effects of a price change (dp r ) upon the speed of adjustment on another market dps dt are symmetric. Mosak's exposition of the theory of stability of the equilibrium in General-Equilibrium Theory in International Trade also discusses the flaws of Hick's stability analysis. Its main merit, in this respect, is to operate a shift in the interpretation of stability. 
Instead of focusing on the symmetry properties of the Jacobian, the analysis of stability now revolves around the properties of excess demands, which can be conducted either in terms of gross-substitutability vs gross-complementarity or in terms of asymmetrical vs symmetrical income effects. 7 "... If the rate of change of consumption with respect to income is the same for all individuals then this net income effect will be zero. In order that the net income effect should be at all large, ∂x_s/∂r must be considerably different for buyers of x_s from what it is for sellers. It is not too unreasonable to assume therefore that ordinarily the income effects will not be so large as to render the system unstable" (Mosak, 1944, p.42).
Mosak would also mention the assumption that usually, goods that are net substitutes are also gross substitutes (Mosak, 1944, p.45). Metzler (1945) established that under gross substitutability (GS), the conditions of Hicksian stability are the same as the conditions for true dynamic stability. Metzler insists that Hicks's analysis of stability aims at giving some ground to comparative statics results by providing a theory of price dynamics when a system is out of equilibrium. The conclusion that emerges from Samuelson's results is that "Hicksian stability is only remotely connected with true dynamic stability" (Metzler, 1945, 279). However, Metzler argues, "the Hicks conditions are highly useful despite their lack of generality" (Metzler, 1945, 279):
In the first place ... Hicks conditions of perfect stability are necessary if stability is to be independent of ... price responsiveness. Second, and more important, in a certain class of market systems Hicksian perfect stability is both necessary and sufficient for true dynamic stability. In particular, if all commodities are gross substitutes, the conditions of true dynamic stability are identical with the Hicks condition of perfect stability. (Metzler, 1945, 279-280).
The idea of taking into account speeds of adjustment on each market is congenial to Samuelson's dynamic stability conditions, and their neglect was further identified as a defect of Hicks's analysis by Lange (1944). This point is quite interesting since it illustrates how some properties of a mathematical tool can be entrusted with important descriptive qualities. Imposing that stability should not be independent of the speeds of adjustment may be taken as a gain in generality in some sense, but to some it clearly was not, and it would appear as an unnecessary constraint. 8 Given that knowledge of the speeds of adjustment is likely to depend upon specific institutional properties of an economic system, it is desirable to formulate stability conditions in terms that are independent of such speeds. For all that, the fact that the Hicks conditions of perfect stability are necessary in this case does not make them sufficient for stability.
7 Stability can be destroyed only if the market income effects are sufficiently large to overcome the relationships which prevail between the substitution terms. It cannot be destroyed by any possible degree of complementarity.
8 Smithies (1942) analyses the stability of a monopolistically competitive framework, starting from the profit maximization conditions of n producers, each with their own market demand expectations. Each producer changes its price according to a continuous adjustment process proposed by Lange ("Formen der Angebotsanpassung und wirtschaftliches Gleichgewicht"), taking into account the difference between the last period's price expectation and the last period's price.
At least, when the assumption of gross substitutability is made, Hicksian stability is necessary and sufficient for true dynamic stability. Metzler agrees that this property may not be useful since "almost all markets have some degree of complementarity" (Metzler, 1945, 291). Hence ignoring gross complementarity in the system can lead to "serious errors" (Metzler, 1945, 284). However, Metzler's feeling is in tune with other researchers interested in stability analysis, and it upholds the interest of Hicks's fundamental intuitions:
It is natural to speculate about the usefulness of these conditions for other classes of markets as well. The analysis presented above does not preclude the possibility that the Hicks conditions may be identical with the true dynamic conditions for certain classes of markets in which some goods are complementary. Indeed, Samuelson has previously demonstrated one such case [Samuelson, 1941, 111]; ... Further investigation may reveal other cases of a similar nature. In any event, an investigation which relates the true stability conditions to the minors of the static system will be highly useful, whether or not the final results are in accord with the Hicks conditions. (Metzler, 1945, 292) In a few words, the main contribution of the Metzler analysis (together with those mentioned above) is to introduce once more the concept of substitutability into the analysis of stability, and to focus on the interpretative content of the analysis. The Hicksian tradition is reformulated around the gross substitutability hypothesis. And this hypothesis is taken as a fruitful point of departure.
To conclude on this first group of works on stability, one can say that the concept of substitutability was worked out in the 1930s and 1940s so that it could be used to describe and to interpret the main stability properties of an economic system. So, beyond the mathematics of stability, there is an interpretative content and a "positive heuristic" attached to substitutability. It is carried first by the idea that substitutes are good for stability (Hicks), and then, following Samuelson's criticism, by the idea that income effects should not be so distributed as to disturb the stabilizing properties of symmetric systems, which implies in turn considering that net substitution will dominate over income effects. Hence, even if a theory of aggregate income effects is needed, most of the arguments and the dynamics of research would take gross-substitutability as the starting point for further results.
This view was so widely shared by the researchers involved in GET at the time that Newman could write, more than a decade later:
A good deal of the work on the analysis of stability has been directed towards establishing intuitively reasonable - or at least readily comprehensible - conditions on the elements of A, that will ensure stability. (Newman, 1959, 3)

Research in this direction had been under way since the beginning of the 1950s. It was notably explored by Morishima (1952), who formulated a stability theorem when some complementarity is introduced into the model. A Morishima system is characterized by a complementarity-substitutability chain hypothesis (CS): substitutes of substitutes (and complements of complements) are substitutes, and substitutes of complements (and complements of substitutes) are complements.
Morishima derives a number of theorems from such a system, notably that its dynamic stability conditions are equivalent to the Hicksian conditions for perfect stability. This result, in turn, was important for establishing the interest of Hicksian stability conditions despite Samuelson's criticism. However, Morishima's analysis introduced new, unexpected constraints regarding the choice of a numéraire and would lead to further comments in the following decades. Now, by the end of the 1950s, the idea that stability should be analyzed through the properties of the matrix of derivatives of the excess demand functions with respect to prices was well established. Moreover, research focused on systems implying gross substitutability or on systems whose structure could be described through properties of the [z_ij] involving substitutability. The existence and optimality theorems of the 1950s put stability issues in the background for a few years, only for them to resurface at the end of the 1950s in a more serious form. Now, the question is: what would be left of this analysis of stability after the axiomatic turn in general equilibrium theory, once the issue of global stability became central?
3 From "gross-substitutability" to instability examples (1958)(1959)(1960)(1961)(1962)(1963) The turn of the 1960s represents the heyday of the research on the stability of a system of interdependent markets connected through a simple dynamics of price adjustments, the Walrasian tâtonnement. At this moment in time, the use of the langage of substitutablity is structuring research towards theorems of stability. Morishima even proclaimed that "Professor Hicks is the pioneer who prepared the way to a new economic territory-a system in which all goods are gross substitutes for each other" (Morishima, 1960, 195). Indeed, work on stability of general equilibrium was rather limited in the 1950s, researchers being more focussed on existence and welfare theorems, and there was a sudden boom by the very end of the 1950s. The time span between 1958 and 1963 is fundamental both for the structuring of the research on stability and the importance of the language of subtitutability as the main interpretative device to think about stability. In this section, I would like to put to the foreground the mode of development of general equilibrium analysis after Arrow-Debreu-McKenzie theorems of existence. I will begin by enhancing the intuitive privilege that was attributed to the stability hypothesis, and as a consequence, the interest for the gross-substitutability assumption (2.1. "Gross substitutability" as a reference assumption). Then, I will focus on instability examples and the way the results have been received by theoreticians (2.2 Scarf and Gale's counter-examples). The discussion of alternative sufficient conditions of stability are then discussed (2.3 Gross Substitutability, Diagonal Dominance and WARP)
"Gross substitutability" as a reference assumption
Most of the work on stability in the fifties and sixties is centered on the hypothesis of gross substitutability. It is a sufficient hypothesis for uniqueness of equilibrium (Arrow and Hurwicz, 1958). It is also the hypothesis with which Arrow, Block and Hurwicz established the global stability of the tâtonnement in 1959. This result was presented as a confirmation of the importance for stability of substitution among goods. Let's make a short digression on the status of concepts and hypotheses within the axiomatic phase of general equilibrium theory. Axiomatisation is usually at odds with the interpretative content of concepts and hypotheses (Debreu, 1986; Ingrao and Israel, 1990). The question is thus whether the heuristic properties of substitutability should remain relevant in this context. The answer is yes. Leaving aside the relevance of the tâtonnement as a descriptive tool, the fact is that most of the theoreticians, I mean those who were interested in the work on stability, tended to consider that the concepts and assumptions used should have some heuristic properties and descriptive qualities. This aspect of the work on stability, compared with other fields of general equilibrium theory, is hardly ever underlined (Hands, 2016). In any case, it is certainly a key to studying the development of stability analysis and to understanding the reactions of the main protagonists. Otherwise stated, everything happens as if the descriptive content of general equilibrium lay not only at the level of the dynamic process but also at the level of the properties of the excess demand functions giving stability. In this sense, substitutability plays a heuristic role in the stability analysis, in conformity with Hicks's ideas. It is also to be mentioned that some theoreticians have always privileged a use of axiomatics bounded by the constraint of providing interpretable theorems. This is exemplified in Arrow and Hahn (1971) and it can be traced back to Abraham Wald (1936). The assumption of gross-substitutability, as such, could appear to any economist with a solid background in mathematics as the most natural assumption with which to obtain global or local stability of the price adjustment process. Indeed, GS appears in many studies on dynamic stability in the late 1950s. Let's mention Hahn (1958), Negishi (1958), McKenzie (1958), Arrow's "Some remarks on the equilibria of economic systems", and the now classical Arrow and Hurwicz (1958) and Arrow, Block and Hurwicz (1959) articles. 9 Arrow and Hurwicz (1958) testifies to the optimistic flavor of the time.
Under GS, homogeneity of the demand functions and Walras's Law, they show that the tâtonnement process is globally stable. The proof makes use of Lyapunov's second method for the study of dynamical systems. In this article, they also show that certain kinds of complementarity relations are logically impossible within the framework of a Walrasian economy. This is taken as reducing the scope for instability:
[The] theorem . . . suggests the possibility that complementarity situations which might upset stability may be incompatible with the basic assumptions of the competitive process. (Arrow and Hurwicz, 1958, 550)

At the same time, the gross substitutability assumption is seen as not realistic. But gross substitutability is after all nothing more than a sufficient condition for stability, and the field of investigation seems to be open to less stringent hypotheses introducing complementarity. So, during the axiomatic turn, there is a slight epistemological shift in stability analysis. On the one hand, there is still the Hicksian idea that substitutes are good for stability, but it is quite clear that substitutes, as opposed to complementary goods, will not do all the work, and that the task will not be easy to achieve. The fact is that a generalization of the gross substitutability assumption (weak gross substitutability) was not that easy to obtain. On the other hand, it is also clear that substitutability is still regarded as the most important concept with which to express stability conditions and to describe the structural properties of an economy. Neither diagonal dominance nor the weak axiom of revealed preferences in the aggregate attracted that much interest (see below).
Through this theorem, Arrow, Block and Hurwicz were confirming the importance of GS for global stability, after earlier results on local stability (Hahn, 1958; Negishi, 1958). Arrow and Hurwicz (1958) had already provided a proof of this theorem in a three-good economy. Even if GS could appear as an ad hoc assumption, given its strong mathematical implications for the price path, it was nevertheless regarded as a central assumption and a relevant and promising starting point for further inquiries. Other aspects of the Arrow, Block and Hurwicz contribution were regarded as strong results, notably the fact that global stability was obtained both in an economy with a numéraire and in one without (the normalized and non-normalized cases). Moreover, the global stability result did not require the price adjustment process to be linear: any sign-preserving adjustment was accepted.
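For readers who want the mechanics behind this global stability claim, the Lyapunov argument can be sketched in a few lines. This is a compressed reconstruction rather than the authors' own exposition, and the final inequality is the lemma that Arrow, Block and Hurwicz derive from gross substitutability together with homogeneity. Take the simple process ṗ_i = z_i(p) and the candidate function

    V(p) = Σ_i (p_i - p*_i)²   (squared distance to an equilibrium p*).

Along a trajectory, dV/dt = 2 Σ_i (p_i - p*_i) z_i(p) = -2 Σ_i p*_i z_i(p), where the last step uses Walras's Law Σ_i p_i z_i(p) = 0. Under gross substitutability and homogeneity of degree zero, Σ_i p*_i z_i(p) > 0 at every non-equilibrium price vector, so V decreases along every trajectory and the price path converges.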
On this occasion, we see that various constraints are likely to operate on the judgment about the quality or the importance of a result. It turns out that the superiority of having a theorem independent of the choice of a numéraire is something that could appear as justified by the search for the greatest generality, while its genuine significance from an interpretative point of view was not discussed or debated.
As a consequence of this optimism and of the heuristic content of substitutability, one can understand the situation of the work on stability at the end of the 1950s. The idea that it would be necessary to find stable systems including complementarity was clearly identified. Enthoven and Arrow (1956) show that if A is stable, then DA is stable if and only if the diagonal elements of D are all positive. They address the limits of such a model:
In any actual economy, however, we must be prepared to find substantial, asymmetrical income effects and a goodly sprinkling of gross complementarity. It is desirable, therefore, to try to find other classes of matrices about which useful statements about stability can be made. (Enthoven and Arrow, 1956, 453)

For all that, due to its structuring role, there is a kind of benevolence towards the gross substitutability assumption. It falls to those who are unsatisfied to prove that gross substitutability is not appealing, and that the concept of substitutability may not be enough to study stability. What makes this story interesting is that counter-examples of unstable economies would arrive a few months later.
Scarf and Gale's counter-examples
The two important contributions of Scarf (1960) and Gale (1963) would shift the debate on stability. I will not enter into the details of their constructions here. To go straight to the point of my analysis, they construct general equilibrium models with three goods, based on individually rational agents, such that the tâtonnement process of the economy does not converge to the unique equilibrium. Scarf's example involves complementarities between two goods, and asymmetrical income effects. Scarf comments on his results, underlining that instability comes from pathological excess demand functions. Scarf's attitude towards this result is ambiguous. On the one hand, he asserts that "Though it is difficult to characterise precisely those markets which are unstable, it seems clear that instability is a relatively common phenomenon" (Scarf, 1960, 160). On the other hand, he gives some possible objections to the empirical relevance of his model: As a final interpretation, it might be argued that the types and diversities of complementarities exhibited in this paper do not appear in reality, and that only relatively similar utility functions should be postulated, and also that some restrictions should be placed on the distribution of initial holdings. This view may be substantiated by the known fact that if all the individuals are identical in both their utility functions and initial holdings, then global stability obtains. (Scarf, 1960, 160-161) Scarf's comment shows, negatively, how the language of substitutability makes sense of results, be they positive or negative. The presence of complementarity in the system is a guarantee of the descriptive relevance of the model. And Scarf goes even further in suggesting that complementarity may be a cause of instability while a sufficient degree of substitutability may ensure stability.
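To see concretely what Scarf-type non-convergence looks like, here is a small numerical sketch in Python of the version of Scarf's economy most often reproduced in later expositions: three goods, three consumers, consumer i holding one unit of good i and treating goods i and i+1 as perfect complements. The excess demand formula below is my reconstruction of that standard presentation, not Scarf's original notation; with these demands the tâtonnement ṗ = z(p) circles around the equilibrium ray through (1, 1, 1) instead of approaching it.

    import numpy as np

    def excess_demand(p):
        # Consumer i owns one unit of good i and consumes goods i and i+1
        # in equal amounts (perfect complements), spending its income p[i].
        n = 3
        z = np.zeros(n)
        for i in range(n):
            q = p[i] / (p[i] + p[(i + 1) % n])   # consumer i's demand for goods i and i+1
            z[i] += q - 1.0                      # own demand for good i minus endowment
            z[(i + 1) % n] += q                  # consumer i's demand for good i+1
        return z

    p = np.array([1.2, 0.9, 1.0])                # start away from the equilibrium ray
    dt = 0.001
    for _ in range(200000):                      # Euler integration of dp/dt = z(p)
        p += dt * excess_demand(p)

    print(p)                                     # not close to the (1, 1, 1) ray: the path cycles
    print(np.prod(p), np.sum(p ** 2))            # p1*p2*p3 and the sum of squares are (nearly) conserved

The two printed invariants are what keep the trajectory on a closed orbit: Walras's Law freezes Σ p_i² under ṗ = z(p), and for these particular demands the product of the prices is conserved as well, so a trajectory starting off the equilibrium ray cannot converge to it.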
As for Gale (1963, 8), he would insist on Giffen goods to explain the instability examples obtained. In line with this tendency to entrust substitutability with an explanatory power, the same kind of interpretation can be found in Negishi (1962) and also in Quirk and Saposnik (1968, 191), who are of the opinion that the stability of a tâtonnement "is closely tied up with the absence of strongly inferior goods".
The different reasons invoked to comment on the instability examples should not be overplayed. They also stem from the tendency to treat analytical cases as disconnected. The appearance of a Giffen effect is linked to situations in which substitution is difficult, and it is not independent of the specific situation of some agents in terms of initial endowments compared with other agents in the economy.
Nevertheless, the Scarf and Gale examples are received with a kind of perplexity. Everything happens as if their models were singular models, and thus as if they did not affect the general idea that systems including enough substitutability may be stable. At the same time, it is now felt urgent to find less stringent conditions, including complementarity, that guarantee stability. At this moment in time, the interpretative content of substitutability is at stake. With Scarf's and Gale's examples, the situation is reversed. The suspicion now clearly falls on stable systems, and it is the task of all those who have a positive a priori in favor of stability to produce examples of stable systems including complementarity relations. In fact, Scarf's results make it possible to question the heuristic content of substitutability.
Actually, by identifying many possible sources of instability, relating to the spread of initial endowments, to the variety of preferences, and to their implications for demand, the interpretative and descriptive content of substitutability loses some ground. It does not seem possible to express the characteristics of an economy, and its properties for stability, only in terms of substitutes and complements. Nevertheless, substitutability remains the main concept with which it is thinkable to search for stability conditions. As a proof of this, it is remarkable that neither the diagonal dominance hypothesis nor the weak axiom of revealed preference would be serious candidates for serving as a starting point to think about stability, at least in those years.
Discussing Diagonal Dominance and WARP in the aggregate
The general idea that we uphold here is of a methodological nature. Whereas some authors would tend to apply an external set of criteria for the success and failure of a research program, we would like to rely on some complementary criteria to appraise the history of the research done on stability. Research on stability was structured around a set of soft constraints in terms of the methods and tools to be used, which are regarded as more fundamentally in tune with the spirit of the general equilibrium research program. To name just a few, such soft constraints concern the choice of the price adjustment process, the interpretative and descriptive potential of conditions for stability, the relative importance of global vs local stability results, the search for results that are independent of the choice of a numéraire, and a tendency to prefer models implying uniqueness of equilibrium. All those constraints structure the expectations and valuations of the results obtained. So far, we have seen that until the beginning of the 1970s, substitutability was able to meet a number of constraints and to offer a good starting point for a descriptive interpretation of GET. We have to discuss more in depth why substitutability was privileged compared with the assumptions of Diagonal Dominance and the Weak Axiom of Revealed Preferences in the aggregate.
The research programme on the stability of general equilibrium is very specific within GET, and it also has consequences for uniqueness and comparative statics, because it is disconnected from direct microfoundations of the statements. Sets of conditions proposed as sufficient conditions for stability are related to market properties, and the study of the microeconomic foundations of those properties is postponed. Meanwhile, stability conditions are valued according to their heuristic content or likelihood. Actually, no condition on the properties of excess demands would appear as a promising alternative before the end of the 1950s. One such alternative is the Diagonal Dominance condition. It states that the terms of the matrix JZ are such that (DD):
    z_ii < 0   and   |z_jj| > Σ_{i=1, i≠j}^{n} |z_ij|,   j = 1, 2, ..., n.   (5)
This condition appeared first in Newman (1959) but was independently explored by Arrow and Hahn. 10 This condition states that the effect of a price change of good i on the excess demand for good i must be negative and greater in absolute value than the sum of the absolute values of the indirect effects of the same variation in the price of good i on the excess demands of all other goods. 11
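One reason this family of conditions matters for Samuelson-type stability is that a matrix with a negative, dominant diagonal has all of its eigenvalues in the left half-plane (by Gershgorin's circle theorem), which is exactly the local criterion discussed earlier. The snippet below is only an illustrative numerical check on an arbitrary 3x3 example of my own, not a matrix taken from any of the papers discussed here.

    import numpy as np

    # A made-up Jacobian of excess demands with a negative, dominant diagonal:
    # each |z_ii| exceeds the sum of the other absolute entries in its row and in its column.
    Z = np.array([[-5.0,  1.0,  2.0],
                  [ 1.5, -4.0,  1.0],
                  [ 1.0,  2.0, -6.0]])

    row_dom = all(abs(Z[i, i]) > sum(abs(Z[i, j]) for j in range(3) if j != i) for i in range(3))
    col_dom = all(abs(Z[j, j]) > sum(abs(Z[i, j]) for i in range(3) if i != j) for j in range(3))
    eigs = np.linalg.eigvals(Z)

    print(row_dom, col_dom)                  # True True: strongly dominant diagonal
    print(eigs)                              # every real part is negative
    print(all(e.real < 0 for e in eigs))     # True: the linearized tatonnement is locally stable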
In some sense, DD has much to recommend it. It is less stringent than gross substitutability, because (GS) implies (DD). But in practice, no utility function has been found that implies diagonal dominance without also implying gross substitutability. Moreover, only certain forms of diagonal dominance guarantee stability. It therefore seems easier to provide less stringent conditions by taking gross substitutability as a starting point that can be amended and weakened than by taking diagonal dominance as a starting point. Other reasons can also explain why DD was not taken as an interesting basis for research on stability in those years. Actually, one can figure out the economic content of DD: it expresses the idea that the own-price effect on a market dominates the whole set of indirect effects from other prices. On second thought, it turns out that this idea of domination involves quantitative properties that were better avoided. At least, it is in those terms that general equilibrium theorists conceived of the search for general theorems. In this respect, as long as one could hope to find satisfactory results with qualitative assumptions only, quantitative constraints, interpretable as they may be, were not favored. Moreover, it does not seem easy to use DD as a starting point in the search for less stringent assumptions. For instance, Arrow and Hahn would point out that it has a "Marshallian flavor" (Arrow and Hahn, 1971, 242) and that it does not carry with it enough heuristic power. Such views on DD would change later on, as the set of constraints weakened.
What about WARP and stability? It had been known since Wald that WARP in the aggregate is a sufficient condition for uniqueness of equilibrium. Arrow, Block and Hurwicz (1959) showed that WARP is a sufficient condition for local stability. Actually, WARP is a necessary and sufficient condition for the uniqueness of equilibrium in certain cases. GS thus implies WARP, with the advantage that the GS property is preserved through aggregation while this is not the case for WARP (in a more than three-good case). Actually, those relationships would not be discussed in the 1960s; hence DD appeared as the only alternative starting point for discussing stability, even though research on stability was somehow disconnected from the immediate search for microfoundations of the assumptions made on the properties of excess demand functions.
10 "This condition appears to be new in the literature on general equilibrium, although Dr. Frank Hahn has informed me that he and Kenneth Arrow have used it in some as yet unpublished work. It is common in the mathematical literature." (Newman, 1959, 4)
11 An alternative statement of (DD), denoted (DD'), is that the own-price effect z_ii be greater in absolute value than the sum of all the cross-price effects from the variation of the prices of the other goods:
    |z_ii| > Σ_{j=1, j≠i}^{n} |z_ij|,   i = 1, 2, ..., n.
A matrix JZ satisfying both (DD) and (DD') has a strongly dominant diagonal. Newman also mentions another set of stability conditions based on (DD), the quasi-dominant main diagonal, that was proposed by Hahn and Solow.

4 The end of a research programme?
So far, I have indicated how a general framework for interpreting the work on the stability of an exchange economy was constructed. As was seen in the first section, the idea of searching for sufficient conditions introducing complementarity pre-existed the Arrow-Block-Hurwicz result and the Scarf and Gale counter-examples. In this section, I want to focus on two different kinds of obstacles that were put on the road. Firstly, from an internal point of view, all the attempts that were made to generalise the gross substitutability assumption did not give many results. What is clear from Scarf and Gale is that it was no longer possible to introduce complementarity arbitrarily (4.1 The impossible generalisation). From an external point of view, then, some work in the seventies and eighties radically questioned the research programme as it had been formulated by Walras (4.2 Through the looking-glass, and what Sonnenschein, Mantel, Debreu, Smale and others found there).
The impossible generalization
This is an important point for my thesis. The period immediately following the Scarf results shows that researchers did not have much hope of obtaining much better than the Arrow, Block and Hurwicz (1959) theorem. It is the true moment when the heuristic of substitutability failed and faded away within one decade. It is striking, for instance, how this set of conditions is treated in Arrow and Hahn's (1971) General Competitive Analysis, a book which was (and still is in many respects) the state of the art of GET.
Actually, as we have mentioned before, the hope of finding stability results with complementarity relations had been on the agenda from 1945 on (Metzler), and it had just been confirmed by the results on global stability. From the beginning of the fifties on, Morishima was working on this agenda (Morishima, 1952, 1954, and his subsequent papers "On the three Hicksian laws of comparative statics" and "A generalization of the gross substitute system"). Morishima's idea was to introduce complementarity relations between certain goods. He thus proposed an economy whose excess demands would be such that goods could be grouped together so that all the substitutes of substitutes are substitutes to each other, and all the complements of complements are complements to each other. In the same spirit, McKenzie (1958) established the dynamic stability of an exchange economy in which certain sums of the partial derivatives of the excess demands with respect to prices are positive, which allows a certain amount of complementarity to be introduced into the system. 12 The above discussion on the relative merits of GS, DD and WARP is thus conditional on the kind of constraints that the theoretician takes for granted. This would certainly have an impact on further research (section 4). It was already apparent here and there in the 1950s and 1960s. For instance, McKenzie (1958) offers one of the first studies in which GS implies global stability of the unique equilibrium when there is no numéraire in the model. He also provides a model with a numéraire which makes it possible to consider the stability of a system in which certain weighted sums of the partial derivatives of excess demands with respect to prices are positive, a "natural generalization". McKenzie's comment shows that the descriptive potential of an assumption is linked with the choice of constraints. Indeed, the case in which some complementarity is introduced into the model leads only to a local stability result and acknowledges the multiplicity of equilibria, a situation which he finds descriptively adequate, i.e. in accordance with his own ideas about the stylized facts: "In this case, one must be content with a local stability theorem, but one hardly need apologize for that. Global stability is not to be expected in general" (McKenzie, 1958, 606). Finally, Nikaido, in "Generalized gross substitute system and extremization", proposed the generalised gross substitutability assumption, i.e. that the sum of the terms placed symmetrically with respect to the diagonal of the Jacobian be positive. In such a system, if tea is a gross complement to sugar, then sugar must be a gross substitute to tea. Some remarks on all these developments are in order. Firstly, all the results obtained have strong limitations relative to the programme of general equilibrium theory. They are not independent of the choice of the numéraire good and they are valid only locally. For example, the Morishima case was shown to be incompatible with a Walrasian economy, because of the properties of the numéraire commodity. That stability should be invariant under a change of numéraire seemed "reasonable" (Newman, 1959, 4).
Ohyama (1972) added a condition on the substitution properties of the numéraire with respect to the other goods to ensure stability. Secondly, most of the results I have mentioned rest on quantitative constraints on excess demand functions, in the sense that they involve comparing the relative strength of partial derivatives. From this point of view, the diagonal dominance hypothesis goes in the same direction.
All these limitations illustrate the doubts that arose regarding the hopes of finding a true generalisation of the gross substitutability hypothesis. Indeed, this kind of quantitative constraint is something general equilibrium theorists would prefer to dispense with. At least this general view was not debated. Meanwhile, the heuristic of substitutability is shrinking. The question now is to discuss the relevance of quantitative and structural restrictions on excess demand functions. The change in the spirit and in the state of mind of the theoreticians can be clearly felt. Just to give a quotation from Quirk (1970), who focuses specifically on the limits of a purely qualitative approach to GET: "In contrast to the Arrow-Hurwicz results, here we do not prove instability but instead show that stability cannot be proved from the qualitative properties of the competitive model alone, . . . except in the gross substitute case" (Quirk, 1970, 358). It is a very clear way of renouncing the establishment of general properties compatible with stability. This quite naturally reduces the analytical appeal of substitutability as a single comprehensive tool to deal with stability issues (see also Ohyama (1972, 202)). Finally, from this moment on, theoreticians realised to what degree the gross substitutability assumption was specific, as the only qualitative hypothesis on excess demand functions guaranteeing the stability of a tâtonnement process. Of course, it may be that some mathematical economists were perfectly aware that GS is too ad hoc a mathematical assumption for obtaining stability, but still its specificity as a qualitative assumption was much better acknowledged by the end of the 1960s and the beginning of the 1970s.
A word is in order regarding the presentation of the sufficient conditions for stability in Arrow and Hahn's (1971) General Competitive Analysis. The presentation of stability results in surveys such as Newman (1959) and Negishi (1962) clearly transmitted the view of a progressive program of research, with a need to understand the links between different sets of conditions. Yet Negishi introduced a more temperate view, both presenting the GS assumption as concentrating the essence of the knowledge on stability and pointing out that, due to the instability examples, theoreticians would do better to concentrate on alternative adjustment processes (such as non-tâtonnement processes). 13 To sum up, at the beginning of the seventies, the work on stability gives a very pessimistic, and even negative, answer to the agenda originally formulated by Metzler and then by Arrow, Block and Hurwicz. Two kinds of results would come and take away a bit more of the interest in this kind of work: the well-known Sonnenschein-Mantel-Debreu theorem on the one hand, and the Smale-Saari-Simon results on the other hand.

4.2 Through the looking-glass, and what Sonnenschein, Mantel, Debreu, Smale and others found there

Already in the thirties it was known that some properties of individual demand behavior would not in general be preserved at the aggregate level (see for example Schultz (1938) and Hicks (1939)). Clearly, there was a gap between weak restrictions on the demand side and stringent sufficient conditions for stability. So, while the work on stability was progressing only very slowly, and not with the results that were expected, a group of theoreticians was engaged in taking the problem from the other side, that is, from the hypothesis of individual maximising behaviour:
"Beyond Walras' Identity and Continuity, that literature makes no use of the fact that community demand is derived by summing the maximizing actions of agents" (Sonnenschein, 1973, 353) If it is not possible to demonstrate that an economic system with complementarity relations among markets is stable, is it not possible to show that any general equilibrium system based on rational agents exhibits some properties regarding the excess demand functions. This would be at least a way to "measure" the gap between what the logic of GET gives us and what we expect from it in order to arrive at stability theorems. The answer to this question is well known. It is a series of negative results known as Sonnenschein-Mantel-Debreu theorems or results 14 . Market excess demand generated by an arbitrary spread of preferences and initial endowments will exhibit no other properties than Walras Law and the Homogeneity of 13 Negishi's reaction to Scarf examples is interesting in its way to put the emphasis on the choice of the "computing device" and not on the interpretative content of stability conditions. This is another instance of the fact that the proper balance between different attitudes regarding the research program was not discussed and can only be grasped here and there from passing remarks: "We must admit that the tâtonnement process is not perfectly reliable as a computing device to solve the system of equations for general economic equilibrium. It is possible to interprete these instability examples as showing that the difficulty is essentially due to the assumption of tâtonnement (no trade out of equilibrium) and to conclude that the tâtonnement process does not provide a correct representation of the dynamics of markets." (Negishi (1962, 658-9))
Otherwise stated, given an arbitrary set of excess demands, one can always construct an economy that will produce those excess demands. The question raised by Sonnenschein, Mantel, Debreu and others goes against the usual stream of investigation concerning stability. But it is the most natural stream in terms of the individualistic methodological foundations of the general equilibrium program. Nevertheless, this result raised some perplexity in the field of econometrics. After all, the distributions of endowments and of preferences allowing for such arbitrary excess demand in the aggregate may well be as unrealistic as (or even more unrealistic than) the ones generating a representative agent (Deaton, 1975). Kirman and Koch (1986) showed that the class of excess demands would not be restricted even if the agents had the same preference relations and collinear endowments. To improve further on the constraints would mean constructing a representative agent. So, the SMD result would imply that Giffen goods are quite "normal" goods in a general equilibrium framework, and following Scarf's and Gale's conclusions, "instability" would be a common feature of economic systems.
The SMD theorem thus further reduces the relevance of quantitative restrictions that would yield stability. The change in the spirit of the economists has been portrayed by Mantel:
"Another field in which new answers are obtained is that of stability of multimarket equilibrium. It is not so long ago that the optimistic view that the usual price adjustment process for competitive economies is, as a rule, stable, could be found-an outstanding representative is that of [START_REF] Arrow | On the stability of the competitive equilibrium[END_REF]. Counterexamples with economies with a single unstable equilibrium by [START_REF] Scarf | Some examples of global instability of the competitive equilibrium[END_REF] and [START_REF] Gale | A note on global instability of competitive equilibrium[END_REF] had a sobering effect, without destroying the impression that the competitive pricing processes show some kind of inherent stability. Here the question arises whether such counterexamples are likely, or whether they are just unlikely exceptions" (Mantel et al., 1977, 112) After the SMD results, Scarf and Gale counterexamples could no longer be regarded as improbable, if the excess demand should have arbitrary properties. But from a historical point of view, one must keep in mind that there was a twelve years gap between the reception of the SMD results and the strengthening of thsee results by [START_REF] Kirman | Market excess demand in exchange economies with identical preferences and collinear endowments[END_REF]. What happened during that time span is also very fruitful for our inquiry. For all those who were discouraged by the turn of events, for those who had only a poor faith in the possibility to find satisfactory theorems, the Scarf counterexamples were a starting point for something else. We have seen that Scarf himself felt uncomfortable with the instability result, and that he felt that some disturbing cause of instability may have been arbitrarily introduced in the model. This was the starting point for an inquiry into dynamic systems and algorithmic computation of equilibrium [START_REF] Scarf | The computation of economic equilibria[END_REF]. In this field of research, Steve Smale endeavored to cope with the question of stability. His purely mathematical look at the subject kept the interpretative content outside, and he readily understood that in general equilibrium "complexity keeps us from analysing very far ahead" (Smale,976c,290). Rather than concentrating his reproaches on the descriptive content of the tâtonnement process and on the stability conditions that were found, Smale tackles another question, quite different from that of Sonnenschein, Mantel and Debreu. If equilibrium exists, how is this equilibrium reached? After [START_REF] Scarf | The computation of economic equilibria[END_REF], [START_REF] Smale | A convergent process of price adjustment and global newton methods[END_REF] will found a dynamic process much more complex than Walrasian tâtonnement which allows finding the equilibrium, for any arbitrary structure of the excess demand functions. This process, the Global Newton method, is a generalisation of a classical algorithm of computation of equilibrium. In this process, the variations in the prices on each market dp i dt will not depend solely on the sign of the excess demand z i (p) on this market, but also on the excess demands on other markets. This dynamic process is Dz(p) dp dt = -λz(p) with λ having the same sign as the determinant of the Jacobian. 
Smale's shift in the way of attacking the issue of stability is of interest when confronted with the constraints that general equilibrium theorists had put on the research program, focusing on the Walrasian tâtonnement as a neutral dynamics whose advantage came essentially from the mathematical simplicity of handling it. Hahn's (1982) reaction to this kind of process is embarrassed. Indeed, Smale's process shifts the general equilibrium program. What can be the meaning of a dynamic process in which the behaviour of the price on each market depends on the situation on every other market? Hahn does not have any answer to give. The fundamental problem is that this process is very demanding in terms of information. While the Walrasian auctioneer does not need to know anything other than the excess demands at a given price vector, the fictional auctioneer of the Global Newton method has to know the qualitative properties of each excess demand function. Saari and Simon (1978) established that this amount of information was the price to be paid for a computational method independent of the sign of the excess demands. It is precisely this kind of information that the use of a Walrasian tâtonnement dynamics aimed at ignoring. With the Sonnenschein-Mantel-Debreu and Kirman and Koch theorems on the one hand, and with Smale, Saari and Simon's results on the other, the stability research program, in its original form, had collapsed.
It is not the purpose of the present study to tell the details of all the escapes from SMD (see Lenfant, "L'équilibre général depuis Sonnenschein, Mantel et Debreu: courants et perspectives"). As far as substitutability is concerned, it is worth noting that it appears here and there.
A first consequence of the Scarf-Smale escape from stability issues is that the concept of substitutability becomes at best too weak to serve as a descriptive basis for the properties of stable systems, at least as long as the search for global stability theorems is at stake. Slowly, the condensed structure of constraints pertaining to the research program on stability disaggregated. For instance, the SMD results destroyed the prospect of searching for reasonable conditions for uniqueness and global stability. Hence, research was reoriented towards different perspectives, either concentrating on the algorithms that permit the calculation of equilibria (this is the Scarf perspective) or concentrating on local stability results in various frameworks. It is not the purpose of this article to discuss the various ways of reacting to SMD. Work on the stability of a Walrasian-type price adjustment process has led to some new results regarding WARP and DD in relation to GS. To Hahn (1982), even though GS implies DD, there is practically no example of utility functions satisfying DD but not GS. Keenan ("Diagonal dominance and global stability") shows that DD is a sufficient condition for stability in an unnormalized tâtonnement process.
The effect of the SMD results on the research program on stability cannot be examined independently of their broader impact on the theory of general equilibrium. Once the prospect of obtaining uniqueness vanishes, the interest in local stability comes back to the forefront. Once the idea of treating money as a specific input into GET which cannot be dealt with endogenously is well accepted, the idea of appraising results that are not independent of the choice of the numéraire may attract more attention. Whatever the global effect of SMD, one still finds some research focusing on substitutability as a stabilizing phenomenon. For instance, Keenan (2000) has established that the standard conditions for global stability of the Walrasian tâtonnement (either GS, DD or that the Jacobian is negative semidefinite) "can be translated into ones that need be imposed only on the aggregate substitution matrix" (Keenan, 2001, 317), i.e. into conditions that depend exclusively on substitution effects: "Thus for each condition on the matrix of total price effects implying global stability, there is a corresponding one on only the matrix of compensated price effects which also implies global stability" (Keenan, 2001, 317). Keenan's agenda may seem dubious in its way of treating substitutability as a concept which is sufficient to support all the relevant information for understanding stability. At least, it could be taken as a remnant of the heyday of stability theory. 15 To us, it is revealing of the still lively importance of the concept of substitutability as a heuristic device for discussing stability. Following a quite different agenda, Grandmont (1992) has established conditions on the interdependence of preferences within the economic system (increasing heterogeneity) with the result that sufficient heterogeneity leads to GS for a growing set of initial values of the price vector and to WARP in the limit, thus guaranteeing stability of the Walrasian tâtonnement. But a different view could be held on the basis of a more tractable and applied approach to GET, in line with Scarf's agenda, such as the one upheld by Kehoe ("Gross substitutability and the weak axiom of revealed preference"). Kehoe argues that in production economies the GS assumption loses much of its interest because there are cases of multiple equilibria. He focuses on the number of equilibria rather than on stability issues. In this framework, it is possible to construct economies with Cobb-Douglas consumers (hence well-behaved GS behaviors) and yet a production technology that generates several equilibria. In contrast, WARP (in the aggregate) implies uniqueness even in production economies (Wald, 1936). Hence, in production economies, GS does not imply WARP.
Final remarks
In this paper, my aim was to put to the foreground the uses of the concept of substitutability in general equilibrium theory. Substitutability, as the main concept used to describe the qualitative properties of an economic system, was expected to provide good interpretative properties as well: it was hoped that substitutability would be a sufficient way to express general conditions under which the stability of the tâtonnement would be guaranteed. I have interpreted this very general idea as a guiding principle for the research on stability. It was thought that substitutes and complements should represent enough information to formulate "reasonable" or "hardly credible" stability conditions. The point was then to see how this guiding principle, this positive heuristic, was affected by the mathematical results that were found, and how it came to be deprived of its interpretative content. Of course, I do not pretend that substitutability was the only concept implied in the elaboration of the research programme. It is quite clear from my presentation that the formalisation of the Walrasian tâtonnement and the reflection on quantitative constraints have also played a role in this story. They formed a complex system of rules to be followed and were themselves embedded in different representations of the purpose of GET. The present article does not pretend to have identified the one single way of interpreting the history of this research program and its connections with other aspects of GET. It has highlighted that within the development of GET, it is possible to identify descriptive heuristics that seem to have played a role in structuring the research agenda and the interpretation of the results. But in the final analysis, the concept of substitutability has served as a criterion with which to evaluate the relevance of most of the results and to appraise the theoretical consequences of those results for the research programme. It has been a tool for rationalising the path followed by stability analysis. From a methodological point of view, a conclusion that can be drawn from this study of stability is that the weakening of a research program, and its reformulation within the framework of a purely mathematical theory, do not depend on a unique result, be it a negative result. The matter depends more pragmatically on the accumulation of many negative or weak results that come to be interpreted as a bundle indicating that something else must be done and that the programme must be amended. And it might be that the Sonnenschein-Mantel-Debreu result was not the most important result with regard to this amendment. In this respect, there was no single most important result, however questioning it may have been, and the SMD result makes sense to us when connected to other results and to the general principles that dominated research on GET in the 1960s and 1970s. This overview, it is hoped, opens more fundamentally onto a new representation of the development of GET based on simulations. Thus, a number of disruptive shifts from the original research program have changed the whole understanding of the tenets of GET and of the role played by different assumptions. The evolution of the theoretical involvement of theoreticians with the concept of substitutability offers, we think, a fruitful perspective on the transformations of a complex and intricate research topic such as GET.
Walras (1874), after discussing the tâtonnement when the price vector is not at equilibrium, applies the same technique to discussing the effects of a simple change in the parameters of the model, e.g. a change in the initial endowment of one good for one agent. The whole set of results - a mix of stability and comparative statics - is precisely what Walras calls the "Law of supply and demand".
Actually, the mathematical analysis of stability was already published in 1937 in the booklet presenting the appendix of Value and Capital, Théorie mathématique de la valeur en régime de libre concurrence (Hicks, 1937).
Hicks and Allen (1934) provided the state of the art of the ordinalist theory of choice and demand, obtaining independently of Slutsky (1915) a decomposition of the single price effect on demand into two effects (Chipman and Lenfant, 2002). They also corrected Pareto's insufficiencies (Pareto, Manual of Political Economy) as regards the definition of complements and substitutes, which implied to recognize
If the market for X is unstable taken by itself, price reactions will tend to increase market disequilibrium; hence it will not be made stable through reactions with other markets (Hicks, 1939, 71-72, §5). Again, this result would probably be different if a number of market interactions were taken into account.
5 See also Samuelson (1947), which contains in substance the three articles mentioned.
6 Symmetry of the characteristic determinant of order m implies (and requires) symmetry of all its principal minors.
9 We know from Negishi (1958, 445, fn) that his contribution and those of Hahn (1958) and Arrow and Hurwicz (1958) were prepared independently and submitted to Econometrica between April and July 1957.
"This condition apears to be new in the literature on general equilibrium, although Dr. Frank
12 For any partition of the set of goods J = (1, ..., n) into two subsets J_1 and J_2, we have Σ_{i∈J_1} z_{i j_2} + Σ_{i∈J_2} z_{i j_1} > 0 for all j_1 ∈ J_1, j_2 ∈ J_2.
15 Note that on this occasion, Keenan favors the discussion of conditions on the Jacobian over the use of a Lyapunov function.
"745282"
] | [
"1188"
] |
01764151 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764151/file/462132_1_En_13_Chapter.pdf | Hussein Khlifi
email: [email protected]
Abhro Choudhury
Siddharth Sharma
Frédéric Segonds
Nicolas Maranzana
Damien Chasset
Vincent Frerebeau
Towards cloud in a PLM context: A proposal of Cloud Based Design and Manufacturing methodology
Keywords: Cloud, Collaborative Design, PLM, Additive Manufacturing, Manufacturing
Product Lifecycle Management (PLM) integrates all the phases a product goes through from inception to its disposal but generally, the entire process of the product development and manufacturing is time-consuming even with the advent of Cloud-Based Design and Manufacturing (CBDM). With enormous growth in Information Technology (IT) and extensive growth in cloud infrastructure the option of design and manufacturing within a cloud service is a viable option for future. This paper proposes a cloud based collaborative atmosphere with real-time interaction between the product development and the realization phases making the experience of design and manufacturing more efficient. A much-optimized data flow among various stages of a Product Lifecycle has also been proposed reducing the complexity of the overall cycle. A case study using Additive Manufacturing (AM) has also been demonstrated which proves the feasibility of the proposed methodology. The findings of this paper will aid the adoption of CBDM in PLM industrial activities with reduced overall cost. It also aims at providing a paradigm shift to the present design and manufacturing methodology through a real-time collaborative space
Introduction
With the emergence of new advanced technologies and rapidly increasing competition for efficient product development, researchers and industry professionals are constantly looking for new innovations in the field of design and manufacturing. It has become a challenge to meet the dynamics of today's marketplace in the manufacturing field, as product development processes are geographically spread out. In the research community of Cloud-Based Design and Manufacturing, debate is ongoing on key characteristics such as cloud-based design, communication among users, safety of data, data storage, and data management, among others. Such discussions have now been answered with the developments of cloud-based design and manufacturing. Efforts are now directed towards making advancements in the field of design and manufacturing by using IT tools & PLM concepts. Some researchers are developing a PLM paradigm for linking modular products between suppliers and product developers [START_REF] Belkadi | Linking modular product structure to suppliers' selection through PLM approach: A Frugal innovation perspective[END_REF], while others have extended their PLM research to the domain of Building Information Modeling, taking motivation and best practices from PLM and emphasizing an information-centric management approach in construction projects [START_REF] Boton | Comparing PLM and BIM from the Product Structure Standpoint[END_REF]. The revolutionary advancement of cloud services, which now offer distributed network access, flexibility, availability on demand and pay-per-use services, has certainly given a push to applying cloud computing technology in the field of manufacturing. The idea of performing manufacturing in the cloud has gained such momentum that industries are pushed to carry out operations in the cloud rather than using traditional methods. Today's world is moving faster and is more connected than ever before due to globalization, which has created new opportunities & risks. Traditional methods lack the ability to allow users who are geographically spread out to work in a collaborative environment to perform design & manufacturing operations. Traditional design processes follow a one-way sequence consisting of four main phases: customer, market analysis, designer and manufacturing engineers, in that order, where each phase was a standalone centralized system with minimal cross-functional interaction. Over time, technologies like CAD, internet services and the client-server model evolved drastically, but overall the advantages provided by these systems were limited in nature, as they still followed the same one-way methodology [START_REF] Abadi | Data Management in the Cloud: Limitations and Opportunities[END_REF]. Moreover, the supply chain has so far remained rigid and costly, whereas a cloud-based supply chain is customer-centric: users with specific needs are linked with service providers while meeting the cost, time and quality requirements of the user. This is where the adoption of Cloud-Based Design and Manufacturing (CBDM) becomes essential, as it is based on a cloud platform that allows users to collaborate and use resources on demand and on a self-service basis. This provides the flexibility and agility required to reconfigure resources to minimize down-time, also called rapid scalability.
CBDM is designed to allow collaboration and communication between the various actors involved from the design to the delivery phase, so that cross-disciplinary teams can work in a collaborative way in real time from anywhere in the world with access to the internet. Cloud manufacturing makes it possible to produce a variety of products of varying complexity and helps in mass customization. Using the CBDM system, prototypes of a part can be manufactured without buying costly manufacturing equipment. Users can pay a subscription fee to acquire software licenses and use manufacturing equipment instead of purchasing them. Finally, the use of a cloud-based environment opens up new opportunities, as tasks that were not economically viable earlier can be done using cloud services.
2
State of The Art
Cloud Based Collaborative Atmosphere
With the coming & advancement of Web 2.0, social collaborative platforms provided a powerful way to exchange information and data [START_REF] Wu | Cloud-based design and manufacturing systems: A social network analysis[END_REF]. Internet-based information and communication technologies now allow information to be exchanged in real time and provide the means to put into practice the concepts of mass collaboration and distributed design & manufacturing processes [START_REF] Schaefer | Distributed Collaborative Design and Manufacture in the Cloud-Motivation, Infrastructure, and Education[END_REF]. Collaboration-based design & manufacturing comprises all the activities that revolve around the manufacture of a product and leads to significant economies of scale, reduced time to market, improvement in quality, reduced costs, etc. In a cloud manufacturing system, manufacturing resources & capabilities, software, etc. are interconnected to provide a pool of shared resources and services, such as Design as a Service, Simulation as a Service, and Fabrication as a Service, to the consumers [START_REF] Ren | Cloud manufacturing: From concept to practice[END_REF]. Recent research has strongly emphasized the connectivity of products, in other words smart connected products, via the cloud environment for better collaboration across the various manufacturing operations carried out on a product [START_REF] Goto | Multi-party Interactive Visioneering Workshop for Smart Connected Products in Global Manufacturing Industry Considering PLM[END_REF]; this was our first motivation for moving design and manufacturing into the cloud domain. In addition, many large-scale enterprises have formed decentralized and complex networks for their design and manufacturing operations, where constant interaction with small-scale enterprises is becoming a challenge. However, with the emergence of cloud computing, it has been observed that more and more enterprises have shifted their work into the cloud domain and have saved millions of dollars [START_REF] Wu | Cloud-based design and manufacturing: Status and promise," a service-oriented product development paradigm for the 21st century[END_REF][START_REF] Wu | Cloud-based manufacturing: old wine in new bottles?[END_REF]. This forms our second motivation for implementing manufacturing, in our case AM, on the "cloud"; it is backed by the fact that automobile and aeronautics giants are currently shifting a wide portion of their work onto cloud platforms by implementing cloud computing technology in many business lines pertaining to the engineering domain. This also reaffirms our belief that cloud computing will enable enterprises, both small and big, to profit from moving their design and manufacturing tasks into the cloud. Hence this forms the first pillar of the proposed CBDM.
Rapid manufacturing scalability
The idea of providing manufacturing services on the internet was in fact developed a long time ago, when researchers envisaged the propagation of the IoT (Internet of Things) in production. Recent research has showcased the importance of a continuous process flow in lean product development, which gave rise to the idea of scalability in the manufacturing process to make it more fluid.
In a world of rapid competition, scalability of rapid manufacturing is more important than ever. In alignment with the statement made by Koren et al. [START_REF] Yoram | Design of Reconfigurable Manufacturing Systems[END_REF] regarding the importance of reconfigurable manufacturing systems (RMSs) for quick adjustment of production capacity and functionality, CBDM allows users to purchase services like manufacturing equipment and software licences with a reconfiguration module, which in turn allows scalability of the manufacturing process and prevents over-purchasing of computing and manufacturing capacities. This digital manufacturing productivity greatly enhances the scalability of the manufacturing capacity in comparison to the traditional manufacturing paradigm, as is evident from the recent research work carried out by Lechevalier et al. [START_REF] Lechevalier | Model-based Engineering for the Integration of Manufacturing Systems with Advanced Analytics[END_REF] and Moones et al. [START_REF] Moones | Interoperability improvement in a collaborative Dynamic Manufacturing Network[END_REF], who have showcased efficient interoperability in a collaborative and dynamic manufacturing framework. As stated by Wua et al. [START_REF] Wua | Cloud-based design and manufacturing: A new paradigm in digital manufacturing and design innovation[END_REF], from the perspective of manufacturing scalability, CBDM allows the product development team to leverage more cost-effective manufacturing services from global suppliers to rapidly scale the manufacturing capacity up and down during production. Hence, rapid manufacturing scalability forms the second pillar of the proposed methodology.
Design and additive manufacturing methodology model
In this section, the flow of information in the digital chain has been studied to optimize the quality of AM, which remains our focus in the experiment used to test the proposed methodology. This information management system interacts with the support infrastructure [START_REF] Kim | Streamlining the additive manufacturing digital spectrum: A systems approach[END_REF] (the standards, methods, techniques and software). The table, whose phases 3 to 6 are represented in Fig. 1, provides an overview of the eight distinct stages and transitions. With a clear understanding of the various phases of additive manufacturing and the transitions of information between each phase, we were able to identify optimization opportunities for additive manufacturing and establish mechanisms and tools to achieve them. In the current research, phases 3 and 4 have been considered, as represented by the dotted-line region in Fig. 1. This transition is an important preparedness activity for AM that is essential to the achievement of the final product [START_REF] Fenves | A core product model for representing design information[END_REF]. It includes activities like the journal of the 3D model, generation of the carrier around the 3D model, decomposition of the 3D model into successive layers and generation of a code which contains the manufacturing instructions for the machine. It is this transition stage, "Activities for the AM process", which is dealt with later in this research project, where the AM process is optimized in the proposed methodology, making this model a fourth pillar of the methodology.
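To make this phase 3-to-4 transition concrete, the sketch below shows the listed activities as an ordered pipeline in Python. It is only an illustration of our reading of the list above (model export/triangulation, support generation, slicing, machine-code generation); the class and function names are our own assumptions and do not correspond to the 3DEXPERIENCE tooling used later in the case study.

from dataclasses import dataclass, field

@dataclass
class AMJob:
    cad_model: str                          # reference to the 3D CAD file stored on the cloud
    artifacts: dict = field(default_factory=dict)

def tessellate(job: AMJob) -> AMJob:
    job.artifacts["mesh"] = f"mesh({job.cad_model})"            # export/triangulate the 3D model
    return job

def generate_supports(job: AMJob) -> AMJob:
    job.artifacts["supports"] = "support structures"            # carrier around the 3D model
    return job

def slice_layers(job: AMJob, layer_height_mm: float = 0.1) -> AMJob:
    job.artifacts["layers"] = f"layers @ {layer_height_mm} mm"  # decomposition into successive layers
    return job

def generate_machine_code(job: AMJob) -> AMJob:
    job.artifacts["machine_code"] = "instructions for the AM machine"
    return job

def prepare_for_am(job: AMJob) -> AMJob:
    # One traceable flow of information, each step consuming the output of the previous one.
    for step in (tessellate, generate_supports, slice_layers, generate_machine_code):
        job = step(job)
    return job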
Real-Time Business Model
One of the major advantages of using CBDM is that we are always linked to the outer world, which lets us know the real-time scenario. So, as one of the pillars of our methodology, we propose a real-time business model to execute the entire process in the most efficient way in terms of quality and cost. The Real-Time Request for Quotation (RT-RFQ) is an interesting feature which increases the utility of the system. This basically utilizes the Knowledge Management System (KMS), which is an integral part of cloud-based design and manufacturing systems [START_REF] Li | Cost, sustainability and surface roughness quality-A comprehensive analysis of products made with personal 3D printers[END_REF]. The selection of candidate key service providers (KSPs) is done based on the abilities and capacities of the KSP to produce the product within the stipulated time, cost and quality. The entire process of generating a request for quotation, finalising the service provider and delivering the final product is in real time, thus creating collaboration between the sellers and the buyers, which we call the "Marketplace". The entire material management and supply chain of the product in a collaborative platform is an integral part of our proposed methodology, thus forming one of its pillars.
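As a rough illustration of how such a real-time selection could be scripted, the Python sketch below filters candidate KSP quotes by the buyer's time, cost and quality constraints and ranks the feasible ones. The attribute names and the ranking rule are assumptions for illustration only, not part of any existing marketplace API.

from dataclasses import dataclass

@dataclass
class Quote:                       # one KSP answer to a real-time RFQ
    ksp: str
    cost: float                    # offered price
    lead_time_days: float
    quality_rating: float          # e.g. aggregated customer ratings, 0..5

def select_ksp(quotes, max_cost, max_days, min_quality):
    # Keep only quotes meeting the stipulated cost, time and quality requirements.
    feasible = [q for q in quotes
                if q.cost <= max_cost and q.lead_time_days <= max_days
                and q.quality_rating >= min_quality]
    # Rank the rest; here cheaper and faster is better, quality breaks ties.
    return sorted(feasible,
                  key=lambda q: (q.cost, q.lead_time_days, -q.quality_rating))

# Example: award the order to the best-ranked provider, if any.
quotes = [Quote("KSP-A", 120.0, 5, 4.5), Quote("KSP-B", 95.0, 9, 4.0)]
best = select_ksp(quotes, max_cost=150.0, max_days=10, min_quality=3.5)
winner = best[0].ksp if best else None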
Proposal of a methodology
Synthesis
The synthesis of the proposed methodology is supported by four foundation pillars: the cloud environment, rapid manufacturing scalability, the design and additive manufacturing methodology model and the real-time business model. As discussed in section 2.3, the optimization process involved in the AM workflow is rich in research opportunities, and it is thus important to reduce the number of phases involved in the manufacturing process. In the construction of the methodology, a centralized system has been considered which controls the whole process, i.e. the cloud domain, and forms a platform where all actions take place. Thus the "cloud" atmosphere forms the heart of the methodology, which starts with inputs that are decided during the RFQ and award acknowledgement process of a project. 3D design (phase 1) is followed by two new functionalities, Preparation for manufacturing (phase 2) and the Marketplace (phase 3). Then comes the generic manufacturing process (phase 4), which in combination with phases 1, 2 and 3 gives the power of rapid manufacturing scalability, as discussed in section 2.2. The last two phases, packaging (phase 5) and delivery (phase 6), constitute the supply chain network of the process and are interconnected with the phase-3 "Marketplace" in the cloud by means of interactions. Phases 3, 5 and 6, along with the inputs given to the process, are inspired by the real-time business model discussed in section 2.4. In this way, the four pillars form the backbone of the methodology. Collaboration at each phase, in the form of propagation of design, consultation, evaluation and notification, happens in parallel or simultaneously during the process, which forms a distributed and connected network in the methodology.
In addition to defining the pillars, the existing methodology workflow was simplified. The methodology process has been scaled down to six phases, instead of the eight mentioned by Kim D [START_REF] Kim | Streamlining the additive manufacturing digital spectrum: A systems approach[END_REF]. For that, some sub-stages were regrouped into phases to optimize the process and simplify the methodology. Indeed, it was noticed that by reducing phases and regrouping linked sub-stages into a single phase, we can minimize the interactions that could happen during transitions between the different phases; this aided us in achieving a 6-phase methodology process with multiple parallel interactions. By grouping sub-steps into main steps, we proposed a 6-phase methodology. This approach of grouping sub-steps represents our idea of moving from a "task to do" vision to a "defined role" vision. Instead of thinking in terms of tasks such as 3D scanning, 3D modelling or triangulation, it is better to think in terms of the task of a role such as a 3D designer or a mechanical engineer. Following this approach, we group several tasks under a specific role. That is how we simplified our methodology, which is checked and validated in the case study applied to AM.
Methodology
From the 3D design to the product delivery, this methodology describes six phases including five transitions, with traceability on the cloud, as outlined in Fig. 2.
As shown in Fig. 2, the methodology process starts with a 3D design phase (1), which involves designing the product in a 3D environment, producing a 3D CAD file and saving it on the cloud, allowing collaborative work with anyone who has access. This file is then sent to be prepared for manufacturing (2). The preparation of the 3D model before manufacturing basically consists in deciding the manufacturing process that will be used to produce the designed part. Sub-steps such as geometry repair, meshing, weight optimization and finite element simulations are grouped in a single manufacturing preparation phase (2). Once the file is prepared for manufacturing, it is uploaded to a Marketplace (3) platform where the product will be evaluated and reviewed by service providers. It is an online collaborative platform which brings together buyers (designers, engineers and product developers) and sellers, the key service providers (KSPs), who manufacture and bring the design and the concept to realization. Here phases 2 and 3 work in parallel to double-check whether the 3D file is ready for manufacturing or requires further preparation or optimization for the manufacturing process to be used. At this stage, the product design has been optimized and prepared for manufacturing, and the most efficient service provider has been awarded the order by the designer. The service providers will lead the customer to the appropriate manufacturing process and will start the manufacturing phase (4). A product validation and evaluation loop occurs after manufacturing to make sure the product matches the requirement specifications. Once the product is manufactured and meets the requirements, the service provider proceeds to the packaging (5) and then the delivery (6). The service provider selected in the Marketplace also has the responsibility of providing packaging and delivery services. The methodology proposed here is the result of conceptual and theoretical work. However, it must be applied at a practical level to evaluate its efficiency. We have implemented the proposed theoretical model in a case study to highlight the benefits of this model in a real-world scenario. The following section describes a case study of the proposed methodology, applied to Additive Manufacturing.
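To summarise the flow of Fig. 2 in executable form, the short Python sketch below walks an order through the six phases, including the phase 2/3 printability loop with the service provider and the post-manufacturing validation loop. It is purely illustrative: the two predicates are placeholders to be supplied by the platform, and none of the names correspond to actual platform functions.

def run_cbdm_workflow(order, printable, meets_requirements):
    # 'printable' and 'meets_requirements' are callables supplied by the platform (placeholders here).
    design = f"3D design of {order}"              # (1) collaborative 3D design on the cloud
    prepared = f"prepared({design})"              # (2) preparation for manufacturing
    while not printable(prepared):                # (2) <-> (3) loop with the service provider
        prepared = f"re-prepared({prepared})"
    part = f"manufactured({prepared})"            # (3) Marketplace award, then (4) manufacturing
    while not meets_requirements(part):           # validation and evaluation loop
        part = f"rework({part})"
    packaged = f"packaged({part})"                # (5) packaging
    return f"delivered({packaged})"               # (6) delivery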
Additive manufacturing (AM) has become a new way to realize objects from a 3D model [START_REF] Thompson | Design for Additive Manufacturing: Trends, opportunities, considerations, and constraints[END_REF] as it provides a cost-effective and time-efficient way to produce lowvolume, customized products with complex geometries and advanced material properties and functionalities.
From 3D design to product delivery, the proposed methodology discussed in section 3 has been applied step by step in the AM context, thus changing step (2) from "Preparation for manufacturing" to "Preparation for Additive Manufacturing", while the rest remains the same. As the project was conducted in partnership with Dassault Systèmes, and since we wanted to use a single platform for the whole CBDM process, the company's "3DEXPERIENCE" solution was used to test the proposed methodology. The focus was on optimizing the methodology dataflow, which directly impacts the product quality.
Step 1: 3D design In the first phase, the user will use a 3D design app on the cloud and work collaboratively. Once the product is designed and converted into an appropriate format, we proceed to the preparation process for manufacturing.
Step 2: Preparation for manufacturing
At this stage we have a 3D model file which requires preparation for 3D printing. Fig. 3 describes the fundamental AM processes and operations followed during the preparation of the CAD model for manufacturing in an AM environment. During the process, pre-context setting and meshing were also carried out.
Step 3: The 3DMarketplace
The 3DMarketplace is a platform for additive manufacturing. It addresses the end-to-end process of upstream material design, downstream manufacturing processes and testing to provide a single flow of data for engineering parameters. The objective here, as a buyer, is to select the most efficient key service provider possessing the required capabilities and skill sets on the Marketplace to proceed to the manufacturing phase (Fig. 4). The Marketplace shows a list of service providers that can process the product manufacturing. A printing request was sent to the laboratory, where a back-and-forth exchange between the buyer and the service provider is necessary to ensure the printability of the 3D model and the use of the right manufacturing technology. This phase ends with the confirmation of the order and the start of the AM process.
Step 4-5-6: Manufacturing, Packaging and delivery As defined in the proposed methodology section, the service provider from the Marketplace takes care of the manufacturing, packaging and delivery service. For the delivery, we chose to pick up the part. The customer can rate their experience and raise complaints on the 3DMarketPlace if required, and thus allowing improvement in the services provided.
Conclusion and Future work
The successful implementation of cloud-based additive manufacturing demonstrated that a collaborative and distributed design and manufacturing task as complex as AM can be performed with ease using a cloud-based service. This research points towards a centralized user interface, i.e. a cloud platform, which forms the heart of the proposed methodology, allowing its users to aggregate data and facilitating coordination, communication and collaboration among the various players of the design, development, delivery and business segments. We optimized the digital workflow while applying the proposed methodology, which helped in obtaining better quality products, shorter machining time, less material use and reduced AM costs. One of the main gains from the study was the use of the 3D Marketplace in the methodology, which offers a collaborative atmosphere for discussing subjects such as 3D model design, geometry preparation and the appropriate manufacturing process, and which also aids in the evaluation and validation of the two previous phases of the proposed methodology; this is valuable from the optimization and accuracy point of view in product development and delivery. The prototype of the CBDM system presented in this work will help to develop confidence in the functioning of a CBDM system, especially in the domain of AM, and will serve as an ideal framework for developing it further in the near future.
Future work can consist of an adapted version of the proposed methodology, CBAM (Cloud-Based Additive Manufacturing), with a more optimized process for AM. Overall, based on the work performed in the case study, the proposed methodology offers a simplified, optimized, collaborative and AM-oriented solution that could be used in industrial and academic contexts, and it further strengthens the case for the adoption of cloud-based services in the manufacturing sector in the near future.
Fig. 1. Extract of digital channel information flow for AM as proposed by Kim D [17]
Fig. 2. Proposal of a CBDM methodology
Fig. 3. Preparation for manufacturing steps
Fig. 4. The 3D Marketplace process with the service providers used during the experiment
Case study: Additive Manufacturing. This research is conducted in a partnership between the LCPI, a research lab of the engineering school Arts et Métiers ParisTech, and the Dassault Systèmes company. Collaborating to unite an academic research entity and an industrial leader is one of our main purposes, in order to point out merits of CBDM such as a distributed & collaborative network as a solution for today's design & manufacturing activities. The model proposed in this paper is tested by carrying out the design, manufacturing, trading on the Marketplace and finally packaging of a very common industrial product called a "joiner", in a collaborative & distributed environment on a cloud platform, to demonstrate the feasibility of the proposed solutions by experimental tests. | 25,258 | [
"1030641",
"1030642",
"1030643",
"916993",
"941395",
"1030644",
"1030645"
] | [
"127758",
"301940",
"175453",
"127758",
"175453",
"127758",
"175453",
"127758",
"175453",
"127758",
"175453",
"301940",
"301940"
] |
01764153 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764153/file/462132_1_En_41_Chapter.pdf | Farouk Belkadi
Ravi Kumar Gupta
Stéphane Natalizio
email: [email protected]
Alain Bernard
Modular architectures management with PLM for the adaptation of frugal products to regional markets
Keywords: PLM, Modular Architecture, Product Features, Co-evolution 1
Nowadays companies are challenged with high competitiveness and saturation of markets, leading to a permanent need for innovative products that ensure the leadership of these companies in existing markets and help them to reach new potential markets (i.e. emerging and mature markets). The requirements of emerging markets are different in terms of geography, economy, culture, governance policies and standards. Thus, adapting existing European products to develop new products tailored to emerging markets is one possible strategy that can help companies to cope with such a challenge. To do so, a large variety of products and options have to be created, managed and classified according to the requirements and constraints of a target regional market. This paper discusses the potential of the PLM approach to implement the proposed modular product design approach for the adaptation of European products and production facilities to emerging markets. Using the modular approach, the product design evolves iteratively, coupling the configuration of various alternatives of product architectures and the connection of functional structures to their contexts of use. This enables the customization of the adapted product to specific customers' needs.
Introduction
Customer's requirements fluctuate across geographical regions, standards, and context of use of the product of interest, whereas global production facilities to address such requirements are constrained by local governing policies, standards, and local resources availability. In order to address emerging market's needs and adapt existing product development facilities, it is important to analyze and evaluate different possibilities of product solutions against specific requirements of one regional market.
An emerging market is generally characterized as a market under development, with less presence of standards and policies compared to mature markets in developed countries [START_REF]MSCI Market Classification Framework[END_REF]. To respond to the competition from these emerging countries, frugal innovation is considered as a solution to produce customized products in a shorter time and to improve the attractiveness of western companies [START_REF] Khanna | Emerging Giants: Building World-Class Companies in Developing Countries[END_REF]. Frugal innovation or frugal engineering is the process of reducing the complexity and cost of goods and of their production. A frugal product is defined in most industries in terms of the following attributes: Functional, Robust, User-friendly, Growing, Affordable and Local. The details of these attributes are given in [START_REF] Bhatti | Frugal Innovation: Globalization, Change and Learning in South Asia[END_REF][START_REF] Berger | Frugal products, Study results[END_REF].
As per the study [START_REF] Gupta | Adaptation of european product to emerging markets: modular product development[END_REF], these frugal attributes are not always sufficient for adapting existing product development facilities in European countries to emerging markets. Several additional factors can influence consumer behavior as well such as cultural, social, personal, psychological and so on. To answer this demand, companies have to provide tangible goods and intangible services that result from several processes involving human and material resources to provide an added value to the customer.
However, looking to the large variety of markets, customer categories, needs and characteristics, companies have to create and manage a huge variety of products and services, under more complex constraints of delivery time reduction and cost saving. To do so, optimization strategy should concern all steps of the development process, including design, production, packaging and transportation [START_REF] Ferrell | Marketing: Concepts and Strategies[END_REF].
Generally, three categories of product are distinguished depending on the level of customization and the consideration of customer preferences, namely: (i) standard products that do not propose any customization facility; (ii) mass customized products offering customization of some parts of the product; and (iii) unique products developed to answer a specific customer demand. Despite this variety, every product is defined through a bundle of elements and attributes capable of exchange and use. It has often been shown that modular architectures offer great advantages in supporting the creation and management of various product architectures from the same family. Taking advantage of this concept, this paper proposes the use of a modular approach to address emerging market requirements through the adaptation of original products. The key issue is the use of the PLM (Product Lifecycle Management) framework as a kernel tool to support both the management of product architectures and the connection of these architectures with production strategies. The specific use case of product configuration of a mass customized product is considered as the application context.
The next section discusses the main foundation of modular approach and its use for the configuration of product architectures. Section 3 discusses the implementation of the proposed approach in Audros software. Audros is a French PLM providing a set of flexible tools adaptable to a lot of functional domains through an intelligent merge of the business process model, the data model generator and the user interface design. Finally, section 4 gives the conclusion and future works.
2
Product configuration strategies within modular approach
Product modular architectures
Product architecture is the way by which the functional elements (or functions) of a product are arranged into physical units (components) and the way in which these units interact [START_REF] Eppinger | Product Design and Development[END_REF]. The choice of product architecture has broad implications for product performance, product change, product variety, and manufacturability [START_REF] Ulrich | The role of product architecture in the manufacturing firm[END_REF]. Product architecture is thought of in terms of its modules. It is also strongly coupled to the firm's development capability, manufacturing specialties, and production strategy [START_REF] Pimmler | Integration analysis of product decompositions[END_REF].
A product module is a physical or conceptual grouping of product components to form a consistent unit that can be easily identified and replaced in the product architecture. Alternative modules are a group of modules of the same type and satisfy several reasoning criteria/features for a product function. Modularity is the concept of decomposing a system into independent parts or modules that can be treated as logical units [START_REF] Pimmler | Integration analysis of product decompositions[END_REF][START_REF] Jiao | Fundamentals of product family architecture[END_REF]. Modular product architecture, sets of modules that are shared among a product family, can bring cost savings and enable the introduction of multiple product variants quicker than without architecture. Several companies have adopted modular thinking or modularity in various industries such as Boeing, Chrysler, Ford, Motorola, Swatch, Microsoft, Conti Tires, etc. [START_REF] O'grady | The age of modularity: Using the new world of modular products to revolutionize your corporation[END_REF]. Hubka and Eder [START_REF] Hubka | Theory of technical systems[END_REF] define a modular design as "connecting the constructional elements into suitable groups from which many variants of technical systems can be assembled". Salhieh and Kamrani [START_REF] Salhieh | Macro level product development using design for modularity[END_REF] define a module as "building block that can be grouped with other building blocks to form a variety of products". They also add that modules perform discrete functions, and modular design emphasizes minimization of interactions between components.
Generic Product Architecture (GPA) is a graph where nodes represent product modules and links represent connections among product modules according to specific interfaces (functional, physical, information and material flow) to represent a product or a set of similar products forming a product family. A GPA represents the structure of the functional elements and their mapping into different modules and specifies their interfaces. It embodies the configuration mechanism to define the rules of product variant derivation [START_REF] Elmaraghy | Product Variety Management[END_REF]. A clear definition of the potential offers of the company and the feasibility of product characteristics could be established for a set of requirements [START_REF] Forza | Application Support to Product Variety Management[END_REF]. Figure 1 shows an example of modular product architecture for the case of bobcat machine, including the internal composition of modules and the interaction between them [START_REF] Bruun | Interface diagram: Design tool for supporting the development of modularity in complex product systems[END_REF]. The similar concepts mentioned in the literature are 'building product architecture', 'design dependencies and interfaces' and 'architecture of product families', which can be used for the development of GPA. The GPA can be constructed by using different methods presented in the literature [START_REF] Jiao | Product family design and platform-based product development: a state-of-the-art review[END_REF][START_REF] Bruun | PLM support to architecture based development contribution to computer-supported architecture modelling[END_REF].
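A minimal way to encode a GPA of this kind, assuming nothing beyond the definition above, is as a graph whose nodes are modules and whose edges carry the interface type. The Python sketch below is only an illustration; the class and attribute names are our own and do not correspond to any PLM system's objects.

from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    function: str                      # the product function the module realises

@dataclass
class GPA:
    modules: dict = field(default_factory=dict)   # name -> Module
    links: list = field(default_factory=list)     # (module_a, module_b, interface_kind)

    def add_module(self, m: Module):
        self.modules[m.name] = m

    def connect(self, a: str, b: str, interface: str):
        # interface in {"functional", "physical", "information", "material"}
        self.links.append((a, b, interface))

    def neighbours(self, name: str):
        return [b if a == name else a for a, b, _ in self.links if name in (a, b)]

# Example: two modules of a product family sharing a physical interface.
gpa = GPA()
gpa.add_module(Module("frame", "support loads"))
gpa.add_module(Module("lift arm", "raise attachment"))
gpa.connect("frame", "lift arm", "physical")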
Construction of modular architectures
The use of the modular approach should offer the ability to work in different configurations. The concept of GPA can give interesting advantages for these issues. Indeed, by using an existing GPA to extract reusable modules, a first assessment of interface compatibilities and of the performance of the selected modules can be performed with regard to various product structures. Thus, module features are defined to support these assessments and are used to link process specifications, production capabilities, and all other important criteria involved in the product development process. As the developed GPA is a materialization of the existing products, the adaptation of these products to the new market requirements will be obtained through swapping, replacing, combining and/or modifying actions on the original product architectures.
In fact, the application of customer-driven product-service design can follow one of two processes: either collectively, through the generic product architecture, by mapping all the requested functions, or by mapping functions individually through features and then configuring product modules (cf. Figure 2). In the latter case, more flexibility is allowed for the selection of product modules and consequently more innovative possibilities for the final product alternatives. However, more attention is required for the global consistency of the whole structure. The concept of "feature" is considered as a generic term that includes technical characteristics used from an engineering perspective as well as inputs for decision-making criteria, useful for the deployment of a customer-driven design process in the context of the adaptation of existing European products and development facilities to an emerging market.
Fig. 2. Two ways product configuration strategies for identification of modules for a product
In the first case, starting from existing solutions implies a high level of knowledge about the whole development process and will considerably reduce the cost of adaptation to a new market. Using individual mapping of modules, the second way will give more possibilities to imagine new solutions (even though the design process does not start from scratch) by reusing modules that were not originally created for the identical function. The implementation scenarios detailing these two ways are the following:
Configuration 1: Mapping of Requested Functions to GPA. The starting point in this configuration is the existing product families, really produced to meet certain functions and sold to customers in other markets. The goal is then to adapt the definition of modules regarding the new requirements according to their level of correspondence with existing functions, the importance of each customer option, and possible compatibilities between local production capabilities and those used for the realization of the original product. The modular approach is used to satisfy set of functions collectively through GPA by mapping all the functions required.
Configuration 2: Mapping sets of functions to modules through features. In the second configuration, the modular approach is used to satisfy functions individually through features. More attention is given to product modules separately, regardless of the final product structures involving these modules. This is also the case when the previous product structures show only partial correspondence with the new requirements. This configuration offers more innovation freedom for the design of a new product but requires a strong analysis of interface compatibilities across modules. In this configuration, we go from the interpretation of the functions to the identification of all the modules' features, then search whether there are adequate modules, and then configure these modules into possible product architectures.
Implementing modular approach in PLM for the configuration of customized product
By using modular architectures, different product configurations can be built as an adaptation of existing products, or new ones can be created through the combination and connection of existing modules developed separately in previous projects. Product configuration is already used from a mass customization perspective [START_REF] Daaboul | Design for mass customization: Product variety VS process variety[END_REF]. It can also be used to increase product variety for regional adaptation and to improve the possibility for the customer to choose between different options for an easily customized product with low production cost. This is possible through the matching among product modules, process modules and production capabilities. The development of a product for a new market can then be obtained through a concurrent adjustment of the designed architecture and the production strategy, considered as a global solution. Following this approach, the involvement of the customer in the product development process is achieved through an easier clarification of his needs as a combination of functions and options. These functions/options have to be connected at the design stage to pre-defined modules. Customers then engage only with the modules they are interested in and which present a high potential for adaptation. On the production side, process alternatives are defined for each alternative of product configuration, so that all the options presented in the product configurator are already validated in terms of compatibility with the whole product architecture and production feasibility. This ensures more flexibility in the production planning.
Figure 3 shows a global scenario connecting a product configurator with the PLM. Following this scenario, the customer can visualize different options for one product type and submit his preferences. These options are already connected to a list of pre-defined models which were designed previously and stored in the PLM. The selection of a set of options will activate various product architectures in the PLM. Based on the selected set of options, the designer extracts the related product architectures. For every option displayed to the customer in the configurator, a set of module alternatives is available in the PLM and can be managed by the designer to create the final product architecture as a combination of existing architectures.
In addition, when selecting the product family and the target market, the PLM interfaces provide a first filtering of modules respecting the target market requirements.
Fig. 3. Scenario of product configuration with PLM
The creation of the predefined models in the PLM is part of a design process which is fulfilled in the design department based on the configuration strategies presented in section 2.2. For each target market or potential category of customers, every type of product is presented with its main architecture connected to a set of alternative architectures. Each alternative implements one or more product options that are tailored to specific regional markets by means of related alternatives of production process.
The main question to be resolved in this design stage concerns the characteristics which the concept of modules should adopt in order to cope with the co-evolution strategy of product architecture and production process, respecting customization constraints. In this case, specific features are defined with the module concept as decision-making criteria to support the product configuration process within a coevolution perspective as given below:
Criticality: The importance of a module in the final product architecture, regarding the importance of the related option/function to the customer. This will help the designer to choose between solutions in the presence of parameter conflicts.
Interfacing: The flexibility of one module to be connected with other modules in the same architecture. This increases its utilization in various configurations.
Interchangeability: The capacity of one module to be replaced by one or more other modules from the same category to provide the same function. Based on this feature and the previous one, the customer can select only compatible options.
Process Connection: It gives information about the first time the related module is used in the production process and its dependency on other assembly operations. This is particularly important if the company aims to offer the customer more flexibility for selecting some options even though the production process has already started.
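These four features can be pictured as plain data attached to each module. The Python sketch below uses invented field names (the actual Audros data model is the one shown in Fig. 4 and is not reproduced here) and shows how the interfacing and interchangeability features could be used to keep only module alternatives compatible with a partially built architecture.

from dataclasses import dataclass, field

@dataclass
class ModuleFeatures:
    name: str
    criticality: int                      # importance of the related option/function (e.g. 1..5)
    interfaces: set = field(default_factory=set)             # interface types the module exposes
    interchangeable_with: set = field(default_factory=set)   # modules providing the same function
    first_process_step: str = ""          # first assembly operation using the module (process connection)

def compatible_alternatives(candidate_pool, architecture):
    # Keep candidates whose interfaces match at least one module already in the architecture,
    # or that can replace an existing module (interchangeability).
    kept = []
    for cand in candidate_pool:
        fits = any(cand.interfaces & m.interfaces for m in architecture)
        replaces = any(m.name in cand.interchangeable_with for m in architecture)
        if fits or replaces:
            kept.append(cand)
    # Prefer the most critical modules when alternatives conflict.
    return sorted(kept, key=lambda m: -m.criticality)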
To support the implementation of such a process, a data model is implemented in the Audros PLM to manage a large variety of product alternatives connected to several alternatives of production (cf. Figure 4). In this model, every function is implemented through one or several technical alternatives. The concept of "module" is used to integrate one (and only one) technical solution in one product structure. Every product is composed of several structures representing product alternatives. Each structure is composed of a set of modules and connectors that present one or more interfaces. The concept of product master represents the models of mature products that will be available for customization within the product configurator.
Fig. 4. PLM Data model implementing modular approach
Based on this data model, several scenarios are defined as an implementation of the construction and use processes of modular architectures (see Figure 2). These scenarios concern, for instance, the creation of original products from scratch or from the adaptation of existing ones, the connection between the PLM and the product configurator for the ordering of a new customized product, the connection between the PLM and MPM (manufacturing process management) for the realization of the selected alternatives, etc. A scenario of the ordering and customization of a frugal product based on the adaptation of an existing one, using the PLM, is described as follows. The customer or the marketing department chooses an existing product as a base and defines the customization to be applied by the design office to adapt the product.
Actors: Customer/Marketing department of the company + Design department
Goal: select the product to be customized and ordered
Pre-condition:
─ If the request comes from the Marketing department, a new product family will be developed with options.
─ If the request comes from the customer, a new customer order with customization will be considered.
Post-condition: An instance of the product master is created and a request is sent to design.
Events and interactions flow:
─ The user chooses product type and target market
─ The system returns the list of suitable options
─ The user creates an order for the desired products
─ The system creates a new product, instance of chosen product master
─ The user selects the options
─ The system analyzes the order and identifies suitable modules for each option
─ The system filters the alternatives of modules for each function regarding the interfacing and compatibility criteria
─ The system generates potential alternatives of product architecture
─ The system sends a notification of design request to design office.
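Read as pseudo-code, the interaction flow above amounts to the following Python sketch. It is an illustration only: the catalogue lookup, the compatibility check and the notification call are placeholders passed in by the caller, not actual Audros APIs.

def process_order(product_master, market, selected_options, catalogue, compatible, notify_design):
    # catalogue(option, market) -> list of candidate modules for that option and market
    # compatible(combination)   -> True if interfaces and market constraints are satisfied
    order = {"master": product_master, "market": market, "options": selected_options}
    candidates = {opt: catalogue(opt, market) for opt in selected_options}   # suitable modules

    from itertools import product
    architectures = []
    for combination in product(*candidates.values()):      # one module per selected option
        if compatible(combination):
            architectures.append(list(combination))         # potential product architecture

    notify_design(order, architectures)                      # design request to the design office
    return architectures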
The Graphical User Interface (GUI) of the PLM tool has been designed to provide flexible and user-friendly manipulation of any type of product structure as well as of its different modules and features. The global template of the GUI is the same for all screens, but the content adapts itself depending on the data to be managed and the context of use (Scenario and Use). With this GUI, the user will have a unified interface that helps the designer in the design of a frugal product and its co-evolution with the production process, as follows:
Create and analyze various product architectures at any level, from different points of view (functional, technical solutions, compatibility, manufacturing, etc.)
Promote re-use and adaptation of existing solutions in the design of product architectures. This is based on the search facilities for objects (functions, modules, alternatives ...) in a very simple and quick way.
Manipulate product and production data (create/modify/adapt solutions)
Access easily all related documents like market surveys, customer feedback, etc.
The following figure (cf. Fig. 5) presents the main GUIs of the proposed PLM platform as used in the proposed frugal design process. The flexibility of this platform takes advantage of the use of the "effectivity parameter" describing the link between two PLM objects. The effectivity parameters, displayed in the GUI, are used for data filtering as well as for the representation and manipulation of objects during the configuration process. There is no limit on the definition of effectivity parameters. Examples of effectivity parameters used in the case of frugal product configuration are: Criticality; Customization; Manufacturing plant; Sales country; Product option/variation; and Begin/end date of validity.
Conclusion
A PLM tool configuration for the representation and management of product modular architectures has been introduced so as to respond to the requirements of adapting product-service design and production in a customer-driven context. The focus is the tailoring of mature product solutions to customers' needs in emerging markets. Module features have been defined to help translate the regional customer requirements into product functions and product structure design. They are also used to connect the product design to production planning as well as to other downstream activities.
The modular design approach for the adaptation of European products to emerging markets has been proposed for this objective. The proposed modular product design approach is currently under implementation for supporting the configuration and customization of aircraft in the aeronautic domain and the co-design of production systems tailored to regional markets. Another application, in the domestic appliance industry, concerns the integration of the customer in the definition of product variety through a smart organization of feedback surveys following modular structures, highlighting the preferences of potential customers in a target regional market. Software interoperability and information exchange between the tools involved in these industrial scenarios are ensured using the PLM framework, considered as a hub.
Fig. 1. Example of generic product architecture of Bobcat machine (adapted from [16]).
Fig. 5. Several PLM GUIs as a whole process
Acknowledgement
The presented work was conducted within the project "ProRegio", entitled "customer-driven design of product-services and production networks to adapt to regional market requirements", funded by the European Union's Horizon 2020 research and innovation program, grant agreement n° 636966. | 24,781 | [
"18529",
"1030649",
"1030650",
"174660"
] | [
"473973",
"473973",
"483934",
"473973"
] |
01764172 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764172/file/462132_1_En_59_Chapter.pdf | D Morales-Palma
I Eguía
M Oliva
F Mas
C Vallellano
Managing maturity states in a collaborative platform for the iDMU of aeronautical assembly lines
Keywords: Product and process maturity, Industrial Digital Mock-Up (iDMU), Digital manufacturing, Digital factory, PLM
Collaborative Engineering aims to integrate both functional and industrial design. This goal requires integrating the design processes, the design teams and using a single common software platform to hold all the stakeholders contributions. Airbus company coined the concept of the industrial Digital Mock Up (iDMU) as the necessary unique deliverable to perform the design process with a unique team. Previous virtual manufacturing projects confirmed the potential of the iDMU to improve the industrial design process in a collaborative engineering environment. This paper presents the methodology and preliminary results for the management of the maturity states of the iDMU with all product, process and resource information associated with the assembly of an aeronautical component. The methodology aims to evaluate the suitability of a PLM platform to implement the iDMU in the creation of a control mechanism that allows a collaborative work.
Introduction
Reducing product development time, costs and quality problems can be achieved through effective collaboration across distributed and multidisciplinary design teams. This collaboration requires a computational framework which effectively enables the capture, representation, retrieval and reuse of product knowledge. Product Lifecycle Management (PLM) refers to this enabling framework, which helps connect, organize, control, manage, track, consolidate and centralize all the mission-critical information that affects a product and the associated processes and resources. PLM offers a process to streamline collaboration and communication between product stakeholders, engineering, design, manufacturing, quality and other key disciplines.
Collaboration between product and process design teams has the following advantages for the company: reduction of time required to perform tasks; improvement of the ability to solve complex problems; increase of the ability to generate creative alternatives; discussion of each alternative to select as viable and to make decisions; communication improvement; learning; personal satisfaction; and encouraging innovation [START_REF] Alonso | Enterprise Collaboration Maturity Model (ECMM): Preliminary Definition and Future Challenges[END_REF]. However, collaboration processes need to be explicitly designed and managed to maximize the positive results of such an effort.
Group interaction and cooperation requires four aspects to be considered: people have to exchange information (communication), organize the work (coordination), operate together in a collective workspace (group memory) and be informed about what is happening and get the necessary information (awareness).
Maturity models have been designed to assess the maturity of a selected domain based on a comprehensive set of criteria [START_REF] Bruin | Understanding the main phases of developing a maturity assessment model[END_REF]. These models have progressive maturity levels, allowing the organization to plan how to reach higher maturity levels and to evaluate their outcomes on achieving that.
A maturity model is a framework that describes, for a specific area of interest, a set of levels of sophistication at which activities in this area can be carried out [START_REF] Alonso | Enterprise Collaboration Maturity Model (ECMM): Preliminary Definition and Future Challenges[END_REF]. Essentially, maturity models can be used: to evaluate and compare organizations' current situation, identifying opportunities for optimization; to establish goals and recommend actions for increasing the capability of a specific area within an organization; and as an instrument for controlling and measuring the success of an action [START_REF] Hain | Developing a Situational Maturity Model for Collaboration (SiMMCo) -Measuring Organizational Readiness[END_REF].
The product lifecycle mainly comprises several phases, e.g. research, development, production and operation/product support [START_REF] Wellsandt | A survey of product lifecycle models: Towards complex products and service offers[END_REF]. The development phase comprises the sub-phases shown in Fig. 1: feasibility, concept, definition, development and series, which involve improvements and modifications. Product collaborative design encompasses all the processes before the production phase, and the product information management strategy achieves internal information sharing and collaborative design by integrating data and knowledge throughout the whole product lifecycle and managing the completeness of the information in each stage of product design. Research on product maturity is mainly about project management maturity, which is used to evaluate and improve the project management capabilities of enterprises. A smaller part of the research has discussed the concept of product maturity, and the number of works devoted to studying the maturity of the related processes and resources is insignificant. Wang et al. [START_REF] Wang | Research on Space Product Maturity and Application[END_REF] proposed the concept of space product maturity and established a management model of product maturity, but it lacks research on how product maturity promotes the product development process. Tao and Fan [START_REF] Tao | Application of Maturity in Development of Aircraft Integrated Process[END_REF] discussed the concept of maturity and a management control method in the integration process, but the division of the maturity levels is not intuitive, and they discussed little about the application of product maturity in a collaborative R&D platform. Chen and Liu [START_REF] Chen | Maturity Management Strategy for Product Collaborative Design[END_REF] presented the application of a product maturity strategy for collaborative design on the collaborative development platform Teamcenter to verify the effectiveness and the controllability of the strategy. Wuest et al. [START_REF] Wuest | Application of the stage gate model in production supporting quality management[END_REF] adapted the stage gate model, a well-established methodology for product and software development, to the production domain and indicated that it may provide valuable support for product and process quality improvement, although success strongly depends on the right adaptation.
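As an illustration of what such a maturity management model can look like once encoded, the Python sketch below treats maturity as a small state machine in which an object may only be promoted when its gate criteria are met. The state names follow the development sub-phases of Fig. 1, but the transition rule itself is an assumption made for illustration, not the Airbus definition.

MATURITY_STATES = ["feasibility", "concept", "definition", "development", "series"]

class MaturityError(Exception):
    pass

def promote(current_state: str, gate_criteria_met: bool) -> str:
    # Promotion is only allowed to the next state, and only if the gate review passes.
    if not gate_criteria_met:
        raise MaturityError(f"gate criteria not met in state '{current_state}'")
    idx = MATURITY_STATES.index(current_state)
    if idx == len(MATURITY_STATES) - 1:
        return current_state            # already at the most mature state
    return MATURITY_STATES[idx + 1]

# Example: a design object passing its definition gate.
state = "definition"
state = promote(state, gate_criteria_met=True)   # -> "development"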
The main objective of this paper is the design of a maturity management model for controlling the functional and industrial design phase of an aeronautical assembly line in the Airbus company (Fig. 1); it also explores the development of this model in 3DExperience, a collaborative software platform by Dassault Systèmes [9].
Antecedents and iDMU concept
The industrial Digital Mock-Up (iDMU) is the Airbus proposal to perform the design process with a unique team and a unique deliverable. The iDMU is defined by Airbus to facilitate the integration of the aircraft development processes on a common platform throughout their whole service life. It is a way to help the functional and the industrial designs evolve jointly and collaboratively. An iDMU gathers all the product, processes and resources information to model and validate a virtual assembly line, and finally to generate the shopfloor documentation needed to execute the manufacturing processes [START_REF] Menéndez | Virtual verification of the AIRBUS A400M final assembly line industrialization[END_REF][START_REF] Mas | Collaborative Engineering: An Airbus Case Study[END_REF].
Airbus promoted Collaborative Engineering in the research project "Advanced Aeronautical Solutions Using PLM Processes and Tools" (CALIPSOneo) by implementing the iDMU concept [START_REF] Mas | Collaborative Engineering Paradigm Applied to the Aerospace Industry[END_REF][START_REF] Mas | iDMU as the Collaborative Engineering engine: Research experiences in Airbus[END_REF][START_REF] Mas | PLM Based Approach to the Industrialization of Aeronautical Assemblies[END_REF]. The iDMU implementation was made for the industrialization of the A320neo Fan Cowl, a mid-size aerostructure. It was built by customizing CATIA/DELMIA V5 [9] by means of the PPR model concept. The PPR model of this commercial software provided a generic data structure that had to be adapted to the products, processes and resources of each particular implementation. In this case, a specific data structure was defined to support the Airbus products, the industrial design process, the process structure nodes, the resources structure nodes and their associated technological information, 3D geometry and metadata.
The process followed by Airbus to execute a pilot implementation of the iDMU is briefly described as follows. The previously existing Product structure was used and an ad-hoc application was developed that periodically updated all the modifications released by functional design. The Process and Resources structures were populated directly in the PPR context. The Process structure comprised four levels represented by four concepts: assembly line, station, assembly operation and task. Each concept has its corresponding constraints (precedence, hierarchy), its attributes and its allocation of products to be assembled and resources to be used. Once the PPR structures were defined, the system calculated the product digital mock-up and the resources digital mock-up that relate to each process node. As a result, the designer created simulations in the 3D graphical environment to analyse and validate the defined manufacturing solution. This validation of the process, product and resource design, by means of Virtual Manufacturing utilities in a common context, is a key feature in the Collaborative Engineering deployment.
The iDMU supports the collaborative approach through three main elements. First, it allows sharing different design perspectives, revealing solutions that, while valid for one perspective (e.g. resources design), cause problems in other perspectives (e.g. industrialization design), and solving such issues. Second, it enables checking and validating a high number of alternatives, allowing improvement of the harmonization and optimization of the design as a whole. Third, information contained in the iDMU can be reused by other software systems used in later stages of the lifecycle, facilitating integration, avoiding problems with the translation of models into intermediate formats, and making it easier to use new technologies such as augmented reality.
The CALIPSOneo project [START_REF] Mas | Collaborative Engineering Paradigm Applied to the Aerospace Industry[END_REF][START_REF] Mas | iDMU as the Collaborative Engineering engine: Research experiences in Airbus[END_REF][START_REF] Mas | PLM Based Approach to the Industrialization of Aeronautical Assemblies[END_REF], with a scope limited to the A320neo fan cowl, confirmed that the iDMU provides a suitable platform to develop the sociotechnical process needed by Collaborative Engineering. However, the project also revealed that the general functionalities provided by the adopted commercial PLM solution required substantial research and development work to implement the data structures and functions needed to support the iDMU.
An important factor in the implementation of an iDMU is the need for a PLM tool capable of coordinating the workflow of all participants by means of the definition and control of the lifecycle of the allocated elements of the PPR structure, i.e. of managing their maturity states. At present, this issue is being addressed in the research project "Value Chain: from iDMU to Lean documentation for assembly" (ARIADNE).
Methodology
As said before, one of the studies carried out within the scope of the ARIADNE project was the analysis of the capabilities that a PLM tool requires to manage the maturity states of the iDMU. Such a PLM tool aims at the following objectives: to define independent and different maturity state sets for Product, Process and Resource revisions; to define precedence constraints between the maturity states of a Process revision and the maturity states of its related Products and Resources; to define, for each Process revision maturity state, other conditions (e.g. attribute values) that must be met before a Process revision evolves to that maturity state; to define, for each Process revision maturity state, that some process data or relations are not modifiable from this maturity state onwards; to display online, in the process revision iDMU, the Products and Resources that have evolved through maturity states since the last time it was vaulted; and to display online, in the process revision iDMU, the impact of the evolved Products and Resources and how easily these issues can be fixed.
In order to prove the capabilities of a new PLM tool to meet these objectives, a simple lifecycle model is proposed. The model has only three possible maturity states for every element of the PPR structure: In Work, Frozen and Released. However, the importance of the proposed model lies in a set of constraints that prevent the promotion between maturity states, as described below. This simple model aims to be a preliminary test to evaluate a new PLM tool, so that it can be improved and extended with new states, relationships, constraints, rules, etc.
The In Work state is used for a new version of a product, process or resource element in the PPR structure. In Work data are fully modifiable and can be switched to Frozen by the owner, or to Released by the project leader. Frozen is an intermediate state between In Work and Released. It can be used, for example, for data waiting for approval. Frozen data are partially modifiable (for minor version changes) and can be switched back and forth between In Work and Frozen by the owner, or to Released by the project leader. Released is the final state of a PPR element, e.g. when a product is ready for production, a process is accepted for industrialization, or a resource is fully configured for its use. Released data cannot be deleted and cannot be switched back to previous states. Fig. 2 shows a schema of the proposed model. At the beginning of the lifecycle, since Design Engineering starts the product design, usually Manufacturing Engineering can begin to plan the process, set up the layout and define the necessary resources. In this situation, all product, process and resource elements in the PPR structure are In Work. The collaborative environment must allow the visualization and query of information under development by the different actors of the system, based on roles and permissions, so that it helps to detect design errors and make the right decisions.
The new PLM tool must provide a set of rules or constraints that allow controlling and alerting the designer about non-coherent situations. Fig. 2 schematically presents some constraints to promote a PPR element. For instance, it is not possible to assign to a process node a maturity state of Frozen until the related product node has a maturity state of Released and the allocated resource has a maturity state of Frozen. In a similar way, to promote a process to Released, the allocated resource must be in Released. On the other hand, the resource element can only reach the maturity state of Released when the process element has been Frozen previously.
In addition to defining constraints between elements of different types (product, process and resource), it is necessary to establish rules between elements of the same type to control the change of maturity states of their interconnected elements. For instance, the following constraint inside the Product structure could be established: the designer of a product consisting of several parts can change the state of the product element to Frozen/Released only when all its parts already have that same state, so that a part still unfinished (In Work) alerts the designer that the product cannot be promoted yet.
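To make the promotion rules concrete, the sketch below models the three maturity states and a few of the constraints described above as executable checks. It is only a minimal illustration of the proposed lifecycle model, not Airbus or 3DExperience code; the element kinds, link names and helper structure are assumptions introduced for the example.

```python
from enum import IntEnum

class Maturity(IntEnum):
    IN_WORK = 0
    FROZEN = 1
    RELEASED = 2

class PPRElement:
    """A node of the PPR structure with a maturity state and typed links to related nodes."""
    def __init__(self, name, kind):
        self.name, self.kind = name, kind
        self.state = Maturity.IN_WORK
        self.links = {}        # e.g. {"product": <PPRElement>, "resource": <PPRElement>}
        self.parts = []        # sub-parts of a product element

def can_promote(elem, target):
    """Return (allowed, reason) for a requested promotion, mirroring the constraints of Fig. 2."""
    if elem.state == Maturity.RELEASED:
        return False, "Released elements cannot change state"
    if elem.kind == "process" and target == Maturity.FROZEN:
        if elem.links["product"].state != Maturity.RELEASED:
            return False, "related product must be Released"
        if elem.links["resource"].state < Maturity.FROZEN:
            return False, "allocated resource must be at least Frozen"
    if elem.kind == "process" and target == Maturity.RELEASED:
        if elem.links["resource"].state != Maturity.RELEASED:
            return False, "allocated resource must be Released"
    if elem.kind == "resource" and target == Maturity.RELEASED:
        if elem.links["process"].state < Maturity.FROZEN:
            return False, "related process must be Frozen first"
    if elem.kind == "product" and target >= Maturity.FROZEN:
        if any(part.state < target for part in elem.parts):
            return False, "all parts must already have that state"
    return True, "ok"
```

In such a scheme, a promotion requested by a role (e.g. the Process Planner) would first pass through can_promote and only then change the stored state, which is essentially the behaviour expected from the PLM tool.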
Practical application
The proposed model for managing the maturity states of an iDMU was implemented and tested in a commercial PLM software, within the framework of the ARIADNE project.
The implementation was carried out with the 3DExperience software solution by Dassault Systèmes. The PPR structure in 3DExperience differs slightly from CATIA/DELMIA V5, so the process of building the iDMU is different from those developed in previous projects. A significant difference is that the previous 3-element PPR structure is replaced by a 4-element structure, as represented schematically in Fig. 3. Product: it presents the functional zone breakdown in an engineering-oriented organization. It is modelled by Design Engineering to define the functional view for structure and system installation. Process: it is focused on modelling the process plan from a functional point of view. It is indeed a product structure composed of a cascade of components identified by part numbers that presents how the product is built and assembled. Thus, both product and process elements of the PPR structure are directly correlated.
System: it defines the workflow of operations. It contains a set of systems/operations that corresponds to the steps necessary to correlate with the Process structure, and the information necessary to perform tasks such as balancing the assembly lines. Resource: it represents the layout design for a manufacturing plant. Resources can be classified as working (e.g. robot, worker, conveyor), non-working (e.g. tool device) or organizational (e.g. station, line). The required resources are attached to operations in the System structure, as shown in Fig. 3.
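As an illustration of how the four structures and their links could be held in memory, independently of any specific tool, the following sketch uses plain Python dataclasses. The field names and the way operations reference processes and resources are assumptions for the example and do not reproduce the 3DExperience data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProductNode:
    name: str
    geometry_ref: Optional[str] = None               # e.g. an imported CATPart file
    children: List["ProductNode"] = field(default_factory=list)

@dataclass
class ProcessNode:
    name: str
    builds: List[ProductNode] = field(default_factory=list)    # what this step assembles
    children: List["ProcessNode"] = field(default_factory=list)

@dataclass
class ResourceNode:
    name: str
    category: str = "working"                        # working / non-working / organizational

@dataclass
class OperationNode:                                 # element of the System structure
    name: str
    process: Optional[ProcessNode] = None            # correlation with the Process structure
    resources: List[ResourceNode] = field(default_factory=list)
    predecessors: List["OperationNode"] = field(default_factory=list)
```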
The adopted PLM software integrates a default lifecycle model into any created object, which controls the various transitions in the life of the object. This model includes elements such as user roles, permissions, states and available state changes. To facilitate collaborative work, 3DExperience also provides a lifecycle model to manage Engineering Changes, which has links to PPR objects, and a transfer-ownership functionality that can be used to pass an object along to another authorized user to promote it. Both the PPR and Engineering Changes lifecycle models can be customized. These characteristics made 3DExperience an adequate collaborative platform for the purpose of this work. The objectives that a PLM tool must satisfy for managing the maturity states, described in the previous section, were analysed to fit the 4-element PPR structure of the 3DExperience software. Accordingly, the proposed model was redefined as shown in Fig. 4(a). As can be seen, the set of constraints for the System lifecycle is equivalent to the previous set of constraints for the Process lifecycle, whereas Process elements are the bridge between Products and Systems.
A series of roles has been defined (see Fig. 4(a)) to implement the proposed model of maturity states in 3DExperience, such as the Project Leader (PL) and a different type of user to design each of the PPR structures: a Design Engineer (DE), a Process Planner (PP), a Manufacturing Engineer (ME) and a Resources Designer (RD). Each system user is responsible for designing and promoting/demoting each node of their structure to the three possible states, as shown in Fig. 4(a). The PL coordinates all maturity state changes: he checks that there are no inconsistencies and gives the other users permission to make the changes.
Designers have several possibilities for building the iDMU using the 3DExperience graphical interface. Briefly, the maturity state is stored as an attribute of each PPR element, so it is accessible from the query tool "Properties". The software also provides the "Team Maturity" utility to display information about the maturity states in the graphical environment. This utility displays a coloured flag on each element of the model tree that indicates its maturity state; however, it applies only to Product and Resource elements, i.e. elements that have associated geometry. Another utility allows displaying graphical information about the related elements of an allocated iDMU element. Both graphical utilities, for maturity states and related elements, were used to search and filter information before changing an object state. To promote or demote the maturity state of an iDMU element, the "Change Maturity" utility presents different fields with the available target states and related information according to the lifecycle model, roles and permissions.
The Airbus A400M empennage (about 34000 parts, see Fig. 4(b)) and its assembly processes were selected to develop the iDMU in 3DExperience. The empennage model developed in CATIA V5 was used as the Product structure. Process, System and Resource structures were modelled from scratch. Different use cases were evaluated by choosing small and more manageable parts of the iDMU to change their maturity states in the collaborative platform. The following is a summary of the implementation process carried out. An example is shown in Fig. 5. At the beginning of the lifecycle, the PL authorized all other system actors to work together in the iDMU at the same time in the collaborative platform (label a in Fig. 5). The main PPR structures were created and scope links were established between them. In this situation, all PPR nodes were In Work while the iDMU was designed in a collaborative and coordinated way.
One of the first state changes in the iDMU is made by the DE when promoting a component or sub-product to Frozen (b). In this situation, only minor design changes can be made to the frozen component, which will have no impact on the rest of the iDMU (including other components of the product). Demoting the component to an In Work state (c) would indicate that major changes are required as a result of the current design state in other areas of the iDMU. In general, the promotion to Released of every PPR structure will be carried out in an advanced state of the whole iDMU. This means that its design has been considered stable and that no significant changes will occur that affect other parts of the iDMU.
Maturity state changes in the Process structure are conditioned by the state of related components in the Product structure. Thus, before promoting a Process element (d), the PP must check the status of related components with the aforementioned 3DExperience utilities to search and analyse the related elements and their maturity states. If the related product is Frozen/Released, the PP can request authorization from the PL to promote the Process element.
Another early maturity state change in the iDMU is that of resources. The RD promotes a resource to Frozen (e) or demotes it to In Work (f) following the same guidelines as the DE with the products. In contrast, the promotion of a resource to Released (g) can only be authorized by the PL when the related assembly system is Frozen, indicating that the assembly line has been designed except for possible minor changes that would not affect the definition of the resources.
The ME is the last actor to promote the state of his work in the iDMU: the design of the assembly system/line. In order to freeze his work (h), the ME needs to know in advance the final design of the product assembly process and also the definition of the necessary resources. Any changes in the product or process structures, even minor ones, could have a relevant impact on the definition of the assembly line. Therefore, the ME must previously verify that related assembly processes are Released and required resources are Frozen. Since resource nodes are linked to the System structure through operation nodes, the ME extensively uses the 3DExperience utilities to trace all affected nodes and check their maturity states. As discussed above, the promotion to Released of all PPR structures occurs at an advanced development stage of the iDMU, the last two steps being those relating to the Resource and System structures.
Conclusions
This paper presents the methodology and preliminary results for the management, in a collaborative environment, of the maturity states of PPR elements with all product, process and resource information associated with the assembly of an aeronautical component. The methodology aims to evaluate the suitability of PLM tools to implement the Airbus methodology in the creation of a control mechanism that allows collaborative work. The proposed model shows in a simple way the importance of the flow of information among the different participants of a unique team to build an iDMU as the unique deliverable in a collaborative platform. An outstanding feature of the lifecycle model is its ability to authorize or restrict the promotion of a product, process or resource element depending on the states of the related elements. Different use cases with coherent and non-coherent situations have been successfully analysed using 3DExperience to implement an iDMU for the Airbus A400M empennage.
In this work, the change management of maturity states has been coordinated by a Project Leader. The next step will be to customize 3DExperience to automate the maturity state changes, so that the system is responsible for evaluating the information of the related elements and for allowing or blocking the promotion of an iDMU element by the designer.
Fig. 1. Airbus product lifecycle and milestones development.
Fig. 2. Proposed simple model for the lifecycle of the PPR structure.
Fig. 3. Schema of implementation of the Airbus iDMU concept in 3DExperience.
Fig. 4. (a) Extension of the proposed model and (b) Airbus A400M empennage.
Fig. 5. An example of implementation of the proposed simple model.
Acknowledgements
The authors wish to thank the Andalusian Regional Government and the Spanish Government for their financial support through the research project "Value Chain: from iDMU to Lean documentation for assembly" (ARIADNE). The work of master thesis students, Gonzalo Monguió and Andrés Soto, is also greatly acknowledged. | 27,204 | [
"1030673",
"1030674",
"1030675",
"1000803",
"1030676"
] | [
"254694",
"254694",
"483884",
"483884",
"254694"
] |
01764176 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764176/file/462132_1_En_10_Chapter.pdf | Manuel Oliva
email: [email protected]
Jesús Racero
Domingo Morales-Palma
Carmelo Del Valle
email: [email protected]
Fernando Mas
email: [email protected]
Jesus Racero
Carmelo Del Valle
Value Chain: From iDMU to Shopfloor Documentation of Aeronautical Assemblies
Keywords: PLM, iDMU, interoperability, Collaborative Engineering, assembly 2 ARIADNE project
Introduction
PLM systems integrate all phases in the product development. The full product lifecycle, from the initial idea to the end-of-life, generates a lot of valuable information related to the product [START_REF] Ameri | Product lifecycle management: closing the knowledge loops[END_REF].
In the aerospace industry, the long lifecycle (about 50 years), the number of parts (over 700,000 on average in a short-range aircraft) and the modifications make the aircraft a highly complex product. Such complexity derives both from the complexity of the product and from the amount of resources and multidisciplinary work teams involved.
Multidisciplinary complexity is found in the interaction between functional and industrial designers, which brings inefficiencies in development time, errors, etc. Research studies propose the need to evolve from the concurrent way of working to a more efficient one, with the objective of delivering faster, better and cheaper products [START_REF] Pardessus | Concurrent Engineering Development and Practices for aircraft design at Airbus[END_REF], [START_REF] Haas | Concurrent engineering at Airbus -a case study[END_REF], [START_REF] Mas | Concurrent conceptual design of aero-structure assembly lines[END_REF]. One proposal to meet this challenge is the Collaborative Engineering concept [START_REF] Lu | A scientific foundation of Collaborative Engineering[END_REF], [START_REF] Morate | Collaborative Engineering Paradigm Applied to the Aerospace Industry[END_REF].
Collaborative Engineering involves a lot of changes in terms of organization, teams, relationships, skills, methods, procedures, standards, processes, tools, and interfaces: it is a business transformation process. The main deliverable of a collaborative team is the iDMU [START_REF] Mas | iDMU as the Collaborative Engineering engine: Research experiences in Airbus[END_REF]. The iDMU concept is the approach defined by Airbus to facilitate the integration of the aircraft development processes on a common platform throughout their whole service life. An iDMU gathers all the product, processes and resources data, both geometrical and technological, to model a virtual assembly line. An iDMU provides a single environment in which the assembly line industrial design is defined and validated.
To bridge the gap between the complexity of product information and the different PLM software tools that manage it, interoperability has emerged as a must to improve the use of existing data stored in different formats and systems [START_REF] Penciuc | Towards a PLM interoperability for a collaborative design support system[END_REF]. The foundation of interoperability is Model Based Engineering (MBE), as a starting point for organizing a formal way of communicating and building knowledge [START_REF] Liao | Semantic annotations for semantic interoperability in a product lifecycle management context[END_REF] from data and information.
The development of solutions to facilitate the implementation of both concurrent engineering and Collaborative Engineering in the aerospace industry has been the objective of several projects since the end of the 1990s. Two of the most relevant ones are the European projects ENHANCE [START_REF] Braudel | Overall Presentation of the ENHANCE Project[END_REF], [START_REF]VIVACE Project[END_REF] and VIVACE [START_REF] Van | Engineering Data Management for extended enterprise[END_REF].
In the last decade, different research projects have been conducted towards a complete integration of the iDMU and all the elements in the different stages of the lifecycle (from design to manufacturing). The CALIPSOneo project [START_REF] Mas | PLM based approach to the industrialization of aeronautical assemblies[END_REF] was launched by Airbus to promote Collaborative Engineering. It implements the iDMU as a way to help the functional and the industrial designs evolve jointly and collaboratively. The project synchronizes, integrates and configures different software applications that promote the harmonization of a common set of PLM and CAD tools.
The EOLO project (Factories of the Future: Industrial Development) was developed as an initiative to achieve a better integration between the information created in the industrialization phases and the information created in the operation and maintenance phases.
The ARIADNE project emerges as an evolution of both the CALIPSOneo and EOLO projects, incorporating the integrated management of the iDMU lifecycle (product, processes and resources), Collaborative Engineering and vendor-independent interoperability between software systems. These characteristics will provide an improvement of data integration, of the knowledge base and of the quality of the final product. ARIADNE is organized in several work packages. MINOS addresses the interoperability between CATIA v5, the PLM platform currently running in most of the aerospace companies, and 3DExperience. An analysis of the 3DExperience platform is being performed in ARIADNE with the objective of checking the main functionalities needed for industrial design that represent an improvement over CATIA v5. It is not an exhaustive analysis of all functionalities of 3DExperience, but a study of the characteristics provided by 3DExperience that cover the main requirements of manufacturing engineering activities for the industrialization of an aerospace assembly product. HELIOS (New shopfloor assembly documentation models) proposes research on a solution to extract information from an iDMU independently of the software provider. The conceptual solution is based on developing the models and transformations needed to explode the iDMU for any other external system. Currently, any system that needs to exploit the iDMU would have to develop its own interfaces. In case the iDMU is migrated to a different PLM, those interfaces must be changed as well. To help with those inefficiencies and to be independent from any existing PLM, HELIOS will generate standardized software code (EXPRESS-i) that any external system can use to communicate and obtain the required information from the iDMU. ORION (Laser authoring shop floor documentation) aims to develop a system to exploit the assembly process information contained in the iDMU with Augmented Reality (AR) techniques using laser projection technology. This system will get from the iDMU any data needed for the assembly, verification or maintenance process. ORION is based on the SAMBAlaser project [START_REF] Serván | Augmented Reality Using Laser Projection for the Airbus A400M Wing Assembly[END_REF], an 'AR by laser' technology developed by Airbus. ORION will analyze new ways for laser programming besides numerical control and will provide a 3D simulation tool. It will also propose a data model to integrate the iDMU with the AR laser system and to facilitate laser programming and execution.
ARIADNE project functional architecture
The ARIADNE architecture is a consequence of the conclusions and the proposed future work of the CALIPSOneo project in 2013. The CALIPSOneo architecture for a collaborative environment was CATIA v5 in conjunction with DPE (DELMIA Process Engineering) to hold the process definition in a database (also called Manufacturing Hub by Dassault Systèmes). The CALIPSOneo architecture, although still in production in Airbus and available in the market, is not an architecture ready to support the requirements of Industry 4.0 and is quite out of phase with today's technology in terms of connection and communication.
To develop MINOS, the decision on the supporting tool was 3DExperience, a natural evolution of CATIA v5. The data used in MINOS, the Airbus military transport aircraft A400M empennage shown in Figure 3a, are in CATIA v5 format. To keep the 3DExperience infrastructure simple, and thanks to the relatively low volume of data of the A400M empennage, a single virtual machine with all the required servers was deployed for the project.
For the interoperability between CATIA v5 and 3DExperience, the CATIA v5 input data are stored in file-based folders containing the geometry in CATPart files and the product structure in CATProduct files, as shown in Figure 5. FBDI (File Based Data Import) is the process provided by Dassault Systèmes that reads and/or imports information (geometry and product structure) into 3DExperience. The 'Import as Native' option selected in FBDI reads the CATIA v5 data as a reference, meaning that a 3D representation is created in 3DExperience as in CATIA v5, but it is not allowed to be modified. Resources and assembly processes will be designed in 3DExperience based on the previously imported product (in CATIA v5). For the interoperability analysis, the wing tip of the Airbus C295 (a medium-range military transport aircraft) was chosen.
Developments in HELIOS and ORION will also be based on CATIA v5 data availability.
ARIADNE intends to use only off-the-shelf functionalities offered natively by 3DExperience, with no additional development.
Implementation and results
Collaborative Engineering, interoperability and iDMU exploitation are the targets in the different work packages of ARIADNE. The implementation and results are described in this section.
Collaborative Engineering
Collaborative Engineering requires an integrated 3D environment where functional and industrial engineers can work together, influencing each other. The main driver of the Collaborative Engineering method is the construction of the iDMU. ARIADNE is focused on the collaboration between the functional design and industrial design teams. ARIADNE will check whether 3DExperience provides such an environment to build the iDMU, where Collaborative Engineering can be accomplished.
To analyze the 3DExperience collaborative environment, a few use cases were defined and tested with the Airbus A400M empennage product represented in Figure 2a.
One of the bases for integrating the information in a PLM is to be able to hold the different ways or views (As-Design, As-Planned, As-Prepared) [START_REF] Mas | Proposal for the conceptual design of aeronautical final assembly lines based on the Industrial Digital Mock-Up concept[END_REF], shown in Figure 2b, of defining the product in Airbus. Keeping these views connected is basic to Collaborative Engineering [START_REF] Mas | iDMU as the Collaborative Engineering engine: Research experiences in Airbus[END_REF]. In the work performed it was possible to build the As-Design view. Then, the As-Planned view was built from the As-Design view while sharing the same 3D geometry for each of the structures. This is represented in Figure 3b. The third structure created in ARIADNE, which is the main one used for the industrialization of a product, is the As-Prepared. This structure is also a product structure, rearranged as a result of the different assembly processes needed to build the product. The As-Prepared tree organization shown in Figure 4a is a consequence of the network of assembly processes. To build such a network, precedences between assembly processes and operations must be assigned, as in Figure 4b. Tools like the Gantt representation in Figure 4c also help in deciding the precedences based on constraints (resources and times). The additional functionality for balancing constraints is too basic in 3DExperience for the Airbus product complexity, and an optimization tool is not offered by 3DExperience. Additional development would be needed to cover these last two functionalities [START_REF] Rios | A review of the A400m final assembly line balancing methodology[END_REF].
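The precedence network behind the As-Prepared view and its Gantt representation can be emulated with a simple scheduling pass, as in the sketch below. The operation names, durations and the earliest-start rule are assumptions made for illustration; as noted above, real line balancing would also need resource constraints and optimization, which this sketch ignores.

```python
# Hypothetical assembly operations: name -> (duration, list of predecessor operations)
operations = {
    "install_spar": (4, []),
    "install_ribs": (6, ["install_spar"]),
    "drill_skin":   (3, ["install_spar"]),
    "join_skin":    (5, ["install_ribs", "drill_skin"]),
}

def earliest_schedule(ops):
    """Earliest start/finish of each operation respecting precedences (no resource limits)."""
    finish = {}
    def finish_time(name):
        if name not in finish:
            duration, preds = ops[name]
            start = max((finish_time(p) for p in preds), default=0)
            finish[name] = start + duration
        return finish[name]
    return {name: (finish_time(name) - ops[name][0], finish_time(name)) for name in ops}

# Print the rows of a minimal Gantt chart ordered by start time.
for op, (start, end) in sorted(earliest_schedule(operations).items(), key=lambda kv: kv[1]):
    print(f"{op:15s} {start:2d} -> {end:2d}")
```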
The iDMU was built by assigning products and resources to each operation, together with the precedences. With such information in the iDMU, the design-in-context use case was performed. The design of an assembly process or a resource requires the representation of the product and the industrial environment based on the operations previously performed. It was possible to calculate and represent this context in 3DExperience.
The reconciliation between As-Planned and As-Prepared was tested to make sure that every product was assigned to a process. This functionality is also shown in the process tree structure with colour-coded nodes. ARIADNE analyzed the capabilities to check how functional designers and industrial designers could carry out their activities influencing each other. For this, a mechanism to follow the evolution of the maturity states of the products, processes and resources was proposed [START_REF] Morales-Palma | Managing maturity states in a collaborative platform for the iDMU of aeronautical assembly lines[END_REF]. This mechanism is intended to foster the interaction between both design areas.
3.3 Interoperability between CATIA v5 and 3DExperience
Recently developed Airbus aircraft (A380, A350 and A400M) have been designed in CATIA v5. Migrating the complete product design of an aircraft requires a high effort in resources and cost. Finding a solution in which the product design can be kept in CATIA v5, while the downstream product lifecycle uses a more adequate environment to cover its activities, became a target of the MINOS work package.
MINOS analyzed the degree of interoperability between 3DExperience and CATIA v5. Interoperability in this use case is understood as the set of characteristics required to develop the industrialization activities performed by manufacturing engineering in 3DExperience without affecting the product design activities (functional design) of the design office performed in CATIA v5. Initially, a reading of the product design (product structure and geometry) in CATIA v5 was carried out in 3DExperience, step 1 in Figure 5. Checking the result of this work in 3DExperience demonstrated a successful import of the information for the product structure and for the 3D geometry. Then, a modification was introduced in the CATIA v5 product, step 2 in Figure 5. The FBDI process detected the change in the product and propagated it to 3DExperience, step 3 in Figure 5. 3DExperience also sent a warning to update the product structure with the modified product. An impact analysis on the processes and resources related to the modified product was performed based on the functionalities provided by 3DExperience.
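The behaviour exercised in this use case (detecting that an imported product has changed and flagging the related process and resource nodes) can be described generically as below. This is not the FBDI implementation; the revision stamps and the scope-link dictionary are assumptions used only to illustrate the impact analysis.

```python
def detect_changes(old_revisions, new_revisions):
    """Compare revision stamps of imported product nodes and return the modified ones."""
    return [part for part, rev in new_revisions.items()
            if old_revisions.get(part) != rev]

def impacted_elements(changed_parts, scope_links):
    """scope_links maps a product node to the process/resource nodes that use it."""
    impacted = set()
    for part in changed_parts:
        impacted.update(scope_links.get(part, []))
    return impacted

# Illustrative data: one part was modified in CATIA v5 and re-imported.
old = {"wing_tip_rib_1": "rev_A", "wing_tip_skin": "rev_A"}
new = {"wing_tip_rib_1": "rev_B", "wing_tip_skin": "rev_A"}
links = {"wing_tip_rib_1": ["process:drill_rib", "resource:drill_jig"]}

changed = detect_changes(old, new)
print(changed, "->", impacted_elements(changed, links))
```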
3.4 Interoperability and iDMU exploitation
Due to the increasing added value that the iDMU provides, it becomes an important asset for a company. Once assembly processes are designed and stored in the iDMU, the information needed by production lines to perform the tasks can be extracted with an automatic application system.
As the current production environment in Airbus is CATIA v5, extracting information from the iDMU is constrained to that environment. HELIOS has developed an interoperable framework based on a set of transformations to exploit the iDMU independently from the PLM vendor, and STEP is the selected tool. The use case HELIOS is based on is the ORION UML (Unified Modeling Language) model. The ORION UML model is transformed (UML2EXPRESS) into a schema defined in a standard language such as EXPRESS [START_REF]Industrial automation systems and integration -Product data representation and exchange[END_REF]. The schema is the input to any PLM (CATIA v5 or 3DExperience) to extract the information from the iDMU with a second set of transformations (PLM2EXPRESS). This last transformation generates the instantiated code (EXPRESS-i) with the required information. This standardized code will be the same input to the different laser vendors.
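The second set of transformations essentially serializes iDMU objects as an instance population of the EXPRESS schema. The sketch below emits an EXPRESS-i flavoured listing from an in-memory description; the entity names and attributes are invented for the example and do not reproduce the actual HELIOS or ORION schema.

```python
def to_express_i(instances):
    """Emit a simple EXPRESS-i style instance population: #id = ENTITY(attr, ...);"""
    lines = []
    for ident, (entity, attrs) in enumerate(instances, start=1):
        rendered = ", ".join(f"'{a}'" if isinstance(a, str) else str(a) for a in attrs)
        lines.append(f"#{ident} = {entity.upper()}({rendered});")
    return "\n".join(lines)

# Hypothetical iDMU content extracted from the PLM for one assembly operation.
idmu_objects = [
    ("assembly_operation", ["drill_rib_1", 3.5]),           # name, duration in hours
    ("work_instruction",   ["project drilling pattern"]),
    ("laser_projection",   ["pattern_rib_1.ply"]),
]
print(to_express_i(idmu_objects))
```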
Currently, in Airbus, SAMBAlaser [START_REF] Serván | Augmented Reality Using Laser Projection for the Airbus A400M Wing Assembly[END_REF] is in production for the projection of work instructions. To enhance the SAMBAlaser functionalities, the ORION work package has developed a user interface integrated with the laser system control, optimized the quantity of information to project without flickering, and built a simulation tool to check the capability of projecting within an industrial environment without occlusion.
Conclusions
The main conclusion is the successful proof of concept of the existing PLM technology in an industrial environment.
As mentioned, the first test of interoperability between CATIA v5 and 3DExperience was successful. As a preliminary conclusion, it would be possible for industrialization engineers to work in a more advanced environment, 3DExperience, while functional designers keep working in CATIA v5. Additional in-depth use cases (annotations, kinematics, and tolerances) need to be performed to check the degree of interoperability.
The introduction of HELIOS as the framework that 'separates', or makes, any iDMU exploitation system independent of the PLM that supports it is an important step towards interoperability between different PLM systems and vendor independence, and it also reinforces the need for a model-based definition of the iDMU. Thus, once 3DExperience becomes the production environment in Airbus, ORION will not need to be modified. HELIOS will be able to support any other iDMU exploitation system just by expanding the UML model.
The three interconnected views (As-Design, As-Planned, As-Prepared), together with the capability of creating a network of processes and operations, have proven sufficient to build an iDMU that supports Collaborative Engineering and facilitates the interaction between functional and industrial engineers. 3DExperience has demonstrated that it provides an interoperable collaborative 3D PLM environment for the industrialization of aeronautical assemblies. However, an enterprise organizational model must be put in place to bring together functional and industrial engineering as one team with the iDMU as the unique deliverable.
Since ARIADNE is a proof of concept, no direct estimates on cost, time or other benefits are measured. However, based on previous experiences, significant benefits (time, costs, and reduction of errors) are expected after the deployment phase.
5 Future work
The current status of the ARIADNE project suggests some improvements and future work after the proof of concept of the technology. The ARIADNE project has tested some basic 3DExperience capabilities; exploring the 3DExperience capabilities needed to support the industrialization of an aircraft requires launching additional industrial use cases to cover industrialization activities. The ARIADNE project avoids developing IT interfaces; connections and interfaces to other tools that provide solutions not fully covered by 3DExperience, such as line or station balancing and optimization, might need to be analyzed. The ARIADNE objective was not to test computing performance; performing stress tests with high volumes of data (metadata, 3D geometry) is another important point to study, mainly for the aerospace industry.
Figure 1. ARIADNE project organization.
Figure 2. a) Empennage A400M. b) Airbus product views.
A set of additional functionalities was exercised in the As-Planned view, such as the possibility of navigating through the structure as in the As-Design view, or filtering product nodes in the product tree. Reconciliation in 3DExperience has proven to be an important functionality to ensure a full connection between the As-Design and As-Planned views.
Figure 3. a) As-Design view. b) As-Planned view.
Figure 4. a) As-Prepared view. b) Precedence between operations. c) Operations Gantt chart.
Figure 5. Interoperability between CATIA v5 (functional design) and 3DExperience (industrial design).
Acknowledgments
The authors wish to thank Andres Soto, Gonzalo Monguió and Andres Padillo for their contributions. ARIADNE is partially funded by CTA (Corporación Tecnológica Andaluza) with support from the Regional and National Government. | 20,230 | [
"1030675",
"1030680",
"1030673",
"1030681",
"1000803"
] | [
"483884",
"254694",
"254694",
"254694",
"483884"
] |
01764177 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764177/file/462132_1_En_44_Chapter.pdf | Matteo Rucco
email: [email protected]
Katia Lupinetti
email: [email protected]
Franca Giannini
email: [email protected]
Marina Monti
email: [email protected]
Jean-Philippe Pernot
email: [email protected]
J.-P Pernot
CAD Assembly Retrieval and Browsing
Keywords: Assembly retrieval, shape matching, information visualization
Introduction
The large use of CAD (Computer Aided Design) and CAM (Computer Aided Manufacturing) systems in industry has generated a number of 3D databases, making available a large amount of 3D digital models. The reuse of these models, either single parts or assemblies, and the exploitation of the knowledge associated with them are becoming an important way to facilitate new designs. To track and organize data related to a product and its lifecycle, modern CAD systems are integrated into PDM (Product Data Management) and PLM (Product Lifecycle Management) systems. Among others, the associated data usually involve the technical specifications of the product, provisions for its manufacturing and assembling, types of materials used for its production, costs and versioning. These systems efficiently manage searches based on textual metadata, which may not be sufficient to effectively retrieve the searched data. Indeed, standard parts, text-based annotations and naming conventions are company- or operator-specific, thus difficult to generalize as search keys. To overcome these limitations, content-based algorithms for 3D model retrieval are being developed based on shape characteristics. A wide literature is available and some commercial systems provide shape-based model retrieval. [START_REF] Biasotti | Retrieval and clas-sification methods for textured 3D models: a comparative study[END_REF][START_REF] Cardone | A survey of shape similarity assessment algorithms for product design and manufacturing applications[END_REF][START_REF] Iyer | Three-dimensional shape searching: state-of-the-art review and future trends In[END_REF] provide an overview of the 3D shape descriptors most used in the CAD domain. However, these descriptors focus solely on the shape of a single component, which is not adapted for more complex products obtained as assemblies. An effective assembly search cannot be limited to a simple shape comparison among components, but also requires information that is not always explicitly encoded in the CAD models, e.g. the relationships and the joint constraints between assembly components.
In this paper, we present methods for the retrieval of globally and/or partially similar assembly models according to different user-specified search criteria [START_REF] Lupinetti | CAD assembly descriptors for knowledge capitalization and model retrieval[END_REF] and for the inspection of the provided results. The proposed approach creates and exploits an assembly descriptor, called Enriched Assembly Model (EAM), organized in several layers that enable multi-level queries and described in section 4.1. The rest of the paper is organized as follows. Section 2 provides an overview of related works. Issues related to assembly retrieval are described in Section 3, while Section 4 presents the assembly descriptor and the comparison procedure. Section 5 reports some of the obtained results, focusing on the developed inspection capabilities. Section 6 concludes the paper discussing on current limits and future work.
Related works
Shape retrieval has been investigated far and wide in recent years [START_REF] Biasotti | Retrieval and clas-sification methods for textured 3D models: a comparative study[END_REF][START_REF] Cardone | A survey of shape similarity assessment algorithms for product design and manufacturing applications[END_REF][START_REF] Iyer | Three-dimensional shape searching: state-of-the-art review and future trends In[END_REF][START_REF] Tangelder | A survey of content based 3D shape retrieval methods[END_REF]. However, most of the works present in the literature deal with the shape of a single component and do not consider other relevant information of the assembly, such as the relationships between the parts. One of the pioneering works dealing with assembly retrieval was presented by Deshmukh et al. [START_REF] Deshmukh | Content-based assembly search: A step towards assembly reuse In[END_REF]. They investigated the possible usage scenarios for assembly retrieval and proposed a flexible retrieval system exploiting the explicit assembly data stored in a commercial CAD system. Hu et al. [START_REF] Hu | Relaxed lightweight assembly retrieval using vector space model In[END_REF] propose a tool to retrieve assemblies represented as vectors of watertight polygon meshes. Identical parts are merged and a weight based on the number of occurrences is attached to each part in the vector. Relative positions of parts and constraints are ignored, thus the method is weak in local matching. Miura and Kanai [START_REF] Miura | 3D Shape retrieval considering assembly structure In[END_REF] extend their assembly model by including structural information and other useful data, e.g. contact and interference stages and geometric constraints. However, it does not consider high-level information, such as kinematic pairs, and some information must be made explicit by the user. A more complete system is proposed by Chen et al. [START_REF] Chen | A flexible assembly retrieval approach for model reuse In[END_REF]. It relies on the product structure and the relationships between the different parts of the assembly. The adopted assembly descriptor considers different information levels, including the topological structure, the relationships between the components of the assembly, as well as the geometric information. Thus, the provided search is very flexible, accepting rough and incomplete queries. However, most of these works require user support for the insertion of the required information and weakly support the analysis and browsing of the obtained results, which for large assemblies can be very critical. To overcome these limitations, in this paper, we present an assembly descriptor (i.e. the Enriched Assembly Model), which can support user requests based on different search criteria not restrained to the identification of assembly models with the same structure in terms of sub-assemblies, and tools for facilitating the inspection and browsing of the results of the retrieval process.
Assembly retrieval issues
Retrieving similar CAD assembly models can support various activities ranging from the re-use of the associated knowledge, such as production or assembly costs and operations, to part standardization and maintenance planning. For instance, knowing that a specific subassembly, which includes parts having a high consumption rate due to their part surrounding and usage, is present in various larger products may help in defining more appropriate maintenance actions and better planning of the warehouse stocks. Similarly, knowing that different products having problems share similar configurations can help in detecting critical configurations. Considering these scenarios, it is clear that simply looking for products (i.e. assemblies) that are completely similar to a given one is important but limited. It is therefore necessary to have the possibility to detect if an assembly is contained into another as well as local similarities among assemblies, i.e. assemblies that contain similar sub-assemblies. These relations can be described using the set theory. Being ≅ the symbol indicating the similarity according to given criteria, given two assemblies A and B, we say that:
A is globally similar to B iff for each component a_i ∈ A, ∃ b_h ∈ B s.t. a_i ≅ b_h, and for each relation (a_i, a_j) ∈ A, ∃ (b_h, b_k) ∈ B s.t. (a_i, a_j) ≅ (b_h, b_k). A is partially similar to B iff A is globally similar to a sub-assembly of B, i.e. A is contained in B. A and B are locally similar iff a sub-assembly of A is globally similar to a sub-assembly of B.
Depending on the retrieval purpose, not only the criteria change but also the interest in the similarity among the parts or in their connections can have different priority. It is therefore important to provide flexible retrieval tools that can be adapted to the specific need and thus able to consider the various elements characterizing the assembly, regardless of how the assembly was described by the user (e.g. structural organization) or of the information available in the CAD model itself (e.g. explicit mating conditions).
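In terms of matched components, the three cases can be distinguished by how much of each assembly a match covers, as in the following sketch. The coverage-based rule is only an illustration consistent with the definitions above, not the similarity measure actually implemented.

```python
def similarity_type(matched_pairs, size_a, size_b):
    """Classify a match between assemblies A and B as global, partial or local similarity."""
    covered_a = len({a for a, _ in matched_pairs})
    covered_b = len({b for _, b in matched_pairs})
    if covered_a == size_a and covered_b == size_b:
        return "global"    # every component of A matches one of B and vice versa
    if covered_a == size_a or covered_b == size_b:
        return "partial"   # one assembly is entirely contained in the other
    return "local"         # only sub-assemblies of the two models are similar

# A (2 components) fully matched against a portion of B (7 components) -> partial similarity.
print(similarity_type({("a1", "b3"), ("a2", "b5")}, size_a=2, size_b=7))
```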
In addition, it might be difficult to assess the effective similarity when various elements contribute to it. It is crucial to provide tools for gathering results according to the various criteria and for their inspection. This is very important in the case of large assemblies, where detecting the parts considered similar to a given assembly might be particularly difficult.
4 The proposed approach
Based on the above considerations, we propose a method for the comparison of assembly models exploiting various levels of information of the assembly. Differently from most of the works presented in the literature, our method can evaluate all three types of similarity described above. It uses a multilayer information model, the so-called Enriched Assembly Model (see section 4.1), which stores the data describing the assembly according to different layers, in turn specified at different levels of detail, thus allowing a refinement of the similarity investigation. Depending on the type of requested similarity, an association graph is built putting in relation the elements of the EAMs of the two CAD models to be compared. The similar subsets of these two models then correspond to the maximal cliques of the association graph (see section 4.2). To analyze the retrieved results, a visualization tool has been developed; it highlights the correspondences of the parts and provides statistics on the matched elements (see section 4.3).
Enriched Assembly Model (EAM)
The EAM is an attributed graph, where nodes are the components and/or composing sub-assemblies while arcs represent their adjacency relations. It uses four information layers: structure, interface, shape and statistics [START_REF] Lupinetti | CAD assembly descriptors for knowledge capitalization and model retrieval[END_REF].
The structural layer encodes the hierarchical assembly structure as specified at the design stage. In this organization, the structure is represented as a tree where the root corresponds to the entire assembly model, the intermediate nodes are associated with the sub-assemblies and the leaves characterize the parts. Attributes to specify parts arrangement (regular patterns of repeated parts) are attached to the entire assembly and to its sub-assemblies [START_REF] Lupinetti | Use of regular patterns of repeated elements in CAD assembly models retrieval In[END_REF]. The organization in sub-assemblies is not always present and may vary according to the designer's objectives.
The interface layer specifies the relationships among the parts in the assembly. It is organized in two levels: contacts and joints. The first contains the faces involved in the contact between two parts and the degree of freedom between them. The joint level describes the potentially multiple motions resulting from several contacts between two parts [START_REF] Lupinetti | Automatic Extraction of Assembly Component Relationships for Assembly Model Retrieval[END_REF].
The shape layer describes the shape of the part assembly by several dedicated descriptors. Using several shape descriptors helps answering different assembly retrieval scenarios, which can consider different shape characteristics and at different level of details. They include information like shape volume, bounding surface area, bounding box and spherical harmonics [START_REF] Kazhdan | Rotation invariant spherical harmonic representation of 3 d shape descriptors[END_REF].
The statistics layer contains values that roughly characterize and discern assembly models. Statistics are associated as attributes to the various elements of the EAM. For the entire assembly and for each sub-assembly, they include: the numbers of subassemblies, of principal parts, of fasteners, of patterns of a specific type, of a specific joint type. To each node corresponding to a component, the statistics considered are: percentage of a specific type of surface (i.e. planar, cylindrical, conical spherical, toroidal, free form), number of maximal faces of a specific type of surface. Finally, for each arc corresponding to a joint between parts, the stored statistics include the number of elements in contact for a specific contact type.
The EAM is created using ad hoc developed modules [START_REF] Lupinetti | CAD assembly descriptors for knowledge capitalization and model retrieval[END_REF][START_REF] Lupinetti | Use of regular patterns of repeated elements in CAD assembly models retrieval In[END_REF][START_REF] Lupinetti | Automatic Extraction of Assembly Component Relationships for Assembly Model Retrieval[END_REF], which analyze the content of the STEP (ISO 10303-203 and ISO 10303-214) representation of the assembly and extract the required information.
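A lightweight way to hold such a layered descriptor is an attributed graph, for instance with the networkx library as sketched below. The attribute names and values are simplified assumptions and not the exact EAM encoding.

```python
import networkx as nx

def build_eam_example():
    """Toy EAM: components as attributed nodes, joints as attributed arcs, structure kept aside."""
    eam = nx.Graph(kind="EAM")
    # Shape and statistics layers stored as attributes of each component node.
    eam.add_node("bracket", volume=1.2e-4, bbox=(0.05, 0.04, 0.01),
                 surface_stats={"planar": 0.7, "cylindrical": 0.3})
    eam.add_node("pin", volume=2.0e-6, bbox=(0.01, 0.01, 0.03),
                 surface_stats={"cylindrical": 0.9, "planar": 0.1})
    # Interface layer: contacts and the resulting joint stored on the arc between the parts.
    eam.add_edge("bracket", "pin",
                 contacts=[("cylindrical", "cylindrical")], joint="revolute", dof=1)
    # Structure layer: the design hierarchy kept as a separate tree of sub-assemblies.
    structure = {"root": ["bracket", "pin"]}
    return eam, structure
```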
EAM comparison
Adopting this representation, if two models are similar, then their attributed graphs must have a common sub-graph. The similarity assessment between two EAMs can then be performed by matching their attributed graphs and finding their maximum common subgraph (MCS). The identification of the MCS is a well-known NP-hard problem and, among the various techniques proposed for its solution [START_REF] Bunke | A comparison of algorithms for maximum common subgraph on randomly connected graphs[END_REF], we chose the detection of the maximal cliques of the association graph, since it also allows identifying local similarities.
The association graph is a support graph that reflects the adopted high-level similarity criteria. Each node in the association graph corresponds to a pair of compatible nodes in the two attributed graphs according to the specified criteria. Associated arcs connect nodes if they have equivalent relations expressed as arcs connecting the corresponding nodes in the attribute graphs.
A clique is a sub-graph in which a connecting arc exists for each pair of nodes. For the clique detection we applied the Eppstein-Strash algorithm [START_REF] Eppstein | Listing all maximal cliques in large sparse real-world graphs[END_REF]. This algorithm represents an improved version of the algorithm by Tomita [START_REF] Tomita | The worst-case time complexity for generating all maximal cliques and computational experiments[END_REF], which is in turn based on the Bron-Kerbosch algorithm for the detection of all maximal cliques in a graph [START_REF] Bron | Algorithm 457: finding all cliques of an undirected graph[END_REF]. As far as we know, the Eppstein-Strash algorithm is currently the best algorithm for listing all maximal cliques in undirected graphs, even in dense graphs. The performance of the algorithm is in general guaranteed by the degeneracy ordering.
The algorithm of Eppstein-Strash improves Tomita's algorithm by using the concept of degeneracy. The degeneracy of a graph G is the smallest number d such that every subgraph of G(V, E) contains a node of degree at most d. Moreover, every graph with degeneracy d has a degeneracy ordering: a linear ordering of the vertices such that each node has at most d neighbors after it in the ordering. Eppstein-Strash algorithm first computes the degeneracy ordering; then for each node v in the order, starting from the first, the algorithm of Tomita is used to compute all cliques containing v and v's later neighbors. Other improvements depend on the use of adjacency lists for data representation. For more details we refer to [START_REF] Eppstein | Listing all maximal cliques in large sparse real-world graphs[END_REF].
Among all the maximal cliques present in the association graph, we consider as interesting candidates for similar sub-graphs only those having: 1) a majority of arcs corresponding to real joints between the corresponding components, and 2) a number of nodes bigger than a specified value. In this way, priority is given to sub-graphs which contain a significant number of joined similar components, thus possibly corresponding to sub-assemblies. Then, for each selected clique, a measure vector is computed. The first element of the vector indicates the degree of the clique, while the others report the similarity of the various assembly characteristics taken into consideration for the similarity assessment. Depending on the search objectives, the set of characteristics to consider may change. The default characteristics are the shape of the components and the type of joint between them. The examples and results discussed in the next section consider the default characteristic selection.
Result visualisation
The proposed retrieval system has been implemented as a multi-module prototype. The module creating the EAM description is developed in Microsoft Visual C# 2013 and exploits the Application Programming Interface (API) of the commercial CAD system SolidWorks. The matching and similarity assessment module is developed in Java and is invoked during the retrieval as a jar file. Finally, to analyze the obtained results, a browser view has been implemented. It consists of multiple dynamic web pages based on HTML5, jQuery, Ajax and PHP. Moreover, MySQL is used as the database system, while the X3D library is used for the STEP model visualization.
The system has been tested on assembly models obtained from on-line repositories [17, 18, 19] and from university students' tests. Fig. 2 shows an example of the developed user interface, where the designer can choose an assembly model as query and set the required similarity criteria. In this example, it is required to retrieve models similar in shape and joint. Some results of this query are shown in Fig. 3. The first model in the picture (top-left) coincides with the query model. The retrieved models are gathered together in the other views of Fig. 3. Each retrieved and displayed assembly has a clique that has been detected in the association graph and satisfies the required conditions. The assemblies are visualized in an X3D view that allows rotating, zooming and selecting the various 3D components of the retrieved assembly. Components are visualized in transparency mode to make it possible to see the internal ones as well. Under each model, three bars are shown to quickly give an idea of how similar to the query the retrieved assemblies are. The first two indicate the percentage of coverage (i.e. percentage of matched elements) with respect to the query and the target model respectively. Thanks to these bars, the user can see the type of similarity (i.e. global, partial or local). If the green bar is not complete, it means that just a subset of the query model is matched, thus the similarity is local. The global similarity is shown by the purple bar: if this bar is not complete, then the similarity is partial. The last bar shows the average shape similarity among the components associated with the displayed clique. Simply looking at the reported models and checking the purple bar, the user can notice that (except the first model, which represents the query model) no models are globally similar to the query one according to the criteria he/she has specified. The first model in the second row is partially similar to the query one, since the query is completely included in it (see the green bar). The other models are locally similar to the query model, thus just a subset of the query model is included in them. If the user wants to further analyze the levels of similarity of the chosen characteristics or to visualize all the subsets of matched parts, he/she can select one of the retrieved assemblies and a new browser page is prompted. Once selected, a new page as in Fig. 4 is available, where the user can get the list of all the interesting cliques, using the sliders at the top of the window. With these sliders, the user can choose some thresholds that the proposed results have to satisfy. In particular, they refer to the dimension of the matched portion, the shape similarity measure and the joint similarity measure. After setting those parameters, the button "Clique finding" can be pressed to get the results displayed in a table, as visible on the left of Fig. 5. The rows of the table gather together all the matching portions that satisfy the required criteria. In this example, four pieces of information are accessible for each matching portion: an identification number, the number of matched parts, the shape similarity and the joint similarity. Selecting one of them, the corresponding clique is visualized within the assembly. It highlights the component correspondence with the query model using the same colors for corresponding components in the two objects, as shown on the right part of Fig. 4. To ease the comparison according to the several available criteria (here just the default ones are reported), a radar chart is used.
It illustrates the shape and joint similarities over the overall assembly in relation to the clique degree. This type of visualization is very useful to compare multiple data and to get a global evaluation at a glance. Moreover, radar charts are convenient to compare two or more models on various features expressed through numerical values. The larger the covered area, the more similar the two assemblies are. In the reported case, the user can observe immediately that the two models are not completely matched, even if they look very similar. This is because the gears in the two models have significantly different shapes, which prevents including those parts among the matched ones, thus decreasing the global level of similarity. On the other hand, the retrieved portion completely satisfies the requests, thus reporting an assessment of 1.
Conclusions
In this paper, methods for the identification and evaluation of similarities between CAD assemblies are presented. While almost all products are made of assembled parts, most of the works in the literature address the problem of similarity among single parts. For assemblies, the shape of the components is not the only characteristic to be considered. Increasing the number of elements to consider augments, on the one hand, the possibility of adapting the search to specific user needs and, on the other hand, the difficulty of evaluating the results. The method described here can consider all or a subset of the various aspects of the assembly, namely the shape of the components, their arrangements (i.e. patterns), their mating contacts and joints. The evaluation of the retrieved results is supported by exploiting colour variations in the 3D visualisation of the components in correspondence between the compared assemblies. Measures and statistics quantifying the similarity of the overall assemblies and of the matched subparts are reported according to the various considered characteristics.
In future work, we plan to introduce graph databases, such as Neo4j, for speeding up the search for local similarity among big assembly models. We also intend to improve the clique-finding algorithm by allowing it to select automatically the dimension of the biggest clique. Moreover, we intend to introduce the definition of a single measure for the overall ranking of the assemblies retrieved as similar to a query one. This information will be displayed in ad-hoc infographics, which will be developed to improve the user experience.
Fig. 1. Example of different type of similarities
Fig. 2. An assembly model and the similarity criteria used for the matching
Fig. 3. A sample of the retrieved models for the proposed speed reducer query (top left)
Fig. 4. Initial page for investigating the model similarity
Fig. 5. Example of matching browsing
"1030682",
"173300",
"990067",
"990068",
"1030683"
] | [
"73335",
"73335",
"199402",
"175453",
"73335",
"73335",
"199402"
] |
01764180 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764180/file/462132_1_En_24_Chapter.pdf | Widad Es-Soufi
email: [email protected]
Esma Yahia
email: [email protected]
Lionel Roucoules
email: [email protected]
A Process Mining Based Approach to Support Decision Making
Keywords: Process mining, Decision mining, Process patterns, Decision-making, Business process
Currently, organizations tend to reuse their past knowledge to make good decisions quickly and effectively and thus, to improve their business processes performance in terms of time, quality, efficiency, etc. Process mining techniques allow organizations to achieve this objective through process discovery. This paper develops a semi-automated approach that supports decision making by discovering decision rules from the past process executions. It identifies a ranking of the process patterns that satisfy the discovered decision rules and which are the most likely to be executed by a given user in a given context. The approach is applied on a supervision process of the gas network exploitation.
Introduction
A business process is defined as a set of activities that take one or more inputs and produce a valuable output that satisfies the customer [START_REF] Hammer | Reengineering the Corporation: A Manifesto for Business Revolution[END_REF]. In [START_REF] Weske | Business Process Management: Concepts, Languages, Architectures[END_REF], authors define it as a set of activities that are performed in coordination in an organizational and technical environment and provide an output that responds to a business goal. Based on these definitions, the authors of this paper describe the business process as a set of linked activities that have zero or more inputs, one or more resources and create a high added-value output (i.e. product or service) that satisfies the industrial and customer constraints. These linked activities represent the business process flow and are controlled by different process gateways (And, Or, Xor) [START_REF]Business Process Model and Notation (BPMN) Version 2[END_REF]Sec. 8.3.9] that give rise to several patterns (patterns 1 to 9 in Fig. 1), where each one is a linear end-to-end execution. The "And" gateway, also called parallel gateway, means that all the following activities are going to be executed, in one of several possible orders. The "Or" gateway, also called inclusive gateway, means that one or all of the following activities are going to be executed based on some attribute conditions. The "Xor" gateway, also called exclusive gateway, means that only one following activity among others is going to be executed, based on some attribute conditions.
The presence of gateways in business processes results in making several decisions based on some criteria like experience, preference, or industrial constraints [START_REF] Iino | Decision-Making in Engineering Design: Theory and Practice[END_REF].
Making the right decisions in business processes is tightly related to business success. Indeed, a study that involved more than a thousand companies shows a clear correlation between decision effectiveness and business performance [START_REF] Blenko | Decision Insights: The five steps to better decisions[END_REF]. In [START_REF] Es-Soufi | On the use of Process Mining and Machine Learning to support decision making in systems design[END_REF], authors explain that the process of decision-making can be broken down into two sub-processes: global and local decision making. In this research, authors focus on global decision making and aim at developing a generic approach that assists engineers in managing the business process associated with the life of their products or services. The approach automatically proposes a predicted ranking of the business process patterns that are the most likely to be executed by a given user in a given context. This comes down to exploring these patterns and the decisions that control them in a complex business process, i.e. one where all gateways are present (Fig. 1). Authors assume that this objective can be achieved using process mining techniques.
This paper is organized as follows. In Section 2, a literature review on decision and trace variants mining are discussed. The proposed approach is presented in Section 3 and then illustrated in a case study in Section 4. Finally, the discussion of future work concludes the paper.
Literature Review on Decision and Trace Variants Mining
Process mining is a research field that supports process understanding and improvement, it helps to automatically extract the hidden useful knowledge from the recorded event logs generated by information systems. Three types of applications in process mining are distinguished: discovery, conformance, and enhancement [START_REF] Van Der Aalst | Process mining: overview and opportunities[END_REF]. In this paper, authors focus on the discovery application, namely, the decision mining and the trace variants mining. A brief summary is provided of each.
Decision mining is a data-aware form of the process discovery application since it enriches process models with meaningful data. It aims at capturing the decision rules that control how the business processes flow (e.g. conditions 1,2,3,4 in Fig. 1). In [START_REF] Rozinat | Decision mining in business processes[END_REF], authors define it as the process in which data dependencies, that affect the routing of each activity in the business process, are detected. It analyses the data flow to find the rules that explain the rationale behind selecting an activity among others when the process flow splits [START_REF] De Leoni | Data-aware Process Mining: Discovering Decisions in Processes Using Alignments[END_REF].
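In practice, such rules are typically learned by training a classifier on the attribute values observed at each decision point. The snippet below only illustrates that idea with an off-the-shelf decision tree; the attribute names and values are invented, and the actual tooling used in this work is the set of ProM plug-ins reviewed in this section.

from sklearn.tree import DecisionTreeClassifier, export_text

# One row per past trace: attribute values seen at the decision point, and the branch taken.
X = [[25, 0], [30, 0], [18, 1], [20, 1], [27, 1], [15, 0]]   # [pressure, season_is_fall]
y = ["A2", "A2", "A8", "A8", "A2", "A8"]                     # activity chosen at the split

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["pressure", "season_is_fall"]))
# The printed thresholds (e.g. "pressure <= 22.50") are read back as guards such as
# (pressure > 22) -> A2, i.e. the kind of decision rule attached to an Xor/Or split.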
While executing a business process, one may adopt the same logic several times (e.g. always executing pattern 1 in Fig. 1, rather than patterns 2 to 6, if condition 1 is enabled). This results in the existence of similar traces in the recorded event log. Trace variants mining aims at identifying the trace variants and their duplicates (e.g. patterns 1 to 9 in Fig. 1). Each trace variant refers to a process pattern that is a linear end-to-end process execution where only the activities execution order is taken into account [START_REF] Es-Soufi | On the use of Process Mining and Machine Learning to support decision making in systems design[END_REF].
Decision Mining
The starting point of the most common decision mining techniques is a recorded event log (i.e. past executions traces) and its corresponding petri net2 model that describes the concurrency and synchronisation of the traces activities. To automatically generate a petri net model from an event log, different algorithms were proposed. The alpha algorithm, alpha++ algorithm, ILP miner, genetic miner, among others, are presented in [START_REF] Van Dongen | Process mining: Overview and outlook of petri net discovery algorithms[END_REF], and the inductive visual miner that was recently proposed in [START_REF] Leemans | Exploring processes and deviations[END_REF].
Many research works contribute to decision mining development. In [START_REF] Rozinat | Decision mining in business processes[END_REF], authors propose an algorithm, called Decision point analysis, which allows one to detect decision points that depict choice splits within a process. Then for each decision point, an exclusive decision rule (Xor rule) in the form "v op c", where "v" is a variable, "op" is a comparison operator and "c" is a constant, allowing one activity among others to be executed is detected. The decision point analysis is implemented as a plug-in for the ProM3 framework. In [START_REF] Leoni | Discovering branching conditions from business process execution logs[END_REF], authors propose a technique that improves the decision point analysis by allowing one to discover complex decision rules for the Xor gateway, based on invariants discovery, that takes into account more than one variable, i.e. in the form "v1 op c" or "v1 op v2", where v1 and v2 are variables. This technique is implemented as a tool named Branch Miner 4 . In [START_REF] Catalkaya | Enriching business process models with decision rules[END_REF], authors propose a technique that embeds decision rules into process models by transforming the Xor gateway into a rule-based Xor gateway that automatically determines the optimal alternative in terms of performance (cost, time) during runtime. This technique is still not yet implemented. In [START_REF] Bazhenova | Deriving decision models from process models by enhanced decision mining[END_REF], authors propose an approach to derive decision models from process models using enhanced decision mining. The decision rules are discovered using the decision point analysis algorithm [START_REF] Rozinat | Decision mining in business processes[END_REF], and then enhanced by taking into account the predictions of process performance measures (time, risk score) related to different decision outcomes. This approach is not yet implemented. In [START_REF] Dunkl | A method for analyzing time series data in process mining: application and extension of decision point analysis[END_REF], authors propose a method that extends the Decision point analysis [START_REF] Rozinat | Decision mining in business processes[END_REF] which allows only single values to be analysed. The proposed method takes into account time series data (i.e. sequence of data points listed in time order) and allows one to generate complex decision rules with more than one variable. The method is implemented but not publicly shared. In [START_REF] Ghattas | Improving business process decision making based on past experience[END_REF], authors propose a process mining based technique that allows one to identify the most performant process path by mining decision rules based on the relationships between the context (i.e. situation in which the past decisions have taken place), path decisions and process performance (i.e. time, cost, quality). The approach is not yet implemented.
In [START_REF] De Leoni | Data-aware Process Mining: Discovering Decisions in Processes Using Alignments[END_REF], authors introduce a technique that takes the process petri net model, the process past executions log and the alignment result (indicating whether the model and the log conform to each other) as inputs, and produces a petri net model with the discovered inclusive/exclusive decision rules. It is implemented as a data flow discovery plug-in for the ProM framework. Another variant of this plug-in that needs only the event log and the related petri net as inputs is implemented as well. In [START_REF] Mannhardt | Decision mining revisited-discovering overlapping rules[END_REF], authors propose a technique that aims at discovering inclusive/exclusive decision rules even if they overlap due to incomplete process execution data. This technique is implemented in the multi-perspective explorer plug-in [START_REF] Mannhardt | The Multi-perspective Process Explorer[END_REF] of the ProM framework. In [START_REF] Sarno | Decision mining for multi choice workflow patterns[END_REF], authors propose an approach to explore inclusive decision rules using the Decision point analysis [START_REF] Rozinat | Decision mining in business processes[END_REF]. The approach consists in manually modifying the petri net model by transforming the "Or" gateway into an "And" gateway followed by a "Xor" gateway in each of its outgoing arcs.
Trace Variants Mining
Different researches were interested in trace variants mining. In [START_REF] Song | Trace Clustering in Process Mining[END_REF], authors propose an approach based on trace clustering, that groups the similar traces into homogeneous subsets based on several perspectives. In [START_REF] Bose | Abstractions in process mining: A taxonomy of patterns[END_REF], authors propose a Pattern abstraction plug-in, developed in ProM, that allows one to explore the common low-level patterns of execution, in an event log. These low-level patterns can be merged to generate the process most frequent patterns which can be exported in one single CSV file. The Explore Event Log (Trace Variants/Searchable/Sortable) visualizer 5 , developed in ProM, sorts the different trace variants as well as the number and names of duplicate traces. These variants can be exported in separate CSV files, where each file contains the trace variant, i.e. process pattern, as well as the related duplicate traces.
Discussion
In this paper, authors attempt to discover the decision rules related to both exclusive (Xor) and inclusive (Or) gateways, as well as the different activity execution orders. Regarding decision mining, the algorithm that generates the petri net model should be selected first. Authors reject the algorithms presented in [START_REF] Van Dongen | Process mining: Overview and outlook of petri net discovery algorithms[END_REF] and select the inductive visual miner [START_REF] Leemans | Exploring processes and deviations[END_REF] as the petri net model generator. Indeed, experience has shown that only the inductive visual miner allows the inclusive gateways to be identified by the decision mining algorithm. The decision mining algorithm itself should then be selected.
The research works presented in [START_REF] Rozinat | Decision mining in business processes[END_REF], [START_REF] Leoni | Discovering branching conditions from business process execution logs[END_REF]- [START_REF] Ghattas | Improving business process decision making based on past experience[END_REF] attempt to discover exclusive decision rules considering only the exclusive (Xor) gateway. The work presented in [START_REF] Sarno | Decision mining for multi choice workflow patterns[END_REF] considers the inclusive and exclusive decision rules discovery, but the technique needs a manual modification of the petri net model which is not practical when dealing with complex processes. Therefore, authors assume that these works are not relevant for the proposition and consider the works presented in [START_REF] De Leoni | Data-aware Process Mining: Discovering Decisions in Processes Using Alignments[END_REF] and [START_REF] Mannhardt | Decision mining revisited-discovering overlapping rules[END_REF] which allow the discovery of both inclusive and exclusive decision rules. Moreover, authors assume that the data flow discovery plug-in [START_REF] De Leoni | Data-aware Process Mining: Discovering Decisions in Processes Using Alignments[END_REF] is more relevant since the experience has shown that the other one [START_REF] Mannhardt | Decision mining revisited-discovering overlapping rules[END_REF] could not correctly explore the decision rule related to the variables whose values do not frequently change in the event log.
Regarding trace variants mining, authors do not consider the approach presented in [START_REF] Song | Trace Clustering in Process Mining[END_REF] as relevant for the proposition since the objective is to discover the patterns that are exactly similar, i.e. patterns with the same activities that are performed in the same order. The work presented in [START_REF] Bose | Abstractions in process mining: A taxonomy of patterns[END_REF] and the Explore Event Log visualizer are considered as relevant for the proposition. Since none of the proposed techniques allow one to export a CSV file that contains only the trace variants and their frequency, authors assume that exploring trace variants using the Explore Event Log visualizer is more relevant because the discovered patterns can be exported in separate CSV files, which facilitates the postprocessing that needs to be made.
Decision and Trace Variants Mining Based Approach
The approach presented in Fig. 2 is the global workflow of the proposal and enables the achievement of the current research objective through seven steps. The first step of the approach concerns the construction of the event log from the past process executions. These latter represent the process traces generated with respect to the trace metamodel depicted in [START_REF] Roucoules | Engineering design memory for design rationale and change management toward innovation[END_REF][START_REF] Es-Soufi | Collaborative Design and Supervision Processes Meta-Model for Rationale Capitalization[END_REF] and expressed in XMI (XML Metadata Interchange) format. These traces should be automatically merged into a single XES 7(eXtensible Event Stream) event log in order to be processed in ProM, the framework in which the selected decision mining technique is developed. This automatic merge is implemented using ATL 8 (Atlas Transformation Language).
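In the actual tool-chain this merge is implemented as an ATL model transformation; the following simplified Python stand-in only illustrates the target XES structure, assuming each past execution is already available in memory as a list of event dictionaries (the real XMI trace schema is richer).

import xml.etree.ElementTree as ET

def traces_to_xes(traces, path):
    # traces: list of traces, each trace a list of dicts such as
    #         {"activity": "A1", "pressure": 25.0, "season": "winter"}
    log = ET.Element("log", {"xes.version": "1.0"})
    for trace in traces:
        tr = ET.SubElement(log, "trace")
        for event in trace:
            ev = ET.SubElement(tr, "event")
            ET.SubElement(ev, "string", {"key": "concept:name", "value": event["activity"]})
            for key, value in event.items():
                if key == "activity":
                    continue
                tag = "float" if isinstance(value, (int, float)) else "string"
                ET.SubElement(ev, tag, {"key": key, "value": str(value)})
    ET.ElementTree(log).write(path, encoding="utf-8", xml_declaration=True)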
The second step concerns the generation of the petri net model from the event log. To this end, the inductive visual miner is used. Having both the event log and its corresponding petri net model, the decision mining practically starts using the data flow discovery plug-in as discussed in Section 2.
The third step aims at deriving the decision rules related to all the variables in the event log and exporting them in a single PNML 9 (Petri Net Markup Language) file. PNML is an XML-based standardized interchange format for Petri nets that allows the decision rules to be expressed as guards: the transition from one place (i.e. activity) to another can fire only if the guard, and thus the rule, evaluates to true. For instance, condition 1 in Fig. 1 is the decision rule that enables the transition from A1 to A2. Experience has shown that when all the variables in the event log are considered at once in the decision mining, some decision rules related to some of these variables may not be derived as expected; the origin of this problem is not yet clear. Therefore, to avoid this situation and be sure to obtain correct decision rules, authors propose to execute the data flow discovery plug-in for each variable, which results in as many decision rules as variables (step 3 in Fig. 2).
The PNML files generated in step 3 are then automatically merged into one single PNML file that contains the complete decision rules, i.e. those related to all the event log's variables (step 4 in Fig. 2). This automatic merge is implemented using the Java programming language.
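The core of this merge is the conjunction, for every transition, of the guards mined separately for each variable. A minimal sketch of that idea is given below; it works on plain dictionaries because the exact location of the guard attribute inside a PNML file depends on the exporting tool, so the parsing part is deliberately left out (and the real implementation is in Java, not Python).

def merge_guards(per_variable_guards):
    # per_variable_guards: list of dicts {transition_id: guard_string}, one per PNML file
    merged = {}
    for guards in per_variable_guards:
        for transition, guard in guards.items():
            if transition in merged:
                merged[transition] = "(" + merged[transition] + ") && (" + guard + ")"
            else:
                merged[transition] = guard
    return merged

# Example with guards mined separately for 'pressure' and 'season':
merge_guards([{"t_send_emergency": "pressure > 22"},
              {"t_send_emergency": "season != 'fall'"}])
# -> {"t_send_emergency": "(pressure > 22) && (season != 'fall')"}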
In parallel with decision mining (finding the Or and Xor rules), the trace variants mining can be performed in order to find the end-to-end processes (e.g. patterns 1 to 9 in Fig. 1). The Explore Event Log visualizer, as discussed in Section 2, is used to explore patterns in an event log. The detected patterns are then exported in CSV files, where each file contains one pattern and its duplicates (step 1' of Fig. 2). To fit our objective, the pattern files need to be automatically post-processed. This consists in computing the occurrence frequency of each pattern, removing its duplicates and then creating a file that contains a ranking of the different, non-duplicate patterns based on their occurrence frequency (step 2' in Fig. 2). This post-processing is implemented using the Java programming language.
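A Python sketch of the same post-processing idea is shown below for illustration (the real implementation is in Java, and the exact column layout of the exported CSV files may differ from the simple one assumed here: one trace per row as a sequence of activity names).

import csv
from collections import Counter
from pathlib import Path

def rank_patterns(csv_files):
    counter = Counter()
    for path in csv_files:
        with open(path, newline="") as f:
            for row in csv.reader(f):
                pattern = tuple(cell.strip() for cell in row if cell.strip())
                if pattern:
                    counter[pattern] += 1     # identical rows are duplicates of one pattern
    total = sum(counter.values())
    # Ranked list of (pattern, occurrence frequency), most frequent first.
    return [(list(p), n / total) for p, n in counter.most_common()]

ranking = rank_patterns(Path(".").glob("pattern_*.csv"))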
During a new process execution, the ranked patterns file is automatically filtered to fit both the discovered decision rules and the user's context (user's name, date, process type, etc.). In other words, the patterns that do not satisfy the decision rules, and those that were, for example, performed by a user other than the one currently performing the process, are removed. As a result, a ranking of suggestions (i.e. the patterns that are the most suitable for the current user's context) is proposed to the user (step 5 in Fig. 2). The selected pattern is then captured and stored in order to enrich the event log.
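A simplified sketch of this filtering step is given below, assuming the decision rules are available as predicates over the context and each ranked pattern carries the context of its past execution; in the real tool the rules come from the merged PNML guards and the context from the captured traces.

def suggest_patterns(ranked_patterns, rules, context):
    # ranked_patterns: list of dicts {"pattern": [...], "frequency": float, "user": str, ...}
    # rules: predicates taking (context, candidate) and returning True if the candidate is allowed
    # context: dict describing the running execution (user, date, sensor values, ...)
    kept = [c for c in ranked_patterns if all(rule(context, c) for rule in rules)]
    return sorted(kept, key=lambda c: c["frequency"], reverse=True)

# Illustrative rules only: keep patterns of the same user and respect a mined guard.
rules = [lambda ctx, cand: cand["user"] == ctx["user"],
         lambda ctx, cand: ctx["pressure"] > 22 or "A8" in cand["pattern"]]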
Case Study: Supervision of Gas Network Exploitation
Systems supervision is a decision-based activity carried out by a supervisor to monitor the progress of an industrial process. It is a business process that produces an action, depending on both the supervision result and the set-point (i.e. the target value for the supervised system), that resolves the system malfunction. The authors of this paper present a supervision case study where the supervisor of an industrial process must take the right decision, in the shortest time, when an alarm is received. The challenge here is to provide this supervisor with a ranking of the process patterns that are the most likely to be executed in his context. The proposed approach is verified on a specific supervision process related to gas network exploitation.
The process starts by receiving the malfunction alarm. The Chief Operating Officer (COO) then has to choose the process that best resolves the problem in this context. The latter can be described by the field sensor values, the season, the supervisor's name, etc. The first step of the proposed approach is to transform the sixty already captured traces of this supervision process into a single XES event log (step 1 in Fig. 2) and then generate its corresponding petri net model (step 2 in Fig. 2). Then, from the event log and the petri net model, the decision rules are generated for each variable and exported in PNML files (step 3 in Fig. 2). In this process, the decision variables are: pressure, season, network status, flow rate and human resource (the decision rule related to the pressure variable is depicted in Fig. 3). These PNML files are then merged into one single PNML file that contains the complete decision rules related to all the decision variables (step 4 in Fig. 2).
Fig. 3. Discovered decision rules for the pressure variable
In this process, based on both the pressure value and the season, the COO decides whether to send an emergency or a maintenance technician. If the emergency technician is sent (i.e. the decision rule ((pressure > 22 millibars) and (season ≠ fall)) evaluates to true), he then has to decide which action should be performed based on the measured flow rate. If the decision rule ((pressure ≤ 22 millibars) and (season = fall)) evaluates to true, then the maintenance technician is sent. Moreover, if the rule (pressure < 19 millibars) evaluates to true, then in addition to sending the maintenance technician, the supervisor should extend the time scale and share it, and write down the problem and share it. In this last case, the inclusive logic is transformed into a parallel logic, and thus these activities may be executed in different possible orders.
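This decision logic can be traced on a small worked example; the function below simply paraphrases the mined guards quoted above and evaluates them for a given context.

def gas_network_actions(pressure, season):
    actions = []
    if pressure > 22 and season != "fall":
        actions.append("send emergency technician")   # the action then depends on the flow rate
    if pressure <= 22 and season == "fall":
        actions.append("send maintenance technician")
    if pressure < 19:
        actions.append("extend the time scale and share it")
        actions.append("write down the problem and share it")
    return actions

print(gas_network_actions(pressure=18, season="fall"))
# ['send maintenance technician', 'extend the time scale and share it',
#  'write down the problem and share it']

The last two activities being parallel, several orderings are possible, which is consistent with the two patterns proposed in the example below.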
In parallel with the decision rule mining, steps 1' and 2' in Fig. 2 are performed; the patterns contained in the event log are discovered (Fig. 4.a), then exported in CSV files and finally post-processed by removing each pattern's duplicates and computing their occurrence frequency. If we consider all the possible process patterns and the different rules, it is possible to construct the BPMN process depicted in Fig. 5. These patterns (Fig. 4.a) are then filtered based on the current context and the decision rules that were generated (step 4 in Fig. 2). For instance, if the alarm is received in the fall by John, and the pressure of the supervised network equals 18 millibars, which is less than both 22 and 19 millibars (Fig. 5), the approach proposes two possible patterns to solve the problem (Fig. 4.b), where the first one, "P12", is the most frequently used in this context.
Conclusion and Future Work
The objective of this paper is to support engineers in their decision-making processes by proposing the most relevant process patterns to be executed given the context. Through the proposed approach, the past execution traces are first analysed and the decision rules that control the process are mined. Then, the patterns and their occurrence frequencies are discovered, post-processed and filtered based on the discovered decision rules and the user context parameters. A ranking of the patterns most likely to be executed is then proposed. This approach illustrates the feasibility of the assumption that process mining techniques can support decision making in complex processes controlled by inclusive, exclusive and parallel gateways. Future work consists in fully automating the approach and integrating it in the process visualizer tool presented in [START_REF] Roucoules | Engineering design memory for design rationale and change management toward innovation[END_REF]. It also consists in evaluating this approach, using real-world design and supervision processes, with respect to some performance indicators such as execution time, quality of the proposed decisions, change propagation, etc.
Fig. 1. Example of process patterns (expressed in BPMN 1 ). Process patterns: 1-A1A2A3A4A5A6A7, 2-A1A2A3A5A4A6A7, 3-A1A2A3A5A6A4A7, 4-A1A2A5A6A3A4A7, 5-A1A2A5A3A6A4A7, 6-A1A2A5A3A4A6A7, 7-A1A8A9A11, 8-A1A8A10A11, 9-A1A8A9A10A11, 10-A1A8A10A9A11
Fig. 2. Overview of the proposal (expressed in IDEF0 6 )
Fig. 4. (a) Discovered patterns, (b) proposed patterns
Fig. 5. Part of the resulting supervision process with the different rules (expressed in BPMN)
http://www.bpmn.org/
https://en.wikipedia.org/wiki/Petri_net
http://www.promtools.org/
http://sep.cs.ut.ee/Main/BranchMiner
https://fmannhardt.de/blog/process-mining-tools
https://en.wikipedia.org/wiki/IDEF0
http://www.xes-standard.org/
https://en.wikipedia.org/wiki/ATLAS_Transformation_Language
http://www.pnml.org/
Acknowledgments. This research takes part of a national collaborative project (Gontrand) that aims at supervising a smart gas grid. Authors would like to thank the companies REGAZ, GDS and GRDF for their collaboration. | 25,582 | [
"1027513",
"933687",
"914683"
] | [
"175453",
"199402",
"175453",
"199402",
"175453",
"199402"
] |
01764212 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764212/file/462132_1_En_2_Chapter.pdf | Mourad Messaadia
email: [email protected]
Fatah Benatia
email: [email protected]
David Baudry
email: [email protected]
Anne Louis
email: [email protected]
PLM Adoption Model for SMEs
Keywords: PLM, ICT Adoption, SMEs, Data analysis
INTRODUCTION
The literature review has addressed the topic of PLM from different angles. However, the adoption aspect has been dealt with by only a few works, such as [START_REF] Bergsjö | PLM Adoption Through Statistical Analysis[END_REF], where the author proposes statistical tools to improve the organizational adoption of new PLM systems and highlights the importance of surveys early in the PLM introduction process; [START_REF] Ristova | AHP methodology and selection of an advanced information technology due to PLM software adoption[END_REF], which provides a review of the main developments in the AHP (Analytical Hierarchy Process) methodology as a tool for decision makers to make more informed decisions regarding their investment in PLM; and [START_REF] Rossi | Product Lifecycle Management Adoption versus Lifecycle Orientation: Evidences from Italian Companies[END_REF], on the adoption of PLM IT solutions, which discusses the relationship between "PLM adopter" and "lifecycle-oriented" companies. In order to address the adoption aspect, we have considered PLM as an innovative ICT for SMEs and have therefore integrated works on ICT and innovation adoption. ICT is one of the means at the disposal of a company to increase its productivity: ICT can reduce business costs, improve productivity and strengthen growth possibilities and the generation of competitive advantages [START_REF] Bergsjö | PLM Adoption Through Statistical Analysis[END_REF]. Despite the work done and the progress of large companies in terms of PLM, SMEs still have difficulties understanding the full potential of such technologies [START_REF] Hollenstein | The decision to adopt information and communication technologies (ICT): firm-level evidence for Switzerland[END_REF]. Their adoption of ICT is slow and late, primarily because they find ICT adoption difficult [START_REF] Hashim | Information communication technology (ICT) adoption among SME owners in Malaysia[END_REF], and SME adoption is still lower than expected.
When implementing a PLM solution in a company, the implementation difficulties are directly dependent on the complexity of the organization, on costs and on the possible opacity of the real behaviours in the field. Indeed, the implementation of a PLM solution seems to scare SMEs in terms of resource costs and deployment.
The integration of PLM solutions and their adoption by SMEs has attracted the interest of several research works. Among these, we distinguish works on improving the adoption process through statistical tools [START_REF] Bergsjö | PLM Adoption Through Statistical Analysis[END_REF]. In the same way, authors in [START_REF] Fabiani | ICT adoption in Italian manufacturing: firm-level evidence[END_REF] conducted an investigation of around 1500 enterprises and analysed the adoption process. This investigation shows that the size of the enterprise, the human capital of the workforce and the geographic proximity to large firms have an impact on ICT adoption. On the other hand, we find investigations based on empirical analysis which highlight the role of management practices, especially the manager, and of quality control in ICT adoption.
Another investigation was conducted on a thousand manufacturing firms in Brazil and India; it examines the characteristics of firms adopting ICT and the consequences of adoption for performance [START_REF] Basant | ICT adoption and productivity in developing countries: new firm level evidence from Brazil and India[END_REF]. In addition to the previous results, it shows the impact of the educational system and the positive association between ICT adoption and education. Several barriers to IT adoption have been identified, including a lack of knowledge about the potential of IT, a shortage of resources such as finance and expertise, and a lack of skills [START_REF] Hashim | Information communication technology (ICT) adoption among SME owners in Malaysia[END_REF].
According to [START_REF] Forth | Information and Communication Technology (ICT) adoption and utilisation, skill constraints and firm level performance: evidence from UK benchmarking surveys[END_REF], skilled workers have an impact on ICT adoption: firms with high (low) proportions of skilled workers can have a comparative advantage (disadvantage) in minimizing the costs both of ICT adoption and of learning how to make the best use of ICTs.
A review of the works done on ICT adoption concludes on the importance of analysing the impact of ICT system implementation and adoption processes, how they do so, and how implementation and adoption processes could be supported at the organizational, group and individual levels [START_REF] Korpelainen | Theories of ICT system implementation and adoption-A critical[END_REF]. Based on these previous works, we will consider that PLM is an innovative ICT solution for SMEs.
The next section introduces the problem statement and the context of the study. The third section presents the proposed PLM adoption model based on quantitative KPIs. The fourth section highlights the obtained results and their discussion. Finally, we conclude and discuss future work on how to improve and deploy our model.
STUDY CONTEXT
The first initiative of this work was conducted during the INTERREG project called "BENEFITS", where different adoption KPIs were identified [START_REF] Messaadia | PLM adoption in SMEs context[END_REF].
On the basis of an analysis of the various studies carried out with several companies, it is possible to collect different indicators. These indicators have been classified according to 4 axes identified through PLM definitions analysis. The 4-axis structure (Strategy, Organisation, Process and Tools) seemed clear and gave a good visibility to the impact of the indicators on the different levels of enterprise [START_REF] Messaadia | PLM adoption in SMEs context[END_REF].
For our work, the survey followed the different steps from questionnaire design to data analysis [START_REF]Survey methods and practices[END_REF]. One of the problems faced during questionnaire design is deciding what questions to ask, how to best word them and how to arrange them to yield the information required. For this, the questions were built on the basis of the indicators, their wording was reviewed by experts and, finally, we reorganised the questions according to four new axes: Human Factors, Organisational Factors, Technical Factors and Economic Factors. This new decomposition does not affect the indicators but brings fluidity and an easier understanding for the interviewees (SMEs).
Fig.1.PLM axis structuration
Also, the objective of the investigation is to understand the needs of SMEs according to the introduction of digital technology within the automotive sector and to anticipate the increase in competence needed to help these SMEs face the change by setting up the necessary services and training. The survey was conducted on a panel of 33 companies (14 with study activities and 19 with manufacturing activities) of which 50% are small structures as shown in Fig. 2.
Fig.2.Panel of SMEs interviewed
PLM ADOPTION INDICATORS
The concept of adoption may be defined as a process composed of a certain number of steps by which a potential adopter must pass before accepting the new product, new service or new idea [START_REF] Frambach | Organizational innovation adoption: A multilevel framework of determinants and opportunities for future research[END_REF]. Adoption can be seen as an individual adoption and organizational adoption. The individual one focuses on user behaviour according to new technology and have an impact on the investment in IT technology [START_REF] Magni | Intra-organizational relationships and technology acceptance[END_REF]. In the organisational adoption the organisation forms an opinion of the new technology and assesses it. Based on this, organisation makes the decision to purchase and use this new technology [START_REF] Magni | Intra-organizational relationships and technology acceptance[END_REF]. Based on work done in [START_REF] Messaadia | PLM adoption in SMEs context[END_REF] we developed the questionnaire according to adoption factors (Table 1.).
QUESTIONNAIRE ANALYSIS
The previous step was the construction of the questionnaire, a methodological tool with a set of questions that follow one another in a structured way (Fig. 3). It is presented in electronic form and was administered directly, face to face and by phone. The PLM adoption level is modelled as a linear function of the four adoption factors:

PLM_i = a H_i + b O_i + c T_i + d E_i + ε_i    (1)

For i = 1, …, n, the hypothesis related to the model (Eq. 1) is that the errors ε_i are independent and centred with constant variance: ε_i ~ N(0, σ²). In order to conclude that there is a significant relationship between the PLM level and the adoption factors, the regression (Eq. 1) is used during estimation and to improve the quality of the estimates. The first step is to calculate the four adoption factors from the questionnaire answers. Once the four factors are calculated, the matrix form of our model becomes:
Y = X B + E, where Y = (PLM_1, …, PLM_n)^T, X is the n×4 matrix whose i-th row is (H_i, O_i, T_i, E_i), B = (a, b, c, d)^T and E = (ε_1, …, ε_n)^T    (2)
To solve our equation (Eq. 2) we need to calculate the estimated matrix B̂, given by:

B̂ = (X^T X)^{-1} X^T Y    (3)
Through all these equations (observation) we can give the general regression equation of PLM:

PLM = a H + b O + c T + d E + ε    (4)
The methodology adopted started by determining (estimating) the a, b, c, d parameters of the multiple-regression function. The result of the estimation is denoted â, b̂, ĉ, d̂. For this, we chose the "mean square error" method, computed with Matlab. In the second step, we calculate the dependency between the PLM level (result of the multiple regression) and the adoption factors (H, O, T and E) through the regression coefficient (R), and especially the determination coefficient (D).
Where:

D = R² = SSR / SST, with SSR the sum of squares due to the regression, SST the total sum of squares and SSE the sum of squares of the errors, so that SST = SSR + SSE. If |R| → 1, we have a strong dependence and a good regression.
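For illustration, the estimation of Eq. (3) and the determination coefficient can be reproduced with a few lines of numpy on invented factor scores (the real computation was done in Matlab on the survey data, so the numbers below are placeholders).

import numpy as np

# Invented (H, O, T, E) factor scores for six SMEs and their PLM evaluation.
X = np.array([[3.0, 2.5, 3.5, 2.0],
              [4.0, 3.5, 3.0, 2.5],
              [2.0, 1.5, 2.5, 1.0],
              [3.5, 4.0, 4.0, 3.0],
              [1.5, 2.0, 1.5, 1.5],
              [2.5, 3.0, 2.0, 2.0]])
y = np.array([2.6, 3.4, 1.7, 3.8, 1.8, 2.7])

B_hat = np.linalg.inv(X.T @ X) @ X.T @ y        # Eq. (3): estimates of a, b, c, d
residuals = y - X @ B_hat
SSE = float(residuals @ residuals)
SST = float(((y - y.mean()) ** 2).sum())
R2 = 1 - SSE / SST                              # determination coefficient
print(B_hat, R2)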
Numerical Results
After the investigation, the PLM-Eval-Tool generates a data table (Fig. 4) of evaluated responses that is used to build our adoption model. Once the data were collected, we applied our approach to obtain the estimated parameters â, b̂, ĉ, d̂ through Eq. (3).
Fig.4. Brief view of collected data
(â, b̂, ĉ, d̂) = (0.0697, 0.6053, 0.1958, 0.1137)    (5)
With a determination coefficient of 0.9841, which is considered a very good regression and validates the proposed equation (Eq. 1).
The numerical result equation is:
PLM_Evaluation = 0.0697 H + 0.6053 O + 0.1958 T + 0.1137 E    (6)
Result discussion
Concerning the error, we will consider the highest one, which is equal to 0.0128.
This means that all values of PLM_Evaluation will be considered with a margin of ± 0.0128. We can also determine confidence intervals for the parameters a, b, c and d using the Student law t_{α,ν}, where α is the confidence threshold (tolerance error rate), chosen here as α = 0.05, and ν = 4 is the degree of freedom (the number of parameters); σ_â is the standard deviation of the estimate (the square root of its variance). In our case, t_{α,ν} = t_{0.05,4} = 2.132 (Fig. 5). Using data from a sample, the probability that the observed values are the chance result of sampling, assuming the null hypothesis (H0) is true, is calculated. If this probability turns out to be smaller than the significance level of the test, the null hypothesis is rejected. The hypotheses are H0: a = 0 and H1: a ≠ 0.
For this we will calculate:

T = |â| / σ_â

Then we will compare it to the value t_{0.05;4} = 2.13. If T ≤ t_{0.05;4} = 2.13, we accept H0: a = 0; the H factor does not influence the PLM level, and we will then rebuild another regression equation without H. The same analysis was done for b, c and d.
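Continuing the numpy illustration above (same invented X and y), the standard deviation of each estimate and the corresponding test can be sketched as follows; the critical value 2.132 is the one quoted in the paper for α = 0.05 and 4 degrees of freedom.

import numpy as np

X = np.array([[3.0, 2.5, 3.5, 2.0],
              [4.0, 3.5, 3.0, 2.5],
              [2.0, 1.5, 2.5, 1.0],
              [3.5, 4.0, 4.0, 3.0],
              [1.5, 2.0, 1.5, 1.5],
              [2.5, 3.0, 2.0, 2.0]])
y = np.array([2.6, 3.4, 1.7, 3.8, 1.8, 2.7])

B_hat = np.linalg.inv(X.T @ X) @ X.T @ y
residuals = y - X @ B_hat
n, p = X.shape
s2 = float(residuals @ residuals) / (n - p)               # residual variance estimate
sigma = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))     # std. deviation of each estimate

t_critical = 2.132                                        # t_{0.05, 4} as used in the paper
for name, estimate, sd in zip("abcd", B_hat, sigma):
    T = abs(estimate) / sd
    verdict = "factor kept" if T > t_critical else "factor dropped, regression refitted without it"
    print(name, round(T, 2), verdict)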
DISCUSSION
Once the model was developed, another aspect of the analysis was explored, that of the recommendations. Indeed, the PLM-Eval-Tool also offers a view (Fig. 6) of the results according to such factors as change management, structured sharing, extended enterprise, evaluation capacity and willingness to integrate. These factors are given a numerical focus, and the first findings of the SME analysis are:
• 30% of companies consider themselves to be under-equipped with regard to information technology. • Companies recognize that information technology is very much involved in the development process, but for the majority of them organizational aspects and informal exchanges are decisive.
• They believe that they have the in-house skills to anticipate and evaluate techno- According to obtained results, here is a list of first actions that we propose to implement:
• To make the players in the sector aware of the evolution of this increasingly digital environment.
• Diagnose the existing digital chaining in companies to promote the benefits of the PLM approach. (Processes, tools, skills, etc.) • To propose levers of competitiveness by the identification of "Mutualized Services" and "Software as a Service" solutions.
• To propose devices to gain skills and accompany the change management of manufacturers, equipment manufacturers, to the SMEs in the region.
CONCLUSION
The statistical analysis allowed us to develop a mathematical model to evaluate the adoption of an SME in terms of PLM. Thus, SMEs will be able to carry out a first self-evaluation without calling on costly consultants. However, this model will have to be improved with results from more SMEs, and by taking into account the different activity sectors.
As future work, we envisage working on several case studies (deployment in France) in order to improve the mathematical model. Also, another work will be carried out in order to generate recommendations automatically. The aim of this approach is to offer SMEs a tool for analysis and decision-making in the upstream stage of the introduction or adoption of PLM tools.
ACKNOWLEDGEMENT
Acknowledgement is made to PFA automotive which has initiated this study around the technical information, processes and skills management system, which provides data structuring for the extended company with the support of the DIRECCTE IdF, the RAVI for the identification of companies and the CETIM to conduct the interviews.
Fig. 3. PLM-Eval-Tool: questionnaire
Fig. 5. Student table
Fig. 6. Radar showing the average of the results obtained by the companies that responded to the questionnaire. Scaling from 0: Very low to 5: Very good
Table 1. Adoption factors according to the four axes

Human factor:
Ability to assess technological opportunities (FH1)
Resistance to change (FH2)
The learning effects of previous use of ICT technology (FH3)
Relative advantage (FH4)
Risk aversion (FH5)
Emphasis on quality (FH6)

Organisational factor:
Average workforce size of the SME between 50 and 200 (FO1)
Age of the SME (FO2)
Competitive environment (FO3)
Rank of the SME (FO4)
Geographical proximity (FO5)
Number of adopters (FO6)
Interdependencies and collaboration (FO7)
Existing leading firms (OEM) in your economic environment (FO8)
Informal communication mode (FO9)
Existing innovation process (FO10)
Knowledge management (FO11)
Process synchronization (FO12)
Existing R&D activities (FO13)
Existing certified (QM) system (FO14)

Technological factor:
The position of the SME related to ICT technologies (FT1)
Interoperability (FT2)
Ergonomics (FT3)
Compatibility with similar technology (FT4)
Compatibility with needs and existing processes (FT5)
How the technology is evaluated before adoption (FT6)
Have you had the opportunity to test the technology before its adoption (FT7)
Complexity (FT8)
The frequency of new technology integration (FT9)
Level of skill and knowledge (FT10)
Existing software (PDM, CAD/CAM, ERP) (FT11)

Economical factor:
Indirect costs (FE1)
"975147",
"1030726",
"6029",
"1249457"
] | [
"1059165",
"325381",
"1059165",
"1059165"
] |
01764219 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764219/file/462132_1_En_25_Chapter.pdf | Joel Sauza Bedolla
Gianluca D'antonio
email: [email protected]
Frédéric Segonds
email: [email protected]
Paolo Chiabert
email: [email protected]
PLM in Engineering Education: a pilot study for insights on actual and future trends
Keywords: Product Lifecycle Management, Education, Survey
Universities around the world are teaching PLM following different strategies, at different degree levels and presenting this approach from different perspectives. This paper aims to provide preliminary results for a comprehensive review concerning the state of the art in PLM education. This contribution presents the design and analysis of a questionnaire that has been submitted to academics in Italy and France, and companies involved in a specific Master program on PLM. The main goal of the survey is to collect objective and quantitative data, as well as opinions and ideas gained from education expertise. The collected results enable to depict the state of the art of PLM education in Italian universities and to gain some insights concerning the French approach; the structure of the survey is validated for further worldwide submission.
Introduction
Product Lifecycle Management (PLM) is a key factor for innovation. The PLM approach to support complex goods manufacturing is now considered as one of the major technological and organizational challenges of this decade to cope with the shortening of product lifecycles [START_REF] Garetti | Organisational change and knowledge management in PLM implementation[END_REF]. Further, in a globalized world, products are often designed and manufactured in several locations worldwide, in "extreme" collaborative environments.
To deal with these challenges and maintain their competitiveness, companies and professional organizations need employees to own a basic understanding of engineering practices, and to be able to perform effectively, autonomously, in a team environment [START_REF] Chen | Web-based mechanical engineering design education environment simulating design firms[END_REF]. Traditional methodologies for design projects (i.e. with collocated teams and synchronous work) could be effective until a few decades ago, but they are insufficient nowadays. Thus, engineering education has changed in order to provide students with some experience in collaborative product development during their studies. It is essential to train students to Computer Supported Collaborative Work (CSCW) [START_REF] Pezeshki | Preparing undergraduate mechanical engineering students for the global marketplace -New demands and requirements[END_REF], and PLM is a means for students to structure their design methodology. Indeed, before starting an efficient professional collaboration, future engineers must be mindful of how this approach works, and how tasks can be split between stakeholders. Thus, from an educational point of view, the PLM approach can be considered as a sophisticated analysis and visualization tool that enables students to improve their problem solving and design skills, as well as their understanding of engineering systems behaviour [START_REF] Chen | Web-based mechanical engineering design education environment simulating design firms[END_REF]. Moreover, PLM can also be a solution to face one of the main problems in our educational system: the fragmentation of the knowledge and its lack of depth [START_REF] Pezeshki | Preparing undergraduate mechanical engineering students for the global marketplace -New demands and requirements[END_REF].
The main research question is therefore: "How can we, as engineering educators, respond to global demands to make our students more productive, effective learners? And how can PLM help us to achieve this goal?". In the current state of the art, the information about PLM education is fragmented. Hence, the aim of this paper is to propose a survey structure to collect quantitative data about the existing university courses in PLM, and to identify the most common practices and possible improvements to adhere more closely to the needs of manufacturers.
The remainder of the paper is organized as follows: in section 2, an analysis of literature concerning recent changes in educational practices in engineering education is presented and the state of the art of PLM education is settled. Then, the survey structure is presented in section 3. The results are presented in section 4: data collected from Italian universities are presented, as well as the results of the test performed in France to validate the survey structure. Finally, in section 5, some conclusive remarks and hints for future work are provided.
State of the art
In the literature, there is no evidence of a complete and full review of how PLM is taught in higher institutions around the world. Still, partial works can be found. Gandhi [START_REF] Gandhi | Product Lifecycle Management in Education: Key to Innovation in Engineering and Technology[END_REF] presents the educational strategy employed by three US universities. Fielding et al. [START_REF] Fielding | Product lifecycle management in design and engineering education: International perspectives[END_REF] show examples of PLM and collaborative practices implemented in higher education institutions from the United States and France. Sauza et al. [START_REF] Sauza-Bedolla | PLM in a didactic environment: the path to smart factory[END_REF] performed a two-step study. The first step consisted in a systematic search for keywords (i.e. PLM education, PLM certification, PLM course, PLM training) in the principal citation databases. Nevertheless, the analysis of scientific literature was limited to some specific programs in a limited number of countries. For this reason, the research was extended to direct searches on universities' websites. The inclusion criterion for institutions was attendance at one of the two main events in scientific and industrial use of PLM: (i) the IFIP working group 5.1 PLM International Conference, and (ii) Partners for the Advancement of Collaborative Engineering Education. The review process covered 191 universities from Europe, Asia, America and Oceania. It was found that there is a high variety in the topics that are presented to students, the departments involved in the course management, the education strategy and the number of hours related to PLM.
The analysis presents useful insights. However, the research methodology based on website analysis was not sufficient and may present some gaps. In some cases, websites did not offer a "search" option, which limited the accessibility of information. Moreover, during the research, some language issues were experienced: not all of the universities offered information in English, and for this reason those universities were not considered. In some other cases, information was presented in curricula that can be accessed only by institution members. The specific didactic nature of this study lies precisely in the fact that it brings researchers and professors from engineering education to explain their vision of how PLM is taught. The objective is to achieve real participatory innovation based on the integration of PLM within a proven training curriculum in engineering education. One step further, we advocate that by stimulating the desire to appropriate knowledge, innovative courses are also likely to convince a broad swath of students averse to traditional teaching methods and much more in phase with their definition as "digital natives" [START_REF] Prensky | Digital natives, digital immigrants. Part 1[END_REF]. This paper is intended to be the first step of a broader effort to map the actual situation of PLM education around the world. This contribution presents the methodology employed to scientifically collect information from universities. Before going global, a first test has been made to evaluate the robustness of the tool in the authors' countries of origin, where the knowledge of the university system structure was clear.
Methodology
In order to get insights on the state of the art in PLM education, a survey structured in three parts has been prepared.
The first part is named "Presentation": the recipient is asked to state the name of his institution and to provide an email address for possible future feedback. Further, he is asked to state whether he is aware of the existence of courses in PLM in his institution, and whether he is in charge of such courses. In case of a positive reply, the recipient is invited to fill in the subsequent part of the survey.
The second part of the survey aims to collect objective information to describe the PLM course. In particular, the following data are required:
Finally, in the third part of the survey, subjective data are collected to measure the interest of the recipient in teaching the PLM approach and the interest of the students in this topic (both on a 1-5 Likert scale). Further, an opinion about the duration of the course is required (not enough/proper/excessive), and whether the presentation of applied case studies or the contribution of industrial experts is included in the course. A space for further free comments is also available.
The invitations to fill in the survey have been organized in two steps. First, a full experiment has been made in Italy. The official database owned by the Italian Ministry of Education and University has been accessed to identify the academics to be involved. In Italy, academics are grouped according to the main topic of their research. Therefore, the contacts of all the professors and researchers working in the topics closest to PLM have been downloaded, namely: (i) Design and methods of industrial engineering; (ii) Technologies and production systems; (iii) Industrial plants; (iv) Economics and management Engineering; (v) Information elaboration systems; (vi) Computer science. This research led to a database consisting of 2208 people from 64 public universities. A first invitation and a single reminder have been sent; the survey, realized through a Google Form, has been made accessible online for 2 weeks in January 2017.
The second step consisted in inviting a small set of academics from French universities through focused e-mails: 11 replies have been collected. Further, a similar survey has been submitted to French companies employing people who attended a Master in PLM in the years 2015 and 2016.
Survey data analysis
Results from the Italian sample
The overall number of replies from Italian academics is equal to 213, from 49 different institutions. Among this sample, 124 people do not have information about PLM courses in their universities; therefore, they were not asked further questions. The 89 respondents aware of a PLM course belong to 36 universities; among them, 40 professors are directly involved in teaching PLM. A synthetic overview of the results is provided in Fig. 1; the map of the Italian universities in which PLM is taught is shown in Fig. 2.
Type of course. The teachers involved in teaching PLM state that this topic is mostly dealt with in broader courses, such as Drawing, Industrial Plants, Management.
Practical activities. Among the 40 PLM teachers, 25 do not use software to support their educational activity. Some courses deploy Arena, Enovia, the PLM module embedded in SAP, Windchill. Other solutions, developed by smaller software houses, are also used. Among the respondents, no one uses Aras Innovator, a PLM solution that has a license model inspired by open source products. However, for the majority of the teachers (27), industrial case studies are presented to show the role of PLM in managing product information and to provide students with a practical demonstration of the possible benefits coming from its implementation. Furthermore, interventions from industrial experts, aiming to show the practical implications of the theoretical notions taught in frontal lectures, are planned by 21 teachers.
Interest in PLM. The interest of students in PLM is variable: the replies are equally distributed among "Low" or "Fair" (25 occurrences) and "High" or "Very high" (25 occurrences). The interest of respondents in PLM is variable too: 34 people replied "Strongly low", "Low" or "Fair"; 34 people replied "High" or "Very high"; the remainder of the sample states "I don't know". As expected, the interest in PLM of people teaching this topic is high: 29 people replied "High" or "Very high" (out of a sample of 40 teachers).
Results from the French sample
On the French side, 11 replies were collected from 7 different Universities and Schools of Engineering. All the respondents teach PLM courses in their Universities. Similarly to the Italian sample, PLM is mostly taught at the M.Sc. level: besides a Master course, one B.Sc. and 8 M.Sc. courses were mapped. Most of the courses (8) are devoted to Mechanical Engineers. In 6 cases, a specific course is designed for PLM; further, in the Ecole supérieure d'électricité located in Châtenay-Malabry the so-called 'PLM week' is organized. The duration of the PLM courses mainly ranges between 32 and 64 hours, which is an appropriate duration, according to the teachers; conversely, in the broader courses, the time spent in teaching PLM is lower than 6 hours. The only Master mapped through the survey is held at the Ecole Nationale Supérieure d'Arts et Métiers (Paris): the duration is equal to 350 hours, with high interest of the participants.
A reduced version of the survey was also sent to a small set of French companies to map internal courses in PLM. 7 replies have been obtained: 3 were from large companies in the fields of aeronautics, textile and consulting, and 4 were small-medium companies from the PLM and BIM sector. 57% of these companies declare they have courses dedicated to PLM. The names of the courses are various. In particular, a textile enterprise has a course structured in 11 modules following its business processes.
Conclusions
The present paper presented a methodology for a systematic overview of university education in PLM. A survey has been submitted to all the Italian academics performing research and teaching activities in fields related to PLM. The percentage of respondents in the Italian experiment was approximately 10%, which is in line with the expectations of the authors: these replies made it possible to identify PLM courses in 36 different universities, mainly located in the north-central part of the country, which is characterized by a higher density of industries. However, for a successful realization of the survey, a complete database of university teachers is mandatory.
The proof-of-concept realized on the French sample led to good results: no critical issues have been found in the survey. Hence, the next steps of the work are the creation of the recipient database and the full-scale experiment. Then, the experiment can be replicated in other countries, to obtain a more exhaustive picture of PLM education. We plan to rely on Bloom's taxonomy of objectives to sharpen the skills taught in PLM courses [START_REF] Bloom | Bloom taxonomy of educattional objectives[END_REF].
Our research question was: "How can we, as engineering educators, respond to global demands to make our students more productive, effective learners? And how can PLM help us to achieve this goal?". A first answer to this research question is the proposal, as an ultimate goal, of the creation of a network of PLM teachers, which will enable mutual exchange of expertise, teaching material, exercises and practices. To reach this goal and to widen our approach to the IFIP WG 5.1 community, a first step could be the creation of a shared storage space for documents that allows any user to teach PLM at any level.
- The level at which the course is taught (among B.Sc, M.Sc, Ph.D, Master);
- The curriculum in which the course is taught (free reply);
- At which year the course is taught, and the overall duration of the curriculum (values constrained between 1 and 5);
- The department in charge of the course (free reply);
- If PLM is taught in a devoted course (Yes/No) or as a topic in a broader course (Yes/No);
- The name of the course (free reply) and its duration;
- If software training is included (Yes/No) and which software is used.
Degree level. In the sample of 36 universities, PLM is taught at different levels. The Master of Science is the most common: 53 courses have been identified. In 22 cases, PLM is also taught at the Bachelor level. Furthermore, there are 4 courses devoted to Ph.D. candidates and 2 Masters are organized. The latter two Master courses are organized in the Polytechnic universities of Torino and Milano; however, the first one has recently moved to the University of Torino.
Curricula. There is a variety of curricula involved in teaching PLM. Courses for Management Engineering and Mechanical Engineering are organized (23 occurrences each). The area of Computer Science is also involved (23 occurrences): topics concerning the architecture of PLM systems, or the so-called Software Lifecycle Management, are taught. Moreover, PLM courses are also provided in Industrial Engineering (6 occurrences), Automotive Engineering (B.Sc. at Polytechnic University di Torino) and Building Engineering (Ph.D. course at Politecnico di Bari).
Fig. 1. Synthesis of the results obtained through both the Italian and the French PLM teachers.
Fig. 2. Map of the Italian universities in which PLM is taught.
Acknowledgments
The authors are grateful to the colleagues and industrial partners who replied to the survey. | 17,672 | [
"990126",
"990127",
"916993",
"990128"
] | [
"6571",
"6571",
"301320",
"6571"
] |
01764438 | en | [
"phys"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01764438/file/UltracoldVFShortHAL.pdf | Improving the accuracy of atom interferometers with ultracold sources R. Karcher, 1 A. Imanaliev, 1 S. Merlet, 1 and F. Pereira Dos Santos 1 1 LNE-SYRTE, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, 61 avenue de l'Observatoire 75014 Paris (Dated: April 11, 2018) We report on the implementation of ultracold atoms as a source in a state of the art atom gravimeter. We perform gravity measurements with 10 nm/s 2 statistical uncertainties in a so-far unexplored temperature range for such a high accuracy sensor, down to 50 nK. This allows for an improved characterization of the most limiting systematic effect, related to wavefront aberrations of light beam splitters. A thorough model of the impact of this effect onto the measurement is developed and a method is proposed to correct for this bias based on the extrapolation of the measurements down to zero temperature. Finally, an uncertainty of 13 nm/s 2 is obtained in the evaluation of this systematic effect, which can be improved further by performing measurements at even lower temperatures. Our results clearly demonstrate the benefit brought by ultracold atoms to the metrological study of free falling atom interferometers. By tackling their main limitation, our method allows reaching record-breaking accuracies for inertial sensors based on atom interferometry.
Atom gravimeters constitute today the most mature application of cold atom inertial sensors based on atom interferometry. They reach performances better than their classical counterparts, the free fall corner cube gravimeters, both in terms of short term sensitivity [1,2] and long term stability [3]. They offer the possibility to perform high repetition rate continuous measurements over extended periods of time [3,4], which represents an operation mode inaccessible to other absolute gravimeters. These features have motivated the development of commercial cold atom gravimeters [5], addressing in particular applications in the fields of geophysics. Nevertheless, the accuracy of these sensors is today slightly worse. Best accuracies in the 30-40 nm/s² range have been reported [3,4] and validated through the participation of these instruments in international comparisons of absolute gravimeters since 2009 [6,7], to be compared with the accuracy of the best commercial state of the art corner cube gravimeters, of the order of 20 nm/s² [START_REF]FG5-X specifications[END_REF].
The dominant limit in the accuracy of cold atom gravimeters is due to the wavefront distortions of the laser beamsplitters. This effect is related to the ballistic expansion of the atomic source through its motion in the beamsplitter laser beams, as illustrated in figure 1, and cancels out at zero atomic temperature. In practice, it has been tuned by increasing the atomic temperature [4] and/or by using truncation methods, such as varying the size of the detection area [START_REF] Schkolnik | [END_REF] or of the Raman laser beam [10]. Comparing these measurements with measured or modelled wavefronts makes it possible to gain insight into the amplitude of the effect, and to estimate the uncertainty of its evaluation. It can be reduced by improving the optical quality of the optical elements of the interferometer lasers, or by operating the interferometer in a cavity [11], which filters the spatial mode of the lasers, and/or by compensating the wavefront distortions, using for instance a deformable mirror [12].
The strategy we pursue here consists in reducing the atomic temperature below the few µK limit imposed by cooling in optical molasses in order to study the temperature dependence of the wavefront aberration bias over a wider range, and down to the lowest possible temperature. For that, we use ultracold atoms produced by evaporative cooling as the atomic source in our interferometer. Such sources, possibly Bose-Einstein condensed, show high brightness and reduced spatial and velocity spread. These features allow for a drastic increase in the interaction time, on the ground [13] or in space [14], and for the efficient implementation of large momentum transfer beam splitters [15][16][17]. The potential gain in sensitivity has been largely demonstrated (for instance, by up to two orders of magnitude in [13]). But it is only recently that a gain was demonstrated in the measurement sensitivity of an actual inertial quantity [18], when compared to the best sensors based on the more traditional approach exploiting two photon Raman transitions and laser cooled atoms. Here, implementing such a source in a state of the art absolute gravimeter, we demonstrate that ultracold atom sources also improve the accuracy of atom interferometers, by providing an ideal tool for the precise study of their most limiting systematic effect.
We briefly detail here the main features of our cold atom gravimeter. A more detailed description can be found in [4]. It is based on an atom interferometer [19] using two-photon Raman transitions, performed on free-falling 87Rb atoms. A measurement sequence is as follows. We start by collecting a sample of cold or ultracold atoms, which is then released in free fall. After state preparation, a sequence of three Raman pulses drives a Mach-Zehnder type interferometer. These pulses, separated by free evolution times of duration T = 80 ms, respectively split, redirect and recombine the matter waves, creating a two-wave interferometer. The total duration of the interferometer is thus 2T = 160 ms. The populations in the two interferometer output ports N1 and N2 are finally measured by a state selective fluorescence detection method, and the transition probability P is calculated out of these populations (P = N1/(N1 + N2)). This transition probability depends on the phase difference accumulated by the matter waves along the two arms of the interferometer, which is, in our geometry, given by Φ = k · g T², where k is the effective wave vector of the Raman transition and g the gravity acceleration. Gravity measurements are then repeated in a cyclic manner. Using laser cooled atoms, repetition rates of about 3 Hz are achieved, which allows for a fast averaging of the interferometer phase noise dominated by parasitic vibrations. We have demonstrated a best short term sensitivity of 56 nm·s⁻² at 1 s measurement time [1], which averages down to below 1 nm·s⁻². These performances are comparable to the ones of the two other best atom gravimeters developed so far [2,3]. The use of ultracold atoms reduces the cycling rate due to the increased duration of the preparation of the source. Indeed, we first load the magneto-optical trap for 1 s (instead of 80 ms only when using laser cooled atoms) before transferring the atoms into a far-detuned dipole trap realized using a 30 W fibre laser at 1550 nm. It is first focused onto the atoms with a 170 µm waist (radius at 1/e²), before being sent back at a 90° angle and tightly focused with a 27 µm waist, forming a crossed dipole trap in the horizontal plane. The cooling and repumping lasers are then switched off, and we end up with about 3 × 10⁸ atoms trapped at a temperature of 26 µK. Evaporative cooling is then implemented by decreasing the laser powers from 14.5 W and 8 W to 2.9 W and 100 mW typically in the two arms over a duration of 3 s. We finally end up with atomic samples in the low 100 nK range containing 10⁴ atoms. Changing the powers at the end of the evaporation sequence makes it possible to vary the temperature over a large range, from 50 nK to 7 µK. The total preparation time is then 4.22 s, and the cycle time 4.49 s, which reduces the repetition rate down to 0.22 Hz. Furthermore, at the lowest temperatures, the number of atoms is reduced down to the level where detection noise becomes comparable to vibration noise. The short term sensitivity is thus significantly degraded and varies in our experiment in the 1200-3000 nm·s⁻² range at 1 s, depending on the final temperature of the sample.
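To make the scale factor concrete, the following minimal numerical sketch (Python, not part of the original experiment; the wavelength, effective wave vector and fringe parameters are illustrative assumptions) evaluates the interferometer phase Φ = k g T² and the corresponding two-wave transition probability:

    import numpy as np

    LAMBDA = 780e-9                    # assumed Raman laser wavelength (Rb D2 line), m
    K_EFF = 2 * (2 * np.pi / LAMBDA)   # effective wave vector for counter-propagating beams, rad/m
    T = 0.080                          # free evolution time between pulses, s (value from the text)

    def transition_probability(g, phi_laser=0.0, offset=0.5, contrast=1.0):
        # two-wave fringe: P = offset - (contrast / 2) * cos(k_eff * g * T**2 + phi_laser)
        phi = K_EFF * g * T**2 + phi_laser
        return offset - 0.5 * contrast * np.cos(phi)

    # a 10 nm/s^2 change of g shifts the interferometer phase by about 1 mrad:
    print(K_EFF * 10e-9 * T**2)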
We performed differential measurements of the gravity value as a function of the temperature of the source, which we varied over more than two orders of magnitude. The results are displayed as black circles in figure 2, which reveals a non-trivial behaviour, with a fairly flat trend in the 2-7 µK range, consistent with previous measurements obtained with optical molasses [4], and a rapid variation of the measurements below 2 µK. This shows that a linear extrapolation to zero temperature based on high temperature data taken with laser cooled atoms would lead to a significant error. These measurements have been performed for two opposite orientations of the experiment (with respect to the direction of the Earth rotation vector) showing the same behaviour, indicating that these effects are not related to Coriolis acceleration [4]. Moreover, the measurements are performed by averaging measurements with two different orientations of the Raman wavevector, which suppresses many systematic effects, such as differential light shifts of the Raman lasers that could vary with the temperature [4].
To interpret these data, we have developed a Monte Carlo model of the experiment, which averages the contributions to the interferometer signal of atoms randomly drawn in their initial position and velocity distributions. It takes into account the selection and interferometer processes, by including the finite size and finite coupling of the Raman lasers, and the detection process, whose finite field of view cuts the contribution of the hottest atoms to the measured atomic populations [20]. This model is used to calculate the effect of wavefront aberrations onto the gravity measurement as a function of the experimental parameters. For that, we calculate for each randomly drawn atom its trajectory and positions at the three pulses in the Raman beams, and take into account the phase shifts which are imparted to the atomic wavepackets at the Raman pulses: δφ = k·δz_i, where δz_i is the flatness defect at the i-th pulse. We sum the contributions of a packet of 10⁴ atoms to the measured atomic populations to evaluate a mean transition probability. The mean phase shift is finally determined from consecutive such determinations of mean transition probabilities using a numerical integrator onto the central fringe of the interferometer, analogous to the measurement protocol used in the experiment [4]. With 10⁴ such packets, we evaluate the interferometer phase shifts with relative uncertainties smaller than 10⁻³. We decompose the aberrations δz onto the basis of Zernike polynomials Z_n^m, taking as a reference radius the finite size of the Raman beam (set by a 28 mm diameter aperture in the optical system). Assuming that the atoms are initially centred on the Raman mirror and in the detection zone, the effect of polynomials with no rotation symmetry (m ≠ 0) averages to zero, due to the symmetry of the position and velocity distributions [12]. We thus consider here only Zernike polynomials with no angular dependence, which correspond to the curvature of the wavefront (or defocus) and to higher order spherical aberrations.
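The following simplified Monte Carlo sketch (Python) illustrates this averaging procedure for a pure defocus; the beam waist, detection radius, initial cloud size and defocus amplitude used here are illustrative assumptions, and the Gaussian intensity weight is only a crude stand-in for the actual finite Raman coupling of the model:

    import numpy as np

    rng = np.random.default_rng(0)
    KB, M_RB = 1.380649e-23, 1.443e-25      # Boltzmann constant (J/K), 87Rb mass (kg)
    K_EFF, T = 1.61e7, 0.080                # effective wave vector (rad/m), pulse separation (s)
    R_REF = 14e-3                           # reference radius of the Raman aperture (m)
    W_RAMAN, R_DET = 12e-3, 8e-3            # assumed Raman beam waist and detection radius (m)
    A0 = 10e-9                              # defocus: delta_z(r) = A0 * (1 - 2 r^2), r normalized to R_REF

    def defocus(x, y):
        r2 = (x**2 + y**2) / R_REF**2
        return A0 * (1.0 - 2.0 * r2)

    def mc_gravity_bias(temperature, n_atoms=200000, sigma_pos=0.5e-3):
        sigma_v = np.sqrt(KB * temperature / M_RB)          # 1D thermal velocity spread
        x0, y0 = rng.normal(0.0, sigma_pos, (2, n_atoms))   # initial transverse positions
        vx, vy = rng.normal(0.0, sigma_v, (2, n_atoms))     # initial transverse velocities
        phase = np.zeros(n_atoms)
        weight = np.ones(n_atoms)
        for coeff, t in [(1.0, 0.0), (-2.0, T), (1.0, 2 * T)]:   # pi/2 - pi - pi/2 signature
            x, y = x0 + vx * t, y0 + vy * t
            phase += coeff * K_EFF * defocus(x, y)
            weight *= np.exp(-2 * (x**2 + y**2) / W_RAMAN**2)    # crude finite-beam weighting
        xd, yd = x0 + vx * 2 * T, y0 + vy * 2 * T
        weight *= (xd**2 + yd**2 < R_DET**2)                     # finite detection field of view
        mean_phase = np.sum(weight * phase) / np.sum(weight)
        return mean_phase / (K_EFF * T**2)                       # phase bias converted to a bias on g (m/s^2)

    for temp in (50e-9, 650e-9, 1.8e-6, 7e-6):
        print(temp, mc_gravity_bias(temp))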
To illustrate the impact of finite size effects, we display in figure 3 calculated gravity shifts corresponding to different cases, for a defocus (Z_2^0) with a peak-to-peak amplitude of 2a_0 = 20 nm across the size of the reference radius, which corresponds to δz(r) = a_0(1 - 2r²), with r the normalized radial distance. The black squares correspond to the ideal case of infinite Raman laser radius size and detection field of view and give a linear dependence versus temperature. The circles (resp. triangles) correspond to the case of finite beam waist and infinite detection field of view (resp. infinite beam waist and finite detection field of view), and finally diamonds include both finite size effects. Deviations from the linear behaviour arise from the reduction or suppression of the contribution of the hottest atoms. The effect of the finite Raman beam waist is found to be more important than the effect of the finite detection area. Finally, we calculate for this simple study case a bias of -63 nm/s² at the temperature of 1.8 µK, for a peak-to-peak amplitude of 20 nm. This implies that, at the temperature of laser cooled samples and for a pure curvature, a peak-to-peak amplitude of less than 3 nm (λ/260 PV) over a reference diameter of 28 mm is required for the bias to be smaller than 10 nm/s².
We then calculate the effect of the first 7 Z_n^0 polynomials (for even n ranging from 2 to 14) for the same peak-to-peak amplitude of 2a_0 = 20 nm as a function of the atomic temperature. Figure 4 displays the results obtained, restricted for clarity to the first five polynomials. All orders exhibit as common features a linear behaviour at low temperatures and a trend towards saturation at high temperatures. Interestingly, we find non-monotonic behaviours in the temperature range we explore and the presence of local extrema.
Using the phase shifts calculated at the temperatures of the measurements, the data of figure 2 can now be adjusted, using a weighted least-squares adjustment, by a combination of the contributions of the first Zernike polynomials, which then constitute a finite basis for the decomposition of the wavefront. The adjustment was realized for increasing numbers of polynomials, so as to assess the impact of the truncation of the basis. We give in table I the values of the correlation coefficient R and the extrapolated value at zero temperature as a function of the number of polynomials. We obtain stable values for both R and the extrapolated value at zero temperature, of about -55 nm/s² for numbers of polynomials larger than 5. This indicates that the first 5 polynomials are enough to faithfully reconstruct a model wavefront that reproduces the data well. When increasing the number of polynomials, we indeed find that the reconstructed wavefront is dominated by the lowest polynomial orders. The results of the adjustment with 5 polynomials are displayed as a red line in figure 2 and the 68% confidence bounds as a filled area. The flatness of the reconstructed wavefront at the centre of the Raman laser beam is found to be as small as 20 nm PV (peak-valley) over a diameter of 20 mm. The bias due to the optical aberrations at the reference temperature of 1.8 µK, which corresponds to the temperature of the laser cooled atom source, is thus 56(13) nm/s². Its uncertainty is three times better than its previous evaluation [4], which in principle will accordingly improve our accuracy budget.
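A minimal sketch of such a weighted least-squares adjustment (Python; array names and shapes are purely illustrative, not the actual analysis code) is:

    import numpy as np

    def fit_zernike_amplitudes(g_meas, sigma_meas, model_curves):
        # model_curves: (n_polynomials, n_temperatures) gravity shifts computed for
        # unit-amplitude Z_n^0 polynomials at the measurement temperatures;
        # g_meas, sigma_meas: measured differential shifts and their uncertainties.
        A = model_curves.T / sigma_meas[:, None]     # design matrix, rows weighted by 1/sigma
        b = g_meas / sigma_meas
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        residuals = g_meas - model_curves.T @ coeffs
        return coeffs, residuals

    # Since every modelled curve vanishes at zero temperature, the extrapolated
    # zero-temperature value is the reference measurement corrected by the fitted
    # bias evaluated at the reference temperature (1.8 uK in the text).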
On the other hand, interatomic interactions in ultracold sources can induce significant phase shifts [21,22] and phase diffusion [23], leading to bias and loss of contrast for the interferometer. Nevertheless, the rapid decrease of the atomic density when interrogating the atoms in free fall drastically reduces the impact of interactions [24][25][26]. To investigate this, we have performed a differential measurement for two different atom numbers at the temperature of 650 nK. The number of atoms was varied from 25000 to 5000 by changing the efficiency of a microwave pulse in the preparation phase, which leaves the spatial distribution and temperature unchanged. We measured an unresolved difference of -7(12) nm/s². This allows us to put an upper bound on the effect of interactions, which we find lower than 1 nm/s² per thousand atoms.
The uncertainty in the evaluation of the bias related to optical aberrations can be improved further by performing measurements at even lower temperatures, which will require, in our set-up, improving the efficiency of the evaporative cooling stage. A larger number of atoms would make it possible to limit the degradation of the short term sensitivity and to perform measurements with shorter averaging times. Moreover, absorption imaging with a vertical probe beam would allow for spatially resolved phase measurements across the cloud [13], which would improve the reconstruction of the wavefront. The temperature can also be drastically reduced, down to the low nK range, using delta kick collimation techniques [27,28]. In addition to a reduced ballistic expansion, the use of ultracold atoms also offers a better control of the initial position and mean velocity of the source with respect to laser cooled sources, which suffer from fluctuations induced by polarisation and intensity variations of the cooling laser beams. Such an improved control reduces the fluctuations of systematic effects related to the transverse motion of the atoms, such as the Coriolis acceleration and the bias due to aberrations, and thus will improve the long term stability [26].
With the above-mentioned improvements, and after a careful re-examination of the accuracy budget [7], accuracies better than 10 nm/s² are within reach. This will make quantum sensors based on atom interferometry the best standards in gravimetry. Furthermore, the improved control of systematics and the resulting gain in stability will open new perspectives for applications, in particular in the field of geophysics [29]. Finally, the method proposed here can be applied to any atomic sensor based on light beamsplitters, which are inevitably affected by distortions of the laser wavefronts. The improved control of systematics it provides will have significant impact in high precision measurements with atom interferometry, with important applications to geodesy [30,31], fundamental physics tests [14,32,33] and to the development of highest grade inertial sensors [34].
FIG. 1. (color online) Scheme of the experimental setup, illustrating the effect of wavefront aberrations. Due to their ballistic expansion across the Raman beam, the atoms sample different parasitic phase shifts at the three π/2 - π - π/2 pulses due to distortions in the wavefront (displayed in blue as a distorted surface). This leads to a bias, resulting from the average of the effect over all atomic trajectories, filtered by finite size effects, such as related to the waist and clear aperture of the Raman beam and to the finite detection field of view.
FIG. 2. (color online) Gravity measurements as a function of the atom temperature. The measurements, displayed as black circles, are performed in a differential way, with respect to a reference temperature of 1.8 µK (displayed as a red circle). The red line is a fit to the data with a subset of five Zernike polynomials and the filled area the corresponding 68% confidence area.
FIG. 3. (color online) Calculation of the impact of the size of the Raman beam waist (RB) and of the detection field of view (DFoV) on the gravity shift induced by a defocus as a function of the atomic temperature. The peak-to-peak amplitude of the defocus is 20 nm. The results correspond to four different cases, depending on whether the sizes of the Raman beam waist and detection field of view are taken as finite or infinite.
ACKNOWLEDGMENTS
We acknowledge the contributions from X. Joffrin, J. Velardo and C. Guerlin in earlier stages of this project. We thank R. Geiger and A. Landragin for useful discussions and careful reading of the manuscript. | 19,376 | [
"768017",
"6778"
] | [
"541776",
"541776",
"541776",
"541776"
] |
01764547 | en | [
"info"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01764547/file/texMesh.pdf | Mohamed Boussaha
email: [email protected]
Bruno Vallet
email: [email protected]
Patrick Rives
email: [email protected]
LARGE SCALE TEXTURED MESH RECONSTRUCTION FROM MOBILE MAPPING IMAGES AND LIDAR SCANS
Keywords: Urban scene, Mobile mapping, LiDAR, Oriented images, Surface reconstruction, Texturing
is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France resulting in nearly 2 billion points and 40000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.
INTRODUCTION
Context
Mobile Mapping Systems (MMS) have become more and more popular to map cities from the ground level, allowing for a very interesting compromise between level of detail and productivity. Such MMS are increasingly becoming hybrid, acquiring both images and LiDAR point clouds of the environment. However, these two modalities remain essentially exploited independently, and few works propose to process them jointly. Nevertheless, such a joint exploitation would benefit from the high complementarity of these two sources of information:
• High resolution of the images vs high precision of the LiDAR range measurement.
• Passive RGB measurement vs active intensity measurement in near infrared.
• Different acquisition geometries.
In this paper, we propose a fusion of image and LiDAR information into a single representation: a textured mesh. Textured meshes have been the central representation for virtual scenes in Computer Graphics, massively used in the video games and animation movies industry. Graphics cards are highly optimized for their visualization, and they allow a representation of scenes that holds both their geometry and radiometry. Textured meshes are now gaining more and more attention in the geospatial industry as Digital Elevation Models coupled with orthophotos, which were well adapted for high altitude airborne or space-borne acquisition, are not suited for the newer means of acquisition: closer range platforms (drones, mobile mapping) and oblique imagery.
We believe that this trend will accelerate, such that the geospatial industry will have an increasing need for efficient and high quality surface reconstruction and texturing algorithms that scale up to the massive amounts of data that these new means of acquisition produce.
This paper focuses on:
• using a simple reconstruction approach based on the sensor topology
• adapting the state-of-the-art texturing method of [START_REF] Waechter | Let there be color! Large-scale texturing of 3D reconstructions[END_REF] to mobile mapping images and LiDAR scans
We are able to produce a highly accurate surface mesh with a high level of detail and high resolution textures at city scale.
Related work
In this paper we present a visibility consistent 3D mapping framework to construct large scale urban textured mesh using both oriented images and georeferenced point cloud coming from a terrestrial mobile mapping system. In the following, we give an overview of the various methods related to the design of our pipeline.
From the robotics community perspective, conventional 3D urban mapping approaches usually propose to use LiDAR or camera separately but a minority has recently exploited both data sources to build dense textured maps [START_REF] Romanoni | Mesh-based 3d textured urban mapping[END_REF].
Figure 1. The texturing pipeline
In the literature, both image-based methods [START_REF] Wu | Towards linear-time incremental structure from motion[END_REF][START_REF] Litvinov | Incremental solid modeling from sparse structure-from-motion data with improved visual artifacts removal[END_REF][START_REF] Romanoni | Incremental reconstruction of urban environments by edge-points delaunay triangulation[END_REF] and LiDAR-based methods [START_REF] Hornung | Octomap: an efficient probabilistic 3d mapping framework based on octrees[END_REF][START_REF] Khan | Adaptive rectangular cuboids for 3d mapping[END_REF] often represent the map as a point cloud or a mesh, relying only on geometric properties of the scene and discarding interesting photometric cues, while a faithful 3D textured mesh representation would be useful not only for navigation and localization but also for photo-realistic accurate modeling and visualization.
The computer vision, computer graphics and photogrammetry communities have generated compelling urban texturing results. [START_REF] Sinha | Interactive 3d architectural modeling from unordered photo collections[END_REF] developed an interactive system to texture architectural scenes with planar surfaces from an unordered collection of photographs using cues from structure-from-motion. [START_REF] Tan | Large scale texture mapping of building facades. The International Archives of the Photogrammetry[END_REF] proposed an interactive tool for texturing only building facades using oblique images. [START_REF] Garcia-Dorado | Automatic urban modeling using volumetric reconstruction with surface graph cuts[END_REF] perform impressive work by texturing entire cities. Still, they are restricted to a 2.5D scene representation, they operate exclusively on regular block city structures with planar surfaces, and they treat buildings, ground, and building-ground transitions differently during the texturing process. In order to achieve a consistent texture across patch borders in a setting of unordered registered views, [START_REF] Callieri | Masked photo blending: Mapping dense photographic data set on high-resolution sampled 3d models[END_REF][START_REF] Grammatikopoulos | Automatic multi-view texture mapping of 3d surface projections[END_REF] choose to blend these multiple views by computing a weighted cost indicating the suitability of input image pixels for texturing with respect to angle, proximity to the model and proximity to the depth discontinuities. However, blending images induces strongly visible artifacts in the final model, especially in the case of a multi-view stereo setting, because of the potential inaccuracy in the reconstructed geometry.
While there exists prominent work on texturing urban scenes, we argue that large scale texture mapping should be fully automatic, without user intervention, and efficient enough to handle its computational burden in a reasonable time frame without increasing the geometric complexity of the final model. In contrast to the latter methods, [START_REF] Waechter | Let there be color! Large-scale texturing of 3D reconstructions[END_REF] proposed to use the multi-view stereo technique [START_REF] Frahm | Building rome on a cloudless day[END_REF][START_REF] Furukawa | Towards internet-scale multi-view stereo[END_REF] to perform a surface reconstruction and subsequently select a single view per face based on a pairwise Markov random field taking into account the viewing angle, the proximity to the model and the resolution of the image. Then, color discontinuities are properly adjusted by looking up the vertex colors along all adjacent seam edges. We consider the method of [START_REF] Waechter | Let there be color! Large-scale texturing of 3D reconstructions[END_REF] as a base for our work since it is the first comprehensive framework for texture mapping that enables fast and scalable processing.
In our work, we abstain from the image-based surface reconstruction step for multiple reasons. As pointed out above, methods based on structure-from-motion and multi-view stereo techniques usually yield less accurate camera parameters; hence the reconstructed geometry might not be faithful to the underlying model compared to LiDAR-based methods [START_REF] Pollefeys | Detailed real-time urban 3d reconstruction from video[END_REF], which results in ghosting effects and strongly visible seams in the final model. Besides, such methods do not allow direct and automatic processing of raw data, due to the parameter tuning required for each dataset, and in certain cases their computational cost may become prohibitive. [START_REF] Caraffa | 3d octree based watertight mesh generation from ubiquitous data[END_REF] proposed a generic framework to generate an octree-cell based mesh and texture it with the regularized reflectance of the LiDAR. Instead, we propose a simple but fast algorithm to construct a mesh from the raw LiDAR scans and produce photo-realistic textured models. In Figure 1, we depict the whole pipeline to generate large scale high quality textured models leveraging the georeferenced raw data. Then, we construct a 3D mesh representation of the urban scene and subsequently fuse it with the preprocessed images to get the final model.
The rest of the paper is organized as follows: In Section 2 we present the data acquisition system. A fast and scalable mesh reconstruction algorithm is discussed in Section 3. Section 4 explains the texturing approach. We show our experimental results in Section 5. Finally, in Section 6, we conclude the paper and propose some future directions of research.
DATA ACQUISITION
The LiDAR scanner used is a RIEGL VQ-250 that rotates at 100 Hz and emits 3000 pulses per rotation with 0 to 8 echoes recorded for each pulse, producing an average of 250 thousand points per second in typical urban scenes. The sensor records information for each pulse (direction (θ, φ), time of emission) and echo (amplitude, range, deviation).
The MMS is also mounted with a georeferencing system combining a GPS, an inertial unit and an odometer. This system outputs the reference frame of the system in geographical coordinates at 100Hz. Combining this information with the information recorded by the LiDAR scanner and its calibration, a point cloud in (x, y, z) coordinates can be constructed. In the same way, using the intrinsic and extrinsic calibrations of each camera, each acquired image can be precisely oriented. It is important for our application to note that this process ensures that images and LiDAR points acquired simultaneously are precisely aligned (depending on the quality of the calibrations).
SENSOR TOPOLOGY BASED SURFACE RECONSTRUCTION
In this section, we propose an algorithm to extract a large scale mesh on-the-fly using the point cloud structured as a series of line scans gathered from the LiDAR sensor being moved through space along an arbitrary path.
Mesh extraction process
During urban mapping, the mobile platform may stop for a moment because of external factors (e.g. road signs, red lights, traffic congestion, ...), which results in massively redundant data at the same scanned location. Thus, a filtering step is mandatory to get an isotropic distribution of the line scans. To do so, we fix a minimum distance between two successive line scans and we remove all lines whose distance to the previous (unremoved) line is less than a fixed threshold. In practice, we use a threshold of 1 cm, close to the LiDAR accuracy.
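As an illustration, a minimal version of this filtering step could look as follows (Python; the distance between two lines is taken here as the distance between their sensor origins, which is an assumption made for the sketch):

    import numpy as np

    def filter_line_scans(line_origins, min_dist=0.01):
        # keep a scan line only if the platform has moved by more than min_dist (m)
        # since the last kept line; returns the indices of the kept lines
        kept = [0]
        last = np.asarray(line_origins[0], dtype=float)
        for i in range(1, len(line_origins)):
            origin = np.asarray(line_origins[i], dtype=float)
            if np.linalg.norm(origin - last) >= min_dist:
                kept.append(i)
                last = origin
        return kept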
Once the regular sampling is done, we consider the resulting point cloud in the sensor space where one dimension is the acquisition time t and the other is the θ rotation angle. Let θi be the angle of the i-th pulse and Ei the corresponding echo. In case of multiple echoes, Ei is defined as the last (furthest) one, and in case of no return, Ei does not exist so we do not build any triangle based on it. In general, the number Np of pulses for a 2π rotation is not an integer, so Ei has six neighbors Ei-1, Ei+1, Ei-n, Ei-n-1, Ei+n, Ei+n+1, where n = ⌊Np⌋ is the integer part of Np. These six neighbors allow six triangles to be built. In practice, we avoid creating the same triangle more than once by creating for each echo Ei the two triangles it forms with echoes of greater indices: Ei, Ei+n, Ei+n+1 and Ei, Ei+n+1, Ei+1 (if the three echoes exist), as illustrated in Figure 3. This allows the algorithm to incrementally and quickly build a triangulated surface based on the input points of the scans. In practice, the (non-integer) number of pulses Np emitted during a 360 deg rotation of the scanner may slightly vary, so to add robustness we check whether θi+n < θi < θi+n+1 and, if not, increase or decrease n until it holds.
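A possible implementation of this incremental construction is sketched below (Python). The echoes are assumed to be stored in pulse order as an (N, 3) array with NaN rows for missing returns, the adjustment of n is shown in a simplified, bounded form, and the maximum edge length test of the next subsection is already folded in as a parameter:

    import numpy as np

    def sensor_topology_triangles(points, theta, Np, max_edge=0.5):
        # points: (N, 3) echoes in pulse order, NaN rows where no echo was recorded;
        # theta: per-pulse rotation angle; Np: (non-integer) number of pulses per turn
        N = len(points)
        n = int(np.floor(Np))
        triangles = []

        def ok(i):
            return i < N and not np.isnan(points[i]).any()

        def short_edges(tri):
            a, b, c = (points[k] for k in tri)
            return max(np.linalg.norm(a - b), np.linalg.norm(b - c),
                       np.linalg.norm(c - a)) <= max_edge

        for i in range(N):
            # bounded adjustment of n so that theta[i+n] < theta[i] < theta[i+n+1]
            for _ in range(3):
                if i + n + 1 < N and theta[i + n + 1] < theta[i]:
                    n += 1
                elif i + n < N and n > 1 and theta[i + n] > theta[i]:
                    n -= 1
                else:
                    break
            for tri in ((i, i + n, i + n + 1), (i, i + n + 1, i + 1)):
                if all(ok(k) for k in tri) and short_edges(tri):
                    triangles.append(tri)
        return triangles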
Mesh cleaning
The triangulation of 3D measurements from a mobile mapping system usually comes with several imperfections such as elongated triangles, noisy unreferenced vertices, holes in the model, redundant triangles, to mention a few. In this section, we focus on three main issues that frequently occur with mobile terrestrial systems and significantly affect the texturing results if not adequately dealt with.
Figure 3. Triangulation based on the sensor space topology
Elongated triangles filtering
In practice, neighboring echoes in sensor topology might belong to different objects at different distances. This generates very elongated triangles connecting two objects (or an object and its background). Such elongated triangles might also occur when the MMS follows a sharp turn. We filter them out by applying a threshold on the maximum length of an edge before creating a triangle, experimentally set to 0.5m for the data used in this study.
Isolated pieces removal
In contrast with cameras and eyes, which capture light from external sources, the LiDAR scanner is an active sensor that emits light itself. This results in measurements that are dependent on the transparency of the scanned objects, which causes a problem in the case of semitransparent surfaces such as windows and front glass. The laser beam traverses these objects, creating isolated pieces behind them in the final mesh. To tackle this problem, isolated connected components composed of a limited number of triangles and whose diameter is smaller than a user-defined threshold (set experimentally) are automatically deleted from the final model.
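The sketch below (Python) illustrates one way to implement this cleaning step; connectivity is taken through shared vertices and the thresholds are placeholders, not the values actually used:

    import numpy as np
    from collections import defaultdict, deque

    def remove_isolated_pieces(points, triangles, min_triangles=100, min_diameter=0.5):
        vert_to_tris = defaultdict(list)
        for t, tri in enumerate(triangles):
            for v in tri:
                vert_to_tris[v].append(t)
        seen, kept = set(), []
        for seed in range(len(triangles)):
            if seed in seen:
                continue
            seen.add(seed)
            component, queue = [], deque([seed])
            while queue:
                t = queue.popleft()
                component.append(t)
                for v in triangles[t]:
                    for t2 in vert_to_tris[v]:
                        if t2 not in seen:
                            seen.add(t2)
                            queue.append(t2)
            verts = points[np.unique([v for t in component for v in triangles[t]])]
            diameter = np.linalg.norm(verts.max(axis=0) - verts.min(axis=0))
            # small and spatially limited components are considered spurious
            if len(component) >= min_triangles or diameter >= min_diameter:
                kept.extend(component)
        return [triangles[t] for t in sorted(kept)]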
Hole filling
After the surface reconstruction process, the resulting mesh may still contain a substantial number of holes due to specular surfaces deflecting the LiDAR beam, occlusions and the non-uniform motion of the acquisition vehicle. To overcome this problem we use the method of (Liepa et al., 2003).
The algorithm takes a user-defined parameter, which consists of the maximum hole size in terms of number of edges, and closes the hole in a recursive fashion by splitting it until a hole composed of exactly 3 edges is obtained, which is filled with the corresponding triangle.
Scalability
The interest in mobile mapping techniques has been increasing over the past decade as they allow the collection of dense, very accurate and detailed data at the scale of an entire city with a high productivity. However, processing such data is limited by various difficulties specific to this type of acquisition, especially the very high data volume (up to 1 TB per day of acquisition (Paparoditis et al., 2012)), which requires very efficient processing tools in terms of number of operations and memory footprint. In order to perform an automatic surface reconstruction over large distances, memory constraints and scalability issues must be addressed. First, the raw LiDAR scans are sliced into N chunks of 10 s of acquisition, which corresponds to nearly 3 million points per chunk. Each recorded point cloud (chunk) is processed separately as explained in the work-flow of our pipeline presented in Figure 4, allowing parallel processing and faster production. Yet, whereas the aforementioned filtering steps alleviate the size of the processed chunks, the resulting models remain unnecessarily heavy as flat surfaces (road, walls) may be represented by a very large number of triangles that could be drastically reduced without losing detail. To this end, we apply the decimation algorithm of (Lindstrom and[START_REF] Lindstrom | Fast and memory efficient polygonal simplification[END_REF][START_REF] Lindstrom | Evaluation of memoryless simplification[END_REF]. The algorithm proceeds in two stages. First, an initial collapse cost, given by the position chosen for the vertex that replaces it, is assigned to every edge in the reconstructed mesh. Then, at each iteration the edge with the lowest cost is selected for collapse and replaced with a vertex. Finally, the collapse cost of all the edges now incident on the replacement vertex is recalculated. For more technical details, we refer the reader to (Lindstrom and[START_REF] Lindstrom | Fast and memory efficient polygonal simplification[END_REF][START_REF] Lindstrom | Evaluation of memoryless simplification[END_REF].
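A minimal sketch of the chunking step described above (Python; the per-chunk processing itself is only a placeholder) could be:

    import numpy as np
    from multiprocessing import Pool

    def split_into_chunks(timestamps, chunk_duration=10.0):
        # index ranges of the echo stream corresponding to consecutive windows
        # of chunk_duration seconds of acquisition
        edges = np.arange(timestamps[0], timestamps[-1] + chunk_duration, chunk_duration)
        bounds = np.searchsorted(timestamps, edges)
        return [(b, e) for b, e in zip(bounds[:-1], bounds[1:]) if e > b]

    def process_chunk(bounds):
        # placeholder for filtering, triangulation, cleaning and decimation of one chunk
        return bounds

    # chunks are independent and can be dispatched to separate workers, e.g.
    # with Pool() as pool:
    #     meshes = pool.map(process_chunk, split_into_chunks(timestamps))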
TEXTURING APPROACH
This section presents the approach used for texturing large scale 3D realistic urban scenes. Based on the work of [START_REF] Waechter | Let there be color! Large-scale texturing of 3D reconstructions[END_REF], we adapt the algorithm so it can handle our camera model (with five perspective images), and the smoothing parameters are properly adjusted to enhance the results. In the following, we give the outline of this texturing technique and its requirements. To work jointly with oriented images and LiDAR scans acquired by a mobile mapping system, the first requirement is that both sensing modalities have to be aligned in a common frame. Thanks to the rigid setting of the camera and the LiDAR mounted on the mobile platform, yielding simultaneous image and LiDAR acquisition, this step is no longer required. However, such a setting entails that a visible part of the vehicle appears in the acquired images. To avoid using these irrelevant parts, an adequate mask is automatically applied to the concerned images (back and front images) before texturing, as shown in Figure 5.
Typically, texturing a 3D model with oriented images is a two-stage process. First, the optimal view per triangle is selected with respect to certain criteria, yielding a preliminary texture. Second, a local and global color optimization is performed to minimize the discontinuities between adjacent texture patches. The two steps are discussed in Sections 4.2 and 4.3.
View selection
To determine the visibility of faces in the input images, a pairwise Markov random field energy formulation is adopted to compute a labeling l that assigns a view li to be used as texture for each mesh face Fi:
E(l) = Σ_{Fi ∈ Faces} Ed(Fi, li) + Σ_{(Fi, Fj) ∈ Edges} Es(Fi, Fj, li, lj)    (1)
where
Ed(Fi, li) = -∫_{φ(Fi, li)} ||∇(I_li)||_2 dp    (2)
Es(Fi, Fj, li, lj) = [li ≠ lj]    (3)
The data term Ed (2) computes the gradient magnitude ||∇(I_li)||_2 of the image into which face Fi is projected using a Sobel operator and sums it over all pixels of the gradient magnitude image within face Fi's projection φ(Fi, li). This term is large if the projection area is large, which means that it prefers close, orthogonal and in-focus images with high resolution. The smoothness term Es (3) minimizes the visibility of seams (edges between faces textured with different images). In the chosen method, this regularization term is based on the Potts model ([·] being the Iverson bracket), which prefers compact patches by penalizing adjacent faces textured with different views, and it is extremely fast to compute. Finally, E(l) (1) is minimized with graph cuts and α-expansion [START_REF] Boykov | Fast approximate energy minimization via graph cuts[END_REF].
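The sketch below (Python) gives a simplified picture of how these terms can be evaluated; np.gradient replaces the Sobel operator, and a greedy ICM relaxation is used here only as a stand-in for the graph-cut α-expansion optimization of the actual method:

    import numpy as np

    def gradient_magnitude(image):
        gy, gx = np.gradient(image.astype(float))
        return np.sqrt(gx**2 + gy**2)

    def data_term(grad_mag, face_pixels):
        # face_pixels: (rows, cols) covered by the projected face, or None if not visible
        if face_pixels is None:
            return np.inf
        rows, cols = face_pixels
        return -float(grad_mag[rows, cols].sum())

    def select_views(costs, adjacency, smoothness=1.0, n_iters=5):
        # costs: (n_faces, n_views) data terms; adjacency[f]: indices of neighboring faces
        labels = np.argmin(costs, axis=1)
        n_views = costs.shape[1]
        for _ in range(n_iters):
            for f in range(len(labels)):
                neighbor_labels = labels[adjacency[f]]
                potts = smoothness * (np.arange(n_views)[:, None] != neighbor_labels[None, :]).sum(axis=1)
                labels[f] = np.argmin(costs[f] + potts)
        return labels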
Color adjustment
After the view selection step, the obtained model exhibits strong color discontinuities due to the fusion of texture patches coming from different images and to the exposure and illumination variation, especially in an outdoor environment. Thus, adjacent texture patches need to be photometrically adjusted. To address this problem, first, a global radiometric correction is performed along the seam edges by computing a weighted average of a set of samples (pixels sampled along the discontinuity's right and left) depending on the distance of each sample to the seam edge extremities (vertices). Then, this global adjustment is followed by a local Poisson editing [START_REF] Pérez | Poisson image editing[END_REF] applied to the border of the texture patches. The whole process is discussed in detail in the work of [START_REF] Waechter | Let there be color! Large-scale texturing of 3D reconstructions[END_REF].
Finally, the corrections are added to the input images, the texture patches are packed into texture atlases, and texture coordinates are attached to the mesh vertices.
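As a rough illustration of the global part of this adjustment, the sketch below (Python) computes, for each seam vertex, an additive correction that brings the mean colors sampled on both sides of the seam to their common average; the distance weighting along the seam and the Poisson editing step are deliberately omitted:

    import numpy as np

    def seam_vertex_corrections(samples_left, samples_right):
        # samples_left/right: per seam vertex, (n_i, 3) arrays of RGB values sampled
        # on the two sides of the seam
        corrections_left, corrections_right = [], []
        for sl, sr in zip(samples_left, samples_right):
            mean_l, mean_r = sl.mean(axis=0), sr.mean(axis=0)
            target = 0.5 * (mean_l + mean_r)
            corrections_left.append(target - mean_l)
            corrections_right.append(target - mean_r)
        return corrections_left, corrections_right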
EXPERIMENTAL RESULTS
Mesh reconstruction
In Figure 6, we show the reconstructed mesh based on the sensor topology and the adopted decimation process. In practice, we parameterize the algorithm such that the approximation error is below 3 cm, which on average reduces the number of triangles to around 30% of the input triangles.
Texturing the reconstructed models
In this section, we show some texturing results (Figure 7) and the influence of the color adjustment step on the final textured models (Figure 8). Before the radiometric correction, one can see several color discontinuities, especially on the border of the door and on some parts of the road (best viewed on screen). More results are presented in the appendix to illustrate the high quality textured models in different places in Rouen, France.
Performance evaluation
We evaluate the performance of each step of our pipeline on a dataset acquired by Stereopolis II [START_REF] Paparoditis | Stereopolis ii: A multi-purpose and multi-sensor 3d mobile mapping system for street visualisation and 3d metrology[END_REF]. In Table 1, we present the required input data to texture a chunk of acquisition (10 s): the average number of views and the number of triangles after decimation. Figure 9 shows the timing of each step in the pipeline to texture the described setting. Using a 16-core Xeon E5-2665 CPU with 12 GB of memory, we are able to generate a 3D mesh of nearly 6 million triangles in less than one minute, compared to the improved version of Poisson surface reconstruction [START_REF] Kazhdan | Screened poisson surface reconstruction[END_REF], which reconstructs a surface of nearly 20,000 triangles in 10 minutes. Moreover, in order to texture small models with few images (36 of size (768 × 584)) in a context of super-resolution, [START_REF] Goldlücke | A super-resolution framework for high-accuracy multiview reconstruction[END_REF] takes several hours (partially on GPU), compared to the few minutes we take to texture our huge models. Finally, the whole dataset is textured in less than 30 computing hours. The sensor mesh reconstruction is quite novel but very simple.
We believe that such a textured mesh can find multiple applications, directly through visualization of a mobile mapping acquisition, or more indirectly for jointly processing image and LiDAR data: urban scene analysis, structured reconstruction, semantization, ...
PERSPECTIVES
This work leaves however important topics unsolved, and most importantly the handling of overlaps between acquired data, at intersections or when the vehicle passes multiple times in the same scene. We have left this issue out of the scope of the current paper as it poses numerous challenges:
• Precise registration over the overlaps, sometimes referred to as the loop-closure problem.
• Change detection between the overlaps.
• Data fusion over the overlaps, which is strongly connected to change detection and how changes are handled in the final model.
Moreover, this paper proposed a reconstruction from LiDAR only, but we believe that the images hold a pertinent geometric information that could be used to complement the LiDAR reconstruction, in areas occluded to the LiDAR but not to the cameras (which often happens as their geometries are different). Finally, an important issue that is partially tackled in the texturation: the presence of mobile objects. Because the LiDAR and images are most of the time not acquired strictly simultaneously, mobile objects might have an incoherent position between image and LiDAR, which is a problem that should be tackled explicitly.
Figure 2. The set of images acquired by the 5 full HD cameras (left, right, up, in front, from behind)
Figure 4 .
4 Figure 4. The proposed work-flow to produce large scale models
4. 1
1 Figure 5. Illustration of the acquired frontal images processing
Figure 9 .
9 Figure 9. Performance evaluation of a chunk of 10s of acquisition
Figure 6. Decimation of sensor space topology mesh
Table 1 .
1 Statistics on the input data per chunkIn Table
dur-
ACKNOWLEDGEMENTS
We would like to acknowledge the French ANR project pLaT-INUM (ANR-15-CE23-0010) for its financial support.
APPENDIX
In this appendix, we show more texturing results obtained from the acquired data in Rouen, France. Due to memory constraints, we are not able to explicitly show the entire textured model (17 km). However, we can show at most 70 s (350 m) of textured acquisition (Figure 10).
"747339",
"183304",
"933733"
] | [
"11157",
"11157",
"491336"
] |
01764553 | en | ["spi"] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01764553/file/doc00028903.pdf | Bruno Jeanneret
email: [email protected]
Daniel Ndiaye
Sylvain Gillet
Rochdi Trigui
H&HIL: A novel tool to test control strategy with Human and Hardware In the Loop
With this work, the authors try to make HIL simulation more realistic by introducing the human driver into the loop. To reach this objective, we developed a set of tools to connect a wide variety of real or virtual devices together easily: the driver of course, but also a racing joystick, a real engine, a virtual drivetrain, a virtual driver environment, etc. The detailed approach represents a step forward before testing a control strategy or a new powertrain on a real vehicle. First results showed good effectiveness and modularity of the tool.
I. INTRODUCTION
The transportation sector is responsible for a wide share of energy consumption and pollutant emissions everywhere in the world. The growth of environmental awareness is today among the most stringent drivers that researchers and manufacturers have to consider when designing environmentally friendly solutions for drivetrains, including electric, hybrid and fuel cell vehicles. Nevertheless, most of the development and evaluation of new solutions still follows a classical scheme that includes modelling and testing components and drivetrains under standard driving cycles or, in the best case, using pre-recorded real-world driving cycles [START_REF] Oh | Evaluation of motor characteristics for hybrid electric vehicles using the hil concept[END_REF], [START_REF] Trigui | Performance comparison of three storage systems for mild hevs using phil simulation[END_REF], [START_REF] Bouscayrol | Hardware in the loop simulation in "Control and Mechatronics[END_REF], [START_REF] Shidore | Phev 'all electric range' and fuel economy in charge sustaining mode for low soc operation of the jcs vl41m li-ion battery using battery hil[END_REF], [START_REF] Castaings | Comparison of energy management strategies of a battery/supercapacitors system for electric vehicle under real-time constraints[END_REF] and [START_REF] Verhille | Hardwarein-the-loop simulation of the traction system of an automatic subway[END_REF]. This design methodology does not include the variability of the driving conditions generated by the two major factors that are the driver behaviour and the infrastructure influence (elevation, turns, speed limitation, traffic jams, traffic lights, . . . ).
Moreover, the progress made in Intelligent Transportation Systems (ITS) makes it possible today to optimize the actual use of vehicles (conventional, EVs, HEVs) by capturing instantaneous information about the infrastructure (road profile, road signs), the traffic and also the weather conditions. This information is used for HEVs and PHEVs to optimize their energy management, and for mono-source vehicles (conventional, EV) to feed ADAS systems in order to achieve lower consumption and emissions (eco-driving concepts for example). Simulation and testing using standard or pre-recorded driving cycles is therefore too limited to take all these aspects into account. The need is then to develop new simulation and testing schemes able to consider a more realistic modelling of the vehicle environment and to include the driver, or a model of the driver, in the simulation/testing loop.
The methodology presented in this paper is based on a progressive and modular approach for simulating and testing different types of drivetrains in a HIL configuration while including the vehicle environment models and the driver. The modularity is developed along two axes:
• virtual to real: the capability of the developed system ranges from an all-simulation configuration (SIL) to driver + hardware in the loop simulation (H&HIL)
• multiplatform: the communication protocol between the different systems or models allows easy exchange of systems and simulation platforms (energetic and dynamic vehicle models, different driving simulator configurations, a power plant in a HIL configuration, . . . ). The chosen protocol, namely CAN (Controller Area Network), can also easily be reused to address further steps such as vehicle prototype design
In the following sections we describe the developed concepts and tools. The last chapter presents two applications made with the tool. A conclusion lists applications that can be developed with the facility.
II. TOOL DESCRIPTION
A. Architecture of the tool
The main program, named MODYVES, aims at connecting a generic input and a driver to a generic output through a vehicle model. The main architecture of the framework is presented in figure 1. Inputs can be chosen among:
• a gamepad such as the Logitech G27 joystick or equivalent. Modyves uses the Simple DirectMedia Layer 1 to provide low-level access to the device (a minimal access sketch is given below)
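The following sketch illustrates such low-level access using the pygame SDL bindings rather than the C API; the axis numbers for the wheel and pedals are device-dependent assumptions, not a documented Modyves mapping.

```python
import pygame

# Initialise SDL's joystick subsystem through pygame (a thin SDL wrapper).
pygame.init()
pygame.joystick.init()
wheel = pygame.joystick.Joystick(0)   # first connected device, e.g. a G27
wheel.init()

def read_driver_inputs():
    """Return steering in [-1, 1] and throttle/brake in [0, 1].

    Axis numbers (0: steering, 1: throttle, 2: brake) and the resting value
    of the pedal axes are assumptions that depend on the driver installed."""
    pygame.event.pump()                       # refresh the device state
    steering = wheel.get_axis(0)              # -1 (left) .. +1 (right)
    throttle = (1.0 - wheel.get_axis(1)) / 2  # pedal axes assumed to rest at +1
    brake = (1.0 - wheel.get_axis(2)) / 2
    return steering, throttle, brake
```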
B. Communication layer
The CAN protocol is widely used in the automotive industry. For this reason, we selected this kind of network for the communication between the different pieces of hardware. Besides being easy to implement, another advantage is that the communication can be secured by checking the Rx time (time of the last received frame) against a real-time clock.
Two different USB-to-CAN converters have been integrated in the tool: a PEAK 2 module and a Systec 3 module.
Both provide device drivers (Dynamic Link Library files) and header files to connect to the module and parametrise it, decode received frames and send transmitted ones.
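The kind of receive watchdog mentioned above can be sketched as follows; the example uses the python-can package as a stand-in for the vendor DLLs, so the channel name, bitrate and the 10 ms staleness threshold are assumptions, not the actual Modyves settings.

```python
import time
import can  # python-can, used here instead of the PEAK/Systec vendor DLLs

# Open a PCAN channel at 500 kbit/s (channel name and bitrate are assumptions).
bus = can.interface.Bus(bustype='pcan', channel='PCAN_USBBUS1', bitrate=500000)

MAX_AGE = 0.010          # consider a signal stale after 10 ms without update
last_rx = {}             # arbitration id -> time of last received frame

def poll_and_check(expected_ids):
    """Drain pending frames, then flag any expected frame that is too old."""
    while (msg := bus.recv(timeout=0.0)) is not None:
        last_rx[msg.arbitration_id] = time.monotonic()
    now = time.monotonic()
    stale = [i for i in expected_ids
             if now - last_rx.get(i, 0.0) > MAX_AGE]
    return stale   # a non-empty list means the communication must be secured
```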
C. Hardware
Two main hardware platforms have been integrated in our tool: a prototyping unit commercialised by dSPACE, namely the MicroAutoBox (MABX-II), and a low-cost 32-bit microcontroller from Texas Instruments, the C2000 Peripheral Experimenter Kit equipped with an F28335 control card. Both are supported by The MathWorks, with MATLAB and Simulink Coder for the dSPACE hardware, and MATLAB and Embedded Coder for the C2000.
Table I gives a short comparison between the two platforms. Of course, these two devices do not have the same performance and are not suitable for the same kind of applications.
One can notice the power of the dSPACE board (processor speed and size of RAM and Flash), but its major advantage is its combination with the RTI interface and the ControlDesk software, which enables rapid development, debugging and validation of a real-time project.
On the other hand, the TI C2000 control card is well suited for a variety of automotive applications. It offers a good processor speed with a moderate but sufficient Flash memory to develop and embed a real-time application at a moderate price. Debugging is however far more complex than with the dSPACE product.
D. Software
Modyves is written in Python and is paced by a Windows timer, so depending on the computer characteristics and load, it can deviate from its theoretical execution period. Nevertheless, when connected to a real plant, the only critical parts running in Modyves are the driver behaviour and the communication layer.
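A minimal sketch of such a soft real-time loop, with the kind of jitter monitoring reported in section III-A, is shown below; the 1 kHz period and the plain sleep-based scheduling are assumptions and not the exact Modyves implementation.

```python
import time

PERIOD = 0.001            # 1 kHz target, as in the jitter test of section III-A
steps = []                # measured step times, for jitter statistics

def run(n_steps, step_fn):
    """Soft real-time loop: call step_fn() every PERIOD seconds and log jitter."""
    next_deadline = time.perf_counter()
    last = next_deadline
    for _ in range(n_steps):
        step_fn()                         # e.g. driver model + CAN exchange
        next_deadline += PERIOD
        sleep = next_deadline - time.perf_counter()
        if sleep > 0:
            time.sleep(sleep)             # limited by the Windows timer granularity
        now = time.perf_counter()
        steps.append(now - last)          # actual step duration
        last = now

run(1000, lambda: None)
print(min(steps), sum(steps) / len(steps), max(steps))
```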
The IFSTTAR driving simulator is a static driving simulator based on a real Peugeot 308 and a software part named SIM2. The cockpit and all commands are unchanged to offer the most realistic driving environment to the driver. An embedded electronic card in the vehicle reads all the sensor values from the pedals, the gearbox and the steering wheel. It also controls the force feedback on the steering wheel. The electronic card drives the vehicle dashboard by sending CAN messages through the OBD connector. SIM2 is the IFSTTAR simulator software and contains various types of models: vehicle, road, sound, traffic and visual models. The road scene is displayed on 5 screens offering up to a 180° horizontal field of view. The IFSTTAR driving simulator is used in the fields of human factors research, ergonomics studies, energy-efficient driving, advanced training and studies of "Human and Hardware in the Loop" (H&HIL).
VEHLIB is essentially a Simulink library. A framework has been developed over the years around this library to integrate all the component models necessary to develop and simulate conventional, hybrid or all-electric vehicles [START_REF] Vinot | Model simulation, validation and case study of the 2004 THS of Toyota Prius[END_REF] [START_REF] Jeanneret | New Hybrid Concept Simulation Tools, evaluation on the Toyota Prius Car[END_REF]. VEHLIB follows a combined backward or forward approach. The forward models are able to run on real-time hardware with their respective connection blocks to the real environment [START_REF] Jeanneret | Mise en oeuvre d une commande temps reel de transmission hybride sur banc moteur[END_REF].
III. EXAMPLES OF APPLICATION
Figure 3 presents the different steps of integration, from pure simulation on a personal computer up to deployment on the final facility. This procedure has been successfully conducted for IFSTTAR's driving simulator. For simplicity reasons, the vehicle model has been compiled on C2000 hardware instead of linking the dynamic library to the simulator software. As presented in figure 1, the tool allows a large variety of tests and integrations depending on the effort and the final objective. Two applications are presented hereafter: the first one is a model-in-the-loop application with a G27 joystick; the second is a power-hardware-in-the-loop setup with a driving simulator and a virtual vehicle connected to a real engine. Both include a driver in the loop.
A. G27 Joystick and Model in the loop application
A first example to introduce the human in the loop consists of a real-time application running in a soft real-time mode (i.e. with a Windows timer). It is easy to implement and needs only a G27 joystick and a computer to run (see example 1 in figure II-D).
As mentioned earlier, this application uses a Windows timer to set the switching time of the application. In order to verify the real execution period of Modyves, the jitter has been monitored on a personal laptop (Intel Core i7 3610QM @ 2.3 GHz running Windows 7) without perturbation (no other program was running on the computer). The theoretical frequency is 1 kHz. The results are presented in figure 4. The mean step time is 0.001002 s, the maximum value is 0.0027 s (only one occurrence during this test, which lasted around 30 seconds) and the minimum value is 0.0009999 s. These values affect neither the driver perception nor the model behaviour, because they are far from the time constants of these "systems". Consequently, the deviation from the theoretical period is small enough for our application.
B. Driver simulator with HIL application on engine test bench
This is the most complete situation described in figure II-D. In this case, the vehicle model (except the engine) runs on a hard real-time platform, MicroAutoBox hardware in this particular case. The latter communicates with the engine test bench and exchanges some information with it, namely:
• send accelerator pedal position to the engine and rotation speed to the electric generator
• receive actual torque
The bench is presented in figure 7. The bench runs in the so-called throttle/speed mode. At each time step, the actual torque is measured on the bench, transmitted to the model and introduced into the simulated clutch. It passes through the different components up to the vehicle wheels. The longitudinal motion equation is solved, allowing the speeds of the different shafts to be calculated up to the engine speed, which is in turn transmitted to the bench as the target generator speed. At the same time, the accelerator position is also sent to the engine ECU.
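One exchange step of this closed loop can be sketched schematically as follows; the single lumped inertia, the resistance law and the numerical values are illustrative assumptions, not the actual MABX-II implementation of the VEHLIB driveline.

```python
def bench_step(read_bench_torque, send_to_bench, state, pedal, dt=0.001):
    """One throttle/speed-mode exchange step between the model and the bench."""
    t_engine = read_bench_torque()             # measured engine torque [Nm]

    # Simulated driveline: torque through clutch/gearbox to the wheels,
    # then the longitudinal motion equation with one lumped inertia.
    ratio, radius, mass = 4.0, 0.3, 1200.0     # illustrative vehicle data
    f_trac = t_engine * ratio / radius
    f_res = 200.0 + 0.4 * state['v'] ** 2      # rolling + aero resistance (toy law)
    state['v'] += (f_trac - f_res) / mass * dt

    # Back-compute the engine shaft speed and close the loop with the bench.
    w_engine = state['v'] / radius * ratio     # [rad/s], target generator speed
    send_to_bench(pedal, w_engine)             # pedal to ECU, speed to generator
    return state

# toy usage with stubbed bench interfaces
state = bench_step(lambda: 80.0, lambda pedal, w: None, {'v': 10.0}, pedal=0.3)
```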
The driving simulator is presented in figure 8. One can notice that fuel cut-off is effective in the engine simulation when the vehicle decelerates (figure 6) but is not present on the bench (figure 10), where cut-off is not enabled in the actual engine ECU.
IV. CONCLUSION
In this paper, a generic framework has been presented to develop and test mechatronic applications in a progressive way. The facility is used in the laboratory to test ADAS (Advanced Driver Assistance Systems) and allows the driver behaviour as well as a realistic vehicle environment to be taken into account. In this context, not only is the fuel consumption measured on a real engine, but engine emissions can also be measured thanks to the gas analyzers available in the laboratory, including CO, HC, NOx and particle measurements.
A number of applications could be performed with this facility:
• the coupled setting could be used to approach the real-use behaviour of the vehicle and the ICE instead of performing only standard driving cycles. In fact, the new emission regulations consider pollutant measurements on a real track in a procedure called Real Driving Emissions (RDE) using portable devices (PEMS). In order to anticipate this phase, the coupled test bench could help with engine design and tuning in order to reduce real-world emissions
• the facility could embed ADAS systems. They can be easily implemented and tested in urban, road or highway contexts with a human driver in the loop. For example, assistance systems for eco-driving could be assessed in terms of near-real fuel consumption and pollutant emissions
• the simulator can be a platform to evaluate different degrees of driving delegation towards the autonomous vehicle. For example, it is easy to simulate a manual or automatic gearbox, or to test different kinds of speed regulators, speed limiters, etc.
• hybrid vehicles can be emulated in the real-time model, or hybrid components can be introduced on the bench to simulate a hybrid vehicle. A clutch can also be physically present to test the all-electric mode of a parallel hybrid vehicle. With this kind of application, several energy management laws can be quickly implemented and tested in an almost real environment. A further step for the software could consist of generalising it so that the core module can connect to any kind of device, as described in figure 11.
Fig. 1. The Modyves framework
Fig. 2. VEHLIB in one H&HIL configuration
Fig. 3. The different step of integration
Fig. 4. Jitter for a 1 kHz application
Fig. 5. Vehicle speed and driver actuators
Fig. 7. Engine on the test bench
Fig. 8. Driving simulator
Figures 9 and 10 illustrate the behaviour of the driver and the response of the engine on the test bench.
Fig. 9. Vehicle speed and driver actuators
Fig. 11. The engine for software integration
TABLE I. HARDWARE CHARACTERISTICS
https://www.libsdl.org/
http://www.peak-system.com/
http://www.systec-electronic.com/ | 14,594 | [
"180004"
] | [
"222114",
"222123",
"222114",
"222114"
] |
01764560 | en | ["sde"] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01764560/file/gr2017-pub00055873.pdf | Bernd Kister
Stéphane Lambert1
Bernard Loup
IMPACT TESTS ON SMALL SCALE EMBANKMENTS WITH ROCKERY -LESSONS LEARNED
Keywords: rockfall protection embankment (RPE), impact, rockery, block shape, ratio rotational to translational energy, freeboard, energy dissipation
In the project AERES (Analysis of Existing Rockfall Embankments of Switzerland), small-scale quasi-2D experiments were carried out on embankments with stones placed parallel to the slope and with stones placed horizontally. The experiments showed that rotating cylinders acting as impactors may surmount an embankment with a batter of 2:1, even if the freeboard is chosen as 1.5 times the block diameter. Thus a slope with an inclination of about 60° and equipped with rockery does not in general guarantee that a freeboard of one block diameter will be sufficient, as stated by the Austrian standard. During the test series a block with an octagonal cross-section was also used. This block, with no or only very low rotation, was on the other hand not able to surmount an embankment with rockery and a freeboard of about 0.8 times the block diameter. The evaluation of the test data additionally showed that the main part of the energy dissipation occurs during the first 6 ms of the impact process. At least 75% to 85% of the block's total kinetic energy is transformed into compression work, wave energy and heat when the block hits the embankment.
INTRODUCTION
A common measure used in Switzerland to stabilize the steep slopes of rockfall protection embankments is the use of rockery. According to interviews with employees of cantonal departments as well as with design engineers, the rockery used in the past at the uphill slope of an embankment was constructed with a batter between 60° and 80°. In general, however, little attention had been paid to the behaviour of such natural stone walls during the impact of a block. The main reasons to use rockery were to limit the area necessary for embankment construction and/or to stop rolling blocks.
To check the experimental results of [START_REF] Hofmann | Bemessungsvorschlag für Steinschlagschutzdämme[END_REF] and to study the impact process of blocks impacting rockfall protection embankments (RPE) with a rockery cover at the uphill slope, small-scale quasi-2D experiments have been carried out and analyzed at the Lucerne University of Applied Sciences and Arts (HSLU) during the project AERES [START_REF] Kister | Analysis of Existing Rockfall Embankments of Switzerland (AERES)[END_REF].
The load case "impact of a block onto an embankment" may be divided into two scenarios:
The embankment is punched through by the block, or the embankment is surmounted by the block. The first one is a question of the stability of the construction itself, while the second one concerns the fitness for purpose of an RPE. The most significant results of the tests done in the project AERES concerning the fitness for purpose of an RPE are shown below.
TEST CONDITIONS
Two types of rockery with different orientations of the stones at the "uphill" slope have been studied. To create a relatively "smooth" rockery surface, the stones were placed parallel to the slope (Fig. 1a). For a "rough" rockery surface, the stones were placed horizontally, which resulted in a graded slope (Fig. 1b). Additional impact tests were done on an embankment with a bi-linear slope with rockery at the lower part and soil at the upper part (Fig. 1c). The batter of the "rockery" was chosen to be 2:1, 5:2 and 5:1 in the tests. Three types of impactors were used: a concrete cylinder G with a ratio of rotational to translational energy > 0.3, a hollow cylinder GS with a triaxial acceleration sensor inside and a ratio of rotational to translational energy between 0.2 and 0.1, and a block OKT with an octagonal cross-section and no or only very low rotation. The ratio of rotational to translational energy of block GS corresponds very well with the results of in-situ tests done with natural blocks [START_REF] Usiro | An experimental study related to rockfall movement mechanism[END_REF]. The impact translational velocity in most tests was between 6 m/s and 7 m/s. Transformed to a prototype embankment with a height of approx. 7 m, this results in real-world block velocities of about 18 m/s to 21 m/s [START_REF] Kister | Analysis of Existing Rockfall Embankments of Switzerland (AERES)[END_REF].
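The model-to-prototype conversion quoted above is consistent with Froude similarity, under which velocities scale with the square root of the geometric scale factor λ; the scale factor itself is not restated in this summary, so the following is only an indicative back-calculation:

v_p = \sqrt{\lambda}\, v_m, \qquad \frac{v_p}{v_m} \approx \frac{18\ \mathrm{m/s}}{6\ \mathrm{m/s}} = 3 \;\Rightarrow\; \lambda \approx 9,

i.e. the 7 m prototype embankment would correspond to a model height of roughly 0.8 m.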
FREEBOARD
The statements of Hofmann & Mölk [START_REF] Hofmann | Bemessungsvorschlag für Steinschlagschutzdämme[END_REF] concerning the freeboard have been transferred to the Austrian technical guideline ONR 24810:2013 [2] and it is said that the freeboard for an embankment with riprap (resp. rockery) and a slope angle of 50° or more should be at least one block diameter. To determine the maximum climbing height of a block during an impact and to get information about the influence of the roughness of the rockery surface, two impact
tests on embankment models with a batter of 2:1, but with different "rockery roughness", were done. For these tests the freeboard was chosen to be approximately 1.9 times the block diameter. This value is a little less than the value of 2 times the block diameter specified in [2] for the freeboard of pure soil embankments, but significantly larger than the minimum value specified for embankments with rockery. The impact point was at a level where the embankment thickness is larger than three times the block diameter (Fig. 2), and therefore according to [START_REF] Kister | Development of basics for dimensioning rock fall protection embankments in experiment and theory[END_REF] there was no risk that the embankment would be punched through. For the "rough" rockery surface the climbing height of block GS was 1.8 times the block diameter for the first impact and 1.55 times the block diameter for the second impact (Fig. 2). The "rough" surface of the rockery led to a larger climbing height for the block than the "smooth" surface, although the block velocities were very similar. The first impact of the block resulted in a larger climbing height than the second impact for both surface roughness types.
The tests showed that a slope with an inclination of about 60° and equipped with rockery in general does not guarantee that a freeboard of approx. one block diameter will be sufficient as described by the Austrian standard.
BLOCK SHAPE AND BLOCK ROTATION
During the test series the block OKT, with an octagonal cross-section and with no or only very low rotation, was not able to surmount an embankment with rockery if a crest to block diameter ratio of approx. 1.1 was chosen, even though the freeboard was only about 0.8 times the block diameter. Fig. 3 shows the trajectories of the concrete cylinder G with rotation and of the block OKT impacting an embankment with stones placed horizontally at the "uphill" slope, batter of rockery 5:2. The difference in the impact translational velocities of both blocks was about 7%, which is within the measurement error (G: 5.9 m/s, OKT: 6.3 m/s). So block shape and block rotation play a significant role in the impact process.
ENERGY DISSIPATION
The evaluation of the test data obtained in the project AERES showed that the main part of the energy dissipation occurs during the first 6 ms of the impact process. During this period the block translational velocity is reduced to less than half of its value for all three types of impactors used in the project. Differences in the loss of block velocity and block energy between the three impactors mainly occur after this large drop of velocity and energy.
CONCLUSION
The following parameters were found to dominate the impact process and to determine whether the embankment is surmounted by a block or punched through: the total block energy, the ratio of rotational to translational block energy, the impact angle (a function of the block trajectory and the slope inclination), the shape of the block, and the embankment's thickness at the impact point. These parameters are mainly responsible for the fitness for purpose of a rockfall protection embankment. The experiments have shown that there are some interactions between these parameters which could not be resolved in detail with the existing experimental set-up. Further research has to be done to determine the freeboard in the case of blocks with a natural shape and with a ratio of rotational to translational energy between the limits 0.1 and 0.2.
Fig. 1. Orientation of stones at the "uphill" slope: a) upright, b) horizontal, c) upright, upper slope without stones and reduced slope angle
Fig. 2. Max. climbing height CH_max of impactor GS, stones placed horizontally, freeboard FB = 1.9*2r: a) first impact: CH_max approx. 1.8*2r, b) second impact: CH_max approx. 1.55*2r, 2r = block diameter
Fig. 3. Influence of block shape and block rotation: trajectories of cylinder G (a) and block OKT (b), embankment with stones placed horizontally at the "uphill" slope, batter of rockery 5:2
Lucerne University of Applied Sciences and Arts, Technikumstrasse
21, CH -6048 Horw, Switzerland, since 2017: kister -geotechnical engineering & research, Neckarsteinacher Str. 4 B, D -Neckargemünd, +49 6223 71363, [email protected] 2 irstea, 2 rue de la Papeterie -BP 76, 38402 Saint-Martin-d'Hères cedex, France, +33 (0)4 76 76 27 94, [email protected] 3 Federal Office for the Environment (FOEN), 3003 Bern, Switzerland, Tel. +41 58 465 50 98, [email protected] | 9,550 | [
"171258"
] | [
"454205",
"182213"
] |
01764616 | en | ["phys"] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01764616/file/CnF.pdf | Pranav Chandramouli
email: [email protected]
Dominique Heitz
Sylvain Laizet
Etienne Mémin
Coarse large-eddy simulations in a transitional wake flow with flow models under location uncertainty
The focus of this paper is to perform coarse-grid large eddy simulation (LES) of cylinder wake flow at a Reynolds number (Re) of 3900 using recently developed sub-grid scale (SGS) models. As we approach coarser resolutions, a drop in accuracy is noted for all LES models but, more importantly, the numerical stability of classical models is called into question. The objective is to identify a statistically accurate, stable sub-grid scale (SGS) model for this transitional flow at a coarse resolution. The proposed new models under location uncertainty (MULU) are applied in a deterministic coarse LES context and the statistical results are compared with variants of the Smagorinsky model and various reference data-sets (both experimental and Direct Numerical Simulation (DNS)). MULU are shown to better estimate statistics at coarse resolution (at 0.46% of the cost of a DNS) while being numerically stable. The performance of the MULU is studied through statistical comparisons, energy spectra, and sub-grid scale (SGS) contributions. The physics behind the MULU are characterised and explored using divergence and curl functions. The additional terms present in the MULU (velocity bias) are shown to improve model performance. The spanwise periodicity observed at low Reynolds numbers is recovered at this moderate Reynolds number through the curl function, in coherence with the birth of streamwise vortices.
Large Eddy Simulation, Cylinder Wake Flow
Introduction
Cylinder wake flow has been studied extensively, from the experimental works of Townsend (1; 2) to the numerical works of Kravchenko and others (3; 4; 5). The flow exhibits a strong dependence on the Reynolds number Re = U D/ν, where U is the inflow velocity, D is the cylinder diameter and ν is the kinematic viscosity of the fluid. Beyond a critical Reynolds number Re ∼ 40 the wake becomes unstable, leading to the well-known von Kármán vortex street. The eddy formation remains laminar until it is gradually replaced by turbulent vortex shedding at higher Re. The shear layers remain laminar until Re ∼ 400, beyond which the transition to turbulence takes place up to Re = 10^5 - this regime, referred to as the sub-critical regime, is the focus of this paper.
The transitional nature of the wake flow in the sub-critical regime, especially in the shear layers is a challenging problem for turbulence modelling and hence has attracted a lot of attention. The fragile stability of the shear layers leads to more or less delayed roll-up into von Kármán vortices and shorter or longer vortex formation regions. As a consequence significant discrepancies have been observed in near wake quantities both for numerical simulations [START_REF] Ma | Dynamics and lowdimensionality of a turbulent near wake[END_REF] and experiments [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF].
Within the sub-critical regime, 3900 has been established as a benchmark Re. The study of [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] provides accurate experimental data-set showing good agreement with previous numerical studies contrary to early experimental datasets [START_REF] Beaudan | Numerical experiments on the flow past a circular cylinder at sub-critical Reynolds number[END_REF]. The early experiments of Lourenco and Shih (9) obtained a V-shaped mean streamwise velocity profile in the near wake contrary to the U-shaped profile obtained by [START_REF] Beaudan | Numerical experiments on the flow past a circular cylinder at sub-critical Reynolds number[END_REF]. The discrepancy was attributed to inaccuracies in the experiment -a fact confirmed by the studies of [START_REF] Mittal | Suitability of Upwind-Biased Finite Difference Schemes for Large-Eddy Simulation of Turbulent Flows[END_REF] and (4). Parnaudeau et al.'s [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] experimental database, which obtains the U-shaped mean profile in the near wake, is thus becoming useful for numerical validation studies. With increasing computation power, the LES data sets at Re = 3900 have been further augmented with DNS studies performed by [START_REF] Ma | Dynamics and lowdimensionality of a turbulent near wake[END_REF].
The transitional nature of the flow combined with the availability of validated experimental and numerical data-sets at Re = 3900 makes this an ideal flow for model development and comparison. The LES model parametrisation controls the turbulent dissipation. A good SGS model should ensure a suitable dissipation mechanism. The standard Smagorinsky model [START_REF] Smagorinsky | General circulation experiments with the primitive equations[END_REF], based on an equilibrium between turbulence production and dissipation, has a tendency to overestimate dissipation in general [START_REF] Meyers | On the model coefficients for the standard and the variational multi-scale Smagorinsky model[END_REF]. In transitional flows, where the dissipation is weak, such an SGS model leads to laminar regimes, for example in the shear layers of the cylinder wake. Different modifications of the model have been proposed to correct this behaviour. As addressed by [START_REF] Meyers | On the model coefficients for the standard and the variational multi-scale Smagorinsky model[END_REF], who introduced relevant improvements, the model coefficients exhibit a strong dependency both on the ratio between the integral length scale and the LES filter width, and on the ratio between the LES filter width and the Kolmogorov scale. In this context of SGS models, coarse LES remains a challenging issue.
The motivation for coarse LES is dominated by the general interest towards reduced computational cost which could pave the way for performing higher Re simulations, sensitivity analyses, and Data Assimilation (DA) studies. DA has gathered a lot of focus recently with the works of ( 13), [START_REF] Gronskis | Inflow and initial conditions for direct numerical simulation based on adjoint data assimilation[END_REF], and [START_REF] Yang | Enhanced ensemble-based 4dvar scheme for data assimilation[END_REF] but still remains limited by computational requirement.
With the focus on coarse resolution, this study analyses the performance of LES models for transitional wake flow at Re = 3900. The models under location uncertainty (16; 17) are analysed in depth for their performance at a coarse resolution and compared with classical models. The models are so called as the equations are derived assuming that the location of a fluid parcel is known only up to a random noise, i.e. a location uncertainty. Within this reformulation of the Navier-Stokes equations, the contribution of the subgrid-scale random component is split into an inhomogeneous turbulent diffusion and a velocity bias which corrects the advection due to the resolved velocity field. Such a scheme has been shown to perform well on the Taylor-Green vortex flow [START_REF] Harouna | Stochastic representation of the Reynolds transport theorem: revisiting large-scale modeling[END_REF] at Reynolds numbers of 1600, 3000, and 5000. The new scheme was shown to outperform the established dynamic Smagorinsky model, especially at higher Re. However, this flow is associated with an almost isotropic turbulence and no comparison with data is possible (as it is a purely numerical flow). Here we wish to assess the model skills with respect to more complex situations (with laminar, transient and turbulent areas) and coarse resolution grids. We also provide a physical analysis of the computed solutions and compare them with classical LES schemes and experimental data. Although the models are applied to a specific Reynolds number, the nature of the flow generalises the applicability of the results to a wide range of Reynolds numbers, from 10^3 to 10^5, i.e. up to the pivotal point where the transition into turbulence of the boundary layer starts at the wall of the cylinder. The goal is to show the ability of such new LES approaches for coarse-resolution simulation of a wake flow in the subcritical regime. Note that recently, for the same flow configuration, [START_REF] Resseguier | Stochastic modelling and diffusion modes for proper orthogonal decomposition models and small-scale flow analysis[END_REF] have derived the MULU in a reduced-order form using Proper Orthogonal Decomposition (POD), successfully providing physical interpretations of the local corrective advection and diffusion terms. The authors showed that the near-wake regions, like the pivotal zone of the shear layers rolling into vortices, are key players in the modelling of the action of the small-scale unresolved flow on the resolved flow.
In the following, we will show that the MULU are able to capture, in the context of coarse simulation, the essential physical mechanisms of the transitional very near wake flow. This is due to the split of the SGS contribution into directional dissipation and velocity bias. The next section elaborates on the various SGS models analysed in this study followed by a section on the flow configuration and numerical methods used. A comparison of the elaborated models and the associated physics is provided in the results section. Finally, a section of concluding remarks follows.
Models under location uncertainty
General classical models such as the Smagorinsky or the Wall-Adapting Local Eddy-viscosity (WALE) model proceed through a deterministic approach towards modelling the SGS dissipation tensor. However, [START_REF] Mémin | Fluid flow dynamics under location uncertainty[END_REF] suggests a stochastic approach towards modelling the SGS contributions in the Navier-Stokes (NS) equation. Building a stochastic NS formulation can be achieved via various methods. The simplest way consists in considering an additional additive random forcing [START_REF] Bensoussan | Equations stochastiques du type Navier-Stokes[END_REF]. Other approaches considered the introduction of fluctuations in the subgrid models (21; 22). Also, in the wake of Kraichnan's work [START_REF] Kraichnan | The structure of isotropic turbulence at very high Reynolds numbers[END_REF], another choice consisted in closing the large-scale flow in Fourier space from a Langevin equation (24; 25; 26). Lagrangian models based on a Langevin equation in physical space have also been successfully proposed for turbulent dispersion [START_REF] Sawford | Generalized random forcing in random-walk models of turbu[END_REF] or for probability density function (PDF) modelling of turbulent flows (28; 29; 30). These attractive models for particle-based representation of turbulent flows are nevertheless not well suited to large-scale Eulerian modelling.
In this work we rely on a different stochastic framework of the NS equation recently derived from the splitting of the Lagrangian velocity into a smooth component and a highly oscillating random velocity component (i.e. the uncertainty in the parcel location expressed as velocity) [START_REF] Mémin | Fluid flow dynamics under location uncertainty[END_REF]:
\frac{dX_t}{dt} = u(X_t, t) + \sigma(X_t, t)\,\dot{B}
The first term on the right-hand side represents the large-scale smooth velocity component, while the second term is the small-scale component. This latter term is a random field defined from a Brownian term function Ḃ = dB/dt and a diffusion tensor σ. The small-scale component decorrelates rapidly at the resolved time scale, with spatial correlations (which might be inhomogeneous and non-stationary) fixed through the diffusion tensor. It is associated with a covariance tensor:
Q_{ij}(x, y, t, t') = \mathbb{E}\big[(\sigma(x,t)\,dB_t)(\sigma(y,t')\,dB_{t'})^T\big]_{ij} = c_{ij}(x, y, t)\,\delta(t - t')\,dt. \qquad (1)
In the following the diagonal of the covariance tensor, termed here the variance tensor, plays a central role; it is denoted as a(x) = c(x, t). This tensor is a (3 × 3) symmetric positive definite matrix with the dimension of a viscosity, in m^2 s^{-1}. With such a decomposition, the rate of change of a scalar within a material volume is given through a stochastic representation of the Reynolds Transport Theorem (RTT) (16; 17). For an incompressible small-scale random component (∇·σ = 0) the RTT has the following expression:
d\int_{V(t)} q = \int_{V(t)} \Big[ d_t q + \Big( \nabla\cdot(q\, u^{\star}) - \frac{1}{2}\sum_{i,j=1}^{d} \partial_{x_i}\big(a_{ij}\,\partial_{x_j} q\big) \Big) dt + \nabla q \cdot \sigma dB_t \Big]\, dx. \qquad (2)
where the effective advection u^{\star} is defined as:
u^{\star} = u - \frac{1}{2}\nabla\cdot a. \qquad (3)
The first term on the right-hand side represents the variation of quantity q with respect to time: d t q = q(x, t+dt)-q(x, t). It is similar to the temporal derivative. It is important here to quote that q is a non differentiable random function that depends among other things on the particles driven by the Brownian component and flowing through a given location. The second term on the right-hand side stands for the scalar transport by the largescale velocity. However, it can be noticed that this scalar advection is not purely a function of the large-scale velocity. Indeed, the large-scale velocity is here affected by the inhomogeneity of the small-scale component through a modified large-scale advection (henceforth termed as velocity bias u ), where the effect of the fluctuating component is taken into account via the small-scale velocity auto-correlations a = (σσ T ). A similar modification of the large-scale velocity was also suggested in random walks Langevin models by [START_REF] Macinnes | Stochastic particle dispersion modeling and the tracer-particle limit[END_REF] who studied various stochastic models for particle dispersion -they concluded that an artificially introduced bias velocity to counter particle drift was necessary to optimise the models for a given flow. In the framework of modelling under location uncertainty, this term appears automatically. The third term in the stochastic RTT corresponds to a diffusion contribution due to the small-scale components. This can be compared with the SGS dissipation term in LES Modelling. This dissipation term corresponds to a generalization of the classical SGS dissipation term, which ensues in the usual context from the Reynolds decomposition and the Boussinesq's eddy viscosity assumption. Here it figures the mixing effect exerted by the smallscale component on the large-scale component. Despite originating from a very different construction, in the following, for ease of reference, we keep designating this term as the SGS contribution. The final term in the equation is the direct scalar advection by the small-scale noise.
It should be noted that the RTT corresponds to the differential of the composition of two stochastic processes. The Itô formula, which is restricted to deterministic functions of a stochastic process, does not apply here. An extended formula known as the Itô-Wentzell (or generalized Itô) formula must be used instead [START_REF] Kunita | Stochastic Flows and Stochastic Differential Equations[END_REF].
Using the Stochastic RTT, the large-scale flow conservation equations can be derived (for the full derivation please refer to (16; 17)). The final conservation equations are presented below:
Mass conservation:
d_t\rho + \nabla\cdot(\rho\, u^{\star})\,dt + \nabla\rho\cdot\sigma\, dB_t = \frac{1}{2}\nabla\cdot(a\nabla\rho)\,dt, \qquad (4)
which simplifies to the following constraints for an incompressible fluid:
\nabla\cdot\sigma = 0, \qquad \nabla\cdot u^{\star} = 0, \qquad (5)
The first constraint maintains a divergence free small-scale velocity field, while the second imposes the same for the large smooth effective component.
We observe that the large-scale component, u, is allowed to be diverging, with a divergence given by ∇·∇·a. As we shall see, this value is in practice quite low. This weak incompressibility constraint results in a modified pressure computation, which is numerically not difficult to handle. Imposing instead a stronger incompressibility constraint on u introduces an additional cumbersome constraint on the variance tensor (∇·∇·a = 0). In this work we will rely on the weak form of the incompressibility constraint. The large-scale momentum equation boils down to a large-scale deterministic equation after separation between the bounded variation terms (i.e. terms in "dt") and the Brownian terms, which is rigorously authorized due to the uniqueness of this decomposition. Momentum conservation:
\rho\Big(\partial_t u + u\nabla^T\big(u - \tfrac{1}{2}\nabla\cdot a\big) - \frac{1}{2}\sum_{ij}\partial_{x_i}\big(a_{ij}\,\partial_{x_j}u\big)\Big) = \rho g - \nabla p + \mu\Delta u. \qquad (6)
Similar to the deterministic version of the NS equation, we have the flow material derivative, the forces, and the viscous dissipation. The difference lies in the modification of the advection, which includes the velocity bias, and the presence of the dissipation term, which can be compared with the SGS term present in the filtered NS equation. Both additional terms present in the stochastic version are computed via the auto-correlation tensor a. Thus, to perform an LES, one needs to model, either directly or through the small-scale noise, the auto-correlation tensor. Two methodologies can be envisaged towards this: the first would be to model the stochastic small-scale noise (σ(X_t, t)Ḃ) and thus evaluate the auto-correlation tensor. We term such an approach purely 'stochastic LES'. The second method would be to model the auto-correlation tensor directly, as it encompasses the total contribution of the small scales. This method can be viewed as a form of 'deterministic LES' using stochastically derived conservation equations, and this is the approach followed in this paper. The crux of the 'deterministic LES' approach thus revolves around the characterisation of the auto-correlation tensor. The small-scale noise is considered subsumed within the mesh and is not defined explicitly. This opens up various possibilities for turbulence modelling. The specification of the variance tensor a can be performed through an empirical local velocity fluctuation variance times a decorrelation time, or by physical models/approximations, or using experimental measurements. The options explored in this study include physical approximation based models and empirical local variance based models, as described below. Note that this derivation can be applied to any flow model. For instance, such a modelling has been successfully applied to derive stochastic large-scale representations of geophysical flows by (17; 33; 34).
A similar stochastic framework arising also from a decomposition of the Lagrangian velocity has been proposed in [START_REF] Holm | Variational principles for stochastic fluid dynamics[END_REF] and analysed in [START_REF] Crisan | Solution properties of a 3d stochastic Euler fluid equation[END_REF] and [START_REF] Cotter | Stochastic partial differential fluid equations as a diffusive limit of deterministic Lagrangian multi-time dynamics[END_REF]. This framework leads to enstrophy conservation whereas the formulation under location uncertainty conserves the kinetic energy of a transported scalar [START_REF] Resseguier | Geophysical flows under location uncertainty, part I: Random transport and general models[END_REF].
Physical approximation based models:
Smagorinsky's work on atmospheric flows and the corresponding model development is considered to be the pioneering work on LES modelling [START_REF] Smagorinsky | General circulation experiments with the primitive equations[END_REF]. Based on Boussinesq's eddy-viscosity hypothesis, which postulates that the momentum transfer caused by turbulent eddies can be modelled by an eddy viscosity (ν_t), combined with Prandtl's mixing-length hypothesis, he developed a model (Smag) for characterising the SGS dissipation.
ν t = C||S||, (7)
τ = C||S||S, (8)
where τ stands for the SGS stress tensor, C is the Smagorinsky coefficient defined as (C s ∆) 2 , where ∆ is the LES filter width,
\|S\| = \frac{1}{2}\Big[\sum_{ij}\big(\partial_{x_i}u_j + \partial_{x_j}u_i\big)^2\Big]^{1/2}
is the Frobenius norm of the rate of strain tensor, and
S_{ij} = \frac{1}{2}\Big(\frac{\partial\bar{u}_i}{\partial x_j} + \frac{\partial\bar{u}_j}{\partial x_i}\Big). \qquad (9)
Similar to Smagorinsky's eddy viscosity model, the variance tensor for the formulation under location uncertainty can also be specified using the strain rate tensor. Termed in the following as the Stochastic Smagorinsky model (StSm), it specifies the variance tensor similar to the eddy viscosity in the Classical Smagorinsky model:
a(x, t) = C||S||I 3 , (10)
where I 3 stands for 3 × 3 identity matrix and C is the Smagorinsky coefficient.
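For illustration, a minimal NumPy sketch of evaluating ||S|| and this isotropic variance specification on a uniform grid is given below; it uses second-order central differences and a fixed coefficient, unlike the sixth-order compact schemes of the actual solver, so it is a sketch rather than the paper's implementation.

```python
import numpy as np

def strain_norm(u, v, w, dx, dy, dz):
    """Frobenius norm ||S|| of the rate-of-strain tensor on a uniform grid,
    following the definition used in the text (conventions in the literature
    may differ by a factor sqrt(2)). Second-order central differences only."""
    grads = [np.gradient(c, dx, dy, dz) for c in (u, v, w)]  # grads[i][j] = du_i/dx_j
    s2 = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            s_ij = 0.5 * (grads[i][j] + grads[j][i])
            s2 = s2 + s_ij ** 2
    return np.sqrt(s2)

def stsm_variance(u, v, w, dx, dy, dz, c_s=0.1):
    """StSm specification a = (C_s Delta)^2 ||S|| I_3; only the scalar field
    multiplying the identity is returned."""
    delta = (dx * dy * dz) ** (1.0 / 3.0)    # filter width taken as the cell size
    return (c_s * delta) ** 2 * strain_norm(u, v, w, dx, dy, dz)
```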
The equivalency between the two models can be obtained in the following case (as shown by ( 16)):
The SGS contribution (effective advection and SGS dissipation) for the StSm model is:
u_j\,\partial_{x_j}\big(\partial_{x_j}a_{kj}\big) + \sum_{ij}\partial_{x_i}\big(a_{ij}\,\partial_{x_j}u_k\big) = u_j\,\partial_{x_j}\big(\partial_{x_j}\|S\|\,\delta_{kj}\big) + \sum_{ij}\partial_{x_i}\big(\|S\|\,\delta_{ij}\,\partial_{x_j}u_k\big) = u_k\,\Delta\|S\| + \|S\|\,\Delta u_k + \sum_j \partial_{x_j}\|S\|\,\partial_{x_j}u_k, \qquad (11)
and the SGS contribution for Smagorinsky model (∇ • τ ) is:
\nabla\cdot\tau = \sum_j \partial_{x_j}\big(\|S\|S\big) = \sum_j \partial_{x_j}\big(\|S\|(\partial_{x_j}u_k + \partial_{x_k}u_j)\big) = \sum_j \partial_{x_j}\|S\|\,\partial_{x_j}u_k + \partial_{x_j}\|S\|\,\partial_{x_k}u_j + \|S\|\,\Delta u_k. \qquad (12)
An equivalency can be drawn between the two models by adding \sum_j \partial_{x_j}\|S\|\,\partial_{x_k}u_j - u_k\Delta\|S\| to the StSm model. The additional term may also be written as:
\partial_{x_k}\sum_j \partial_{x_j}\big(\|S\|\big)u_j - \sum_j \partial_{x_j}\partial_{x_k}\big(\|S\|\big)u_j - u_k\,\Delta\|S\|, \qquad (13)
where the first term represents a velocity gradient which can be included within a modified pressure term, as is employed for the Smagorinsky model. The other two terms can be neglected for a smooth enough strain-rate tensor. For smooth deformations both models are thus equivalent in terms of dissipation. It is important to note here that even if the effective advection is ignored in the StSm model, the two models still differ in the general case due to the first two terms in (13). Smagorinsky's pioneering work remains to date a popular model for LES; however, it has certain associated drawbacks. The model assumes the existence of an equilibrium between the kinetic energy flux across scales and the large scales of turbulence - this equilibrium is not established in many cases, such as the leading edge of an airplane wing or turbulence with strong buoyancy. In addition, a form of arbitrariness and user dependency is introduced due to the presence of the Smagorinsky coefficient. This coefficient is assumed to be constant irrespective of position and time. Lilly [START_REF] Lilly | The representation of small scale turbulence in numerical simulation experiments[END_REF] suggests a constant coefficient value to be appropriate for the Smagorinsky model. However, this was disproved by the works of [START_REF] Meyers | On the model coefficients for the standard and the variational multi-scale Smagorinsky model[END_REF], and ( 39) who show that a constant value does not efficiently capture turbulence, especially in boundary layers.
Numerous attempts were made to correct for the fixed constant such as damping functions [START_REF] Van Driest | The problem of aerodynamic heating[END_REF] or renormalisation group theory [START_REF] Yakhot | Renormalization group analysis of turbulence. I. Basic theory[END_REF]. Germano et al. [START_REF] Germano | A dynamic subgrid-scale eddy viscosity model[END_REF] provided a non ad-hoc manner of calculating the Smagorinsky coefficient varying with space and time using the Germano identity and an additional test filter t (termed as the Dynamic Smagorinsky (DSmag) model).
L ij = T ij -τ t ij , (14)
where τ stands for the SGS stress filtered by the test filter t, T is the filtered SGS stress calculated from the test filtered velocity field, and L stands for the resolved turbulent stress. The Smagorinsky coefficient can thus be calculated as:
C_s^2 = \frac{\langle L_{ij} M_{ij}\rangle}{\langle M_{ij} M_{ij}\rangle}, \quad\text{where} \qquad (15)
M_{ij} = -2\Delta^2\Big(\alpha^2\,\|\bar{S}^t\|\,\bar{S}^t_{ij} - \big(\|\bar{S}\|\,\bar{S}_{ij}\big)^t\Big) \qquad (16)
and α stands for the ratio between the test filter and the LES filter. The dynamic update procedure removes the user dependency aspect of the model; however, it introduces unphysical values for the coefficient at certain instances. An averaging procedure along a homogeneous direction is necessary to provide physical values for C_s. However, most turbulent flows, foremost being the wake flow around a cylinder, lack a homogeneous direction for averaging. In such cases, defining the coefficient is difficult and needs ad-hoc measures such as local averaging, threshold limitation, and/or filtering methods to provide nominal values for C_s. For the present study, we focus on the classical and dynamic variations of the Smagorinsky model - these models were used to study cylinder wake flow by ( 8), ( 42), [START_REF] Ouvrard | Classical and variational multiscale LES of the flow around a circular cylinder on unstructured grids[END_REF], among many others.
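The kind of averaging and clipping described above can be sketched as follows; the spanwise axis index and the clipping bound are assumptions, not the exact settings used in the present simulations.

```python
import numpy as np

def dynamic_coefficient(l_ij, m_ij, span_axis=2, c2_max=0.04):
    """Germano/Lilly coefficient C_s^2 = <L_ij M_ij> / <M_ij M_ij>.

    l_ij, m_ij: arrays of shape (3, 3, nx, ny, nz) holding the resolved stress
    L and the model tensor M of eqs. (14)-(16), already test-filtered (a simple
    box test filter would be one possible choice)."""
    num = np.sum(l_ij * m_ij, axis=(0, 1))        # contraction L_ij M_ij
    den = np.sum(m_ij * m_ij, axis=(0, 1))        # contraction M_ij M_ij
    # spanwise averaging in place of a true homogeneous direction
    num = num.mean(axis=span_axis, keepdims=True)
    den = den.mean(axis=span_axis, keepdims=True)
    c2 = num / np.maximum(den, 1e-30)
    return np.clip(c2, 0.0, c2_max)               # threshold negative / large values
```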
Local variance based models:
As the name states, the variance tensor can be calculated by an empirical covariance of the resolved velocity within a specified local neighbourhood. The neighbourhood can be spatially or temporally located giving rise to two formulations. A spatial neighbourhood based calculation (referred to as Stochastic Spatial Variance model (StSp)) is given as:
a(x, n\delta t) = \frac{1}{|\Gamma| - 1}\sum_{x_i\in\eta(x)}\big(u(x_i, n\delta t) - \bar{u}(x, n\delta t)\big)\big(u(x_i, n\delta t) - \bar{u}(x, n\delta t)\big)^T\, C_{sp}, \qquad (17)
where ū(x, nδt) stands for the empirical mean around the arbitrarily selected local neighbourhood defined by Γ. The constant C sp is defined as [START_REF] Harouna | Stochastic representation of the Reynolds transport theorem: revisiting large-scale modeling[END_REF]:
C_{sp} = \Big(\frac{\ell_{res}}{\eta}\Big)^{5/3}\Delta t, \qquad (18)
where \ell_{res} is the resolved length scale, η is the Kolmogorov length scale and ∆t is the simulation time step. A similar local variance based model can be envisaged in the temporal framework; however, it has not been analysed in this paper due to memory limitations.
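A direct (unoptimised) sketch of this spatial-variance estimate is given below; the window size matches the 7 × 7 × 7 neighbourhood quoted in the numerical set-up section, while the treatment of boundary points is an assumption.

```python
import numpy as np

def local_variance_tensor(u, v, w, half=3):
    """Empirical variance tensor a(x) of eq. (17) from a (2*half+1)^3
    neighbourhood of the resolved velocity (loops kept explicit for clarity;
    a production version would vectorise or use running sums)."""
    vel = np.stack((u, v, w), axis=-1)              # shape (nx, ny, nz, 3)
    nx, ny, nz, _ = vel.shape
    a = np.zeros((nx, ny, nz, 3, 3))
    for i in range(half, nx - half):
        for j in range(half, ny - half):
            for k in range(half, nz - half):
                patch = vel[i-half:i+half+1, j-half:j+half+1, k-half:k+half+1]
                samples = patch.reshape(-1, 3)       # |Gamma| velocity samples
                a[i, j, k] = np.cov(samples, rowvar=False)  # unbiased, 1/(|Gamma|-1)
    return a   # still to be multiplied by the constant C_sp of eq. (18)
```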
It is important to note that the prefix stochastic has been added to the MULU to differentiate the MULU version of the Smagorinsky model from its classical purely deterministic version. The model equations while derived using stochastic principles are applied in this work in a purely deterministic sense. The full stochastic formulation of MULU has been studied by [START_REF] Resseguier | Geophysical flows under location uncertainty, part I: Random transport and general models[END_REF].
Flow configuration and numerical methods
The flow was simulated using a parallelised flow solver, Incompact3d, developed by [START_REF] Laizet | High-order compact schemes for incompressible flows: A simple and efficient method with quasispectral accuracy[END_REF]. Incompact3d relies on a sixth order finite difference scheme (the discrete schemes are described in [START_REF] Lele | Compact finite difference schemes with spectral-like resolution[END_REF]) and the Immersed Boundary Method (IBM) (for more details on IBM refer to [START_REF] Gautier | A DNS study of jet control with microjets using an immersed boundary method[END_REF]) to emulate a body forcing. The main advantage of using IBM is the ability to represent the mesh in cartesian coordinates and the straightforward implementation of high-order finite difference schemes in this coordinate system. The IBM in Incompact3d has been applied effectively to cylinder wake flow by [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] and to other flows by [START_REF] Gautier | A DNS study of jet control with microjets using an immersed boundary method[END_REF], and (47) among others. A detailed explanation of the IBM as applied in Incompact3d, as well as its application to cylinder wake flow can also be found in [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF]. It is important to note that this paper focuses on the accuracy of the sub-grid models within the code and not on the numerical methodology (IBM/numerical schemes) of the code itself.
The incompressibility condition is treated with a fractional step method based on the resolution of a Poisson equation in spectral space on a staggered pressure grid combined with IBM. While solving the Poisson equation for the stochastic formulation, the velocity bias was taken into account in order to satisfy the stochastic mass conservation constraints. It can be noted that although the solution of the Poisson equation in physical space is computationally heavy, the same when performed in Fourier space is cheap and easily implemented with Fast Fourier transforms. For more details on Incompact3d the authors refer you to [START_REF] Laizet | High-order compact schemes for incompressible flows: A simple and efficient method with quasispectral accuracy[END_REF] and [START_REF] Laizet | Incompact3d: A powerful tool to tackle turbulence problems with up to O(105) computational cores[END_REF].
The flow over the cylinder is simulated for a Re of 3900 on a domain measuring 20D × 20D × πD. The cylinder is placed in the centre of the lateral domain at 10D and at 5D from the domain inlet. For statistical purposes, the centre of the cylinder is assumed to be (0, 0). A coarse mesh resolution of 241 × 241 × 48 is used for the coarse LES (cLES). cLES discretisation has been termed as coarse as this resolution is ∼ 6.2% the resolution of the reference LES of (7) (henceforth referred to as LES -Parn). In terms of Kolmogorov units (η), the mesh size for the cLES is 41η × 7η -60η × 32η. The Kolmogorov length scale has been calculated based on the dissipation rate and viscosity, where the dissipation rate can be estimated as ∼ U 3 /L where U and L are the characteristic velocity scale and the integral length scale. A size range for y is used due to mesh stretching along the lateral (y) direction which provides a finer mesh in the middle. Despite the stretching, the minimum mesh size for the cLES is still larger than the mesh size of particle imagery velocimetry (PIV) reference measurements of (7) (henceforth referred to as PIV -Parn). For all simulations, inflow/outflow boundary condition is implemented along the streamwise (x) direction with free-slip and periodic boundary conditions along the lateral (y) and spanwise (z) directions respectively -the size of the spanwise domain has been fixed to πD as set by [START_REF] Beaudan | Numerical experiments on the flow past a circular cylinder at sub-critical Reynolds number[END_REF], which was also validated by [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] to be sufficient with periodic boundary conditions. The turbulence is initiated in the flow by introducing a white noise in the initial condition. Time advancement is performed using the third order Adam-Bashforth scheme. A fixed coefficient of 0.1 is used for the Smagorinsky models as suggested in literature (43) while a spatial neighbourhood of 7 × 7 × 7 is used for the Stochastic Spatial model. For the dynamic Smagorinsky model, despite the lack of clear homogenous direction, a spanwise averaging is employed. In addition, the constant is filtered and a threshold on negative and large positive coefficients is also applied to stabilise the model. Note that the positive threshold is mesh dependant and needs user-intervention to specify the limits.
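The mesh sizes quoted in Kolmogorov units follow from this order-of-magnitude estimate, which can be reproduced as below; taking U = 1 and L = D = 1 in non-dimensional units is an assumption consistent with the set-up described here.

```python
# Rough Kolmogorov-scale estimate used to express the cLES mesh in eta units.
Re = 3900.0
U, L = 1.0, 1.0          # non-dimensional inflow velocity and cylinder diameter
nu = U * L / Re          # kinematic viscosity from the Reynolds number
eps = U ** 3 / L         # dissipation-rate estimate, epsilon ~ U^3 / L
eta = (nu ** 3 / eps) ** 0.25
dx = 20.0 / 240.0        # uniform streamwise spacing of the 241-point cLES grid
print(eta, dx / eta)     # eta ~ 2e-3 D, i.e. dx ~ 40 eta, consistent with the text
```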
The reference PIV [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] was performed with a cylinder of diameter 12 mm and 280 mm in length placed 3.5D from the entrance of the testing zone in a wind tunnel of length 100 cm and height 28 cm. Thin rectangular end plates placed 240 mm apart were used with a clearance of 20 mm between the plates and the wall. 2D2C measurements were carried out at a free stream velocity of 4.6 m s -1 (Re ∼ 3900) recording 5000 image pairs separated by 25 µs with a final interrogation window measuring 16 × 16 pixels with relatively weak noise. For more details about the experiment refer to [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF].
The high-resolution LES of (7) was performed with Incompact3d on the same domain measuring 20D × 20D × πD with 961 × 961 × 48 Cartesian mesh points. The simulation used the structure-function model of (49) with a constant mesh size. LES -Parn is well resolved; however, there is a distinct statistical mismatch between LES -Parn and PIV -Parn, especially along the centre-line (see figure 1a and figure 1b). The literature suggests that the wake behind the cylinder at Re ∼ 3900 is highly volatile, and different studies predict slightly different profiles for the streamwise velocity along the centre-line. The averaging time period, the type of model, and the mesh type all affect the centre-line velocity profile. As can be seen in figure 1a and figure 1b, each reference data set predicts a different profile/magnitude for the streamwise velocity. The DNS study of [START_REF] Ma | Dynamics and lowdimensionality of a turbulent near wake[END_REF] does not present the centreline velocity profiles. This provided the motivation for performing a DNS study at Re ∼ 3900 to accurately quantify the velocity profiles and to reduce the mismatch between the existing experimental and simulation datasets. The DNS was performed on the same domain with 1537 × 1025 × 96 Cartesian mesh points using Incompact3d, with stretching implemented in the lateral (y) direction.
From figure 1a we can see that the DNS and the PIV of Parnaudeau are the closest match among the data sets, while significant deviation is seen in the other statistics. For the fluctuating streamwise velocity profiles, the only other data sets that exist are those of (50), who performed Laser Doppler Velocimetry (LDV) experiments at Re = 3000 and Re = 5000. Among the remaining data sets (LES of Parnaudeau, PIV of Parnaudeau, and the current DNS), matching profiles are observed for the DNS and PIV despite a difference in magnitude. These curves also match the profiles obtained by the experiments of Norberg (50) in shape, i.e. the similar-magnitude, dual-peak nature. The LES of Parnaudeau [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] is the only data set to estimate an inflection point and hence is not considered further as a reference. The lower-energy profile of the PIV may be attributed to the methods used for calculating the vector fields, which employ a large-scale representation of the flow via interrogation windows similar to a LES resolution [START_REF] Corpetti | Fluid experimental flow estimation based on an optical-flow scheme[END_REF]. The DNS, however, exhibits a profile similar to the other references and a magnitude in between the two LDV experiments of Norberg. Considering the intermediate Reynolds number of the DNS compared to the Norberg experiments, this suggests good convergence and accuracy of the DNS statistics. Note that the cLES mesh is ∼ 0.46% the cost of the DNS. Table 1 concisely depicts all the important parameters for the flow configuration as well as the reference datasets. Wake flow around a cylinder was simulated in the above enumerated configuration with the following SGS models: Classic Smagorinsky (Smag), Dynamic Smagorinsky (DSmag), Stochastic Smagorinsky (StSm), and Stochastic Spatial (StSp) variance. In accordance with the statistical comparison performed by [START_REF] Beaudan | Numerical experiments on the flow past a circular cylinder at sub-critical Reynolds number[END_REF], first- and second-order temporal statistics have been compared at 3 locations (x = 1.06D (top), x = 1.54D (middle), and x = 2.02D (bottom)) in the wake of the cylinder. All cLES statistics are computed (after an initial convergence period) over 90,000 time steps corresponding to 270 non-dimensional time units or ∼ 54 vortex shedding cycles. All statistics are also averaged along the spanwise (z) direction. The model statistics are evaluated against the PIV experimental data of [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] and the DNS, for which the data have been averaged over 400,000 time steps corresponding to 60 vortex shedding cycles. The work of [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] suggests that at least 52 vortex sheddings are needed for convergence, which is satisfied for all the simulations. In addition, spanwise averaging of the statistics results in converged statistics comparable with the PIV ones. Both DNS and PIV statistics are provided for all statistical comparisons; however, the DNS is used as the principal reference when an ambiguity exists between the two references.
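A sketch of how such first- and second-order statistics can be accumulated is given below. It assumes the velocity snapshots are available as NumPy arrays of shape (nt, nx, ny, nz) and simply averages over time and the spanwise direction; the actual post-processing of the simulations may differ.

```python
import numpy as np

def wake_statistics(u, v):
    """First/second-order statistics from snapshots u, v with shape (nt, nx, ny, nz).

    Averaging is done over time and the spanwise (z) direction, as for the cLES data.
    """
    axes = (0, 3)                                   # time and spanwise axes
    up = u - u.mean(axis=axes, keepdims=True)       # fluctuations
    vp = v - v.mean(axis=axes, keepdims=True)
    return {
        "u_mean": u.mean(axis=axes),
        "v_mean": v.mean(axis=axes),
        "uu": (up * up).mean(axis=axes),            # <u'u'>
        "vv": (vp * vp).mean(axis=axes),            # <v'v'>
        "uv": (up * vp).mean(axis=axes),            # <u'v'>
    }
```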
Results
In this section, we present the model results, performance analysis and physical interpretations. First, the cLES results are compared with the reference PIV and the DNS. The focus on centreline results for certain comparisons is to avoid redundancy and because these curves show the maximum statistical deviation. This is followed by a characterisation and physical analysis of the velocity bias and SGS contributions for the MULU. The section is concluded with the computational costs of the different models.
Coarse LES
For the cLES, the MULU have been compared with the classic and dynamic versions of the Smagorinsky model, the DNS, and PIV -Parn. Figure 2 and figure 3 depict the mean streamwise and lateral velocity respectively, plotted along the lateral (y) direction. In the mean streamwise velocity profile (see figure 2a), the velocity deficit behind the cylinder, depicted via the U-shaped profile, is captured by all models. The expected downstream transition from a U-shaped to a V-shaped profile is seen for all the models -a delay in transition is observed for the Smag model, which biases its statistics at x = 1.54D and 2.02D. For the mean lateral component (see figure 3), all models display the anti-symmetric quality with respect to y = 0. The Smag model shows the maximum deviation from the reference DNS statistics in all observed profiles. All models but Smag capture the profile well, while broadly the StSp and DSmag models better capture the magnitude. As a general trend, the Smag model can be seen to under-predict the statistics while the StSm model over-predicts them. A better understanding of the model performance can be obtained through figures 4-6, which depict the second-order statistics, i.e. the rms components of the streamwise (< u u >) and lateral (< v v >) velocity fluctuations and the cross-component (< u v >) fluctuations. The transitional state of the shear layer can be seen in the reference statistics through the two strong peaks at x = 1.06D in figure 4a. The magnitude of these peaks is in general under-predicted; however, the best estimate is given by the MULU. The DSmag and Smag models can be seen to under-predict these peaks at all x/D. This peak is eclipsed by a stronger peak further downstream due to the formation of the primary vortices (see figure 4b), which is captured by all the models.
The maxima at the centreline in figure 5 and the anti-symmetric structure in figure 6 are seen for all models. Significant mismatch is observed between the reference and the Smag/StSm models, especially in figures 5a and 6a. In all second-order statistics, the StSm model estimates improve as we move further downstream. No such trend is seen for the StSp or DSmag models, while a constant under-prediction is seen for all Smag model statistics. This under-prediction could be due to the inherent over-dissipativeness of the Smagorinsky model, which smooths the velocity field. This is corrected by the DSmag/StSm models and in some instances over-corrected by the StSm model. A more detailed analysis of the two formulations under location uncertainty (StSm and StSp) is presented in section 4.2.
The smoothing for each model is better observed in the 3D isocontours of vorticity modulus (Ω) plotted in figure 7. Plotted at non-dimensional Ω = 7, the isocontours provide an understanding of the dominant vortex structures within the flow. While large-scale vortex structures are observed in all flows, the small-scale structures and their spatial extent seen in the DNS are better represented by the MULU. The over-dissipativeness of the Smag model leads to smoothed isocontours with reduced spatial extent. The large-scale vortex structures behind the cylinder exhibit the spanwise periodicity observed by Williamson (53) for cylinder wake flow at low Re ∼ 270. Inferred to be due to mode B instability by Williamson, this spanwise periodicity was associated with the formation of small-scale streamwise vortex pairs. It is interesting to observe here the presence of similar periodicity at higher Re -this periodicity will be further studied at a later stage in this paper.
A stable shear layer associated with higher dissipation is observed for the Smag model, with the shear-layer instabilities beginning further downstream than for the MULU. An accurate shear-layer comparison can be made by calculating the recirculation length (L r ) behind the cylinder. Also called the bubble length, it is the distance between the base of the cylinder and the point with null longitudinal mean velocity on the centreline of the wake. This parameter has been extensively studied due to its strong dependence on external disturbances in experiments and on numerical methods in simulations (54; 4). The effective capture of the recirculation length leads to the formation of a U-shaped velocity profile in the near wake, while the presence of external disturbances can lead to a V-shaped profile as obtained in the experiments of (9). Parnaudeau et al. [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] used this characteristic to effectively parameterise their simulations. The instantaneous contours provide a qualitative outlook on the recirculation length based on shear-layer breakdown and vortex formation. However, in order to quantify the parameter accurately, the mean and rms streamwise velocity fluctuation components were plotted in the streamwise (x) direction along the centreline (see figure 8a and figure 8b). The recirculation length for each model is tabulated in table 2. The StSp and DSmag models capture the size of the recirculation region with 0% error, while the StSm model underestimates the length by 5.9% and the Smag model overestimates it by 15.9%. The magnitude at the point of inflection is accurately captured by all the models (figure 8a).
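A minimal sketch of how the recirculation length can be extracted from the mean centreline profile is given below. It assumes a cylinder of unit diameter centred at x = 0, so that the base is at x = 0.5D, and locates the downstream zero crossing of the mean streamwise velocity by linear interpolation.

```python
import numpy as np

def recirculation_length(x, u_centreline, x_base=0.5):
    """Bubble length L_r: distance from the cylinder base to the point where the
    mean centreline streamwise velocity changes sign from negative to positive."""
    for i in range(len(x) - 1):
        if x[i] >= x_base and u_centreline[i] < 0.0 <= u_centreline[i + 1]:
            # linear interpolation of the zero crossing
            frac = -u_centreline[i] / (u_centreline[i + 1] - u_centreline[i])
            return x[i] + frac * (x[i + 1] - x[i]) - x_base
    return np.nan
```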
For the rms centreline statistics of figure 8b, due to the ambiguity between references, the DNS is chosen for comparison purposes. However, the similar-magnitude, dual-peak nature of the profile can be established through both references. This dual-peak nature was also observed in the experiments of (50), who concluded that, within experimental accuracy, the secondary peak was the slightly larger rms peak, as seen for the DNS. The presence of the secondary peak is attributed to the cross-over of mode B for LES -Parn in figure 1b, despite the simulation being within the transition regime.
The fluctuating centreline velocity profiles for the deterministic Smagorinsky models display an inflection point, unlike the references. The MULU display a hint of the correct dual-peak nature while under-predicting the magnitude, matching the PIV's large-scale magnitude rather than the DNS. Although the Smag model has a second-peak magnitude closer to the DNS, the position of this peak is shifted farther downstream. This, combined with the inability of the model to capture the dual-peak nature, speaks strongly against the validity of the Smag model statistics. Further analysis can be done by plotting 2D isocontours of the streamwise fluctuating velocity behind the cylinder, as shown in figure 9. The isocontours are averaged in time and along the spanwise direction. The profiles show a clear distinction between the classical models and the MULU in the vortex bubbles just behind the recirculation region. The vortex bubbles refer to the region in the wake where the initial fold-up of the vortices starts to occur from the shear layers. The MULU match better with the DNS isocontours within this bubble as compared to the Smag or DSmag models. Along the centreline, the MULU under-predict the magnitude, as depicted by the lower-magnitude dual peaks in figure 8b. As we deviate from the centreline, the match between the MULU and the DNS improves considerably. The mismatch of the isocontours in the vortex bubbles for the Smag and DSmag models with the DNS suggests that a higher magnitude for the centreline profile is not indicative of an accurate model. The dual-peak nature of the streamwise velocity rms statistics shows a strong dependence on the numerical model and parameters. This can be better understood via the constant definition within the StSp model formulation (refer to [START_REF] Harouna | Stochastic representation of the Reynolds transport theorem: revisiting large-scale modeling[END_REF]). The constant requires the definition of the scale ( res ) of the simulation, which is similar to ∆ used in the classic Smagorinsky model, i.e. it defines the resolved length scale of the simulation. In the case of a stretched mesh, the definition of res can be tricky due to the lack of a fixed mesh size. It can be defined as a maximum (max(dx, dy, dz)), a minimum (min(dx, dy, dz)) or an average ((dx dy dz) 1/3 ). A larger value of this parameter signifies a coarser mesh (i.e. a rough resolution) while a small value indicates a finer cut-off scale or a finer mesh resolution. When res is large, corresponding to a "PIV resolution", the centreline streamwise rms statistics display a dual-peak nature with a larger initial peak, similar to the PIV reference. A smaller value of res , corresponding to a "higher resolution LES", shifts this into a small initial peak and a larger second peak, similar to the DNS and of higher magnitude. The statistics shown above have been obtained with res defined as max(dx, dy, dz) to emulate the coarseness of the model. Figure 10a and figure 10b show the power spectra of the streamwise and lateral velocity fluctuations calculated over time using probes at x/D = 3 behind the cylinder along the full spanwise domain. For the model power spectra, 135,000 time steps were considered, corresponding to a non-dimensional time of 405, which encapsulates ∼ 81 vortex shedding cycles. A Hanning window is used to calculate the power spectra with an overlap of 50%, considering 30 vortex shedding cycles in each segment.
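The spectral estimation described above can be sketched with SciPy's Welch estimator as follows. The Strouhal-based shedding frequency (St ≈ 0.21) used to size the segments is an assumption made for illustration and is not taken from the text.

```python
import numpy as np
from scipy.signal import welch

def probe_spectrum(signal, dt, cycles_per_segment=30, shedding_freq=0.21):
    """Power spectrum of a probe signal using Hann windows with 50% overlap.

    The segment length is chosen to hold roughly `cycles_per_segment` shedding cycles.
    """
    fs = 1.0 / dt
    nperseg = int(cycles_per_segment / shedding_freq / dt)
    f, pxx = welch(signal, fs=fs, window="hann", nperseg=nperseg,
                   noverlap=nperseg // 2, detrend="constant")
    return f / shedding_freq, pxx        # frequency normalised by f_s
```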
The reference energy spectra (namely HWA) have been obtained from [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF], while the DNS energy spectra have been calculated in the same manner as for the cLES. The process of spectrum calculation for the references and the models is identical. All values have been non-dimensionalised.
The fundamental harmonic frequency (f /f s = 1) and the second harmonic frequency are captured accurately by all models in the v-spectra. Let us recall that the cLES mesh is coarser than the PIV grid. Twice the vortex shedding frequency is captured by the peak in the u-spectra at f /f s ∼ 2, as expected -twice the Strouhal frequency is observed due to the symmetry condition at the centreline [START_REF] Ma | Dynamics and lowdimensionality of a turbulent near wake[END_REF]. The HWA measurement has an erroneous peak at f /f s ∼ 1, which was attributed to calibration issues and the cosine law by [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF]. All models match both reference spectra. One can observe a clear inertial subrange for all models, in line with the expected -5/3 slope. In order of increasing energy at the small scales, the models rank as DSmag < Smag = StSp < StSm. For the StSm model, an accumulation of energy is observed at the smaller scales in the u-spectra, unlike for the StSp model. This suggests that the small-scale fluctuations seen in the vorticity or velocity contours for the StSp model (i.e. in figure 7) are physical structures and not a numerical accumulation of energy at the smaller scales known to occur in LES.
The statistical comparisons show the accuracy and applicability of the MULU. The next sub-section focuses on the physical characterisation of the MULU -SGS dissipation, velocity bias and their contributions are studied in detail.
Velocity bias characterisation
The MULU function through the small-scale velocity auto-correlation a. The effect of this parameter on the simulation is threefold: firstly, it contributes a velocity bias/correction, which is a unique feature of the MULU. Secondly, this velocity correction plays a vital part in the pressure calculation to maintain incompressibility. Finally, it contributes to the SGS dissipation similarly to classical LES models -this signifies the dissipation occurring at the small scales. This threefold feature of the MULU is explored in this section.
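A rough sketch of how the auto-correlation tensor a and the associated velocity bias u* = (1/2) ∇ · a can be evaluated from the resolved field is given below. It mimics the StSp idea of a local-neighbourhood variance (here a 7 × 7 × 7 box filter) but omits the model constant and time-scale factors, so it should be read as an illustration rather than the model's exact definition.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_tensor(u, v, w, size=7):
    """Local empirical variance tensor a_ij of the resolved velocity (StSp-like)."""
    vel = [u, v, w]
    mean = [uniform_filter(c, size=size) for c in vel]
    a = np.empty((3, 3) + u.shape)
    for i in range(3):
        for j in range(3):
            # local covariance: E[u_i u_j] - E[u_i] E[u_j] over the neighbourhood
            a[i, j] = uniform_filter(vel[i] * vel[j], size=size) - mean[i] * mean[j]
    return a

def velocity_bias(a, dx, dy, dz):
    """u*_i = 0.5 * sum_j d(a_ij)/dx_j, the drift correction induced by the noise."""
    spacing = (dx, dy, dz)
    ustar = np.zeros(a.shape[1:])
    for i in range(3):
        for j in range(3):
            ustar[i] += 0.5 * np.gradient(a[i, j], spacing[j], axis=j)
    return ustar
```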
The contribution of the velocity bias can be characterised by simulating the MULU (StSm and StSp) with and without the velocity bias (denoted by Nad, for no advection bias) and comparing the statistics (see figures 11a-11b). Only the centre-line statistics are shown for this purpose, as they display the maximum statistical variation among the models and provide an appropriate medium for comparison. For the simulations without the velocity bias, the convective part of the NS equations remains purely a function of the large-scale velocity. In addition, the weak incompressibility constraint (5) is not enforced in the simulations with no velocity bias, and the pressure is computed only on the basis of the large-scale velocity. Similar to the Smagorinsky model, where the gradients of the trace of the stress tensor are considered subsumed within the pressure term, the divergence of the velocity bias is considered subsumed within the pressure term. The simulation parameters and flow configuration remain identical to the cLES configuration.
The statistics show improvement in statistical correlation when the velocity bias is included in the simulation -all statistical profiles show improvement with inclusion of velocity bias but only the centre-line statistics have been shown to avoid redundancy. In the mean profile, the inclusion of velocity bias appears to correct the statistics for both models to match better with the reference. For the StSm model, there is a right shift in the statistics while the opposite is seen for the StSp model. The correction for the StSp model appears stronger than that for the StSm model. This is further supported by the fluctuation profile where without the velocity bias, the StSp model tends to the Smag model with an inflection point while the inclusion of a connection between the large-scale velocity advection and the small-scale variance results in the correct dual peak nature of the references. For the StSm model, figure 11b suggests a reduction in statistical correlation with the inclusion of the velocity bias -this is studied further through 2D isocontours.
Figure 12 plots 2D isocontours of the streamwise fluctuating velocity for the MULU. Once again an averaging is performed in time and along the spanwise direction. A clear distinction between the models with and without velocity bias is again difficult to observe. However, on closer inspection, within the vortex bubbles, we can see that including the velocity bias improves the agreement with the DNS by reducing the bubble size for the StSm model and increasing it for the StSp model. The higher magnitude prediction along the centreline seen for the StSm -Nad model could be the result of an overall bias of the statistics and not of an improvement in model performance -the presence of an inflection point in the profile further confirms the model inaccuracy. This error is corrected in the model with velocity bias. This corrective nature of the bias is analysed further below.
For the StSp model, the simulation without the bias has a larger recirculation zone, i.e. it is "over-dissipative", and this is corrected by the bias. This result supports the findings of (55), whose structure-function model, when employed in physical space, applies a similar statistical averaging procedure of square-velocity differences in a local neighbourhood. They found their model to be over-dissipative in free-shear flows; it did not work for wall flows, as too much dissipation suppressed the development of turbulence, and it had to be turned off in regions of low three-dimensionality. To achieve that, [START_REF] Ducros | Large-eddy simulation of transition to turbulence in a boundary layer developing spatially over a flat plate[END_REF] proposed the filtered-structure-function model, which removes the large-scale fluctuations before computing the statistical average. They applied this model with success to the large-eddy simulation and analysis of transition to turbulence in a boundary layer. For the StSp model, which also displays this over-dissipative quality (without velocity bias), the correction appears to be done implicitly by the velocity bias. Such a velocity correction is consistent with the recent findings of (19), who provided physical interpretations of the local corrective advection due to the turbulence inhomogeneity in the pivotal region of the near-wake shear layers where transition to turbulence takes place. The recirculation length for all cases is tabulated in table 3. Data are obtained from the centre-line velocity statistics shown in figure 11a. The tabulated values further exemplify the corrective nature of the velocity bias: an improved estimation of the recirculation length is obtained with the inclusion of the velocity bias. Also, a marginal improvement in statistical match, similar to figure 11a, is observed with the inclusion of the velocity bias for all lateral profiles (not shown here). It can be concluded that the inclusion of the velocity bias provides, in general, an improvement to the model. The physical characteristics of the velocity bias (expressed henceforth as u* = (1/2) ∇ • a) are explored further. The bias u*, having the same units as velocity, can be seen as an extension of or a correction to the velocity. Extending this analogy, the divergence of u* is similar to "the divergence of a velocity field". This is the case in the MULU where, to ensure incompressibility, the divergence-free constraints (eq. (5)) are necessary. The stability and statistical accuracy of the simulations were improved with a pressure field calculated using the modified velocity u, i.e. when the weak incompressibility constraint was enforced on the flow. This pressure field can be visualised as a true pressure field, unlike in the Smagorinsky model where the gradients of the trace of the stress tensor are absorbed in an effective pressure field.
Stretching the u* and velocity analogy, we can also interpret the curl of u* (∇ × u*) as a vorticity, or more specifically as a vorticity bias. The curl of u* plays a role in the wake of the flow, where it can be seen as a correction to the vorticity field. The divergence and curl of u* are features solely of the MULU, and their characterisation defines the functioning of these models. Figure 13 depicts the mean isocontour of ∇ • (u*) = 0.02 for the two MULU. This divergence function is included in the Poisson equation for the pressure calculation in order to enforce the weak incompressibility constraint. In the StSm model the contribution is strictly limited to within the shear layer, while in the StSp model the spatial influence extends far into the downstream wake. The stark difference in spatial range could be due to the lack of directional dissipation in the StSm model, which is modelled on the classic Smagorinsky model. This modelling results in a constant diagonal auto-correlation matrix, with the trace elements simplifying to a Laplacian of a (∆a) for ∇ • (u*). This formulation contains a no-cross-correlation assumption (zero off-diagonal elements in the auto-correlation matrix) and ignores the directional dissipation contribution (constant diagonal terms provide equal SGS dissipation in all three principal directions). These Smagorinsky-like assumptions place a restriction on the form and magnitude of u* which are absent in the StSp model. The existence of cross-correlation terms in a for the StSp model results in a better defined and spatially well-extended structure for the divergence.
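The divergence and curl of u* discussed here can be computed with simple finite differences. The sketch below assumes a uniform grid and centred differences via numpy.gradient, whereas the solver itself uses high-order compact schemes.

```python
import numpy as np

def div_and_curl(ustar, dx, dy, dz):
    """Divergence and curl of the velocity bias field u* (shape (3, nx, ny, nz))."""
    ux, uy, uz = ustar
    div = (np.gradient(ux, dx, axis=0)
           + np.gradient(uy, dy, axis=1)
           + np.gradient(uz, dz, axis=2))
    curl = np.array([
        np.gradient(uz, dy, axis=1) - np.gradient(uy, dz, axis=2),
        np.gradient(ux, dz, axis=2) - np.gradient(uz, dx, axis=0),
        np.gradient(uy, dx, axis=0) - np.gradient(ux, dy, axis=1),
    ])
    return div, curl
```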
The importance of the cross-correlation terms is further evidenced in the mean curl isocontour of u* (see figure 14), where once again a spatial limitation is observed for the StSm model. However, the more interesting observation is the presence of spanwise periodicity in the curl of u* for the StSp model. The curl parameter is analogous to vorticity and is coherent with the birth of the streamwise vortices seen in figure 7: a spanwise periodicity is observed with a wavelength λ ∼ 0.8. Figure 15 superimposes this isocontour on the mean vorticity isocontour of the DNS. While a clear periodicity is not observed for the mean vorticity, alternate peaks and troughs can be seen which match the peaks in the mean curl isocontour. The wavelength of this periodicity is comparable with the spanwise wavelength of approximately 1D of the mode B instabilities observed by [START_REF] Williamson | Vortex dynamics in the cylinder wake[END_REF] at Re ∼ 270. The footprint of mode B instabilities is linked to secondary instabilities leading to streamwise vortices observed for Re ranging from 270 to ∼ 21000 [START_REF] Bays-Muchmore | On streamwise vortices in turbulent wakes of cylinders[END_REF]. These results demonstrate the ability of the spatial variance model to capture the essence of the auto-correlation tensor.
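The wavelength λ ∼ 0.8 quoted above can be checked by Fourier analysis of a signal extracted along the spanwise direction. The sketch below is a minimal illustration: it presumes a periodic spanwise direction of extent l z = πD and simply reports the wavelength of the most energetic non-zero mode.

```python
import numpy as np

def dominant_spanwise_wavelength(field_z, lz):
    """Dominant spanwise wavelength of a signal sampled along z (e.g. the curl of u*
    extracted along a line in the near wake); lz is the spanwise extent."""
    spectrum = np.abs(np.fft.rfft(field_z - field_z.mean())) ** 2
    k = np.arange(spectrum.size)               # integer wavenumbers over the domain
    k_peak = k[1:][np.argmax(spectrum[1:])]    # skip the mean mode
    return lz / k_peak                         # wavelength in units of D
```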
The regions of the flow affected by the auto-correlation term can be characterised by plotting the contours of the SGS dissipation density ((∇u)a(∇u) T ) of the MULU, averaged in time and along the spanwise direction. These have been compared with the dissipation densities of the Smag and DSmag models ((∇u)ν t (∇u) T ) (see figure 17). A 'reference' dissipation density has been obtained by filtering the DNS dissipation density to the cLES resolution (see figure 16e) and plotting the difference. The StSp model density matches best with the DNS compared with all other models -a larger spatial extent and a better magnitude match of the dissipation density are observed. The high dissipation density observed just behind the recirculation zone is captured only by the StSp model, while all Smagorinsky-type models under-predict the density in this region. The longer recirculation zone of the Smag model can be observed in the density contours. A few important points need to be addressed here. Firstly, the Smag model is known to be over-dissipative; however, in the density contours, a lower magnitude is observed for this model. This is a case of cause and effect: the over-dissipative nature of the Smag model smooths the velocity field, thus reducing the velocity gradients, which in turn reduces the value of the dissipation density. Secondly, in the statistical comparison only a marginal difference is observed, especially between the DSmag and StSp models, while in the dissipation density contours we observe considerable difference. This is because the statistical profiles result from contributions of both the resolved scales and the sub-grid scales. The dissipation density contours of figure 17 represent only the contribution of the sub-grid scales, i.e. the scales of turbulence characterised by the model. Thus, larger differences are observed in this case due to the focus on the scales of model activity. Finally, we observe in figure 9 that within the vortex bubbles behind the cylinder the MULU perform better than the Smag or DSmag models. For the StSp model, this improvement is associated with the higher magnitude seen within this region in the SGS dissipation density. For the StSm model, no such direct relation can be made with the SGS dissipation density. However, when we look at the resolved-scale dissipation ((∇u)ν(∇u) T ) for the models (see figure 16), a higher density is observed in the vortex bubbles for this model. For the classical models, high dissipation is observed mainly in the shear layers. As the kinematic viscosity (ν) is the same for all models, the density maps are indicative of the smoothness of the velocity gradients. For the classical models we see a highly smoothed field, while for the MULU we see a higher density in the wake. This difference could induce the isocontour mismatch seen in figure 9. These results are consistent with the findings of (19), who applied the MULU in the context of a reduced-order model and observed that the MULU play a significant role in the very near wake, where important physical mechanisms take place. For the MULU, the SGS contributions can be split into the velocity bias (u ∇ T (-(1/2) ∇ • a)) and the dissipation ((1/2) Σ_ij ∂_xi(a_ij ∂_xj u)). However, it is important to note that, while the computational cost of the Smagorinsky models stays fixed despite changes in model parameters, the cost of the StSp model depends strictly on the size of the local neighbourhood used. A smaller neighbourhood reduces the simulation cost but could lead to a loss of accuracy, and vice versa for a larger neighbourhood. The definition of an optimal local neighbourhood is one promising avenue of future research. The StSm model, which also provides a comparable improvement on the classic Smag model, can be run at 24% of the cost of the DSmag model. Thus, the MULU provide a low-cost (roughly two-thirds reduction) alternative to the dynamic Smagorinsky model while improving the level of statistical accuracy.
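A sketch of how the SGS dissipation density can be evaluated from the resolved velocity and the variance tensor is given below. The contraction used here (summing over all three indices) is one consistent reading of (∇u)a(∇u) T ; the paper's plotting convention may differ, and the gradients are taken with simple centred differences rather than the solver's compact schemes.

```python
import numpy as np

def sgs_dissipation_density(u, a, dx, dy, dz):
    """SGS dissipation density (grad u) a (grad u)^T for a MULU-type variance tensor a.

    u has shape (3, nx, ny, nz) and a has shape (3, 3, nx, ny, nz); for an eddy-viscosity
    model, pass a[i, j] = 2 * nu_t * delta_ij to recover the classical expression.
    """
    spacing = (dx, dy, dz)
    grad = np.array([[np.gradient(u[i], spacing[j], axis=j) for j in range(3)]
                     for i in range(3)])               # grad[i, j] = du_i/dx_j
    density = np.zeros(u.shape[1:])
    for i in range(3):
        for j in range(3):
            for k in range(3):
                density += grad[i, j] * a[j, k] * grad[i, k]
    return density
```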
Conclusion
In this study, cylinder wake flow in the transitional regime was simulated in a coarse-mesh construct using the formulation under location uncertainty. The simulations were performed on a mesh 54 times coarser than the DNS study. The simulation resolution is of a size comparable with the PIV resolution -this presents a useful tool for performing DA, where a disparity between the two resolutions can lead to difficulties.
This study focused on the MULU, whose formulation introduces a velocity bias term in addition to the SGS dissipation term. These models were compared with the classic and dynamic Smagorinsky models. The MULU were shown to perform well on a coarsened mesh -the statistical accuracy of the spatial-variance-based model was, in general, better than that of the other compared models. The spatial-variance-based model and the DSmag model both captured accurately the volatile recirculation length. The 2D streamwise velocity isocontours of the MULU matched better with the DNS reference than those of the Smagorinsky models. Additionally, the physical characterisation of the MULU showed that the velocity bias improved the statistics -considerably in the case of the StSp model. The analogy of the velocity bias with velocity was explored further through divergence and curl functions. The spanwise periodicity observed at low Re in the literature was observed at this higher Re with the StSp model through the curl of u* (analogous with vorticity) and, albeit noisily, through the mean vorticity. The SGS contribution was compared with the Smagorinsky models, and the split between the velocity bias and the dissipation was also delineated through isocontours.
The authors show that the performance of the MULU under a coarse-mesh construct could provide the computational cost reduction needed for performing LES of higher Re flows. The higher cost of the StSp model compared with Smagorinsky is compensated by the improvement in accuracy obtained at coarse resolution. In addition, the StSp model performs marginally better than the currently established DSmag model at just 37% of the cost of the DSmag model. This cost reduction could pave the way for different avenues of research such as sensitivity analyses, high Reynolds number flows, etc. Of particular interest is the possible expansion of Data Assimilation studies from the currently existing 2D assimilations [START_REF] Gronskis | Inflow and initial conditions for direct numerical simulation based on adjoint data assimilation[END_REF] or low Re 3D assimilations [START_REF] Robinson | Image assimilation techniques for Large Eddy Scale models : Application to 3d reconstruction[END_REF] to a more informative 3D assimilation at realistic Re making use of advanced experimental techniques such as tomo-PIV. Also and more importantly, the simplistic definition of the MULU facilitates an
Figure 1: Mean streamwise velocity (a) and fluctuating streamwise velocity (b) in the streamwise direction along the centreline (y = 0) behind the cylinder for the reference data-sets. Legend: HWA -hot wire anemometry (7), K&M -B-spline simulations (case II) of Kravchenko and Moin (4), L&S -experiment of Lourenco and Shih (9), N -experiment of Norberg at Re = 3000 and 5000 [START_REF] Norberg | LDV measurements in the near wake of a circular cylinder[END_REF], O&W -experiment of Ong and Wallace [START_REF] Ong | The velocity field of the turbulent very near wake of a circular cylinder[END_REF]
Figure 2: Mean streamwise velocity at 1.06D (top), 1.54D (middle), and 2.02D (bottom) in the wake of the circular cylinder.
Figure 3: Mean lateral velocity at 1.06D, 1.54D, and 2.02D in the wake of the circular cylinder.
Figure 4: Streamwise rms velocity (u u ) fluctuations at 1.06D, 1.54D, and 2.02D in the wake of the circular cylinder.
Figure 5: Lateral rms velocity (v v ) fluctuations at 1.06D, 1.54D, and 2.02D in the wake of the circular cylinder.
Figure 6: Rms velocity fluctuations cross-component (u v ) at 1.06D, 1.54D, and 2.02D in the wake of the circular cylinder.
Figure 7: 3D instantaneous vorticity iso-surface at Ω = 7.
Figure 8: Mean (a) and fluctuating (b) streamwise velocity profile in the streamwise direction along the centreline behind the cylinder.
Figure 9: 2D isocontours of time-averaged fluctuating streamwise velocity (u u ).
Figure 10: Power spectra of streamwise (a) and lateral (b) velocity component at x/D = 3 behind the cylinder.
Figure 11: Effect of velocity bias on centre-line mean (a) and fluctuating (b) streamwise velocity behind the cylinder.
Figure 12: Effect of velocity bias on the 2D isocontour of time-averaged fluctuating streamwise velocity (u u ).
Figure 15: 3D isocontour superposition of the mean curl of u* (blue) for the StSp model at ∇ × (u*) = 0.05 with the mean vorticity (yellow) for the DNS at Ω = 3. (a) Full scale view of the isocontour superposition with the outlined zoom area; (b) zoomed-in view of the DNS mean vorticity isocontour; (c) zoomed-in view of the isocontour superposition.
Figure 16: Sub-grid scale dissipation density in the wake of the cylinder. o stands for the original DNS dissipation and f stands for filtered (to cLES resolution) DNS dissipation.
Figure 18 shows the contribution of the two via 3D isocontours (dissipation in yellow and velocity bias in red). The contribution of the velocity bias is limited for the StSm model, as expected, while in the StSp model it plays a larger role. The velocity bias in the StSp model is dominant in the near wake of the flow, especially in and around the recirculation zone. It is important to outline that this is the feature of the StSp model -the model captures the statistics accurately at only 0.37 times the cost of performing the DSmag model.
Figure 17: Resolved scale dissipation density in the wake of the cylinder for each model.
Figure 18: 3D SGS contribution iso-surface along the primary flow direction (x), with the dissipation iso-surface in yellow (at 0.002) and the velocity bias in red (at 0.001).
Table 1: Flow parameters.
Re nx × ny × nz lx/D × ly/D × lz/D ∆x/D ∆y/D ∆z/D U∆t/D
cLES 3900 241×241×48 20×20×π 0.083 0.024-0.289 0.065 0.003
DNS 3900 1537×1025×96 20×20×π 0.013 0.0056-0.068 0.033 0.00075
PIV -Parn 3900 160×128×1 3.6×2.9×0.083 0.023 0.023 0.083 0.01
LES -Parn 3900 961×961×48 20×20×π 0.021 0.021 0.065 0.003
Table 2: Recirculation lengths for cLES.
Model PIV -Parn DNS Smag DSmag StSm StSp
L r /D 1.51 1.50 1.75 1.50 1.42 1.50
Table 3: Recirculation lengths with and without velocity bias.
Model PIV -Parn DNS StSm -Nad StSm StSp -Nad StSp
L r /D 1.51 1.50 1.42 1.42 1.58 1.50
LES -SGS Models
Smagorinsky Variants :
Classic : ν_t = (C_s ∆)² |S|, (1)
Dynamic : C_s² = -(1/2) (L_ij M_ij) / (M_kl M_kl), (2)
where,
L_ij = T_ij - τ̂_ij, (3)
M_ij = ∆̂² |Ŝ| Ŝ_ij - (∆² |S| S_ij)^, (4) where ˆ denotes the test filter.
WALE :
ν_t = (C_w ∆)² (ς^d_ij ς^d_ij)^(3/2) / ( (S_ij S_ij)^(5/2) + (ς^d_ij ς^d_ij)^(5/4) ) (5)
MULU
NS formulation as derived in [START_REF] Memin | Fluid flow dynamics under location uncertainty[END_REF] :
Mass conservation :
d_t ρ_t + ∇·(ρ w*) dt + ∇ρ · σ dB_t = (1/2) ∇·(a ∇q) dt, (7)
w* = w - (1/2) ∇·a (8)
For an incompressible fluid :
∇·(σ dB_t) = 0, ∇·w* = 0, (9)
Momentum conservation :
ρ ( ∂_t w + w ∇^T (w - (1/2) ∇·a) - (1/2) Σ_ij ∂_x_i (a_ij ∂_x_j w) ) = ρg - ∇p + µ∆w. (10)
Modelling of a :
Stochastic Smagorinsky model (StSm) :
a(x, t) = C ||S||I 3 , (11)
Local variance based models (StSp / StTe) :
a(x, nδt) = 1/(|Γ|-1) Σ_{x_i ∈ η(x)} (w(x_i, nδt) - w(x, nδt)) (w(x_i, nδt) - w(x, nδt))^T · C_st, (12)
VDA -General Formulation
Objective : Estimate the unknown true state of interest x t (t, x)
Formulation :
∂ t x(t, x) + M(x(t, x)) = q(t, x), (13)
x(t_0, x) = x_0^b + η(x), (14)
Y(t, x) = H(x(t, x)) + ε(t, x), (15)
q(t, x) -model error (covariance matrix Q); η(x) -background error (covariance matrix B); ε(t, x) -observation error (covariance matrix R)
VDA -General Formulation
Cost Function :
J(x_0) = (1/2) (x_0 - x_0^b)^T B^(-1) (x_0 - x_0^b) + (1/2) ∫_{t_0}^{t_f} (H(x_t) - Y(t))^T R^(-1) (H(x_t) - Y(t)) dt.
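A minimal sketch of how this cost function can be evaluated is given below. It assumes generic forward-model and observation-operator callables and dense covariance inverses; the gradient used for the minimisation is obtained from the adjoint model, as in equation (17) below.

```python
import numpy as np

def cost_4dvar(x0, xb, B_inv, obs, times, forward_model, obs_operator, R_inv):
    """Strong-constraint 4D-Var cost J(x0): background term plus the misfit to the
    observations Y(t) accumulated along the model trajectory."""
    dx = x0 - xb
    J = 0.5 * dx @ B_inv @ dx
    x = x0.copy()
    for t, y in zip(times, obs):
        x = forward_model(x, t)              # advance the state to observation time t
        innov = obs_operator(x) - y          # H(x_t) - Y(t)
        J += 0.5 * innov @ R_inv @ innov
    return J
```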
4DVar -Adjoint Method
Evolution of the state of interest (x 0 ) as a function of time
∂J/∂η = -λ(t_0) + B^(-1) (∂x(t_0) - ∂x_0). (17)
Le Dimet, Francois-Xavier, and Olivier Talagrand. "Variational algorithms for analysis and assimilation of meteorological observations : theoretical aspects." Tellus A : Dynamic Meteorology and Oceanography 38.2 (1986) : 97-110.
Additional Control
Minimisation of J with respect to additional incremental control parameter δu :
δγ^(i) = {δx_0^(i), δu^(i)},
J(δγ^(i)) = (1/2) ||δx_0^(i) + x_0^(i) - x_0^(b)||²_{B^(-1)} + (1/2) ∫_{t_0}^{t_f} ||δu^(i) + u^(i) - u^(b)||²_{B_c} + ... (18)
constrained by :
∂_t δx^(i) + ∂_x M(x^(i)) · δx^(i) + ∂_u M(u^(i)) · δu^(i) = 0 (19)
a(x, t) = C ||S||I 3 , (20)
Local variance based models (StSp / StTe) :
a(x, nδt) = 1/(|Γ|-1) Σ_{x_i ∈ η(x)} (w(x_i, nδt) - w(x, nδt)) (w(x_i, nδt) - w(x, nδt))^T · C_sp,
Velocity fluctuation profiles for turbulent channel flow at Reτ = 395
Plan : 1. Large Eddy Simulation (SGS Models, LES -Results); 2. Data Assimilation (Introduction to Data Assimilation, Code Formulation, VDA with LES -Results).
Data Assimilation -Types. Sequential approach : Cuzol and Mémin (2009), Colburn et al. (2011), Kato and Obayashi (2013), Combes et al. (2015). Variational approach (VDA) : Papadakis and Mémin (2009), Suzuki et al. (2009), Heitz et al. (2010), Lemke and Sesterhenne (2013), Gronskis et al. (2013), Dovetta et al. (2014). Hybrid approach : Yang (2014).
4DVar -Incremental Optimisation : evolution of the cost function (J(x 0 )) as a function of time. Courtier, P., J. N. Thépaut, and Anthony Hollingsworth. "A strategy for operational implementation of 4D-Var, using an incremental approach." Quarterly Journal of the Royal Meteorological Society 120.519 (1994) : 1367-1387.
4DVar -Flow Chart : 4DVar incremental Data Assimilation using the Adjoint methodology.
Synthetic Assimilation at Re 3900
Optimisation Parameter -U(x, y, z, t 0 ); Control Parameters -U in (1, y, z, t), Cst (x, y, z)
A Chenel
C J F Kahn
K Bruyère
T Bège
K Chaumoitre
C Masson
Morphotypes and typical locations of the liver and relationship with anthropometry
Introduction
The liver is one of the most injured organs in road or domestic accidents. To protect or repair it, companies and clinicians rely more and more on numerical finite element models. The liver is highly complex owing to its structure, its dual blood supply and its environment. It is located under the diaphragm, partially covered by the thoracic cage, and it presents a high variability of shape. In order to improve finite element modelling of the liver, an anatomical customization must be done. Three aspects of liver anatomy have been studied. First, some authors focused on the external shape of the liver [1][2][3]. Caix and Cubertafond [1] found two morphotypes according to the subject's morphology. Nagato et al. [2] divided the liver into six morphotypes depending on the costal and diaphragmatic impressions, or the development of one lobe in relation to the other. Studer et al. [3] identified two morphotypes from the ratio of two geometrical characteristics of the liver. Secondly, some authors focused on the location of the liver in the thoracic cage [4][5]. The liver's location in different postures has been determined in vivo [4] and on cadaveric subjects [5]. Finally, the variability of the internal shape of the liver has been reported. Some authors have described different segments based on the hepatic vessels and particularly the hepatic veins [START_REF] Couinaud | Le foie : étude anatomiques et chirurgicales[END_REF][START_REF] Bismuth | [END_REF]. The purpose of our study is a global analysis of liver anatomy, quantifying at the same time its external shape, its internal vascular structure and its anatomical location, applied to livers reconstructed from 78 CT-scans, in order to identify liver morphotypes and typical locations in the thoracic cage. Moreover, we analyzed the ability of subject characteristics to predict these morphotypes and locations.
Materials and methods
Population -This study is based on 78 CT-scans from the Department of Medical Imaging and Interventional Radiology at Hôpital Nord in Marseille. These CT-scans were performed on patients between 17 and 95 years old, with no liver disease nor morphological abnormalities of the abdominal organs or the peritoneum.
Measurement of geometric and anthropometric parameters -
The 3D reconstructions of the liver, the associated veins and the thoracic cage were performed manually. A database was created with 53 geometrical characteristics per liver, qualifying its external geometry [START_REF] Serre | Digital Human modeling for Design and Engineering Conference[END_REF], its internal geometry, the diameters and angles of the veins and their first two bifurcations, and its location in the thoracic cage. Furthermore, anthropometric measurements were taken, such as the xiphoïd angle and the abdominal and thoracic perimeters. Lastly, data such as the subject's age and gender were known.
Statistical analysis -To homogenize the data, a transformation to the logarithm, the logarithm of the cubic root, or the log-shape ratio of Mosimann [START_REF] Mosimann | of the American stat[END_REF] was used. To reduce the number of variables, principal component analysis (PCA) was performed on the parameters characterizing the external geometry, the internal geometry, the veins geometry and the location of the liver. The first two dimensions were kept and two new variables were created by linear combination. An ascending hierarchical classification was then used to determine the number of categories. Then, the partitioning around medoids method was chosen to classify the different individuals into categories. Lastly, ANOVAs followed by post-hoc Tukey (HSD) tests were performed to verify the existence of a relationship between the subject's anthropometry and the liver's morphotypes.
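A sketch of this processing chain is given below. It uses a plain log transform, scikit-learn's PCA and SciPy's Ward-linkage hierarchical classification as stand-ins; the study's actual chain (Mosimann log-shape ratios and partitioning around medoids for the final assignment) is only approximated here.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

def classify_livers(features, n_groups=4):
    """Sketch: log transform, PCA to two dimensions, then an ascending hierarchical
    classification cut into `n_groups` morphotypes (Ward linkage as a stand-in for PAM)."""
    X = np.log(features)                               # assumes strictly positive measurements
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    scores = PCA(n_components=2).fit_transform(X)      # keep the first two dimensions
    Z = linkage(scores, method="ward")
    return scores, fcluster(Z, t=n_groups, criterion="maxclust")
```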
Results and discussion
Four morphotypes were found and described by Fig. 1.
The first morphotype corresponds to a liver with a very small volume which presents as a small volume of the right lobe particularly segments 4 to 7. The associated veins globally have small diameters. A small angle between the two lobes can be noted. This kind of morphotype is noticed for subjects with a xiphoïd angle under 80° and a thoracic perimeter under 75 cm.
The second morphotype corresponds to a liver with a small volume which manifests as a small volume of the right lobe, and particularly segments 5 to 7. The associated veins globally have small diameters. A large angle between the two lobes can be noted. This kind of morphotype is noticed for subjects with a xiphoïd angle under 80° and a thoracic perimeter under 75 cm.
The third morphotype corresponds to a liver with a very large volume which presents as a very large volume of the right lobe, and particularly segments 4 to 6. The associated veins have large diameters. A large angle between the two lobes can be noted. This kind of morphotype is noticed for subjects with a xiphoïd angle over 80° and a thoracic perimeter over 75 cm.
The fourth morphotype corresponds to a liver with a large volume which manifests as a large volume of the right lobe, and particularly segments 4, 5 and 7. The associated veins have large diameters. A small angle between the two lobes can be noted. This kind of morphotype is noticed for subjects with a xiphoïd angle under 80°, a thoracic perimeter around 75 cm.
No statistical difference can be noted for the position of the liver in the thoracic cage. Only the position of one lobe to the other seems to vary.
Although the volume of the segments varies from one morphotype to another, the proportion of these segments, especially the fifth, seems stable, and the volumes of the segments are correlated with the hepatic volume (R 2 = 0.42 for segment 5). Moreover, the diameters of the veins seem correlated with the volume (R 2 = 0.33 for the portal vein, but only 0.17 for the vena cava).
THE POWERS OF THE UNREAL: MYTHS AND IDEOLOGY IN THE USA
P. CARMIGNANI Université de Perpignan-Via Domitia
AN INTRODUCTION
As a starting-point, I'd like to quote the opinion of a French historian, Ph. Ariès, who stated in his Essais sur l'histoire de la mort en Occident (Paris, Le Seuil, 1975) that "pour la connaissance de la civilisation d'une époque, l'illusion même dans laquelle ont vécu les contemporains a valeur d'une vérité", which means that to get an idea of a society and its culture one needs a history and a para-history as well, para-history recording not what happened but what people, at different times, said or believed had happened. A famous novelist, W. Faulkner expressed the same conviction in a more literary way when he stated in Absalom Absalom that "there is a might have been which is more true than truth", an interesting acknowledgement of the power of myths and legends. This being said, I'd like now to say a few words about my basic orientations; the aim of this course is twofold :
-firstly, to introduce students to the technique of research in the field of American culture and society and give them a good grounding in the methodology of the classic academic exercise known as "analysis of historical texts and documents" ; -secondly, to analyze the emergence and workings of "l'imaginaire social" in the States through two of its most characteristic manifestations : Myths and Ideology. We'll see that every society generates collective representations (such as symbols, images etc.) and identification patterns gaining acceptance and permanence through such mediators or vehicles as social and political institutions (for instance, the educational system, the armed forces, religious denominations) and of course, the mass media (the Press, the radio, the cinema and, last but not least, television). They all combine their efforts to inculcate and perpetuate some sort of mass culture and ideology whose function is to hold the nation together and provide it with a convenient set of ready-made pretexts or rationalizations it often uses to justify various social or political choices.
DEFINITIONS OF KEY NOTIONS
A) Imagination vs. "the imaginary"
There is no exact English equivalent of the French word "l'imaginaire" or "l'imaginaire social"; however, the word "imaginary" does exist in English but chiefly as an epithet in the sense of "existing only in the imagination, not real" (Random House Dict.) and not as a substantive. It is sometimes found as a noun "the imaginary" as opposed to "the symbolic" in some works making reference to J. Lacan's well-known distinction between the three registers of "le réel, l'imaginaire et le symbolique", but its meaning has little to do with what we're interested in. For convenience sake, I'll coin the phrase "the imaginary" or "the social imaginary" on the model of "the collective unconscious" for instance to refer to our object of study. First of all, we must distinguish between "the imagination" and "the imaginary" though both are etymologically related to the word "image" and refer, according to G. Durand -the author of Les Structures anthropologiques de l'imaginaire -to "l'ensemble des images et des relations d'images qui constitue le capital pensé de l'homo sapiens", they do not share the same characteristics. IMAGINATION means "the power to form mental images of objects not perceived or not wholly perceived by the senses and also the power to form new ideas by a synthesis of separate elements of experience" (English Larousse). The IMAGINARY also implies the human capacity for seeing resemblances between objects but it also stresses the creative function of mind, its ability to organize images according to the subject's personality and psyche: as a local specialist, Pr. J. Thomas, stated: L'imaginaire est essentiellement un dynamisme, la façon dont nous organisons notre vision du monde, une tension entre notre conscience et le monde créant un lien entre le en-nous et le hors-nous [...] La fonction imaginaire apparaît donc comme voisine de la définition même du vivant, c'est-àdire organisation d'un système capable d'autogénération dans son adaptation à l'environnement, et dans le contrôle d'une tension rythmique (intégrant le temps) entre des polarisations opposées (vie/ mort, ordre/désordre, stable/dynamique, symétrie/dissymétrie, etc.) mais en même temps dans sa capacité imprévisible de création et de mutation [...] L'imaginaire assure ainsi une fonction générale d'équilibration anthropologique.
Thus, to sum up, if the imagination has a lot to do with the perception of analogies or resemblances between objects or notions, "the imaginary" is more concerned with binary oppositions and their possible resolution in a "tertium quid" i.e. something related in some way to two things but distinct from both.
B) Myth
Myth is a protean entity and none of the numerous definitions of myth is ever comprehensive enough to explain it away (cf. "Myth is a fragment of the soul-life, the dream-thinking of people, as the dream is the myth of the individual", Reuthven, 70). Etymologically, myth comes from the Greek "mythos". A mythos to the Greeks was primarily just a thing spoken, uttered by the mouth, a tale or a narrative, which stresses the verbality of myth and it essential relationship with the language within which it exists and signifies (parenthetically, it seems that many myths originate in some sort of word play cf. Oedipus = swollen foot. So bear in mind that the medium of myth is language: whatever myth conveys it does in and through language).
A myth also implies an allegoric and symbolic dimension (i.e. a latent meaning different from the manifest content) and it is a primordial "symbolic form" i.e. one of those things -like language itself -which we interpose between ourselves and the outside world in order to apprehend it.
It usually serves several purposes :
-to explain how something came into existence: it is "a prescientific and imaginative attempt to explain some phenomenon, real or supposed, which excites the curiosity of the mythmaker or observer" (K. R., 17)
-to provide a logical model capable of overcoming a contradiction (L. Strauss). In simpler terms, myths attempt to mediate between contradictions in human experience; they mediate a "coincidentia oppositorum" (cf. examples).
So to sum up, in the words of R. Barthes: "le mythe est un message qui procèderait de la prise de conscience de certaines oppositions et tendrait à leur médiation", in plain English, myth is a message originating in the awareness of certain oppositions, contradictions or polarities, and aiming at the mediation; myth is "a reconciler of opposites" or to quote G. Durand once more: "un discours dynamique résolvant en son dire l'indicible d'un dilemme" (Figures mythiques,306). Lastly, an essential feature of myth: it can be weakened but hardly annihilated by disbelief or historical evidence; myth is immune from any form of denial, whether experimental or historical (e.g. we still think of a rising and setting sun though we know it is a fallacy).
C) Ideology
The relationship between myth and ideology is obvious inasmuch as "toute idéologie est une mythologie conceptuelle dans laquelle les hommes se représentent sous une forme imaginaire leurs conditions d'existence réelles".
As far as language in general, and myth in particular, is a way of articulating experience, they both participate in ideology i.e. the sum of the ways in which people both live and represent to themselves their relationship to the conditions of their existence. Ideology is inscribed in signifying practices -in discourses, myths, presentations and representations of the way things are. Man is not only a social but also an "ideological animal". According to French philosopher L. Althusser, ideology is: un système (possédant sa logique et sa rigueur propres) de représentations (images, mythes, idées ou concepts selon les cas) doué d'une existence et d'un rôle historiques au sein d'une société donnée [...] Dans l'idéologie, qui est profondément inconsciente, même lorsqu'elle se présente sous une forme réfléchie, les hommes expriment, en effet, non pas leur rapport à leurs conditions d'existence, mais la façon dont ils vivent leur rapport à leurs conditions d'existence: ce qui suppose à la fois rapport réel et rapport vécu, imaginaire (Pour Marx,(238)(239)(240).
So between the individual and the real conditions of his existence are interposed certain interpretative structures, but ideology is not just a system of interpretation, it also assumes the function of a cementing force for society. According to Althusser, ideological practices are supported and reproduced in the institutions of our society which he calls "Ideological State Apparatuses" (ISA): their function is to guarantee consent to the existing mode of production. The central ISA in all Western societies is the educational system which prepares children to act consistently with the values of society by inculcating in them the dominant versions of appropriate behaviour as well as history, social studies and of course literature. Among the allies of the educational ISA are the family, the law, the media and the arts all helping to represent and reproduce the myths and beliefs necessary to enable people to live and work within the existing social formation. As witness its Latin motto "E Pluribus Unum" meaning "Out of many, one" or "One from many", America, like any nation in the making, was from the very beginning, confronted with a question of the utmost importance viz. how to foster national cohesion and achieve a unity of spirit and ideal. Before the Constitution there were thirteen separate, quasi independent States; in the words of D. Boorstin, "Independence had created not one nation but thirteen", which is paradoxical yet true since each former colony adopted a Constitution which in practice turned it into a sovereign state. However, the new States shared a common experience and set of values, and in the wake of Independence and throughout the XIX th century, the new country gradually developed a collective representation and a unifying force counterbalancing an obvious strain of individualism in the American character as well as holding in check certain centrifugal tendencies in the American experience; to quote just a few instances: the mobility of the population, its composite character, the slavery issue, sectional and regional differences, oppositions between the haves and have-nots are part of the disunifying forces that have threatened the concept as well as the reality of a single unmistakably American nationality and culture (the question of making a super identity out of all the identities imported by its constituent immigrants still besets America). For an examination and discussion of the genesis of the nation, the formation of the State, and the establishment of its model of recognized power, we'll have a look at the article by E. Marienstras "Nation, État, Idéologie".
However, even if the different people making up the USA have not coalesced into one dull homogeneous nation of look-alikes, talk-alikes and think-alikes, even if one can rightly maintain that there exist not one but fifty Americas (cf. the concept of "the American puzzle") there's no doubt that the USA succeeded in developing a national consciousness which is the spiritual counterpart of the political entity that came into being with the Declaration of Independence. The elaboration of a national identity was inseparable from the creation of a national ideology in the sense we have defined, i.e. a coherent system of beliefs, assumptions, principles, images, symbols and myths that has become an organic whole and part and parcel of national consciousness. Let me remind you, at this stage, that my use of the concept, derived from L. Althusser, assumes that ideology is both a real and an imaginary relation to the world, that its rôle is to suppress all contradictions in the interest of the existing social formation by providing (or appearing to provide) answers to questions which in reality it evades. I'd like to point out as well, for the sake of honesty and argument, that some historians and social scientists might question the truth of my assumption: some consider that in view of the vastness and diversity of the New World it is absurd to speak of an American ideology and would substitute for it the concept of ideologies, in the plural; others claim that we have entered a post-mythical age or maintain, like D. Bell, the author of a famous book The End of Ideology (1960), that ideology no longer plays any rôle in Western countries, an opinion to which the fall of Communism has given new credence (but ironically enough two years later, in 1962, Robert E. Lane published a book entitled Political Ideology: Why the American Common Man Believes What He Does?, which clearly shows that ideology is a moot point). Now, whatever such specialists may claim, there's no denying that the Americans take a number of assumptions for granted and, either individually or collectively, either consciously or unconsciously, often resort, in vindication of their polity (i.e. an organized society together with its government and administration), to a set of arguments or "signifiers" in Barthesian parlance, at the core of which lie the key notions of the American way of life and Americanism, two concepts about which there seems to be a consensus of opinion.
The American way of life is too familiar a notion to look into it in detail; everybody knows it suggests a certain degree of affluence and material well-being (illustrated by the possession of one or several cars, a big house with an impressive array of machines and gadgets etc.), and also implies a certain type of social relations based on a sense of community which does not preclude an obvious strain of rugged individualism and lastly, to strengthen the whole thing, an indestructible faith in freedom and a superior moral worth. As far as Americanism is concerned, it suggests devotion to or preference for the USA and its institutions and is the only creed to which Americans are genuinely committed. Although Americanism has been in common use since the late XVIIIth century no one has ever been completely sure of its meaning, and it is perhaps best defined in contrast to its opposite Un-Americanism, i.e. all that is foreign to or opposed to the character, standards or ideals of the USA. Be that as it may, the concept of Americanism apparently rests on a structure of ideas about democracy, liberty and equality; through Americanism public opinion expresses its confidence in a number of hallowed institutions and principles: the Constitution, the pursuit of happiness, the preservation of individual liberty and human rights, a sense of mission, the free enterprise system, a fluid social system, a practical belief in individual effort, equality of opportunity, etc., in short a set of tenets that prompts the Americans' stock reply to those who criticize their country: "If you don't like this country, why don't you go back where you came from?" a jingoistic reaction which is sometimes even more tersely expressed by "America: love it or leave it". Thus Americanism is the backbone of the nation and it has changed very little even if America has changed a lot. To sum up, the vindication of Americanism and the American way of life aims at reaffirming, both at home and abroad, the reality and permanence of an American identity and distinctiveness. However, if such identity and specificity are unquestionable, they nonetheless pertain to the realm of the imaginary: why? There are at least two reasons for this: A) First of all, America is the outgrowth -not to say the child -of a dream i.e. the American Dream which has always been invoked by those in charge of the destiny of the American people whether a presidential candidate, a preacher or a columnist: "Ours is the only nation that prides itself upon a dream and gives its name to one: the American Dream", wrote critic L. Trilling. The Dream is the main framework of reference, it comes first and History comes next. One can maintain that from the very beginning of the settlement the Pilgrim Fathers and the pioneers settled or colonized a dream as well as a country. America originated in a twofold project bearing the marks of both idealism and materialism and such duality, as we shall see, was sooner or later bound to call for some sort of ideological patching up. At this stage, a brief survey of how things happened is in order: the first permanent settlement on American soil started in May 1607 in Virginia. The settlers, mainly adventurers and ambitious young men employed by the Virginia Company of London, were attracted by the lure of profit: they hoped to locate gold mines and a water route through the continent to the fabulous markets of Asia.
A few decades later the colonists were reinforced by members of the loyalist country gentry who supported the King in the English Civil War (1642-52) -the Cavaliers, who deeply influenced the shaping of the Antebellum South and gave the Southern upper classes their distinctively aristocratic flavour.
In 1620, some five hundred miles to the North, another settlement -Plymouth Colony -was set up under the leadership of the famous Pilgrim Fathers, a group of Puritans who were dissatisfied with religious and political conditions in England. Unlike the planters of Virginia, the settlers of New England were motivated less by the search for profits than by ideological considerations. They sailed to America not only to escape the evils of England, but also to build an ideal community, what their leader J. Winthrop called "A Model of Christian Charity," to demonstrate to the world the efficacy and superiority of true Christian principles. So, the beginnings of America were marked by a divided heritage and culture: the Puritans in the North and the Cavaliers in the South, Democracy with its leveling effect, and Aristocracy with slavery as its "mudsill". And these two ways of life steadily diverged from colonial times until after the Civil War. Now I'd like to embark upon a short digression to show you an interesting and revealing instance of ideological manipulation: on Thanksgiving Day, i.e. the fourth Thursday in November, a national holiday, the Americans commemorate the founding of Plymouth Colony by the Pilgrim Fathers in 1620. This event has come to symbolize the birth of the American nation, but it unduly highlights the part taken by New England in its emergence. The importance that history and tradition attach to the Puritan community should not obliterate the fact that the colonization of the Continent actually started in the South 13 years before. Jamestown, as you know now, was founded in 1607 and one year before the "Mayflower" (the ship in which the Pilgrim Fathers sailed) reached Massachusetts, a Dutch sailing ship, named the "Jesus" (truth is indeed stranger than fiction) had already unloaded her cargo of 20 Negroes on the coast of Virginia. Small wonder then that in the collective consciousness of the American people, the Pilgrim Fathers, with their halo of innocence and idealism, overshadowed the Southerners guilty of the double sin of slavery and Secession.
B) The second reason is that the American socio-political experience, and consequently ideology, roots itself, for better or for worse, in "Utopia" (from Greek "ou"/not + "topos"/place; after Utopia by Sir Thomas More, 1516, describing an island in which ideal conditions existed; since that time the name has come to refer to any imaginary political or social system in which relationships between the individual and the State are perfectly adjusted). The early Puritan settlers in New England compared themselves with God's Chosen People of the Old Testament and America was seen as a second Promised Land where a New Jerusalem was to be founded ("We shall be as a city upon a hill...," proclaimed their leader, J. Winthrop). What the early settlers' experience brings to light is the role of the fictitious in the making of America: the Pilgrim Fathers modelled their adventure on what I am tempted to call a Biblical or scriptural script. The settlement of the American continent was seen as a re-enactment of various episodes of the Old Testament and was interpreted in biblical terms: for instance, the Pilgrim Fathers identified themselves with the Hebrews of Exodus who under the leadership of Moses fled Egypt for the Promised Land. The English Kings whose policies were detrimental to the Puritan community were compared to Pharaoh and the long journey across the Atlantic Ocean was interpreted as an obvious parallel with the wanderings of the Hebrews across the Sinai Desert. Even the Indian tribes, who made it possible for the early colonists to survive the hardships of settlement, were readily identified with the Canaanites, the enemies of the Hebrews, who occupied ancient Palestine. Another corollary of the Promised Land scenario was, as we have just seen, that the Pilgrim Fathers had the deep-rooted conviction that they were endowed with a double mission: spreading the Word of God all over the new continent and setting up a New Jerusalem and a more perfect form of government under the guidance of the Church placed at the head of the community (a theocracy). Parenthetically, the identification with the Hebrews was so strong that at the time of the Declaration of Independence some delegates suggested that Hebrew should become the official language of the New Republic! Thus the Pilgrim Fathers were under the impression of leaving the secular arena to enter the mythical one: they looked forward to an end to history, i.e. the record of what man has done and this record is so gruesome that Byron called history "the devil's scripture". The Puritans planned to substitute God's scripture for the devil's: myth redeems history. The saga of the Pilgrim Fathers is evidence of the supremacy of the mythical or imaginary over the actual; it is also an illustration of the everlasting power of mythical structures to give shape to human experience: the flight from corrupt, sin-ridden Europe was assimilated to the deliverance of Israel from Egypt. Now it is worthy of note that if utopia means lofty ideals, aspiration, enterprise and a desire to improve the order of things, it also tends to degenerate and to content itself with paltry substitutes, makeshift solutions and vicarious experiences: as M. Atwood puts it, the city upon the hill has never materialized and: "Some Americans have even confused the actuality with the promise: in that case Heaven is a Hilton Hotel with a coke machine in it".
Such falling-off is illustrated by the evolution of the myth of the Promised Land which, as an ideological construction, served a double purpose: first of all, there is no doubt that this myth and its derivative (the Idea of the Puritans as a Chosen People) reflected an intensely personal conviction and expressed a whole philosophy of life, but at the same time it is obvious that these religious convictions readily lent themselves to the furtherance of New England's political and economic interests. The consciousness of being God's chosen instruments in bringing civilization and true religion to the wilderness justified a policy of territorial expansion and war on the Indians: their culture was all but destroyed and the race nearly extinguished. As you all know, the Indians were forced into ever-smaller hunting-grounds as they were persuaded or compelled to give their forests and fields in exchange for arms, trinkets or alcohol. This policy of removal culminated in the massacre of Wounded Knee and the harrowing episode of the Trail of Tears. So this is a perfect example of the way ideology works: in the present instance, it served as a cover for territorial expansion and genocide.
As a historian put it, "the American national epic is but the glorification of a genocide".
What the early days of settlement prove beyond doubt is that the American experiment took root in an archetypal as well as in a geographical universe, both outside history and yet at a particular stage in the course of history. The Pilgrim Fathers' motivations were metaphysical as well as temporal and the ideological discourse held by those founders was inscribed in the imaginary: it was rich in fables, symbols and metaphors that served as a system of interpretation or framework of references to give shape and meaning to their experience. Thus the American Dream was intimately related to the Sacred and eventually regarded as sacred. Later, under the influence of the writings of such philosophers as John Locke (1632-1704) or Benjamin Franklin (1706-1790), the American Dream was gradually remodelled and secularized but it has always kept a mystical dimension.
Nowadays, it is obvious that ideology in present-day America is mostly concerned with what the Dream has become and the most striking feature of it is its permanence -however changeable its forms and short-lived its manifestations may have been. It is of course an endless debate revealing great differences of attitude; some, like John Kennedy in A Nation of Immigrants (1964), maintain the Dream has materialized ("The opportunities that America offered made the dream come true, at least for a good number of people; but the dream itself was in large part the product of millions of plain people starting a new life in the conviction that life could indeed be better, and every new wave of immigrants revived the dream") while others contend that it has vanished into thin air or again claim, like John M. Gill in his introduction to The American Dream, that the American Dream has not been destroyed because it has not materialized yet; it is just, in the words of the Negro poet Langston Hughes, "a dream deferred":
What happens to a dream deferred?
Does it dry up
like a raisin in the sun?
Or fester like a sore -
And then run?
Does it stink like rotten meat?
Or crust and sugar over -
like a syrupy sweet?
Maybe it just sags
like a heavy load.
Or does it explode?
Let America be America again.
Let it be the dream it used to be.
Let it be the pioneer on the plain
Seeking a home where he himself is free.
(America never was America to me.)
Let America be the dream the dreamers dreamed -
Let it be that great strong land of love
Where never kings connive nor tyrants scheme
That any man be crushed by one above.
(It never was America to me.)
O, let my land be a land where liberty
Is crowned with no false patriotic wreath,
But opportunity is real, and life is free,
Equality is in the air we breathe.
(There's never been equality for me
Nor freedom in this "homeland of the free.")
[...]
O, yes, I say it plain,
America never was America to me,
And yet, I swear this oath -
America will be!
The Dream is all the more enduring as it is seen as being deferred. Note as well that the definition of the Dream has continually changed as the notion of happiness evolved, but if some elements have disappeared, others have been included and the Dream still embodies a number of obsessions and phantasms that haunted the people of Massachusetts or New Jersey four centuries ago. Such perennity and resilience are most remarkable features and prove beyond doubt that myth cannot be destroyed by history. The main components of the American Dream are well-known; it is a cluster of myths where one can find, side by side or alternating with each other :
- the myth of Promise ("America is promises") in its quasi-theological form, America being seen as God's own country;
- the myth of plenty: America = a Land of plenty, a myth which originated in the Bible and then assumed more materialistic connotations;
- the Myth of Adamic Innocence;
- a sense of mission, at first divine and then imperialistic (Manifest Destiny);
- the Frontier;
- the Melting-Pot, etc.
The list is by no means exhaustive and might include all the concepts at the core of Americanism such as the pursuit of happiness, equality of opportunity, freedom, self-reliance and what not.
However nebulous this series of elements may be, it played at one time or another in America -and for the most part still plays -the rôle of motive power or propelling force for the American experiment: this is what makes Americans tick! I shall now embark upon a more detailed examination of the major ones.
THE PROMISED LAND
The significance of the Promised Land varied according to what the settlers or immigrants expected to find in the New World. For some it was chiefly a religious myth (America was seen as a haven of peace for latter-day pilgrims); for those who were more interested in worldly things it was supposed to be an El Dorado, a legendary country said to be rich in gold and treasures, lastly, for a third category of people, it symbolized a prelapsarian world, a place of renewal and the site of a second golden age of humanity. As we have seen, most of the colonists who left Europe for the New World did so in the hope of finding a more congenial environment and for the Pilgrim Fathers New England was a modern counterpart of the Biblical archetype and the success of the settlement was seen as evidence of their peculiar relation to God (a sign of divine election). But from the outset, the myth also served different purposes:
-it was used as propaganda material and a lure to stimulate immigration from Europe;
-it provided the colonists with a convenient justification for the extermination of the Indians;
-it offered an argument against British rule since God's Chosen People could not acknowledge any other authority but God's, which resulted in a theocratic organization of the colony.
What is worthy of note is that even if subsequent settlers did not share this explicitly religious outlook stemming from radical Protestantism, most of them did think of America as in some sense a gift of Divine Providence. But the secularization of the myth set in very early; in the late XVIIIth and early XIXth centuries, with the rise of capitalism and incipient industrialism which made living conditions worse for large numbers of people, a reaction set in which revived the pastoral ideal, a pagan version or new avatar of the concept of the Promised Land. The myth of a rustic paradise, as formulated by Rousseau for instance, postulates that the beauty of nature, the peace and harmony of the virgin forest have a regenerative, purifying and therapeutic effect both physically and morally.
Thomas Jefferson, 3rd President of the U.S., was the originator of the pastoral tradition in America: he maintained that the independent yeoman farmer was the true social foundation of democratic government: "Those who labor in the earth are the chosen people of God, if ever he had a chosen people," he wrote. If the pastoral myth played an obvious role in the conquest of the Continent, it was nonetheless a sort of rearguard action doomed to failure: the advance of progress and urbanization was irresistible, but the ideal of a life close to nature was to persist in the realm of fiction where it repeatedly crops up in the works of Cooper, Emerson, Twain or Thoreau.
However, the religious interpretation of the myth survived and continued at intervals to reassert itself through much of the XIXth century; see for instance the saga of the Mormons who trekked westward across the prairie to settle in Utah and found Salt Lake City, their Jerusalem. The motif of the journey out of captivity into a land of freedom also found an answering echo in Black slaves on Southern plantations; for some of them the dream did materialize when they managed to reach the Northern free states or Canada thanks to the "Underground Railroad", a secret organization helping fugitive slaves to flee to free territory. But the most important consequence of the idea of the Promised Land was the Messianic spirit that bred into Americans a moral sense and endowed them with the conviction that God had given them a world mission, i.e. America's "Manifest Destiny", an imperialistic slogan coined by John L. O'Sullivan, a journalist and political writer. According to the doctrine, it was the destiny of the U.S. to be the beacon of human progress, the liberator of oppressed peoples and consequently to expand across the continent of North America. The notion of "Manifest Destiny" is perfectly illustrative of the way ideology works and turns every principle or doctrine to its advantage: from a sense of mission in the field of religion, the concept evolved into a secular and imperialistic justification for territorial expansion. This self-imposed mission served as a most convenient pretext to justify acts of imperialistic intervention in the affairs of foreign countries, but one must also acknowledge that on occasion it also provided the moral basis for acts of altruism or generosity toward other nations. Thus there was a shift from spreading the Word of God to spreading the American model of government and way of life; the impetus or drive was kept but the goal was changed: spiritual militancy evolved into imperialism.
THE AMERICAN ADAM
As was to be expected, the myth of the Promised Land gave rise to a novel idea of human nature embodied by a new type of man, the American Adam, i.e. homo americanus having recaptured pristine innocence. Sinful, corrupt Europe was an unlikely place for the emergence of this new avatar of humanity, but the American wilderness, being a virgin environment, was to prove much more favorable to the advent of a mythic American new man. As St. John de Crèvecoeur stated in his Letters from an American Farmer (1782): "The American is a new man, who acts upon new principles; he must therefore entertain new ideas and form new opinions". The forefather of the American Adam was "the natural man" or "the noble savage" of Locke's and Rousseau's philosophies, i.e. an ideal type of individual seen as the very opposite of the corrupt and degenerate social man. The American farmer, hedged in by the forest, partaking of none of the vices of urban life, came to be regarded as the very type of Adamic innocence. In the wake of Independence, the new country elaborated a national ideology characterized by a strong antagonism towards Europe and towards the past: cf. John L. O'Sullivan (1839):
Our National birth was the beginning of a new history, the formation and progress of an untried political system, which separates us from the past and connects us with the future only; so far as regards the entire development of the rights of man, in moral, political and national life, we may confidently assume that our country is destined to be the great nation of futurity.
XIXth-century authors like Thoreau, Emerson or Cooper were the principal myth-makers: they created a collective representation, the American Adam, which they described as:
An individual emancipated from history, happily bereft of ancestry, untouched and undefiled by the usual inheritances of family and race; an individual standing alone, self-reliant and self-propelling, ready to confront whatever awaited him with the aid of his unique and inherent resources. It was not surprising, in a Bible-reading generation, that the new hero (in praise or disapproval) was most easily identified with Adam before the Fall. Adam was the first, the archetypal man. His moral position was prior to experience, and in his very newness he was fundamentally innocent. (R. W. Lewis)
As immigrants from Europe contaminated the east of the American continent, the western part of the country, being thinly populated and therefore unsullied, became the repository of American innocence. An interesting political development from the fear of European corruption and the myth of the American Adam was isolationism. It proceeded from the assumption that America was likely to be tainted in its dealings with foreign nations and resulted in the formulation of the Monroe Doctrine in 1823: a declaration enunciated by James Monroe (5th President of the US) that the Americas were not to be considered as a field for European colonization and that the USA would view with displeasure any European attempt to intervene in the political affairs of American countries. It dominated American diplomacy for the next century, and came, in the late 19th century, to be associated with the assertion of U.S. hegemony in Latin America. One of the objectives of the Monroe Doctrine was to preserve the moral purity of the nation. However, with the sobering experiences of the Civil War, WWI and WWII, and above all the War in Vietnam, the myth lost some of its credibility for experience conclusively proved that the Americans did not belong to a radically different species. America was to be, in the words of M. Lerner, "an extended genesis" but it fizzled out with the outbreak of the War between brothers; then America entered or rather fell into History again: "We've had our Fall" said a Southern woman of letters (E. Welty, Flannery O'Connor?). Though it suffered severe setbacks, the myth of the American Adam remains deeply rooted in the American psyche and is a leitmotif in American fiction, in the Press or in political speeches. The very stereotype of the self-made man ("the man who is the son of his own works"), totally dedicated to the present and the future, testifies to the all-engrossing and abiding power of the Adamic idea in American life.
THE MELTING-POT (Facts and Fiction)
As we have seen, building a new polity required the development of a national sense of peoplehood, but in the U.S.A. the question of national identity was from the start inseparable from assimilation, i.e. America's ability to absorb unlimited numbers of immigrants, a process of massive cultural adaptation symbolized by the image of the "Melting-Pot". Thus the motto "E Pluribus Unum" sums up the essence of America's cosmopolitan faith, a conviction that this new country would bring unity out of diversity, but the national motto may assume two widely different meanings depending on whether one places greater stress on the "pluribus" or the "unum". Should "pluribus" be subordinated to and assimilated into "unum", or the other way round, i.e. should unity/"unum" be superseded by diversity/"pluribus"? Although the question is of crucial importance, its relevance is relatively recent: why? Simply because the original colonists all came from England and thus the American nationality was originally formed in a basically Anglo-Saxon mold.
As long as the settlers came from the British Isles, Germany and Northern Europe, i.e. were mostly Protestant in religion, the process of assimilation or melting-pot worked smoothly and resulted in the emergence of a culturally and politically dominant group which, though it also contained strong Celtic admixtures (the Welsh, the Scots and the Irish), came to be referred to as WASPS, an acronym formed from the initial letters of the words "White Anglo-Saxon Protestants". Now the melting-pot idea of immigrant assimilation and American nationality was first put forward by Michel-Guillaume Jean de Crèvecoeur in the oft-quoted passage from Letters from an American Farmer (1782):
What then is the American, this new man? He is either a European or the descendant of a European, hence that strange mixture of blood, which you will find in no other country. I could point out to you a family whose grandfather was an Englishman, whose wife was Dutch, whose son married a French woman, and whose present four sons have four wives of different nations. He is an American, who, leaving behind him all his ancient prejudices and manners, receives new ones from the new mode of life he has embraced, the new government he obeys, and the new rank he holds. He becomes an American by being received into the broad lap of our great Alma Mater. Here individuals are melted into a new race of men, whose labours and posterity will one day cause great changes in the world. (emphasis mine)
The term Melting-Pot, which remains the most popular symbol for ethnic interaction and the society in which it takes place, was launched by Israel Zangwill's play The Melting-Pot which had a long run in New York in 1909. Now what must be pointed out is that in spite of its liberality and tolerance, the cosmopolitan version of the melting-pot was far from being a catholic or universal process. It seemed obvious that from the outset some allegedly unmeltable elements such as the Indians or the Blacks would be simply excluded from the process. Besides, the Melting-Pot was first of all and still is a theory of assimilation. The idea that the immigrants must change was basic; they were, as Crèvecoeur put it, to discard all vestiges of their former culture and nationality to conform to what was at bottom an essentially Anglo-Saxon model. If, before the Civil War, the first big wave of immigrants from Ireland, Germany, Sweden and Norway was easily melted into a new race of men in the crucible of American society, in the 1880s, the second wave, an influx of Catholic people from the Mediterranean area, followed by Slavic people and Jews, strained the assimilationist capacity of the so-called melting-pot. The flood of immigrants whose life-styles and ways of thinking were conspicuously different from American standards raised the problem of mutation and assimilation; it also gave rise to a feeling of racism towards the newly-arrived immigrants. Xenophobia was then rampant and found expression in such movements as the Ku Klux Klan (the second organization, founded in 1915 and professing Americanism as its object), Nativism (the policy of protecting the interests of native inhabitants against those of immigrants) and Know-Nothingism (from the answer "I know nothing" that the members of the organization were advised to give inquisitive people). The program of the Know-Nothing party called for the exclusion of Catholics and foreigners from public office and demanded that immigrants should not be granted citizenship until twelve years after arrival. From the 1880s on, increasing numbers of Americans came to doubt that the mysterious alembic of American society was actually functioning as it was supposed to: the melting-pot gave signs of overheating and the USA assumed the disquieting appearance of "AmeriKKKa". (Parenthetically, I'd like to point out that the strain of xenophobia has not disappeared from American culture; there have been several resurgences of the phenomenon, e.g.
McCarthyism "Red-hunting during the cold war and nearer to us various campaigns in favour of "100 percent Americanism"). To return to the XIX th century, the public outcry against overly liberal immigration policies and the increasing number of "hyphenated" Americans (i.e. Afro-American) led U.S Government to pass legislation restricting entry to the Promised Land (quotas, literacy tests, or the Exclusion Act in 1882 to put an end to Chinese immigration). That period brought to light the limitations and true nature of the melting-pot theory which was just a cover for a process of WASPification in an essentially Anglo-Saxon mold which was almost by definition and from the outset unable to assimilate heterogeneous elements sharply diverging from a certain standard. In the words of N. Glazer and D. Moynihan, "the point about the melting-pot is that it did not happen" i.e. it was just a fallacy and an ideological argument masking the domination of one social group, the Wasps, under the guise of universal principles.
What is the situation today? Immigration laws are a little more liberal and new Americans are still pouring in by the million (nearly 5 million immigrants were admitted from 1969 to 1979). The 70's were the decade of the immigrants and above all the decade of the Asian (refugees from the Philippines or Vietnam etc.). A new and interesting development is Cuban immigration, concentrating in Florida and the influx of illegal immigrants from Mexico, the Wetbacks, settling in the South-West. The most dramatic consequence of the presence of fast-growing communities of Cubans, Puerto Ricans, or Mexicans is the increasing hispanicization of some parts of the USA. Spanish is already the most common foreign language spoken in the States and in some cities or counties it may one day replace American English.
The last three decades were marked by a revival of ethnicity and the rise of new forms of ethnic militancy; the 60s witnessed not only an undeniable heightening of ethnic and racial consciousness among the Blacks (pride of race manifested itself in the purposeful promotion of black power, black pride, black history, and patriotism), the Hispanic Americans and the native Americans, but also an emphatic rejection of the assimilationist model expressed in the idea of the Melting-Pot. Nowadays, foreign-born Americans want the best of both worlds i.e. enjoy the benefits of the American system and way of life but at the same time preserve their customs, traditions and languages. They refuse to sacrifice their own cultural identities on the altar of Americanism and claim a right to a twofold identity.
By way of conclusion: the two decades from 1960 to 1980 witnessed a severe weakening of confidence in the American system, in the principles on which it was based and in the efficacy of its institutions. This crisis in confidence originated in the realization that in the words of Harold Cruse, "America is a nation that lies to itself about who and what it is. It is a nation of minorities ruled by a minority of one--it thinks and acts as if it were a nation of White-Anglo-Saxon Protestants".
The debunking of the melting-pot theory will hopefully pave the way for a different type of society: what seems to be emerging today is the goal of a society that will be genuinely pluralistic in that it will deliberately attempt to preserve and foster all the diverse cultural and economic interests of its constituent groups. The motto "E Pluribus Unum" is coming to seem more and more outdated and one may wonder whether the country's new motto should not be "Ex Uno Plures".
Conclusion
What must be pointed out, after this survey of some of the basic components of national consciousness and ideology, is that they constitute the motive power of the American experiment, what moves or prompts American people to action and stimulates their imagination, in other words, it is what makes them tick.
A third feature of American ideology is that the ideals and goals it assigns to American people are consistently defined in terms of "prophetic vision", whether it be the vision of a brave new world, of a perfect society or whatever. One of the most forceful exponents of this prophetic vision was Thomas Paine, the political writer, who wrote in Common Sense (1776): "We, Americans, have it in our power to begin the world over again. A situation similar to the present, hath not happened since the days of Noah until now. The birthday of a new world is at hand".
Nearer to us, F. D. Roosevelt stated in 1937: "We have dedicated ourselves to the achievement of a vision" and we're all familiar with Martin L. King's famous opening lines: "I have a dream that one day the sons of former slaves and the sons of former slave-owners will be able to sit together at the table of brotherhood" (August 1963 in Washington).
What is noteworthy -and the previous quotation is a case in point -is that there is an obvious relationship between American ideology and religion. As an observer put it: "America is a missionary institution that preaches mankind a Gospel". As we saw, American ideology, as embodied in the American Dream, is inseparable from the Sacred and buttressed by the three major denominations in the States: Protestantism, Catholicism, and Judaism. It must be borne in mind that myths, religions, ideology and of course politics are in constant interplay: they often overlap and interpenetrate, cf. Tocqueville: "I do not know whether all Americans have faith in their religion, but I am sure that they believe it necessary to the maintenance of republican institutions".
An opinion borne out by President Eisenhower's contention that "Our Government makes no sense unless it is founded in a deeply religious faith -and I don't care what it is". It seems that in the States the attitude toward religion is more important than the object of devotion; the point is to show one has faith in something -whether God or the American way of life does not really matter.
In the words of a sociologist, "we worship not God but our own worshiping" or to put it differently, the Americans have faith in faith and believe in religion. Thus the nation has always upheld the idea of pluralism of belief and freedom of worship. The State supports no religion but even nowadays religion is so much part of American public life that there seems to be a confusion between God and America, God's own country: dollar banknotes bear the inscription "In God we trust" and the President takes the oath on the Bible. Public atheism remains rare: it is regarded as intellectual, radical, un-American and is accompanied by social disapproval. Since 1960 church attendance has declined steadily, but experimentation with new forms of worship still continues, as witness the increasing number of sects of every description vying with the three main religious groups viz. Protestants, Roman Catholics and Jews.
It is a well-known aspect of religious life in the USA that an American church is in many ways very similar to a club: it is a center of social life and an expression of group solidarity and conformity. People tend to change religious groups or sects according to their rise in social status or their moving into a new neighbourhood. Little emphasis is laid on theology, doctrine or religious argument: morality is the main concern. Tocqueville once remarked: "Go into the churches (I mean the Protestant ones) you will hear morality preached, of doctrine not a word...". The observation is still valid, for churches and religious denominations are expressions of group solidarity rather than of rigid adherence to doctrine. However, this religious dimension is so firmly entrenched in the mind of Americans that Hubert Humphrey, a presidential candidate, campaigned for "the brotherhood of man under the fatherhood of God", something unthinkable on the French political scene, and President Jimmy Carter was a Baptist preacher. The American people do have their common religion and that religion is the system familiarly known as the American way of life. By every realistic criterion the American way of life is the operative faith of the American people, for the American way of life is at bottom a spiritual structure of ideas and ideals, of aspirations and values, of beliefs and standards: it synthesizes all that commends itself to the Americans as the right, the good and the true in actual life. It is a faith that markedly influences, and is influenced by, the official religions of American society. The American way of life is a kind of secularized Puritanism and so is democracy which has been erected into a "superfaith" above and embracing the three recognized religions; cf. J. P. Williams: "Americans must come (I am tempted to substitute 'have come') to look on the democratic ideal as the Will of God" so that the democratic faith is in the States the religion of religions and religion, in its turn, is something that reassures the American citizen about the essential rightness of everything American, his nation, his culture and himself. So, to conclude this series of observations, one can maintain that the Americans are "at one and the same time, one of the most religious and most secular of nations".
If one of the functions of religion is, among other things, to sanctify the American way of life, if democracy can be seen as a civic religion, then the core of this religion is faith in the Constitution as well as in Law and Order. Without going into too much detail, I'd like to point out that the implications and connotations of the two terms are quite different from those they have in other cultures. Law, for instance, is endowed with a prestige that comes from the Bible through its association with Mosaic Law and British tradition (the Common Law) which accounts for its sacred, self-evident nature and its being seen as a "transcendental category": cf. M. L. King's statement that "an unjust law is a human law that is not rooted in eternal law or in natural law". Despite King's lofty conception, American law embodies many of the moralisms and taboos of the American mind and aims at enforcing an order that is dear to the establishment. At the apex of the American legal system stands the Supreme Court as interpreter of the Constitution which enshrines the nation's cohesive force and lends itself to idolization. The Constitution is America's covenant and its guardians, the justices of the Supreme Court, are touched with its divinity.
Lastly, among the key values underpinning American ideology, there's common sense, the very foundation of the American Revolution as witness Thomas Paine's pamphlet. Common sense or sound practical judgment is akin to what R. Barthes used to call Doxa (i.e. "current opinion, false self-evidence, that is to say the masks of ideology, the plausible, what goes without saying: what is characteristic of ideology is that it always tries to pass off as natural what is profoundly cultural or historical", L-J Calvet, R. Barthes). Thus there is in American ideology an enduring relationship between the notion of common sense and that of the common man, a stereotype, the main constituent of the middle-class and mainstream America, the backbone of the system. The high valuation of the "common man", endowed with all the virtues that are dear to Americanism, dates back to Jacksonian democracy; the common man has undergone a series of avatars: the frontiersman, the farmer of the Middle-West, the man in the street, and lastly the middle-class citizens forming the so-called silent majority which is, in spite of its name, quite vocal in the defense of its interests and values and considers itself the guardian of normalcy. But, paradoxically enough, what must be emphasized is the complementary link between the mass of ordinary people advocating common sense and belonging to the middle-classes and hero-worship, the cult of individuals out of the common run. American culture has given rise to an impressive gallery of national or comic strip heroes such as Kit Carson, Davy Crockett, Paul Bunyan, Superman or Batman... In the same way, the perfect President is the one whose life follows the well-known pattern set by such national heroes as Jackson or Lincoln, i.e. a trajectory leading the individual from the log-cabin to the White House, a fate which is evidence of the openness of American society and equality of opportunity.
All the elements we have reviewed account for the remarkable stability of American ideology: I grant that there have been periods when that very ideology was questioned -the Americans are currently undergoing one of these cyclical crises in confidence -but national consensus, though somewhat shattered, is still going strong. Ideology continues to play its traditional rôle of cementing force aiming both at neutralizing all potential conflicts or disruptive tensions and at revitalizing the key values of Americanism. Its flexibility is its main asset and accounts for the multiple adjustments it resorted to in order to ward off all threats to the system: Populism, Progressivism, the Square Deal, the Fair Deal, and the New Frontier -"to get America moving again" -were all attempts to avert disintegration. It is the same fundamental ideology that underpins the particular position on this or that issue that the Republicans, the Democrats, the Liberals or even the Radicals may take up. In spite of great differences of opinion and interests, there's general agreement, with the usual qualifications, on such basic principles as the defence of:
- Americanism, set up as a universal model;
- a regime of free enterprise and free competition;
- a free world;
- national safety;
- American leadership.
It is worthy of note that opposition to the system and criticisms levelled against Americanism, the consumer society or the society of alienation, are more often than not inspired by the same values and ideals that its opponents accuse American society of having forfeited; besides, those who challenge traditional values and the goals of official culture i.e. adherents to the so-called counter-culture, whether it be the youth culture, the drug culture, the hippie movement and flower children, can seldom conceive of any lasting alternative to the American way of life.
The wiles and resilience of national ideology are such that it never fails to absorb or "recuperate" subversive practices by turning them to its advantage. As I said earlier, the USA is currently undergoing a period of self-doubt and loss of confidence; in spite of some outstanding achievements in the field of foreign policy there's a rising tide of discontent and disenchantment at home. Some Americans have come to question their country's ability to materialize the promise and the dream upon which America was founded. Is the age of ideology passing away to give way to the age of debunking and demythologizing? The question is uppermost in the national consciousness but as far as I am concerned it will go unanswered.
"17905"
] | [
"178707",
"420086"
] |
01764716 | en | [
"sdv",
"sde"
] | 2024/03/05 22:32:13 | 2011 | https://amu.hal.science/hal-01764716/file/Ait%20Said_et_al_2011.pdf | Samir Ait Said
Catherine Fernandez
Stéphane Greff
Arezki Derridj
Thierry Gauquelin
Jean-Philippe Mevy
email: [email protected]
Inter-population variability of leaf morpho-anatomical and terpenoid patterns of Pistacia atlantica Desf. ssp. atlantica growing along an aridity gradient in Algeria
Keywords:
Three Algerian populations of female Pistacia atlantica shrubs were investigated in order to check whether their terpenoid contents and morpho-anatomical parameters may characterize the infraspecific variability. The populations were sampled along a gradient of increasing aridity from the Atlas mountains into the northwestern Central Sahara.
As evidenced by Scanning Electron Microscopy, tufted hairs could be found only on seedling leaves from the low aridity site as a population-specific trait preserved also in culture. Under common garden cultivation seedlings of the high aridity site showed a three times higher density of glandular trichomes compared to the low aridity site. Increased aridity resulted also in reduction of leaf sizes while their thickness increased. Palisade parenchyma thickness also increases with aridity, being the best variable that discriminates the three populations of P. atlantica.
Analysis of terpenoids from the leaves carried out by GC-MS reveals the presence of 65 compounds. The major compounds identified were spathulenol (23 µg g-1 dw), α-pinene (10 µg g-1 dw), verbenone (7 µg g-1 dw) and β-pinene (6 µg g-1 dw) in leaves from the low aridity site; spathulenol (73 µg g-1 dw), α-pinene (25 µg g-1 dw), β-pinene (18 µg g-1 dw) and γ-amorphene (16 µg g-1 dw) in those from medium aridity; and spathulenol (114 µg g-1 dw), α-pinene (49 µg g-1 dw), germacrene D (29 µg g-1 dw) and camphene (23 µg g-1 dw) in leaves from the high aridity site. Terpene concentrations increased with the degree of aridity: the highest mean concentrations of monoterpenes (136 µg g-1 dw), sesquiterpenes (290 µg g-1 dw) and total terpenes (427 µg g-1 dw) were observed at the most arid site and are, respectively, 3-, 5- and 4-fold higher than at the least arid site. Spathulenol and α-pinene can be taken as chemical markers of aridity. Drought-discriminating compounds in low, but detectable, concentrations are δ-cadinene and β-copaene. The functional roles of the terpenoids found in P. atlantica leaves and principles of their biosynthesis are discussed with emphasis on the mechanisms of plant resistance to drought conditions.
Introduction
Plants respond to environmental variations, particularly to water availability through morphological, anatomical and biochemical adjustments that help them cope with such variations [START_REF] Lukovic | Histological characteristics of sugar beet leaves potentially linked to drought tolerance[END_REF]. Plants are adapted to drought stress by developing xeromorphic characters based mainly on reduction of leaf size [START_REF] Trubat | Plant morphology and root hydraulics are altered by nutrient deficiency in Pistacia lentiscus L[END_REF] and increase in thickness of cell walls, a more dense vascular system, greater density of stomata and an increased development of palisade tissue at the expense of the spongy tissue [START_REF] Bussotti | Structural and functional traits of Quercus ilex in response to water availability[END_REF][START_REF] Bacelar | Immediate responses and adaptative strategies of three olive cultivars under contrasting water availability regimes: changes on structure and chemical composition of foliage and oxidative damage[END_REF][START_REF] Syros | Leaf structural dynamics associated with adaptation of two Ebenus cretica ecotypes[END_REF].
Terpenes are one of the most diverse families of chemical compounds found in the plant kingdom and they exhibit several roles in plant defense and communication [START_REF] Kirby | Biosynthesis of plant isoprenoids: perspectives for microbial engineering[END_REF]. In response to drought conditions, significant changes of terpene emissions were shown in many Mediterranean species (Ormeño et al., 2007a;[START_REF] Lavoir | Drought reduced monoterpene emissions from the evergreen Mediterranean oak Quercus ilex: results from a throughfall displacement experiment[END_REF]. Similar results were reported regarding the occurrence of terpenic components from Erica multiflora and Globularia alypum [START_REF] Llusià | Net ecosystem exchange and whole plant isoprenoid emissions by a Mediterranean shrubland exposed to experimental climate change[END_REF]. It has also been shown that monoterpenes and sesquiterpenes have a role in protecting plants from thermal damage (Peñuelas [START_REF] Peñuelas | Linking photorespiration, monoterpenes and thermotolerance in Quercus[END_REF][START_REF] Loreto | Impact of ozone on monoterpene emissions and evidence for an isoprene-like antioxidant action of monoterpenes emitted by Quercus ilex leaves[END_REF][START_REF] Llusià | Airborne limonene confers limited thermotolerance to Quercus ilex[END_REF][START_REF] Peñuelas | Linking isoprene with plant thermotolerance, antioxidants and monoterpene emissions[END_REF]. Terpenes are recognized as being relatively stable and also as precursors of numerous potential physiological components including growth regulators [START_REF] Byrd | Narrow hybrid zone between two subspecies of big sagebrush, Artemisia tridentata (Asteraceae). VIII. Spatial and temporal pattern of terpenes[END_REF]. Another property of these compounds is their great variability in time and with the geographic distribution of species, as shown by many studies in the literature [START_REF] Lang | Abies alba Mill -differentiation of provenances and provenance groups by the monoterpene patterns in the cortex resin of twigs[END_REF][START_REF] Staudt | Seasonal variation in amount and composition of monoterpenes emitted by young Pinus pinea trees -implications for emission modeling[END_REF][START_REF] Hillig | A chemotaxonomic analysis of terpenoid variation in Cannabis[END_REF][START_REF] Smelcerovic | Essential oil composition of Hypericum L. species from Southeastern Serbia and their chemotaxonomy[END_REF]. As a result, many studies relate terpenic constituents to plant systematics and population issues [START_REF] Adams | Systematics of multi-seeded eastern hemisphere Juniperus based on leaf essential oils and RAPD DNA fingerprinting[END_REF][START_REF] Naydenov | Structure of Pinus nigra Arn. populations in Bulgaria revealed by chloroplast microsatellites and terpenes analysis: provenance tests[END_REF].
The genus Pistacia (Anacardiaceae) consists of at least eleven dioecious species [START_REF] Zohary | A monographic study of the genus Pistacia[END_REF][START_REF] Kokwaro | Notes on the Anacardiaceae of Eastern Africa[END_REF] that all intensely produce terpenes. There are three wild Pistacia species in Algeria: P. atlantica Desf. ssp. atlantica, which exhibits high morphological variability [START_REF] Belhadj | Analyse de la variabilité morphologique chez huit populations spontanées de Pistacia atlantica en Algérie[END_REF], P. lentiscus L. and, less frequently, P. terebinthus L. Pistacia atlantica is considered to be an Irano-Turanian species which is distributed from south-west Asia to north-west Africa [START_REF] Zohary | A monographic study of the genus Pistacia[END_REF]. In Algeria, it occurs in the wild from sub-humid environments to extreme Sahara sites [START_REF] Monjauze | Note sur la régénération du Bétoum par semis naturels dans la place d'éssais de Kef Lefaa[END_REF][START_REF] Quézel | Ecologie et biogéographie des forêts du bassin méditerranéen[END_REF][START_REF] Benhassaini | The chemical composition of fruits of Pistacia atlantica Desf. subsp. atlantica from Algeria[END_REF]. As a thermophilous xerophyte, P. atlantica grows on dry stony or rocky hillsides, edges of fields, roadsides, near the base of dry stone walls and other similar habitats [START_REF] Tzakou | Volatile metabolites of Pistacia atlantica Desf. from Greece[END_REF]. The species grows well on clay or silty soils, although it can also thrive on calcareous rocks where roots develop inside cracks. Hence, P. atlantica has a wide ecological plasticity as also shown by [START_REF] Belhadj | Comparative morphology of leaf epidermis in eight populations of atlas pistachio (Pistacia atlantica Desf., Anacardiaceae)[END_REF] through leaf epidermis analysis. For all these reasons, P. atlantica is used in re-planting projects in Algeria, but only a few studies have been carried out on the infraspecific variability of this plant.
Regarding the phytochemistry of P. atlantica, essential oils from samples harvested in Greece [START_REF] Tzakou | Volatile metabolites of Pistacia atlantica Desf. from Greece[END_REF] and Morocco [START_REF] Barrero | Chemical composition of the essential oils of Pistacia atlantica Desf[END_REF] were described. Recently also a study was published describing essential oils and their biological properties from P. atlantica harvested in Algeria [START_REF] Gourine | Chemical composition and antioxidant activity of essential oil of leaves of Pistacia atlantica Desf. from Algeria[END_REF]. However, to the best of our knowledge, there is no detailed study on the relationship between the phytochemistry of P. atlantica and its ecological conditions of growth.
The aim of this work is to investigate the intraspecific diversity of three populations of P. atlantica growing wild in arid zones of Algeria through terpenoid analysis and leaf morpho-anatomical traits. We also examined the possible links that may exist between plant chemical composition and aridity conditions of these three locations.
Material and methods
Sampling sites
Pistacia atlantica Desf. ssp. atlantica was harvested in June 2008 from three Algerian sites chosen along a Northeast-Southwest transect of increasing aridity: Oued-Besbes (Medea, low aridity), Tilghemt (Laghouat, medium aridity) and Beni-Ouniff (Bechar, high aridity) (Fig. 1). Specimens were deposited at the herbarium of the University of Provence, Marseille, and referred to as Mar-PA1-2008, Mar-PA2-2008 and Mar-PA3-2008 for the locations of Medea, Laghouat and Bechar, respectively. Ecological factors of the sampling sites are described in Table 1.
For all the sites, sampling was carried out during the fructification stage in order to take into account the phenological shift due to local climatic conditions. Ten healthy female individuals of the same age were chosen per site. Plant density and soil conditions were similar across the different sites.
Leaf morphology and anatomy
From each of the three locations, ten female trees were selected and thirty fully sun-exposed leaves were harvested per tree. Once harvested, these leaves were carefully dried and kept in a herbarium prior to biometric measurements: leaf length and width, petiole length, rachis length, and terminal leaflet length and width.
For anatomical parameters, cross sections were prepared across the middle part of three fresh leaflets per leaf and stained with carmino-green; the thicknesses of the abaxial and adaxial epidermis, cuticle, palisade and spongy parenchyma, and of the total leaflet were then measured by light microscopy.
Scanning Electron Microscopy (SEM) of seedling leaves
Seeds were collected in August 2008 at the Medea and Bechar sites. After germination, the seedlings were transplanted into pots filled with peat and sand, then kept in a growth chamber at a constant temperature of 25 °C. The photoperiod was set at 11/13 h and the light irradiance was 500 µmol photons m⁻² s⁻¹. After 11 months of culture, eight plants from each location were randomly selected, and three leaves per plant were harvested and carefully dried prior to SEM observations. Micromorphological observations were carried out on three leaflet samples (adaxial and abaxial surfaces) per leaf. These were gold coated before observation with a scanning electron microscope (FEI XL30 ESEM, USA).
Terpenoids extraction
Mature, sun-exposed leaves were harvested in the field and dried in the dark at ambient temperature until constant weight; then, 100 g per tree were ground and stored until use. Shade-drying has no significant effect on the qualitative composition of volatile oils compared with fresh material, sun-drying and oven-drying at 40 or 45 °C [START_REF] Omidbaigi | Influence of drying methods on the essential oil content and composition of Roman chamomile[END_REF][START_REF] Sefidkon | Influence of drying and extraction methods on yield and chemical composition of the essential oil of Satureja hortensis[END_REF][START_REF] Ashafa | Effect of drying methods on the chemical composition of essential oil from Felicia muricata leaves[END_REF]. The extraction method consisted of suspending leaf dry matter in dichloromethane at a ratio of 1:2 (w/v) for 30 min, under constant shaking at room temperature. 50 µl of dodecane (5 mg ml⁻¹) were added as an internal standard for quantification.
Quantitative and qualitative analysis of terpenoids
Extracts were filtered on RC syringe filter (regenerated Cellulose, 0.45 m, 25 mm; Phenomenex, Le Pecq, France) then analyzed with a gas chromatograph Hewlett Packard ® GC 6890 coupled to a mass selective detector 5973 Network. The system was fitted with an HP-5MS capillary column 30 m, 0.25 mm, 0.25 m. 2 l of extracts was injected through an automatic injector ALS 7683 in splitless mode. Purge was set at 50 min ml -1 after 1 min. Injection temperature was maintained at 250 • C. Helium was used as carrier gas. A constant flow rate of 1 ml min -1 was set throughout the run. The oven temperature initially set at 40 • C was increased to 270 • C at a rate of 4 • C min -1 and remained constant for 5 min. The MSD transfer line heater was maintained at 280 • C.
Terpenes were identified by comparison of their arithmetic index (AI) and mass spectra with those obtained from authentic samples and literature [START_REF] Adams | Identification of Essential Oil Components by Gas Chromatography/Mass Spectrometry[END_REF].
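Identification by arithmetic index rests on the linear (temperature-programmed) retention index of van den Dool and Kratz, computed from the retention times of co-injected n-alkane standards (see the footnote of Table 4). The short Python sketch below illustrates this calculation; it is not part of the original study, and the function name and example retention times are hypothetical.

```python
# Illustrative sketch only: arithmetic (linear retention) index following
# van den Dool & Kratz, as used for matching against Adams' library.
# Function name and example values are hypothetical.

def arithmetic_index(rt_compound, rt_alkanes):
    """Linear retention index of a peak, given its retention time (min) and a
    dict {carbon_number: retention_time} of n-alkane standards run under the
    same temperature-programmed GC conditions."""
    carbons = sorted(rt_alkanes)
    for n, n_next in zip(carbons, carbons[1:]):
        t_n, t_next = rt_alkanes[n], rt_alkanes[n_next]
        if t_n <= rt_compound <= t_next:
            return 100 * (n + (rt_compound - t_n) / (t_next - t_n))
    raise ValueError("retention time outside the alkane calibration range")

# Example: a peak eluting at 12.80 min between n-C9 (11.90 min) and n-C10 (13.45 min)
# gives an index of about 958, to be compared with tabulated values.
print(round(arithmetic_index(12.80, {9: 11.90, 10: 13.45})))
```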
Statistical analysis
The data were analyzed with a one-way ANOVA model. The Newman-Keuls test was used to test for significant differences in monoterpene, sesquiterpene and total terpene concentrations and in the morpho-anatomical measurements between the three populations. In order to evaluate the information contained in the chemical data, a Principal Component Analysis was carried out. The statistical analyses were performed using the R statistical software and the "ade4" package.
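The analyses were run in R with "ade4"; purely as a hedged illustration, the following Python sketch reproduces the general workflow (one-way ANOVA per compound across sites, then a two-component PCA on the terpenoid table). The file and column names are hypothetical placeholders, and the Newman-Keuls post-hoc step is not shown.

```python
# Hedged illustration of the statistical workflow (the study itself used R and "ade4").
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = individual trees, columns = "site" plus one column per terpenoid (hypothetical file).
df = pd.read_csv("terpenoids_per_tree.csv")
compounds = [c for c in df.columns if c != "site"]

# One-way ANOVA for each compound across the three sites.
for c in compounds:
    groups = [g[c].values for _, g in df.groupby("site")]
    f, p = stats.f_oneway(*groups)
    print(f"{c}: F = {f:.2f}, p = {p:.4f}")

# PCA on standardized concentrations (two axes, as in the score plot).
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(df[compounds]))
print(pca.explained_variance_ratio_)
```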
Results
Morpho-anatomical measurements
Among the biometric parameters studied, leaf length and width as well as terminal leaflet length and width statistically discriminate the three populations of P. atlantica most strongly (Table 2). The population from the most arid site shows the smallest leaf and terminal leaflet sizes. However, the number of leaflet pairs increases with aridity. Regarding the anatomical data, the thickness of the palisade parenchyma is the major discriminating variable, and it increases with aridity (Table 3).
SEM observations
The epidermis of seedling leaves has markedly sinuous walls in both the Medea and Bechar populations. The abaxial and adaxial leaf surfaces of each population are covered with two types of trichomes, elongated hairs and glandular trichomes. The former are essentially located at the midrib of the adaxial leaf surface (Fig. 2A) and at the rachis, forming parallel rows (Fig. 2B). The latter (Fig. 2C) are distributed over the entire leaf surface (essentially on the abaxial surface), with a high density (18.31 ± 0.29 mm⁻²) in plants whose seeds were sampled from the population of the most arid site (Bechar).
Trichome density of the plants raised from seeds sampled from the population growing under less arid conditions (Medea) was 6.15 ± 0.21 mm⁻² when both seedling lots were cultivated in the same environment (Fig. 2D and E). The Medea population could further be discriminated by the presence of tufted hairs, which were never observed in the Bechar population of P. atlantica, neither in seedlings nor in adult plants (Fig. 2F).
Terpenoid analysis
Forty-nine compounds were identified in P. atlantica leaves (Table 4). Among these, twenty-two were monoterpenes (8 hydrocarbons and 14 oxygenated) and twenty-five were sesquiterpenes (16 hydrocarbons and 9 oxygenated). In the high aridity site, the major compounds identified were spathulenol (114 µg g⁻¹ dw), α-pinene (49 µg g⁻¹ dw), germacrene D (29 µg g⁻¹ dw) and camphene (23 µg g⁻¹ dw), while in the low aridity site spathulenol (23 µg g⁻¹ dw), α-pinene (10 µg g⁻¹ dw), verbenone (7 µg g⁻¹ dw) and β-pinene (6 µg g⁻¹ dw) were the dominant constituents. For the medium aridity site, situated between these two extreme conditions of aridity, spathulenol (73 µg g⁻¹ dw), α-pinene (25 µg g⁻¹ dw), β-pinene (18 µg g⁻¹ dw) and γ-amorphene (16 µg g⁻¹ dw) were the main terpenes found.
The quantitative analysis showed significant differences in the monoterpene, sesquiterpene and total terpene concentrations of P. atlantica leaves according to the sites investigated (Fig. 3). Three distinct groups were obtained (Newman-Keuls test, 5% level). Terpene concentrations increase with the degree of aridity. The highest mean concentrations of monoterpenes (136 µg g⁻¹ dw), sesquiterpenes (290 µg g⁻¹ dw) and total terpenes (427 µg g⁻¹ dw) were observed in the high aridity site, whereas these figures were 57 µg g⁻¹ dw, 57 µg g⁻¹ dw and 113 µg g⁻¹ dw, respectively, at the low aridity site.
Multivariate analysis was applied to the terpenoid contents of the 30 solvent extracts. Fig. 4 shows the two-dimensional mapping of the Principal Component Analysis, which comprises 77% of the total inertia. Axis 1 represents 62% of the information and is characterized on the positive side by thuja-2,4(10)-diene and on the negative side by a group of compounds, essentially tricyclene, α-pinene, camphene, isoborneol acetate, β-cubebene, β-copaene, germacrene D, δ-cadinene and spathulenol. Axis 2, representing 15% of the information, is characterized on the negative side by β-pinene and terpinen-4-ol.
The positions of the individual leaf-extract samples in the two-axis space show an overall homogeneity among extracts belonging to the same study site (Fig. 5). Three main groups, characterized by their geographical provenances, can be distinguished. The first group is situated on the positive side of Axis 1 and includes samples from individuals of the low aridity site. The second group is located on the negative side of Axis 1 and includes all individuals of the high aridity site. The third group, situated on the negative side of Axis 2, between the points related to samples from the two extreme sites, mostly includes samples from individuals of the medium aridity site. These three groups are clearly separated along Axis 1, which can be interpreted as indicating the aridity gradient. The most discriminating variables encompass α-pinene, spathulenol, δ-cadinene and copaene.
Discussion
Increase of epidermis, cuticle, palisade parenchyma and total leaf thickness with the degree of aridity may enhance survival and growth of P. atlantica by improving water relations and providing higher protection for the inner tissues in the high aridity site. Such patterns were observed in many species submitted to water stress (e.g., [START_REF] Bussotti | Structural and functional traits of Quercus ilex in response to water availability[END_REF][START_REF] Bacelar | Immediate responses and adaptative strategies of three olive cultivars under contrasting water availability regimes: changes on structure and chemical composition of foliage and oxidative damage[END_REF][START_REF] Guerfel | Impacts of water stress on gas exchange, water relations, chlorophyll content and leaf structure in the two main Tunisian olive (Olea europaea L.) cultivars[END_REF]. Also, a pronounced decrease of leaf size reduces transpiration in sites where water is scarce, as also reported for other plants [START_REF] Huang | Leaf morphological and physiological responses to drought and shade in two Populus cathayana populations[END_REF][START_REF] Macek | Morphological and ecophysiological traits shaping altitudinal distribution of three Polylepis treeline species in the dry tropical Andes[END_REF]. The high morpho-anatomical plasticity of Pistacia atlantica in response to aridity may explain its wide ecological distribution in northern Africa. Trichomes are considered as important taxonomic characters [START_REF] Krak | Trichomes in the tribe Lactuceae (Asteraceae) -taxonomic implications[END_REF][START_REF] Salmaki | Trichome micromorphology of Iranian Stachys (Lamiaceae) with emphasis on its systematic implication[END_REF][START_REF] Shaheen | Diversity of foliar trichomes and their systematic relevance in the genus Hibiscus (Malvaceae)[END_REF]. The absence of tufted hairs in Bechar population suggests the existence of genetic differences between the populations studied.
Regarding the phytochemistry of P. atlantica, no data were reported before on extractable terpenoids composition of the pistacia leaves. However, qualitative and quantitative analyses of essential oils from leaves of P. atlantica were reported by several authors. Oils from female plants originating from Greece contained myrcene (17.8-24.8%), sabinene (7.8-5.2%) and terpinene (6-11.6%) as major components [START_REF] Tzakou | Volatile metabolites of Pistacia atlantica Desf. from Greece[END_REF]. Some compounds found in our samples like ␥-amorphene, p-mentha-1,3,5-triene, cis-and trans-sabinene hydrate, ␣-campholenic aldehyde, trans-verbenol, myrtenal, myrtenol, verbenone, ␣muurolene and spathulenol were not found in leaves of P. atlantica from Greece. A provenance from Morocco whose sex was not specified was rich in terpinen-4-ol (21.7%) and elemol (20.0%) [START_REF] Barrero | Chemical composition of the essential oils of Pistacia atlantica Desf[END_REF]. These compounds were found in small amounts (less than 1.1%) also in our samples. Recently, [START_REF] Gourine | Chemical composition and antioxidant activity of essential oil of leaves of Pistacia atlantica Desf. from Algeria[END_REF] have identified 31 compounds from samples harvested at Laghouat with -pinene (19.1%), ␣-terpineol (12.8%), bicyclogermacrene (8.2%) and spathulenol (9.5%) as the principal molecules. Qualitative and quantitative differences between literature data and our results may be explained by such factors as sex of the plants [START_REF] Tzakou | Volatile metabolites of Pistacia atlantica Desf. from Greece[END_REF], period of plant collection [START_REF] Barra | Characterization of the volatile constituents in the essential oil of Pistacia lentiscus L. from different origins and its antifungal and antioxidant activity[END_REF][START_REF] Gardeli | Essential oil composition of Pistacia lentiscus L. and Myrtus communis L.: evaluation of antioxidant capacity of methanolic extracts[END_REF][START_REF] Hussain | Chemical composition, antioxidant and antimicrobial activities of basil (Ocimum basilicum) essential oils depends on seasonal variations[END_REF]), plant competition (Orme ño et al., 2007b), position of leaves in the trees [START_REF] Gambliel | Terpene changes due to maturation and canopy level in douglas-fir (Pseudotsuga-menziesii) flush needle oil[END_REF][START_REF] Barnola | Intraindividual variations of volatile terpene contents in Pinus caribaea needles and its possible relationship to Atta laevigata herbivory[END_REF], soil nutrient availability [START_REF] Yang | Effects of ammonium concentration on the yield, mineral content and active terpene components of Chrysanthemum coronarium L. in a hydroponic system[END_REF][START_REF] Orme Ño | Production and diversity of volatile terpenes from plants on calcareous and siliceous soils: effect of soil nutrients[END_REF][START_REF] Blanch | Drought, warming and soil fertilization effects on leaf volatile terpene concentrations in Pinus halepensis and Quercus ilex[END_REF] and water availability [START_REF] Turtola | Drought stress alters the concentration of wood terpenoids in Scots pine and Norway spruce seedlings[END_REF][START_REF] Blanch | Drought, warming and soil fertilization effects on leaf volatile terpene concentrations in Pinus halepensis and Quercus ilex[END_REF]. Moreover, according to the method of extraction used, recovering the true components of the plant in vivo still remains a matter of debate. 
Indeed, through hydrodistillation, thermal hydrolysis in acidic medium may be a source of artifacts in the essential oil composition [START_REF] Adams | Cedar wood oil -analysis and properties[END_REF].
However, the chemical analysis indicated that there are significant differences between the three populations, which were all analyzed by the same method. These differences concern both the quantitative and the qualitative composition of the terpenoids. Spathulenol and α-pinene are the dominant compounds that clearly discriminate the three stations quantitatively. Although identified in minor amounts in samples from the low and medium aridity stations, thuja-2,4(10)-diene, p-mentha-1,3,5-triene, nopinone and trans-3-pinocarvone were not detected in samples from the high aridity station. This raises the question of the role of individual terpenoid components in plant responses to aridity and the central issue of the phenotypic/genotypic diversity of the investigated populations.
Allelopathic properties of α-pinene are reported in the literature. This hydrocarbon monoterpene inhibits radicle growth in several species, enhances root solute leakage and increases the levels of malondialdehyde, proline and hydrogen peroxide, indicating lipid peroxidation and the induction of oxidative stress [START_REF] Singh | alpha-Pinene inhibits growth and induces oxidative stress in roots[END_REF]. It is likely that the high content of α-pinene found in the leaves from the driest site may influence interspecific competition for water resources. For all sites investigated, the understory diversity was low, composed mainly of Ziziphus lotus. Hence, α-pinene might play direct and indirect roles in P. atlantica responses to drought situations.
Spathulenol is an azulenic sesquiterpene alcohol that occurs in several plant essential oils [START_REF] Mévy | Composition of the volatile constituents of the aerial parts of an endemic plant of Ivory Coast, Monanthotaxis capea (E. G. & A. Camus) Verdc[END_REF][START_REF] Cavar | Chemical composition and antioxidant and antimicrobial activity of two Satureja essential oils[END_REF]. Azulenes are also known as allelochemicals [START_REF] Inderjit | Principles and Practices in Plant Ecology: Allelochemical Interactions[END_REF]. Especially their bactericidal activity has been proven as well as their function as plant growth regulator precursors [START_REF] Muir | Azulene derivatives as plant growth regulators[END_REF][START_REF] Konovalov | Natural azulenes in plants[END_REF]. Azulene is a polycyclic hydrocarbon, consisting of an unsaturated five member ring linked to an unsaturated seven member ring. This molecule absorbs red light 600 nm for the first excited state transition and UVA 330 nm light for the second excited state transition producing a dark blue color in aqueous medium [START_REF] Tetreault | Control of the photophysical properties of polyatomic molecules by substitution and solvation: the second excited singlet state of azulene[END_REF]. The high content of spathulenol found from leaves collected in the high arid station may be interpreted as a defense mechanism against deleterious effects of biotic interactions and UV-light during summer.
Our results are in accordance with several authors who reported increased terpene concentrations in plants under high temperature and water stress conditions [START_REF] Llusià | Changes in terpene content and emission in potted Mediterranean woody plants under severe drought[END_REF][START_REF] Loreto | Impact of ozone on monoterpene emissions and evidence for an isoprene-like antioxidant action of monoterpenes emitted by Quercus ilex leaves[END_REF][START_REF] Pe Ñuelas | Linking isoprene with plant thermotolerance, antioxidants and monoterpene emissions[END_REF][START_REF] Llusià | Net ecosystem exchange and whole plant isoprenoid emissions by a Mediterranean shrubland exposed to experimental climate change[END_REF]. For instance, 54 and 119% increases of total terpene contents under drought treatment were recorded from Pinus halepensis and Quercus ilex, respectively [START_REF] Blanch | Drought, warming and soil fertilization effects on leaf volatile terpene concentrations in Pinus halepensis and Quercus ilex[END_REF]. Because monoterpene biosynthesis is strictly dependent on photosynthesis [START_REF] López | Allelopathic potential of Tagetes minuta terpenes by a chemical, anatomical and phytotoxic approach[END_REF] the increase of their content along with aridity suggests an involvement of specific metabolic pathways that sustain photosynthesis in harsh environmental conditions. In our study, the high thickness of palisade parenchyma can be mentioned in favor of this assumption. On the other hand, monoterpenes act as plant chloroplast membrane stabilizers and protectors against free radicals due to their lipophily and the presence of double bonds in their molecules (Pe ñuelas [START_REF] Pe Ñuelas | Linking photorespiration, monoterpenes and thermotolerance in Quercus[END_REF][START_REF] Chen | Inhibition of monoterpene biosynthesis accelerates oxidative stress and leads to enhancement of antioxidant defenses in leaves of rubber tree (Hevea brasiliensis)[END_REF]. Hence, the increase of monoterpenes may be considered as a regulatory feedback loop that protects photosynthesis machinery from oxidative and thermal damages.
Glandular trichomes are one of the most common secretory structures that produce and store essential oil in plants [START_REF] Covello | Functional genomics and the biosynthesis of artemisinin[END_REF][START_REF] Giuliani | Insight into the structure and chemistry of glandular trichomes of Labiatae, with emphasis on subfamily Lamioideae[END_REF][START_REF] Biswas | Essential oil production: relationship with abundance of glandular trichomes in aerial surface of plants[END_REF]. The high terpenoid contents in Bechar population could be related to the high density of glandular trichomes in this population, which would be also in accordance with other results found by several authors [START_REF] Mahmoud | Cosuppression of limonene-3hydroxylase in peppermint promotes accumulation of limonene in the essential oil[END_REF][START_REF] Fridman | Metabolic, genomic, and biochemical analyses of glandular trichomes from the wild tomato species Lycopersicon hirsutum identify a key enzyme in the biosynthesis of methylketones[END_REF][START_REF] Ringer | Monoterpene metabolism. Cloning, expression, and characterization of (-)-isopiperitenol/(-)-carveol dehydrogenase of peppermint and spearmint[END_REF].
δ-Cadinene and β-copaene are two compounds found in low contents (0.5-3.8 and 1-4.9 µg g⁻¹ dw, respectively) that are, like spathulenol and α-pinene, correlated with the increasing aridity the populations are experiencing. Except for antibacterial effects [START_REF] Townsend | Antisense suppression of a (+)-delta-cadinene synthase gene in cotton prevents the induction of this defense response gene during bacterial blight infection but not its constitutive expression[END_REF][START_REF] Bakkali | Biological effects of essential oils -a review[END_REF], no information is available about specific ecological roles of β-copaene and δ-cadinene. It should be noted that they are derivatives of germacrene D [START_REF] Bülow | The role of germacrene D as a precursor in sesquiterpene biosynthesis: investigations of acid catalyzed, photochemically and thermally induced rearrangements[END_REF], which is found in high concentration in Cupressus sempervirens after long-term water stress [START_REF] Yani | The effect of a long-term waterstress on the metabolism and emission of terpenes of the foliage of Cupressus sempervirens[END_REF]. Also, the content of germacrene D in Pistacia lentiscus was shown to increase four-fold during the summer season compared to spring [START_REF] Gardeli | Essential oil composition of Pistacia lentiscus L. and Myrtus communis L.: evaluation of antioxidant capacity of methanolic extracts[END_REF].
The different terpenoids can be regarded as aridity markers characterizing the three P. atlantica populations. It is not clear whether they are constitutively synthesized or induced by the environmental conditions. Leaf morphological data indicate that the three populations differ significantly. Scanning electron microscopy of leaves of seedlings from the high aridity and low aridity provenances grown under controlled conditions reveals that the two populations keep their morphological differences with respect to trichome typology and density. Hence, it is likely that the three populations investigated are indeed genetically different. Therefore, the chemical variability observed might be genetically based as well. This should be tested in the future by subjecting clones selected from the three populations to the same drought conditions.
Fig. 1. Geographical location of the investigated P. atlantica populations. Sites: ●.
Fig. 2. Scanning electron micrographs showing epidermis and trichomes of P. atlantica seedling leaves. (A) Midrib of adaxial leaf surface, covered by elongated trichomes. Bar = 200 µm. (B) Elongated trichomes in parallel rows. Bar = 10 µm. (C) Glandular trichome. Bar = 20 µm. (D and E) Low density of glandular trichomes in Medea population (D) compared to Bechar population (E). Bar = 500 µm. (F) Tufted hairs at the adaxial leaf surface in Medea population. Bar = 50 µm.
Fig. 3. Variance analysis of monoterpene, sesquiterpene and total terpene contents found in female Pistacia atlantica ssp. atlantica leaves from low, medium and high aridity sites in Algeria. Means of n = 10 with standard errors, p < 0.05.
Fig. 4. Correlation of occurrences of terpenoid compounds (µg g⁻¹ dw) from female Pistacia atlantica ssp. atlantica leaves from low, medium and high aridity sites in Algeria; shown are only those terpenoids among which high correlation could be found.
Fig. 5. Two-dimensional PCA of Pistacia atlantica ssp. atlantica individual samples originating from low (la), medium (ma) and high (ha) aridity sites in Algeria.
Table 1. Ecological factors of the Pistacia atlantica collection sites, selected to define the aridity gradient.
Site | Mean annual precipitation (mm) | Maximal temperature M (°C) of the driest month | Drought duration in months (Bagnouls and Gaussen, 1953) | Emberger Q2 (a) | Latitude, longitude | Elevation (m)
Medea, low aridity | 393.10 | 31.00 | 4 | 15.40 | 36°11'-36°22' N, 3°00'-3°10' E | 720
Laghouat, medium aridity | 116.60 | 39.40 | 10 | 04.34 | 28°00' N, 3°00' E | 780
Bechar, high aridity | 57.70 | 40.70 | 12 | 02.36 | 31°38'-32°03' N, 1°13'-2°13' W | 790
a Emberger's pluviothermic quotient.
Table 2. Morphological data (cm) of female P. atlantica ssp. atlantica leaves from low, medium and high aridity sites in Algeria. Mean of 30 measurements per tree with standard errors.
Leaf biometry (cm) Low aridity site (Medea) Medium aridity site (Laghouat) High aridity site (Bechar) p
Leaf length 9.63 ± 0.19 a 9.17 ± 0.17 b 8.92 ± 0.18 c <0.001
Leaf width 7.61 ± 0.16 a 7.16 ± 0.14 b 6.65 ± 0.17 c <0.001
Rachis length 4.09 ± 0.10 a 3.78 ± 0.07 b 3.72 ± 0.08 b <0.001
Petiole length 2.13 ± 0.04 2.11 ± 0.05 2.05 ± 0.06 >0.05
Terminal leaflet length 3.41 ± 0.03 a 3.29 ± 0.03 b 3.14 ± 0.02 c <0.001
Terminal leaflet width 1.58 ± 0.03 a 1.49 ± 0.01 b 1.45 ± 0.02 c <0.001
Number of leaflet pairs 3.09 ± 0.07 b 3.12 ± 0.08 b 3.26 ± 0.10 a <0.05
Table 3. Anatomical data (µm) of female P. atlantica ssp. atlantica leaves from low, medium and high aridity sites in Algeria. Mean of 30 measurements per plant (3 replicates per leaf) with standard errors.
Leaf anatomy (µm) Low aridity site (Medea) Medium aridity site (Laghouat) High aridity site (Bechar) p
Abaxial cuticle 4.98 ± 0.08 b 5.99 ± 0.12 a 6.08 ± 0.13 a <0.001
Adaxial cuticle 4.32 ± 0.06 b 4.88 ± 0.16 a 4.91 ± 0.12 a <0.01
Abaxial epidermis 12.70 ± 0.18 b 12.69 ± 0.18 b 14.07 ± 0.20 a <0.001
Adaxial epidermis 13.26 ± 0.16 b 13.30 ± 0.14 b 13.45 ± 0.18 a <0.05
Palisade parenchyma 64.66 ± 1.5 c 72.77 ± 1.38 b 95.76 ± 1.42 a <0.001
Spongy parenchyma 98.67 ± 1.64 102.45 ± 1.81 106.34 ± 1.86 >0.05
Leaf thickness 198.53 ± 2.78 c 212.08 ± 2.97 b 240.61 ± 3.51 a <0.001
Table 4. Concentrations of terpenoids (µg g⁻¹ dw) found in female Pistacia atlantica ssp. atlantica leaves from low, medium and high aridity sites in Algeria. Mean of 10 extractions per site with standard errors.
Group Compounds AI Compound content in leaves (µg g⁻¹ dw): Low aridity Medium aridity High aridity p
Hydrocarbon 1 Tricyclene 914 1.2 ± 0.2 b 2.4 ± 0.4 b 8.7 ± 0.5 a <0.001
monoterpenes 2 α-Pinene 926 10.0 ± 0.4 c 24.5 ± 0.8 b 49.4 ± 1.0 a <0.001
3 Camphene 941 3.1 ± 0.5 b 5.5 ± 0.9 b 23.2 ± 1.1 a <0.001
4 Thuja-2,4(10)-diene 948 1.0 ± 0.1 a 0.6 ± 0.0 b - <0.001
5 β-Pinene 971 6.5 ± 2.3 b 18.1 ± 0.9 a 12.6 ± 0.7 ab <0.001
6 Mentha-1,3,5-triene, p- 1007 0.7 ± 0.1 a 0.1 ± 0.0 b - <0.001
7 Cymene, p- 1023 0.7 ± 0.1 b 1.9 ± 0.2 a 0.2 ± 0.0 b <0.001
8 γ-Terpinene 1058 0.6 ± 0.3 ab 1.6 ± 0.3 a 0.3 ± 0.0 b <0.001
Oxygenated 9 Sabinene hydrate, cis-(IPP vs OH) 1067 0.4 ± 0.2 b 1.6 ± 0.3 a 0.2 ± 0.0 b <0.001
monoterpenes 10 NI 1088 0.6 ± 0.1 a 0.5 ± 0.0 a 0.2 ± 0.0 b <0.001
11 Sabinene hydrate, trans-(IPP vs OH) 1098 0.6 ± 0.2 b 1.5 ± 0.2 a 0.1 ± 0.0 b <0.001
12 NI 1101 5.8 ± 1.5 3.9 ± 0.5 4.9 ± 0.4 >0.05
13 α-Campholenic aldehyde 1125 1.9 ± 0.4 2.1 ± 0.3 1.9 ± 0.3 >0.05
14 Nopinone 1133 0.3 ± 0.1 - - <0.05
15 Pinocarveol, trans- 1138 1.8 ± 0.4 ab 1.5 ± 0.1 b 3.2 ± 0.4 a <0.01
16 Verbenol, trans- 1146 6.0 ± 1.6 3.9 ± 0.6 6.1 ± 0.8 >0.05
17 3-Pinocarvone, trans- 1157 1.2 ± 0.4 ab 1.9 ± 0.4 a - <0.001
18 Pinocarvone 1161 0.8 ± 0.1 b 0.7 ± 0.1 ab 1.2 ± 0.2 a <0.05
19 Terpinen-4-ol 1177 1.3 ± 0.3 b 3.8 ± 0.4 a 1.3 ± 0.2 b <0.001
20 Myrtenal 1194 0.4 ± 0.2 b 0.6 ± 0.2 b 1.4 ± 0.2 a <0.001
21 Myrtenol 1197 1.4 ± 0.3 1.9 ± 0.4 1.5 ± 0.2 >0.05
22 Verbenone 1208 7.0 ± 1.7 3.9 ± 0.7 5.1 ± 0.9 >0.05
23 Carveol, trans 1221 0.8 ± 0.2 b 0.4 ± 0.1 ab 0.9 ± 0.1 a <0.05
24 Borneol, iso-, acetate 1285 2.6 ± 0.5 b 3.9 ± 0.4 b 13.9 ± 0.7 a <0.001
Hydrocarbon 25 δ-Elemene 1337 1.4 ± 0.6 b 14.0 ± 3.7 a 22.0 ± 1.8 a <0.001
sesquiterpenes 26 α-Cubebene 1349 0.3 ± 0.0 b 0.7 ± 0.2 b 1.5 ± 0.2 a <0.001
27 α-Copaene 1375 0.2 ± 0.0 b 0.5 ± 0.1 ab 1.0 ± 0.1 a <0.001
28 β-Bourbonene 1383 0.9 ± 0.2 1.2 ± 0.2 0.8 ± 0.2 >0.05
29 β-Cubebene 1389 0.2 ± 0.0 b 0.5 ± 0.1 ab 0.8 ± 0.1 a <0.001
30 β-Elemene 1392 0.1 ± 0.0 b 0.5 ± 0.1 ab 0.8 ± 0.2 a <0.001
31 β-Ylangene 1418 1.6 ± 0.2 b 9.0 ± 1.3 a 7.2 ± 0.9 a <0.001
32 β-Copaene 1429 0.5 ± 0.1 b 1.2 ± 0.1 b 3.8 ± 0.6 a <0.001
33 γ-Elemene 1433 0.5 ± 0.0 b 2.6 ± 0.8 b 7.4 ± 0.9 a <0.001
34 Guaia-6,9-diene 1438 0.3 ± 0.1 b 2.0 ± 0.2 a 1.9 ± 0.4 a <0.001
35 NI 1444 0.1 ± 0.0 b 0.6 ± 0.1 b 1.3 ± 0.2 a <0.001
36 NI 1453 0.2 ± 0.1 b 1.4 ± 0.3 a 2.2 ± 0.3 a <0.001
37 Caryophyllene, 9-epi- 1461 0.8 ± 0.2 b 4.0 ± 0.6 a 3.8 ± 0.3 a <0.001
38 NI 1470 0.8 ± 0.4 0.4 ± 0.0 1.1 ± 0.1 >0.05
39 Germacrene D 1482 3.0 ± 0.4 b 5.2 ± 1.0 b 29.0 ± 2.9 a <0.001
40 γ-Amorphene 1496 1.8 ± 0.6 b 15.5 ± 2.8 a 20.5 ± 2.2 a <0.001
41 α-Muurolene 1501 0.3 ± 0.0 b 3.7 ± 0.4 a 0.9 ± 0.1 b <0.001
42 γ-Cadinene 1515 0.3 ± 0.0 b 0.7 ± 0.1 b 2.0 ± 0.3 a <0.001
43 δ-Cadinene 1524 1.0 ± 0.1 b 2.0 ± 0.2 b 4.9 ± 0.6 a <0.001
Oxygenated 44 Cubebol 1518 1.1 ± 0.1 b 1.2 ± 0.1 ab 1.7 ± 0.1 a <0.01
sesquiterpenes 45 NI 1527 0.7 ± 0.4 1.4 ± 0.2 1.6 ± 0.3 >0.05
46 Elemol 1552 0.7 ± 0.1 b 2.1 ± 0.6 b 5.8 ± 0.9 a <0.001
47 NI 1557 1.0 ± 0.2 b 2.0 ± 0.6 ab 3.1 ± 0.7 a <0.05
48 NI 1568 0.4 ± 0.0 0.5 ± 0.2 0.8 ± 0.2 >0.05
49 Spathulenol 1581 23.2 ± 1.1c 72.9 ± 1.9 b 114.4 ± 2.2 a <0.001
50 NI 1586 3.5 ± 0.8 b 10.1 ± 0.6 a 3.4 ± 0.3 b <0.001
51 NI 1590 0.4 ± 0.1 b 1.2 ± 0.3 ab 2.4 ± 0.5 a <0.01
52 Salvial-4(14)-en-1-one 1595 0.8 ± 0.1 b 1.5 ± 0.2 ab 2.4 ± 0.3 a <0.001
53 NI 1609 0.6 ± 0.2 b 1.1 ± 0.2 ab 1.9 ± 0.5 a <0.05
54 NI 1615 1.4 ± 0.2 b 2.8 ± 0.3 b 5.1 ± 0.8 a <0.001
55 NI 1620 0.7 ± 0.2 1.0 ± 0.5 0.8 ± 0.1 >0.05
56 Germacrene D-4-ol 1623 0.3 ± 0.0 b 0.8 ± 0.2 b 2.1 ± 0.3 a <0.001
57 γ-Eudesmol 1634 0.2 ± 0.0 b 0.7 ± 0.1 b 1.3 ± 0.2 a <0.001
58 NI 1641 1.4 ± 0.3 b 9.5 ± 1.2 a 12.8 ± 1.3 a <0.001
59 α-Muurolol 1645 0.3 ± 0.1 b 1.1 ± 0.1 ab 1.6 ± 0.3 a <0.001
60 Cedr-8(15)-en-10-ol 1650 0.5 ± 0.1 b 1.3 ± 0.3 ab 2.7 ± 0.4 a <0.001
61 β-Eudesmol 1653 0.4 ± 0.0 b 2.0 ± 0.3 b 4.7 ± 0.5 a <0.001
62 NI 1657 2.0 ± 0.2 b 3.7 ± 0.8 b 10.2 ± 1.3 a <0.001
63 NI 1677 0.7 ± 0.3 b 2.3 ± 0.3 a 0.7 ± 0.1 a <0.001
Others 64 Hex-3-en-1-ol benzoate, (Z)- 1572 tr 0.5 ± 0.1 1.0 ± 0.1
65 Actinolide, dihydro- 1530 2.0 ± 0.3 2.8 ± 0.3 2.3 ± 0.1
NI: non-identified; AI: arithmetic index of [START_REF] Adams | Identification of Essential Oil Components by Gas Chromatography/Mass Spectrometry[END_REF] calculated with the formula of [START_REF] Van Den Dool | A generalization of the retention index system including linear temperature programmed gas-liquid partition chromatography[END_REF]; tr: trace.
Acknowledgments
The authors gratefully acknowledge F. Torre for statistical analysis, R. Zergane of Beni Slimane and people of Laghouat and Bechar forestry conservation for their help in plant collection, and A. Tonetto for Scanning Electron Micrographs. The French and Algerian Inter-university Cooperation is also gratefully acknowledged for funding this work. | 42,194 | [
"18874",
"18764",
"18890",
"171561"
] | [
"449102",
"834",
"834",
"508832",
"834",
"834"
] |
01764851 | en | [
"sdu"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01764851/file/Costa_etal_PLoSoneREVISED3_CalculationBetweennessMarineConnectivity.pdf | Andrea Costa
email: [email protected]
Anne A Petrenko
Katell Guizien
Andrea M Doglioli
On the Calculation of Betweenness Centrality in Marine Connectivity Studies Using Transfer Probabilities
Betweenness has been used in a number of marine studies to identify portions of sea that sustain the connectivity of whole marine networks. Herein we highlight the need for methodological exactness in the calculation of betweenness when graph theory is applied to marine connectivity studies based on transfer probabilities. We show the inconsistency of calculating betweenness directly from transfer probabilities and propose a new metric for the node-to-node distance that solves it. Our argumentation is illustrated by both simple theoretical examples and the analysis of a literature data set.
Introduction
In the last decade, graph theory has increasingly been used in ecology and conservation studies [START_REF] Moilanen | On the limitations of graph-theoretic connectivity in spatial ecology and conservation[END_REF] and particularly in marine connectivity studies (e.g., [START_REF] Treml | Modeling population connectivity by ocean currents, a graph theoretic approach for marine conservation[END_REF] [3] [4] [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF] [START_REF] Rossi | Hydrodynamic provinces and oceanic connectivity from a transport network help desining marine reserves[END_REF]).
Graphs are a mathematical representation of a network of entities (called nodes) linked by pairwise relationships (called edges). Graph theory is a set of mathematical results that permit to calculate different measures to identify nodes, or set of nodes, that play specific roles in a graph (e.g., [START_REF] Bondy | Graph theory with applications[END_REF]). Graph theory application to the study of marine connectivity typically consists in the representation of portions of sea as nodes. Then, the edges between these nodes represent transfer probabilities between these portions of sea.
Transfer probabilities estimate the physical dispersion of propagula [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF] [START_REF] Berline | A connectivity-based ecoregionalization of the Mediterranean Sea[END_REF] [10] [START_REF] Jonsson | How to select networks of marine protected areas for multiple species with different dispersal strategies[END_REF], nutrients or pollutants [START_REF] Doglioli | Development of a numerical model to study the dispersion of wastes coming from a marine fish farm in the Ligurian Sea (Western Mediterranean)[END_REF], particulate matter [START_REF] Mansui | Modelling the transport and accumulation of floating marine debris in the Mediterranean basin[END_REF], or other particles either passive or interacting with the environment (see [START_REF] Ghezzo | Connectivity in three European coastal lagoons[END_REF] [START_REF] Bacher | Probabilistic approach of water residence time and connectivity using Markov chains with application to tidal embayments[END_REF] and references therein). As a result, graph theory already proved valuable in the identification of hydrodynamical provinces [START_REF] Rossi | Hydrodynamic provinces and oceanic connectivity from a transport network help desining marine reserves[END_REF], genetic stepping stones [START_REF] Rozenfeld | Network analysis identifies weak and strong links in a metapopulation system[END_REF], genetic communities [START_REF] Kininmonth | Determining the community structure of the coral Seriatopora hystrix from hydrodynamic and genetic networks[END_REF], sub-populations [START_REF] Jacobi | Identification of subpopulations from connectivity matrices[END_REF], and in assessing Marine Protected Areas connectivity [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF].
In many marine connectivity studies, it is of interest to identify specific portions of sea where a relevant amount of the transfer across a graph passes through. A well-known graph theory measure is frequently used for this purpose: betweenness centrality. In the literature, high values of this measure are commonly assumed to identify nodes sustaining the connectivity of the whole network. For this reason a high value of betweenness has been used in the framework of marine connectivity to identify migration stepping stones [START_REF] Treml | Modeling population connectivity by ocean currents, a graph theoretic approach for marine conservation[END_REF], genetic gateways [START_REF] Rozenfeld | Network analysis identifies weak and strong links in a metapopulation system[END_REF], and marine protected areas ensuring a good connectivity between them [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF].
Our scope in the present letter is to highlight some errors that can occur in implementing graph theory analysis. Especially we focus on the definition of edges when one is interested in calculating the betweenness centrality and other related measures.
We also point out two papers in the literature in which this methodological inconsistency can be found: [START_REF] Kininmonth | Graph theoretic topology of the Great but small Barrier Reef world[END_REF] and [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF].
In Materials and Methods we introduce the essential graph theory concepts for our scope. In Results we present our argument on the base of the analysis of a literature data set. In the last Section we draw our conclusions.
Materials and Methods
A simple graph G is a couple of sets (V, E), where V is the set of nodes and E is the set of edges. The set V represents the collection of objects under study that are pair-wise linked by an edge a ij , with (i,j) ∈ V , representing a relation of interest between two of these objects. If a ij = a ji , ∀(i,j) ∈ V , the graph is said to be 'undirected', otherwise it is 'directed'. The second case is the one we deal with when studying marine connectivity, where the edges' weights represent the transfer probabilities between two zones of sea (e.g., [3] [4] [5] [START_REF] Rossi | Hydrodynamic provinces and oceanic connectivity from a transport network help desining marine reserves[END_REF]).
If more than one edge in each direction between two nodes is allowed, the graph is called multigraph. The number of edges between each pair of nodes (i,j) is then called multiplicity of the edge linking i and j.
The in-degree of a node k, $\deg^+(k)$, is the sum of all the edges that arrive in k: $\deg^+(k) = \sum_i a_{ik}$. The out-degree of a node k, $\deg^-(k)$, is the sum of all the edges that start from k: $\deg^-(k) = \sum_j a_{kj}$. The total degree of a node k, $\deg(k)$, is the sum of the in-degree and the out-degree of k: $\deg(k) = \deg^+(k) + \deg^-(k)$.
In a graph, there can be multiple ways (called paths) to go from a node i to a node j passing by other nodes. The weight of a path is the sum of the weights of the edges composing the path itself. In general, it is of interest to know the shortest (or fastest) path $\sigma_{ij}$ between two nodes, i.e. the one with the lowest weight. But it is even more instructive to know which nodes participate in the greatest number of shortest paths. In fact, this permits measuring the influence of a given node over the spread of information through a network. This measure is called the betweenness value of a node in the graph. The betweenness value of a node k, BC(k), is defined as the fraction of shortest paths existing in the graph, $\sigma_{ij}$, with $i \neq j$, that effectively pass through k, $\sigma_{ij}(k)$, with $i \neq j \neq k$:
$$BC(k) = \sum_{i \neq k \neq j} \frac{\sigma_{ij}(k)}{\sigma_{ij}} \qquad (1)$$
with $(i,j,k) \in V$. Note that the condition $i \neq k \neq j$ means that betweenness is not influenced by direct connections between the nodes. Betweenness is then normalized by the total number of possible connections in the graph once node k is excluded, $(N-1)(N-2)$, where N is the number of nodes in the graph, so that $0 \leq BC \leq 1$.
Although the interpretation of betweenness is seemingly straightforward, one must be careful in its calculation. In fact, betweenness interpretation is sensitive to the node-to-node metric one chooses to use as edge weight. If, as is frequently the case in marine connectivity studies, one uses transfer probabilities as edge weights, betweenness loses its original meaning. Based on additional details on their methods, given personally by the authors of [START_REF] Kininmonth | Graph theoretic topology of the Great but small Barrier Reef world[END_REF] and [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF], this was the case in those studies.
Hence, defining betweenness using Equation 1(the case of [START_REF] Kininmonth | Graph theoretic topology of the Great but small Barrier Reef world[END_REF] and [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF]) leads to an inconsistency that affects the interpretation of betweenness values.
Alternative definitions of betweenness accounting for all the paths between two nodes and not just the most probable one have been proposed to analyze graphs in which the edge weight is a probability [START_REF] Newman | A measure of betweenness centrality based on random walks[END_REF] and avoid the above inconsistency.
Herein, we propose to solve the inconsistency that arises when applying the original betweenness definition to transfer probabilities by using a new metric for the edge weights instead of modifying the betweenness definition. The new metric transforms the transfer probabilities a_ij into a distance in order to conserve the original meaning of betweenness, by ensuring that a larger transfer probability between two nodes corresponds to a smaller node-to-node distance. Hence, the shortest path between two nodes is effectively the most probable one. Therefore, high betweenness is associated with the nodes through which a high number of probable paths pass.
In the first place, in defining the new metric, we need to reverse the order of the probabilities in order to have higher values of the old metric a ij correspond to lower values of the new one. In the second place we also consider three other facts: (i) transfer probabilities a ij are commonly calculated with regards to the position of the particles only at the beginning and at the end of the advection period; (ii) the probability to go from i to j does not depend on the node the particle is coming from before arriving in i; and (iii) the calculation of the shortest paths implies the summation of a variable number of transfer probability values. Note that, as the a ij values are typically calculated on the base of the particles' positions at the beginning and at the end of a spawning period, we are dealing with paths whose values are calculated taking into account different numbers of generations. Therefore, the transfer probabilities between sites are independent from each other and should be multiplied by each other when calculating the value of a path. Nevertheless, the classical algorithms commonly used in graph theory analysis calculate the shortest paths as the summation of the edges composing them (e.g., the Dijkstra algorithm, [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] or the Brandes algorithm [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF]).
Therefore, these algorithms, if directly applied to the probabilities at play here, are incompatible with their independence.
A possible workaround could be not to use the algorithms in [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF] and to use instead the 10th algorithm proposed in [START_REF] Brandes | On variants of shortest-path betweenness centrality and their generic computation[END_REF]. Therein, the author suggests defining the betweenness of a simple graph via its interpretation as a multigraph. He then shows that the value of a path can be calculated as the product of the multiplicities of its edges. When the multiplicity of an edge is set equal to the weight of the corresponding edge in the simple graph, one can calculate the value of a path as the product of its edges' weights a_ij. However, this algorithm selects the shortest path on the basis of the number of steps (or hop count) between a pair of nodes (Breadth-First Search algorithm [START_REF] Moore | The shortest path through a maze[END_REF]). This causes the algorithm to fail to identify the shortest path in some cases. For example, in Fig 1 it would identify the path ACB (2 steps with total probability 1 × 10⁻⁸) when, instead, the most probable path is ADEB (3 steps with total probability 1 × 10⁻⁶). See Table 1 for more details.
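As a toy illustration of this failure (not taken from the original study), the situation of Fig 1 can be rebuilt with assumed per-edge probabilities chosen so that the path totals match the values quoted above (ACB = 1 × 10⁻⁸, ADEB = 1 × 10⁻⁶): a hop-count search returns ACB, whereas weighting edges by log(1/a_ij) returns the most probable path ADEB.

```python
# Toy sketch; the individual edge probabilities are assumptions reproducing the quoted path totals.
import math
import networkx as nx

G = nx.DiGraph()
for u, v, p in [("A", "C", 1e-4), ("C", "B", 1e-4),                     # ACB: (1e-4)^2 = 1e-8
                ("A", "D", 1e-2), ("D", "E", 1e-2), ("E", "B", 1e-2)]:  # ADEB: (1e-2)^3 = 1e-6
    G.add_edge(u, v, prob=p, dist=math.log(1.0 / p))

print(nx.shortest_path(G, "A", "B"))                 # fewest hops  -> ['A', 'C', 'B']
print(nx.shortest_path(G, "A", "B", weight="dist"))  # most probable -> ['A', 'D', 'E', 'B']
```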
Fig 1. Example of a graph in which the 10th algorithm in [START_REF] Brandes | On variants of shortest-path betweenness centrality and their generic computation[END_REF] would fail to identify the shortest path between A and B (ADEB) when using a_ij as the metric.

However, by changing the metric used in the algorithms, it is possible to calculate the shortest path in a meaningful way with the algorithms in [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF]. In particular, we propose to define the weight of an edge between two nodes i and j as:
$$d_{ij} = \log\left(\frac{1}{a_{ij}}\right) \qquad (2)$$
This definition is the composition of two functions: $h(x) = 1/x$ and $f(x) = \log(x)$.
The use of h(x) allows one to reverse the ordering of the metric in order to make the most probable path the shortest. The use of f (x), thanks to the basic properties of logarithms, allows the use of classical shortest-path finding algorithms while dealing correctly with the independence of the connectivity values. In fact, we are de facto calculating the value of a path as the product of the values of its edges.
It is worth mentioning that the values $d_{ij} = \infty$, coming from the values $a_{ij} = 0$, do not influence the calculation of betweenness values via the Dijkstra and Brandes algorithms. Note that $d_{ij}$ is additive: $d_{il} + d_{lj} = \log\frac{1}{a_{il}\,a_{lj}} = \log\frac{1}{a_{ij}} = d_{ij}$ for any $(i,l,j) \in V$, thus being suitable for use in conjunction with the algorithms proposed by [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF]. Also, note that both a_ij and d_ij are dimensionless. Equation 2 is the only metric that allows the algorithms in [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF] to be applied consistently to transfer probabilities. Other metrics would make the weight decrease when the probability increases: for example, $1 - a_{ij}$, $1/a_{ij}$, $-a_{ij}$, or $\log(1 - a_{ij})$. However, the first three do not permit accounting for the independence of the transfer probabilities along a path. Furthermore, $\log(1 - a_{ij})$ takes negative values since $0 \leq a_{ij} \leq 1$. Therefore, it cannot be used to calculate shortest paths, because the algorithms in [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF] would either endlessly go through a cycle (see Fig 2a and Table 2) or choose the path with more edges (see Fig 2b and Table 2), hence arbitrarily lowering the value of the paths between two nodes.
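A minimal sketch of the recommended procedure (not the authors' implementation) is given below: a transfer-probability matrix is turned into a directed graph whose edges carry d_ij = log(1/a_ij) as in Equation 2, and betweenness is then computed on these distances (networkx uses Brandes' algorithm, with Dijkstra shortest paths when a weight is supplied). The matrix values are hypothetical; omitting the edges with a_ij = 0 is equivalent to setting d_ij = ∞.

```python
# Hedged sketch: apply Equation 2 to a (hypothetical) transfer-probability matrix
# and compute betweenness on the resulting distance-weighted directed graph.
import numpy as np
import networkx as nx

A = np.array([[0.0, 0.2, 0.0],
              [0.1, 0.0, 0.3],
              [0.0, 0.4, 0.0]])          # a_ij: rows = origin nodes, columns = settlement nodes

G = nx.DiGraph()
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        if i != j and A[i, j] > 0.0:     # a_ij = 0 -> d_ij = infinity: simply leave the edge out
            G.add_edge(i, j, dist=np.log(1.0 / A[i, j]))

bc = nx.betweenness_centrality(G, weight="dist", normalized=True)
print(bc)   # high values flag nodes crossed by many *probable* shortest paths
```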
Results
The consequences of using the raw transfer probability (a_ij) rather than the distance we propose (d_ij) are potentially radical. To show this, we used 20 connectivity matrices calculated for [START_REF] Guizien | Vulnerability of marine benthic metapopulations: implications of spatially structured connectivity for conservation practice[END_REF] from Lagrangian simulations. The proportions of particles coming from an origin node and arriving at a settlement node after 3, 4 and 5 weeks were weight-averaged to compute a connectivity matrix for larvae with a competency period extending from 3 to 5 weeks. In particular, matrix #1 was obtained after a period of reversed (eastward) circulation. Indeed, this case of circulation is less frequent than the westward circulation [START_REF] Petrenko | Barotropic eastward currents in the western Gulf of Lion, north-western Mediterranean Sea, during stratified conditions[END_REF]. Matrices #14, #10 and #13 correspond to a circulation pattern with an enhanced recirculation in the center of the gulf. Finally, matrices #2, #3, #5, #6, #8, #9, #14, #16, #18, #19 and #20 correspond to a rather mixed circulation with no clear pattern.

Furthermore, a positive correlation between the degree of a node and its betweenness is expected (e.g., [START_REF] Valente | How correlated are network centrality measures?[END_REF] and [START_REF] Lee | Correlations among centrality measures in complex networks[END_REF]). However, we find that the betweenness values calculated on the 20 connectivity matrices containing a_ij have an average correlation coefficient of -0.42 with the total degree, -0.42 with the in-degree, and -0.39 with the out-degree. Instead, betweenness calculated with the metric of Equation 2 has an average correlation coefficient of 0.48 with the total degree, 0.45 with the in-degree, and a non-significant correlation with the out-degree (p-value > 0.05).

As an example, in Fig 3 we show the representation of the graph corresponding to matrix #7. The arrows starting from a node i and ending in a node j represent the direction of the element a_ij (in Fig 3a) or d_ij (in Fig 3b). The arrows' color code represents the magnitude of the edges' weights. The nodes' color code indicates the betweenness values calculated using the metric a_ij (in Fig 3a) or d_ij (in Fig 3b). In Fig 3a the edges corresponding to the lower 5% of the weights a_ij are represented. These are the larval transfers that, though improbable, are the most influential in determining high betweenness values when using a_ij as the metric. In Fig 3b the edges corresponding to the lower 5% of the weights d_ij are represented. These are the most probable larval transfers, which, correctly, are the most influential in determining high betweenness values when using d_ij as the metric. While in Fig 3a the nodes with the highest betweenness are nodes 31 (0.26), 27 (0.25) and 2 (0.21), in Fig 3b the nodes with the highest betweenness are nodes 21 (0.33), 20 (0.03) and 29 (0.03).

As we show in Fig 4, the betweenness values of the 32 nodes calculated using the two node-to-node distances a_ij and log(1/a_ij) are drastically different from each other. Moreover, in 10 out of 20 connectivity matrices, the correlation between the node rankings based on betweenness values with the two metrics was not significant. In the 10 cases where it was (p-value < 0.05), the correlation coefficient was lower than 0.6 (data not shown). Such partial correlation is not unexpected, as the betweenness of a node with a lot of connections could be similar when calculated with a_ij or d_ij if among these connections there are both very improbable and highly probable ones, as for node 21 in the present test case. Furthermore, it is noticeable that if one uses the a_ij values (Fig 4a), the betweenness values are much more variable than the ones obtained using d_ij (Fig 4b). This is because, in the first case, the results depend on the most improbable connections, which, in the ocean, are likely to be numerous and unsteady.
Conclusion
We highlighted a methodological inconsistency in the calculation of betweenness when graph theory is applied to marine transfer probabilities. Indeed, the inconsistency comes from the need to reverse the probability when calculating shortest paths. If this is not done, one considers the most improbable paths as the most probable ones. We showed the drastic consequences of this methodological error on the analysis of a published data set of connectivity matrices for the Gulf of Lion [START_REF] Guizien | Vulnerability of marine benthic metapopulations: implications of spatially structured connectivity for conservation practice[END_REF].
On the basis of our study, it is possible that the results in [START_REF] Kininmonth | Graph theoretic topology of the Great but small Barrier Reef world[END_REF] and [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF] might also be affected. A re-analysis of [START_REF] Kininmonth | Graph theoretic topology of the Great but small Barrier Reef world[END_REF] would not affect the conclusions drawn by the authors about the small-world characteristics of the Great Barrier Reef, as these are purely topological characteristics of a network. Regarding [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF], according to Marco Andrello (personal communication), the particular topology of the network under study forces most of the paths (both probable and improbable) to follow the Mediterranean large-scale steady circulation (e.g., [START_REF] Pinardi | The physical and ecological structure and variability of shelf areas in the Mediterranean Sea[END_REF]). As a consequence, sites along the prevalent circulation pathways have high betweenness when using either a_ij or d_ij. However, the betweenness values of sites influenced by smaller-scale circulation will vary significantly according to the way of calculating betweenness.
To solve the highlighted inconsistency, we proposed the use of a node-to-node metric that provides a meaningful way to calculate shortest paths and, as a consequence, betweenness, when relying on transfer probabilities issued from Lagrangian simulations and on the algorithms proposed in [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF]. The new metric permits reversing the probability, calculating the value of a path as the product of its edges, and accounting for the independence of the transfer probabilities. Moreover, this metric is not limited to the calculation of betweenness alone but is also valid for the calculation of every graph theory measure related to the concept of shortest paths: for example, shortest cycles, closeness centrality, global and local efficiency, and average path length [START_REF] Costa | Tuning the interpretation of graph theory measures in analyzing marine larval connectivity[END_REF].
Table 1. Paths and respective probabilities, weights and hop counts for the graph in Fig 1: the most probable path, ADEB, has probability (1 × 10⁻²)³ = 1 × 10⁻⁶ over 3 steps, whereas ACB has probability 1 × 10⁻⁸ over 2 steps.
Fig 3. Representation of matrix #7 from [21]; the right-side colorbars indicate the metric values. a) Results obtained by using a_ij as edge weight, b) results obtained by using d_ij as edge weight. In both a) and b) the lowest 5% of edge weights are represented. Note the change in the colorbars' ranges.
Table 2. Paths and respective probabilities and weights for the networks in Fig 2.

Path | Probability | Weight using log(1 − a_ij)
Fig 2a: ADEDE...DEB | → 0 | → −∞
Fig 2a: ACFB | (1 × 10⁻³)³ = 1 × 10⁻⁹ | −3 × 10⁻³
Fig 2b: ADEFB | (1 × 10⁻³)⁴ = 1 × 10⁻¹² | −4 × 10⁻³
Fig 2b: ACB | (1 × 10⁻³)² = 1 × 10⁻⁶ | −2 × 10⁻³
Acknowledgments
The authors thank Dr. S.J. Kininmonth and Dr. M. Andrello for kindly providing the code they used for the betweenness calculation in their studies. The first author especially thanks Dr. R. Puzis for helpful conversations. Andrea Costa was financed by a MENRT Ph.D. grant. The research leading to these results has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under Grant Agreement No. 287844 for the project 'Towards COast to COast NETworks of marine protected areas (from the shore to the high and deep sea), coupled with sea-based wind energy potential' (COCONET). The project leading to this publication has received funding from European FEDER Fund under project 1166-39417. | 25,570 | [
"774311",
"18870",
"14215",
"20187"
] | [
"191652",
"191652",
"542001",
"191652"
] |
01764854 | en | [
"sdv",
"sde"
] | 2024/03/05 22:32:13 | 2017 | https://amu.hal.science/hal-01764854/file/Favre_et_al_2017.pdf | Laurie Favre
Annick Ortalo-Magne
Stéphane Greff
Thierry Pérez
Olivier P Thomas
Jean-Charles Martin
Gérald Culioli
Discrimination of Four Marine Biofilm-Forming Bacteria by LC-MS Metabolomics and Influence of Culture Parameters
Keywords: marine bacteria, biofilms, metabolomics, liquid chromatography-mass spectrometry, MS/MS networking, ornithine lipids, polyamines
Most marine bacteria can form biofilms, and they are the main components of biofilms observed on marine surfaces. Biofilms constitute a widespread life strategy, as growing in such structures offers many important biological benefits. The molecular compounds expressed in biofilms and, more generally, the metabolomes of marine bacteria remain poorly studied. In this context, a nontargeted LC-MS metabolomics approach of marine biofilm-forming bacterial strains was developed. Four marine bacteria, Persicivirga (Nonlabens) mediterranea TC4 and TC7, Pseudoalteromonas lipolytica TC8, and Shewanella sp. TC11, were used as model organisms. The main objective was to search for some strainspecific bacterial metabolites and to determine how culture parameters (culture medium, growth phase, and mode of culture) may affect the cellular metabolism of each strain and thus the global interstrain metabolic discrimination. LC-MS profiling and statistical partial least-squares discriminant analyses showed that the four strains could be differentiated at the species level whatever the medium, the growth phase, or the mode of culture (planktonic vs biofilm). A MS/MS molecular network was subsequently built and allowed the identification of putative bacterial biomarkers. TC8 was discriminated by a series of ornithine lipids, while the P. mediterranea strains produced hydroxylated ornithine and glycine lipids. Among the P. mediterranea strains, TC7 extracts were distinguished by the occurrence of diamine derivatives, such as putrescine amides.
■ INTRODUCTION
All biotic or abiotic surfaces immersed in the marine environment are subjected to colonization pressure by a great diversity of micro-and macroorganisms (e.g., bacteria, diatoms, micro-and macroalgae, invertebrate larvae). This so-called "marine biofouling" generates serious economic issues for endusers of the marine environment. Biofouling drastically alters boat hulls, pipelines, aquaculture, and port structures, [START_REF] Yebra | Antifouling technology-Past, present and future steps towards efficient and environmentally friendly antifouling coatings[END_REF] thus affecting fisheries and the maritime industry by reducing vessel efficiency and increasing maintenance costs. [START_REF] Schultz | Economic impact of biofouling on a naval surface ship[END_REF] Among fouling organisms, bacteria are well known for their significant pioneer role in the process of colonization. [START_REF] Railkin | Marine Biofouling: Colonization Processes and Defenses[END_REF] They are commonly considered as the first colonizers of immersed surfaces. They organize themselves in communities called biofilms, forming complex structures of cells embedded in an exopolymeric matrix. [START_REF] Stoodley | Biofilms as complex differentiated communities[END_REF] Thousands of bacterial strains are present in marine biofilms, and bacterial cell concentration is higher than in planktonic samples isolated from the same environment. Such an organization confers a special functioning to the prokaryotic community: [START_REF] Flemming | Biofilms: an emergent form of bacterial life[END_REF] (i) it provides a better resistance to exogenous stresses, (ii) it allows nutrients to accumulate at the surface, and (iii) it can constitute a protective system to predation. [START_REF] Matz | Marine biofilm bacteria evade eukaryotic predation by targeted chemical defense[END_REF] Moreover, the composition of the community and its biochemical production have been shown to impact the settlement of other organisms and thus the maturation of the biofouling. [START_REF] Lau | Roles of bacterial community composition in biofilms as a mediator for larval settlement of three marine invertebrates[END_REF][START_REF] Dobretsov | Facilitation and inhibition of larval attachment of the bryozoan Bugula neritina in association with monospecies and multi-species biofilms[END_REF] From a chemical point of view, marine bacteria are known to produce a wide array of specialized metabolites exhibiting various biological activities. [START_REF] Blunt | Marine natural products[END_REF] Among them, a vast number of compounds serve as protectors in highly competitive environments, and others have specific roles in physiology, communication, or constitute adaptive responses to environmental changes. [START_REF] Dang | Microbial surface colonization and biofilm development in marine environments[END_REF] Therefore, obtaining broad information on the metabolic status of bacterial strains isolated from marine biofilms and correlating it with external parameters is of high interest. Such knowledge constitutes a prerequisite for further studies on the overall understanding of these complex ecological systems.
With the recent developments of metabolomics, it is now possible to obtain a snapshot view, as complete and accurate as possible, of a large set of metabolites (i.e., small organic molecules with M w < 1500 Da) in a biological sample reflecting the metabolic state of the cells as a result of the specificity of their genetic background and an environmental context. Nuclear magnetic resonance spectroscopy or hyphenated techniques such as liquid chromatography (LC) or gas chromatography (GC) coupled to mass spectrometry are commonly used as analytical tools for metabolomics studies. Liquid chromatography-mass spectrometry (LC-MS) has the advantage to analyze a large pool of metabolites with high sensitivity and resolution, even without derivatization. [START_REF] Zhou | LC-MS-based metabolomics[END_REF] In comparative experiments, metabolomics applied to bacteria allows the identification of biomarkers able to differentiate strains. To date, a limited number of metabolomics studies have focused on marine bacteria, and only few of them are related to the effects of physiological and culture parameters on bacterial metabolism. [START_REF] Romano | Exo-metabolome of Pseudovibrio sp. FO-BEG1 analyzed by ultra-high resolution mass spectrometry and the effect of phosphate limitation[END_REF][START_REF] Zech | Growth phasedependent global protein and metabolite profiles of Phaeobacter gallaeciensis strain DSM 17395, a member of the marine Roseobacterclade[END_REF][START_REF] Takahashi | Metabolomics approach for determining growth-specific metabolites based on Fourier transform ion cyclotron resonance mass spectrometry[END_REF][START_REF] Brito-Echeverría | Response to adverse conditions in two strains of the extremely halophilic species Salinibacter ruber[END_REF] With the main objectives to search for some strain-specific bacterial metabolites and to assess the influence of culture parameters on the strain metabolism, this study intended: (i) to evaluate the LC-MS-based discrimination between the metabolome of four marine biofilm-forming bacterial strains depending on different extraction solvents and culture conditions and (ii) to putatively annotate the main discriminating compounds (Figure 1).
The four marine strains studied herein are all Gram-negative bacteria isolated from natural biofilms: Persicivirga (Nonlabens) mediterranea TC4 and TC7 belong to the phylum Bacteroidetes, while Pseudoalteromonas lipolytica TC8 and Shewanella sp. TC11 are γ-proteobacteria. They were selected on the basis of their biofilm-forming capability when cultivated in vitro and their ease for growing. [START_REF] Brian-Jaisson | Identification of bacterial strains isolated from the Mediterranean sea exhibiting different abilities of biofilm formation[END_REF] The two first strains (TC4 and TC7) were specifically chosen to evaluate the discriminative potential of our metabolomics approach as they belong to the same species. Because of the use of high-salt culture media when working on marine bacteria and to obtain an efficient extraction of intracellular metabolites, liquid-liquid extraction with medium polarity agents was specifically selected. For the analytical conditions, C18 reversed-phase HPLC columns are widely used for LC-MS profiling. [START_REF] Kuehnbaum | New advances in separation science for metabolomics: Resolving chemical diversity in a post-genomic era[END_REF] Such separation process provides satisfactory retention of medium to low polar analytes but does not allow a proper retention of more polar compounds. In the present study, analyses were performed on a phenyl-hexyl stationary phase to detect a large array of bacterial metabolites. The recently developed core-shell stationary phase was applied here for improved efficiency. [START_REF] Gritti | Performance of columns packed with the new shell Kinetex-C 18 particles in gradient elution chromatography[END_REF] For the MS detection, even if high-resolution mass spectrometry (HRMS) is mainly used in metabolomics, a low-resolution mass spectrometer (LRMS) was first chosen to assess the potential of the metabolomic approach to discriminate between the bacteria. A cross-platform comparison including HRMS was subsequently undertaken to assess the robustness of the method. Finally, HRMS and MS/MS data were used for the metabolite annotation. The resulting data were analyzed by multivariate statistical methods, including principal component analysis (PCA) and supervised partial least-squares discriminate analysis (PLS-DA). Unsupervised PCA models were first used to evaluate the divide between bacterial strains, while supervised PLS-DA models allowed us to increase the separation between sample classes and to extract information on discriminating metabolites.
■ EXPERIMENTAL SECTION
Reagents
Ethyl acetate (EtOAc), methanol (MeOH), and dichloromethane (DCM) used for the extraction procedures were purchased from VWR (Fontenay-sous-Bois, France). LC-MS analyses were performed using LC-MS-grade acetonitrile (ACN) and MeOH (VWR). Milli-Q water was generated by the Millipore ultrapure water system (Waters-Millipore, Milford, MA). Formic acid of mass spectrometry grade (99%) was obtained from Sigma-Aldrich (St. Quentin-Fallavier, France).
Bacterial Strains, Culture Conditions, and Metabolite Extraction
Persicivirga (Nonlabens) mediterranea TC4 and TC7 (TC for Toulon Collection), Pseudoalteromonas lipolytica TC8, and Shewanella sp. TC11 strains were isolated from marine biofilms harvested on artificial surfaces immersed in the Mediterranean Sea (Bay of Toulon, France, 43°06′23″ N, 5°57′17″ E). [START_REF] Brian-Jaisson | Identification of bacterial strains isolated from the Mediterranean sea exhibiting different abilities of biofilm formation[END_REF] All strains were stored at -80 °C in 50% glycerol medium until use and were grown in Vaäẗanen nine salt solution (VNSS) at 20 °C on a rotator/shaker (120 rpm) to obtain synchronized bacteria in postexponential phase. A cell suspension was used as starting inoculum to prepare planktonic and sessile cultures. Depending on the experiment, these cultures were performed in two different nutrient media: VNSS or marine broth (MB) (BD, Franklin Lakes, NJ), always at the same temperature of 20 °C. In the case of planktonic cultures, precultured bacteria (10 mL) were suspended in culture medium (50 mL) at 0.1 absorbance unit (OD 600 ) and placed in 250 mL Erlenmeyer flasks. Strains were grown in a rotary shaker (120 rpm). Medium turbidity was measured at 600 nm (Genesys 20 spectrophotometer, Thermo Fisher Scientific, Waltham, MA) every hour for the determination of growth curves before metabolite extraction. Cultures were then extracted according to the OD 600 value correlated to the growth curve. For sessile conditions, precultured planktonic cells were suspended in culture medium (10 mL) at 0.1 absorbance unit (OD 600 ) in Petri dishes. After 24 or 48 h of incubation, the culture medium was removed and biofilms were physically recovered by scraping. The resulting mixture was then extracted.
Metabolite extractions were performed with EtOAc, cold MeOH, or a mixture of cold MeOH/DCM (1:1 v/v). 100 mL of solvent was added to the bacterial culture. The resulting mixture was shaken for 1 min and then subjected to ultrasounds for 30 min at 20 °C. For samples extracted with EtOAc, the organic phase was recovered and concentrated to dryness under reduced pressure. Samples extracted with MeOH or MeOH/ DCM were dried in vacuo. Dried extracts were then dissolved in MeOH at a concentration of 15 mg/mL. Samples were transferred to 2 mL HPLC vials and stored at -80 °C until analysis.
For all experiments, bacterial cultures, extraction, and sample preparation were carried out by the same operator.
Metabolic Fingerprinting by LC-MS
LC-ESI-IT-MS Analyses. The bacterial extracts were analyzed on an Elite LaChrom (VWR-Hitachi, Fontenay-sous-Bois, France) chromatographic system coupled to an ion trap mass spectrometer (Esquire 6000, Bruker Daltonics, Wissembourg, France). Chromatographic separation was achieved on an analytical core-shell reversed-phase column (150 × 3 mm, 2.6 μm, Kinetex Phenyl-Hexyl, Phenomenex, Le Pecq, France) equipped with a guard cartridge (4 × 3 mm, SecurityGuard Ultra Phenomenex) and maintained at 30 °C. The injected sample volume was 5 μL. The mobile phase consisted of water (A) and ACN (B) containing both 0.1% of formic acid. The flow rate was 0.5 mL/min. The elution gradient started at 20% B during 5 min, ascended to 100% B in 20 min with a final isocratic step for 10 min; and then returned to 20% B in 0.1 min and maintained 9.9 min. The electrospray interface (ESI) parameters were set as follows: nebulizing gas (N 2 ) pressure at 40 psi, drying gas (N 2 ) flow at 8 L/min, drying temperature at 350 °C, and capillary voltage at 4000 V. Mass spectra were acquired in the full scan range m/z 50 to 1200 in positive mode as this mode provides a higher number of metabolite features after filtering and also a better discrimination between clusters in the multivariate statistics. Data were handled with Data Analysis (version 4.3, Bruker Daltonics).
UPLC-ESI-QToF-MS Analyses. The UPLC-MS instrumentation consisted of a Dionex Ultimate 3000 Rapid Separation (Thermo Fisher Scientific) chromatographic system coupled to a QToF Impact II mass spectrometer (Bruker Daltonics). The analyses were performed using an analytical core-shell reversed-phase column (150 × 2.1 mm, 1.7 μm, Kinetex Phenyl-Hexyl with a SecurityGuard cartridge, Phenomenex) with a column temperature of 40 °C and a flow rate of 0.5 mL/min. The injection volume was 5 μL. Mobile phases were water (A) and ACN (B) containing each 0.1% (v/v) of formic acid. The elution gradient (A:B, v/v) was as follows: 80:20 from 0 to 1 min, 0:100 in 7 min and kept 4 min, and then 80:20 at 11.5 min and kept 2 min. The capillary voltage was set at 4500 V (positive mode), and the nebulizing parameters were set as follows: nebulizing gas (N 2 ) pressure at 0.4 bar, drying gas (N 2 ) flow at 4 L/min, and drying temperature at 180 °C. Mass spectra were recorded from m/z 50 to 1200 at a mass resolving power of 25 000 full width at half-maximum (fwhm, m/z = 200) and a frequency of 2 Hz. Tandem mass spectrometry analyses were performed thanks to a collisioninduced dissociation (CID) with a collision energy of 25 eV. A solution of formate/acetate forming clusters was automatically injected before each sample for internal mass calibration, and the mass spectrometer was calibrated with the same solution before each sequence of samples. Data handling was done using Data Analysis (version 4.3).
Quality Control. For each sequence, a pool sample was prepared by combining 100 μL of each bacterial extract. The pool sample was divided into several HPLC vials that were used as quality-control samples (QCs). Samples of each condition were randomly injected to avoid any possible time-dependent changes in LC-MS chromatographic fingerprints. To ensure analytical repeatability, the QCs were injected at the beginning, at the end, and every four samples within each sequence run. Cell-free control samples (media blanks) were prepared in the same way as cultures with cells, and they were randomly injected within the sequence. These blanks allowed the subsequent subtraction of contaminants or components coming from the growth media. Moreover, to assess sample carry-over of the analytical process, three solvent blanks were injected for each set of experiments before the first QC and after the last QC.
Data Preprocessing and Filtering. LC-MS raw data were converted into netCDF files with a script developed within the Data Analysis software and preprocessed with the XCMS software (version 1.38.0) under R 3.1.0 environment. Peak picking was performed with the "matchedFilter" method for HPLC-IT-MS data and "centwave" method for UPLC-QToF-MS data. The other XCMS parameters were as follows: "snthresh" = 5, retention time correction with the obiwarp method ("profstep" = 0.1), peak grouping with "bw" = 5 for ion trap data, "bw" = 2 for QToF data and "mzwidth" = 0.5 for ion trap data, and "mzwidth" = 0.015 for QToF data, gap filling with default parameters. [START_REF] Patti | Meta-analysis of untargeted metabolomic data from multiple profiling experiments[END_REF] To ensure data quality and remove redundant signals, three successive filtering steps were applied to preprocessed data using an in-house script on R. The first was based on the signal/noise (S/N) ratio to remove signals observed in medium blanks (S/N set at 10 for features matching between samples and medium blanks). The second allowed suppression of signals based on the value of the coefficient of variation (CV) of the intensity of the variables in the QCs (cutoff set at 20%). A third filtering step was applied using the coefficient of the autocorrelation (with a cutoff set at 80%) between variables with a same retention time in the extract samples.
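The preprocessing itself relied on XCMS and an in-house R script, which are not reproduced here; the pandas sketch below only illustrates the logic of the three filtering steps (blank subtraction with an S/N threshold of 10, a 20% CV cutoff in the QCs, and an 80% autocorrelation cutoff among co-eluting features) on a made-up feature table with hypothetical column names.

```python
import pandas as pd

# Made-up XCMS-style feature table: one row per m/z feature, with a retention
# time and intensities in a medium blank, three pooled-QC injections and samples.
df = pd.DataFrame({
    "rt":    [120.5, 120.7, 240.2, 360.8],
    "blank": [900.0, 10.0, 5.0, 2.0],
    "qc_1":  [1000.0, 500.0, 800.0, 400.0],
    "qc_2":  [1050.0, 520.0, 790.0, 650.0],
    "qc_3":  [980.0, 480.0, 810.0, 200.0],
    "s_1":   [1100.0, 600.0, 900.0, 450.0],
    "s_2":   [1200.0, 650.0, 950.0, 500.0],
    "s_3":   [1000.0, 640.0, 930.0, 700.0],
    "s_4":   [1150.0, 610.0, 910.0, 300.0],
}, index=["f1", "f2", "f3", "f4"])
qc = df[["qc_1", "qc_2", "qc_3"]]
smp = df[["s_1", "s_2", "s_3", "s_4"]]

# Step 1 (blank filter): keep a feature only if its mean intensity in the
# extracts is at least 10 times its intensity in the medium blank (S/N of 10).
keep = smp.mean(axis=1) >= 10 * df["blank"]

# Step 2 (repeatability filter): CV of the intensity in the QCs at most 20%.
keep &= qc.std(axis=1, ddof=1) / qc.mean(axis=1) <= 0.20

# Step 3 (redundancy filter): among features sharing a retention time (crudely
# grouped in 0.5-unit RT bins here), drop those correlated above 80% across samples.
df = df[keep].copy()
redundant = set()
for _, grp in df.groupby((df["rt"] / 0.5).round()):
    feats = list(grp.index)
    corr = smp.loc[feats].T.corr()
    for i, fi in enumerate(feats):
        for fj in feats[i + 1:]:
            if fj not in redundant and corr.loc[fi, fj] > 0.80:
                redundant.add(fj)  # keep the first feature, drop the duplicate
print(df.drop(index=list(redundant)).index.tolist())  # surviving features
```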
MS/MS Networking. The molecular network was generated on the Internet platform GNPS (http://gnps.ucsd.edu) from MS/MS spectra. Raw data were converted into .mzXML format with DataAnalysis. Data were filtered by removing MS/MS peaks within ±17 Da of the m/z of the precursor ions. Only the top 6 peaks were retained in every 50 Da window. Data were clustered using MS-Cluster with a tolerance of 1 Da for precursor ions and of 0.5 Da for MS/MS fragment ions to create consensus spectra. Consensus spectra built from fewer than two spectra were eliminated. The resulting spectra were compared with those of the GNPS spectral library. The molecular network was then generated and previewed directly on GNPS online. Data were imported and processed offline with Cytoscape (version 3.4.0). Consensus MS/MS spectra were represented as nodes, and two nodes were connected by an edge when their spectral similarity (cosine score, CS) was above 0.65 and at least four common ions were detected. The thickness of the connections was proportional to the CS.
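The scoring itself is performed by GNPS, whose fragment matching (modified cosine, precursor-mass shifts) is more involved than the simplified sketch below; the code only illustrates the edge criterion quoted above (cosine score above 0.65 and at least four shared ions, with a 0.5 Da fragment tolerance) on two invented spectra.

```python
import numpy as np

def cosine_score(spec_a, spec_b, frag_tol=0.5):
    """Greedy cosine similarity between two centroided MS/MS spectra given as
    lists of (m/z, intensity) pairs; fragment ions are matched within
    +/- frag_tol Da. Returns the score and the number of matched ion pairs."""
    pairs = []
    for i, (mz_a, i_a) in enumerate(spec_a):
        for j, (mz_b, i_b) in enumerate(spec_b):
            if abs(mz_a - mz_b) <= frag_tol:
                pairs.append((i_a * i_b, i, j))
    pairs.sort(reverse=True)                     # best intensity products first
    used_a, used_b, dot, n_match = set(), set(), 0.0, 0
    for prod, i, j in pairs:                     # each peak used at most once
        if i not in used_a and j not in used_b:
            used_a.add(i)
            used_b.add(j)
            dot += prod
            n_match += 1
    norm = np.sqrt(sum(x[1] ** 2 for x in spec_a) * sum(x[1] ** 2 for x in spec_b))
    return (dot / norm if norm else 0.0), n_match

# Two invented consensus spectra sharing several fragment ions (m/z, intensity).
s1 = [(70.07, 40), (115.09, 100), (133.10, 55), (141.07, 20), (159.08, 30)]
s2 = [(70.06, 35), (115.08, 90), (133.10, 60), (159.09, 25), (300.20, 10)]

cs, shared = cosine_score(s1, s2)
# Edge criterion used for the molecular network: CS > 0.65 and >= 4 common ions.
if cs > 0.65 and shared >= 4:
    print(f"connect nodes (CS = {cs:.2f}, shared ions = {shared})")
```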
Annotation of Biomarkers. Variables of importance were identified from the multivariate statistical analyses (see the Statistical Analyses section). They were then subjected to annotation by searching the most probable molecular formula with the "smartformula" package of DataAnalysis and by analyzing their accurate masses and their fragmentation patterns in comparison with the literature data. Other data available online in KEGG (www.genome.jp/kegg), PubChem (https://pubchem.ncbi.nlm.nih.gov), ChemSpider (www. chemspider.com), Lipid Maps (http://www.lipidmaps.org), Metlin (https://metlin.scripps.edu/), and GNPS (http:// gnps.ucsd.edu) were also used for complementary information.
Statistical Analyses. Simca 13.0.3 software (Umetrics, Umea, Sweden) was used for all multivariate data analyses and modeling. Data were log10-transformed and mean-centered. Models were built on principal component analysis (PCA) or on partial least-squares discriminant analysis (PLS-DA). PLS-DA allowed the determination of discriminating metabolites using the variable importance on projection (VIP). The VIP score value indicates the contribution of a variable to the discrimination between all of the classes of samples. Mathematically, these scores are calculated for each variable as a weighted sum of squares of PLS weights. The mean VIP value is one, and usually VIP values over one are considered as significant. A high score is in agreement with a strong discriminatory ability and thus constitutes a criterion for the selection of biomarkers. All of the models evaluated were tested for over fitting with methods of permutation tests and cross-validation analysis of variance (CV-ANOVA). The descriptive performance of the models was determined by R 2 X (cumulative) (perfect model: R 2 X (cum) = 1) and R 2 Y (cumulative) (perfect model: R 2 Y (cum) = 1) values, while their prediction performance was measured by Q 2 (cumulative) (perfect model: Q 2 (cum) = 1), p (CV-ANOVA) (perfect model: p = 0) values, and a permutation test (n = 150). The permuted model should not be able to predict classes: R 2 and Q 2 values at the Y-axis intercept must be lower than those of Q 2 and the R 2 of the nonpermuted model. Data Visualization. The heatmap representation was obtained with the PermutMatrix software. [START_REF] Caraux | PermutMatrix: a graphical environment to arrange gene expression profiles in optimal linear order[END_REF] Dissimilarity was calculated with the squared Pearson correlation distance, while the Ward's minimum variance method was used to obtain the hierarchical clustering.
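The models themselves were built in SIMCA; as an independent illustration of the same workflow (log10 transformation, mean-centring, PLS-DA, VIP-based variable selection), the sketch below uses scikit-learn's PLSRegression on randomly generated data together with the commonly used VIP formula. The sample sizes, class labels, and the VIP threshold of 3 mirror the text, but everything else is invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Toy data: 12 extracts x 50 m/z features, 3 strain classes (dummy-coded Y),
# standing in for the filtered LC-MS feature table.
X = rng.lognormal(mean=8.0, sigma=1.0, size=(12, 50))
labels = np.repeat([0, 1, 2], 4)
Y = np.eye(3)[labels]                      # one indicator column per class (PLS-DA)

Xp = np.log10(X)
Xp -= Xp.mean(axis=0)                      # log10 transform + mean-centring only

pls = PLSRegression(n_components=2, scale=False).fit(Xp, Y)

def vip(model, Xc):
    """Variable Importance in Projection for a fitted PLSRegression model."""
    t = model.transform(Xc)                # X scores (n x a)
    w = model.x_weights_                   # X weights (p x a)
    q = model.y_loadings_                  # Y loadings (m x a)
    p, a = w.shape
    # Y variance explained by each latent component: (t_k' t_k) * (q_k' q_k).
    ssy = np.array([np.sum((t[:, [k]] @ q[:, [k]].T) ** 2) for k in range(a)])
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (w_norm ** 2 @ ssy) / ssy.sum())

scores = vip(pls, Xp)
# Variables with VIP >= 3 would be retained as candidate biomarkers.
print(np.flatnonzero(scores >= 3), scores.max())
```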
■ RESULTS AND DISCUSSION
Selection of the Metabolite Extraction Method (Experiment #1)
Metabolomics allows the analysis of many metabolites simultaneously detected in a biological sample. To ensure that the resulting metabolomic profiles characterize the widest range of metabolites of high relevance, the metabolite extraction protocol must be nonselective and highly reproducible. [START_REF] Kido Soule | Environmental metabolomics: Analytical strategies[END_REF] Therefore, the biological material must be studied after simple preparation steps to prevent any potential degradation or loss of metabolites. In microbial metabolomics, the first step of the sample preparation corresponds to quenching to avoid alterations of the intracellular metabolome, which is known for its fast turnover. [START_REF] De Jonge | Optimization of cold methanol quenching for quantitative metabolomics of Penicillium chrysogenum[END_REF] In this study, the first objective was therefore to develop an extraction protocol for LC-MS-based metabolome profiling (exo-and endometabolomes) of marine bacteria that should be applied to any strain, cultivated either planktonically or under sessile conditions, but also easily transposable to natural complex biofilms. For this purpose, liquid-liquid extraction was selected because this process allows quenching and extraction of the bacterial culture in a single step. The second issue is linked to the high salinity of the extracts, which implies a required desalting step. For these reasons, EtOAc, MeOH, and MeOH/DCM (1:1) were selected as extractive solvents for this experiment.
For the first experiment, common cultures conditions were selected. Thus the four bacterial strains were grown planktonically in single-species cultures (in VNSS medium), each in biological triplicates, and extracted until they reached their stationary phase (at t = t 5 , Supporting Information Figure S1) and before the decline phase. For each sample, the whole culture was extracted using a predefined set of experimental conditions and analyzed by LC-(+)-ESI-IT-MS. The selection of the optimal solvents was performed based on: (i) the number of features detected on LC-MS profiles after filtering, (ii) the ability to discriminate the bacterial strains by multivariate analyses, and (iii) the ease of implementation of the experimental protocol.
In such a rich culture medium, data filtering constitutes a key requirement because bacterial metabolites are masked by components of the culture broth (e.g., peptone, starch, yeast extract). Moreover, such a process was essential to reduce false positives and redundant data for the further statistical analyses. First, for each solvent, treatment of all chromatograms with the XCMS package gave a primary data set with 3190 ± 109 metabolite features (Supporting Information Figure S2). A primary filtering step between variables present in both bacterial extracts and blank samples removed >80% of the detected features, which were attributed to culture medium components, solvent contamination, or instrumental noise. After two additional filtering steps, one based on the CV of variable intensities in the QCs and the other on the coefficient of autocorrelation across samples between variables with the same retention time, a final list of 155 ± 22 m/z features was reached.
The resulting data showed a different number of detected metabolite features depending on the extraction solvent (Supporting Information Figure S3): MeOH/DCM yielded a higher number of metabolites for TC4 and TC11, while EtOAc was the most effective extraction solvent for TC7 and TC8. This result was expected because previous works showed that the extraction method had a strong effect on the detected microbial metabolome, with the physicochemical properties of the extraction solvent being one of the main factor of the observed discrepancies. [START_REF] Duportet | The biological interpretation of metabolomic data can be misled by the extraction method used[END_REF][START_REF] Shin | Evaluation of sampling and extraction methodologies for the global metabolic profiling of Saccharophagus degradans[END_REF] The extraction parameters had an effect not only on the number of detected features but also on their concentration. [START_REF] Canelas | Quantitative evaluation of intracellular metabolite extraction techniques for yeast metabolomics[END_REF] The LC-MS data sets were analyzed by PCA and PLS-DA to evaluate the potential of the method to discriminate among the bacterial strains according to the extraction solvent system. PCA evidenced interstrain cleavage on the score plots (Figure 2a and Supporting Information Figure S4a,b). For each solvent, samples from TC4 and TC7, on one hand, and from TC8 and TC11, on the other hand, were clearly distinguished on the first component, which accounted for 56-72% of the total variance. The second component, with 12 to 29%, allowed the distinction between TC8 and TC11 and, to a lesser extent, between TC4 and TC7.
Table 1. Characteristics of the PLS-DA models built for each experiment and set of parameters: model N°, R2Xcum, R2Ycum, Q2Ycum, R intercept, and Q intercept.
To find discriminating biomarkers, PLS-DA was also applied to the LC-MS data (one model by solvent condition and one class by strain). For each extraction solvent, the resulting score plots showed three distinct clusters composed of both P. mediterranea strains (TC4 and TC7), TC8 and TC11, respectively (data not shown). The PLS-DA four-class models gave R 2 Xcum and R 2 Ycum values of 0.951-0.966 and 0.985-0.997, respectively, showing the consistency of the obtained data, and Q 2 Ycum values of 0.820-0.967, estimating their predictive ability (Table 1). Nevertheless, the p values (>0.05) obtained from the cross validation indicated that the bacterial samples were not significantly separated according to the strain, while the R intercept values (>0.4) obtained from a permutation test (n = 150) showed overfitting of the models. Taking these results into account, three-class PLS-DA models regrouping the TC4 and TC7 strains into a same class were constructed. The resulting R 2 Xcum (0.886-0.930), R 2 Ycum (0.974-0.991), and Q 2 Ycum (0.935-0.956) values attested the quality of these improved models. In addition, a permutation test (n = 150) allowed the successful validation of the PLS-DA models: R intercept values (<0.4, except for MeOH/DCM) and Q intercept values (<-0.2) indicated that no overfitting was observed, while p values (<0.05) showed that the three groups fitted by the models were significantly different (Table 1, Supporting Information Figure S4c-e). Samples extracted with MeOH and EtOAc showed higher quality and more robust PLS-DA models for the strain discrimination than those obtained after extraction with MeOH/DCM. For all of these reasons, EtOAc was selected for metabolome extraction in the subsequent experiments. These results were in accordance with the use of a similar protocol in recent studies dealing with the chemical profiling of marine bacteria. [START_REF] Lu | A highresolution LC-MS-based secondary metabolite fingerprint database of marine bacteria[END_REF][START_REF] Bose | LC-MS-based metabolomics study of marine bacterial secondary metabolite and antibiotic production in Salinispora arenicola[END_REF][START_REF] Vynne | chemical profiling, and 16S rRNA-based phylogeny of Pseudoalteromonas strains collected on a global research cruise[END_REF] Three culture parameters (culture media, phase of growth, and mode of culture) were then analyzed sequentially to evaluate their respective impact on the interstrain metabolic discrimination.
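A schematic version of the permutation test used above to check for overfitting is sketched below: class labels are permuted n = 150 times, a PLS-DA model is refitted each time, and a cross-validated Q2 is compared between the real and the permuted labels. The Q2 definition (leave-one-out, 1 - PRESS/TSS) and the scikit-learn implementation are simplifying assumptions; SIMCA's CV-ANOVA and intercept criteria are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def q2(X, Y, n_components=2):
    """Cross-validated Q2 = 1 - PRESS/TSS for a PLS-DA model."""
    press = 0.0
    for train, test in LeaveOneOut().split(X):
        m = PLSRegression(n_components=n_components, scale=False).fit(X[train], Y[train])
        press += np.sum((Y[test] - m.predict(X[test])) ** 2)
    return 1.0 - press / np.sum((Y - Y.mean(axis=0)) ** 2)

rng = np.random.default_rng(1)
X = np.log10(rng.lognormal(8.0, 1.0, size=(12, 50)))   # toy feature table
X -= X.mean(axis=0)
Y = np.eye(3)[np.repeat([0, 1, 2], 4)]                  # three strain classes

q2_real = q2(X, Y)
# Permutation test: refit after randomly permuting the class membership.
q2_perm = [q2(X, Y[rng.permutation(len(Y))]) for _ in range(150)]

# A model is accepted only if the real Q2 clearly exceeds the permuted ones
# (here the toy data are pure noise, so no separation is expected).
print(q2_real, float(np.percentile(q2_perm, 95)))
```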
Impact of the Culture Medium (Experiment #2)
The influence of the culture medium on the marine bacteria metabolome has been poorly investigated. [START_REF] Brito-Echeverría | Response to adverse conditions in two strains of the extremely halophilic species Salinibacter ruber[END_REF][START_REF] Canelas | Quantitative evaluation of intracellular metabolite extraction techniques for yeast metabolomics[END_REF][START_REF] Bose | LC-MS-based metabolomics study of marine bacterial secondary metabolite and antibiotic production in Salinispora arenicola[END_REF][START_REF] Djinni | Metabolite profile of marine-derived endophytic Streptomyces sundarbansensis WR1L1S8 by liquid chromatography-mass spectrometry and evaluation of culture conditions on antibacterial activity and mycelial growth[END_REF] To ascertain that the chemical discrimination of the bacterial strains studied was not medium-dependent, a second culture broth was used. This second set of experiments was designed as follows: the four bacterial strains were cultivated in parallel in VNSS and MB media (each in biological triplicates) until they reached the stationary phase (t = t 5 , Supporting Information Figure S1), and their organic extracts (extraction with EtOAc) were analyzed by LC-MS. Just like in the case of VNSS, MB is a salt-rich medium widely used for marine bacterial cultures. The number of metabolites detected and the chemical discrimination between the bacterial strains were then determined for this set of samples. First, the number of metabolites obtained after the three filtering steps was similar for both culture media (Supporting Information Figure S5), and all of the detected m/z features were common to both media. This result showed the robustness of the filtering method because the chemical compositions of both culture media are highly different (Supporting Information Table S1). Indeed, MB contains more salts, and, in terms of organic components, higher amounts of yeast extract and peptone, while starch and glucose are specific ingredients of VNSS. A small difference was observed on the PCA score plots obtained with samples from a single strain cultured in these two different media, but the low number of samples did not allow the validation of the corresponding PLS-DA models (data not shown). Whatever the medium, an obvious clustering pattern for each of the four strains was observed on the PCA score plots when all of the samples were considered (Supporting Information Figure S6a). Four-and three-class PLS-DA models were constructed to evaluate the discrimination capacity of the method. As demonstrated for VNSS cultures (Supporting Information Figure S4c), the PLS-DA three-class model obtained with the bacteria grown in MB (Table 1 and Supporting Information Figure S6b) also showed a clear separation between the groups (p < 0.05), and it was statistically validated by a permutation test. When the whole data set (VNSS and MB) was analyzed, the resulting PLS-DA models (Table 1), which passed crossvalidation and permutation test, indicated that the bacterial samples could be efficiently discriminated at the species level.
On the basis of the PLS-DA score plot, TC8 was the bacterial strain, which demonstrated the higher metabolic variation with the culture media used (Figure 2b). It is now well established that changing bacterial culture media not only affects the metabolome quantitatively but also has a significant impact on the expression of some distinct biosynthetic pathways. Such an approach, named OSMAC (One Strain-MAny Compounds), has been used in recent years to improve the number of secondary metabolites produced by a single microbial strain. [START_REF] Bode | Big effects from small changes: Possible ways to explore Nature's chemical diversity[END_REF] In the present study, some intrastrain differences were observed between cultures in both media, but they did not prevent from a clear interstrain discrimination. Therefore, these results showed that this method allowed the discrimination between samples of the three marine biofilm-forming bacterial species, even if they are grown in distinct media.
Impact of the Growth Phase (Experiment #3)
Growth of bacteria in suspension, as planktonic microorganisms, follows a typical curve with a sequence of a lag phase, an exponential phase (multiplication of cells), a stationary phase (stabilization), and a decline phase. To date, only a few studies have focused on differences in the metabolome of microorganisms along their growth phase, [START_REF] Zech | Growth phasedependent global protein and metabolite profiles of Phaeobacter gallaeciensis strain DSM 17395, a member of the marine Roseobacterclade[END_REF][START_REF] Drapal | The application of metabolite profiling to Mycobacterium spp.: Determination of metabolite changes associated with growth[END_REF][START_REF] Jin | Metabolomics-based component profiling of Halomonas sp. KM-1 during different growth phases in poly(3-hydroxybutyrate) production[END_REF] and most of these analyses were performed by NMR and GC-MS. These different culture phases are related to the rapid bacterial response to environmental changes and thus to different metabolic expressions. To determine the impact of this biological variation on the discrimination between bacterial cell samples, the metabolome content was analyzed (in biological triplicates) for the four strains grown in VNSS at five times of their different growth stages: two time points during the exponential phase (t 1 and t 2 ), one at the end of the exponential phase (t 3 ), and two others during the stationary phase (t 4 and t 5 ) (Supporting Information Figure S1). All aliquots were treated with the selected extraction protocol, followed by analysis with LC-MS. The data obtained for the four strains were preprocessed, filtered, and then analyzed by PCA and PLS-DA. As shown in Figure S1, the strains grew differently, as indicated by OD 600 changes: the exponential phase of all of the strains started directly after inoculation and occurred during 3 h for TC8 and TC11 and 8 h for TC4 and TC7, respectively. After filtering, the data showed that among all of the strains TC8 produced the highest number of metabolites in all phases, while TC7 was always the less productive. The number of metabolites detected was higher for TC8 during the stationary phase, while it was slightly higher for TC11 during the exponential phase, and no significant differences were noticed for TC4 and for TC7 (Supporting Information Figure S7). For each strain, most of the detected m/z signals were found in both growth phases, but more than two-thirds of them were present in higher amounts during the stationary phase.
To determine if this method was also able to differentiate between the phases of growth, PLS-DA models were then constructed for each bacterial strain with the LC-MS profiles (Supporting Information Table S2). For TC8, bacterial cultures were clearly discriminated with their growth phase, as described on the corresponding PLS-DA score plot (Supporting Information Figure S8a). This constructed PLS-DA model was well-fitted to the experimental data: It consisted of four components, and the two first explained almost 75% of the variation. The first dimension showed a significant separation between cultures harvested at the beginning and the middle of the exponential phase (t 1 and t 2 ), the end of this same growth phase (t 3 ), and the stationary phase (t 4 and t 5 ), while the second one emphasized the discrimination of cultures collected at the end of the exponential phase (t 3 ) from the others. With a less pronounced separation between samples of the exponential phase, a similar pattern was observed for TC4 and, to a lesser extent, for TC11 (Supporting Information Figure S8b,c). For TC7, no PLS-DA model allowed highlighting significant differences between samples with the growth phase (data not shown). Finally, the discrimination between all of the bacterial species harvested during the two growth phases (five time points) was analyzed. The resulting PLS-DA model explained >78% of the variance of the data set (Table 1 and Figure 2c). Here again, the metabolome of the TC8 strain showed the most important variability, but a clear discrimination between the metabolomes of the four bacterial strains was observed whatever the phase of growth. In accordance with their taxonomic proximity, it was highlighted that both P. mediterranea strains (TC4 and TC7) were closely related.
It is now well-established that drastic changes may occur in bacterial metabolic production at the transition from exponential phase to stationary phase. This phenomenon is often due to a lowered protein biosynthesis, which induces the biosynthetic machinery to switch from a metabolic production mainly dedicated to cell growth during exponential phase toward alternative metabolism, producing a new set of compounds during the stationary phase. [START_REF] Alam | Metabolic modeling and analysis of the metabolic switch in Streptomyces coelicolor[END_REF][START_REF] Herbst | Label-free quantification reveals major proteomic changes in Pseudomonas putida F1 during the exponential growth phase[END_REF] However, in contrast with well-studied model microorganisms, several marine bacteria undergo a stand-by step between these two growth phases. [START_REF] Sowell | Proteomic analysis of stationary phase in the marine bacterium "Candidatus Pelagibacter ubique[END_REF][START_REF] Gade | Proteomic analysis of carbohydrate catabolism and regulation in the marine bacterium Rhodopirellula baltica[END_REF] For each strain, our results showed that most of the changes between the growth phases correspond to the upregulation of a large part of the metabolites during the stationary phase. This trend was already observed in previous studies, [START_REF] Drapal | The application of metabolite profiling to Mycobacterium spp.: Determination of metabolite changes associated with growth[END_REF][START_REF] Jin | Metabolomics-based component profiling of Halomonas sp. KM-1 during different growth phases in poly(3-hydroxybutyrate) production[END_REF] but opposite results have also been described due to distinct metabolome coverage or studied microorganism. [START_REF] Zech | Growth phasedependent global protein and metabolite profiles of Phaeobacter gallaeciensis strain DSM 17395, a member of the marine Roseobacterclade[END_REF] These bibliographic data were also in accordance with the different behavior of each of the four strains when the metabolome, restricted to the extraction and analytical procedures, was investigated at different time points of the growth curve. The chemical discrimination of these bacteria was thus not dependent on their growth phase. Overall, because the chemical diversity seemed to be higher during the stationary phase, this growth phase was then chosen for the rest of the study.
Impact of the Mode of Culture (Experiment #4)
The bacterial strains were isolated from marine biofilms developed on artificial surfaces immersed in situ. [START_REF] Brian-Jaisson | Identification of bacterial strains isolated from the Mediterranean sea exhibiting different abilities of biofilm formation[END_REF] In addition to their facility to grow in vitro, these strains were chosen for their propensity to form biofilms. The intrinsic differences between the metabolisms of planktonic and biofilm cells and the impact of these two modes of culture on the interstrain discrimination were analyzed by LC-MS profiling of three of the bacteria. Indeed, due to the chemical similarity of both P. mediterranea strains, only TC4 was used for this experiment. For this purpose, these strains were cultured in triplicate in planktonic (at five points of their growth curve) and biofilm modes (at two culture times: 24 and 48 h). This difference in growth time between both culture modes was due to the slowgrowing nature of biofilms. To compare accurately the two modes of culture, the development of biofilms was performed under static conditions and in the same medium as those used for planktonic growth (VNSS). For each strain, PLS-DA models were constructed and showed a clear discrimination between samples with their culture mode with total variances ranging from 52 to 59% (Supporting Information Figure S9). PLS-DA models with good-quality parameters were obtained, and validation values indicated that they could be regarded as predictable (Supporting Information Table S2). Moreover, a similar number of m/z features upregulated specifically in one of the two culture modes was detected for each strain. When dealing with the interstrain discrimination for bacteria cultured as biofilms, the three strains were clearly separated on the PCA score plot, and the total variance due to the two main projections accounted for 59% (Supporting Information Figure S10a). The corresponding PLS-DA model showed a similar trend and gave good results, indicating that this model could distinguish the three strains (Table 1 and Supporting Information Figure S10b). When the full data set (biofilms and planktonic cultures) was analyzed, the same pattern was further noticed with the occurrence of one cluster by strain on the PCA score plot (Supporting Information Figure S11). A PLS-DA model was built and demonstrated again, after validation, a good separation among all of the strains (Table 1 and Figure 2d).
To date, the few metabolomics studies undertaken on biofilms were mostly based on NMR, which is limited by intrinsic low sensitivity. [START_REF] Yeom | 1 H NMR-based metabolite profiling of planktonic and biofilm cells in Acinetobacter baumannii 1656-2[END_REF][START_REF] Ammons | Quantitative NMR metabolite profiling of methicillin-resistant and methicillin-susceptible Staphylococcus aureus discriminates between biofilm and planktonic phenotypes[END_REF] More specifically, only two studies have used a metabolomic approach with the aim of analyzing marine bacterial biofilms. [START_REF] Chandramouli | Proteomic and metabolomic profiles of marine Vibrio sp. 010 in response to an antifoulant challenge[END_REF][START_REF] Chavez-Dozal | Proteomic and metabolomic profiles demonstrate variation among free-living and symbiotic vibrio f ischeri biofilms[END_REF] It is now well-established that in many aquatic environments most of the bacteria are organized in biofilms, and this living mode is significantly different from its planktonic counterpart. [START_REF] Hall-Stoodley | Bacterial biofilms: from the Natural environment to infectious diseases[END_REF] Deep modifications occur in bacterial cells at various levels (e.g., gene expression, proteome, transcriptome) during the transition from free-living planktonic to biofilm states. [START_REF] Sauer | The genomics and proteomics of biofilm formation[END_REF] Biofilm cells have traditionally been described as metabolically dormant with reduced growth and metabolic activity. Additionally, cells in biofilms show a higher tolerance to stress (e.g., chemical agents, competition, and predation). On the basis of these data, a liquid culture alone does not allow a full understanding of the ecological behavior or the realistic response to a specific challenge in the case of benthic marine bacteria. For the TC4, TC8, and TC11 strains, PLS-DA models allowed an unambiguous distinction between biofilm and planktonic samples at different ages. As described in the literature for other bacteria, these results agreed with a significant metabolic shift between the two modes of culture whatever the strain and the culture time. Considering the biofilm samples and the whole set of samples, our results demonstrated that chemical profiling by LC-MS followed by PLS-DA analysis led to a clear discrimination between the three strains. Therefore, the interstrain metabolic differences are more significant than the intrastrain differences inherent to the culture mode.
Analytical Platforms Comparison and Identification of Putative Biomarkers
The data collected during the first part of this study did not allow the annotation of the biomarkers. In this last part, both accurate MS and MS/MS data were obtained from a limited pool of samples (four strains, EtOAc as extraction solvent, planktonic cultures in VNSS until the stationary phase) with a UPLC-ESI-QToF instrument. After extraction and filtering, the data obtained from the LC-HRMS profiles were subjected to chemometric analyses. The resulting PCA and PLS-DA score plots (Supporting Information Figure S12 and Figure 3a) were compared with those obtained with the same set of samples on the previous LC-LRMS platform (Figure 2a and Supporting Information Figure S4c). For both platforms, the PCA score plots exhibited a clear discrimination between the four strains, with a separation of the two couples TC4/TC7 and TC8/TC11 on the first component and between the strains of each pair on the second component. The main difference lies in the total variance accounted for by these first two components, which was lower in the case of the HRMS platform (64% instead of 85% for the LRMS platform). These results prove the robustness of the method.
The subsequent step was to build a supervised discrimination model using PLS-DA for UPLC-QToF data. As already described for HPLC-IT-MS data, the resulting three-class PLS-DA model led to a proper differentiation of the three bacterial groups (Table 1). Moreover, despite the different chromatographic conditions (HPLC vs UPLC) and mass spectrometry instrumentation (ESI-IT vs ESI-QToF), the two platforms gave similar results, and the same conclusion was reached with other sets of samples. In addition, the samples used for the study of the impact of the growth phase and the mode of culture on the TC8 strain were also analyzed on the HRMS platform, and the resulting PLS-DA model was similar to that obtained on the LC-LRMS platform (Supporting Information Figures S8a and S13).
In a second step, the aim was to identify putative biomarkers for each bacterial strain. Metabolome annotation is often considered as a bottleneck in the metabolomics data analysis, which is even more challenging for nonstudied species. For this reason, a molecular network was constructed based on MS/MS data (Figure 4). This analysis has the main advantage to organize mass spectra by fragmentation similarity, rendering easier the annotation of compounds of a same chemical family. [START_REF] Watrous | Mass spectral molecular networking of living microbial colonies[END_REF] The molecular network constructed with a set of data including all of the strains (EtOAc as extraction solvent, planktonic cultures in VNSS until the stationary phase) highlighted several clusters. At the same time, the most discriminating m/z features in the PLS-DA model (Figure 3b,c) were selected based on their VIP score, which resulted in 17 compounds with VIP value equal to or higher than 3 (Table 2). The molecular formulas of each VIP were proposed based on accurate mass measurement, true isotopic pattern, and fragmentation analysis. A detailed analysis of VIPs and molecular network revealed that most of these discriminating metabolites constitute the cluster A (Figure 4). These chemical compounds were specific to TC8, on one hand, and to TC4 and TC7, on the other hand. Interestingly, all of these specific compounds showed a similar fragmentation pattern with a characteristic ion fragment at m/z 115. A bibliographic review allowed us to propose ornithine-containing lipids (OL) as good candidates for this chemical group. OLs are widespread among Gram-negative bacteria, more rarely found in Gram-positive ones, and absent in eukaryotes and archaea. [START_REF] Moore | Elucidation and identification of amino acid containing membrane lipids using liquid chromatography/highresolution mass spectrometry[END_REF] These membrane lipids contain an ornithine headgroup linked to a 3-hydroxy fatty acid via its α-amino moiety and a second fatty acid chain (also called "piggyback" fatty acid) esterified to the hydroxyl group of the first fatty acid. In some bacteria the ester-linked fatty acid can be hydroxylated, usually at the C-2 position. [START_REF] Geiger | Amino acid-containing membrane lipids in bacteria[END_REF] OLs show a specific MS fragmentation pattern used in this study for their identification. Characteristic multistage MS fragmentation patterns include the sequential loss of H 2 O (from the ornithine part), the piggyback acyl chain, and the amide-linked fatty acid. [START_REF] Zhang | Characterization of ornithine and glutamine lipids extracted from cell membranes of Rhodobacter sphaeroides[END_REF] This characteristic mode of fragmentation leads to headgroup fragment ions at m/z 159
(C6H11N2O3), 141 (C6H9N2O2), 133 (C5H13N2O2), 115 (C5H11N2O, corresponding to a further loss of water from the m/z 133 ion), and 70 (C4H8N); a quick numerical check of these headgroup fragment masses is sketched below. On that basis, HRMS/MS fragmentation of VIP no. 1 (m/z 677) is proposed in Figure 5, and the same pattern was observed for the other OLs (Table 2). In Gram-negative bacteria, membranes are constituted by polar lipids frequently composed of phospholipids like phosphatidylethanolamine (PE). In this work, this type of lipid was detected in the four strains (cluster E, Figure 4), but several studies have shown that under phosphorus starvation, which is common in marine environments, the production of nonphosphorus polar lipids such as OLs may increase significantly. [START_REF] Yao | Heterotrophic bacteria from an extremely phosphate-poor lake have conditionally reduced phosphorus demand and utilize diverse sources of phosphorus[END_REF][START_REF] Sandoval-Calderoń | Plasticity of Streptomyces coelicolor membrane composition under different growth conditions and during development[END_REF] Moreover, because of their zwitterionic character, OLs have been speculated to play a crucial role in the membrane stability of Gram-negative bacteria and, more broadly, in the adaptation of the membrane in response to changes in environmental conditions. Under the culture conditions used in this study, OLs were produced by three of the strains but not by Shewanella sp. TC11. In the same way, components of cluster B specifically produced by Bacteroidetes (TC4 and TC7) were identified as hydroxylated OLs (HOLs). These compounds showed the same MS fragmentation pattern as their nonhydroxylated analogs, while a supplementary loss of H2O was observed at the beginning of the MS fragmentation pathway. HOLs have been described as metabolites specifically produced by bacteria under stress (e.g., temperature, 49 pH 50 ): the occurrence of an additional hydroxyl group seems to be involved in membrane stability via an increase in strong lateral interactions between membrane components. [START_REF] Nikaido | Molecular basis of bacterial outer membrane permeability revisited[END_REF] HOLs have mainly been described in α-, β-, and γ-proteobacteria and Bacteroidetes. [START_REF] Sohlenkamp | Bacterial membrane lipids: Diversity in structures and pathways[END_REF] In our study, HOLs were only detected in the LC-MS profiles of the Bacteroidetes (TC4 and TC7) but not in those of the γ-proteobacteria (TC8 and TC11). Concerning the position of the additional hydroxyl group in these derivatives, the absence of characteristic ion fragments for ornithine headgroup hydroxylation [START_REF] Moore | Elucidation and identification of amino acid containing membrane lipids using liquid chromatography/highresolution mass spectrometry[END_REF] indicated that this group was linked to one of the two fatty acids (at the two-position). This structural feature was in agreement with the fact that hydroxylation of the ornithine headgroup in HOLs was only observed in α-proteobacteria and not in Bacteroidetes. [START_REF] Sohlenkamp | Bacterial membrane lipids: Diversity in structures and pathways[END_REF] TC4 and TC7 were also clearly discriminated from the other strains through another class of metabolites putatively identified on the basis of their HRMS/MS data as glycine lipids (GLs) and close derivatives, namely, methylglycine or alanine lipids (cluster C, Figure 4). These compounds are structurally similar to OLs, the main difference being the replacement of the ornithine unit by a glycine one.
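As a quick arithmetic check of the ornithine-lipid headgroup fragment assignments above (an illustration only, assuming singly charged even-electron cations and neglecting the electron mass), the monoisotopic masses can be recomputed from the elemental formulas; the m/z 115 ion then corresponds to a further water loss from the m/z 133 fragment.

```python
# Monoisotopic masses of the most abundant isotopes (standard values, 5 decimals).
MASS = {"C": 12.0, "H": 1.00783, "N": 14.00307, "O": 15.99491}

def mz(composition):
    """Approximate m/z of a singly charged fragment cation given as an element
    count dictionary (the electron mass is neglected in this rough check)."""
    return sum(MASS[el] * n for el, n in composition.items())

# Headgroup fragments reported for ornithine lipids (see text).
fragments = {
    "C6H11N2O3": {"C": 6, "H": 11, "N": 2, "O": 3},  # expected m/z 159
    "C6H9N2O2":  {"C": 6, "H": 9,  "N": 2, "O": 2},  # expected m/z 141
    "C5H13N2O2": {"C": 5, "H": 13, "N": 2, "O": 2},  # expected m/z 133
    "C5H11N2O":  {"C": 5, "H": 11, "N": 2, "O": 1},  # expected m/z 115 (133 - H2O)
    "C4H8N":     {"C": 4, "H": 8,  "N": 1},          # expected m/z 70
}
for name, comp in fragments.items():
    print(f"{name}: {mz(comp):.3f}")   # 159.077, 141.066, 133.098, 115.087, 70.066
```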
These glycine-type lipids showed a similar fragmentation sequence and were specifically characterized by headgroup fragment ions at m/z 76 (C2H6NO2) for GLs and m/z 90 (C3H8NO2) for methylglycine or alanine lipids. This last class of compounds needs to be further confirmed by purification and full structure characterization. From a chemotaxonomic point of view, GLs are valuable compounds because they have only been described
from Bacteroidetes and thus seem to be biomarkers of this bacterial group. [START_REF] Sohlenkamp | Bacterial membrane lipids: Diversity in structures and pathways[END_REF] Finally, when considering the discrimination between the two P. mediterranea strains, TC7 specifically produced a variety of lipids tentatively assigned as N-acyl diamines by HRMS/MS (cluster D, Figure 4). More precisely, a fragmentation pattern common to most of the compounds of cluster D showed the occurrence of a diamine backbone with an amide-linked fatty acid and yielded fragment ions at m/z 89 (C4H13N2) and 72 (C4H10N) characteristic of the putrescine headgroup. [START_REF] Voynikov | Hydroxycinnamic acid amide profile of Solanum schimperianum Hochst by UPLC-HRMS[END_REF] Several other chemical members of this cluster showed similar fragmentation pathways but with different fragment ions, in accordance with slight variations of the chemical structure of the headgroup (N-methylation, hydroxylation). In the case of TC7, compounds with a hydroxylated headgroup were specifically overexpressed. According to literature data, polyamines are commonly found in most living cells, and, among this chemical family, putrescine constitutes one of the simplest members. [START_REF] Michael | Polyamines in eukaryotes, bacteria, and archaea[END_REF] This diamine is widespread among bacteria and is involved in a large number of biological functions. [START_REF] Miller-Fleming | Remaining mysteries of molecular biology: The role of polyamines in the cell[END_REF] Interestingly, putrescine can be formed either directly from ornithine (ornithine decarboxylase) or indirectly from arginine (arginine decarboxylase) via agmatine (agmatine deiminase). Taking into account the few studies on MS fragmentation of natural N-acyl diamines and the absence of commercially available standards, further structure investigations are required to fully characterize this class of bacterial biomarkers and to establish a possible biosynthetic link with OLs. Finally, some other specific clusters were remarkable in the molecular network, but the corresponding compounds could not be affiliated to an existing chemical family. Conversely, molecular components of the nondiscriminative cluster F were putatively identified as cyclic dipeptides. This type of compound exhibits a wide range of biological functions, and cyclic dipeptides are involved in chemical signaling in Gram-negative bacteria with a potential role in interkingdom communication. [START_REF] Ryan | Diffusible signals and interspecies communication in bacteria[END_REF][START_REF] Ortiz-Castro | Transkingdom signaling based on bacterial cyclodipeptides with auxin activity in plants[END_REF][START_REF] Holden | Quorum-sensing cross talk: Isolation and chemical characterization of cyclic dipeptides from Pseudomonas aeruginosa and other Gram-negative bacteria[END_REF]
■ CONCLUSIONS
■ CONCLUSIONS
We described a metabolomics approach applied to the assessment of the effects of several culture parameters, such as culture media, growth phase, or mode of culture, on the metabolic discrimination between four marine biofilm-forming bacteria. The developed method, based on a simple extraction protocol, could differentiate bacterial strains cultured in organic-rich media. Depending on the culture parameters, some significant intrastrain metabolic changes were observed, but overall these metabolome variations were always less pronounced than interstrain differences. Finally, several classes of biomarkers were putatively identified via HRMS/MS analysis and molecular networking. Under the culture conditions used (not phosphate-limited), OLs were thus identified as specifically produced by three of the bacteria, while HOLs and GLs were only detected in the two Bacteroidetes strains.
Our study provides evidence that such an analytical protocol is useful to explore more deeply the metabolome of marine bacteria under various culture conditions, including cultures in organic-rich media and biofilms. This efficient process gives information on the metabolome of marine bacterial strains that complements the data provided by genomic, transcriptomic, and proteomic analyses on the regulatory and metabolic pathways of marine bacteria involved in biofilms. Also, a broader coverage of the biofilm metabolome will require the examination of polar extracts, even though high salt contents drastically limit the analysis of polar compounds in marine bacterial cultures and environmental biofilm samples.
As an important result, bacterial acyl amino acids, and more broadly membrane lipids, can be used as efficient biomarkers not only for chemotaxonomy but also for studies of bacterial stress responses. Indeed, a targeted analysis of GLs would be efficient to estimate the occurrence of Bacteroidetes in complex natural biofilms, while OLs and HOLs would be valuable molecular tools to evaluate the response of bacteria to specific environmental conditions.
Moreover, to get closer to reality, future work in this specific field of research should address more ecologically relevant questions. To this end, metabolomics studies involving multispecies cocultures or bacterial cultures supplemented with signal compounds (e.g., N-acyl-homoserine lactones, diketopiperazines) may be considered and linked to similar data obtained from natural biofilms.

■ SUPPORTING INFORMATION
Figure S1. Growth stages of the four bacterial strains. Figure S2. Number of m/z features detected for the four bacteria depending on the extraction solvent. Figure S3. Venn diagrams showing unique and shared metabolites for the four bacterial strains in each extraction condition. Figure S4. PCA and PLS-DA score plots of the four bacterial strains in each extraction condition. Figure S5. Number of m/z features detected for the four bacterial strains depending on the culture media. Figure S6. PCA score plots of the four bacterial strains cultured in two media and PLS-DA score plots of the four bacterial strains cultured in MB. Figure S7. Number of m/z features detected for each bacterium at five time points of the growth curve. Figure S8. PLS-DA score plots of TC8, TC4, and TC11 at five time points of their growth curve. Figure S9. PLS-DA score plots of TC7, TC8, and TC11 cultured in planktonic and biofilm modes. Figure S10. PCA and PLS-DA score plots of TC7, TC8, and TC11 cultured in biofilms. Figure S11. PCA score plots of TC7, TC8, and TC11 cultured in biofilms and planktonic conditions. Figure S12. PCA score plots (LC-HRMS) of the four bacterial strains. Figure S13. PLS-DA score plots (LC-HRMS) of TC8 at five time points of its growth curve. Table S1. Composition of the MB and VNSS culture media. Table S2. Parameters of the PLS-DA models used for the intrastrain discrimination depending on different culture conditions. (PDF)
■ AUTHOR INFORMATION
Corresponding Author *Tel: (+33) 4 94 14 29 35. E-mail: [email protected].
ORCID
Olivier P. Thomas: 0000-0002-5708-1409
Gérald Culioli: 0000-0001-5760-6394
Notes
The authors declare no competing financial interest.
■ ACKNOWLEDGMENTS
This study was partly funded by the French "Provence-Alpes-Côte d'Azur (PACA)" regional council (Ph.D. grant to L.F.). We are grateful to R. Gandolfo for the kind support of the French Mediterranean Marine Competitivity Centre (Pôle Mer Méditerranée) and thank J. C. Tabet, R. Lami, G. Genta-Jouve, J. F. Briand, and B. Misson for helpful discussions. LC-HRMS experiments were acquired on the regional platform MALLABAR (CNRS and PACA supports). Dedicated to Professor Louis Piovetti on the occasion of his 75th birthday.
■ ABBREVIATIONS ACN, acetonitrile; CS, cosine score; CV-ANOVA, cross validation-analysis of variance; DCM, dichloromethane; EtOAc, ethyl acetate; GC, gas chromatography; GL, glycine lipids; GNPS, global natural product social molecular networking; HOL, hydroxylated ornithine lipid; HRMS, high resolution mass spectrometry; KEGG, Kyoto encyclopedia of genes and genomes; LC, liquid chromatography; LC-ESI-IT-MS, liquid chromatography-electrospray ionization ion trap tandem mass spectrometry; LC-MS, liquid chromatography-mass spectrometry; LRMS, low resolution mass spectrometry; MB, marine broth; MeOH, methanol; NMR, nuclear magnetic resonance; OL, ornithine lipid; PCA, principal component analysis; PE, phosphatidylethanolamine; PLS-DA, partial least-squares discriminant analysis; TC, Toulon collection; UPLC-ESI-QToF-MS, ultraperformance liquid chromatography-electrospray ionization quadrupole time-of-flight tandem mass spectrometry; VIP, variable importance on projection; VNSS, Väätänen nine salt solution
Figure 1. Overview of the experimental workflow used for the discrimination of the four marine bacterial strains and for the putative identification of relevant biomarkers.
Figure 2. (a) PCA score plot obtained from LC-LRMS profiles of the four bacterial strains (extraction with EtOAc, stationary phase, planktonic cultures in VNSS). (b) PLS-DA score plot obtained from LC-MS profiles of the four bacterial strains (extraction with EtOAc, stationary phase) cultured planktonically in two media (VNSS and MB). (c) PLS-DA score plot obtained from LC-MS profiles of the four bacterial strains (extraction with EtOAc, planktonic cultures in VNSS) at five time points of their growth curve. (d) PLS-DA score plot obtained from LC-MS profiles of three of the strains (extraction with EtOAc) cultured in biofilms (two time points; dark symbols) and planktonic conditions (five time points; colored symbols) in VNSS.
Table 1. Summary of the Parameters for the Assessment of the Quality and of the Validity of the PLS-DA Models Used for the Discrimination of the Bacterial Strains According to Different Culture or Analysis Conditions
Figure 3. (a) PLS-DA score plot obtained from LC-HRMS profiles of the four bacterial strains (extraction with EtOAc, stationary phase, planktonic cultures in VNSS). (b) PLS-DA loading plots with the most contributing mass peaks (VIPs) numbered from 1 to 17. (c) Heatmap of the 17 differential metabolites with VIP values ≥3.0 from the PLS-DA model. Detailed VIP descriptions are given in Table 2.
Figure 4. Molecular networks of HRMS fragmentation data obtained from cultures of the four bacterial strains (extraction with EtOAc, stationary phase, planktonic cultures in VNSS). AL: alanine lipid, GL: glycine lipid, HOL: hydroxylated ornithine lipid, HMPL: hydroxylated and methylated putrescine lipid, LOL: lyso-ornithine lipid, MPL: methylated putrescine lipid, OL: ornithine lipid, OPL: oxidized putrescine lipid, PE: phosphatidylethanolamine, PL: putrescine lipid.
Figure 5. (a) HRMS mass spectra of Pseudoalteromonas lipolytica TC8 ornithine lipid at m/z 677.5785 (VIP no. 1). (b) Proposed fragmentation of VIP no. 1. (The elemental composition of fragment ions is indicated and the corresponding theoretical value of m/z is given in parentheses.)
Table 2. List of the Biomarkers (VIP Value ≥3) Identified by LC-HRMS for the Discrimination of the Four Bacterial Strains
VIP number m/z RT (s) VIP value formula mass error (ppm) mσ a I expl (%) b MS/MS fragment ions (relative abundance in %) putative identification c
1 677.5806 438 4.0 C 41 H 77 N 2 O 5 3.8 4.9 63.5 659 (3) d , 413 (16) e , 395 (62) f , 377 (62) g , 159 (4) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (25) h OL (C18:1, C18:1)
2 625.5501 425 4.0 C 37 H 73 N 2 O 5 2.0 3.6 93.6 607 (2) d , 387 (11) e , 369 (41) f , 351 (44) g , 159 (6) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (24) h OL (C16:0, C16:0)
3 651.5653 429 3.7 C 39 H 75 N 2 O 5 2.7 5.9 61.7 633 (3) d , 413 (6) e , 395 (24) f , 377 (24) g , 159 (4) h , 141 (3) h , 133 (4) h , 115 (100) h , 70 (21) h OL (C18:1, C16:0)
4 611.5354 426 3.7 C 36 H 71 N 2 O 5 0.6 2.8 62.5 593 (2) d , 387 (12) e , 369 (43) f , 351 (48) g , 159 (4) h , 141 (3) h , 133 (4) h , 115 (100) h , 70 (25) h OL (C16:0, C15:0)
5 627.5304 408 3.5 C 36 H 71 N 2 O 6 0.4 2.4 81.7 609 (<1) d , 591 (<1) i , 387 (8) f , 369 (58) g , 351 (62) j , 159 (7) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (16) h HOL (C16:0, C15:0)
6 641.5462 417 3.4 C 37 H 73 N 2 O 6 0.2 1.7 80.5 623 (1) d , 605 (<1) i , 401 (7) f , 383 (50) g , 365 (52) j , 159 (6) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (15) h HOL (C17:0, C15:0)
7 597.5184 418 3.3 C 35 H 69 N 2 O 5 2.8 1.8 57.7 579 (2) d , 387 (2) e , 369 (8) f , 351 (10) g , 159 (4) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (36) h OL (C16:0, C14:0)
8 613.5149 398 3.3 C 35 H 69 N 2 O 6 0.2 1.3 78.9 595 (1) d , 577 (<1) i , 387 (5) f , 369 (33) g , 351 (37) j , 159 (7) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (22) h HOL (C16:0, C14:0)
9 623.5339 425 3.2 C 37 H 71 N 2 O 5 2.9 1.5 66.8 605 (2) d , 385 (3) e , 367 (12) f , 349 (12) g , 159 (4) h , 141 (3) h , 133 (4) h , 115 (100) h , 70 (24) h OL (C16:1, C16:0)
10 639.5299 403 3.0 C 37 H 71 N 2 O 6 1.2 10.5 81.0 621 (1) d , 603 (<1) i , 399 (<1) f , 381 (58) g , 363 (56) j , 159 (7) h , 141 (3) h , 133 (6) h , 115 (100) h , 70 (21) h HOL (C17:1, C15:0)
11 621.5182 409 3.0 C 37 H 69 N 2 O 5 3.0 8.6 61.0 603 (2) d , 385 (15) e , 367 (54) f , 349 (52) g , 159 (5) h , 141 (3) h , 133 (6) h , 115 (100) h , 70 (36) h OL (C16:1, C16:1)
12 387.3218 294 3.0 C 21 H 43 N 2 O 4 -0.3 12 103.6 369 (3) d , 351 (5) i , 159 (1) h , 141 (3) h , 133 (7) h , 115 (100) h , 70 (62) h LOL (C16:0)
13 653.5808 446 3.0 C 39 H 77 N 2 O 5 2.9 7.1 63.0 635 (2) d , 415 (8) e , 397 (28) f , 379 (29) g , 159 (5) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (35) h OL (C18:0, C16:0)
14 649.5498 430 3.0 C 39 H 73 N 2 O 5 2.5 2.6 66.0 631 (2) d , 413 (7) e , 395 (27) f , 377 (27) g , 159 (4) h , 141 (3) h , 133 (4) h , 115 (100) h , 70 (23) h OL (C18:1, C16:1)
15 440.2769 340 3.0 C 20 H 43 NO 7 P 0.2 15.2 76.6 299 (100) k PE (C15:0)
16 401.3373 305 3.0 C 22 H 41 N 2 O 4 0.1 16.0 81.5 383 (4) d , 365 (6) i , 159 (2) h , 141 (4) h , 133 (7) h , 115 (100) h , 70 (60) h LOL (C17:0)
17 413.5187 304 3.0 C 23 H 45 N 2 O 4 0.2 13.6 80.5 395 (4) d , 377 (6) i , 159 (2) h , 141 (4) h , 133 (10) h , 115 (100) h , 70 (51) h LOL (C18:1)
a Constructor statistical match factor obtained by comparison of the theoretical and observed isotopic pattern. b Total intensity of the explained peaks with respect to the total intensity of all peaks in the fragment spectrum peak list. c HOL: hydroxylated ornithine lipid, LOL: lyso-ornithine lipid, OL: ornithine lipid, PE: phosphatidylethanolamine. d [M + H - H2O]+. e [M + H - RCOH]+. f [M + H - H2O - RCOH]+. g [M + H - 2 H2O - RCOH]+. h Other typical OL ion fragments. i [M + H - 2 H2O]+. j [M + H - 3 H2O - RCOH]+. k [M + H - C2H8NO4P]+.
"18764",
"18399",
"1150743",
"747471"
] | [
"84790",
"84790",
"188653",
"188653",
"473420",
"188653",
"180118",
"84790"
] |
01681621 | en | [
"sdu",
"sde"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01681621/file/Geijzendorffer_2017_EnvSciPol_postprint.pdf | Ilse R Geijzendorffer
email: [email protected]
Emmanuelle Cohen-Shacham
Anna F Cord
Wolfgang Cramer
Carlos Guerra
Berta Martín-López
Ecosystem Services in Global Sustainability Policies
Keywords: Aichi Targets, human well-being, indicators, monitoring, reporting, Sustainable Development Goals.
Introduction
Multiple international policy objectives aim to ensure human well-being and the sustainability of the planet, whether via sustainable development of society or via biodiversity conservation, e.g. the Sustainable Development Goals (SDGs) and the Convention on Biological Diversity (CBD) Aichi Targets. To evaluate progress made towards these objectives and to obtain information on the efficiency of implemented measures, effective monitoring schemes and trend assessments are required [START_REF] Hicks | Engage key social concepts for sustainability[END_REF]. Whereas the CBD has been reporting on progress towards its objectives in Global Outlooks since 2001 [1], for the SDGs a first list of indicators has only recently been launched.
There is broad consensus that pathways to sustainability require a secure supply of those ecosystem services that contribute to human well-being (Fig. 1; [START_REF] Griggs | Policy: Sustainable development goals for people and planet[END_REF][START_REF] Wu | Landscape sustainability science: ecosystem services and human well-being in changing landscapes[END_REF]). The ecosystem service concept is an important integrated framework in sustainability science [START_REF] Liu | Systems integration for global sustainability[END_REF], even if the term ecosystem services is not often explicitly mentioned in policy objectives. Nevertheless, a number of specific ecosystem services are mentioned in documents relating to the different objectives stated in the SDGs and Aichi Targets. For example, there is an explicit mention of the regulation of natural hazards in SDG 13 and of carbon sequestration in Aichi Target 15. Especially for the poorest people, who most directly depend on access to ecosystems and their services [START_REF] Daw | Applying the ecosystem services concept to poverty alleviation: the need to disaggregate human well-being[END_REF][START_REF] Sunderlin | Livelihoods, forests, and conservation in developing countries: An Overview[END_REF], information on ecosystem services state and trends should be highly relevant [START_REF] Wood | Ecosystems and human well-being in the Sustainable Development Goals[END_REF]. Trends in biodiversity, ecosystem services and their impact on human well-being as well as sustainability must be studied using an integrated approach [START_REF] Bennett | Linking biodiversity, ecosystem services, and human well-being: three challenges for designing research for sustainability[END_REF][START_REF] Liu | Systems integration for global sustainability[END_REF]. The SDG ambitions could potentially offer key elements for this integration. Most assessments use a pragmatic approach to select indicators for ecosystem services, often focusing only on those indicators and ecosystem services for which data are readily available. Although this helps to advance the knowledge on ecosystem services in many respects, it may not cover the knowledge required to monitor progress towards sustainability [START_REF] Hicks | Engage key social concepts for sustainability[END_REF]. Regions characterized by high vulnerability of ecosystem services supply and human well-being, such as the Mediterranean Basin [START_REF] Schröter | Ecosystem Service Supply and Vulnerability to Global Change in Europe[END_REF], require information on the trends in all aspects of ecosystem services flows, including the impact of governance interventions and pressures on social-ecological systems.
Considerable progress has been made in developing integrative frameworks and definitions for ecosystem services and the quantification of indicators (e.g. [START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF][START_REF] Maes | An indicator framework for assessing ecosystem services in support of the EU Biodiversity Strategy to[END_REF], but it is unclear to which extent the current state of the art in ecosystem services assessments is able to provide the information required for monitoring the SDGs and the Aichi Targets. Since the publication of the Millennium Ecosystem Assessment in 2005, multiple national ecosystem services assessments have been undertaken, such as the United Kingdom National Ecosystem Assessment (UK National Ecosystem Assessment, 2011), the Spanish NEA [START_REF] Santos-Martín | Unraveling the Relationships between Ecosystems and Human Wellbeing in Spain[END_REF] or the New Zealand assessment [START_REF] Dymond | Ecosystem services in New Zealand[END_REF]. Furthermore, in the context of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES), regional and global assessments are planned for 2018 and 2019, respectively. The ecosystem services indicators used in these national, regional and global assessments could also provide relevant information for monitoring the progress towards these global sustainability objectives.
The main goal of the present study is to explore to what extent the ecosystem services concept has been incorporated in global sustainability policies, particularly the SDGs and the Aichi Targets. For this objective, we i) assessed the information on ecosystem services currently recommended to monitor the progress on both policy documents and ii) identified which information on ecosystem services can already be provided on the basis of the indicators reported in national ecosystem assessments. Based on these two outputs, we iii) identified knowledge gaps regarding ecosystem services for monitoring the progress on global policy objectives for sustainability.
Material and methods
Numerous frameworks exist to describe ecosystem services (e.g., [START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF][START_REF] Maes | An indicator framework for assessing ecosystem services in support of the EU Biodiversity Strategy to[END_REF], but there is general agreement that a combination of biophysical, ecological and societal components is required to estimate the flow of actual benefits arriving to the beneficiary. In line with the ongoing development of an Essential Ecosystem Services Variable Framework in the scope of the Global Earth Observation Biodiversity Observation Network (GEO BON), we used a framework that distinguishes variables of ecosystem services flows (Tab. 1): the ecological potential for ecosystem services supply (Potential supply), and the societal co-production (Supply), Use of the service, Demand for the service as well as Interests and governance measures for the service (Tab. 1, adapted from [START_REF] Geijzendorffer | Improving the identification of mismatches in ecosystem services assessments[END_REF]. We hereafter refer to these variables with capitals to increase the readability of the text. Using this framework, we i) identified and ranked the frequency at which specific ecosystem services are mentioned, within and across the selected policy documents [START_REF] Cbd | Decision document UNEP/CBD/COP/DEC/X/2; Quick guides to the Aichi Biodiversity Targets, version 2[END_REF]United Nations, 2015a); ii) reviewed indicators currently used for reporting on the Aichi Targets (Global Outlook) and iii) reviewed the 277 indicators currently being used in national ecosystem assessments, to identify any existing information gaps.
Only monitoring data that feed all the variables of this framework allow trends to be detected and changes in ecosystem services flow to be interpreted. One example relevant for the SDGs is a food deficit indicator (e.g. insufficient calorie intake per capita). An increase in this deficit in a specific country would indicate the need for additional interventions. However, depending on the cause of this increased deficit, some interventions are more likely to be effective than others. For example, the food deficit could be caused by a change in demand (e.g. increased population numbers), in the service supply (e.g. agricultural land abandonment), or in the ecological potential to supply services (e.g. degradation of soils).
We structured our analysis of indicators by distinguishing between indirect and direct indicators (Tab. 1). While direct indicators assess an aspect of an ecosystem service flow (e.g. tons of wheat produced), indirect indicators provide proxies or only partial information (e.g. hectares of wheat fields under organic management) necessary to compute the respective indicator. Our review does not judge the appropriateness or robustness of the respective indicator (as proposed by [START_REF] Hák | Sustainable Development Goals: A need for relevant indicators[END_REF], nor did we aim to assess whether the underlying data source was reliable or could provide repeated measures of indicators over time. We only looked at the type of information that was described for each of the ecosystem services mentioned in the policy objectives and the type of indicators proposed for reporting on these policies.
The data for reporting on the SDGs is currently provided by national statistical bureaus and we therefore wanted to identify which ecosystem services indicators might be available at this level. To get a first impression, we reviewed the indicators used in 9 national ecosystem assessments and the European ecosystem assessment.
A network analysis was used to determine the associations between i) ecosystem services within the SDGs and the CBD Aichi Targets, ii) the variables of ecosystem services flows and the indicators proposed for both policies, and iii) the categories of ecosystem services and the components of the ecosystem service flow in the indicators used in the national and European ecosystem assessments. The network analysis was performed using Gephi [START_REF] Bastian | Gephi: an open source software for exploring and manipulating networks[END_REF] and the visualizations were subsequently produced using NodeXL (https://nodexl.codeplex.com/, last consulted January 13th 2017).
Managed Supply
Type and quantity of services supplied by the combination of the Potential supply and the impact of interventions (e.g., management) by people in a particular area and over a specific time period.
Capacity [START_REF] Schröter | Ecosystem Service Supply and Vulnerability to Global Change in Europe[END_REF], supply [START_REF] Crossman | A blueprint for mapping and modelling ecosystem services[END_REF], service capacity [START_REF] Villamagna | Capacity, pressure, demand, and flow: A conceptual framework for analyzing ecosystem service provision and delivery[END_REF]; supply capacity of an area [START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF]; actual ecosystem service provision [START_REF] Guerra | Mapping Soil Erosion Prevention Using an Ecosystem Service Modeling Framework for Integrated Land Management and Policy[END_REF]; ecosystem functions under the impact of "land management" [START_REF] Van Oudenhoven | Framework for systematic indicator selection to assess effects of land management on ecosystem services[END_REF]; Service Providing Unit-Ecosystem Service Provider Continuum [START_REF] Harrington | Ecosystem services and biodiversity conservation: concepts and a glossary[END_REF].
Harvested biomass; potential pressures that a managed landscape can absorb; extent of landscape made accessible for recreation.
Modelled estimates of harvestable biomass under managed conditions; soil cover vegetation management; financial investments in infrastructure.
Use
Quantity and type of services used by society.
Flow [START_REF] Schröter | Ecosystem Service Supply and Vulnerability to Global Change in Europe[END_REF][START_REF] Schröter | Accounting for capacity and flow of ecosystem services: A conceptual model and a case study for Telemark, Norway[END_REF]; service flow [START_REF] Villamagna | Capacity, pressure, demand, and flow: A conceptual framework for analyzing ecosystem service provision and delivery[END_REF]; "demand" (match and demand aggregated into one term) [START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF][START_REF] Crossman | A blueprint for mapping and modelling ecosystem services[END_REF].
Biomass sold or otherwise used; amount of soil erosion avoided while exposed to eroding pressures; number of people actually visiting a landscape.
Estimations of biomass use for energy by households; reduction of soil erosion damage; distance estimates from nearby urban areas.
Demand
Expression of demands by people in terms of actual allocation of scarce resources (e.g. money or travel time) to fulfil their demand for services, in a particular area and over a specific time period.
Stakeholder prioritisation of ecosystem services [START_REF] Martín-López | Trade-offs across valuedomains in ecosystem services assessment[END_REF], service demand [START_REF] Villamagna | Capacity, pressure, demand, and flow: A conceptual framework for analyzing ecosystem service provision and delivery[END_REF], demand [START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF].
Prices that people are willing to pay for biomass; amount of capital directly threatened by soil erosion; time investment, travel distances and prices people are willing to pay to visit a landscape.
Computation of average household needs; remaining soil erosion rates; survey results on landscape appreciation.
Interests
An expression of people's interests for certain services, in a particular area and over a specific time period. These tend to be longer wish-lists of services without prioritisation.
Identification of those important ecosystem services for stakeholders' well-being [START_REF] Martín-López | Trade-offs across valuedomains in ecosystem services assessment[END_REF]; beneficiaries with assumed demands [START_REF] Bastian | The five pillar EPPS framework for quantifying, mapping and managing ecosystem services[END_REF].
Subsidies for bio-energy; endorsement of guidelines for best practices for soil management; publicity for outdoor recreation.
Number of people interested in green energy; number of farmers aware of soil erosion; average distance of inhabitants to green areas.
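The association networks described in the Material and methods section were built with Gephi and visualised with NodeXL. As an illustration only, the sketch below reproduces the same kind of construction with Python and networkx; the policy-goal/service-category pairs and their counts are placeholders, not values from this study. Node degree corresponds to the quantity reported in Fig. A.1.

```python
# Illustrative sketch only: a weighted bipartite association network between policy goals and
# ecosystem service categories, analogous to the networks built here with Gephi/NodeXL.
# The mention counts below are placeholders, not values reported in this study.
import networkx as nx

mentions = {
    ("SDG 2",   "Provisioning"): 7,
    ("SDG 6",   "Regulating"):   5,
    ("SDG 14",  "Provisioning"): 6,
    ("Aichi B", "Provisioning"): 5,
    ("Aichi B", "Regulating"):   4,
}

G = nx.Graph()
for (goal, category), frequency in mentions.items():
    G.add_node(goal, part="policy objective")
    G.add_node(category, part="ecosystem service category")
    G.add_edge(goal, category, weight=frequency)  # drawn as line width in Figs. 2 and 4

# Node size in the figures is proportional to the number of ties (degree);
# Figure A.1 reports exactly this degree per ecosystem service.
print(dict(G.degree()))
print(dict(G.degree(weight="weight")))
```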
Identification of ecosystem services in the SDGs and Aichi Targets
Two international policy documents were selected for review: the SDGs (United Nations, 2015a) and the CBD Aichi Targets (CBD, 2013). Both documents have global coverage and contain objectives on sustainable development, related to maintaining or improving human well-being and nature. The classification of ecosystem services used in this paper is based on [START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF], which matched best with the terminology of policy documents and the national assessments.
For each policy document, we determined the absolute and relative frequency at which an ecosystem service was mentioned. This frequency was also used to produce a relative ranking of ecosystem services, within and across these policy documents. Although the SDGs and the Aichi Targets include several statements on specific ecosystem services (e.g. food production, protection from risks), the term "ecosystem services" is not often mentioned. In the SDGs, for instance, ecosystem services explicitly occur only once (Goal 15.1). In contrast, "sustainable development or management" and "sustainable use of natural resources" are mentioned several times, although not further specified. While the latter could be interpreted to mean that the use of nature for provisioning purposes should not negatively affect regulating services, we preferred to remain cautious and not make this assumption, when reviewing the policy documents. We are therefore certain that we underestimate the importance of knowledge on ecosystem services regarding the different policy objectives.
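A minimal sketch of this counting and ranking step is given below. The service list and document text are placeholders; the actual analysis was carried out on the full SDG and Aichi Target documents, and uses tied ranks for equal frequencies (as in Table A.1).

```python
# Minimal sketch of the frequency count behind Table 3 and Table A.1 (placeholder text only;
# the analysis in the paper was done on the complete SDG and Aichi Target documents).
from collections import Counter

services = ["crops", "livestock", "capture fisheries", "aquaculture",
            "water purification", "natural heritage"]

def count_mentions(document_text: str, terms: list) -> Counter:
    """Absolute frequency at which each ecosystem service term occurs in a policy text."""
    text = document_text.lower()
    return Counter({term: text.count(term) for term in terms})

sdg_counts = count_mentions(
    "end hunger and promote sustainable aquaculture and capture fisheries ...", services)

# Relative ranking within one document (rank 1 = most frequently mentioned service).
ranking = {term: rank for rank, (term, _) in enumerate(sdg_counts.most_common(), start=1)}
print(sdg_counts, ranking, sep="\n")
# A full implementation would assign tied (mid) ranks to equal frequencies, as in Table A.1.
```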
Proposed ecosystem services indicators for the SDGs and Aichi Targets
In addition to the ecosystem services directly mentioned in the policy objectives, we also reviewed the type of information on ecosystem services proposed to monitor the progress towards the policy objectives. To this end, we used the 2015 UN report (United Nations, 2015b) for the SDGs. For the Aichi Targets, we focused on the recently proposed (but still under development) indicator list [START_REF] Cbd | Report of the ad hoc technical expert group on indicators for the strategic plan for biodiversity 2011-2020[END_REF] and on the indicators recently used in the Global Biodiversity Outlook 4 (CBD, 2014).
Review of national ecosystem services assessments
Although many authors propose indicators for ecosystem services (e.g. Böhnke-Hendrichs et al., 2013;[START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF], not all indicators can be used for monitoring, due to lack of available data at the relevant scale or because current inventories do not provide sufficient time series for trend assessment.
For the CBD reporting, continuous efforts are made to provide monitoring information at global level, for instance via the use of Essential Biodiversity Variables (e.g. [START_REF] O'connor | Earth observation as a tool for tracking progress towards the Aichi Biodiversity Targets[END_REF]. Reporting for the SDGs, however, will heavily rely on the capacity of national statistical bureaus to provide the required data (ICSU, ISSC, 2015).
To estimate the type of ecosystem services indicators that might be available at national level, we selected national ecosystem assessment reports, which were openly available and written in one of the seven languages mastered by the co-authors (i.e. English, Spanish, Portuguese, Hebrew, French, German and Dutch). Nine assessments fulfilled these criteria (see Tab. 2). We complemented them with the European report [START_REF] Maes | Mapping and Assessment of Ecosystems and their Services: Trends in ecosystems and ecosystem services in the European Union between 2000 and[END_REF], which is considered to be a baseline reference for upcoming national assessments in European member states. The selection criteria resulted in the inclusion of 9 national assessments from three continents, but there is a bias towards European and developed countries.
Results and discussion
Ecosystem services mentioned in policy objectives
The need for information on ecosystem services from all three categories (i.e. provisioning, regulating and cultural) is mentioned in both policies, and reflects earlier suggestions on the integrative nature of the policy objectives on sustainable development, especially for the SDGs (Le [START_REF] Blanc | Towards Integration at Last? The Sustainable Development Goals as a Network of Targets: The sustainable development goals as a network of targets[END_REF]. Among the 17 SDGs and the 20 Aichi Targets, 12 goals and 13 targets respectively, relate to ecosystem services. Across both policy documents, all ecosystem service categories are well covered, the top 25% of the most cited ecosystem services being: Natural heritage and diversity, Capture fisheries, Aquaculture, Water purification, Crops, Livestock and Cultural heritage & diversity (Table 3). In the SDGs, provisioning services are explicitly mentioned 29 times, regulating services 33 times and cultural services 23 times. In the Aichi Targets, provisioning services are explicitly mentioned 29 times, regulating services 21 times and cultural services 13 times.
When considering the different ecosystem service categories, SDG 2 (end hunger, achieve food security and improved nutrition, and promote sustainable agriculture) and Aichi Goal B (reduce the direct pressures on biodiversity and promote sustainable use) heavily rely on provisioning services, with the latter also relying on regulating services (Fig. 2). Cultural services are more equally demanded over a range of policy objectives, with the service Natural heritage & diversity being the most demanded ecosystem service (see Tab. A.1).
Recent reviews of scientific ecosystem services assessments (e.g. [START_REF] Geijzendorffer | Improving the identification of mismatches in ecosystem services assessments[END_REF] Lee and Hautenbach, 2016) demonstrate that easily measurable ecosystem services (i.e. most of the provisioning services) or ecosystem services that can be quantified through modelling (i.e. many of the regulating services) are most often studied, whereas cultural ecosystem services are much less represented, despite their importance for global sustainability policies. The reason for this knowledge gap is partly theoretical (e.g. lack of agreement on monitoring and measuring) and partly practical, because the assessment of cultural services in particular requires a multi-disciplinary approach (e.g. landscape ecologists, environmental anthropologists, or environmental planners) which is difficult to achieve (Hernández-Morcillo et al. 2013; [START_REF] Milcu | Cultural ecosystem services: a literature review and prospects for future research[END_REF]). The development of cultural services indicators would benefit from a truly interdisciplinary dialogue which should take place at both national and international level to capture cultural differences and spatial heterogeneity. The capacity building objectives of IPBES could provide an important global incentive to come to a structured, multi-disciplinary and coherent concept of cultural services.
Proposed ecosystem services indicators
The analysis of the proposed indicators for reporting on both policy objectives (n=119) demonstrated that in total 43 indicators represented information on Potential supply with the other variables being represented by indicators in the 15-24 range (Fig. 3A). This bias towards supply variables is remarkable for the Aichi Targets (Fig. 3A). Another observed pattern is that the variables Demand and Interest are more often represented by proposed indicators for the SDGs than for the Aichi Targets (i.e. demand 11 versus 5 and interest 13 versus 4, respectively). The results therefore provide support for the claim that the SDGs aim to be an integrative policy framework (Le [START_REF] Blanc | Towards Integration at Last? The Sustainable Development Goals as a Network of Targets: The sustainable development goals as a network of targets[END_REF], at least in the sense that the proposed indicators for SDGs demonstrate a more balanced inclusion of ecological and socio-economic information.
A comparison of the number of ecosystem services that are relevant for the SDGs with the total number of indicators proposed for monitoring, however, reveals that balanced information from the indicators is unlikely to concern all ecosystem services (Figure 3). The proposed indicators cover all five variables for only one SDG target (i.e. SDG 15: "Protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss"). Among the Aichi Targets, none of the Strategic Goals was covered by indicators representing all five variables. The frequencies at which ecosystem services are represented in the policy reports are surprisingly low (Figure 3B). In an ideal situation, each of the ecosystem services would have been covered by indicators representing the five variables (i.e. a frequency value of 1). Our results demonstrate a maximum frequency value of 0.4 for SDG target 13 ("Take urgent action to combat climate change and its impacts"), caused by several indicators representing only two variables (i.e. demand and interest). The SDG list of indicators is kept short on purpose to keep reporting feasible, but if the indicators and data were available through national or global platforms (e.g. IPBES, World Bank), a longer list of readily updated indicators might not be so problematic. Despite the identified value of information on ecosystem services as presented in section 3.1, it seems that entire ecosystem service flows (from Potential supply to Interest) are poorly captured by the proposed and (potentially) used indicators. The information recommended for the Aichi Targets shows a strong bias towards the supply side of the ecosystem services flow (i.e. Potential supply and Supply), whereas this seems more balanced for the SDGs. However, the overall information demanded is very low, given the number of services that are relevant for the policies (Fig. 3). Variables linked to social behaviour and ecosystem services consumption (i.e. Demand and Use) and governance (i.e. Interest) are much less represented in the Aichi Targets, and this bias is reinforced when looking at the actually used indicators. As the SDG reporting is based on information from national statistical bureaus, one can wonder whether these data will demonstrate a similar bias, as the underlying data sources can be of a different nature (e.g. some indicators may come from national censuses). The results from section 3.3 make it clear that if SDG reports rely only on national ecosystem reports for their information, they will likely demonstrate the same bias as found in the Aichi Target reports. To obtain more balanced information for the SDGs, national statistical bureaus would be ideally placed to add complementary social and economic data on the other variables.
Ecosystem service information in national assessments
The analysis of the national ecosystem assessments demonstrates the availability of a significant amount of information on ecosystem services flows at national level (Appendix A, Tab. A.4). It has to be noted that, as the analysed national ecosystem assessments underrepresent developing countries and non-European countries, the available information at a global level might be significantly lower. However, some national reports may not have been detected or included in our review, for instance because we did not find them on the internet or because they were not written in any of the languages mastered by the authors.
The available knowledge in the selected ecosystem assessments on ecosystem services flows shows, however, a considerable bias towards Supply information on provisioning services and Potential supply information for regulating services. Cultural ecosystem services as well as Use, Demand and Interest variables are not well covered in national assessments. In addition, only for some ecosystem services (e.g., Timber, Erosion Regulation, Recreation) information is available for all relevant ecosystem services variables (Fig. A.2).
In total, we identified 277 ecosystem services indicators in the ten selected ecosystem services assessments (Tab. A.2). Within these 277 indicators, most provide information on provisioning services (126, 45%), whereas 121 indicators provide information on regulating services (44%). The remaining 30 indicators (11%) provide information on cultural services. Based on the network analysis, we can clearly see that indicators used for provisioning services mostly represent information on the Supply variable, whereas indicators used for regulating services mostly represent the Potential supply variable (Fig. 4).
Figure 4. Relative representation of the indicators used in analysed National Ecosystem Assessments, according to ecosystem services category (provisioning, regulating or cultural services) and the ecosystem service variables (Potential supply, Supply, Use, Demand or Interest). The line width indicates the frequency at which indicators of a certain ecosystem service category were used to monitor any of the components of the ecosystem services flow. The size of the nodes is proportional to the number of ties that a node has.

Among the 277 indicators, 39 did not provide a measure of service flow, but rather of the pressure (e.g. amount of ammonia emission) or of the status quo (e.g. current air quality). None of these measures provide information on the actual ecosystem service flow; they rather reflect the response to a pressure. The status quo can be considered to result from the interplay between exerted pressure and triggered ecosystem services flow. Among the 39 indicators, 38 were used to quantify regulating services, leaving a total number of 83 indicators to quantify variables of regulating ecosystem services flows.
The 238 indicators of ecosystem service flows are almost equally divided between direct and indirect indicators, namely 124 versus 114, respectively (Tab. A.2). The distribution of the indicators within the different ecosystem service categories differs. Among the different variables, Interest is least represented by the different indicators. The pattern is most pronounced for provisioning services, where there is relatively little information available on Demand and Interest (Fig. 4). For regulating services, most information seems available on the Potential supply side of the ecosystem services flow (Fig. 4).
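The counts reported in Tab. A.2 and visualised in Figure 4 are simple cross-tabulations of the coded indicator list. The sketch below illustrates the idea with three invented records; the field names and example indicators are assumptions for illustration, not the coding scheme actually used in the study.

```python
# Sketch of the tabulation behind Table A.2 and Figure 4, using three invented records;
# in the study, 277 indicators from ten assessments were coded along these dimensions.
import pandas as pd

indicators = pd.DataFrame([
    {"indicator": "tons of wheat produced",        "category": "provisioning",
     "variable": "Supply",           "type": "direct"},
    {"indicator": "area under organic management", "category": "provisioning",
     "variable": "Supply",           "type": "indirect"},
    {"indicator": "soil cover by vegetation",      "category": "regulating",
     "variable": "Potential supply", "type": "indirect"},
])

# Counts per ecosystem service category and flow variable (edge widths in Figure 4).
print(pd.crosstab(indicators["category"], indicators["variable"]))

# Split between direct and indirect indicators (as summarised in Table A.2).
print(indicators["type"].value_counts())
```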
The cultural ecosystem services category has the lowest number of indicators used for monitoring the ecosystem service flow (Tab. A.2). Regardless of general patterns, indicators are available only for very few services, for all five variables (Fig. A.2). For the top 25% services most frequently mentioned in the policies, there is a similar bias towards indicators on Supply (Tab. A.3), mainly stemming from the provisioning services crop and livestock (Tab. A.4), whereas no indicators were included for the ecosystem service Natural heritage and natural diversity.
As already acknowledged by IPBES, capacity building is needed to increase the number of readily available indicators for ecosystem services at national and global levels. The capacity to monitor spatially explicit dynamics of ecosystem services, including multiple variables of the ecosystem services flow simultaneously, could benefit from the application of process-oriented models (e.g. [START_REF] Bagstad | Spatial dynamics of ecosystem service flows: A comprehensive approach to quantifying actual services[END_REF][START_REF] Guerra | An assessment of soil erosion prevention by vegetation in Mediterranean Europe: Current trends of ecosystem service provision[END_REF]), the use of remote sensing for specific variables (e.g. [START_REF] Cord | Monitor ecosystem services from space[END_REF]), or from alignment with census social and economic data (e.g. Hermans-Neumann et al., 2016).
Recommendations for improvement towards the future
The biased information on ecosystem service flows hampers an evaluation of progress on sustainable development. If policy reports are not able to identify whether trends in supply, consumption and demand of ecosystem services align, it will be difficult to identify if no one is left behind [START_REF] Geijzendorffer | Improving the identification of mismatches in ecosystem services assessments[END_REF]. Apart from the results of the structured analysis, three other issues emerged from the review, which we want to mention here to raise awareness and stimulate inclusion of these issues in further scientific studies.
First, trade-offs play a crucial role in the interpretation of the sustainability of developments related to human well-being [START_REF] Liu | Systems integration for global sustainability[END_REF][START_REF] Wu | Landscape sustainability science: ecosystem services and human well-being in changing landscapes[END_REF] and often include regulating services [START_REF] Lee | A quantitative review of relationships between ecosystem services[END_REF]. Interestingly, in the case of the SDGs, where the objective of sustainable development is a key concept, no indicators are proposed to monitor whether the impacts of progress on some objectives (e.g. industry development mentioned in Target 16) might negatively affect progress towards another objective (e.g. water availability and water quality mentioned in Target 6). Without monitoring of trade-offs between objectives and underlying ecosystem services, it will be difficult to determine whether any progress made can be considered sustainable for improving human well-being [START_REF] Costanza | The UN Sustainable Development Goals and the dynamics of well-being[END_REF][START_REF] Nilsson | Policy: Map the interactions between Sustainable Development Goals[END_REF]. Reporting on global sustainability policies would greatly benefit from the development and standardisation of methods to detect trends in trade-offs between ecosystem services, and between ecosystem services and other pressures. The ongoing IPBES regional and global assessments could offer excellent opportunities to develop comprehensive narratives that include the interactions between multiple ecosystem services and between them and drivers of change. Global working groups on ecosystem services from GEO BON [2] and the Ecosystem Services Partnership [3] can render ecosystem services data and variables usable in a wide set of monitoring and reporting contexts by developing frameworks connecting data to indicators and monitoring schemes.
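As a starting point for such standardisation, one simple and widely used screen is to flag pairs of ecosystem service time series whose trends are negatively associated. The sketch below illustrates this with synthetic numbers; it is not a method used in this study, and the correlation threshold is an arbitrary assumption.

```python
# Illustrative only: flagging candidate trade-offs as strongly negative correlations between
# ecosystem service time series (synthetic numbers; the threshold is an arbitrary assumption).
from itertools import combinations
import numpy as np

series = {
    "crop production":      np.array([50, 52, 55, 57, 60, 63, 66, 70, 73, 77], float),
    "water quality":        np.array([80, 79, 77, 76, 74, 71, 69, 68, 66, 65], float),
    "carbon sequestration": np.array([30, 30, 31, 31, 32, 32, 33, 33, 34, 34], float),
}

def candidate_tradeoffs(series: dict, threshold: float = -0.5) -> list:
    """Return service pairs whose time series are negatively correlated below the threshold."""
    flagged = []
    for (name_a, a), (name_b, b) in combinations(series.items(), 2):
        r = float(np.corrcoef(a, b)[0, 1])
        if r <= threshold:
            flagged.append((name_a, name_b, round(r, 2)))
    return flagged

print(candidate_tradeoffs(series))
# Pairs with opposing trends (e.g. crop production vs water quality) are flagged as candidate
# trade-offs; monotone co-increases would instead indicate potential synergies.
```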
Second, the applied framework of variables of ecosystem service flows did not allow for an evaluation of the most relevant spatial and temporal scales, or for indicators' units. Most ecosystem services are spatially explicit and show spatial and temporal heterogeneity that requires information on both ecological and social aspects of ecosystem services flows (e.g. [START_REF] Guerra | An assessment of soil erosion prevention by vegetation in Mediterranean Europe: Current trends of ecosystem service provision[END_REF][START_REF] Guerra | Mapping Soil Erosion Prevention Using an Ecosystem Service Modeling Framework for Integrated Land Management and Policy[END_REF]). To monitor progress towards the Aichi Targets, the tendency to date has been to develop indicators and variables that could be quantified at global level, with the framework of Essential Biodiversity Variables being a leading concept [START_REF] O'connor | Earth observation as a tool for tracking progress towards the Aichi Biodiversity Targets[END_REF][START_REF] Pereira | Essential Biodiversity Variables[END_REF][START_REF] Pettorelli | Framing the concept of satellite remote sensing essential biodiversity variables: challenges and future directions[END_REF]. Although indicators with global coverage can be very effective in communicating and convincing audiences of the existence of specific trends (e.g. the Living Planet Index [4]), they are not likely to provide sufficient information to inform management or policy decisions at local or national scales. For the SDGs, which are at a much earlier stage of development than the Aichi Targets, data will be provided at national level by national statistical bureaus (ICSU, ISSC, 2015), which may better suit national decision makers deciding on the implementation of interventions. The current approach of reporting on SDG progress at national level may also allow easier integration of information on ecosystem services available from national assessments. Although the number of available national ecosystem assessments is still rising, developing countries are currently underrepresented. Developing national assessments in these countries is therefore an important step for credible reporting on the Aichi Targets and SDGs.
Third, national ecosystem assessments would ideally provide information at the spatio-temporal scale and unit most relevant for the ecosystem services at hand [START_REF] Costanza | Ecosystem services: Multiple classification systems are needed[END_REF][START_REF] Geijzendorffer | The relevant scales of ecosystem services demand[END_REF]. This would allow for the identification of people who do not have enough access to particular ecosystem services (e.g. gender related, income related) at a sub-national level. The assessment of progress in human well-being for different social actors within the same country, requires alternative units of measurement than national averages for the whole population in order to appraise equity aspects [START_REF] Daw | Applying the ecosystem services concept to poverty alleviation: the need to disaggregate human well-being[END_REF][START_REF] Geijzendorffer | Improving the identification of mismatches in ecosystem services assessments[END_REF]. Further, although the setting of the SDGs was done by national governments, achieving sustainable development requires the engagement of multiple social actors operating at local level. Some of these local actors (e.g. rural or indigenous communities, low-income neighbourhoods, migrants or women) play a relevant role in achieving the SDGs, because they are more vulnerable to the impact of unequal access to and distribution of ecosystem services.
Although some of the indicators and objectives of SDGs mention particular actor groups (e.g. women), the representation of vulnerable groups will require special attention throughout the different targets and ecosystem services.
Conclusion
This study demonstrates that information from all ecosystem services categories is relevant for the monitoring of the Aichi Targets and the SDGs. It identifies a bias in the information demand as well as in the information available from indicators at national level towards supply related aspects of ecosystem services flows, whereas information on social behaviour, use, demand and governance implementation is much less developed.
The national statistical bureaus currently in charge of providing the data for reporting on the SDGs could be well placed to address this bias by integrating ecological and socio-economic data. In addition, IPBES could potentially address gaps between national and global scales, as well as improve the coverage of ecosystem services flows. As its first assessments of biodiversity and ecosystem services are ongoing, IPBES is still adapting its concepts. To live up to its potential role, IPBES needs to continue to adapt its concepts based on scientific conceptual arguments and not on present-day practical constraints, such as a lack of data or political sensitivities. This manuscript demonstrates the importance of data and indicators for global sustainability policies and which biases we need to start addressing now.
Appendix A: The frequency at which ecosystem services are mentioned per target, in the policy documents.
The review of the national assessment reports showed no indicators explicitly linked to the Natural heritage and natural diversity service (Table S3). We might consider that some aspects of this service may be captured by other cultural services, such as the appreciation by tourists or knowledge systems. However, the interpretation of this specific service is generally considered to be very difficult. Many consider that the intrinsic value of biodiversity, although very important, cannot be considered an ecosystem service, as the direct benefit for human well-being is not evident, but should rather be seen as an ecological characteristic [START_REF] Balvanera | Quantifying the evidence for biodiversity effects on ecosystem functioning and services: Biodiversity and ecosystem functioning/services[END_REF][START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF]. To include the Natural heritage and natural diversity service in our review, we considered that only information on biodiversity aspects for which human appreciation was explicitly used as a criterion should be included in this particular ecosystem service. This means that general patterns in species abundance (e.g. the Living Planet Index), habitat extent or the presence of red list species were considered as important variables for biodiversity only if they supported specific ecological functions (e.g. mangrove extent for life cycle maintenance by providing nurseries for fish), but not as indicators for the supply of the natural heritage service in general.
Figure 1. Contribution of ecosystem services to human well-being, with direct contributions indicated by black arrows and indirect contributions by dotted arrows. Figure adapted from Wu (2013).
Fig 2. Relative importance of ecosystem service categories for the different policy objectives. The line width indicates the frequency at which a certain ecosystem service category was mentioned in relation to a specific goal of the SDGs or Aichi Targets (goals for which no relation to ecosystem services was found are not shown). The size of the nodes is proportional to the number of ties that a node has.
Figure 3. Relative importance of each of the ecosystem services variables (Potential supply, Supply, Use, Demand and Interest) recommended for the monitoring of the global sustainability objectives. (A) The number of proposed and used indicators for the reporting on the progress of the sustainability goal in policy documents per ecosystem service variable. (B) Relative frequencies (0-1) at which information from variables are represented by indicators per policy target. Frequency values are standardized for the total number of services linked to individual policy target (nES) and the legend indicates nSDG and nAichi for the total number of proposed indicators for each ES variable per policy programme respectively. Policy targets which did not mention ecosystem services were not included in the figure. Legend values: Potential supply (nSDGs=13; nAichi=30), Supply (nSDGs=7; nAichi=14), Use (nSDGs=10; nAichi=3), Demand (nSDGs=11; nAichi=5), Interest (nSDGs=13; nAichi=4).
Figure A.1: Degree (the number of connections) per ecosystem service across both policy documents.
Table 1: Evaluation framework for the indicators on ecosystem service flows (adapted from [START_REF] Geijzendorffer | Improving the identification of mismatches in ecosystem services assessments[END_REF]). While direct indicators can be used to immediately assess the needed information, indirect indicators provide proxies or only partial information necessary to compute the respective indicator.
Columns: Information component; Definition; Related terms used in other papers; Examples of direct indicators; Examples of indirect indicators.
Potential Supply
Estimated supply of ecosystem services based on ecological and geophysical characteristics of ecosystems, taking into account the ecosystem's integrity, under the influence of external drivers (e.g., climate change or pollution).
Ecosystem functions (de Groot et al., 2002); ecosystem properties that support ecosystem functions (van Oudenhoven et al., 2012).
Modelled estimates of harvestable biomass under natural conditions; potential pressures that an ecosystem can absorb; landscape aesthetic quality.
Qualitative estimates of land cover type contributions to biomass growth; species traits (e.g. root growth patterns); landscape heterogeneity of land cover types.
Table 2: Ecosystem service assessments considered in the analysis
Included countries | Reference
Belgium (Stevens, 2014)
Europe (Maes et al., 2015)
Finland http://www.biodiversity.fi/ecosystemservices/home, last consulted January 13th 2017
New Zealand (Dymond, 2013)
South Africa (Reyers et al., 2014)
South Africa, Tanzania and Zambia (Willemen et al., 2015)
Spain (Santos-Martín et al., 2013)
United Kingdom (UK National Ecosystem Assessment, 2011)
Table 3. Frequency at which the different ecosystem services were mentioned in both policy documents. Presented ecosystem services frequency scores are for the SDGs per target (n=126) and for the Aichi Targets per target (n=20).
Ecosystem services SDGs Aichi Targets
Provisioning services (total) 29 29
Crops 4 3
Energy (biomass) 2 1
Fodder 0 1
Livestock 4 3
Fibre 0 2
Timber 0 3
Wood for fuel 2 1
Capture fisheries 8 3
Aquaculture 5 3
Wild foods 2 3
Biochemicals/medicine 0 3
Freshwater 2 3
Regulating services (total) 33 21
Global climate regulation 0 2
Local climate regulation 3 1
Air quality regulation 2 0
Water flow regulation 5 2
Water purification 5 3
Nutrient regulation 0 3
Erosion regulation 3 3
Natural hazard protection 6 1
Pollination 1 2
Pest and disease control 2 2
Regulation of waste 6 2
Cultural services (total) 23 13
Recreation 4 0
Landscape aesthetics 0 0
Knowledge systems 2 3
Religious and spiritual experiences 0 1
Cultural heritage & cultural diversity 4 3
Natural Heritage & natural diversity 13 6
Table A.1. Overall ranking of the frequency that ecosystem services were mentioned across both the SDGs and the Aichi Targets. The top 25% most frequently mentioned ecosystem services are highlighted in bold. Ecosystem services categories are Provisioning (P), Regulating (R) and Cultural (C).
Ecosystem service category | Ecosystem services | SDGs ranking | Aichi Targets ranking | Combined ranking
C Natural heritage & natural diversity 1 1 1
P Capture fisheries 2 8 2
P Aquaculture 6 8 3.5
R Water purification 6 8 3.5
P Crops 9,5 8 6
P Livestock 9,5 8 6
C Cultural heritage & cultural diversity 9,5 8 6
R Erosion regulation 12,5 8 8,5
R Regulation of waste 3,5 17,5 8,5
R Water flow regulation 6 17,5 10
P Wild foods 17 8 12
P Freshwater 17 8 12
C Knowledge systems 17 8 12
R Natural hazard protection 3,5 23,5 14
P Timber 25,5 8 16
P Biochemicals/medicine 25,5 8 16
R Nutrient regulation 25,5 8 16
R Pest and disease control 17 17,5 18
R Local climate regulation 12,5 23,5 19
C Recreation 9,5 28 20
R Pollination 21 17,5 21
P Energy (biomass) 17 23,5 22,5
P Wood for fuel 17 23,5 22,5
P Fibre 25,5 17,5 24
R Global climate regulation 25,5 17,5 25
R Air quality regulation 17 28 26
P Fodder 25,5 23,5 27,5
C Religious and spiritual experiences 25,5 23,5 27,5
C Landscape aesthetics 25,5 28 29
Table A.2. Number of indicators identified from national ecosystem assessments, presented per ecosystem service category (provisioning, regulating or cultural services), ecosystem service variable (Potential Supply, Supply, Use, Demand or Interest) or indicator type (direct or indirect). For regulating services, 39 additional indicators describing pressures and states were identified.
Direct | Indirect | Potential Supply | Supply | Use | Demand | Interest
Total 124 114 59 89 46 31 13
Provisioning 82 43 22 61 31 8 3
Regulating 26 57 34 19 5 18 7
Cultural 16 14 3 9 10 5 3
Potential Supply 19 40
Supply 45 44
Use 40 6
Demand 17 14
Interest 3 10
Table A.3. Number of indicators identified from ecosystem services assessments for the top 25% of ecosystem services recommended by the reviewed policies, presented per ecosystem service variable (Potential Supply, Supply, Use, Demand or Interest) or indicator type (direct or indirect).
(https://www.cbd.int/gbo/), last consulted in April 2017
http://geobon.org/working-groups/, last consulted on the 22nd of April 2017
http://es-partnership.org/community/workings-groups/, last consulted on the 22nd of April 2017
www.livingplanetindex.org/home/index, last consulted on the 22nd of April 2017
Acknowledgements
We thank the two anonymous reviewers for their suggestions, which have led to an improved final version of the manuscript. This work was partly supported by the 7th Framework Programme of the European Union through the EU BON project (Contract No. 308454) and the OPERAs project (Contract No. 308393). It contributes to the Labex OT-Med (no. ANR-11-LABX-0061) funded by the French Government through the A*MIDEX project (no. ANR-11-IDEX-0001-02). This study contributes to the work done within the GEO BON working group on Ecosystem Services and the Mediterranean Ecosystem Services working group of the Ecosystem Services Partnership.
[START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF], but based on the indicators found in the selected ecosystem services assessments, we made small adjustments: 1) for livestock the definition remained the same, but we changed the name for clarity in the table; 2) noise reduction, soil quality regulation and lifecycle maintenance were absent from [START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF] and were added; 3) we split natural hazard regulation in two: flood risk regulation and coastal protection; and 4) we separated recreation and tourism. | 49,591 | [
"18543"
] | [
"508096",
"46716",
"136715",
"188653",
"475979"
] |
01444016 | en | [
"sde"
] | 2024/03/05 22:32:13 | 2016 | https://hal.science/hal-01444016/file/Titeux_2016_GCB_postprint.pdf | Nicolas Titeux
email: [email protected]
Klaus Henle
Jean-Baptiste Mihoub
Adrián Regos
Ilse R Geijzendorffer
Wolfgang Cramer
Peter H Verburg
Lluís Brotons
Biodiversity scenarios neglect future land use changes
Running head: Land use changes and biodiversity scenarios
Keywords: Biodiversity projections, climate change, ecological forecasting, land cover change, land system science, predictive models, species distribution models, storylines
Efficient management of biodiversity requires a forward-looking approach based on scenarios that explore biodiversity changes under future environmental conditions. A number of ecological models have been proposed over the last decades to develop these biodiversity scenarios. Novel modelling approaches with strong theoretical foundation now offer the possibility to integrate key ecological and evolutionary processes that shape species distribution and community structure. Although biodiversity is affected by multiple threats, most studies addressing the effects of future environmental changes on biodiversity focus on a single threat only. We examined the studies published during the last 25 years that developed scenarios to predict future biodiversity changes based on climate, land use and land
Introduction
Biodiversity plays an important role in the provision of ecosystem functions and services [START_REF] Mace | Biodiversity and ecosystem services: a multilayered relationship[END_REF][START_REF] Bennett | Linking biodiversity, ecosystem services, and human well-being: three challenges for designing research for sustainability[END_REF]Oliver et al., 2015a). Yet, it is undergoing important decline worldwide due to human-induced environmental changes [START_REF] Collen | Monitoring change in vertebrate abundance: the living planet index[END_REF][START_REF] Pimm | The biodiversity of species and their rates of extinction, distribution, and protection[END_REF]. Governance and anticipative management of biodiversity require plausible scenarios of expected changes under future environmental conditions [START_REF] Sala | Global Biodiversity Scenarios for the Year 2100[END_REF][START_REF] Pereira | Scenarios for global biodiversity in the 21st century[END_REF][START_REF] Larigauderie | Biodiversity and ecosystem services science for a sustainable planet: the DIVERSITAS vision for 2012-20[END_REF]. A forward-looking approach is essential because drivers of biodiversity decline and their associated impacts change over time. In addition, delayed mitigation efforts are likely more costly and time-consuming than early action and often fail to avoid a significant part of the ecological damage [START_REF] Cook | Using strategic foresight to assess conservation opportunity[END_REF][START_REF] Oliver | The pitfalls of ecological forecasting[END_REF]. Hence, biodiversity scenarios are on the agenda of international conventions, platforms and programmes for global biodiversity conservation, such as the Convention on Biological Diversity (CBD) and the Intergovernmental Platform for Biodiversity & Ecosystem Services (IPBES) [START_REF] Pereira | Scenarios for global biodiversity in the 21st century[END_REF][START_REF] Leadley | Progress towards the Aichi Biodiversity Targets: An Assessment of Biodiversity Trends, Policy Scenarios and Key Actions[END_REF]; Secretariat of the Convention on Biological Diversity, 2014; [START_REF] Díaz | The IPBES Conceptual Frameworkconnecting nature and people[END_REF][START_REF] Kok | Biodiversity and ecosystem services require IPBES to take novel approach to scenarios[END_REF].
An increasing number of ecological models have been proposed over the last decades to develop biodiversity scenarios [START_REF] Evans | Predictive systems ecology[END_REF][START_REF] Kerr | Predicting the impacts of global change on species, communities and ecosystems: it takes time[END_REF][START_REF] Thuiller | A road map for integrating eco-evolutionary processes into biodiversity models[END_REF]. They integrate and predict the effects of the two main factors that will determine the future of biodiversity:
(1) the nature, rate and magnitude of expected changes in environmental conditions and (2) the capacity of organisms to deal with these changing conditions through a range of ecological and evolutionary processes (Figure 1). Most modelling approaches rely on strong assumptions about the key processes that shape species distribution, abundance, community structure or ecosystem functioning [START_REF] Kearney | Mechanistic niche modelling: combining physiological and spatial data to predict species' ranges[END_REF][START_REF] Evans | Modelling ecological systems in a changing world[END_REF][START_REF] Thuiller | A road map for integrating eco-evolutionary processes into biodiversity models[END_REF], with only few studies considering the adaptation potential of the species. Hence, recent work has mainly focused on improving the theoretical foundation of ecological models [START_REF] Evans | Predictive systems ecology[END_REF][START_REF] Thuiller | A road map for integrating eco-evolutionary processes into biodiversity models[END_REF]Harfoot et al., 2014a;Zurell et al., 2016).
Yet, the credibility of developed biodiversity scenarios remains severely limited by the assumptions used to integrate the expected changes in environmental conditions into the ecological models.
Biodiversity scenarios draw upon narratives (storylines) of environmental change that project plausible socio-economic developments or particularly desirable future pathways under specific policy options and strategies [START_REF] Van Vuuren | Scenarios in global environmental assessments: key characteristics and lessons for future use[END_REF][START_REF] O'neill | The roads ahead: Narratives for shared socioeconomic pathways describing world futures in the 21st century[END_REF] (Figure 1). Although biodiversity is affected by multiple interacting driving forces (Millennium Ecosystem Assessment, 2005;[START_REF] Mantyka-Pringle | Interactions between climate and habitat loss effects on biodiversity: a systematic review and meta-analysis[END_REF][START_REF] Settele | Biodiversity: Interacting global change drivers[END_REF], most biodiversity scenarios are based on environmental change projections that represent a single threat only [START_REF] Bellard | Combined impacts of global changes on biodiversity across the USA[END_REF]. With a literature survey on the biodiversity scenarios published during the last 25 years, we show here a dominant use of climate change projections and a relative neglect of future changes in land use and land cover. The emphasis on the impacts of climate change reflects the urgency to deal with this threat as it emerges from studies, data and reports such as those produced by the Intergovernmental Panel on Climate Change (IPCC) [START_REF] Tingley | Climate change must not blow conservation off course[END_REF][START_REF] Settele | Terrestrial and inland water systems[END_REF]. The direct destruction or degradation of habitats are, however, among the most significant threats to biodiversity to date (Millennium Ecosystem Assessment, 2005;[START_REF] Leadley | Progress towards the Aichi Biodiversity Targets: An Assessment of Biodiversity Trends, Policy Scenarios and Key Actions[END_REF][START_REF] Newbold | Global effects of land use on local terrestrial biodiversity[END_REF][START_REF] Newbold | Global patterns of terrestrial assemblage turnover within and among land uses[END_REF] and not including them raises concerns for the credibility of biodiversity scenarios. Habitat destruction and
degradation result from both changes in the type of vegetation or human infrastructures that cover the land surface (i.e. land cover) and changes in the manner in which humans exploit and manage the land cover (i.e. land use) [START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF][START_REF] Van Asselen | Land cover change or land-use intensification: simulating land system change with a global-scale land change model[END_REF]. The lack of coherent and interoperable environmental change projections that integrate climate, land use and land cover across scales constitutes a major research gap that impedes the development of credible biodiversity scenarios and the implementation of efficient forward-looking policy responses to biodiversity decline. We identify key research challenges at the crossroads between ecological and environmental sciences, and we provide recommendations to overcome this gap.
Climate and land use/cover changes are important drivers of biodiversity decline
Biodiversity decline results from a number of human-induced drivers of change, including land use/cover change, climate change, pollution, overexploitation and invasive species [START_REF] Pereira | Global biodiversity change: the bad, the good, and the unknown[END_REF][START_REF] Leadley | Progress towards the Aichi Biodiversity Targets: An Assessment of Biodiversity Trends, Policy Scenarios and Key Actions[END_REF]. [START_REF] Ostberg | Three centuries of dual pressure from land use and climate change on the biosphere[END_REF] have recently estimated that climate and land use/cover changes have now reached a similar level of pressure on the biogeochemical and vegetation-structural properties of terrestrial ecosystems across the globe, but during the last three centuries land use/cover change has exposed 1.5 times as many areas to significant modifications as climate change. The relative impacts of these driving forces on biodiversity have also been assessed at the global scale. In its volume on state and trends, the Millennium Ecosystem Assessment (2005) reported that land use/cover change in terrestrial ecosystems has been the most important direct driver of changes in biodiversity and ecosystem services in the past 50 years. Habitat destruction or degradation due to land use/cover change constitute an on-going threat in 44.8% of the vertebrate populations included in the Living Planet Index (WWF, 2014) for which threats have been identified, whereas climate change is a threat in only 7.1% of them. A query performed on the website of the IUCN Red List of Threatened species (assessment during the period 2000-2015) indicates that more than 85% of the vulnerable or (critically) endangered mammal, bird and amphibian species in terrestrial ecosystems are affected by habitat destruction or degradation (i.e. residential and commercial development, agriculture and aquaculture, energy production and mining, transportation and service corridors, and natural system modification) and less than 20% are affected by climate
change and severe weather conditions (see also [START_REF] Pereira | Global biodiversity change: the bad, the good, and the unknown[END_REF]. Interactions between multiple driving forces, such as climate, land use and land cover changes, may further push ecological systems beyond tipping points [START_REF] Mantyka-Pringle | Interactions between climate and habitat loss effects on biodiversity: a systematic review and meta-analysis[END_REF][START_REF] Oliver | Interactions between climate change and land use change on biodiversity: attribution problems, risks, and opportunities[END_REF] and are key to understanding biodiversity dynamics under changing environmental conditions [START_REF] Travis | Climate change and habitat destruction: a deadly anthropogenic cocktail[END_REF][START_REF] Forister | Compounded effects of climate change and habitat alteration shift patterns of butterfly diversity[END_REF][START_REF] Staudt | The added complications of climate change: understanding and managing biodiversity and ecosystems[END_REF][START_REF] Mantyka-Pringle | Climate change modifies risk of global biodiversity loss due to land-cover change[END_REF].
Emphasis on climate change impacts in biodiversity scenarios
Available projections of climate and land use/cover changes [START_REF] Van Vuuren | Scenarios in global environmental assessments: key characteristics and lessons for future use[END_REF][START_REF] O'neill | The roads ahead: Narratives for shared socioeconomic pathways describing world futures in the 21st century[END_REF] are used to inform on future environmental conditions for biodiversity across a variety of spatial and temporal scales (de Chazal & Rounsevell, 2009) (Figure 1). Many studies have predicted the consequences of expected climate change on biodiversity [START_REF] Bellard | Impacts of climate change on the future of biodiversity[END_REF][START_REF] Staudinger | Biodiversity in a changing climate: a synthesis of current and projected trends in the US[END_REF][START_REF] Pacifici | Assessing species vulnerability to climate change[END_REF]. For instance, future climate change is predicted to induce latitudinal or altitudinal shifts in species ranges with important effects on ecological communities [START_REF] Maes | Predicted insect diversity declines under climate change in an already impoverished region[END_REF][START_REF] Barbet-Massin | The effect of range changes on the functional turnover, structure and diversity of bird assemblages under future climate scenarios[END_REF], to increase the risks of species extinction [START_REF] Thomas | Extinction risk from climate change[END_REF][START_REF] Urban | Accelerating extinction risk from climate change[END_REF] or to reduce the effectiveness of conservation areas [START_REF] Araújo | Climate change threatens European conservation areas[END_REF]. Projections of land use/cover change have been used to predict future changes in suitable habitats for a number of species [START_REF] Martinuzzi | Future land-use scenarios and the loss of wildlife habitats in the southeastern United States[END_REF][START_REF] Newbold | Global effects of land use on local terrestrial biodiversity[END_REF], to predict future plant invasions [START_REF] Chytrý | Projecting trends in plant invasions in Europe under different scenarios of future land-use change[END_REF], to estimate potential future extinctions in biodiversity hotspots [START_REF] Jantz | Future habitat loss and extinctions driven by land-use change in biodiversity hotspots under four scenarios of climate-change mitigation[END_REF] or to highlight the restricted potential for future expansion of protected areas worldwide [START_REF] Pouzols | Global protected area expansion is compromised by projected land-use and parochialism[END_REF]. [START_REF] Visconti | Socio-economic and ecological impacts of global protected area expansion plans[END_REF] estimated the coverage of suitable habitats for terrestrial mammals under future land use/cover change and based on global protected areas expansion plans. They showed that such plans might not constitute the most optimal conservation action for a large proportion of the studied species and that alternative strategies focusing on the most threatened species will be more efficient.
Climate and land use/cover change projections have also been combined in the same modelling framework to address how climate change will interplay with land use/cover change in driving the future of biodiversity [START_REF] Jetz | Projected impacts of climate and land-use change on the global diversity of birds[END_REF][START_REF] Martin | Testing instead of assuming the importance of land use change scenarios to model species distributions under climate change[END_REF][START_REF] Ay | Integrated models, scenarios and dynamics of climate, land use and common birds[END_REF][START_REF] Saltré | How climate, migration ability and habitat fragmentation affect the projected future distribution of European beech[END_REF][START_REF] Visconti | Projecting Global Biodiversity Indicators under Future Development Scenarios[END_REF]. For instance, future refuge areas for orang-utans have been identified in Borneo
under projected climate change, deforestation and suitability for oil-palm agriculture [START_REF] Struebig | Anticipated climate and land-cover changes reveal refuge areas for Borneo's orang-utans[END_REF]. [START_REF] Alkemade | GLOBIO3: A Framework to Investigate Options for Reducing Global Terrestrial Biodiversity Loss[END_REF] used land use/cover change, climate change and projections of other driving forces to predict the future impacts of different global-scale policy options on the composition of ecological communities. Recently, it has been shown that the persistence of drought-sensitive butterfly populations under future climate change may be significantly improved if semi-natural habitats are restored to reduce fragmentation (Oliver et al., 2015b).
We searched published literature from 1990 to 2014 to estimate the yearly number of studies that developed biodiversity scenarios based on climate change projections, land use/cover change projections or the combination of both types of projections. A list of 2,313 articles was extracted from the search procedure described in Table 1. We expected a number of articles within this list would only weakly focus on the development of biodiversity scenarios based on climate and/or land use/cover change projections and therefore, we randomly sampled articles within this list (sample size: N=300). We then carefully checked their titles and abstracts to allocate each of them to one of the following categories:
1. Article reporting on the development of biodiversity scenarios based only on climate change projections
2. Article reporting on the development of biodiversity scenarios based only on land use/cover change projections
3. Article reporting on the development of biodiversity scenarios based on the use of climate and land use/cover change projections
4. Article reporting on the development of biodiversity scenarios based on other types of environmental change projections
5. Article not reporting on the actual development of biodiversity scenarios
We considered that articles reported on the development of biodiversity scenarios when they produced predictions of the response of biodiversity to future changes in environmental conditions.
We calculated for each year between 1990 and 2014 the proportions of studies allocated to each of the five categories among the random sample of articles. We used a window size of 5 years and we calculated two-sided moving averages of the yearly proportions along the 25-year long time series.
With this approach, we smoothed out short-term fluctuations due to the limited sample size and we highlighted the long-term trend in the proportions of articles allocated to the different categories.
We used these smoothed proportions estimated from the sample of articles and the total number of 2,313 articles extracted from the search procedure to estimate the yearly numbers of articles during 1990-2014 that reported on the development of biodiversity scenarios and that used climate change projections (category 1), land use/cover change projections (category 2) and both types of environmental change projections (category 3).
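The estimation described above is essentially a smoothing-and-rescaling step. The sketch below illustrates one possible implementation; it is not taken from the original study, and the input arrays (per-category counts in the yearly sample and the yearly totals of the 2,313 retrieved articles) are hypothetical placeholders.

```python
# Illustrative sketch (not part of the original analysis): estimate yearly
# article numbers per category from sampled titles/abstracts, smoothing the
# yearly proportions with a two-sided 5-year moving average.
import numpy as np

def estimate_yearly_counts(sample_counts, yearly_totals, window=5):
    # sample_counts: dict {category: array of sampled-article counts per year, 1990-2014}
    # yearly_totals: array with the yearly number of articles in the full list of 2,313 records
    sampled_per_year = sum(sample_counts.values())         # sampled articles per year
    kernel = np.ones(window) / window                      # 5-year averaging window
    estimates = {}
    for cat, counts in sample_counts.items():
        prop = np.divide(counts, sampled_per_year,
                         out=np.zeros(len(counts)),
                         where=sampled_per_year > 0)       # yearly proportion per category
        smoothed = np.convolve(prop, kernel, mode="same")  # centred moving average
        estimates[cat] = smoothed * yearly_totals          # rescale to the full article list
    return estimates
```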
Our survey revealed that the number of studies that have included the expected impacts of future land use/cover change on biodiversity falls behind in comparison with the number of studies that have focused on the effects of future climate change (Figure 2). Among the studies published during the period 1990-2014 and that drew upon at least one of these two driving forces to develop biodiversity scenarios, we estimated that 85.2% made use of climate change projections alone and that 4.1% used only projections of land use/cover change. Climate and land use/cover change projections were combined in 10.7% of the studies. A sensitivity analysis was carried out and indicates that the number of articles for which we checked the titles and abstracts was sufficient to reflect those proportions in a reliable way (Appendix S1 and Figure S1). The imbalance between the use of climate and land use/cover change projections has increased over time in the last 25 years and has now reached a maximum (Figure 2).
Where biodiversity scenarios lack credibility
Disregarding future changes in land use or land cover when developing biodiversity scenarios assumes that their effects on biodiversity will be negligible compared to the impacts of climate change. Two main reasons are frequently brought forward when omitting to include the effects of land use/cover change in biodiversity scenarios: (1) the available representations of future land use/cover
change are considered unreliable or irrelevant for addressing the future of biodiversity (e.g. [START_REF] Stanton | Combining static and dynamic variables in species distribution models under climate change[END_REF] and (2) climate change could outpace land use and land cover as the greatest threat to biodiversity in the next decades (e.g. [START_REF] Bellard | Impacts of climate change on the future of biodiversity[END_REF]. Here, we build on these two lines of arguments to discuss the lack of credibility of assuming unchanged land use/cover in biodiversity scenarios and to stress the need for further development of land use/cover change projections.
Available large-scale land use/cover change projections are typically associated with a relatively coarse spatial resolution and a simplified thematic representation of the land surface [START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF]. This is largely due to the fact that most of these projections have been derived from integrated assessment models which simulate expected changes in the main land cover types and their impacts on climate through emission of greenhouse gases (de Chazal [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF][START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF]Harfoot et al., 2014b). A strong simplification of the representation of land use and land cover is inevitable due to the spatial extent and computational complexity of these models. Some studies have implemented downscaling methods based on spatial allocation rules to improve the representation of landscape composition in large-scale projections [START_REF] Verburg | Downscaling of land use change scenarios to assess the dynamics of European landscapes[END_REF]. Because their primary objective is to respond to the pressing need to assess future changes in climatic conditions and to explore climate change mitigation options, such downscaled projections use, however, only a small number of land cover types and are, consequently, of limited relevance for addressing the full impact of landscape structure and habitat fragmentation on biodiversity (de Chazal [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF][START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF]Harfoot et al., 2014b).
In addition, much of land system science has focused on conversions between land cover types (e.g.
from forest to open land through deforestation), but little attention has been paid to capture some of the most important dimensions of change for biodiversity that result from changes in land use withinand not only betweencertain types of land cover (de Chazal [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF][START_REF] Van Asselen | Land cover change or land-use intensification: simulating land system change with a global-scale land change model[END_REF][START_REF] Stürck | Simulating and delineating future land change trajectories across Europe[END_REF]. Changes in land management regimes (e.g. whether grasslands are mown or grazed) and intensity of use (e.g. through wood harvesting or the use of fertilizers, pesticides and irrigation in cultivated areas) are known to strongly impact biodiversity [START_REF] Pe'er | EU agricultural reform fails on biodiversity[END_REF] and are expected to cause unprecedented habitat modifications in the next decades (Laurance,
2001; [START_REF] Tilman | Forecasting Agriculturally Driven Global Environmental Change[END_REF]. For instance, management intensification of currently cultivated areas [START_REF] Meehan | Agricultural landscape simplification and insecticide use in the Midwestern United States[END_REF] rather than agricultural surface expansion will likely provide the largest contribution to the future increases in agricultural production [START_REF] Van Asselen | Land cover change or land-use intensification: simulating land system change with a global-scale land change model[END_REF]. These aspects of land use change remain poorly captured and integrated into currently available projections [START_REF] Rounsevell | Challenges for land system science[END_REF][START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF][START_REF] Stürck | Simulating and delineating future land change trajectories across Europe[END_REF]. Furthermore, the frequency and sequence of changes in land use and land cover, or the lifespan of certain types of land cover, interact with key ecological processes and determine the response of biodiversity to such changes [START_REF] Kleyer | Mosaic cycles in agricultural landscapes of Northwest Europe[END_REF][START_REF] Watson | Land-use change: incorporating the frequency, sequence, time span, and magnitude of changes into ecological research[END_REF]. Although methods have become available to represent the dynamics and the expected trajectories of the land system [START_REF] Rounsevell | Challenges for land system science[END_REF], these temporal dimensions of change are still rarely incorporated in land use/cover change projections [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF]Harfoot et al., 2014b). This lack of integration between ecological and land system sciences limits the ability to make credible evaluations of the future response of biodiversity to land use and land cover changes in interaction with climate change (de Chazal [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF]Harfoot et al., 2014b). In turn, this makes it hazardous to speculate that the expected rate and magnitude of climate change will downplay the effects of land use/cover change on biodiversity in the future. There is no consensus on how the strength of future climate change impact should be compared to that of other threats such as changes in land use and land cover [START_REF] Tingley | Climate change must not blow conservation off course[END_REF]. Some of the few studies that included the combined effect of both types of drivers in biodiversity scenarios have stressed that, although climate change will severely affect biodiversity at some point in the future, land use/cover change may lead to more immediate and even greater biodiversity decline in some terrestrial ecosystems [START_REF] Jetz | Projected impacts of climate and land-use change on the global diversity of birds[END_REF][START_REF] Pereira | Scenarios for global biodiversity in the 21st century[END_REF][START_REF] Visconti | Projecting Global Biodiversity Indicators under Future Development Scenarios[END_REF]. 
For example, considerable habitat loss is predicted in some regions during the next few decades due to increasing pressures to convert natural habitats into agricultural areas [START_REF] Lambin | Global land use change, economic globalization, and the looming land scarcity[END_REF]. The rapid conversion of tropical forests and natural grasslands for agriculture, timber production and other land uses [START_REF] Laurance | Saving logged tropical forests[END_REF] is expected to have more significant impacts on biodiversity than climate in the near future [START_REF] Jetz | Projected impacts of climate and land-use change on the global diversity of birds[END_REF][START_REF] Laurance | Biodiversity scenarios: projections of 21st century change in biodiversity and associated ecosystem services[END_REF]. Again, most of these studies focused on changes
that will emerge from conversions between different types of land cover and only few of them addressed the future impacts of land use change within certain types of land cover. For instance, the distribution changes of broad habitat types were predicted under future climate, land use and CO 2 change projections in Europe and it was shown that land use change is expected to have the greatest effects in the next few decades [START_REF] Lehsten | Disentangling the effects of land-use change, climate and CO2 on projected future European habitat types[END_REF]. In this region, effects of land use change might lead to both a loss and a gain of habitats benefitting different aspects of biodiversity. This will likely happen through parallel processes of intensification and abandonment of agriculture that offer potential for recovering wilderness areas [START_REF] Henle | Identifying and managing the conflicts between agriculture and biodiversity conservation in Europe -A review[END_REF][START_REF] Queiroz | Farmland abandonment: threat or opportunity for biodiversity conservation? A global review[END_REF]. These immediate effects of land use/cover changes on biodiversity deserve further attention with regard to the ecological forecast horizon, i.e. how far into the future useful predictions can be made [START_REF] Petchey | The ecological forecast horizon, and examples of its uses and determinants[END_REF]. Immediate changes in land use/cover may significantly alter the ability of ecological systems to deal with the impacts of climate change that are expected to be increasingly severe in the future [START_REF] Tingley | Climate change must not blow conservation off course[END_REF]. Hence, ecological predictions that neglect the immediate effects of land use/cover changes and only focus on the effects of climate change in a distant future may be largely uncertain. It is therefore needed to identify appropriate time horizons for biodiversity scenarios, with increased reliance on those associated with greater predictability and higher policy relevance [START_REF] Petchey | The ecological forecast horizon, and examples of its uses and determinants[END_REF].
Climate change will exert severe impacts on the land system, but the way humans are managing the land will also influence climatic conditions, so that both processes interact with each other. For instance, deforestation and forest management constitute a major source of carbon loss with direct impacts on the carbon cycle and indirect effects on climate [START_REF] Pütz | Long-term carbon loss in fragmented Neotropical forests[END_REF][START_REF] Naudts | Europe's forest management did not mitigate climate warming[END_REF].
Climate change mitigation strategies include important modifications of the land surface such as the increased prevalence of biofuel crops. This mitigation action may pose some conflicts between important areas for biodiversity conservation and bioenergy production [START_REF] Alkemade | GLOBIO3: A Framework to Investigate Options for Reducing Global Terrestrial Biodiversity Loss[END_REF][START_REF] Fletcher | Biodiversity conservation in the era of biofuels: risks and opportunities[END_REF][START_REF] Meller | Balance between climate change mitigation benefits and land use impacts of bioenergy: conservation implications for European birds[END_REF]. In integrated assessment models or other global land use models, such interactions are often restricted to impacts of climate change on crop productivity and shifts in potential production areas. These models neglect a wide range of human adaptive responses
to climate change in the land system [START_REF] Rounsevell | Towards decision-based global land use models for improved understanding of the Earth system[END_REF], such as spatial displacement of activities [START_REF] Lambin | Global land use change, economic globalization, and the looming land scarcity[END_REF]) that may pose a significant threat to biodiversity [START_REF] Estes | Using changes in agricultural utility to quantify future climate-induced risk to conservation[END_REF]. Increased attention to the feedback effects between climate and land use/cover changes is therefore needed to help assessing the full range of consequences of the combined impacts of these driving forces on biodiversity in the future.
Both climate and land use/cover changes are constrained or driven by large-scale forces linked to economic globalization, but the actual changes in land use/cover are largely determined by local factors [START_REF] Lambin | The causes of land-use and land-cover change: moving beyond the myths[END_REF][START_REF] Lambin | Global land use change, economic globalization, and the looming land scarcity[END_REF][START_REF] Rounsevell | Challenges for land system science[END_REF]. Modifications in the land system are highly location-dependent and a reflection of the local biophysical and socioeconomic constraints and opportunities [START_REF] Rounsevell | Towards decision-based global land use models for improved understanding of the Earth system[END_REF]. In Europe, observed changes in agricultural practices in response to increased market demands and globalization of commodity markets include the intensification of agriculture, the abandonment of marginally productive areas, and the changing scale of agricultural operations. These processes occur at the same time but at different locations across the continent [START_REF] Henle | Identifying and managing the conflicts between agriculture and biodiversity conservation in Europe -A review[END_REF][START_REF] Stürck | Simulating and delineating future land change trajectories across Europe[END_REF][START_REF] Van Vliet | Manifestations and underlying drivers of agricultural land use change in Europe[END_REF].
Hence, land use/cover change and its impacts on biodiversity are highly scale-sensitive processes: they show strongly marked contrasts from one location to the other [START_REF] Tzanopoulos | Scale sensitivity of drivers of environmental change across Europe[END_REF]. Many subtle changes that are locally or regionally significant for biodiversity may be seriously underestimated in the available land use/cover change projections because they are occurring below the most frequently used spatial, temporal and thematic resolution of analysis in large-scale land use models [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF]. Most statistical downscaling approaches based on spatial allocation rules neglect such scale-sensitivity issues and therefore fail to represent landscape composition and structure to appropriately address the local or regional impacts of land use/cover changes on biodiversity [START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF].
A multi-scale, integrated approach is therefore required to unravel the relative and interacting roles of climate, land use and land cover in determining the future of biodiversity across a range of temporal and spatial scales. A good example of this need is the prediction of the impacts of changes in
disturbance regimes, such as fire, for which idiosyncratic changes may be expected in particular combinations of future climate and land use/cover changes [START_REF] Brotons | How fire history, fire suppression practices and climate change affect wildfire regimes in Mediterranean landscapes[END_REF][START_REF] Regos | Predicting the future effectiveness of protected areas for bird conservation in Mediterranean ecosystems under climate change and novel fire regime scenarios[END_REF].
A way forward for biodiversity scenarios
Most large-scale land cover change projections are derived from integrated assessment models. They are coherent to some extent with climate change projections because they are based on the same socio-economic storylines. This coherence is useful for studying the interplay between different driving forces. Integrated assessment models capture human energy use, industrial development, agriculture and main land cover changes within a single modelling framework. However, their original, primary objective is to provide future predictions of greenhouse gas emissions. It is therefore important to recognise that these models are not designed to describe the most relevant aspects of land use and land cover changes for (changes in) biodiversity [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF]Harfoot et al., 2014b). Here, we provide two recommendations to increase the ecological relevance of land use/cover change projections: (1) reconciling local and global land use/cover modelling approaches and (2) incorporating important ecological processes in land use/cover models.
Novel and flexible downscaling and upscaling methods to reconcile global-, regional-and local-scale land use modelling approaches are critically required and constitute one of the most burning issues in land system science [START_REF] Letourneau | A land-use systems approach to represent land-use dynamics at continental and global scales[END_REF][START_REF] Rounsevell | Challenges for land system science[END_REF][START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF]. An important part of the land use modelling community focuses on the development of modelling and simulation approaches at local to regional scales where human decision-making and land use/cover change processes are incorporated explicitly [START_REF] Rounsevell | Towards decision-based global land use models for improved understanding of the Earth system[END_REF]. These models offer potential to include a more detailed representation of land use/cover trajectories than integrated assessment models. Beyond the classification of dominant land cover types, they inform on land use, intensity of use, management regimes, and other dimensions of land use/cover changes (van Asselen [START_REF] Van Asselen | Land cover change or land-use intensification: simulating land system change with a global-scale land change model[END_REF]). An integration of scales will provide the opportunity to better represent the interactions between local trajectories and global dynamics [START_REF] Kok | Biodiversity and ecosystem services require IPBES to take novel approach to scenarios[END_REF] and to deal more explicitly with scale-sensitive factors such as land use/cover changes [START_REF] Tzanopoulos | Scale sensitivity of drivers of environmental change across Europe[END_REF]. To achieve this integration, a strengthened connection between ecological and land use modelling communities is
needed as it would ensure that the spatial, temporal and thematic representation of changes in land use models matches with the operational scale at which biodiversity respond to these changes. Harfoot et al. (2014b) recently suggested development needs for integrated assessment models and recommended the general adoption of a user-centred approach that would identify why ecologists need land use/cover change projections and how they intend to use them to build biodiversity scenarios. Although we believe such an approach will also be needed to ensure the ecological relevance of integrating the different scales of analysis in land use models, this will only be successful if ecologists increase their use of already available land use/cover change projections and suggest concrete modifications to improve their ecological relevance [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF][START_REF] Martin | Testing instead of assuming the importance of land use change scenarios to model species distributions under climate change[END_REF]. To address the scale-sensitivity issue thoroughly, we should also move beyond the current emphasis on large and coarse scale of analysis in global change impact research and increase our recognition for studies examining the local and regional effects of climate and land use/cover changes on biodiversity.
Ecological processes in marine, freshwater or terrestrial ecosystems remain poorly incorporated in existing integrated assessment models and other land use models (Harfoot et al., 2014b). Ecological processes in natural and anthropogenic ecosystems provide essential functions, such as pollination, disease or pest control, nutrient or water cycling and soil stability, that exert a strong influence on land systems through complex mechanisms [START_REF] Sekercioglu | Ecosystem consequences of bird declines[END_REF][START_REF] Klein | Importance of pollinators in changing landscapes for world crops[END_REF].
Incorporating these processes at appropriate spatial and temporal scales in land use models constitutes an important challenge, but it would considerably increase the ecological realism of these models and, in turn, their ability to predict emergent behaviour of the future ecosystems and the related biodiversity patterns (Harfoot et al., 2014b). Therefore, we urge the need for strengthened interactions between different scientific communities to identify (1) which ecological processes are relevant in driving land use/cover dynamics and (2) how and at which scales these processes could be incorporated in land use models to predict the trajectories of socio-ecological systems.
A successful implementation of our two recommendations does not solely depend on collaborative scientific efforts, but it also requires societal agreement and acceptance. The dialogue with and engagement of stakeholders, such as policy advisers and NGOs, within a participatory modelling framework [START_REF] Voinov | Modelling with stakeholders[END_REF] will be key to agreeing on a set of biodiversity-oriented storylines and desirable pathways at relevant spatial and temporal scales for decision-making processes in biodiversity conservation and management. An improved integration of the expertise and knowledge from social science into the development and interpretation of the models may allow a better understanding of likely trajectories of land use/cover changes. Moreover, such an integration would provide a better theoretical understanding and practical use of social-ecological feedback loops in form of policy and management responses to changes in biodiversity and ecosystem services, which in turn will impact future land use decisions and trajectories.
The priority given to investigating future climate change impacts on biodiversity most likely reflects how the climate change community has attracted attention during the last decades. The availability of long-term time series of climatic observations in most parts of the world and the increasing amount of science-based, spatially explicit climatic projections derived from global and regional circulation models have clearly stimulated the development of studies focusing on the impacts of climate change [START_REF] Tingley | Climate change must not blow conservation off course[END_REF]Harfoot et al., 2014b). Under the World Climate Research Programme (WCRP), the working group on coupled modelling has established the basis for climate model diagnosis, validation, inter-comparison, documentation and accessibility [START_REF] Overpeck | Climate Data Challenges in the 21st Century[END_REF]. The requirements for climate policy, mediated through the IPCC, have further mobilized the use of a common reference in climate observations and simulations by the scientific community. The set of common future emission scenarios (SRES) released in 2000 [START_REF] Nakicenovic | Special Report on Emission Scenarios: A Special Report of Working Group III of the Intergovernmental Panel on Climate Change[END_REF], the more recent representative concentration pathways (RCPs) [START_REF] Van Vuuren | The representative concentration pathways: an overview[END_REF], and the fact that these
can be shared easily have played a major role in mobilizing the scientific community to use climate change projections in biodiversity scenarios. Work is underway to facilitate open access to land use/cover change time series and projections, but clear and transparent documentation of land use model representations, uncertainties and differences is also needed and should be understandable and interpretable by a broad interdisciplinary audience (Harfoot et al., 2014b).
The IPCC has also clearly demonstrated that an independent intergovernmental body is an appropriate platform for attracting the attention of the non-scientific community. Many actors now perceive climate change as an important threat to ecosystem functions and services. This emphasis can be heard in the media and among policy makers, such as during the United Nations conferences on climate change. As a response to the increasing societal and political relevance of climate change, research efforts have been mostly directed towards climate change impact assessments [START_REF] Herrick | Land degradation and climate change: a sin of omission?[END_REF]. From this observed success of the IPCC and the climate change community, it becomes evident that an independent body is needed for mobilizing the scientific and non-scientific communities to face the significant challenge of developing biodiversity-oriented references for land use and land cover change projections. With its focus on multi-scale, multi-disciplinary approaches, the working programme of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES) [START_REF] Inouye | IPBES: global collaboration on biodiversity and ecosystem services[END_REF][START_REF] Díaz | The IPBES Conceptual Frameworkconnecting nature and people[END_REF][START_REF] Lundquist | Engaging the conservation community in the IPBES process[END_REF] is offering a suitable context to stimulate collaborative efforts for taking up this challenge. In line with [START_REF] Kok | Biodiversity and ecosystem services require IPBES to take novel approach to scenarios[END_REF], we therefore encourage IPBES to strengthen its investment in the development and use of interoperable and plausible projections of environmental changes that will allow to better explore the future of biodiversity.
Conclusion
Neglecting the future impacts of land use and land cover changes on biodiversity and focusing on climate change impacts only is not a credible approach. We are concerned that such an overemphasis on climate change reduces the efficiency of identifying forward-looking policy and management responses to biodiversity decline. However, the current state of integration between ecological and land system sciences impedes the development of a comprehensive and well-balanced research agenda addressing the combined impacts of future climate, land use and land cover changes on biodiversity and ecosystem services. We recommend addressing two key areas of developments to increase the ecological relevance of land use/cover change projections: (1) reconciling local and
global land use/cover modelling approaches and (2) incorporating important ecological processes in land use/cover models. A multi-disciplinary framework and continuing collaborative efforts from different research horizons are needed and will have to build on the efforts developed in recent years by the climate community to agree on a common framework in climate observations and simulations.
It is now time to extend these efforts across scales in order to produce reference environmental change projections that embrace multiple pressures such as climate, land use and land cover changes. IPBES offers a timely opportunity for taking up this challenge, but this independent body can only do so if adequate research efforts are undertaken.
Figure captions
Figure 1. Biodiversity scenarios: a predictive tool to inform policy-makers on expected biodiversity responses (after [START_REF] Bellard | Impacts of climate change on the future of biodiversity[END_REF] with minor modifications) to future human-induced environmental changes. A great variety of ecological models integrate the nature, rate and magnitude of expected changes in environmental conditions and the capacity of organisms to deal with these changing conditions to generate biodiversity scenarios [START_REF] Thuiller | A road map for integrating eco-evolutionary processes into biodiversity models[END_REF].
Figure 2. Relative neglect of future land use and land cover change in biodiversity scenarios.
Acknowledgements
N.T., K.H., J.B.M., I.R.G., W.C. and L.B. acknowledge support from the EU BON project (no. 308454, FP7-ENV-2012, European Commission, Hoffmann et al., 2014). N.T. and L.B. were also funded by the TRUSTEE project (no. 235175, RURAGRI ERA-NET, European Commission). N.T., A.R. and L.B. were also supported by the FORESTCAST project (CGL2014-59742, Spanish Government). I.R.G. and W.C. contribute to the Labex OT-Med (no. ANR-11-LABX-0061) funded by the French Government through the A*MIDEX project (no. ANR-11-IDEX-0001-02). P.H.V.
received funding from the GLOLAND project (no. 311819, FP7-IDEAS-ERC, European Commission). We thank Piero Visconti and one anonymous reviewer for useful comments on a previous version of this paper.
Iyengar L, Jeffries B, Oerlemans N). WWF International, Gland, Switzerland. Zurell D, Thuiller W, Pagel J et al. (2016) Benchmarking novel approaches for modelling species range dynamics. Global Change Biology, accepted, doi: 10.1111/gcb.13251.
Supporting Information captions
Appendix S1. Sensitivity analysis to examine the effect of sample size in the literature survey.
Figure S1. Effect of sample size in the literature survey.
Tables
Table 1. We used Boolean operators "AND" to combine the different queries and we refined the obtained results using "Articles" as Document Type and using "Ecology" or "Biodiversity conservation" as Web of Science Categories. We also tested if the parameters that we used in the query #3 might potentially underestimate the number of studies focusing on land use/cover change. To do so, we tried to capture land use/cover change in a broader sense and we included additional parameters in the query #3 as follows: ("climat* chang*" OR "chang* climat*") OR ("land use chang*" OR "land cover chang*" OR "land* system* chang*" OR "land* chang*" OR "habitat loss*" OR "habitat degradation*" OR "habitat chang*" OR "habitat modification*"). We refined the results as described above and we obtained a list of 2,388 articles, that is, only 75 additional articles compared to the search procedure with the initial query #3 (see main text). Hence, the well-balanced design of the search procedure as described in the table does not underestimate the use of land use/cover change projections compared to climate change projections in biodiversity scenarios studies.
"174212",
"18543"
] | [
"234416",
"237629",
"442190",
"188653",
"62433",
"442190"
] |
01444653 | en | [
"sde"
] | 2024/03/05 22:32:13 | 2016 | https://hal.science/hal-01444653/file/huggel_etal_resubm_final.pdf | Christian Huggel
email: [email protected]
Ivo Wallimann-Helmer
Dáithí Stone
email: [email protected]
Wolfgang Cramer
email: [email protected]
Reconciling justice and attribution research to advance climate policy
The Paris Climate Agreement is an important step for international climate policy, but the compensation for negative effects of climate change based on clear assignment of responsibilities remains highly debated. Both from a policy and science perspective, it is unclear how responsibilities should be defined and on what evidence base. We explore different normative principles of justice relevant to climate change impacts, and ask how different forms of causal evidence of impacts drawn from detection and attribution research could inform policy approaches in accordance with justice considerations. We reveal a procedural injustice based on the imbalance of observations and knowledge of impacts between developed and developing countries. This type of injustice needs to be considered in policy negotiations and decisions, and efforts be strengthened to reduce it.
The Paris Agreement 1 of the United Nations Framework Convention on Climate Change (UNFCCC) is considered an important milestone in international climate policy. Among the most critical points during the Paris negotiations were issues related to climate justice, including the question about responsibilities for the negative impacts of anthropogenic climate change. Many developing countries continued to emphasize the historical responsibility of the developed world. On the other hand, developed countries were not willing to bear the full burden of climate responsibilities, reasons among others being the current high levels of greenhouse gas emissions and substantial financial power of some Parties categorized as developing countries (i.e. Non-Annex I) in the UNFCCC. Many Annex I Parties were particularly uncomfortable with the issue of 'Loss and Damage' (L&D), which is typically defined as the residual, adverse impacts of climate change beyond what can be addressed by mitigation and adaptation [START_REF] Warner | Loss and damage from climate change: local-level evidence from nine vulnerable countries[END_REF][START_REF] Okereke | Working Paper 19 pp[END_REF] . Although L&D is now anchored in the Paris Agreement in a separate article (Article 8) [START_REF] Cramer | Adoption of the Paris Agreement[END_REF] , questions of responsibility and claims for compensation of negative impacts of climate change basically remain unsolved.
Claims for compensation, occasionally also called climate 'reparations' [START_REF] Burkett | Climate Reparations[END_REF] , raise the question of who is responsible for which negative climate change impacts, how to define such responsibilities and on the basis of what type of evidence. Scientific evidence has become increasingly available from recent studies and assessments, termed "detection and attribution of climate change impacts", revealing numerous discernable impacts of climate change on natural, managed and human systems worldwide [START_REF] Rosenzweig | Detection and attribution of anthropogenic climate change impacts[END_REF][START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF][START_REF] Hansen | Assessing the observed impact of anthropogenic climate change[END_REF] . In some cases, these impacts have been found to be substantial, but often the effects of multiple non-climatic drivers ('confounders') acting on natural and especially human and managed systems (e.g. land-use change, technical developments) have either been greater than the effect of climate change or have rendered attempts to determine the relative importance thereof difficult. A significant portion of attribution research has focused on the effects of increased atmospheric greenhouse gas concentrations on extreme weather events, yet usually without adopting an impacts perspective [START_REF]Attribution of extreme weather events in the context of climate change. 144[END_REF] . Recent studies have therefore emphasized the need for a more comprehensive attribution framework that considers all components of risk (or L&D), including vulnerability and exposure of assets and values in addition to climate hazards [START_REF] Huggel | Loss and damage attribution[END_REF] . Other contributions have discussed the role of attribution analysis for adaptation and L&D policies [START_REF] Allen | The blame game[END_REF][START_REF] Pall | Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000[END_REF][START_REF] Hulme | Attributing weather extremes to 'climate change' A review[END_REF][START_REF] James | Characterizing loss and damage from climate change[END_REF] .
How detection and attribution research could inform, or engage with climate policy and justice debates is currently largely unclear. Some first sketches of a justice framework to address the assignment of responsibility for L&D have recently been developed [START_REF] Thompson | Ethical and normative implications of weather event attribution for policy discussions concerning loss and damage[END_REF][START_REF] Wallimann-Helmer | Justice for climate loss and damage[END_REF] . However, the question of which type of evidence would best cohere with each of the various concepts of justice has not been addressed despite its importance for the achievement of progress in international climate policy.
In this Perspective we explore the different concepts and dimensions of normative justice research relevant to issues of climate change impacts (see Textbox 1). We adopt a normative perspective and analyze how the application of principles of justice can inform respective political and legal contexts.
We study the extent to which different forms of scientific evidence on climate change impacts, including detection and attribution research (see Textbox 2), can contribute to, or inform, the respective justice questions and related policy debates. Normative principles of justice define who is morally responsible for an impact and how to fairly distribute the burdens of remedy. In the political and in particular in the legal context liability defines an agent's legal duties in case of unlawful behavior [START_REF] Hayward | Climate change and ethics[END_REF] . Liability of an agent for climate change impacts defines a legal duty to pay for remedy of the negative effects. Liability can comprise compensation for L&D but also, for instance, include fines [START_REF] Hart | Causation in the law[END_REF][START_REF] Honoré | Stanford Encyclopedia of Philosophy[END_REF] .
In the following we first address questions of liability and compensation and why a potential implementation faces many hurdles on the scientific, political and legal level. We then consider the role that recognition of moral responsibilities for climate change impacts could play in fostering political reconciliation processes. Third, we explore the feasibility of the principle of ability to assist (or pay) and focus on risk management mechanisms as a response to immediate and preventive needs. Finally, we address the uneven distribution of knowledge about impacts across the globe as assessed in the 5 th Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC), and reveal an additional injustice on a procedural level with important further implications for policy and science.
BEGIN TEXT BOX 1: Justice principles relevant for climate change impacts
International climate policy is loaded with moral evaluations. The fact that emissions of greenhouse gases from human activities lead to climate change is not morally blameworthy as such. In order to assess emissions as ethically relevant it is necessary to evaluate their consequences based on normative principles. The level at which climate change is "dangerous" in an ethically significant sense has to be defined. Similarly, normative principles become relevant when differentiating responsibilities in order to deal with the adverse effects of climate change [START_REF] Hayward | Climate change and ethics[END_REF][START_REF] Mckinnon | Climate justice in a carbon budget[END_REF][START_REF] Pachauri | Climate ethics: Essential readings[END_REF] . In climate policy, as reflected in normative climate justice research, the following principles are relevant for establishing who bears responsibility for climate change impacts and for remedying those impacts:
Polluter-Pays-Principle (PPP): It is commonly accepted that those who have contributed or are contributing more to anthropogenic climate change should shoulder the burdens of minimizing and preventing climate change impacts in proportion to the magnitude of their contribution to the problem. From a PPP perspective, it is not only high-emitting developed countries that are called into responsibility to share the burden and assist low-emitting communities facing climate change risks, but also high-emitting developing countries [START_REF] Page | Distributing the Burdens of Climate Change[END_REF][START_REF] Gardiner | Ethics and Global Climate Change[END_REF][START_REF] Shue | Global Environment and International Inequality[END_REF] .
Beneficiary-Pays-Principle (BPP):
The BPP addresses important ethical challenges emerging from the PPP, such as the fact that some people have profited from past emissions without having directly contributed to anthropogenic climate change themselves. The BPP claims that those benefitting from the high emissions of others (e.g. their ancestors or other high-emitting co-citizens) are held responsible to assist those impacted by climate change irrespective of whether they themselves caused these emissions [START_REF] Page | Distributing the Burdens of Climate Change[END_REF][START_REF] Halme | Carbon Debt and the (In)significance of History[END_REF][START_REF] Gosseries | Historical Emissions and Free-Riding[END_REF][START_REF] Baatz | Responsibility for the Past? Some Thoughts on Compensating Those Vulnerable to Climate Change in Developing Countries[END_REF] .
Ability-to-Pay-Principle (APP):
The PPP and BPP both establish responsibilities irrespective of the capacity of the duty-bearers to contribute to climate change measures or reduce emissions. This can result in detrimental situations for disadvantaged high emitters and beneficiaries, be it individuals or countries. Following the APP only those capable of carrying burdens are responsible to contribute to climate change measures or emission reductions [START_REF] Page | Distributing the Burdens of Climate Change[END_REF][START_REF] Shue | Global Environment and International Inequality[END_REF][START_REF] Caney | Cosmopolitan Justice, Responsibility, and Global Climate Change[END_REF] .
In this Perspective, we deal with the APP under the label of "Ability-to-Assist-Principle" (AAP) in order to broaden the perspective beyond monetary payments toward consideration of assistance with climate change impacts more generally. Furthermore, we do not address the difference between the PPP and the BPP because to a large extent the sets of duty-bearers identified by the two principles overlap. None of the above principles provides any natural guidance on the threshold for emissions in terms of quantity or historical date at which they become a morally relevant contribution to dangerous climate change.
END TEXT BOX 1: Justice principles relevant for climate change impacts
BEGIN TEXT BOX 2: Evidence that climate change has impacted natural and human systems
Scientific evidence that human-induced climate change is impacting natural and humans systems can come in a number of forms, each having different applications and implications [START_REF] Huggel | Potential and limitations of the attribution of climate change impacts for informing loss and damage discussions and policies[END_REF] . We draw here an analogy to U.S. environmental litigation [START_REF] Schleiter | Proving Causation in Environmental Litigation[END_REF] where typically two types of causation are relevant: "general causation" refers to the question of whether a substance is capable of causing a particular damage, injury or condition, while "specific causation" refers to a particular substance causing a specific individual's injury.
In the line of general causation, evidence for the potential existence of anthropogenic climate change impacts is relatively abundant (for more examples and references see the main text). Long-term monitoring may, for instance, reveal a trend toward more frequent wildfires in an unpopulated area.
These observations may have little to say about the relevance of climate change, or of emissions for that climate change, but they can be useful for highlighting the potential urgency of an issue.
Another form of evidence may come from a mechanistic understanding of how a system should respond to some change in its environmental conditions. The ranges of plant and animal species may, for instance, shift polewards in response to an observed or expected warming. In this case, the relevance to human-induced climate change may be explicit, but it remains unclear whether the range shifts have indeed occurred.
In order to be confident that an impact of anthropogenic climate change has indeed occurred, more direct evidence is required, akin to "specific evidence" in U.S. environmental litigation [START_REF] Schleiter | Proving Causation in Environmental Litigation[END_REF] . The most complete set of information for understanding past changes in climate and its impacts, commonly referred to as "detection and attribution", combines observational and mechanistic evidence, by confronting predictions of recent changes based on our mechanistic understanding with observations of long-term variations [START_REF] Stone | The challenge to detect and attribute effects of climate change on human and natural systems[END_REF] . These analyses address two questions: the first, detection, examines whether the natural or human system has indeed been affected by anthropogenic climate change, versus changes that may be related to natural climate variability or non-climatic factors. The second, attribution, estimates the magnitude of the effect of anthropogenic climate change as compared to the effect of other factors. These other factors (also termed 'confounders') might be considered external drivers of the observed change (e.g. deforestation driving land-cover changes).
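To make the detection step above more tangible, the following short Python sketch (an illustrative toy example with invented data and a simple resampling null model, not a method used in this article or by the IPCC) asks whether an observed multi-decadal trend is unusual compared with trends expected from natural variability alone:

import numpy as np

rng = np.random.default_rng(42)

def linear_trend(series):
    # Least-squares slope per time step of a 1-D series.
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0]

# Hypothetical 50-year observed record of an impact-relevant variable.
years = 50
observed = 0.03 * np.arange(years) + rng.normal(0.0, 0.5, years)

# Null distribution of trends from synthetic series with no forced change,
# standing in for unforced "control" climate variability.
null_trends = np.array([linear_trend(rng.normal(0.0, 0.5, years)) for _ in range(5000)])

obs_trend = linear_trend(observed)
# Two-sided p-value: how often does natural variability alone produce a trend this large?
p_value = np.mean(np.abs(null_trends) >= abs(obs_trend))
print(f"observed trend: {obs_trend:.4f} per year; p-value under natural variability: {p_value:.3f}")

A small p-value would correspond to "detection" in the sense used above; attribution would additionally require comparing the magnitude of the observed change against the responses expected from anthropogenic and other drivers.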
Impacts of multi-decadal trends in climate have now been detected in many different aspects of natural and human systems across the continents and oceans of the planet [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] . Analysis of the relevant climate trends suggests that anthropogenic emissions have played a major role in at least two thirds of the impacts induced by warming, but few of the impacts resulting from precipitation trends can yet be confidently linked to anthropogenic emissions [START_REF] Hansen | Assessing the observed impact of anthropogenic climate change[END_REF] . Overall, research on detection and attribution of climate change impacts is still emerging and there remain few studies available that demonstrate a causal link between anthropogenic emissions, climate trends and impacts.
END TEXT BOX 2: Evidence that climate change has impacted natural and human systems
Liability and compensation
Compensation of those who suffer harm by those responsible for the harm, and more specifically, responsible for the negative impacts of climate change, represents a legitimate claim from the perspective of normative justice research [START_REF] Shue | Global Environment and International Inequality[END_REF][START_REF] Goodin | Theories of Compensation[END_REF][START_REF] Miller | Global justice and climate change: How should responsibilities be distributed[END_REF][START_REF] Pogge | World poverty and human rights: Cosmopolitan responsibilities and reforms[END_REF] . In their most common understanding, principles such as the PPP or BPP provide the justice framework to identify those responsible for climate change impacts and establish a basis for liability and compensation (see Textbox 1). However, issues of compensation have not yet been sufficiently clarified and remain contested in international climate policy. Driven by the pressure exerted by countries such as the U.S. and others, the notion that L&D involves or provides a basis for liability and compensation has been explicitly excluded in the decisions taken in Paris 2015 [START_REF] Cramer | Adoption of the Paris Agreement[END_REF] . L&D has previously been thought to require consideration of causation, as well as the deviations from some (possibly historical) baseline condition [START_REF] Verheyen | Beyond Adaptation-The legal duty to pay compensation for climate change damage[END_REF] . The Paris Agreement and related discussions have not offered any clarity about what type of evidence would be required for claims of liability and compensation to be legitimate, either from a normative perspective considering different principles of justice (see Textbox 1) or in relation to legal mechanisms under international policy. Liability and compensation represent the strongest and most rigid reference frame to clarify who is responsible to remedy climate change impacts, but also involve major challenges, both in terms of policy and science, as we will outline below. Liability and compensation involve clarification of impacts due to climate variability versus anthropogenic climate change, since no one can be morally blamed or held legally liable for negative impacts wholly resulting from natural climate variability [START_REF] Page | Distributing the Burdens of Climate Change[END_REF][START_REF] Gardiner | Ethics and Global Climate Change[END_REF][START_REF] Caney | Cosmopolitan Justice, Responsibility, and Global Climate Change[END_REF] . Accordingly, and as further detailed below, we suggest that here the strongest scientific evidence in line with specific causation is required, i.e. detection and attribution (see Textbox 2).
Figure 1 sketches a detection and attribution framework as it has been developed in recent research [START_REF] Stone | The challenge to detect and attribute effects of climate change on human and natural systems[END_REF][START_REF] Hansen | Linking local impacts to changes in climate: a guide to attribution[END_REF] and assessments [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] . It reflects the relation of climatic and non-climatic drivers and detected climate change impacts in both natural and human systems at a global scale. As a general guideline, changes in many physical, terrestrial and marine ecosystems are strongly governed by climatic drivers such as regional changes in average or extreme air temperature, precipitation, or ocean water temperature. Due to the high likelihood of a major anthropogenic role in observed trends in these regional climate drivers, there is accordingly potential for high confidence in detection and attribution of related impacts of anthropogenic climate change [START_REF] Hansen | Assessing the observed impact of anthropogenic climate change[END_REF] .
The negative impacts of climate change potentially relevant for liability and compensation usually concern human systems, and for these climatic drivers are typically less important than for natural systems: any anthropogenic climate effect can be outweighed by the magnitude of socio-economic changes, for instance considered in terms of exposure and vulnerability (e.g. expansion of exposed assets or people, or increasing climate resilient infrastructure). As a consequence, as documented in the IPCC AR5 and subsequent studies [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF][START_REF] Hansen | Assessing the observed impact of anthropogenic climate change[END_REF] , there is currently only low confidence in the attribution of a major climate change role in impacts on human systems, except for polar and high mountain regions where livelihood conditions are strongly tied to climatic and cryospheric systems (Fig. 1).
In order to establish confidence in the detection of impacts, long-term, reliable, high-quality observations, as well as better process understanding, are crucial for both natural and human systems. Assuming that some substantial level of confidence will be required for issues of liability and compensation, we need to recognize that a very high bar is set by requiring high-quality observations over periods of several decades. Precisely these requirements are likely one of the reasons why studies of detection and attribution of impacts to anthropogenic climate change are still rare.
In the context of liability and compensation a separate pathway to climate policy is being developed in climate litigation under existing laws. In some countries such as the U.S. climate litigation has been used to advance climate policy but so far only a small fraction of lawsuits have been concerned with questions of rights and liabilities as related to damage or tort due to climate change impacts [START_REF] Peel | Climate Change Litigation[END_REF][START_REF] Markell | An Empirical Assessment of Climate Change In The Courts: A New Jurisprudence Or Business As Usual?[END_REF] . In the U.S. where by far the most such lawsuits are documented worldwide, several cases on imposing monetary penalties or injunctive relief on greenhouse gas emitters have been brought to court but so far, all of them have ultimately failed [START_REF] Wilensky | Climate change in the courts: an assessment of non-U.S. climate litigation[END_REF] . One of the most prominent lawsuits is known as California v. General Motors where the State of California claimed monetary compensation from six automakers for damage due to climate change under the tort liability theory of public nuisance.
Damages specified for California included reduced snow pack, increased coastal erosion due to rising sea levels, and increased frequency and duration of extreme heat events. As with several other lawsuits, the case was dismissed on the grounds that non-justiciable political questions were raised.
Further legal avenues that have been taken and researched with respect to the negative impacts of climate change include human rights in both domestic and international law [START_REF] Verheyen | Beyond Adaptation-The legal duty to pay compensation for climate change damage[END_REF][START_REF] Mcinerney-Lankford | Human Rights and Climate Change: A Review of the International Legal Dimensions[END_REF][START_REF] Posner | Climate Change and International Human Rights Litigation: A Critical Appraisal[END_REF] .
Generally, currently available experience cannot sufficiently clarify what type of evidence would be needed in court to defend a legal case on climate change liability. However, there is useful precedent from litigation over harm caused by exposure to toxic substances, where typically specific causation is required [START_REF] Farber | Basic Compensation for Victims of Climate Change[END_REF]. Hence, in our context this translates into detection and attribution of impacts of anthropogenic climate change.
Overall, experience so far indicates that the hurdles are considerable, and they may range from aspects of justiciability, to the proof required for causation, to the applicability of the no-harm rule established in international law or of the application of extraterritoriality in human rights law [START_REF] Schleiter | Proving Causation in Environmental Litigation[END_REF][START_REF] Mcinerney-Lankford | Human Rights and Climate Change: A Review of the International Legal Dimensions[END_REF][START_REF] Weisbach | Negligence, strict liability, and responsibility for climate change[END_REF][START_REF] Maldonado | The impact of climate change on tribal communities in the US: displacement, relocation, and human rights[END_REF] .
Based on these challenges and on the analysis of precedents from cases with harm due to exposure to toxic substances, some scholars favor ex-ante compensation as compared to ex-post compensation and refer to experiences with monetary disaster funds used to compensate affected victims [START_REF] Farber | Basic Compensation for Victims of Climate Change[END_REF]. It is interesting to note that ex-ante compensation is claimed in one of the very few lawsuits on climate change liability that have been accepted by a court. In this currently ongoing legal case at a German court, a citizen of the city of Huaraz in Peru is suing RWE, a large German energy producer, for their cumulative emissions causing an increased local risk of floods from a glacier lake in the Andes that formed as glaciers receded. Specific causation is likely required for this case, but additional difficulty arises from proving the relation of harm of an individual to emissions. From an attribution point of view, governments are in a better position to claim compensation than individuals because damages due to climate change can be aggregated over time and space over their territory and/or economic interests [START_REF] Grossman | Adjudicating climate change: state, national, and international approaches[END_REF].
In conclusion, at the current state of legal practice, political discussions and available scientific evidence, significant progress in terms of liability and compensation seems rather unlikely in the near future. Politically, creating a monetary fund in line with considerations of ex-ante compensation may yet be the most feasible mechanism. In the following, we present two alternative approaches to achieve justice in relation to climate change impacts.
Recognition of responsibilities and reconciliation
As a first alternative we refer to the notion that legitimate claims of justice may extend beyond questions of liability and compensation, involving instead restorative justice, and more specifically recognition and acknowledgement of moral responsibilities for climate change impacts [START_REF] Thompson | Ethical and normative implications of weather event attribution for policy discussions concerning loss and damage[END_REF] . Following from that, we argue that recognition of responsibilities would be a first important step in any process of reconciliation.
Reconciliation is often discussed in the context of normative restorative (or transitional) justice research, which typically relates to the aftermath of violence and repression [START_REF] May | Restitution, and Transitional Justice. Moral[END_REF][START_REF] Roberts | Encyclopedia of Global Justice[END_REF] . In this context it is argued that recognition of wrongs is important in order to attain and maintain social stability [START_REF] Eisikovits | Stanford Encyclopedia of Philosophy[END_REF] . In the case of the negative effects of climate change, recognition could play a similar role. However, since the most negative effects of climate change will occur at least several decades from now, ex-ante recognition of responsibilities of climate change impacts would be required to support maintaining social stability. Recognition of responsibilities neither is the final step nor does it exclude the possibility of compensation, but we suggest it can represent a fundamental element in the process, especially where recovery has limitations. This is particularly the case when impacts of climate change are irreversible, such as for submersion of low-lying islands, permafrost thawing in the Arctic, or loss of glaciers in mountain regions [START_REF] Bell | Environmental Refugees: What Rights? Which Duties?[END_REF][START_REF] Byravan | The Ethical Implications of Sea-Level Rise Due to Climate Change[END_REF][START_REF] Heyward | New Waves in Global Justice[END_REF][START_REF] Zellentin | Climate justice, small island developing states & cultural loss[END_REF] .
On the level of scientific evidence, recognition of responsibilities as a first step in a reconciliation process implies clarification of those who caused, or contributed to, negative impacts of anthropogenic climate change, and of those who suffer the damage and losses. If the goal is a practical first step in a reconciliation process between those generally contributing to and those generally being impacted by climate change, rather than experiencing a specific impact, then we argue that basic understanding of causation (i.e. general causation) could provide sufficient evidence.
Understanding of general causation (see Textbox 2) can rely on multiple lines of evidence collected from observations, modeling or physical understanding, but not all are necessarily required and nor do they all have to concern the exact impact and location in question [START_REF] Huggel | Potential and limitations of the attribution of climate change impacts for informing loss and damage discussions and policies[END_REF] . According to physical understanding, for instance, warming implies glacier shrinkage and thus changes in the contribution of ice melt to river runoff [START_REF] Kaser | Contribution potential of glaciers to water availability in different climate regimes[END_REF][START_REF] Schaner | The contribution of glacier melt to streamflow[END_REF] or formation and growth of glacier lakes with possible lake outburst floods and associated risks [START_REF] Iribarren Anacona | Hazardous processes and events from glacier and permafrost areas: lessons from the Chilean and Argentinean Andes[END_REF][START_REF] Allen | hydrometeorological triggering and topographic predisposition[END_REF] . As another example, given the sensitivity of crops such as grapes or coffee to changes in temperature, precipitation, and soil moisture [START_REF] Jaramillo | Climate Change or Urbanization? Impacts on a Traditional Coffee Production System in East Africa over the Last 80 Years[END_REF][START_REF] Hannah | Climate change, wine, and conservation[END_REF][START_REF] Moriondo | Projected shifts of wine regions in response to climate change[END_REF] we can expect that yield, quality, phenology, pest and disease, planting site suitability and possibly supply chains may be affected [START_REF] Laderach | The Economic, Social and Political Elements of Climate Change[END_REF][START_REF] Holland | Climate Change and the Wine Industry: Current Research Themes and New Directions[END_REF][START_REF] Webb | Earlier wine-grape ripening driven by climatic warming and drying and management practices[END_REF][START_REF] Baca | An Integrated Framework for Assessing Vulnerability to Climate Change and Developing Adaptation Strategies for Coffee Growing Families in Mesoamerica[END_REF] . However, our understanding will be limited with respect to the exact magnitude of these impacts, especially along cascades of impacts from crop production to food supply. Further challenges arise from ongoing adaptation in human and managed systems, in particular for agricultural systems as demonstrated in recent studies [START_REF] Lobell | Climate change adaptation in crop production: Beware of illusions[END_REF][START_REF] Lereboullet | Socio-ecological adaptation to climate change: A comparative case study from the Mediterranean wine industry in France and Australia[END_REF] . Thus, while we suggest that understanding of general causation could serve the reconciliation processes, the value and limitations of this sort of evidence may vary among different types of impacts and is not likely to be sufficient to attain justice in the full sense. In climate policy, as the Paris negotiations have shown, many countries do in fact recognize some moral responsibility for impacts of climate change, but are reluctant to define any legal implications thereof in more detail.
Against this background, we believe that explicit recognition of moral responsibilities for climate change impacts plays a significant role in fostering cooperation among the Parties to the UNFCCC.
The ability to assist principle and risk management
Discourses on global justice provide the grounds for a second alternative beyond liabilities and compensation. A number of scholars offer arguments to distinguish between responsibilities to assist and claims for compensation from those liable for harm [START_REF] Miller | Holding Nations Responsible[END_REF][START_REF] Young | Responsiblity and Global Justice: A Social Connection Model[END_REF][START_REF] Jagers | Dual climate change responsibility: on moral divergences between mitigation and adaptation[END_REF][START_REF] Miller | National Responsibility and Global Justice[END_REF] . Ability to assist (AAP) is in line with the APP (see Textbox 1) and assumes an assignment of responsibilities proportional to economic, technological and logistic capacities. With regard to climate change impacts specifically, we argue that prioritizing the ability to assist is supported in the following contexts [START_REF] Wallimann-Helmer | Justice for climate loss and damage[END_REF][START_REF] Jagers | Dual climate change responsibility: on moral divergences between mitigation and adaptation[END_REF][START_REF] Wallimann-Helmer | Philosophy, Law and Environmental Crisis / Philosophie, droit et crise environnementale[END_REF] : when a projected climate impact is severe and immediate help is needed; when there is missing clarity on whether the party causing a negative impact did something morally wrong; or when the party responsible for the impact is not able to provide full recovery. It is important to note that prioritizing AAP does not mean that PPP and BPP should be dismissed altogether. Rather we think that AAP is more plausible and feasible in the aforementioned contexts than the other justice principles.
In the context of climate change impacts, we suggest that AAP includes an ex-ante component to facilitate prevention of and preparedness for L&D. Many different mechanisms exist to meet responsibilities to assist in the aforementioned sense and context, including reconstruction, programs to strengthen preparedness and institutions responsible for risk management, or technology transfer. Most of these mechanisms can be accommodated under the perspective of integrative risk management [START_REF] Mechler | Managing unnatural disaster risk from climate extremes[END_REF] .
Appropriate identification and understanding of risks, and how risks change over time, is an important prerequisite for risk management. In the IPCC AR5 risk is defined as a function of (climate) hazard, exposure of assets and people, and their vulnerability [START_REF] Oppenheimer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] . For the climate hazard component of risks, extreme weather events are a primary concern. A large number of studies have identified observed trends in extreme weather, both globally [START_REF] Hansen | Perception of climate change[END_REF][START_REF] Hartmann | Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change[END_REF][START_REF] Westra | Global Increasing Trends in Annual Maximum Daily Precipitation[END_REF] and regionally [START_REF] Skansi | Warming and wetting signals emerging from analysis of changes in climate extreme indices over South America[END_REF][START_REF] Donat | Reanalysis suggests long-term upward trends in European storminess since 1871[END_REF] , and have examined their relation to anthropogenic climate change [START_REF] Bindoff | Climate Change 2013: The Physical Science Basis[END_REF][START_REF] Otto | Attribution of extreme weather events in Africa: a preliminary exploration of the science and policy implications[END_REF] . Particularly challenging and debated is the attribution of single extreme weather events to anthropogenic climate change [START_REF]Attribution of extreme weather events in the context of climate change. 144[END_REF][START_REF] Bindoff | Climate Change 2013: The Physical Science Basis[END_REF][START_REF] Otto | Reconciling two approaches to attribution of the 2010 Russian heat wave[END_REF][START_REF] Trenberth | Attribution of climate extreme events[END_REF][START_REF] Seneviratne | Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation. A Special Report of Working Groups I and II of the Intergovernmental Panel on Climate Change[END_REF] . On the other hand, disaster risk studies focusing on L&D due to extreme weather events generally have concluded that the observed strong increase in monetary losses is primarily due to changes in exposure and wealth [START_REF] Bouwer | Have Disaster Losses Increased Due to Anthropogenic Climate Change? Bull[END_REF][START_REF] Barthel | A trend analysis of normalized insured damage from natural disasters[END_REF][START_REF] Ipcc | Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation. A Special Report of Working Groups I and II of the Intergovernmental Panel on Climate Change[END_REF] , with a dynamic contribution from vulnerability [START_REF] Mechler | Understanding trends and projections of disaster losses and climate change: is vulnerability the missing link?[END_REF] . For instance, for detected changes in heat related human mortality, changes in exposure, health care or physical infrastructure and adaptation are important drivers and often outweigh the effects of climate change [START_REF] Christidis | Causes for the recent changes in cold-and heatrelated mortality in England and Wales[END_REF][START_REF] Oudin Åström | Attributing mortality from extreme temperatures to climate change in Stockholm, Sweden[END_REF][START_REF] Arent | Climate Change 2014: Impacts, Adaptation, and Vulnerability. 
Part A: Global and Sectoral Aspects[END_REF][START_REF] Smith | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] .
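As a minimal illustration of this risk framing, the Python sketch below (a toy multiplicative index with invented numbers, not an IPCC formula) shows how the same hazard can translate into very different risk levels depending on exposure and vulnerability:

def risk_index(hazard, exposure, vulnerability):
    # Toy multiplicative risk index; each component is assumed normalized to [0, 1].
    for name, value in (("hazard", hazard), ("exposure", exposure), ("vulnerability", vulnerability)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must lie in [0, 1], got {value}")
    return hazard * exposure * vulnerability

# Invented values for a hypothetical settlement under three situations.
baseline = risk_index(hazard=0.4, exposure=0.6, vulnerability=0.5)
stronger_hazard = risk_index(hazard=0.6, exposure=0.6, vulnerability=0.5)        # e.g. more frequent extremes
reduced_vulnerability = risk_index(hazard=0.4, exposure=0.6, vulnerability=0.3)  # e.g. better preparedness

print(f"baseline risk:         {baseline:.2f}")
print(f"stronger hazard:       {stronger_hazard:.2f}")
print(f"reduced vulnerability: {reduced_vulnerability:.2f}")

The point of the toy index is only that risk management can act on exposure and vulnerability even where the hazard component, and its attribution to anthropogenic climate change, remains uncertain.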
Risk management yet should not only be concerned with impacts of extreme weather events but also with negative effects of gradual climate change on natural, human and managed systems. Based on the assessment of the IPCC AR5, concern for unique and threatened systems has mounted for Arctic, marine and mountain systems, including Arctic marine ecosystems, glaciers and permafrost, and Arctic indigenous livelihoods [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] . Impacts of gradual climate change are often exacerbated by extreme events, thus enhancing risks and complicating attribution [START_REF] Huggel | Potential and limitations of the attribution of climate change impacts for informing loss and damage discussions and policies[END_REF] . Furthermore, impacts of climate change usually occur within a context of multiple non-climatic drivers of risk. Effective identification of specific activities to reduce risk may require estimation of the relative balance of the contributions of climatic and non-climatic drivers. However, understanding of general causation, in the form for instance of process-based understanding, may not provide sufficient precision to distinguish the relative importance of the various drivers; in that case, more refined information generated through detection and attribution analysis may be required. This, however, implies the availability of longterm data which is limited in many developing countries.
In the context of international climate policy, assistance provided to strengthen risk management is largely uncontested and is supported in many documents [START_REF]Lima Call for Climate Action[END_REF] . Hence, political feasibility, the justice basis and potential progress in scientific evidence make risk management a promising vehicle for addressing climate change impacts.
Injustices from the imbalance of climate and impact monitoring
Depending on the approaches outlined in the previous sections, observational monitoring of climate and impacts can be of fundamental importance in order to provide the necessary causal evidence, and to satisfy justice claims posed by many Parties. In this light, it is informative to consider the distribution of long-term climate observations, as well as that of the detected and attributed impacts as assessed by the IPCC AR5 [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] . As Figure 2 shows, the distribution of both long-term recording weather stations and observed impacts of climate change is unequal across the globe. Observations of non-climatic factors, which are important to assess the magnitude of impacts of climatic versus non-climatic factors, are not shown in Figure 2 but are likely to show a similar imbalanced pattern.
The analysis of attributed impacts based on IPCC AR5 6 reveals that more than 60% of the attributed impacts considered come from the 43 Annex I countries while the 154 Non-Annex I countries feature less than 40% of the observations (Fig. 3). This imbalance is even larger if the least developed countries (LDC) and the countries of the Small Island Developing States (SIDS) (80 countries together) are considered, for which less than 20% of globally detected and attributed impacts are reported.
While different identified impacts in the IPCC AR5 reflect different degrees of aggregation (e.g. aggregating phenological shifts across species on a continent into a single impact unit), this aggregation tends to be amplified in Annex I countries, and thus understates the geographical contrast in terms of available evidence between developed and developing countries. Additionally, Non-Annex I, LDC and SIDS countries generally have a higher proportion of impacts with very low and low confidence in attribution to climate change whereas Annex I countries have more impacts with high confidence in attribution. The assignment of confidence thereby typically relates, among other things, to the quality and duration of available observational series [START_REF] Hegerl | Good practice guidance paper on detection and attribution related to anthropogenic climate change[END_REF][START_REF] Adler | The IPCC and treatment of uncertainties: topics and sources of dissensus[END_REF][START_REF] Ebi | Differentiating theory from evidence in determining confidence in an assessment finding[END_REF] ; this also holds for the attribution of observed climate trends to greenhouse gas emissions [START_REF] Stone | Rapid systematic assessment of the detection and attribution of regional anthropogenic climate change[END_REF] . This imbalance thus reflects an unequal distribution of monitoring for physical as well as for biological, managed and human systems.
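The bookkeeping behind such shares can be illustrated with the short Python sketch below (the records are invented placeholders, not the IPCC AR5 data), which tallies attributed impacts by country group and confidence level:

from collections import Counter

# Hypothetical records of attributed impacts: (country_group, attribution_confidence).
impacts = [
    ("Annex I", "high"), ("Annex I", "high"), ("Annex I", "medium"), ("Annex I", "medium"),
    ("Non-Annex I", "medium"), ("Non-Annex I", "low"),
    ("LDC/SIDS", "low"),
]

counts_by_group = Counter(group for group, _ in impacts)
total = sum(counts_by_group.values())

for group, count in counts_by_group.items():
    share = 100.0 * count / total
    confidence_breakdown = Counter(conf for g, conf in impacts if g == group)
    print(f"{group}: {share:.0f}% of attributed impacts, confidence {dict(confidence_breakdown)}")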
The results of this analysis imply new kinds of injustices involved in the approaches discussed above.
Whichever approach is chosen, the unequal distribution of observed and attributed impacts, and of the confidence in assessments, implies an unjustified disadvantage for those most in need of assistance. The more impacts are detected and their attribution to climate change is clarified the better it is understood (i) what responsibilities would have to be recognized, (ii) what the appropriate measures of risk management might be, and (iii) what would represent appropriate methods of compensation for negative climate change effects on natural and human systems. In this respect many Non-Annex I countries seem to be disadvantaged as compared to Annex I countries. This disadvantage represents a form of procedural injustice in negotiating and deciding when, where and what measures are taken. Hence, the point here is not the potentially unfair outcomes of negotiations but the fairness of the process of negotiating itself. The imbalance of the distribution of detected and attributed impacts was in fact an issue during the final IPCC AR5 government approval process [START_REF] Hansen | Global distribution of observed climate change impacts[END_REF] , indicating concern that voices from some actors and parties might be downplayed or ignored due to lack of hard evidence for perceived impacts.
Against this background, we argue in line with a version of the APP (AAP) that countries with appropriate economic, technological and logistic capacities should enhance the support for countries with limited available resources or capacity along two lines of actions and policy: i) to substantially improve monitoring of a broad range of climate change impacts on natural and human systems; ii) to strengthen local human resources and capacities in countries facing important climate change impacts to a level that ensures an adequate quality and extent of monitoring and scientific analysis. This proposal is perfectly in line with the UNFCCC and decisions taken at recent negotiations including COP21 [START_REF] Cramer | Adoption of the Paris Agreement[END_REF][START_REF]Lima Call for Climate Action[END_REF] , and actions and programs underway in several Non-Annex I countries, hence strongly increasing its political feasibility. The lack of monitoring and observations has been long recognized but the related procedural injustice has not received much discussion. Our analysis intends to provide the justice basis and context to justify strengthening these efforts.
However, even if such efforts are substantially developed in the near future, a major challenge remains in how to cope with non-existing or low-quality observational records of the past decades in countries were no corresponding monitoring had been in place. Reconstruction of past climate change impacts and events exploiting historical satellite data, on-site field mapping, searching historical archives, etc. may be able to recover missing data to some extent. Different and diverse forms of knowledge existing in various regions and localities can be of additional value but need to be evaluated in their respective context to avoid simplistic comparisons of, for instance, scientific versus local knowledge [START_REF] Reyes-García | Local indicators of climate change: the potential contribution of local knowledge to climate research[END_REF] . Substantial observational limitations, however, will likely remain and the implications for the aforementioned approaches toward justice need to be seriously considered.
Developing evidence for just policy
In this Perspective we discussed different approaches towards justice regarding negative climate change impacts. We argued that depending on the approach chosen, different kinds of evidence concerning detection and attribution of climate change impacts are needed. Establishing liabilities in a legal or political context to seek compensation sets the highest bar, and we suggest that it requires detection and attribution in line with specific causation. However, in general the level of scientific evidence currently available rarely supports high confidence in linking impacts to emissions, except for some natural and human systems related to the mountain and Arctic cryosphere and the health of warm water corals. Hence, claims for compensation based on liabilities will likely continue to encounter scientific hurdles, in addition to various political and legal hurdles.
Understanding the role of climate change in trends in impacted natural and human systems at a level of evidence currently available can still effectively inform other justice principles which in our view are politically much more feasible, namely recognition of responsibilities and ability to assist.
Attribution research can clarify responsibilities and thus facilitate their recognition; and it can enhance the understanding of drivers of risks as a basis for improved risk management. More rigorous implementation of risk management is actually critical to prevent and reduce future L&D.
Whether recognition of responsibilities and APP / AAP are politically sufficient to facilitate ex-ante compensation, for instance with the creation of a monetary fund for current or future victims of climate change impacts, needs yet to be seen.
Finally, the imbalance of observed and attributed climate change impacts leaves those countries most in need of assistance (i.e. SIDS and LDC countries) with relatively poor evidence in support of appropriate risk management approaches or any claim for liability and related compensation in international climate policy or at courts. We have argued that evidence in line with general causation may be sufficient for recognition of responsibilities, and hence, this may well speak in favor of this justice approach, considering the aforementioned limitations in observations and attribution.
Recognition of responsibilities cannot represent the final step to attain justice, however, and we therefore suggest that two issues remain crucial: i) procedural injustice resulting from an imbalance of detected and attributed impacts should be considered as a fundamental issue in negotiations and decision making in international climate policy; and ii) monitoring of climate change impacts in natural and human systems, and local capacities in developing countries need to be substantially strengthened. Efforts taken now will be of critical value for the future when climate change impacts are expected to be more severe than experienced so far.
Figure captions
Figure 1: A schematic detection and attribution framework for impacts on natural and human systems. The left part (in light grey) indicates the different impacts and the respective level of confidence in detection and attribution of a climate change influence as assessed in the IPCC Working Group II 5th Assessment Report (AR5) 6 . Boxes with a thick (thin) outline indicate a major (minor) role of climate change as assessed in [ 6 ] (note that this IPCC assessment 6 did not distinguish between natural and anthropogenic climate change in relation with impacts). The right part (in darker grey) of the figure identifies important climatic and non-climatic drivers of detected impacts at global scales. The attribution statements for the climatic drivers are from IPCC WGI AR5 77 and refer to anthropogenic climate change. Trends in the graphs are all for global drivers and represent from top to bottom the following: TAS: mean annual land air temperature 98 ; TXx (TNn): hottest (coldest) daily maximum (minimum) temperature of the year 99 ; TOS: sea surface temperature 100 (all units are degrees Celsius and anomalies from the 1981-2010 global average); SIC: northern hemisphere sea ice coverage 100 (in million km 2 ); Popul: total world population (in billions); GDP: global gross domestic product (in 2005 USD); Life exp. and health expend.: total life expectancy at birth and public health expenditure (% of GDP) (Data sources: The World Bank, World Bank Open Data, http://data.worldbank.org/).
Figure 2: World map showing the distribution of Global Historical Climatology Network (GHCN) stations and the number of detected impacts as assessed in the IPCC WGII AR5 [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF]. It distinguishes between Annex I countries (in red colors), Non-Annex I countries (in green colors), and regions not party to the UNFCCC (grey colors). The GHCN is the largest publicly available collection of global surface air temperature station data. The shaded regions correspond to the regional extent of relevant climatic changes for various impacts, rather than of the impacts themselves, as determined in [START_REF] Hansen | Assessing the observed impact of anthropogenic climate change[END_REF]; a few impacts are not included due to insufficient information for defining a relevant region.
Figure 3: Distribution of attributed climate change impacts in physical, biological and human systems as assessed in the IPCC WGII AR5 6 , showing an imbalance between Annex I, Non-Annex I, and Least Developed Countries (LDC) and Small Island Developing States (SIDS). Three confidence levels of attribution are distinguished. Note that LDC and SIDS are also part of Non-Annex I countries.
Acknowledgements C. H. was supported by strategic funds by the Executive Board and Faculty of Science of the University of Zurich. I. W.-H. acknowledges financial support by the Stiftung Mercator Switzerland and the University of Zurich's Research Priority Program for Ethics (URPP Ethics). D.S. was supported by the US Department of Energy Office of Science, Office of Biological and Environmental Research, under contract number DE-AC02-05CH11231. W.C. contributes to the Labex OT-Med (no. ANR-11-LABX-0061) funded by the French Government through the A*MIDEX project (no. ANR-11-IDEX-0001-02). We furthermore appreciate the collaboration with Gerrit Hansen on the analysis of the distribution of climate change impacts.
| 53,447 | [ "18543" ] | [ "202508", "138932", "188653" ] |
01764957 | en | [ "sdv", "sde" ] | 2024/03/05 22:32:13 | 2017 | https://amu.hal.science/hal-01764957/file/Valor_et_al_2017.pdf
Teresa Valor
email: [email protected]
Elena Ormeño
Pere Casals
Temporal effects of prescribed burning on terpene production in Mediterranean pines
Keywords: conifers, fire ecology, Pinus halepensis, Pinus nigra, Pinus sylvestris, plant volatiles, prescribed fire, secondary metabolism
Prescribed burning is used to reduce fuel hazard but underburning can damage standing trees. The effect of burning on needle terpene storage, a proxy for secondary metabolism, in fire-damaged pines is poorly understood despite the protection terpenes confer against biotic and abiotic stressors. We investigated variation in needle terpene storage after burning in three Mediterranean pine species featuring different adaptations to fire regimes. In two pure-stands of Pinus halepensis Mill. and two mixed-stands of Pinus sylvestris L. and Pinus nigra ssp. salzmanni (Dunal) Franco, we compared 24 h and 1 year post-burning concentrations with pre-burning concentrations in 20 trees per species, and evaluated the relative contribution of tree fire severity and physiological condition (δ 13 C and N concentration) on temporal terpene dynamics (for mono-sesqui-and diterpenes). Twenty-four hours post-burning, monoterpene concentrations were slightly higher in P. halepensis than at pre-burning, while values were similar in P. sylvestris. Differently, in the more fire-resistant P. nigra monoterpene concentrations were lower at 24 h, compared with pre-burning. One year post-burning, concentrations were always lower compared with pre-or 24 h post-burning, regardless of the terpene group. Mono-and sesquiterpene variations were negatively related to pre-burning δ 13 C, while diterpene variations were associated with fire-induced changes in needle δ 13 C and N concentration. At both post-burning times, mono-and diterpene concentrations increased significantly with crown scorch volume in all species. Differences in post-burning terpene contents as a function of the pine species' sensitivity to fire suggest that terpenic metabolites could have adaptive importance in fire-prone ecosystems in terms of flammability or defence against biotic agents post-burning. One year postburning, our results suggest that in a context of fire-induced resource availability, pines likely prioritize primary rather than secondary metabolism. Overall, this study contributes to the assessment of the direct and indirect effects of fire on pine terpene storage, providing valuable information about their vulnerability to biotic and abiotic stressors throughout time.
Introduction
Prescribed burning (PB) is the planned use of fire under mild weather conditions to meet defined management objectives [START_REF] Wade | A guide for prescribed fire in southern forests[END_REF]). Prescribed burning is executed mostly for fire risk reduction, but also for forest management, restoring habitats or improving grazing. Generally, prescribed burns are low intensity fires, but certain management objectives require a higher burning intensity to effectively achieve specific goals, such as significantly removing understory or slash. In this case, PB can partially damage trees and affect their vitality in the shortterm. Some studies have analysed the effects of PB on postburning growth [START_REF] Battipaglia | The effects of prescribed burning on Pinus halepensis Mill. as revealed by dendrochronological and isotopic analyses[END_REF][START_REF] Valor | Assessing the impact of prescribed burning on the growth of European pines[END_REF] and tree vitality (see Woolley et al. 2012 for review). Less attention has been dedicated to understanding the effect of PB on secondary metabolites produced by pines [START_REF] Lavoir | Does prescribed burning affect leaf secondary metabolites in pine stands?[END_REF], despite the protection they confer against biotic and abiotic stressors, and their potential to increase plant flammability (Ormeño et al. 2009, Loreto and[START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF].
The quantity and composition of terpenes produced against a stressor can be constrained by the plant's physiological status [START_REF] Sampedro | Costs of constitutive and herbivore-induced chemical defences in pine trees emerge only under low nutrient availability[END_REF]) and genetics [START_REF] Pausas | Secondary compounds enhance flammability in a Mediterranean plant[END_REF], but also by the nature and severity of the stress, and the species affected. The main secondary metabolites biosynthesized in conifers are terpenes and phenols [START_REF] Langenheim | Plant resins: chemistry, evolution, ecology and ethnobotany[END_REF]. In Pinus species, oleoresin is a mixture of terpenes including monoterpenes (volatile metabolites), sesquiterpenes (metabolites with intermediate volatility) and diterpenes (semi-volatile compounds), which are stored in resin ducts of woody and needle tissues [START_REF] Phillips | Resin-based defenses in conifers[END_REF]. Upon stress, plants follow a constitutive or induced strategy to defend themselves from a stressor. Although most Pinus spp. favour the production of constitutive terpenes under stress conditions, they can also synthesize new induced defences [START_REF] Phillips | Resin-based defenses in conifers[END_REF]. The induction timing may be different depending on the chemical groups of terpenes, type of stress, and the species or tissue attacked [START_REF] Lewinsohn | Defense mechanisms of conifers differences in constitutive and wound-induced monoterpene biosynthesis among species[END_REF][START_REF] Achotegui-Castells | Strong induction of minor terpenes in Italian Cypress, Cupressus sempervirens, in response to infection by the fungus Seiridium cardinale[END_REF].
Direct effects of fire such as rising temperatures or heat-induced needle damage can alter terpene production. Increases in air and leaf temperature trigger the emission of volatile terpenes [START_REF] Alessio | Direct and indirect impacts of fire on isoprenoid emissions from Mediterranean vegetation[END_REF], but their synthesis can also be stimulated if the optimal temperature of enzymes is not exceeded [START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF]. Benefits of such stimulation include thermoprotection against heat, since terpene volatiles neutralize the oxidation pressure encountered by chloroplasts under thermal stress [START_REF] Vickers | Isoprene synthesis protects transgenic tobacco plants from oxidative stress[END_REF]. As the emission of volatile terpenes in several Mediterranean pines ceases 24 h after fire [START_REF] Alessio | Direct and indirect impacts of fire on isoprenoid emissions from Mediterranean vegetation[END_REF] or wounding [START_REF] Pasqua | The role of isoprenoid accumulation and oxidation in sealing wounded needles of Mediterranean pines[END_REF], we hypothesized that the accumulation of monoterpenes would be higher 24 h post-burning than before PB.
Indirect effects of fire can affect terpene concentrations by means of increasing resource availability [START_REF] Certini | Effects of fire on properties of forest soils: a review[END_REF]. In turn, terpene variations induced by fire could change needle flammability [START_REF] Ormeño | The relationship between terpenes and flammability of leaf litter[END_REF]) and susceptibility to insects [START_REF] Hood | Low-severity fire increases tree defense against bark beetle attacks[END_REF]. The 'growth differentiation balance hypothesis' (GDBH) (Herms andMattson 1992, Stamp 2003) predicts that under poor water and nutrient availabilities, growth is more limited than photosynthesis. Since carbon assimilation is maintained, the excess of carbohydrates favours the synthesis of carbon-based secondary metabolites. On the contrary, when resource availability is high, the growth of plants is not expected to be limited and plants allocate a greater proportion of assimilates to growth rather than to defence traits (Herms andMattson 1992, Stamp 2003). Accordingly, a short-term response following PB should be an increasing demand on the plant for chemical defence if trees are damaged, but with time, if trees heal, increased fertilization and reduced water competition induced by PB [START_REF] Feeney | Influence of thinning and burning restoration treatments on presettlement ponderosa pines at the Gus Pearson Natural Area[END_REF]) could favour carbon allocation to growth rather than chemical defences. Time-course terpene responses of the direct and indirect effects of PB could differ between tree species depending on their fire resistance strategies. In this study, we used pines with contrasting tolerance to surface fires: Pinus halepensis, a fire sensitive species, Pinus sylvestris, moderately fire-resistant and the fire-resister Pinus nigra, which is supposed to be less vulnerable to fire tissue damage due to its pyro-resistant traits (e.g. thicker bark, higher crown base height) [START_REF] Fernandes | Fire resistance of European pines[END_REF]. In agreement with these strategies, we previously found that radial growth was reduced the year of PB in the most firesensitive species and unaffected in P. nigra, while 1 year postburning, growth was augmented in P. nigra and P. halepensis, and reduced in P. sylvestris [START_REF] Valor | Assessing the impact of prescribed burning on the growth of European pines[END_REF]. In consequence, we hypothesized that 1 year post-burning, the concentration of terpenes would be, as a whole, lower than before PB, if fire induces a decrease in nutrient and water competition; this reduction would be lower on damaged trees and in pines defined as having lower fire resistance (e.g., P. halepensis and P. sylvestris).
The objectives of this study were to evaluate the effects of relatively high-intensity PB (sufficient to remove understory and ladder fuels) on mono-, sesqui- and diterpene storage in Pinus spp., comparing 24 h and 1 year post-burning concentrations with pre-burning concentrations. We modelled the relative change of terpene concentrations at two sampling times: (i) 24 h post-burning, as a function of fire severity and pre-burning physiological condition, and (ii) 1 year post-burning, as a function of fire severity and PB-induced changes in pine physiological condition. Additionally, we aimed to identify the most representative terpenes of each sampling time since burning.
Materials and methods
The study was established in three sites situated in the NE Iberian Peninsula (Catalonia, Spain): two plots in mixed-stands of P. nigra ssp. salzmanni (Dunal) Franco and P. sylvestris L. at Miravé and Lloreda localities, situated in the foothills of the Pyrenees; and two other plots in a pure-stand of P. halepensis Mill. at El Perelló locality, in the Southern part of Catalonia. The P. halepensis stand is located in areas of dry Mediterranean climate while the mixed-stands of P. nigra and P. sylvestris are situated in temperate cold sub-Mediterranean climate with milder summers and colder winters (Table 1). In the sub-Mediterranean sites, soils are developed from calcareous colluviums (0.5-1 m deep) and thus classified as Calcaric cambisols (FAO 2006); in the Mediterranean site, they are developed from limestones (0.4-0.5 m deep) and classified as Leptic Regosol (FAO 2006). The understory is dominated by Buxus sempervirens L. and Viburnum lantana L., in the P. nigra and P. sylvestris mixedstands, and by Pistacia lentiscus L. and Quercus coccifera L. in the P. halepensis stand.
Experimental design: tree selection and prescribed burning
A total of four plots (30 × 30 m) were set up: one in each of the mixed-stand of P. nigra and P. sylvestris, and two in the pure P. halepensis stand. Each plot was burnt in spring 2013 (Table 2). Prescribed burns were conducted by the Forest Actions Support Group (GRAF) of the Autonomous Government (Generalitat de Catalunya) using a strip headfire ignition pattern. Prescribed burning aimed to decrease fuel hazard by reducing surface and ladder fuel loads. Between 90% and 100% of the surface fuel load was consumed in all plots. Needle terpene concentration, fire features and tree physiological condition were studied in 9/10 dominant or co-dominant pines per species in each plot. Each tree was sampled on three occasions for analysing terpene concentration: 24 h before PB (pre-burning), 24 h and 1 year after PB (24 h post-burning and 1 year post-burning, respectively). δ 13 C and N concentrations of 1-year-old needles were also analysed as a proxy of physiological condition in pre-burning and 1 year post-burning samples.
Before PB, selected trees were identified with a metal tag. Their diameter at breast height (DBH), total height and height to live crown base were measured. During fires, the fire residence time (minutes) above 60 °C and the maximum temperature at the base of the trunk were measured for the selected trees with K-thermocouples (4 mm) connected to dataloggers (Testo 175), packed with a fireproof blanket and buried into the soil. Temperatures were recorded every 10 s. The maximum temperatures registered at the soil surface occurred in the P. nigra and P. sylvestris plots, while the highest residence time above 60 °C was recorded in the P. halepensis plots (Table 2). One week after PB, the crown volume scorched was visually estimated to the nearest 5% as an indicator of fire severity. Foliage scorch was defined as a change in needle colour resulting from direct foliage ignition or indirect heating [START_REF] Catry | Post-fire tree mortality in mixed forests of central Portugal[END_REF].
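For illustration, the two fire metrics derived from the logger records (maximum temperature and residence time above 60 °C) can be computed directly from the 10 s temperature series. The sketch below is a minimal example; the function name, the list-based input and the fictitious readings are ours, not part of the original data-processing workflow.

```python
# Minimal sketch (assumption: readings are a plain list of temperatures in degC,
# one value every 10 s, as recorded by the dataloggers at the trunk base).
def fire_metrics(readings, interval_s=10, threshold_c=60.0):
    """Return (maximum temperature, residence time in minutes above threshold)."""
    time_above_s = sum(interval_s for t in readings if t > threshold_c)
    return max(readings), time_above_s / 60.0

# Example with a short fictitious record:
temps = [25, 48, 75, 120, 95, 61, 58, 40]
tmax, rt60 = fire_metrics(temps)
print(f"Tmax = {tmax} degC, residence time >60 degC = {rt60:.1f} min")
```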
Needle sampling
In each plot, we cut an unscorched branch from the top of the south-facing crown in the 9/10 trees selected per species for each sampling time studied: pre-burning, 24 h and 1 year post-burning. Five twigs with unscorched healthy needles were cut immediately, covered with aluminium foil and stored in a portable refrigerator at 4 °C until being stored at -20 °C in the laboratory for terpene analysis. The time period between the field and the laboratory did not exceed 2 h. Additionally, about five twigs were transported to the laboratory, dried at 60 °C and stored in tins before δ13C and N concentration analysis.
Needle terpene concentration
In the studied pine species, needles reached up to 3 years of age. Before terpene extraction, we collected the 1-year-old needles from each twig to control for the effect of age at each sampling time. Needles were cut into small pieces (~5 mm) and placed in well-filled, tightly closed amber glass vials to avoid exposure to light and oxygen (Guenther 1949[START_REF] Farhat | Seasonal changes in the composition of the essential oil extract of East Mediterranean sage (Salvia libanotica) and its toxicity in mice[END_REF]. The extraction method consisted in dissolving 1 g of cut 1-year-old unscorched green needles in 5 ml of organic solvent (cyclohexane + dichloromethane, 1:9) containing a constant amount of undecane, a volatile internal standard not naturally stored in the needles and used to quantify terpene concentrations. Extraction was carried out for 20 min under constant shaking at room temperature, similar to the extractions described in [START_REF] Ormeño | Plant coexistence alters terpene emission and concentration of Mediterranean species[END_REF]. The extract was stored at -20 °C and analysed within the following 3 weeks. Analyses were performed on a gas chromatograph (GS-Agilent 7890B, Agilent Technologies, Les Ulis, France) coupled to a mass selective detector (MSD 5977A, Agilent Technologies, Les Ulis, France). Compound separation was achieved on an HP-5MS (Agilent Technologies, Les Ulis, France) capillary column with helium as the carrier gas. After sample injection (1 μl), the starting temperature (40 °C for 5 min) was ramped up to 245 °C at a rate of 3 °C min-1, and then to 300 °C at a rate of 7 °C min-1. Terpene identifications were based on the comparison of terpene retention times and mass spectra with those obtained from authentic reference samples (Sigma-Aldrich®, Sigma-Aldrich, Saint-Quentin-Fallavier, France) when available, or from databases (NIST2008, Adams 2007) when reference samples were unavailable. We also calculated the Kovats retention index and compared it with bibliographical data. Terpenes were quantified relative to the internal standard undecane (36.6 ng μl-1 of injected solution). Thus, based on calibrations of terpene standards of high purity (97-99%), also prepared with undecane as internal standard, chromatographic peak areas of an extracted terpene were converted into terpene masses using the relative response factor of each calibrated terpene. Results were expressed on a needle dry mass (DM) basis. The identified terpenes were grouped into mono-, sesqui- and diterpenes. At each post-burning time, we calculated the relative change of terpene concentration as the difference between the pre- and post-burning concentration of each terpene group, expressed as a percentage.
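As a rough illustration, the internal-standard quantification and the relative concentration change used later in the models can be sketched as follows. The function names, argument names and the exact form of the percentage (here taken relative to the pre-burning value) are our assumptions; the response factors would come from the calibration curves described above.

```python
# Hypothetical sketch of the quantification and relative-change steps described above.
def terpene_concentration(peak_area, is_peak_area, is_mass_ng, rrf, needle_dm_g):
    """Terpene mass estimated against the undecane internal standard (IS),
    corrected by the relative response factor (rrf), per g of needle dry mass."""
    mass_ng = (peak_area / is_peak_area) * is_mass_ng / rrf
    return mass_ng / needle_dm_g          # ng per g needle DM

def relative_change(pre, post):
    """Relative change of a terpene-group concentration, in percent of pre-burning."""
    return 100.0 * (post - pre) / pre

# Example: a monoterpene group dropping from 12.0 to 9.0 mg g DM-1 one year post-burning
print(relative_change(12.0, 9.0))   # -25.0 (%)
```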
Tree physiological condition: δ13C and N analysis
δ13C and N analyses were carried out on 1-year-old unscorched needles in pre-burning and 1 year post-burning samples. For δ13C and N, needles were oven-dried at 60 °C for 48 h, ground and analysed at the Stable Isotope Facility of the University of California at Davis (USA) using an ANCA interfaced to a 20-20 Europa® isotope ratio mass spectrometer (Sercon Ltd, Cheshire, UK).
Climatic data before and during sampling years
Monthly precipitation (P) and temperature (T) from March 2012 to August 2014 were downloaded from the three nearest meteorological stations to the sub-Mediterranean and the Mediterranean plots. Monthly potential evapotranspiration (PET) was estimated using the Thornthwaite (1948) method. For each sampling year (t), 2013 and 2014, accumulated values of P and PET of different periods were calculated for each meteorological station. Seven periods of accumulated climate data were compiled: annual, from June before the sampling year (t -1) to May of the sampling year (t); spring, summer, fall and winter before the sampling year (t -1); spring and summer of the sampling year (t). For each period, we calculated the difference between P and PET (P -PET) for each meteorological station and sampling year.
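As an illustration of the water-balance variable used here, monthly PET after Thornthwaite (1948) and the accumulated P - PET can be computed from monthly temperature and precipitation as sketched below. This is a simplified version that omits the day-length/latitude correction of the full method; values and names are illustrative only.

```python
# Simplified Thornthwaite sketch (assumptions: 12 monthly mean temperatures in degC,
# no day-length correction, sub-zero temperatures treated as 0).
def thornthwaite_pet(monthly_temp_c):
    temps = [max(t, 0.0) for t in monthly_temp_c]
    heat_index = sum((t / 5.0) ** 1.514 for t in temps if t > 0)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * t / heat_index) ** a if t > 0 else 0.0 for t in temps]

def water_balance(monthly_precip_mm, monthly_temp_c):
    """Accumulated P - PET (mm) over the period covered by the monthly inputs."""
    pet = thornthwaite_pet(monthly_temp_c)
    return sum(monthly_precip_mm) - sum(pet)
```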
Linear mixed models (LMM), considering plot as a random factor, were used to:
(i) Analyse potential differences in pre-burning tree physiological condition and fire parameters among pine species.
(ii) Test for differences in total terpene and terpene group concentrations (expressed on a needle mass basis and as the percentage of each terpene group in the total) between times since burning for each pine species.
(iii) Model the 24 h and 1 year impact of PB on the relative concentration change of mono-, sesqui- and diterpenes with respect to pre-burning concentration.
The 24 h and 1 year post-burning models considered pine species as a fixed factor, with pre-burning needle δ13C and N concentration, the proportion of crown scorched and fire residence time above 60 °C as covariables. In addition, in the 1 year post-burning model, δ13C and N concentration changes were also included (1 year post-burning minus pre-burning levels of δ13C and N concentration). Second-order interactions of pine species with each co-variable were included.
Terpene concentrations were log-transformed to meet the normality requirement. When the relative concentration change of terpenes was modelled, a constant of 100 was added before taking the logarithm. Therefore, log-transformed values higher than 2 indicate higher terpene concentrations than pre-burning, while values lower than 2 indicate lower terpene concentrations. Residuals presented no pattern, and highly correlated explanatory variables were avoided. The variance explained by the fixed effects was obtained by comparing the final model with the null model (containing only the random structure). A Tukey post-hoc test was used for multiple comparisons when needed.
For each pine species, terpene profiles were evaluated using a principal component analysis to show potential qualitative and quantitative variation in needle terpene within and between plots and time since burning. Terpene concentrations were centred and the variance-covariance matrix used to understand how terpene profiles varied. Moreover, for each pine species, we used a multilevel sparse partial least squares discriminant analysis (sPLS-DA) to select the terpenes that best separated each time since burning in terms of their concentration. The sPLS-DA is a supervised technique that takes the class of the sample into account, in this case time since burning, and tries to reduce the dimension while maximizing the separation between classes. To conduct the analysis, we selected those compounds that were present in at least 75% of the sampled trees, resulting in a total of 48, 37 and 35 compounds in P. halepensis, P. nigra and P. sylvestris, respectively. We used the multilevel approach to account for the repeated measures on each tree to highlight the PB effects within trees separately from the biological variation between trees. The classification error rate was estimated with leave-one-out cross validation with respect to the number of selected terpenes on each dimension. Lastly, differences in P -PET between sampling years were tested by a Student's t-test for the Mediterranean and sub-Mediterranean plots. All analyses were conducted with the software R (v. 3.2.1, The R Foundation for Statistical Computing, Vienna, Austria) using the package nlme for linear mixed-effects modelling and the package mixOmics for the sPLS-DA analysis. The model variances explained by fixed effects (marginal R 2 ) and by both fixed and random effects (conditional R 2 ) are provided [START_REF] Nakagawa | A general and simple method for obtaining R2 from generalized linear mixed-effects models[END_REF].
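The original analyses were run in R (nlme for the mixed models, mixOmics for the sPLS-DA). As a language-agnostic illustration of the core model structure only (plot as a random intercept, log10-transformed response with the +100 constant so that 2 corresponds to no change), a minimal sketch in Python/statsmodels could look like the following; the data frame, column names and the single covariate are placeholders, not the full model specification used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per tree, with the relative concentration change (%)
# of a terpene group, the proportion of crown scorched (%), and the plot identifier.
df = pd.DataFrame({
    "rel_change":   [-30, 15, 40, -10, 5, 60, -20, 10, 35],
    "crown_scorch": [5, 20, 60, 10, 25, 80, 5, 30, 55],
    "plot":         ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
})

# log10(x + 100): values above 2 mean an increase relative to pre-burning.
df["y"] = np.log10(df["rel_change"] + 100.0)

# Linear mixed model with plot as a random intercept (simplified: one covariate,
# no species factor or interactions).
model = smf.mixedlm("y ~ crown_scorch", data=df, groups=df["plot"]).fit()
print(model.summary())
```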
Results
Tree, fire and climate characteristics
The proportion of crown scorched was significantly higher in P. halepensis than in the other species despite the fact that the three pine species presented similar height to live crown base (Table 3). By contrast, no differences in fire residence time above 60 °C were encountered among species (Table 3). Needle δ 13 C decreased significantly 1 year post-burning in the three species while N concentration was similar (Table 3).
This decrease in δ13C contrasted with the drier conditions found 1 year post-burning (P - PET = 200 mm and 135 mm in the Mediterranean and sub-Mediterranean plots, respectively) in comparison with pre-burning (P - PET = 481 mm and 290 mm in the Mediterranean and sub-Mediterranean plots, respectively; see Figure S1 available as Supplementary Data at Tree Physiology Online).
A total of 56, 59 and 49 terpenes were identified and quantified in P. halepensis, P. nigra and P. sylvestris, respectively (see Table S1 available as Supplementary Data at Tree Physiology Online). Pre-burning, P. nigra showed the highest terpene concentration (65.6 ± 7.1 mg g DM-1) followed by P. halepensis and P. sylvestris (41.2 ± 5.8 mg g DM-1 and 21.4 ± 2.6 mg g DM-1, respectively). Before PB, more than 45% of total terpene concentration was represented by diterpenes in P. halepensis, while sesquiterpenes represented about 59% in P. nigra and monoterpenes 83% in P. sylvestris (see Table S2 available as Supplementary Data at Tree Physiology Online). Considering all sampling times, the diterpene thunbergol in P. halepensis, the sesquiterpene β-caryophyllene in P. nigra and the monoterpene α-pinene in P. sylvestris were the major compounds found, representing an average of 22%, 22% and 40% of the total terpene concentration, respectively (see Figure S2 available as Supplementary Data at Tree Physiology Online). Terpene concentration and composition varied strongly within plots in all species, with no clear differences in terpene composition among plots (see Figures S3a, S4a and S5a available as Supplementary Data at Tree Physiology Online). The variation in terpene concentrations was high within pre- and 24 h post-burning samples, while variation in 1 year post-burning concentrations was much lower (see Figures S3b, S4b and S5b available as Supplementary Data at Tree Physiology Online). In all species, the dominant compounds in pre- and 24 h post-burning samples were clearly different from those of 1 year post-burning samples (see Figures S3b, S4b and S5b available as Supplementary Data at Tree Physiology Online). For instance, the quantity of α-pinene was higher at pre- and 24 h post-burning times in all pine species, in contrast with 1 year post-burning samples. Limonene was characteristic of 24 h post-burning needles of P. halepensis and P. nigra, while the quantity of camphene and myrcene was higher in pre- and 24 h post-burning needle samples of P. sylvestris. Differences in total terpene concentration between pre- and 24 h post-burning were only detected in P. nigra, which decreased by ∼39% (Figure 1a). When analysing terpene groups, the 24 h post-burning needle concentrations of both mono- and sesquiterpenes were, in comparison with pre-burning, slightly higher in P. halepensis, lower in P. nigra and similar in P. sylvestris (Figure 1b and c). No differences were detected in diterpene concentration between pre- and 24 h post-burning times (Figure 1d).
One year after burning, total terpene concentration was lower compared with the levels observed pre- and 24 h post-burning in the three species (Figure 1a). In P. halepensis this reduction was similar for each terpene group while, in the two sub-Mediterranean species, it was mostly due to a decrease in the proportion of monoterpenes (see Table S2 available as Supplementary Data at Tree Physiology Online). In contrast, an increase in the relative contribution of the sesquiterpene group to total terpenes was found 1 year post-burning in both sub-Mediterranean species.
The relative changes of mono- and diterpene concentrations 24 h post-burning were directly related to the proportion of crown scorched (Table 4). However, crown scorch volume interacted with pine species to explain the relative changes in mono- and diterpene concentrations (Table 4). Thus, in both P. halepensis and P. sylvestris, the 24 h post-burning concentration of monoterpenes was higher than pre-burning and increased with crown scorch (Figure 2a.1 and a.3); only individual pines with a low proportion of crown scorched (<15-20%) showed similar or lower concentrations than pre-burning. In contrast, the relative change of monoterpene concentration in P. nigra was generally lower than pre-burning, at least in the range of crown scorch measured (0-50%) (Figure 2a.2). The relationship between the relative concentration change in diterpenes and crown scorch followed a similar trend as in monoterpenes for P. halepensis and P. sylvestris (Figure 2b.1 and b.2), while in P. nigra the rate of change with crown scorch was higher and shifted from lower to higher concentrations than pre-burning in the middle of the measured crown scorch range (Figure 2b.3).
The relative concentration change of monoterpenes was also directly related to needle N concentration and to the height to live crown base (Table 4). In the case of sesquiterpenes, needle N concentration interacted with pine species (Table 4, Figure 2). Thus, the relative concentration change of sesquiterpenes 24 h post-burning was higher in P. halepensis and P. sylvestris, and increased as needle N concentration increased (Figure 2c.1 and c.3), whereas it was always lower in P. nigra, decreasing with increasing needle N concentration (Figure 2c.2). Finally, fire residence time above 60 °C directly affected the relative change of sesquiterpene concentration in all species (Table 4).
One year after PB, the relative changes of mono- and sesquiterpene concentrations were always lower than pre-burning and inversely related to the δ13C of pre-burning needles (Table 5, Figure 3a.1). The 1 year post-burning relative concentration changes of diterpenes were also lower than pre-burning, but variations were associated with changes in needle δ13C or N concentration (Figures 3a.2 and a.3).
Similar to 24 h post-burning, the proportion of crown scorched had a direct effect on the relative concentration change of all terpene groups, although only marginally significant in the mono- and sesquiterpene models (Table 5). This variable interacted with pine species in the case of diterpenes (Figure 3b), showing that as crown scorch increased, the relative concentration change in P. nigra was more acute than in the other species (Table 5, Figure 3b.2).
Discriminant terpenes across time since burning for each pine species
The multilevel sPLS-DA in P. halepensis led to the optimal selection of six and one terpenes on the first two dimensions, with classification error rates of 0.26 and 0.06, respectively, reflecting a clear separation between times since burning (Figure 4). Among compounds, terpinen-4-ol separated pre-burning (Cluster 2) from both post-burning times, whereas E-β-ocimene and α-thujene discriminated the 24 h post-burning sampling time from the others (Cluster 1). Four sesquiterpenes characterized the 1 year post-burning needle samples (Cluster 3).
In P. nigra, we chose three dimensions, and the corresponding numbers of terpenes selected for each were four, one and one (Figure 5). The classification error rates were 0.35, 0.33 and 0.18, respectively, for the first three dimensions. Two clusters were differentiated: pre-burning was discriminated mainly by three sesquiterpenes (Cluster 1), while bornyl acetate and β-springene represented the post-burning samplings (Cluster 2) (Figure 5). Finally, two dimensions were selected for P. sylvestris (Figure 6) with 11 terpenes on each component. The classification error rates were 0.66 and 0.33. As in P. nigra, two clusters were distinguished: sesquiterpenes characterized the pre-burning sampling time, whereas both post-burning times were characterized mainly by mono- and sesquiterpenes (Figure 6).
Table 4. Summary of the models characterizing the impact of prescribed burning and tree vitality on the 24 h post-burning relative concentration change of mono-, sesqui- and diterpenes, calculated as the standardized difference between 24 h post-burning and pre-burning concentration expressed as a percentage (logarithmically transformed). Only the significant interaction terms are shown. Bold characters indicate significant effects (P < 0.05).
Discussion
Pinus nigra is a species considered to be resistant to medium-low fire intensities, P. sylvestris a moderately fire-resistant species and P. halepensis a fire-sensitive species [START_REF] Agee | Fire and pine ecosystems[END_REF][START_REF] Fernandes | Fire resistance of European pines[END_REF]. While the concentration of the semi-volatile diterpenes was not affected 24 h post-burning, the concentration of mono- and sesquiterpenes seemed to decrease in P. nigra, was sustained in P. sylvestris and tended to increase in P. halepensis. Although massive needle terpene emissions have been reported at ambient temperatures often reached during PB [START_REF] Alessio | Direct and indirect impacts of fire on isoprenoid emissions from Mediterranean vegetation[END_REF][START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF][START_REF] Zhao | Terpenoid emissions from heated needles of Pinus sylvestris and their potential influences on forest fires[END_REF], various explanations may justify the different terpene contents observed 24 h post-burning between species. For instance, terpenes stored in needle resin ducts are likely to encounter different resistance to volatilization due to differences in the specific characteristics of the epistomatal chambers, which are, respectively, unsealed, sealed and buried in needles of P. nigra, P. sylvestris and P. halepensis [START_REF] Hanover | Surface wax deposits on foliage of Picea pungens and other conifers[END_REF][START_REF] Boddi | Structure and ultrastructure of Pinus halepensis primary needles[END_REF][START_REF] Kim | Micromorphology of epicuticular waxes and epistomatal chambers of pine species by electron microscopy and white light scanning interferometry[END_REF]. These differences in needle morphology may contribute to explaining the reduction of terpenes observed 24 h post-burning in P. nigra. Another reason for variable terpene contents may be different respiration sensitivity between species. As the consumption of assimilates increases relative to photosynthetic production at high temperatures [START_REF] Farrar | The effects of increased atmospheric carbon dioxide and temperature on carbon partitioning, source-sink relations and respiration[END_REF], this could bring about a decrease in the weight of carbohydrates and, thus, an apparent increase in needle terpene concentrations. If the respiration sensitivity to increasing temperature is higher in P. halepensis than in the other two species, this may explain the slight increase in terpene concentration in this species 24 h post-burning. Alternatively, the increase in monoterpene concentration in unscorched needles of P. halepensis 24 h post-burning may partly reflect systemic induced resistance, triggered by the burning of needles from lower parts of the canopy, although no data were found in the literature to support this hypothesis.
Figure 2. Measured and predicted (line) relative concentration change (log-transformed) using 24 h post-burning models (see Table 4) of monoterpenes and diterpenes against crown scorched (a and b) and of sesquiterpenes against needle N (c). Before the log-transformation, 100 was added. The dashed line indicates no change between pre- and post-burning terpene concentrations: higher values indicate a higher terpene concentration than pre-burning, while the opposite is indicated by lower values.
Finally, although we carefully selected only 1-year-old unscorched needles from the same part of the crown, we cannot fully exclude that terpene variation between pre- and post-burning reflects differences in light availability between the sampled needles.
Terpene dynamics within the species were modulated by fire severity. Thus, relative concentration changes of mono-and diterpenes increased with the proportion of crown scorched 24 h post-burning. This trend was evident 1 year post-burning, suggesting that the damaged pines were still investing in chemical defences. According to the GDBH (Herms andMattson 1992, Stamp 2003) and the reduction in radial growth detected in P. halepensis and P. sylvestris [START_REF] Valor | Assessing the impact of prescribed burning on the growth of European pines[END_REF], we hypothesized that the increase in monoterpenes by P. halepensis and, to a lesser extent, in P. sylvestris, may constrain primary metabolism. Although the rate of increase in diterpenes post-burning was greater in P. nigra than in the other two species, P. nigra required a greater proportion of scorched crown in order to achieve higher concentrations than those observed pre-burning. Therefore, trees with a greater proportion of scorched crown could be investing in secondary metabolism rather than primary metabolism, although this potential trade-off on carbon investment deserves further research.
Table 5. Summary of the models characterizing the impact of prescribed burning and tree vitality on the 1 year post-burning relative concentration change of mono-, sesqui- and diterpenes, calculated as the standardized difference between 1 year post-burning and pre-burning concentration expressed as a percentage (logarithmically transformed). Only the significant interaction terms are shown. Bold characters indicate significant effects (P < 0.05). 2 HLCB, height to live crown base (m). 3 Change δ13C, change in δ13C (difference between 1 year post-burning and pre-burning δ13C). 4 Change N, change in foliar N content (difference between 1 year post-burning and pre-burning N content).
Needle N concentration was positively associated with the relative concentration change of monoterpenes in the three species and of sesquiterpenes in the case of P. halepensis and P. sylvestris. As resin canal ducts are limited by N [START_REF] Björkman | Different responses of two carbon-based defences in Scots pine needles to nitrogen fertilization[END_REF]), these positive relationships may be explained by an increase in the number and size of the ducts in needles with higher N content. In contrast, we did not detect any effect of pre-burning water status, as estimated by δ 13 C, for 24 h post-burning terpene concentration change in individual pines.
Tree-to-tree variation in terpene concentration is known to be naturally high, even over short spatial distances or when plants grow in the same soil in the same geographic area [START_REF] Ormeño | Production and diversity of volatile terpenes from plants on Calcareous and Siliceous soils: effect of soil nutrients[END_REF][START_REF] Kännaste | Highly variable chemical signatures over short spatial distances among Scots pine (Pinus sylvestris) populations[END_REF]. Our study reveals, however, that this variation is reduced 1 year post-burning within and between plots. One year post-burning, the terpene concentration was lower than pre-burning, while an increase could be expected given the drier meteorological conditions during the year after burning [START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF]. In contrast, lower needle δ13C values, compared with pre-burning, suggest a decrease in water competition 1 year post-burning, an increase in the photosynthetic rate or stomatal conductance [START_REF] Battipaglia | The effects of prescribed burning on Pinus halepensis Mill. as revealed by dendrochronological and isotopic analyses[END_REF], or an improvement in water conditions in the remaining needles of highly scorched trees [START_REF] Wallin | Effects of crown scorch on ponderosa pine resistance to bark beetles in Northern Arizona[END_REF]. A lower terpene concentration 1 year after burning differs from other studies [START_REF] Cannac | Phenolic compounds of Pinus laricio needles: a bioindicator of the effects of prescribed burning in function of season[END_REF][START_REF] Lavoir | Does prescribed burning affect leaf secondary metabolites in pine stands?[END_REF] comparing burned versus unburned plots. These studies concluded that needle terpene concentration returns to normal values 1 year after fire, and suggested that short-term increases in nutrient availability had minor effects on terpene concentration. The discrepancies with our investigation may be explained by the higher burning intensity in our study, which impacted water availability as indicated by δ13C values. In agreement with the GDBH (Herms and Mattson 1992, Stamp 2003), our results showed that the relative concentration change of diterpenes was lower in trees whose physiological condition improved 1 year post-burning, as suggested by the changes in needle δ13C and needle N concentration. Despite the fact that no relationships were found between mono- or sesquiterpenes and the change in δ13C or N, the direct relationship between the relative terpene concentration change and pre-burning δ13C suggested that the decrease in both terpene groups occurred in pines that were more stressed pre-burning.
Figure 3. Measured and predicted (line) relative concentration change (log-transformed) using 1 year post-burning models (see Table 5) for monoterpenes against δ13C (a.1), and for diterpenes against change in δ13C (a.2), change in needle N (a.3), and the interaction between species and crown scorch (b). Before the log-transformation, 100 was added. The dashed line indicates no change between pre- and post-burning terpene concentrations: higher values indicate a higher terpene concentration than pre-burning, while the opposite is indicated by lower values.
The ecological functions of many mono-, sesqui- and diterpene compounds are still not well understood, although in recent years significant advances have been made via genetic engineering (Cheng et al. 2007, Loreto and[START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF]. Likewise, research on terpenes and flammability is generally scarce, though some studies have shown a correlation between the two [START_REF] Owens | Seasonal patterns of plant flammability and monoterpenoid concentration in Juniperus ashei[END_REF][START_REF] Alessio | Direct and indirect impacts of fire on isoprenoid emissions from Mediterranean vegetation[END_REF][START_REF] Ormeño | The relationship between terpenes and flammability of leaf litter[END_REF]. The reduction in terpene concentration 24 h post-burning in the fire-resister P. nigra could imply a reduction of needle flammability with respect to pre-burning, strengthened by a reduction in the highly flammable α-caryophyllene (also known as α-humulene) and the increase in bornyl acetate, which is inversely related to flammability [START_REF] Owens | Seasonal patterns of plant flammability and monoterpenoid concentration in Juniperus ashei[END_REF]. By contrast, increases of mono- and sesquiterpene concentrations in P. halepensis may involve greater flammability, which would favour fire reaching the canopy to effectively open the serotinous cones. Specifically, the sPLS-DA showed E-β-ocimene, which is correlated with flammability [START_REF] Page | Mountain pine beetle attack alters the chemistry and flammability of lodgepole pine foliage[END_REF], as representative of 24 h post-burning samples. In P. sylvestris, the poor terpene discrimination in relation to time since burning limits the interpretation of any compound in terms of flammability.
Fire-damaged trees are more vulnerable to insects, especially bark beetles, and to infections by root fungi, which contribute to the trees' susceptibility to beetle attack [START_REF] Sullivan | Association between severity of prescribed burns and subsequent activity of conifer-infesting beetles in stands of longleaf pine[END_REF][START_REF] Parker | Interactions among fire, insects, and pathogens in coniferous forests of the interior western United States and Canada[END_REF]. The accumulation of high amounts of monoterpenes 24 h post-burning in the less fire-resistant species (P. halepensis and P. sylvestris), when fire partially scorches the crowns, might fulfil several functions, such as effective transport of diterpenes to the affected tissues [START_REF] Phillips | Resin-based defenses in conifers[END_REF], better protection of the photosynthetic apparatus [START_REF] Vickers | Isoprene synthesis protects transgenic tobacco plants from oxidative stress[END_REF] or meeting the needs for chemical defence against pathogens [START_REF] Phillips | Resin-based defenses in conifers[END_REF]. In accordance with this last function, E-β-ocimene and α-thujene, which have antifungal activity [START_REF] Bajpai | Chemical composition and antifungal properties of the essential oil and crude extracts of Metasequoia glyptostroboides Miki ex Hu[END_REF][START_REF] Deba | Chemical composition and antioxidant, antibacterial and antifungal activities of the essential oils from Bidens pilosa Linn. var. radiata[END_REF], appear to correctly classify 24 h post-burning needle samples of P. halepensis. Although the discriminant analysis in P. sylvestris showed poor classification power, the presence of E-β-ocimene and γ-terpinene also suggests that trees possess a higher resistance to fungi compared with pre-burning [START_REF] Espinosa-García | Dosedependent effects in vitro of essential oils on the growth of two endophytic fungi in coastal redwood leaves[END_REF]. In the case of the fire-resistant P. nigra, the pre-burning concentration of monoterpenes may be sufficient to cope with biotic stresses related to medium-intensity fires. Nonetheless, bornyl acetate seems to represent 24 h post-burning samples, conferring resistance to defoliators immediately after fire [START_REF] Zou | Foliage constituents of Douglas fir (Pseudotsuga menziesii (Mirb.) Franco): their seasonal variation and potential role in Douglas fir resistance and silviculture management[END_REF]. The high accumulation of diterpenes 24 h post-burning in P. nigra as the proportion of scorched crown increases, compared with the other species, possibly indicates better chemical protection against xylophagous insects [START_REF] Lafever | Diterpenoid resin acid biosynthesis in conifers: enzymatic cyclization of geranylgeranyl pyrophosphate to abietadiene, the precursor of abietic acid[END_REF]. In P. nigra and P. sylvestris, the fact that the percentage of sesquiterpenes increased significantly 1 year post-burning with respect to pre-burning, together with the increase in the relative concentration change as crown scorch increased, might indicate the importance of sesquiterpenes as indirect defences against a wide range of biotic stressors [START_REF] Phillips | Resin-based defenses in conifers[END_REF]Croteau 1999, Schnee et al. 2006); sesquiterpenes were also reported as representative of repeatedly burned plots in [START_REF] Lavoir | Does prescribed burning affect leaf secondary metabolites in pine stands?[END_REF].
Similarly, our classification found the sesquiterpenes guaiol, α-muurolene and δ-elemene as being characteristic in 1 year post-burning P. halepensis needle samples. These compounds might have defensive roles in defoliated trees against insects [START_REF] Wallis | Systemic induction of phloem secondary metabolism and its relationship to resistance to a canker pathogen in Austrian pine[END_REF][START_REF] Liu | Guaiol-a naturally occurring insecticidal sesquiterpene[END_REF].
After fire, bark beetles pose a significant threat to trees, especially when a substantial amount of the crown has been scorched [START_REF] Lombardero | Effects of fire and mechanical wounding on Pinus resinosa resin defenses, beetle attacks, and pathogens[END_REF]. Several volatile terpenes such as α-pinene, camphene and myrcene can be released during PB and facilitate the attack of bark beetles [START_REF] Coyne | Toxicity of substances in pine oleoresin to southern pine beetles[END_REF]. Twenty-four hours post-burning, P. sylvestris tended to present higher amounts of these terpene compounds, suggesting higher susceptibility to bark beetle attack with respect to the other species. Finally, limonene, which is highly toxic to several types of beetle [START_REF] Raffa | Interactions among conifer terpenoids and bark beetles across multiple levels of scale: an attempt to understand links between population patterns and physiological processes[END_REF], was present in higher amounts in P. nigra and P. halepensis, suggesting a higher resistance to bark beetle attack for both species 24 h post-burning.
The concentrations of mono- and sesquiterpenes 24 h post-burning were similar to pre-burning levels in the more fire-sensitive species (P. halepensis and P. sylvestris) and lower in the fire-resistant P. nigra. Terpene dynamics were modulated within species by fire severity, as indicated by the direct relation between the proportion of scorched crown and terpene concentrations 24 h post-burning. As discussed, a combination of morphological and physiological mechanisms may be operating during and shortly after PB, but no clear conclusions can be stated. However, differences in terpene contents as a function of the pine species' sensitivity to fire suggest that terpenic metabolites could have adaptive importance in fire-prone ecosystems, in terms of flammability and defence against biotic agents in the short term after fire. In agreement with the GDBH (Herms and Mattson 1992, Stamp 2003), trees may be allocating assimilates to growth rather than to defence, as suggested by the remarkable decrease in terpene concentration and the negative relation between terpene concentration and the change in needle δ13C. This decrease in terpene concentration, in turn, could imply a higher susceptibility to fire-related pathogens and insects.
Figure 1. Concentration (mean ± SE) of total terpenes (a), monoterpenes (b), sesquiterpenes (c) and diterpenes (d) across time since burning (TSB) for each pine species (P. halepensis, n = 20; P. sylvestris, n = 19; P. nigra, n = 19 and n = 18 at 1 year post-burning). Differences in concentration between TSB within each pine species were tested using LMM considering plot as a random factor. Within each pine species, different letters indicate differences between TSB using a Tukey post-hoc test, where regular letters indicate significant differences at P < 0.05; italic letters represent a marginally significant difference (0.05 < P < 0.1).
Figure 4. Hierarchical clustering for P. halepensis of the seven terpenes selected with multilevel sPLS-DA using terpene content. Samples are represented in columns and terpenes in rows. MHT, monoterpene hydrocarbon; SHT, sesquiterpene hydrocarbon; O, oxygenated compounds; der, derivative compounds.
Figure 5. Hierarchical clustering for P. nigra of the six terpenes selected with multilevel sPLS-DA using terpene content. Samples are represented in columns and terpenes in rows. MT, monoterpene hydrocarbon; SHT, sesquiterpene hydrocarbon; DHT, diterpene hydrocarbon; der, derivative compounds; others, compounds other than terpenes.
Table 1. Topographical and climate characteristics of the study localities.

Localities | Lat. (°) | Long. (°) | Aspect | Slope (%) | Elevation (m.a.s.l.) | Annual rainfall (mm) | Mean annual temperature (°C)
Lloreda | 42.0569 | 1.5706 | N | 30 | 715 | 731.6 | 11.7
Miravé | 41.9515 | 1.4494 | NE | 25 | 723 | 677.3 | 11.5
El Perelló | 40.9068 | 0.6816 | NW | 10 | 244 | 609.9 | 15.5

Climate variables (annual rainfall and mean annual temperature) were estimated using a georeferenced model (Ninyerola et al. 2000).
Table 2. Characteristics of prescribed burnings and forest experimental units (mean ± std).

1 Wind speed was measured outside the forest.
2 Range of maximum temperatures (Tmax) and residence time above 60 °C (RT60) in 10 trees in each of the Perelló experimental units and in 20 trees in Miravé and Lloreda.
3 Ph, Pinus halepensis; Pn/Ps, P. nigra and P. sylvestris; phytovolume calculated using the cover and height of the understory shrubs; diameter at breast height (DBH), density and basal area of trees with DBH ≥7.5 cm.
Table 3. Studied pine trees and fire characteristics (mean ± std) before and after prescribed burnings, grouped by species.

Tree and fire characteristics | P. halepensis | P. nigra | P. sylvestris
n (trees) | 20 | 19 (1) | 19
DBH (cm) | 20.0 ± 6.9a | 13.6 ± 5.5b | 12.7 ± 5.3b
Total height (m) | 9.1 ± 2.4a | 8.3 ± 2.4a | 8.6 ± 1.9a
Height to live crown base (m) | 5.2 ± 1.0a | 4.8 ± 1.3a | 6.6 ± 13.2b
Crown scorched (%) | 44.0 ± 32.1a | 6.6 ± 13.2b | 5.5 ± 9.5b
Fire residence time >60 °C (min) | 38.2 ± 54.1a | 16.6 ± 6.9a | 15.2 ± 6.4a
Needle δ13C (‰)
  Pre-burning | -25.8 ± 0.5Aa | -26.6 ± 1.0Ab | -26.5 ± 0.6Ab
  1 year post-burning | -27.6 ± 0.9Ba | -28.5 ± 1.4Ba | -28.0 ± 0.8Ba
Needle N content (mg g DM-1)
  Pre-burning | 14.8 ± 1.9Aa | 10.1 ± 0.8Ab | 12.3 ± 1.6Ac
  1 year post-burning | 14.9 ± 3.2Aa | 9.1 ± 2.7Ab | 11.0 ± 3.1Aa

1 Sample size is 18 for 1 year post-burning data because of the death of one tree. Different small letters within a row indicate statistically significant differences (P < 0.05) among pine species using LMM (fixed factor = species, random factor = plot) followed by a Tukey post-hoc test. Different capital letters within a column indicate statistically significant differences (P < 0.05) between pre-burning and 1 year post-burning for each pine species using LMM (fixed factor = time since burning, random factor = plot) followed by a Tukey post-hoc test.
Acknowledgments
We wish to thank GRAF (Bombers, Generalitat de Catalunya) who kindly executed the PB; the EPAF team, Dani Estruch, Ana I. Ríos and Alba Mora for their technical assistance in the field, and Carol Lecareux and Amelie Saunier for their help in the laboratory. Finally, we would like to thank Miquel De Cáceres for his invaluable comments.
Funding
Ministerio de Economía, Industria y Competitividad (projects AGL2012-40098-CO3, AGL2015 70425R; EEBB I 15 09703 and BES 2013 065031 to T.V.; RYC2011 09489 to P.C.). CERCA Programme/Generalitat de Catalunya.
Conflict of interest
None declared. | 53,322 | [
"170291"
] | [
"442190",
"188653",
"442190"
] |
01636819 | en | [
"sdv",
"sde"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01636819/file/Sellam%20et%20al%202017%20Cystoseira%20michaelae_HAL.pdf | nom. et stat. nov Verlaque Louiza-Nesrine
Aurelie Sellam
Charles-François Blanfuné
Thierry Boudouresque
C Thibaut
Marc Rebzani-Zahaf
Verlaque
J G Agardh
C Spinosa Sauvageau
stat. nov Al Nom Louiza
Nesrine Sellam
Aurélie Blanfuné
Charles F Boudouresque
Thierry Thibaut
Rebzani Chafika
Zahaf
Marc Verlaque
C Adriatica Sauvageau
G Agardh
Pc0535490
Roussel
C Turneri Montagne
C Montagnei
Pc0535491
P C Monnard
) Montagne
C Sp Nov
C Platyclada Sauvageau
Fig. 22 Montagne
Jean Bart
Thuret
J Feldmann
Ld00526
C Abies
Cystoseira montagnei
come
INTRODUCTION
In the Mediterranean Sea, the species of the genus Cystoseira C. Agardh, 1820, nom. cons., are the main forest-forming species of the photophilous rocky substrates from the littoral fringe down to the lower euphotic zone (down to 70-80 m depth in the clearest waters) [START_REF] Giaccone | Le Cistoseire e la vegetazione sommersa del Mediterraneo[END_REF][START_REF] Giaccone | -La vegetazione marina bentonica fotofila del Mediterraneo: II. Infralitorale e Circalitorale. Proposte di aggiornamento[END_REF][START_REF] Blanfuné | Decline and local extinction of Fucales in the French Riviera: the harbinger of future extinctions?[END_REF]Blanfuné et al., 2016a,b;[START_REF] Boudouresque | Where seaweed forests meet animal forests: the examples of macroalgae in coral reefs and the Mediterranean coralligenous ecosystem[END_REF]. Out of 289 taxa of Cystoseira listed worldwide (including homotypic and heterotypic synonyms and names of uncertain taxonomic status), 32 species and more than fifteen infra-specific taxa are currently accepted taxonomically in the Mediterranean Sea [START_REF] Guiry | their discussion is beyond the scope of the present study[END_REF]. However, in spite of their importance as habitat formers, the delimitation and distribution of a number of species are still not well known [START_REF] Roberts | Active speciation in the taxonomy of the genus Cystoseira C. Ag[END_REF][START_REF] Ribera | -Check-list of Mediterranean seaweeds. I. Fucophyceae (Warming, 1884)[END_REF][START_REF] Draisma | -DNA sequence data demonstrate the polyphyly of the genus Cystoseira and other Sargassaceae Genera (Phaeophyceae)[END_REF][START_REF] Cormaci | -Flora marina bentonica del Mediterraneo: Phaeophyceae[END_REF][START_REF] Berov | -Reinstatement of species rank for Cystoseira bosphorica Sauvageau (Sargassaceae, Phaeophyceae)[END_REF][START_REF] Bouafif | -Cystoseira taxa new for the marine flora of Tunisia[END_REF][START_REF] Bouafif | -New contribution to the knowledge of the genus Cystoseira C. Agardh in the Mediterranean Sea, with the reinstatement of species rank for C. schiffneri Hamel[END_REF]. The reason for this is that the species of Cystoseira offer few unambiguous diagnostic characters and that some of these characters are more or less overlapping. Genetic tools will probably help to disentangle their taxonomic value. Montagne (1838) described from Algeria (Cherchell, west of Algiers, Mediterranean Sea) a taxon he regarded as a new variety of the Atlantic species Cystoseira granulata C. Agardh, as C. granulata var. turneri Montagne. This taxon represents one of the least well known Cystoseira taxa; in addition, it became subsequently a source of confusion. Agardh (1842: 47-48), on the basis of distinct specimens from the north-western Mediterranean and the Adriatic Sea, raised Montagne's taxon to species level, under the name of C. montagnei J. Agardh, actually a new species. Cystoseira montagnei was widely recorded in the Mediterranean Sea until several authors expressed doubts regarding its taxonomic value, considering it as a mixture of distinct taxa (e.g. [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF][START_REF] Sauvageau | -A propos des Cystoseira[END_REF][START_REF] Papenfuss | Taxonomic and nomenclatural notes on three species of brown algae. In: Travaux de Biologie végétale dédiés au Professeur P. Dangeard[END_REF][START_REF] Roberts | Active speciation in the taxonomy of the genus Cystoseira C. Ag[END_REF]. 
Following [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF], Mediterranean authors replaced the name 'C. montagnei' by those of C. spinosa Sauvageau and C. adriatica Sauvageau. Cystoseira montagnei is now often treated as a taxon inquirendum in the updated Mediterranean checklists and floras [START_REF] Ribera | -Check-list of Mediterranean seaweeds. I. Fucophyceae (Warming, 1884)[END_REF][START_REF] Cormaci | -Flora marina bentonica del Mediterraneo: Phaeophyceae[END_REF]; but see Perret- [START_REF] Perret-Boudouresque | Inventaire des algues marines benthiques d'Algérie[END_REF].
In 2014 and 2015, we collected specimens corresponding to Montagne's taxon, C. granulata var. turneri, from the regions of Tipaza and Algiers (Algeria) [START_REF] Sellam | -Rediscovery of a forgotten seaweed forest in the Mediterranean Sea, the Cystoseira montagnei (Fucales) forest. Rapports et procès-verbaux de la commission internationale pour l'Exploration[END_REF]. The morphological study of these specimens showed that they belonged to a taxon quite distinct from C. spinosa. The aim of this study was (i) to reassess the status of C. granulata var. turneri, C. montagnei and C. spinosa, made obscure by taxonomic ambiguities and misuses (ii) to propose the lectotypification of C. granulata var. turneri and of C. montagnei, (iii) to propose a new name for Montagne's C. granulata var. turneri (C. michaelae nom. et stat. nov.) and (iv) to provide information concerning the ecology and distribution range of the latter species.
MATERIAL AND METHODS
Sampling and observations were undertaken using SCUBA diving, between the sea surface and 25 m depth, at different localities in the regions of Tipaza and Algiers (Algeria), from La Corne d'Or to Bounetah Island, between August 2014 and November 2015 (Fig. 1). The populations were studied all year round, at twoweek intervals.
Samples were transferred to the laboratory (in Algiers), then rinsed with seawater and cleaned of epiphytes. The samples were either preserved in 4% buffered formalin/seawater or pressed and prepared as herbarium specimens. A subsample of some specimens was preserved in silica gel for further DNA analyses. The material studied has been deposited in HCOM, the Herbarium of the Mediterranean Institute of Oceanography, Aix-Marseille University (Herbarium abbreviation follows [START_REF] Thiers | Index Herbariorum: A global directory of public herbaria and associated staff[END_REF].
Specimens were compared with the almost exhaustive collection of Mediterranean Cystoseira species deposited in the HCOM and with the syntype of C. granulata C. Agardh var. turneri Montagne (deposited in the herbaria of the Muséum National d'Histoire Naturelle, Paris, PC), the syntype of C. montagnei J. Agardh (deposited in the J.G. Agardh herbarium, Botanical Museum, Lund University, LD) and the lectotypes of C. spinosa Sauvageau and C. adriatica Sauvageau (herbaria of the Muséum National d'Histoire Naturelle, Paris, PC). They were also compared with other specimens housed in the herbaria of the Université Montpellier 2 (MPU) and of PC (Table 1). Identification criteria, in the genus Cystoseira, are based on the mode of attachment to the substratum, the number and the form of axes, the aspect of apices and tophules (when present), the phyllotaxy and the morphology of branches, the occurrence and the arrangement of cryptostomata and aerocysts, and the location and the morphology of reproductive structures (see [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF][START_REF] Sauvageau | -A propos des Cystoseira[END_REF][START_REF] Hamel | Phéophycées de France[END_REF][START_REF] Ercegović | -Fauna i Flora Jadrana. Jadranske cistozire. Njihova morfologija, ekologija i razvitak / Fauna et Flora Adriatica. Sur les Cystoseira adriatiques[END_REF]Gómez Garreta et al., 2001;[START_REF] Mannino | Guida all' identificazione delle Cistoseire. Area Marina Protetta "Capo Gallo -Isola delle Femmine[END_REF][START_REF] Cormaci | -Flora marina bentonica del Mediterraneo: Phaeophyceae[END_REF][START_REF] Taşkin | The Mediterranean Cystoseira (with photographs)[END_REF]. The initial nomenclature (before the changes resulting from the present investigation) followed that adopted by [START_REF] Guiry | their discussion is beyond the scope of the present study[END_REF].
Literature dealing with C. granulata var. turneri, C. montagnei, C. spinosa and C. adriatica was exhaustively searched and analyzed.
RESULTS
Hereafter, we describe the morphology, phenology and habitat of the specimens we collected in Algeria.
Morphological description. Plants stiff, up to 30 cm high, not caespitose, yellow-brown in colour when alive and darker when dried, attached to the substrate by a robust discoid holdfast (Figs 2-4); main axis up to 20 cm high, branched, with apices not protruding, surrounded by spinose tophules (Figs 5, 7); spinose tophules ovoid, 5-13 mm × 5-7 mm, becoming smooth-tuberculate when older (Figs 5-7); primary branches, up to 18-19 cm long, of two different types: either slightly complanate, up to 2.5 mm wide, with an inconspicuous rib and irregularly alternately branched in one plane, or cylindrical and branched in all directions, with spaced short simple to bifid spine-like appendages. The habit varies with depth: shallow specimens have cylindrical branches while the deeper ones have complanate branches. Cryptostomata scattered along branches; specimens monoecious; receptacles both intercalary basal, compact to more or less loosely arranged, attached just above the tophule (Figs 11-13), and terminal, cylindrical and more or less diffuse on branchlets; conceptacles male, female or hermaphroditic, differentiated in the branch and at the base of spine-like appendages. No obvious relationship was found between the location of the conceptacles, their type (male, female, hermaphroditic) and the receptacles.
Phenology. The annual growth cycle is similar to that of other lower infralittoral species of Cystoseira. The plant grows from the early spring to the summer. While terminal receptacles were observed in late summer and autumn, basal receptacles were present all year round. Plants shed their primary branches in late autumn and they are almost devoid of primary branches in winter (Fig. 4).
Habitat. The species thrives in the lower infralittoral zone (sensu [START_REF] Pérès | Major benthic assemblages[END_REF]), between 10 m and 25 m depth (limit of our investigations), and on sub-horizontal to gently sloping photophilous rocky substrates (0 to 45°). It is always heavily covered with epiphytes such as other macroalgae, bryozoans, hydroids and sponges.
The specimens we collected from the region of Algiers correspond well with Montagne's description and with his herbarium specimens (syntype) housed in the Muséum National d'Histoire Naturelle (Paris: PC) (Table 1). The taxon is very easily distinguishable from all other Cystoseira species through a panel of characters: (i) a single axis with spinose (when young) to smooth-tuberculate (when old) tophules, (ii) primary branches either slightly compressed, with an inconspicuous rib, and irregularly alternately branched in one plane, or cylindrical and branched in all directions, with spaced short spine-like appendages, and (iii) receptacles either intercalary basal, just above the tophule, or terminal, cylindrical and diffuse on branchlets.
The fate of Cystoseira granulata var. turneri and the confusing story of C. montagnei and C. spinosa
[START_REF] Meneghini | Alghe italiane e dalmatiche[END_REF] reported C. granulata var. turneri from Naples, Toulon, the northern Adriatic Sea and Dalmatia. J.G. [START_REF] Agardh | Algae maris Mediterranei et Adriatici, observationes in diagnosin specierum et dispositionem generum[END_REF], receiving several specimens of Cystoseira with spinose tophules from France (Cette, now Sète, and Marseille) and the northern Adriatic Sea (Trieste, Italy), concluded that they all belonged to Montagne's taxon. Considering C. granulata var. turneri to be quite distinct from C. granulata C. Agardh [currently C. usneoides (Linnaeus) M.Roberts [START_REF] Roberts | -Taxonomic and nomenclatural notes on the genus Cystoseira C[END_REF][START_REF] Spencer | -Typification of Linnaean names relevant to algal nomenclature[END_REF]] and from all the other species known at that time ('Species distinctissima, a Montagne primum bene descripta, sed cum C. granulata male confusa': 'very distinct species…, well described for the first time by Montagne, but with C. granulata badly confused'), J.G. Agardh raised the var. turneri to species rank under the name C. montagnei J. Agardh. However, it is worth noting that, in his description, J.G. Agardh mentioned the spinose tophules and the compressed branches but omitted the major diagnostic character of Montagne's taxon, i.e. the basal intercalary receptacles. Montagne (1846) followed J.G. Agardh and re-described his alga from Algeria under the name C. montagnei J. Agardh. When [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF] published his impressive revision of the genus Cystoseira, with descriptions of new species with spinose tophules (C. adriatica Sauvageau and C. spinosa Sauvageau), the first taxonomic questions about the real identity of C. montagnei J. Agardh were raised. Considering all the previous records of C. montagnei with, when possible, re-examination of samples, [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF][START_REF] Sauvageau | -A propos des Cystoseira[END_REF] concluded that C. montagnei J. Agardh differed from C. montagnei sensu Montagne and was a mixture of species. Except for specimens from Algeria that possessed all the characteristics of C. montagnei Montagne non J. Agardh (sic) [here: C. michaelae], all the other records from France, Corsica, Sardinia, Italy and the Adriatic Sea were doubtful because they were devoid of basal intercalary receptacles: C. montagnei sensu Valiante from Naples would probably be C. spinosa and C. montagnei sensu Hauck from the Adriatic Sea would probably be C. adriatica (subsequently synonymised with C. spinosa; see [START_REF] Cormaci | Observations taxonomiques et biogéographiques sur quelques espèces du genre Cystoseira C.Agardh[END_REF]). In the Adriatic Sea, [START_REF] Ercegović | -Fauna i Flora Jadrana. Jadranske cistozire. Njihova morfologija, ekologija i razvitak / Fauna et Flora Adriatica. Sur les Cystoseira adriatiques[END_REF] followed the conclusions of [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF][START_REF] Sauvageau | -A propos des Cystoseira[END_REF] and treated (i) C. montagnei J. Agardh sensu Hauck as a synonym of C. adriatica, (ii) 'C. montagnei J. Agardh (ex parte)' (sic) as a synonym of C. platyramosa Ercegović [currently C. spinosa var. compressa (Ercegović) Cormaci et al.], and (iii) 'C. montagnei J.
Agardh (pro parte)' (sic) and C. montagnei J. Agardh sensu Valiante as synonyms of C. spinosa. At Naples, [START_REF] Funk | Beiträge zur Kenntnis der Meeresalgen von Neapel: Zugleich mikrophotographischer Atlas[END_REF] did the opposite and recorded C. montagnei J. Agardh, with C. spinosa Sauvageau as synonym. According to [START_REF] Papenfuss | Taxonomic and nomenclatural notes on three species of brown algae. In: Travaux de Biologie végétale dédiés au Professeur P. Dangeard[END_REF], 'until J. G. Agardh's material has been examined and C. montagnei lectotypified, it will not be possible to settle the status of the species'. Our re-examination of the syntype of C. montagnei J. Agardh dating before 1842 and deposited in the J.G. Agardh Herbarium (LD) showed that no specimen originates from Algeria and confirmed the conclusions of Sauvageau and Ercegović: all the specimens of J.G. Agardh do not differ from C. spinosa Sauvageau (synonyms: C. adriatica Sauvageau and C. platyramosa Ercegović) [here: C. montagnei]. In Spain, C. montagnei was recorded from the Balearic Islands before being excluded from the flora ([START_REF] Gallardo | -A preliminary checklist of Iberian benthic marine algae[END_REF]; Ribera Siguan & Gómez Garreta, 1985; Gómez Garreta et al., 2001). The species was never recorded from Morocco (see [START_REF] Benhissoune | -A checklist of the seaweeds of the Mediterranean and Atlantic coasts of Morocco. II. Phaeophyceae[END_REF]), Libya, in spite of extensive research on the genus Cystoseira [START_REF] Nizamuddin | Cystoseira gerloffi, a new species from the coast of Libya[END_REF][START_REF] Nizamuddin | -A new species of Cystoseira C. Ag. (Phaeophyta) from the Eastern part of Libya[END_REF][START_REF] Nizamuddin | A caespitose-tophulose Cystoseira species from Tripoli, Libya[END_REF], and from Egypt [START_REF] Aleem | Marine algae of Alexandria, Egypt[END_REF]. Subsequently, C. montagnei was definitively considered a taxon inquirendum [START_REF] Ribera | -Check-list of Mediterranean seaweeds. I. Fucophyceae (Warming, 1884)[END_REF][START_REF] Furnari | Catalogue of the benthic marine macroalgae of the Italian coast of the Adriatic Sea[END_REF][START_REF] Furnari | -Biodiversità marina delle coste italiane: catalogo del macrofitobenthos[END_REF][START_REF] Giaccone | -Biodiversità vegetale marina dell'arcipelago 'Isole Eolie[END_REF][START_REF] Cormaci | -Flora marina bentonica del Mediterraneo: Phaeophyceae[END_REF][START_REF] Taşkin | The Mediterranean Cystoseira (with photographs)[END_REF][START_REF] Tsiamis | -Seaweeds of the Greek coasts. I. Phaeophyceae[END_REF], including in Algeria [START_REF] Ould- | -Checklist of the benthic marine macroalgae from Algeria. I. Phaeophyceae[END_REF]. See Table 2 for further records of C. montagnei, C. spinosa and C. adriatica. the Algerian coast. Currently, C. michaelae seems to be an endemic species restricted to Algeria and northern Tunisia (Cyrine Bouafif, pers. com.).
Cystoseira forests are highly impacted due to the cumulative effects of increasing human pressure (e.g. destruction of habitats, pollution, non-indigenous species, overfishing, coastal aquaculture and global warming). Losses have been reported throughout the Mediterranean Sea caused by habitat destruction, eutrophication and overgrazing by herbivores (fish, sea urchins), leading to a shift to lesser structural complexity, such as turf-forming seaweed assemblages or barren grounds where sea urchins are the drivers of habitat homogenization [START_REF] Pinedo | -Long-term decline of the populations of Fucales (Cystoseira, Sargassum) in the Albères coast (northwestern Mediterranean)[END_REF][START_REF] Blanfuné | Decline and local extinction of Fucales in the French Riviera: the harbinger of future extinctions?[END_REF]Blanfuné et al., 2016a,b). Protective measures should be taken so that the C. michaelae forests do not suffer the same decline as many Cystoseira forests of the Mediterranean Sea.
Specimens studied: H8287-8288 -Cap Caxine (36°49' 4" N & 2°57' 19" E), September 2014, 17 m depth, rocky substrates; H8289-8290 -Aïn Benian, close to the harbour (36°48' 45" N & 2°53' 29" E), August 2014, 19 m depth, rocky substrates; H8291-8294 and H8300-8301 -Tipaza, La Corne d'Or (36°35' 45" N & 2°26' 44" E), September 2014, 16 m depth, rocky substrates; H8295 -Aïn Benian, close to the harbour, September 2014, 18 m depth, rocky substrates; H8296-8297 -Aïn Benian, close to the harbour, April 2015, 14 m depth, rocky substrates; H8298 -Cap Caxine, April 2015, 16 m depth, rocky substrates; H8299 -Islets of Tipaza (36°35' 52" N & 2°27' 40" E), March 2015, 11 m depth, rocky substrates; H8302 -Islets of Tipaza, August 2015, 10 m depth, rocky substrates; H8303 -Bounetah Island (36°47' 46" N & 3°21' 19" E), August 2015, 14 m depth, rocky substrates; H8304 -Cap Caxine, August 2015, 13 m depth, rocky substrates; H8305 -Islets of Tipaza, October 2015, 12 m depth, rocky substrates; H8306 -Cap Caxine, November 2015, 13 m depth, rocky substrates (Fig. 1).
The species forms sparse algal forests (< 5 individuals.m-2), in association with other large macroalgae such as Cystoseira zosteroides (Turner) C. Agardh, Dictyopteris lucida M.A.Ribera Siguán et al., Dictyota cyanoloma Tronholm et al., Dictyota spp., Flabellia petiolata (Turra) Nizamuddin, Phyllariopsis sp., Sargassum sp., Zonaria tournefortii (J.V. Lamouroux) Montagne, and a rich sessile fauna dominated by Eunicella singularis (Esper, 1791) and large species of sponges, bryozoans and hydroids.
DISCUSSION AND CONCLUSIONS
A good fit between Cystoseira granulata C. Agardh var. turneri Montagne and the specimens collected in the Algiers region
Montagne (1838: 340-342) described from Algeria, 'prope Juliam Caesaream' (now Cherchell, ~80 km west of Algiers) a taxon he regarded as a variety of the Atlantic species Cystoseira granulata C. Agardh, as C. granulata var. turneri Montagne. Montagne (1846: 13-14, plate 4) re-described and nicely illustrated
Fig. 1. Locations with collection dates of Cystoseira michaelae Verlaque et al., nom. et stat. nov. (C. granulata C. Agardh var. turneri Montagne) in Algeria (Tipaza and Algiers regions) - Historical data: Herbarium specimens and references (light circles) and newly collected specimens (this work) (dark circles). *: Debray (1897): specimens not found.
Figs 2-4. Cystoseira michaelae Verlaque et al., nom. et stat. nov. from Algeria (newly collected specimens, this work). 2-3. Habit of specimens H8300 and H8291, respectively, from Tipaza, La Corne d'Or, September 2014. 4. Habit of an old individual, specimen H8299 from Islets of Tipaza, March 2015. Bars = 5 cm.
Figs 5-10. Cystoseira michaelae Verlaque et al., nom. et stat. nov. from Algeria (newly collected specimens, this work). 5-6. Apical views of axes showing spinose (black arrows) and smooth-tuberculate (white arrows) tophules (specimen H8300); bars = 5 mm. 7. Spinose and smooth-tuberculate tophules, specimen H8288 from Cap Caxine, September 2014; bar = 1 cm. 8. Complanate branch with inconspicuous midrib, specimen H8300; bar = 1 cm. 9. Complanate to cylindrical branch with short spine-like appendages, specimen H8291; bar = 1 cm. 10. Detail of a complanate branch with inconspicuous midrib, specimen H8300; bar = 1 cm.
Figs 11-20. Cystoseira michaelae Verlaque et al., nom. et stat. nov. from Algeria (newly collected specimens, this work). 11-12. Compact tuberculate-spinose basal intercalary receptacles (arrows) close to the tophule (arrow heads), specimen H8300; bars = 5 mm. 13. Diffuse spinose basal intercalary receptacles (arrows) close to the tophule (arrow head), specimen H8290 from Aïn Benian, August 2014; bar = 5 mm. 14. Transverse section of a female basal receptacle, specimen H8291; bar = 200 µm. 15. Transverse section of a male basal receptacle, specimen H8300; bar = 200 µm. 16. Transverse section of a female basal conceptacle, specimen H8291; bar = 100 µm. 17. Transverse section of a male basal conceptacle, specimen H8300; bar = 100 µm. 18-19. Diffuse spinose terminal receptacles, specimen H8291; bars = 5 mm. 20. Transverse section of a hermaphroditic terminal conceptacle, specimen H8291; bar = 100 µm.
Fig. 21. Illustration of Cystoseira michaelae Verlaque et al., nom. et stat. nov., as C. montagnei J. Agardh (sensu Montagne), in Montagne (1846, plate 4, figs 2a-h). a: Habit. b: Lower part of a branch with intercalary receptacles. c: Terminal receptacle. d: Detail of a terminal receptacle. e: Transverse section of a terminal receptacle showing the conceptacles. f: Oogonia. The original numbering of figures within the plate has been changed, but the original numbers have not been erased and can be seen in very small print.
[here: C. michaelae]. At the same time, he excluded all the records of [START_REF] Meneghini | Alghe italiane e dalmatiche[END_REF]. J.G. [START_REF] Agardh | Species genera et ordines algarum, seu descriptiones succinctae specierum, generum et ordinum, quibus algarum regnum constituitur[END_REF] considered Montagne's illustrations of C. montagnei [here: C. michaelae] as excellent ('Montagne (1846) p.43. tab. IV.2 (eximie !)'), and completed the distribution of the species ('Hab. In mari mediterraneo ad littora Occitaniae et galloprovinciae (ipse! = J.G. Agardh), ad Algeriam (Montagne!); in Adriatico ad Trieste (Biasoletto! et C. Agardh!) et Venetiam (Martens!); e Gadibus = Cadix (Cabrera!)'), but always without any mention of basal intercalary receptacles. This shows that (i) J.G. Agardh probably never saw any genuine specimen of Montagne's taxon, (ii) the J.G. Agardh concept of C. montagnei is much broader than that of Montagne and (iii) C. montagnei cannot be treated as a replacement name for C. granulata var. turneri (according to Art. 6.11 of ICN; McNeill et al., 2012), but as a new species based upon the Sète, Marseille and Trieste specimens (syntype housed in the Lund herbarium, LD). Kützing (1849) transferred C. montagnei to the genus Phyllacantha Kützing (currently a junior synonym of Cystoseira). Later on, he published the illustrations of P. montagnei and P. montagnei var. cirrosa Kützing on the basis of Algerian specimens sent by Montagne (Kützing, 1860) [here: C. michaelae]. Hauck (1885) recorded C. montagnei (with Phyllacantha gracilis Kützing, P. pinnata Kützing and P. affinis Kützing as synonyms), from the Adriatic Sea, with no mention of basal intercalary receptacles; the illustrations of these species of Phyllacantha in Kützing (1860) agree more with the Sauvageau (1912) C. spinosa [here: C. montagnei] than with Montagne's taxon [here: C. michaelae]. In his 'Catalogue des algues du Maroc, de l'Algérie & de la Tunisie', Debray (1897) recorded C. montagnei [here: C. michaelae] only from Algeria, close to Algiers [Cherchell, Matifou and Saint Eugène (now Bologhine)].
Cystoseira michaelae
Fig. 22. Lectotype of Cystoseira granulata C. Agardh var. turneri Montagne (here: C. michaelae Verlaque et al., nom. et stat. nov.), Algiers [Algeria], PC (Herbarium Montagne), barcode PC0043663, by courtesy of the MNHN-Paris ©; Collection C. and P. Monnard; labelled 'Cystoseira granulata L. Turner var. turneri Montagne -Alger n°397 -Com. Class. Monnard'. The lectotype is the top left specimen. Isolated branches (top right and bottom) possibly belong to the same individual.
Figs 23-26. Lectotype of Cystoseira montagnei J. Agardh, Cette (now Sète), France, May 1837, Botanical Museum, Lund University (Herbarium J. Agardh), barcode LD528, by courtesy of Lund University ©. 23. Habit; bar = 10 cm. 24. Detail of the label. 25. Detail of spinose tophules; bar = 5 mm. 26. Detail of the upper part of branchlets with receptacles; bar = 5 mm.
Figs 27-29. Lectotype of Cystoseira spinosa Sauvageau, Banyuls-sur-Mer, Pyrénées-Orientales, France, 6 May 1907, PC (Herbarium Général, collection C. Sauvageau), barcode PC0525446, by courtesy of the MNHN-Paris ©. 27. Habit; bar = 2 cm. 28. Detail of spinose tophules; bar = 5 mm. 29. Detail of the upper part of branchlets with receptacles; bar = 5 mm.
Table 1. Major herbarium specimens of Cystoseira examined. The syntypes of C. michaelae Verlaque et al. nom. et stat. nov. and of C. montagnei J. Agardh are indicated with an asterisk (coll.: collection; m.d.: missing data). Correct names according to the authors of the present study.
C. michaelae Verlaque et al. nom. et stat. nov.
C. granulata var. turneri Montagne
Table 2. Records of Cystoseira species with spinose tophules, referred to as C. spinosa, C. adriatica and C. montagnei, in the Mediterranean Sea (in addition to the records mentioned within the text), and probable correspondence with the taxonomic treatment in the present study (Cystoseira montagnei J. Agardh and C. michaelae Verlaque et al. nom. et stat. nov.)
Reference | Location | Name(s) and authority used by the author(s) | Correct name(s) according to the present treatment | Comments
Ardissone & Strafforello (1877) | Liguria (Italy) | C. montagnei J. Ag. | C. montagnei ? | No description or illustration
Piccone (1879) | La Galite (Tunisia) | C. montagnei J. Agardh | C. montagnei or C. michaelae ? | No description or illustration
Valiante (1883) | Gulf of Naples (Italy) | C. montagnei J. Ag. | C. montagnei | Description and illustrations corresponding well to C. spinosa Sauvageau [here C. montagnei]
Piccone (1884) | Sardinia (Italy) | C. montagnei J. Ag. | C. montagnei ? | No description or illustration
Rodríguez y Femenías (1889) | Balearic Islands (Spain) | C. montagnei J. Ag. | C. montagnei ? | No description or illustration
Petersen (1918) | La Galite (Tunisia) | C. montagnei Montagne and C. spinosa Sauvageau | C. michaelae and C. montagnei, respectively ? | As the 2 taxa are cited, the possibility that they could actually refer to C. michaelae and C. montagnei must be considered
Acknowledgements. The authors wish to thank the Herbarium LD and Dr Patrik Froden, Assistant Curator at the Botanical Museum of Lund University, for sending photographs of J.G. Agardh specimens of Cystoseira montagnei (syntype); Dr V. Bourgade of the Université Montpellier 2 and Prof. Bruno de Reviers and Dr B. Dennetière of the Muséum National d'Histoire Naturelle, Paris, for permission to consult collections and notebooks of C. Montagne, and for permission to reproduce the photographs of the lectotypes of C. michaelae (C. granulata var. turneri), C. adriatica and C. spinosa; Michèle Perret-Boudouresque for documentation assistance; Oussalah Adel for diving assistance; and Michael Paul for revising the English text. Many thanks are due to the anonymous reviewers for their comments and constructive criticism of the manuscript.
Algeria / Cystoseira michaelae / Cystoseira montagnei / Cystoseira spinosa /
(7 specimens that predate the protologue) (Table 1), because it is the most similar to the fertile specimen illustrated by Montagne (1846: Plate 4, Fig. 2a-h) (reproduced here as Fig. 21). Type locality: On the sheet of the lectotype: Algiers (Algeria). However, in the protologue, Montagne (1838) mentions 'propè Juliam Caesaream' (near Cherchell, ~80 km West of Algiers). We therefore consider that the type locality is Cherchell. Illustrations: Montagne (1846, Algiers, Plate 4, Fig. 2a-h
Cystoseira montagnei J. Agardh | 30,928 | [
"18869",
"173659",
"20177"
] | [
"191652",
"191652",
"191652",
"92874"
] |
00611620 | en | [
"sdv"
] | 2024/03/05 22:32:13 | 2011 | https://hal.science/hal-00611620/file/article.pdf | François Romagné
email: [email protected]
Eric Vivier
Natural killer cell-based therapies
Allotransplantation of natural killer (NK) cells has been shown to be a key factor in the control and cure of at least some hematologic diseases, such as acute myeloid leukemia or pediatric acute lymphocytic leukemia. These results support the idea that stimulation of NK cells could be an important therapeutic tool in many diseases, and several such approaches are now in clinical trials, sometimes with conflicting results. In parallel, recent advances in the understanding of the molecular mechanisms governing NK-cell maturation and activity show that NK-cell effector functions are controlled by complex mechanisms that must be taken into account for optimal design of therapeutic protocols. We review here innovative protocols based on allotransplantation, use of NK-cell therapies, and use of newly available drug candidates targeting NK-cell receptors, in the light of fundamental new data on NK-cell biology.
Introduction
Natural killer (NK) cells are the front-line troops of the immune system that help to keep you alive while your body marshals a specific response to viruses or malignant cells. They constitute about 10% of circulating lymphocytes [START_REF] Vivier | Innate or adaptive immunity? The example of natural killer cells[END_REF] and are on patrol constantly, always on the lookout for virus-infected or tumor cells, and when detected, they lock onto their targets and destroy them by inducing apoptosis while signaling danger by releasing inflammatory cytokines. By using NK cells that do not need prior exposure to their target, the innate immune system buys time for the adaptive immune system (T cells and B cells) to build up a specific response to the virus or tumor. Recent advances in understanding this process have led to the hope that NK cells could be harnessed as a therapy for cancers and other diseases, and we shall outline recent progress in understanding NK-cell biology that brings this approach into the realm of clinical trials.
Considerable advances have been made in understanding the molecular mechanisms governing NK-cell activation, which are assessed by the cells' ability to lyse different targets and/or secrete inflammatory cytokines such as interferon gamma (IFN-g) when in their presence. NK-cell activation is the result of a switch in the balance between the positive and negative signals provided by two main types of receptors. The receptors NKG2D, NKp46, NKp30, NKp44, the activating form of KIR (killer cell immunoglobulin-like receptor), known as KIR-S, and CD16 provide positive signals, triggering toxicity and production of cytokines. Although some of the ligands of these receptors remain unknown, the discovery of NKG2D ligands (MICA and the RAET1 family) and the NKp30 ligand (B7H6) suggests that such receptors recognize molecules that are seldom present on normal cells but are induced during infection or carcinogenesis. It is worth noting that CD16 recognizes antibody-coated target cells through their Fc portion, the receptor that mediates antibody-dependent cellular cytotoxicity, an important mechanism of action of therapeutic monoclonal antibodies (mAbs). The function of KIR-S, a family of activating receptors with a lot of homology with inhibitory KIRs (KIR-L) including the sharing of some ligands, remains largely unknown.
In the normal state of affairs, there are checks and balances to keep NK cells from attacking normal cells: activating ligands are rare on normal cells and there are inhibitory receptors on NK cells (Figure 1). The most studied inhibitory receptors are a family of immunoglobulin (Ig)-like receptors with two (KIR2DL1 and KIR2DL2/3) or three (KIR3DL1) Ig-like domains, and immunoreceptor tyrosine-based inhibition intracellular motifs (ITIMs), which transduce negative signals [START_REF] Vivier | Natural killer cell signaling pathways[END_REF]. The ligands of these receptors are well characterized and each consist of large families of major histocompatibility complex (MHC) class I gene variants (alleles) sharing structural determinants. KIR2DL1 and KIR2DL2/3 molecules recognize MHC-C alleles with a lysine or an asparagine at position 80 (collectively termed C2 alleles and C1 alleles, respectively), whereas KIR3DL1 recognizes MHC-B alleles sharing a Bw4 epitope, representing about half of the overall MHC-B alleles. Another receptor, NKG2A, recognizes HLA-E, an MHC class I-like molecule, loaded mostly with peptides derived from other class I molecules [START_REF] Parham | MHC class I molecules and KIRs in human history, health and survival[END_REF]. The expression of these molecules is variegated, and an individual NK cell will express either one or several inhibitory receptors. In combination, these receptors are sensors of the presence of MHC class I molecules on target cells and inhibitors of NK function. An integrated, although simplified, view of NK-cell activation is that NK cells quantitatively integrate positive and negative signals provided by cancer cells or infected cells, which express NK-stimulatory ligands de novo, while often down-modulating MHC class I to avoid detection by T cells.
There has been considerable interest in stimulation of NK-cell activity in recent years because of genetic studies, both in preclinical and clinical settings, showing that it can increase tumor immunosurveillance and eradicate established hematological diseases such as acute myeloid leukemia (AML), as well as some viruses [START_REF] Terme | Natural killer cell-directed therapies: moving from unexpected results to successful strategies[END_REF]. In mouse models, the expression of NK-stimulatory NKG2D ligands not only induces short-term rejection of tumors, but also induces a protective adaptive immune response [START_REF] Diefenbach | Rae1 and H60 ligands of the NKG2D receptor stimulate tumour immunity[END_REF]. Similarly, mice genetically deficient in NKG2D are more susceptible to spontaneous cancer than wild-type mice [START_REF] Guerra | NKG2Ddeficient mice are defective in tumor surveillance in models of spontaneous malignancy[END_REF]. In humans, the development of allotransplantation, a clinical procedure involving transplantation of genetically nonidentical cells (routinely used in AML), shed light on the role of NK cells and particularly the role of inhibitory receptors in this process. For certain donor-recipient pairs, genetic differences in MHC class I genes between the donor and the recipient cause the KIR-expressing cells
from the donor to not recognize their inhibitory MHC class I ligands in the recipient, leaving a subpopulation of donor NK cells free from inhibition, referred to as "alloreactive" NK cells. For example, a donor NK-cell subpopulation expressing only KIR2DL1 transplanted in C1/C1 homozygotes or KIR2DL2/3 NK cells transplanted in C2/C2 individuals do not find their cognate inhibitory ligands and become alloreactive. In haploidentical MHCmismatched hematopoietic stem cell transplantation (HSCT)-a situation where one MHC haplotype is similar between donor and recipient whereas the other is fully mismatched-that absence of inhibition due to the KIR-MHC incompatibility results in major differences in the clinical outcome [START_REF] Ruggeri | Effectiveness of donor natural killer cell alloreactivity in mismatched hematopoietic transplants[END_REF][START_REF] Ruggeri | Donor natural killer cell allorecognition of missing self in haploidentical hematopoietic transplantation for acute myeloid leukemia: challenging its predictive value[END_REF][START_REF] Ruggeri | Role of natural killer cell alloreactivity in HLA-mismatched hematopoietic stem cell transplantation[END_REF]. Clinical benefit correlates with the presence in the recipient of these disinhibited alloreactive NK cells from the donor, which are effective against recipient tumor cells. In viral infections, particular combinations of NK-activating receptors or KIR and their ligands are protective. Presence of the activating receptor KIR3DS1 and its putative ligand HLABw4-I80 has been shown to be a key factor in preventing HIV infection from leading to full-blown AIDS [START_REF] Alter | Differential natural killer cell-mediated inhibition of HIV-1 replication based on distinct KIR/HLA subtypes[END_REF][START_REF] Alter | HLA class I subtype-dependent expansion of KIR3DS1+ and KIR3DL1+ NK cells during acute human immunodeficiency virus type 1 infection[END_REF][START_REF] Carrington | KIR-HLA intercourse in HIV disease[END_REF]. In hepatitis C, KIR2DL3 homozygosity and HLA-C1 homozygosity are beneficial in both early eradication of infection and response to standard treatment (type I IFN + ribavirin) [START_REF] Khakoo | HLA and NK cell inhibitory receptor genes in resolving hepatitis C virus infection[END_REF][START_REF] Vidal-Castiñeira | Effect of killer immunoglobulin-like receptors in the response to combined treatment in patients with chronic hepatitis C virus infection[END_REF]. Homozygosity of KIR2DL3 and HLA-C1 alleles has been reported to lead to lower levels of NK inhibition than other pairs of KIR ligand combinations [START_REF] Ahlenstiel | Distinct KIR/HLA compound genotypes affect the kinetics of human antiviral natural killer cell responses[END_REF][START_REF] Moesta | Synergistic polymorphism at two positions distal to the ligand-binding site makes KIR2DL2 a stronger receptor for HLA-C than KIR2DL3[END_REF], suggesting that this underlies the enhanced response to hepatitis C. However, as KIR can also be expressed by some T-cell subsets, it remains to be firmly established whether NK cells are responsible for these effects. Nevertheless, the results of these studies suggest that we should extend the design of NK cell-based therapies to diseases other than cancer, such as infections and inflammation.
We will review here the recent advances that could help with the design of proper protocols and therapies and advance the use of NK cells in the clinic, starting with allotransplantation (transplantation between genetically different individuals of the same species). This will be followed by a discussion of the cell therapy procedures that are being developed, and the pharmacological agents that are currently or could be used in clinical trials to take advantage of the activity of NK cells.
Lessons from transplantation
Since the initial data from haploidentical HSCT, a number of retrospective studies in allotransplantation have been published, sometimes leading to differing clinical outcomes [START_REF] Witt | The influence of NK alloreactivity on matched unrelated donor and HLA identical sibling haematopoietic stem cell transplantation[END_REF]. These conflicting results may be explained in the light of new findings in NK-cell physiology and maturation.
Initially, alloreactive NK cells were simply defined by having KIRs that were only incompatible with the host MHC, and several studies have identified such alloreactive NK cells that are effective against AML blasts. However, it has been shown in normal mice that NK cells with only inhibitory receptors incompatible with self MHC class I alleles do arise physiologically (i.e., not after transplantation) but are partially functionally disabled [START_REF] Raulet | Self-tolerance of natural killer cells[END_REF]. Hence, NK cells undergo a complex maturation process that necessitates the interaction of their inhibitory receptors with their ligands, in order to be fully functional against class I negative cells (recognition of missing self; see [START_REF] Raulet | Self-tolerance of natural killer cells[END_REF] for review). The precise molecular mechanisms and localization of this process remain largely unknown in mice but were shown to be dynamic and reversible [START_REF] Elliott | MHC class I-deficient natural killer cells acquire a licensed phenotype after transfer into an MHC class I-sufficient environment[END_REF][START_REF] Joncker | Mature natural killer cells reset their responsiveness when exposed to an altered MHC environment[END_REF]. It has since been confirmed in humans that NK cells with only MHCincompatible KIR cells do exist in normal individuals but, as in mice, they are partially functionally disabled [START_REF] Anfossi | Human NK cell education by inhibitory receptors for MHC class I[END_REF][START_REF] Cooley | A subpopulation of human peripheral blood NK cells that lacks inhibitory receptors for self-MHC is developmentally immature[END_REF], indicating that human NK cells also undergo education much like mouse NK cells. This leads to a revision of the concept of alloreactivity: KIR mismatch is necessary to induce activity against MHC-positive cells (we will refer to these cells as potentially alloreactive) but not entirely sufficient, as they must have undergone an education process. It follows that functional assays must be performed to demonstrate activity and define truly alloreactive cells.
These new findings may lead to reconciliation of the conflicting data from allogeneic HSCT. Allogeneic HSCT (from a nonidentical donor) is a complex clinical procedure, with considerable differences in the nature and origin of the graft, as well as in pregraft treatments (conducted to remove recipient hematopoietic cells and thereby allow the graft to implant) and postgraft treatments (to prevent graft-versus-host disease [GVHD] caused by donor T cells). Generally, there are two main scenarios. In the first, haploidentical grafts consisting of high doses of highly purified donor CD34positive hematopoietic stem cells, with very few mature cells, are injected after very intense conditioning regimens of the host to avoid graft rejection (there is virtually no postgraft treatment as the graft is highly T cell-depleted) (Figure 2). Truly alloreactive NK cells have been consistently found ex vivo following such transplantation, in an activated state resulting from missing-self recognition, and this scenario is associated with an improved clinical outcome [START_REF] Moretta | Killer Ig-like receptor-mediated control of natural killer cell alloreactivity in haploidentical hematopoietic stem cell transplantation[END_REF]. Unfortunately, such haploidentical procedures also require profound immunosuppression of the host, and the treatmentrelated morbidities caused by infection are high, so such procedures are not used widely. In the second scenario, allogeneic HSCT can be matched except at a given HLA-B or HLA-C allele, and require much less conditioning pregraft, but more immunosuppressive treatment postgraft to avoid GVHD. Such protocols vary widely depending on the laboratories, both in terms of pregraft and postgraft treatments and cell content in the graft (mature cell content and origin of graft consisting of either bone marrow cells or mobilized peripheral cells). Not surprisingly, such protocols vary widely in the KIR mismatch effect, with outcomes to match: beneficial, neutral, or even pejorative. Taking into account the new findings on NK-cell physiology, the current prevailing hypothesis is that in haploidentical HSCT, the harsh conditioning regimen and high CD34-positive cell content allow the donor NK cells to mature with a recognition of the "self" MHC type on the donor hematopoietic cells, and therefore become truly alloreactive against residual recipient blast cells, whereas normal host tissues are spared because of lack of NK-stimulatory ligand expression [START_REF] Pende | Anti-leukemia activity of alloreactive NK cells in KIR ligand-mismatched haploidentical HSCT for pediatric patients: evaluation of the functional role of activating KIR and redefinition of inhibitory KIR specificity[END_REF][START_REF] Haas | NK-cell education is shaped by donor HLA genotype after unrelated allogeneic hematopoietic stemcell transplantation[END_REF]. In nonhaploidentical situations, education of NK cells on donor HLA may be lacking in some graft preparation and pregraft regimens, which might account for the neutral effects seen (cells remain potentially alloreactive). Conflicting results in nonhaploidentical situations [START_REF] Giebel | Survival advantage with KIR ligand incompatibility in hematopoietic stem cell transplantation from unrelated donors[END_REF][START_REF] Davies | Evaluation of KIR ligand incompatibility in mismatched unrelated donor hematopoietic transplants. 
Killer immunoglobulin-like receptor[END_REF] may also be explained by different treatments resulting in different T-cell levels in grafts and consequently different levels of GVHD [START_REF] Cooley | Donors with group B KIR haplotypes improve relapse-free survival after unrelated hematopoietic cell transplantation for acute myelogenous leukemia[END_REF]. This hypothesis is further supported by protocols where the graft origin is cord blood, a situation with few mature T cells in the graft, which results in a beneficial outcome [START_REF] Willemze | Eurocord-Netcord and Acute Leukaemia Working Party of the EBMT: KIR-ligand incompatibility in the graft-versus-host direction improves outcomes after umbilical cord blood transplantation for acute leukemia[END_REF].
In truly matched transplantation, there are no obvious reasons for alloreactive cells to develop as the MHC of donor and recipient are the same and the maturation of NK cells should spare all host cells (either normal or NK-stimulating ligand-expressing cells). Surprisingly, even in completely MHC-matched transplantation, in particular in T-depleted grafts, functionally alloreactive NK cells have been reported, with an improved outcome for patients homozygous for HLA-C1 or HLA-C2, for example [START_REF] Yu | Breaking tolerance to self, circulating natural killer cells expressing inhibitory KIR for non-self HLA exhibit effector function after T cell-depleted allogeneic hematopoietic cell transplantation[END_REF][START_REF] Sobecks | Survival of AML patients receiving HLA-matched sibling donor allogeneic bone marrow transplantation correlates with HLA-Cw ligand groups for killer immunoglobulin-like receptors[END_REF]. In the same vein, it has been recently demonstrated in a large retrospective study that the KIR genotype alone influences clinical outcome, with the presence of KIR2DL3 and/or absence of KIR2DL2 and KIR2DS2 being less favorable, opening the way for the selection of donors based on KIR genotype in matched allotransplantation [START_REF] Cooley | Donor selection for natural killer cell receptor genes leads to superior survival after unrelated transplantation for acute myelogenous leukemia[END_REF]. The functional basis of such observations is still incompletely understood: during NK-cell reconstitution from stem cells, KIR expression is variegated, and potentially alloreactive cells appear, but, as mentioned above, if such reconstitution was equivalent to normal NK-cell maturation, they should be functionally impaired and tolerant to self. It is possible that during hematopoietic reconstitution and in certain allograft protocols, the cytokine milieu, strength of inhibitory interaction and presence of different activating genes (depending on KIR genotype), and absence of T-cell interaction (T cell-depleted grafts) favor the maturation of truly alloreactive NK cells despite the presence of matched inhibitory receptors. Indeed, new studies describing NK-cell maturation point to the fact that the hyporesponsiveness of NK cells is very subtle and malleable, influenced by cytokines and probably genotype, and reversible [START_REF] Elliott | MHC class I-deficient natural killer cells acquire a licensed phenotype after transfer into an MHC class I-sufficient environment[END_REF][START_REF] Joncker | Mature natural killer cells reset their responsiveness when exposed to an altered MHC environment[END_REF]. In summary, although many studies strongly suggest the efficacy of KIR-mismatched NK cells, definitive studies are needed to optimize the clinical settings. We need a better understanding of NK-cell development and function after matched allogeneic transplantation, depending on the specific allotransplantation protocol used, to take full advantage of the alloreactive potential of NK cells.
Cell therapy protocols in development
The current view arising from the results of allotransplantation studies is that NK cells, and particularly allogeneic KIR-mismatched NK cells, are effective, at least in adult AML and pediatric acute lymphocytic leukemia, but that the effect may depend on NK-cell maturation/ activation state. One way to better control NK-cell functional status (as well as the ratio of NK cells to target cells) would be to generate large quantities of these cells in vitro and inject them either as a therapeutic regimen alone or after allotransplantation.
Historically, crude, short-term (1-2 days) interleukin (IL)-2-activated cells were used as graft material (lymphokine-activated killer [LAK] cells), and although this was enriched in NK cells, the LAK cells were mostly T cells, and the cellular content was poorly defined and variable (see [START_REF] Suck | Emerging natural killer cell immunotherapies: large-scale ex vivo production of highly potent anticancer effectors[END_REF] for review). Initial attempts to work with purified preparations of NK cells led to promising results, although with a limited number of patients. Autologous HSCT followed by injection of purified, short-term IL-2-stimulated KIR-mismatched NK cells in multiple myeloma patients destroyed multiple myeloma blasts in vitro, did not lead to graft failure, and the NK cells survived at least a few days [START_REF] Shi | Infusion of haploidentical killer immunoglobulin-like receptor ligand mismatched NK cells for relapsed myeloma in the setting of autologous stem cell transplantation[END_REF]. In a protocol not involving HSCT, purified short-term IL-2-activated haploidentical NK cells were injected into AML and other hematologic cancer patients after mild conditioning to avoid rejection of injected NK cells. This study showed that injected NK cells survived in the host for a few days and were well tolerated [START_REF] Miller | Successful adoptive transfer and in vivo expansion of human haploidentical NK cells in patients with cancer[END_REF]. Although the number of patients is still too limited to draw firm conclusions, encouraging clinical signs of activity were seen in the above protocols. As neither protocol reached any doselimiting toxicity, these findings suggest that it may be possible to inject higher cell numbers if cell sources or ex vivo expansion procedures improve.
These initial results have prompted several groups to embark on the large-scale expansion of highly purified, GMP ("good manufacturing practice") grade NK cells after longer-term in vitro expansion. NK-cell purification by magnetic beads is followed by IL-2 expansion with or without feeder cells. A protocol for the generation of single KIR-positive cells has also been designed but is not yet ready to be applied to large-scale clinical trials [START_REF] Siegler | Good manufacturing practice-compliant cell sorting and large-scale expansion of single KIR-positive alloreactive human natural killer cells for multiple infusions to leukemia patients[END_REF]. Functional studies using NK cells against leukemic cells or NK infusion in xenogeneic models have demonstrated, however, that the cells generated are very active. Some of the protocols have reached small-scale phase I clinical trials and have demonstrated that high numbers of these infused NK cells are safe in humans [START_REF] Barkholt | Safety analysis of ex vivo-expanded NK and NKlike T cells administered to cancer patients: a phase I clinical study[END_REF][START_REF] Fujisaki | Expansion of highly cytotoxic human natural killer cells for cancer cell therapy[END_REF].
The current caveats of such protocols are the complexity of the procedures required, which would make it difficult to scale up to the multicenter clinical studies necessary for larger phase II trials. Indeed, successful transfer of cell therapy protocols (compliant with regulatory standards) to industry and large clinical trials requires a centralized cell culture factory and the use of frozen cells. NK-cell culture protocols do not yet meet this benchmark, but further refinements should solve these issues [START_REF] Berg | Clinical-grade ex vivo-expanded human natural killer cells up-regulate activating receptors and death receptor ligands and have enhanced cytolytic activity against tumor cells[END_REF]. Moreover, alloreactive NK cells should be the most potent cells in cell therapy protocols, and their source can be a problem outside the context of allotransplantation. This alone may prevent the use of such cells in larger trials. The development of anti-KIR therapeutic mAbs (see below and Figure 2) that block NK inhibition may allow the use of autologous cells as an easier source of cell material, by inducing alloreactivity of NK cells that would otherwise be MHC-tolerant.
Another important problem to be solved is the fate of ex-vivo expanded NK cells after infusion. Indeed, if NK cells from allogeneic donors are used, they may be rejected by the host immune system despite the mild immunosuppression used in some protocols. Even in cases of autologous transplantation or allotransplantation where donor NK cells are not rejected, they may be short lived, and protocols usually involve daily injection of IL-2 to sustain NK levels and activation status [START_REF] Shi | Infusion of haploidentical killer immunoglobulin-like receptor ligand mismatched NK cells for relapsed myeloma in the setting of autologous stem cell transplantation[END_REF][START_REF] Miller | Successful adoptive transfer and in vivo expansion of human haploidentical NK cells in patients with cancer[END_REF]. IL-2 injection may increase NK-cell lifespan and activity (although it has not been formally tested, by comparison with untreated cells) but also can generate outgrowth of Treg cells that may hamper the overall response to the tumor as shown in pilot clinical trials [START_REF] Barkholt | Safety analysis of ex vivo-expanded NK and NKlike T cells administered to cancer patients: a phase I clinical study[END_REF][START_REF] Geller | A phase II study of allogeneic natural killer cell therapy to treat patients with recurrent ovarian and breast cancer[END_REF]. The very recent availability of GMP grade IL-15, now in phase I clinical trials by the National Institutes of Health (NIH) [START_REF] Geller | A phase II study of allogeneic natural killer cell therapy to treat patients with recurrent ovarian and breast cancer[END_REF] may circumvent the use of IL-2, providing a better activation signal for NK cells, both in vitro and in vivo, without promoting Treg expansion.
Pharmacological agents in development to modulate NK activity
Although cell therapy protocols can be very useful to characterize NK-cell activity and, if successful, can translate into commercially available products, they remain very difficult and costly to develop on a large scale. It should be easier to move new drugs forward now that therapeutic agents are being tested that aim to stimulate NK-cell activity.
The most advanced compound specifically targeting NK cells is a blocking anti-KIR mAb. This mAb, 1-7F9, recognizes KIR2DL1, 2, and 3 and therefore blocks the inhibition imposed by virtually all MHC class I C alleles, allowing it to be tested in all patients whatever their KIR and HLA genotypes. Building on the results of allotransplantation in AML and multiple myeloma patients [START_REF]gov -A phase I study of intravenous recombinant human IL-15 in adults with refractory metastatic malignant melanoma and metastatic renal cell cancer[END_REF], as well as preclinical data showing reconstitution of NK-cell lysis of MHC-positive multiple myeloma and AML blasts in vitro and in preclinical models [START_REF] Kröger | Clinical Trial Committee of the British Society of Blood and Marrow Transplantation and the German Cooperative Transplant Group: Comparison between antithymocyte globulin and alemtuzumab and the possible impact of KIR-ligand mismatch after dose-reduced conditioning and unrelated stem cell transplantation in patients with multiple myeloma[END_REF], clinical trials with the 1-7F9 mAb in both diseases have been initiated. Phase I results showed good tolerability in both scenarios (Vey et al., manuscript in preparation), paving the way for phase II trials that are now ongoing. While the monoclonal 1-7F9 should be valuable to block the inhibition of NK cells, other products are now available that can enhance the activation of NK cells. One of the most promising of these is IL-15, which is a key cytokine for NK cells.
In the same vein, it has been shown by several groups that certain drugs, already available in the therapeutic arsenal, can increase the expression of NK-activating ligands on the tumor, and therefore increase NK tumor lysis in vivo. Initially, it was shown that some chemotherapies (5-FU, Ara-C, cisplatin) and radiation or ultraviolet therapy targeting the DNA damage pathway can increase the expression of NK-stimulatory ligands of the NKG2D receptor on tumor cells, and lead to enhanced NK lysis of tumors [START_REF] Romagné | Preclinical characterization of 1-7F9, a novel human anti-KIR receptor therapeutic antibody that augments natural killer-mediated killing of tumor cells[END_REF]. More recently, proteasome inhibitors, such as bortezomib, which is now registered for the treatment of multiple myeloma, have also been shown to induce NK-stimulatory ligands [START_REF] Gasser | The DNA damage pathway regulates innate immune system ligands of the NKG2D receptor[END_REF][START_REF] Ames | Sensitization of human breast cancer cells to natural killer cell-mediated cytotoxicity by proteasome inhibition[END_REF]. Finally, lenalidomide (Revlimid), a drug which has been shown to be active in multiple myeloma and to have promising preliminary results in other hematological malignancies, has been shown, in addition to having a direct antitumor effect, to upregulate NK-cell function through induction of cytokines [START_REF] Butler | Proteasome regulation of ULBP1 transcription[END_REF] and to induce NK-stimulatory ligands on tumor cells. Some of these drugs, such as bortezomib or chemotherapies [START_REF] Davies | Thalidomide and immunomodulatory derivatives augment natural killer cell cytotoxicity in multiple myeloma[END_REF][START_REF] Markasz | Effect of frequently used chemotherapeutic drugs on the cytotoxic activity of human natural killer cells[END_REF], can also have inhibitory effects on NK cells, so their use must be carefully evaluated, but their clinical availability opens the door to multiple combination possibilities, either sequentially or concomitantly, with cell therapy and anti-KIR antibodies. Such combinations are beginning to be tested in the clinic (phase I/II for anti-KIR in combination with lenalidomide, and cell therapies in combination with bortezomib [START_REF] Berg | Clinical-grade ex vivo-expanded human natural killer cells up-regulate activating receptors and death receptor ligands and have enhanced cytolytic activity against tumor cells[END_REF]).
Future directions
Since the initial demonstration that NK-cell therapies are effective in some contexts, there has been a lot of progress in refining protocols, as well as new approaches such as the injection of highly purified, functionally controlled NK cells. Also, new drugs allow the in-vivo manipulation of NK cells by targeting their inhibitory receptors or activating receptors (through drugs driving the expression of ligands of activating receptors on tumor cells). All these tools have now been developed to a point where they can be tested in clinical trials either alone or in combination. Because recent advances have increased our understanding of NK maturation and function, such clinical trials can now be monitored for NK-cell activity and represent attractive possibilities to be translated into successful treatments in the clinic.
Figure 1. Natural killer (NK) cell recognition strategies. NK cells sense interacting cells via their activating and inhibitory receptors. The density of ligands for these receptors dictates whether or not this interaction will lead to NK-cell activation and hence cytotoxicity and/or cytokine secretion. MHC, major histocompatibility complex; KIR, killer cell immunoglobulin-like receptor.
Figure 2. Natural killer (NK) cell-based therapies
Acknowledgements
The authors thank Corinne Beziers-Lafosse (CIML) for excellent graphic assistance and CIML's antibody and cytometry facilities. EV's lab is supported by grants from the European Research Council (ERC advanced grants), Agence Nationale de la Recherche (ANR), Ligue Nationale Contre le Cancer (Equipe labellisée 'La Ligue'), as well as by institutional grants from INSERM, CNRS, and Université de la Méditerranée to the CIML. EV is a scholar from the Institut Universitaire de France.
Abbreviations
AML, acute myeloid leukemia; GMP, good manufacturing practice; GVHD, graft-versus-host disease; HSCT, hematopoietic stem cell transplantation; IFNγ, interferon gamma; Ig, immunoglobulin; IL, interleukin; KIR, killer cell immunoglobulin-like receptor; LAK, lymphokine-activated killer; mAb, monoclonal antibody; MHC, major histocompatibility complex; MICA, MHC class Irelated chain A; NK, natural killer; RAET1, retinoic acid early transcripts-1.
Competing interests
FR and EV are co-founders and shareholders of Innate Pharma. | 33,949 | [
"17342"
] | [
"111290",
"182245",
"50970"
] |
01765072 | en | [
"sdv",
"sde"
] | 2024/03/05 22:32:13 | 2017 | https://amu.hal.science/hal-01765072/file/Roux_et_al_2017.pdf | David Roux
email: [email protected]
Osama Alnaser
Elnur Garayev
Béatrice Baghdikian
Riad Elias
Philippe Chiffolleau
Evelyne Ollivier
Sandrine Laurent
Mohamed El Maataoui
Huguette Sallanon
Ecophysiological and phytochemical characterization of wild populations of Inula montana L. (Asteraceae) in Southeastern France
Keywords:
Inula montana is a member of the family Asteraceae and is present in substantial numbers in Garrigue country (calcareous Mediterranean ecoregion). This species has traditionally been used for its anti-inflammatory properties, in the same way as Arnica montana. In this study, three habitats within Luberon Park (southern France) were compared regarding their pedoclimatic parameters and the resulting morpho-physiological response of the plants. The data showed that I. montana grows in south-facing poor soils and tolerates large altitudinal and temperature gradients. The habitat conditions at high elevation appear to affect mostly the morphology of the plant (organ shortening). Although the leaf contents of total polyphenols and of the flavonoid subclass essentially followed a seasonal pattern, many sesquiterpene lactones were shown to accumulate first at the low-elevation growing sites that suffered drought stress (draining topsoil, higher temperatures and the presence of a drought period during the summer). This work highlights the biological variability of I. montana related to the variation of its natural habitats, which is promising for the future domestication of this plant. The manipulation of environmental factors during cultivation is of great interest, offering an innovative perspective for modulating and exploiting the phytochemical production of I. montana.
Introduction
The sessile living strategy of terrestrial plants, anchored to the ground, forces them to face environmental variations. Plants have developed complex responses to modify their morpho-physiological characteristics to counteract both biotic and abiotic factors [START_REF] Suzuki | Abiotic and biotic stress combinations[END_REF][START_REF] Rouached | Plants coping abiotic and biotic stresses: a tale of diligent management[END_REF]. Altitude is described as an integrative environmental parameter that influences phytocoenoses in terms of species distribution, morphology and physiology [START_REF] Liu | Influence of environmental factors on the active substance production and antioxidant activity in Potentilla fruticosa L. and its quality assessment[END_REF]. It reflects, at minimum, a mixed combination of temperature, humidity, solar radiation and soil type [START_REF] Körner | Alpine Plant Life[END_REF]. In addition, the plant age, season, microorganism attacks, competition, soil texture and nutrient availability have been proven to strongly influence the morphology and the secondary metabolite profile of plants [START_REF] Seigler | Plant Secondary Metabolism[END_REF]. Altitudinal gradients are attractive for eco-physiological studies to decipher the mechanisms by which abiotic factors affect plant biological characteristics and how those factors influence species distribution [START_REF] Graves | A comparative study of Geum rivale L. and G. urbanum L. to determine those factors controlling their altitudinal distribution II. Photosynthesis and respiration[END_REF]. For instance, a summer increase of nearly 10% in solar irradiance per 1000 m in elevation has been demonstrated in the European Alps. This increase was also characterized by an 18% increase in UV radiation [START_REF] Blumthaler | Increase in solar UV radiation with altitude[END_REF]. Considering the reliefs of the Mediterranean basin, plants must confront both altitude and specific climate, namely high summer temperatures, infrequent but abundant precipitation, and wind [START_REF] Bolle | Mediterranean Climate: Variability and Trends[END_REF]. Moreover, plants that live at higher elevation must also survive winter conditions characterized by low temperatures and high irradiance. All together, these factors force the plants to develop dedicated short-and long-term phenological, morphological and physiological adaptations [START_REF] Kofidis | Combined effects of altitude and season on leaf characteristics of Clinopodium vulgare L. (Labiatae)[END_REF]. Many of these adjustments are protective mechanisms against photoinhibition of photosynthesis [START_REF] Guidi | Non-invasive tools to estimate stress-induced changes in photosynthetic performance in plants inhabiting Mediterranean areas[END_REF][START_REF] Sperlich | Seasonal variability of foliar photosynthetic and morphological traits and drought impacts in a Mediterranean mixed forest[END_REF], and most of them involve the synthesis of secondary metabolites [START_REF] Ramakrishna | Influence of abiotic stress signals on secondary metabolites in plants[END_REF][START_REF] Bartwal | Role of secondary metabolites and brassinosteroids in plant defense against environmental stresses[END_REF].
The genus Inula (Asteraceae) includes more than 100 species that are widely distributed in Africa and Asia and throughout the Mediterranean region. These plants have long been collected or cultivated around the world for their ethnomedicinal uses. They synthesize and accumulate significant amounts of specific terpenoids and flavonoids. Secondary metabolites (including sesquiterpene lactones) from Inula spp. have shown interesting biological activities such as antitumor, anti-inflammatory, antidiabetic, bactericidal, antimicrobial and antifungal activities, and these plants have also been used for tonics or diuretics [START_REF] Reynaud | Free flavonoid aglycones from Inula montana[END_REF][START_REF] Seca | The genus Inula and their metabolites: from ethnopharmacological to medicinal uses[END_REF].
The species Inula montana is a hairy rhizomatous perennial (hemicryptophyte) herb with a 10-40 cm circumference and solitary capitulum (5-8 cm diameter) of yellow florets (long ligules) positioned at the top of a ≈20 cm floral stem. It grows at altitudes of 50-1300 m from eastern Italy to southern Portugal and is frequent in Southeast France. This calcicolous and xerophilous plant can be locally abundant, particularly in the Garrigue-type lands [START_REF] Gonzalez Romero | Phytochemistry and pharmacological studies of Inula montana L[END_REF][START_REF] Girerd | Flore du Vaucluse: troisième inventaire, descriptif, écologique et chorologique[END_REF][START_REF] Botanica | Tela Botanica[END_REF]. In the south of France, I. montana was incorrectly called "Arnica" because it was used in old traditional medicine as an alternative drug to the well-known Arnica montana [START_REF] Reynaud | Free flavonoid aglycones from Inula montana[END_REF]. Due to herbivory pressure, loss of habitat and the fact that it is mainly harvested from the wild, A. montana is cited in the Red List of Threatened Species (IUCN). In Europe, more than 50 t of dried flowers are traded each year [START_REF] Sugier | Propagation and introduction of Arnica montana L. into cultivation: a step to reduce the pressure on endangered and highvalued medicinal plant species[END_REF]. Although many efforts are currently underway to domesticate A. montana and to correctly manage its habitats, the opportunity to find an alternative plant would therefore be of considerable interest.
In this context, we have developed a scientific program that aims to rehabilitate, domesticate and test I. montana as an efficient pharmaceutical substitute to A. montana. We have recently published a phytochemical investigation of the contents of leaves and flowers of I. montana [START_REF] Garayev | New sesquiterpene acid and inositol derivatives from Inula montana L[END_REF]. Those data showed new compounds with associated anti-inflammatory activity. Here, we present the results of an ecophysiological study of I. montana that aimed to analyze the putative correlations between its morphology, its phytochemical production (with a focus on sesquiterpene lactones) and the characteristics (edaphic and climatic) of its natural habitats. It was expected that I. montana would face various abiotic stresses according to the large altitude gradient of its habitats. Assessing the response of the plant to its natural growing conditions will be helpful for its future domestication. In addition, a successful identification of environmental levers that could modulate the phytochemical production of this medicinal plant would be of great interest.
Material and methods
Luberon park
The present study was focused on I. montana populations growing in the French "Parc Naturel Régional du Luberon" (Luberon Park) that is located in southeastern France. The park (185,000 ha) is characterized by medium-sized mountains (from 110 to 1125 m high; mean altitude ≈680 m) that stretch from west to east over the "Vaucluse" and the "Alpes-de-Haute-Provence" regions (Supplemental file). Although the overall plant coverage of Luberon Park belongs to the land type "Garrigue" (calcareous low scrubland ecoregion), there are two significant climatic influences: first, the north-facing shady side is characterized by a cold and humid climate that supports the development of deciduous species such as the dominant white oak (Quercus pubescens). Second, the sunny south-facing side receives eight to ten times more solar radiation. On this side, the vegetation is typically Mediterranean with a majority of green oak (Quercus ilex), Aleppo pine (Pinus halepensis), kermes oak (Quercus coccifera) and rosemary (Rosmarinus officinalis). The ridges of the Luberon Park suffer from extreme climatic variations: windy during all seasons, intense summer sun, cold during the winter, dry atmosphere and spontaneous and intense rains. These conditions limit the spectrum of plant species to those most resistant to these conditions, such as the common juniper (Juniperus communis) and boxwood (Buxus sp.) [START_REF] Gressot | Le parc naturel régional du Luberon[END_REF].
Sites of interest and sampling
Inula montana is present in highly variable amounts over Luberon Park. By exploring the south-facing sides we selected three sites of interest: Murs, Bonnieux and Apt (Supplemental file). At these locations, I. montana forms several small, sparse and heterogeneous groups of tens of plants per hectare. These sites were also selected for their similar presentation as grassy clearings (area from 4 to 9 ha) and for their uniform flatness and slight inclination (≈7%). The linear distance between the 3 sites is 21.4 ± 2 km. The Apt site is 500-600 m higher than both other sites. A preliminary phenological survey showed that the vegetative growth of I. montana extended from early April to late October, consistent with the hemicryptophytic strategy of the plant. Mid-June corresponded to the flowering period, which lasted ≈10 days. Accordingly, samples were synchronously collected from the three habitats at four consecutive periods during 2014: early April (early spring), mid-May (late spring), mid-June (summer) and late October (autumn).
Climatic and edaphic data
The measurements of climate characteristics (standard weather stations, 1.5 m height above soil surface) were accessed from the French weather data provider (meteofrance.fr, 2014, France) and supplemented with agronomic weather station data near each site (climatedata.org, 2014, Germany). The satellite-based solar radiation measurements (Copernicus Atmosphere Monitoring Service (CAMS)) were obtained from the solar radiation data service (soda-pro.com, 2014, MINES ParisTech, France). The measurements of the physical properties of the soils and of the chemical content of the aqueous extracts (cf. Table 1) were subcontracted to an ISO certified laboratory (Teyssier, Bordeaux, France) according to standards. Briefly, 10 g of raw soil were milled, dried (12 h at 45 °C, except for nitrogen determination) and sifted (2 mm grid). Samples were then stirred into 50 ml of demineralized water for 30 min at 20 °C and filtered. Organic matter was measured after oxidation in potassium dichromate and sulfuric acid. NH 4 and NO 3 were extracted with 1 M KCl. Organic matter, NH 4 , NO 3 and water-extractable PO 4 were then determined by colorimetric methods. K, Mg, Ca, Fe, Cu, Mn, Zn and Bo were determined by atomic absorption spectroscopy.
Determination of growth parameters
Plant growth for each period evaluated was determined by using several parameters: fresh and dry weight, water content, leaf area, and height of floral stem at the flowering stage. For each period, ten plants were collected randomly from each of the three sites (Luberon Park). The fresh weight was measured immediately after harvest, and the leaves were scanned to measure their area with the ImageJ software (National Institutes of Health, USA). The collected plants were subsequently dried (80 °C, 24 h) to calculate the water content. Glandular trichome density was assessed on 10 leaves randomly collected at the flowering period from 10 different plants per site. This assessment was performed using a stereomicroscope (Nikon ZX 100, Canagawa, Japan) equipped with fluorescence (excitation 382 nm, emission 536 nm) and digital camera (Leica DFC 300 FX, Wetzlar, Germany). The captured images allowed the quantification of glandular trichomes using ImageJ.
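The water content values reported below follow from these fresh and dry weights. As a quick illustration (the exact formula is not stated in the text, so the percentage-of-fresh-weight convention used here is an assumption, and the numbers are made up), a minimal sketch:

```python
# Minimal sketch, not the authors' code: water content from fresh weight (FW)
# and oven-dry weight (DW); expressing it as a percentage of FW is an assumed convention.
def water_content_pct(fresh_g: float, dry_g: float) -> float:
    return 100.0 * (fresh_g - dry_g) / fresh_g

print(round(water_content_pct(fresh_g=12.4, dry_g=3.9), 1))  # 68.5 (illustrative values only)
```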
Chlorophyll-a fluorescence measurements
Chlorophyll-a fluorescence was measured in vivo using a portable Handy-PEA meter (Hansatech, Kings Lynn, UK) on 20 plants arbitrarily selected three times per day: in the morning (10:00), at midday (12:00) and in the afternoon (14:00). This was done for each considered time period (season) and for each of the three I. montana habitats. The fluorescence parameters calculated were the maximum quantum yield of primary photosystem II photochemistry (Fv/Fm) and the performance index (PI) according to the OJIP test [START_REF] Strasser | The fluorescence transient as a tool to characterize and screen photosynthetic samples[END_REF]. Both parameters are plant stress indicators and provide indications of the overall plant fitness. The ratio (Fv/Fm) between variable chlorophyll fluorescence (Fv = Fm -F0) and maximum fluorescence (Fm) is the most used parameter to assess plant stress. Initial fluorescence (F0) is obtained from dark adapted samples and maximum fluorescence (Fm) is measured under a saturation pulse [START_REF] Maxwell | Chlorophyll fluorescence -a practical guide[END_REF][START_REF] Rohaçek | Chlorophyll fluorescence parameters: the definitions, photosynthetic meaning, and mutual relationships[END_REF]. PI is an integrative parameter that reflects the contribution to photosynthesis of the density of reaction centers and both the light and the dark reactions [START_REF] Poiroux-Gonord | Metabolism in orange fruits is driven by photooxidative stress in the leaves[END_REF]. All of the parameters were calculated from the measured fluorescence of leaves under saturating pulsed light (1 s at 3500 μmol m -2 s -1 ) after 20 min adaptation to the dark.
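As a worked illustration of the first indicator just defined (made-up fluorescence values, not data from the study; the performance index PI combines several OJIP-derived terms and is not reproduced here):

```python
# Fv/Fm as defined above: Fv = Fm - F0, both measured on dark-adapted leaves.
def fv_fm(f0: float, fm: float) -> float:
    return (fm - f0) / fm

print(fv_fm(f0=400.0, fm=2000.0))  # 0.8, within the 0.75-0.85 range typical of unstressed plants
```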
Total polyphenol and flavonoid contents
Harvested leaves were air dried on absorbent paper at room temperature for 3 weeks. The samples were prepared by maceration at room temperature for 96 h in 20 ml of 50% ethanol (v/v) (Carlo Erba, Italy). This step was followed by ultrasonic extraction for 30 min. The samples were then filtered into a 20 ml volumetric flask and adjusted to volume with the same solvent. The total polyphenol content was determined according to paragraph 2.8.14 of the current European Pharmacopoeia (Ph. Eur, 2017): the absorbance was measured at 760 nm (Shimadzu 1650pc, Japan), and the results are expressed as pyrogallol (Riedel-de-Haën, Germany) equivalents in percent (g/100 g of dried plant sample). The total flavonoid content was determined according to the aluminum chloride colorimetric method of monograph number 2386 (safflower flower) from the current European Pharmacopoeia. The absorbance was measured at 396 nm (Shimadzu 1650pc), and the results are expressed as the percentage of total flavonoids relative to luteoline (C 15 H 10 O 6 ; Mr 286.2).
High-performance liquid chromatography (HPLC) analyses
The extraction was performed by mixing 10 g of dried leaves with 100 ml of CH 2 Cl 2 (Carlo Erba, Italy) and introducing the mixture into a glass column in order to extract compounds by percolation with dichloromethane. After 18 h of maceration, 100 ml of dichloromethane extract was collected and evaporated to dryness. Next, 10 mg of dried extract were dissolved in 5 ml of methanol (Carlo Erba) and centrifuged. Then, 4 ml of the supernatant was brought to a final volume of 10 ml with distilled water. The solution was filtered through a 0.45-μm membrane. The analyses were performed using an Agilent 1200 series apparatus (G1379A degasser, G1313A autosampler, G1311A quaternary pump, G1316A column thermostat and G1315 B diode array detector (DAD)) (Agilent, Germany) with a Luna C18 adsorbent (3 μm, 150 mm × 4.6 mm) (Phenomenex, USA) and a SecurityGuard C18 column (3 mm ID × 4 mm cartridge) (Phenomenex). Instrument control, data acquisition and calculation were performed with ChemStation B.02.01. (Agilent). The mobile phase consisted of 52% MeOH (Carlo Erba) and 48% water (Millipore, Germany), and the pH of the mobile phase was 5.5. The flow rate was 1.0 ml/min. The detector was operated at 210 nm (sesquiterpene lactones absorption wavelength), and peaks were identified according to [START_REF] Garayev | New sesquiterpene acid and inositol derivatives from Inula montana L[END_REF]. The injection volume was 20 μl.
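For clarity, the dilution arithmetic implied by this sample preparation can be followed step by step (a simple check, assuming no losses during centrifugation and filtration):

```python
# Dilution arithmetic for the HPLC sample preparation described above.
stock = 10.0 / 5.0                       # 10 mg dried extract in 5 ml methanol -> 2.0 mg/ml
injected_solution = stock * 4.0 / 10.0   # 4 ml of supernatant brought to 10 ml -> 0.8 mg/ml
mass_injected_ug = injected_solution * 0.020 * 1000.0  # 20 ul injection -> 16 ug of extract
print(injected_solution, mass_injected_ug)
```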
Statistical analysis
The principal component analysis (PCA) and non-parametric test were performed using R (R Foundation, Austria). For multiple comparisons, the post hoc Kruskal-Wallis-Dunn test with the Bonferroni adjustment method was used. The R libraries used were factomineR, PMCMR, and multcompView. The data are displayed as the means ± standard error of the mean and were considered significant at p < 0.05.
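The study itself used R (factomineR, PMCMR, multcompView); an analogous minimal sketch in Python, with made-up groups, would look like the following (the Dunn post hoc test with Bonferroni adjustment used in the paper is not part of SciPy and is omitted here):

```python
# Kruskal-Wallis test across the three sites; values are purely illustrative.
from scipy.stats import kruskal

murs     = [0.78, 0.80, 0.79, 0.81]
bonnieux = [0.76, 0.77, 0.78, 0.75]
apt      = [0.82, 0.83, 0.81, 0.84]
h, p = kruskal(murs, bonnieux, apt)
print(f"H = {h:.2f}, p = {p:.4f}")  # significance threshold used in the paper: p < 0.05
```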
Results
Pedoclimatic characterization of I. montana habitats
Among the three Luberon Park sites assessed, Murs and Bonnieux showed a similar climatic pattern in terms of temperature and precipitation (Fig. 1). In addition, the 20-year-data suggested that both of these I. montana habitats experienced a drought period centered on July (1-2 months long). The Apt habitat, which is 500-600 m higher than the two other sites (Supplemental file), showed a lower mean temperature and higher precipitation. In addition, Apt notably displayed the absence of a drought period, according to the averaged data (Fig. 1), but it showed drier air throughout the year (Table 1).
The 3-year (2013-2015) satellite-based measurement of the global solar irradiation on the horizontal plane at ground level (GHI) (Fig. 2) showed a strong increase in irradiance from January to June, a stable amount of radiation from June to July and then a strong decrease until December. When investigating the irradiance in detail for the 3 I. montana populations, no difference was observed (GHI). However, when irradiance was estimated under clear sky, namely by virtually removing the clouds (Clear-Sky GHI), it appeared that the Apt site received ≈3% higher solar irradiation from May to July. Taken together, these results indicate that the cloudier Apt weather compensates on average for the higher solar irradiation at this altitude.
Considering the physical characteristics of the topsoil, Apt appeared clayey and loamy, whereas Murs and Bonnieux were much richer (6-12 times) in sand (Table 1). Concerning the chemical characteristics, the analysis of the topsoil aqueous extracts showed that all three growing sites appeared equally poor (Table 1). The Apt topsoil showed slightly lower levels of NH 4 , NO 3 , K, Ca, Mn and Zn than either of the other sites and also showed the lowest pH.
Impact of the geographic location and seasonal progress on I. montana morphology and physiology
Until autumn, I. montana plants from the Apt population showed less leaf blade surface area than leaves from the other sites (Fig. 3A). All three habitats displayed an intensive early-spring growth period, but during late spring, the leaf surface area was 45% lower at the Apt site than at the other sites. However, at Apt, the leaf area was quite stable after this time and for the remaining period, but the leaf surface decreased progressively from late spring to autumn at the two lower-altitude sites (Murs and Bonnieux). There was no significant variation in the number of leaves during the season, with the exception of a slight difference during summer between Bonnieux and the two other habitats (Fig. 3B).
In addition, the geographic location of I. montana habitats seemed to influence both the length of the flowering stem and the number of glandular trichomes per leaf (Table 2). It appeared that I. montana plants from the Apt habitat showed a significantly shortened floral stem (≈-12%) and fewer leaf glandular trichomes (≈-23%) in comparison with the other sites (Murs and Bonnieux).
Dry and fresh weights increased from spring to summer and then decreased until autumn (Fig. 4A,B). When comparing the three I. montana habitats in terms of plant dry weight, they showed no difference during the overall season (Fig. 4A). Similarly, all plants showed essentially the same water content until late spring (Fig. 4C). Then, Apt plants displayed a water content of over 70% during the remaining seasons, while plants from the two low-elevation locations stayed below 65% (Bonnieux showed the lowest value in the summer: 55%).
Both indicators from chlorophyll a fluorescence measurements (the maximum quantum yield of primary photosystem II photochemistry and the performance index) showed slight decreases during the summer regardless of the geographic location of the plants (Fig. 5). The overall values then increased until autumn, but they did not return to their initial levels. In addition, I. montana plants from the Apt population showed higher Fv/Fm and PI values during the whole growing period than did plants in the two low-altitude habitats.
Phytochemical contents of I. montana according to geographic location
The amounts of total polyphenols and their flavonoid subclass during the overall period did not differ among the three habitats, with the exception of a lower level of polyphenols during late-spring for the plants in Bonnieux (Fig. 6A). For the three habitats, the total polyphenol level was 49% lower in autumn than in the early spring. The total flavonoids (Fig. 6B) showed an average increase of 56% from early spring to summer (higher level) but then decreased drastically thereafter (-68%).
We conducted high-performance liquid chromatography analysis on the leaves of the I. montana plants. The chromatograms (Fig. 7) showed 10 major peaks, in which we recently identified (by the external standard method) 5 sesquiterpene lactones [START_REF] Garayev | New sesquiterpene acid and inositol derivatives from Inula montana L[END_REF] respectively artemorin (p1), 9B-hydroxycostunolide (p2), reynosin (p3), santamarine (p5) and costunolide (p10). Other peaks were determined to be a mix of two flavonoids (Chrysosplenol C and 6-hydroxykaempferol 3,7-dimethyl ether; p4) and four inositol derivatives (myoinositol-1,5- diangelate-4,6-diacetate, myoinositol-1,6-diangelate-4,5-diacetate, myoinositol-1-angelate-4,5-diacetate-6-(2-methylbutyrate), myoinositol-1-angelate-4,5-diacetate-6-isovalerate; p6 to p9). The cross-location and cross-time relative quantification of the 5 sesquiterpene lactones (Table 3) suggested that I. montana plants from the low-altitude Murs and Bonnieux populations contained approximately three times more phytochemicals than plants from Apt. The data also showed that p1, p3 and p5 tended to accumulate throughout the seasons, unlike p2 and p10, which decreased. Lastly, p10 appeared to be the most abundant compound (roughly 50% more than the other lactones) regardless of the location or season.
Discussion
4.1. Inula montana morphology
I. montana plants exhibited shorter floral stem length and reduced leaf surface at high altitude (Apt; Table 2 and Fig. 3A). This is consistent with the tendency of many plants to shorten their organs during winter (Åström et al., 2015) or at high elevation due to low temperatures and strong wind speeds, as shown previously in three Asteraceae species [START_REF] Yuliani | The relationship between habitat altitude, enviromental factors and morphological characteristics of Pluchea indica, Ageratum conyzoides and Elephantopus scaber[END_REF]. This behavior limits dehydration and ameliorates the photosynthetic conditions by setting plant organs closer to the warmer soil surface [START_REF] Cabrera | Effects of temperature on photosynthesis of two morphologically contrasting plant species along an altitudinal gradient in the tropical high Andes[END_REF]. The seasonal modification of the leaf morphology has also been shown to optimize photosynthetic capacity (Åström et al., 2015). The slightly lower nutrient availability at Apt (Table 1) may also contribute to the smaller organ sizes. In addition, the leaf surface of I. montana remained stable during the hot period at Apt, whereas it decreased at low altitude (Fig. 3B). This result is correlated with both the higher temperature and the drought period present at the low-elevation sites. Taken together, the data suggest that the plant morphological response is clearly adapted to both the climate and the location.
Inula montana displays two different trichome types on its leaves: hairy and glandular. Trichomes are well described as being plastic and efficient plant weapons against herbivory, notably through their high contents of protective secondary metabolites. Insect feeding can modify both the density and the content of trichomes [START_REF] Tian | Role of trichomes in defense against herbivores: comparison of herbivore response to woolly and hairless trichome mutants in tomato (Solanum lycopersicum)[END_REF]. Abiotic factors also strongly influence plant hairs; for example dry conditions, high temperatures or high irradiation can increase the number of trichomes per unit leaf area [START_REF] Pérez-Estrada | Variation in leaf trichomes of Wigandia urens: environmental factors and physiological consequences[END_REF]. Conversely, trichome density decreases in the shade or in well-irrigated plants. In this context, water availability in the plant environment is an integrative factor [START_REF] Picotte | Temporal variation in moisture availability: consequences for water use efficiency and plant performance[END_REF]. Our data are consistent with this model, since plants from the Apt habitat (showing the highest altitude and precipitation but the lowest temperatures) displayed fewer glandular trichomes on their leaves than either of the other growing sites that suffered from drought periods (Table 2). These results also indicate that I. montana undergoes a stronger or at least a different type of stress at low altitude.
Inula montana physiology
It appeared that I. montana biomass increased from early spring to summer but then decreased, consistent with the hemicryptophytic strategy of this plant (dry weight, Fig. 4A). The location of the I. montana habitats had no effect on the dry weight but significantly influenced the plant water content, which markedly decreased during the summer at low elevation (Murs and Bonnieux; Fig. 4C). This is consistent with the expectation that low-elevation regions in the Mediterranean area would be hotter and drier than high-altitude regions, leading to more stressful conditions for plants [START_REF] Kofidis | Combined effects of altitude and season on leaf characteristics of Clinopodium vulgare L. (Labiatae)[END_REF][START_REF] Wolfe | Adaptation to spring heat and drought in northeastern Spanish Arabidopsis thaliana[END_REF]. The absence of an effect on the dry weight of I. montana here illustrates the strong variability of this xerophilous plant, namely its capacity to grow in various habitats and its ability to resist drought. Chlorophyll a fluorescence has been described as an accurate indicator of the plant response to environmental fluctuations and biotic stress [START_REF] Murchie | Chlorophyll fluorescence analysis: a guide to good practice and understanding some new applications[END_REF][START_REF] Guidi | Non-invasive tools to estimate stress-induced changes in photosynthetic performance in plants inhabiting Mediterranean areas[END_REF] and has gained interest in ecophysiological studies [START_REF] Åström | Morphological characteristics and photosynthetic capacity of Fragaria vesca L. winter and summer leaves[END_REF][START_REF] Perera-Castro | Light response in alpine species: different patterns of physiological plasticity[END_REF]. The maximum photochemical quantum yield of PSII (Fv/Fm) and the performance index (PI) reflect photooxidative stress and plant fitness [START_REF] Strasser | The fluorescence transient as a tool to characterize and screen photosynthetic samples[END_REF]. Fv/Fm values usually vary from 0.75 to 0.85 for non-stressed plants. Any decrease indicates a stressful situation, reducing the photosynthetic potential [START_REF] Maxwell | Chlorophyll fluorescence -a practical guide[END_REF]. In the Mediterranean climate, plant photoinhibition frequently occurs [START_REF] Guidi | Non-invasive tools to estimate stress-induced changes in photosynthetic performance in plants inhabiting Mediterranean areas[END_REF]. Below a certain limit of solar radiation, this protective mechanism allows the dissipation of excessive photosynthetic energy as heat [START_REF] Dos Santos | Seasonal variations of photosynthesis gas exchange, quantum efficiency of photosystem II and biochemical responses of Jatropha curcas L. grown in semi-humid and semi-arid areas subject to water stress[END_REF]. Here, both of the indicators (Fv/Fm and PI) displayed lower values at the low-elevation sites (Murs and Bonnieux; Fig. 5), confirming that I. montana was subjected to greater stress there. These results are in agreement with the observed drought periods at those sites and reflect the adaptive response of the plants to avoid photodamage under high temperature and drought stress in order to preserve their photosynthetic apparatus [START_REF] Poiroux-Gonord | Metabolism in orange fruits is driven by photooxidative stress in the leaves[END_REF]. It is not possible to easily correlate these results to the solar radiation because no difference was observed among the 3 habitats, as described above (Fig. 2). However, a similar study that focused on the combined effects of altitude and season on Clinopodium vulgare highlighted a decrease in Fv/Fm values in lowland populations at the beginning of a drought period [START_REF] Kofidis | Combined effects of altitude and season on leaf characteristics of Clinopodium vulgare L. (Labiatae)[END_REF].
Secondary metabolites
Plant secondary metabolites are well known for accumulating in response to environmental conditions that induce oxidative stress. Many studies have proposed that polyphenols might play a protective anti-oxidative role in plants [START_REF] Bartwal | Role of secondary metabolites and brassinosteroids in plant defense against environmental stresses[END_REF][START_REF] Bautista | Environmentally induced changes in antioxidant phenolic compounds levels in wild plants[END_REF]. Consequently, phenolics and other secondary metabolites usually accumulate under drought stress, salt stress [START_REF] Adnan | Desmostachya bipinnata manages photosynthesis and oxidative stress at moderate salinity[END_REF], high or low temperatures and at high altitude; this is exacerbated in Mediterranean plants [START_REF] Kofidis | Combined effects of altitude and season on leaf characteristics of Clinopodium vulgare L. (Labiatae)[END_REF][START_REF] Scognamiglio | Chemical composition and seasonality of aromatic Mediterranean plant species by NMR-Based Metabolomics[END_REF]. Phenology and plant development also strongly influence the concentrations of phenolic compounds [START_REF] Radušienė | Effect of external and internal factors on secondary metabolites accumulation in St. John's Worth[END_REF]. Our data showed that the I. montana leaf total polyphenol and flavonoid contents both varied over the season and reached their maximum value during late spring and summer (Fig. 6). This physiological behavior follows the seasonal solar radiation profile (Fig. 2) and is consistent with the well-described photoprotective role of polyphenols [START_REF] Agati | Multiple functional roles of flavonoids in photoprotection[END_REF]. It has long been known that the quantity of solar radiation increases with altitude [START_REF] Spitaler | Altitudinal variation of secondary metabolite profiles in flowering heads of Arnica montana cv[END_REF]. Accordingly, we expected a higher content of phenolics at high elevation due to the higher irradiance, including UV. However, the cloudier weather at the Apt site appeared to compensate for the theoretical 3% difference in sunshine between that site and the 2 other I. montana habitats (Fig. 2). As such, our results cannot explain the low polyphenol content observed in late spring at Bonnieux. Either way, it appears that the stress perceived by I. montana plants at low altitude is not due to the simple variation in solar radiation but rather to a significant susceptibility to drought stress and/or high temperatures.
Sesquiterpenes are an important group of organic compounds released by plants and are characteristic of the family Asteraceae. Most of them are volatile molecules used as hormones and for functions such as communication and defense against herbivory [START_REF] Rodriguez-Saona | The role of volatiles in plant-plant interactions[END_REF][START_REF] Chadwick | Sesquiterpenoids lactones: benefits to plants and people[END_REF]. In this work we have identified 5 sesquiterpene lactones that tend to accumulate in higher amounts in low-elevation habitats (Table 3). These compounds also showed quantities that were positively or negatively correlated with the seasonal progression. Sesquiterpene lactones are well described to follow a seasonal pattern and to accumulate in response to biotic and abiotic stresses [START_REF] Chadwick | Sesquiterpenoids lactones: benefits to plants and people[END_REF][START_REF] Sampaio | Effect of the environment on the secondary metabolic profile of Tithonia diversifolia: a model for environmental metabolomics of plants[END_REF]. Since these compounds play essential roles in the plant defense response, their accumulation under abiotic stress is consistent with the carbon balance theory, which states that the investment in plant defense increases in response to a growth limitation [START_REF] Mooney | Response of Plants to Multiple Stresses[END_REF]. However, in Arnica montana, no positive correlation between the production of these molecules and altitude was found [START_REF] Spitaler | Altitudinal variation of secondary metabolite profiles in flowering heads of Arnica montana cv[END_REF]. In addition, plant terpenoid release has been reported to be modulated by temperature, drought and UV radiation [START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF].
The topsoil at the low altitude sites (Murs and Bonnieux) was significantly richer in sand than that at Apt (Table 1). This confers a high draining capacity to the low-elevation sites that would inevitably increase the water deficiency and contribute to the drought stress perceived by the plants. Last, our data from aqueous extracts highlighted a slightly lower topsoil nutrient content at the Apt site (Table 1). Although the literature on this topic is scarce, and no information is available concerning N and K, some soil nutrients (namely P, Cu and Ca) can influence the plant sesquiterpene lactone content [START_REF] Foster | Influence of cultivation site on sesquiterpene lactone composition of forage chicory (Cichorium intybus L.)[END_REF][START_REF] Sampaio | Effect of the environment on the secondary metabolic profile of Tithonia diversifolia: a model for environmental metabolomics of plants[END_REF]. More broadly, deficiencies in nitrogen have been described to induce the accumulation of plant phenylpropanoids [START_REF] Ramakrishna | Influence of abiotic stress signals on secondary metabolites in plants[END_REF]. Apt also showed the lowest Ca content and pH. Although the values were only slightly higher at the two other sites, they may globally contribute to decreasing the availability of topsoil cations. We cannot exclude the possibility that this would also contribute to the stress on the plants at these locations, but it is not easy to make a connection with the plant phytochemical production.
Conclusion
The morpho-physiological characteristics of I. montana showed that the plant undergoes higher stress at its lower-altitude growing sites (Murs and Bonnieux). Four plant and environmental variables (chlorophyll fluorescence, plant water content, climate and topsoil draining capacity) all converged to highlight the site water availability as the primary source of stress. In addition, the sesquiterpene lactone production by I. montana was higher at these low-elevation stress-inducing habitats.
The overall data are summarized in the principal component analysis (Fig. 8). The I. montana growing location (dimension 1) and the seasons (dimension 2) encompass more than 76% of the total variability, and the location itself exceeds 50%. The map confirms that plant stress (expressed as water content or Fv/Fm) and the subsequent release of sesquiterpene lactones (including 4 of the 5 compounds) are correlated to the integrative altitude parameter. The individual factor map (B) clearly discriminates the I. montana growing locations from the seasons and highlights the interaction of these two factors.
Dissecting the manner in which molecules of interest fluctuate in plants (in response to biotic and abiotic stress) is of great interest scientifically and economically [START_REF] Pavarini | Exogenous influences on plant secondary metabolite levels[END_REF]. The present study shows that growing habitats that induce plant stress, particularly drought stress, can significantly enhance the production of sesquiterpene lactones by I. montana. Similar approaches have been conducted with A. montana [START_REF] Spitaler | Altitudinal variation of secondary metabolite profiles in flowering heads of Arnica montana cv[END_REF][START_REF] Perry | Sesquiterpene lactones in Arnica montana: helenalin and dihydrohelenalin chemotypes in Spain[END_REF][START_REF] Clauser | Differences in the chemical composition of Arnica montana flowers from wild populations of north Italy[END_REF] and have provided valuable information and cultivation guidelines that helped with its domestication [START_REF] Jurkiewicz | Optimization of culture conditions of Arnica montana L.: effects of mycorrhizal fungi and competing plants[END_REF][START_REF] Sugier | Propagation and introduction of Arnica montana L. into cultivation: a step to reduce the pressure on endangered and highvalued medicinal plant species[END_REF]. Appropriate cultivation techniques driven by the ecophysiological study of A. montana have succeeded in influencing its sesquiterpene lactone content for medicinal use [START_REF] Todorova | Developmental and environmental effects on sesquiterpene lactones in cultivated Arnica montana L[END_REF]. The manipulation of environmental stress has also been described to significantly promote the phytochemical (phenolic) content of lettuce [START_REF] Oh | Environmental stresses induce healthpromoting phytochemicals in lettuce[END_REF] and halophytes [START_REF] Slama | Water deficit stress applied only or combined with salinity affects physiological parameters and antioxidant capacity in Sesuvium portulacastrum[END_REF][START_REF] Adnan | Desmostachya bipinnata manages photosynthesis and oxidative stress at moderate salinity[END_REF]. Literature regarding I. montana is very sparse. The present results bode well for our ongoing field-work that aims to simulate and test environmental levers to augment the secondary metabolism and to develop innovative culture methods for I. montana.
Our data also illustrate the high morpho-physiological variability of this calcicolous plant. High-altitude habitat appears to primarily impact the morphology of the plant, while low-elevation sites mostly induce physiological responses to stress (chlorophyll fluorescence, phytochemical synthesis). I. montana appears to grow well on south-facing sites possessing poor topsoil and low nutrient availability. It is also able to face high temperature and altitude gradients and to grow well on draining soil under a climate that induces drought stress.
Table 3. Cross-location and cross-time relative quantification of the 5 sesquiterpene lactones found in Inula montana leaves. "-" indicates the absence of the molecule, and "+", "++", and "+++" indicate its relative increasing abundance.
Fig. 1. Annual climographs (20 years of averaged data) of the three Inula montana study sites. The black line represents the mean temperature, and the hatched area represents the mean precipitation per month (rainfall or snow). Drought periods are symbolized by an asterisk.
Fig. 2. Monthly means of satellite-based global solar irradiation as measured on the horizontal plane at ground level (GHI). The solid lines (GHI) represent the actual terrestrial solar radiation; the dashed lines (Clear-Sky GHI) estimate the irradiation under a cloudless sky. The data represent the means of 3 years of records (2013-2015) for the 3 Inula montana populations (spatial resolution was 3-8 km). The asterisks indicate significant differences between sites at p < 0.05.
Fig. 3. Mean leaf blade surface area (A) and number of leaves (B) of Inula montana plants according to the geographic location and seasonal progress. The data represent the mean values of 10 plants ± standard error. The lowercase letters represent significant differences at p < 0.05.
Fig. 5. Effect of the geographic location on Inula montana photosystem II fluorescence. The data represent the mean Fv/Fm (A) and mean PI (B) values of 10 plants ± standard error. The lowercase letters represent significant differences at p < 0.05.
Fig. 6. Effect of the geographic location on Inula montana phytochemical contents. The data represent the contents of total polyphenols (A) and total flavonoids (B). The data represent the mean values of 10 plants ± standard error. The lowercase letters represent significant differences at p < 0.05.
Fig. 7. HPLC chromatograms of Inula montana leaves harvested during the summer, according to the plant geographic location. S.l.: sesquiterpene lactone; Fl.: flavonoid; In.: inositol. Peaks were identified according to Garayev et al. (2017).
Table 1. Pedoclimatic characterization of Inula montana habitats. Exp. Δ: expected theoretical range of element concentrations for a standard agricultural parcel.
Murs Bonnieux Apt
Climate
Mean temperature (°C) 11.0 12.1 7.7
Air moisture (%) 66.5 69.7 54.4
Annual precipitation (mm) 774 702 928
Topsoil
Composition (%) & texture Sandy loam Sandy clayey loam Clayey loam
Organic matter (from aqueous extract) 5.01 4.98 5.00
Clay 6.4 16.1 18.0
Sand 37.3 20.4 3.1
Silt 56.3 63.5 78.8
Macro-and microelements (mg/kg from aqueous extract) Exp. Δ
pH 8.3 7.8 7.7
NH 4 3.96 2.56 2.09 4.0-8.0
NO 3 3.79 3.14 1.29 4.0-8.0
K 5.4 5.4 2.7 40-80
PO 4 0.2 0.2 0.2 15-25
Mg 2.5 2.5 3.3 20-40
Ca 96.2 71.8 68.7 100-200
Fe 0.33 0.07 0.14 8.0-12.0
Cu 0.01 0.01 0.01 0.30-0.50
Mn 0.13 0.14 0.08 0.30-0.50
Zn 0.09 0.08 0.07 0.30-0.50
Bo 0.51 0.49 0.53 1.0-2.0
Acknowledgments
This work was supported by the French region Provence-Alpes-Côte d'Azur (project n°2013_13403), the Luberon Regional Natural Park and the TERSYS Research Federation of the University of Avignon. We thank Prof. Vincent Valles (Avignon University) for his advice on the statistics. We thank Didier Morisot, collections manager of the plant garden of the Faculty of Medicine of the University of Montpellier, for the I. montana identification. | 44,866 | [
"15455",
"173383",
"171075",
"172844",
"172667"
] | [
"31878",
"31878",
"188653",
"188653",
"188653",
"514340",
"188653",
"31878",
"31878",
"31878"
] |
01765113 | en | [
"info"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01765113/file/ROADEF2018_paper_060.pdf | Olivier Briant
email: [email protected]
Hadrien Cambazard
email: [email protected]
Diego Cattaruzza
email: [email protected]
Nicolas Catusse
email: [email protected]
Anne-Laure Ladier
email: [email protected]
Maxime Ogier
email: [email protected]
A column generation based approach for the joint order batching and picker routing problem
Keywords: order batching, picker routing, column generation
Introduction
Picking is the process of retrieving products from the inventory and is often considered a very expensive operation in warehouse management. A set of pickers perform routes into the warehouse, pushing a trolley and collecting items to prepare customer orders. However, customer orders usually do not fit the capacity of a trolley. They are typically grouped into batches, or on the contrary divided into subsets, with the aim of collecting all the orders by minimizing the walked distance. This problem is known as the joint order batching and picker routing problem [START_REF] Cristiano | Optimally solving the joint order batching and picker routing problem[END_REF].
This work presents an exponential linear programming formulation where variables, or columns, are related to single picking routes in the warehouse. More precisely, a column refers to a route involving a set of picking operations and satisfying the side constraints required at the trolley level, such as the mixing of orders or the capacity. Computing such a picking route is an intractable routing problem in general and, depending on the warehouse layout, can closely relate to the traveling salesman problem (TSP). The rationale of our approach is, however, to consider that the picking problem alone, in real-life warehouses, is easy enough in practice to be solved exactly. We apply this approach to two different industrial benchmarks, based on different warehouse layouts.
Problem specification and industrial applications
The warehouse layout is modeled as a directed graph G = (V, A) with two types of vertices, locations and intersections. Locations contain one or more product references to be picked.
Two typical examples of warehouse layouts are used as benchmarks in the present work.
- A regular rectangular layout made of vertical aisles and horizontal cross-aisles. Such a layout has been used by numerous authors in the past [START_REF] De Koster | Design and control of warehouse order picking : A literature review[END_REF] to define the order picking problem. It is the setup of the Walmart benchmark.
- An acyclic layout where pickers are not allowed to backtrack. It is another typical industrial setup where the flow is constrained in a single direction and an aisle must be entered and exited on the same side. It is the setup of the HappyChic benchmark.
Each product reference p ∈ P is characterized by its location in the warehouse and its size V^w_p in each dimension w ∈ W. A product reference may have several dimensions such as weight and volume and we refer to the set of dimensions as W.
An order from a customer is defined as a set of order lines. An order line l ∈ L is related to an order o and defined as a pair (p_l, Q_l) where p_l ∈ P is a product reference and Q_l is the number of items to pick. An order o ∈ O is a set of order lines L_o ⊆ L. Moreover, an order can be split into at most M_o boxes.
Order lines are collected by trolleys, each carrying a set of B boxes. A box has a capacity V^w in dimension w ∈ W and an order line can be assigned to several boxes (the quantity Q_l of an order line l can be split among several boxes). A box is therefore filled with partial order lines. A partial order line l is a pair (p_l, Ql) with Ql ≤ Q_l. A box can only contain partial order lines from a single order.
A solution is a collection of routes R in the warehouse layout G. Each route r is travelled by a trolley which collects partial order lines into its boxes. The capacities of the boxes must be satisfied in each dimension w ∈ W. An order o ∈ O cannot be assigned to more than M_o boxes. Finally, all order lines must be picked with the required number of items. The objective is to minimize the total distance needed to perform all the routes in R.
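A minimal way to hold the instance data just described, shown only to make the notation concrete (hypothetical names, not taken from the authors' code):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ProductRef:
    location: int              # location vertex of G where the reference is stored
    size: Dict[str, float]     # V^w_p for each dimension w in W (e.g. weight, volume)

@dataclass
class OrderLine:
    product: ProductRef
    quantity: int              # Q_l, number of items to pick

@dataclass
class Order:
    lines: List[OrderLine]     # L_o
    max_boxes: int             # M_o, maximum number of boxes the order may be split into

BOX_CAPACITY: Dict[str, float] = {"weight": 12.0, "volume": 40.0}  # V^w per box (illustrative)
```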
The two industrial cases addressed in the present work, from the Walmart and HappyChic, differ slightly. In particular, for the Walmart case, only one dimension is considered for a box, representing the maximum number of items in a box. Additionally, an order must be picked entirely by a single trolley.
A column generation based approach
In the industrial case of HappyChic, the picking takes place on an acyclic graph, so the routing boils down to an easy path problem. In Walmart's case, warehouses have the regular rectangular structure made of aisles and cross-aisles. In that case, dynamic programming algorithms can take advantage of that structure to efficiently solve the corresponding TSP when the warehouse contains up to eight cross-aisles, which is beyond the size of most real-life warehouses [START_REF] Cambazard | Fixed-Parameter Algorithms for Rectilinear Steiner tree and Rectilinear Traveling Salesman Problem in the plane[END_REF]. We therefore assume that in both cases, an efficient oracle is available to provide optimal picking routes in the warehouse.
We show that such an oracle allows for a very effective exponential LP formulation of the joint order batching and picking problem. The pricing problem can be seen as a prize-collecting TSP with a capacity constraint and the pricing algorithm heavily relies on the picking oracle to generate cutting planes. A number of improvements are proposed to speed up the pricing. In particular, a procedure to strengthen the cutting planes is given when the distance function for the considered set of orders is submodular. For the industrial case of HappyChic, the graph is acyclic, so it is possible to propose a polynomial set of constraints to exactly calculate the distance, instead of generating cutting planes.
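Schematically, the master/pricing structure described above can be written as a generic set-covering linear program. This is only a sketch of the standard pattern, not the authors' exact model: the box capacities, the single-order-per-box rule and the M_o splitting limits are enforced when a column is generated and do not appear explicitly here.

```latex
\min \sum_{r \in \mathcal{R}} d_r \, \lambda_r
\quad \text{s.t.} \quad
\sum_{r \in \mathcal{R}} q_{lr} \, \lambda_r \ge Q_l \quad \forall l \in \mathcal{L},
\qquad \lambda_r \ge 0 \quad \forall r \in \mathcal{R},
```

where d_r is the walked distance of route r and q_{lr} the number of items of order line l collected by r. Given dual prices π_l, the pricing oracle searches for a feasible trolley route of negative reduced cost d_r − Σ_l q_{lr} π_l, which is exactly the prize-collecting TSP flavour mentioned above.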
The proposed formulation is compared experimentally on Walmart's benchmark and proves to be very effective, improving many of the best known solutions and providing very strong lower bounds. Finally, this approach is also applied to the HappyChic case, demonstrating its generality and its interest for this application domain.
"8779",
"923391",
"170732",
"938010",
"2273",
"15837"
] | [
"1041931",
"1041932",
"410272",
"433076",
"1041932",
"145304",
"410272",
"433076"
] |
01546357 | en | [
"chim"
] | 2024/03/05 22:32:13 | 2004 | https://hal.science/hal-01546357v2/file/Jacques%20-%2028th%20ICACC%20-%20CB-S4-55%20for%20HALnew.pdf | S Jacques
B Bonnetot
M.-P Berthet
H Vincent
BN interphase processed by LP-CVD from tris(dimethylamino)borane and characterized using SiC/SiC minicomposites
SiC/BN/SiC 1D minicomposites were produced by infiltration of a Hi-Nicalon (from Nippon Carbon, Japan) fiber tow in a Low Pressure Chemical Vapor Deposition reactor.
Tris(dimethylamino)borane was used as a halogenide-free precursor for the BN interphase processing. This precursor prevents fiber and CVD apparatus from chemical damage. FT-IR and XPS analyses have confirmed the boron nitride nature of the films. Minicomposite tensile tests with unload-reload cycles have shown that the minicomposite mechanical properties are good with a high interfacial shear stress. Transmission electron microscopy observation of the interphase reveals that it is made of an anisotropic turbostratic material.
Furthermore, the fiber/matrix debonding, which occurs during mechanical loading, is located within the BN interphase itself.
INTRODUCTION
In SiC/SiC type ceramic matrix composites, a good toughness can be achieved by adding between the fiber and the brittle matrix a thin film of a compliant material called "interphase" [START_REF] Evans | The physics and mechanics of fibre-reinforced brittle matrix composites[END_REF]. Anisotropic pyrolytic boron nitride obtained from BF3/NH3/H2 mixture can play such a role. However, its processing by LP-CVD (Low Pressure Chemical Vapor Deposition) from BF3 requires protecting the fiber from gaseous chemical attack [START_REF] Rebillat | Oxidation resistance of SiC/SiC minicomposites with a highly crystallised BN interphase[END_REF] [START_REF] Jacques | SiC/SiC minicomposites with structure-graded BN interphases[END_REF]. Furthermore, the CVD apparatus is quickly deteriorated by the aggressive halogenated gases and expensive maintenance is needed. On the other hand, some authors have reported the use of a halogenide-free precursor: B[N(CH3)2]3 (tris(dimethylamino)borane, TDMAB) for CVD semiconductor h-BN film processing [START_REF] Dumont | Deposition and characterization of BN/Si(0 0 1) using tris(dimethylamino)borane[END_REF].
The aim of the present contribution was to prepare within one-dimensional minicomposites a BN interphase from TDMAB and to characterize this interphase and the properties of these SiC/BN/SiC minicomposites.
EXPERIMENTAL
SiC/BN/SiC minicomposites were produced by infiltration of the BN interphase within a Hi-Nicalon (from Nippon Carbon, Japan) fiber tow by LP-CVD in a horizontal hot-wall reactor (inner diameter: 24 mm) at a temperature close to 1100°C during 90 seconds. TDMAB vapor was carried by hydrogen through a bubbler at 30°C (TDMAB is liquid at this temperature and the vapor pressure is 780 Pa). The H2 gas flow rate was 15 sccm. NH3 was added to the gaseous source with a flow rate of 100 sccm in order to enhance nitrogen source and favor amine group stripping from the precursor and carbon suppression in the coating. A BN film was also deposited with the same conditions on a Si wafer for Fourier transform infrared (FT-IR) spectroscopy (Nicolet spectrometer, Model MAGNA 550, USA) and X-ray photoelectron spectroscopy (XPS) analyses (SSI model 301 spectrometer). The SiC matrix was classically infiltrated in the fiber tow from CH3SiCl3/H2 precursor gases at 950°C in a second LP-CVD reactor. In both cases, the total gas pressure in the reactors was as low as 2 kPa in order to favor infiltration homogeneity within the fiber tows.
The interphase thickness was about 150 nm and the fiber volume fraction was about 40 % (measured by weighing). The minicomposites were tensile tested at room temperature with unload-reload cycles using a machine (MTS Systems, Synergie 400, USA) equipped with a 2 kN load cell. The minicomposite ends were glued with an epoxy resin (Lam Plan, ref 607, France) in metallic tubes separated by 40 mm that were then gripped into the testing machine jaws. The crosshead speed was 0.05 mm/min. The strain was measured with an extensometer (MTS, model 634.11F54, USA) directly gripped on the minicomposite itself.
The extensometer gauge length was 25 mm. The total number of matrix cracks was verified by optical microscopy on polished longitudinal sections of the failed minicomposites after chemical etching (Murakami reactant) in order to reveal the matrix microcracks which were closed during unloading. The interfacial shear stress was then estimated from the last hysteresis loop recorded before failure by following the method described in reference [START_REF] Lamon | Microcomposite test procedure for evaluating the interface properties of ceramic matrix composites[END_REF].
Thin longitudinal sections of minicomposites were studied by transmission electron microscopy (TEM: Topcon 002B, Japan) after tensile test using bright-field (BF), high resolution (HR) and selected area electron diffraction (SAED) techniques. The samples were embedded in a ceramic cement (CERAMABOND 503, Aremco Products Inc., USA) and mechanically thinned. The thin sheets (~60 µm in thickness) were then ion-milled (GATAN PIPS, USA) to electron transparency.
RESULTS AND DISCUSSION
Only two absorption bands are seen on the transmittance FT-IR spectra, at 810 cm⁻¹ and 1380 cm⁻¹, typical of h-BN; OH bonds are not detected (Fig. 1).
At the film surface, the B/N atomic concentration ratio determined by XPS is close to one.
After ionic etching, the carbon content due to surface pollution decreases drastically; the nitrogen deficit is due to a preferential etching (Fig. 2). Both analyses confirm the BN nature of the films.
Figure 3 displays a typical force-strain curve for SiC/BN/SiC minicomposites. 588 matrix cracks were detected after failure along the 25 mm gauge length. The composites exhibit a non-brittle behavior: a non-linear domain evidencing matrix microcracking and fibre/matrix debonding follows the initial linear elastic region up to a high force at failure (170 N).
Therefore, (i) the BN interphase acts as a mechanical fuse and (ii) the Hi-Nicalon fibers were not damaged during the BN interphase processing from TDMAB. Furthermore, the calculated interfacial shear stress is 230 MPa. This value corresponds to a good load transfer between the matrix and the fibers and is as high as the best values obtained with BN interphases processed from classical halogenated gases [3] [6].
TEM observation of minicomposite pieces after failure (Fig. 4) shows that the matrix cracks deflections are preferentially localized within the BN interphase. Figure 4.a exhibits a thin matrix microcrack with a small opening displacement that is stopped within the interphase before reaching the fiber. In figure 4.b, a larger matrix crack which has been widened by the ion-milling is observed. In that case, some BN material remains bonded on both the fiber and the matrix. Thus, neither the interface with the fiber as in reference [START_REF] Naslain | Boron nitride interphase in ceramic-matrix composites[END_REF] nor the interface with the matrix is a weak link. The role of mechanical fuse is played by the boron nitride interphase itself. This feature agrees with the good interfacial shear stress measured for these minicomposites and corresponds to a strong fiber bonding characterized by a high strength and a high toughness [START_REF] Droillard | Strong interface in CMCs, a condition for efficient multilayered interphases[END_REF].
In figure 4.c, a crack is observed within the interphase. A higher magnification in HR mode (Fig. 4.e) reveals that the orientation of the 002 BN planes seems to influence the crack path: the crack and the lattice fringes have the same curvature. Furthermore, the existence of two distinct BN 002 diffraction arcs in the SAED pattern (Fig. 4.d) is due to a preferential orientation of the 002 planes parallel to the fiber axis. This structural anisotropy promotes the mode II crack propagation observed in the interphase.
CONCLUSION
A BN interphase was processed by LP-CVD within SiC/SiC minicomposites from tris(dimethylamino)borane, a halide-free precursor. The structure of the BN material is anisotropic and allows the matrix cracks to be deflected during mechanical damage. This interphase is strongly bonded to the fiber and plays the role of a mechanical fuse. The good mechanical properties of the composites make TDMAB a promising alternative to the classical aggressive halogenated gases for LP-CVD boron nitride interphase processing.
Figure 1: Transmittance FT-IR spectra of the BN film.
Figure 2: XPS depth atomic concentration profiles for the BN film (sputter rate: 1 - 4 nm/min).
Figure 3: Tensile force-strain curve with unload-reload cycles for the minicomposites (for clarity only a few hysteresis loops are represented).
Figure 4: TEM observation of the SiC/BN/SiC minicomposite according to the BF mode (a), (b) and (c), the SAED technique (negative pattern of the Hi-Nicalon fiber and the interphase) (d) and the HR mode (e).
ACKNOWLEDGEMENT
The authors are grateful to G. Guimon from LPCM (University of Pau, France) for XPS analysis. | 9,191 | [
"184715"
] | [
"752",
"752",
"752",
"752"
] |
00176521 | en | [
"math"
] | 2024/03/05 22:32:13 | 2002 | https://hal.science/hal-00176521/file/DIE-sept02.pdf | Cyril Imbert
SOME REGULARITY RESULTS FOR ANISOTROPIC MOTION OF FRONTS
Keywords: AMS Subject Classifications: 35A21, 35B65, 35D99, 35J60, 35K55, 35R35. Partially supported by the TMR "Viscosity solutions and their applications".
We study the regularity of propagating fronts whose motion is anisotropic. We prove that there is at most one normal direction at each point of the front; as an application, we prove that convex fronts are C 1,1 . These results are by-products of some necessary conditions for viscosity solutions of quasilinear elliptic equations. These conditions are of independent interest; for instance they imply some regularity for viscosity solutions of nondegenerate quasilinear elliptic equations.
Introduction
Following [START_REF] Bellettini | Anisotropic motion by mean curvature in the context of Finsler geometry[END_REF][START_REF] Nochetto | Numerical analysis of geometric motion of fronts[END_REF], we study propagating fronts whose velocity field v Φ is given by the following geometric law:
v Φ = (κ Φ + g)n Φ ,
where n Φ and κ Φ are respectively the inward normal direction and the mean curvature associated with a Finsler metric Φ; g denotes a possible (bounded) driving force.
The main result of this paper states that under appropriate assumptions, there is at most one (outward or inward) "normal direction" at each point of the front.
In order to define the front past singularities, we use the level-set approach initiated by Barles [START_REF] Barles | Remark on a flame propagation model[END_REF] and developed by Osher and Sethian [START_REF] Osher | Fronts moving with curvature dependent speed: Algorithms based on hamilton-jacobi equations[END_REF]. This approach consists in describing the front Γ t at time t as the zero level-set of a (continuous or discontinuous) function u: Γ t = {x : u(x, t) = 0}. Choosing first a continuous function u 0 such that the initial front Γ 0 coincides with {x : u 0 (x) = 0} (consider for instance the signed distance function to Γ 0 ), u turns out to be a solution of the following Hamilton-Jacobi equation:
∂u/∂t − Φ°(Du, x) tr[D ζζ Φ°(Du, x) D²u] + ⟨D ζ Φ°(Du, x), Du/|Du|⟩ + tr[D ζx Φ°(Du, x)] + g(Du, x, t) = 0,   (1.1)
where Du and D²u denote the first and second derivatives in x of the function u and Φ° denotes the dual metric associated with Φ. This equation is known as the anisotropic mean curvature equation. It is solved by using viscosity solutions [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF]. The function u depends on the choice of u 0 , but not the front Γ t , nor the two families of sets O t = {x : u(x, t) > 0} and I t = {x : u(x, t) < 0} [START_REF] Evans | Motion of level sets by mean curvature, I[END_REF][START_REF] Gang | Uniqueness and existence of viscosity solutions of generalized mean curvature flow equations[END_REF][START_REF] Ishii | Generalized motion of noncompact hypersurfaces with velocity having arbitrary growth on the curvature tensor[END_REF]. The definition of the front is therefore consistent and the notions of "outside" and "inside" become precise.
The study of the normal directions reduces to the study of the semi-jets of discontinuous semisolutions of (1.1). This latter study is pursued by using necessary conditions derived for viscosity solutions of degenerate elliptic and parabolic quasilinear equations. These conditions are also of independent interest: for instance, we derive from them regularity of viscosity solutions of nondegenerate quasilinear elliptic and parabolic equations.
The paper is organized as follows. In Section 2, we first give assumptions and recall definitions that are used in the paper. In particular, the Finsler metric and its dual are introduced and the definition of normal directions and semijets are recalled. In Section 3, we state and prove our main results (Theorem 1 and Corollary 1). Eventually, in Section 4, we present the necessary conditions used in the proof of Theorem 1.
Assumptions and definitions
In this section, we give assumptions and definitions that are used throughout the paper.
2.1. Anisotropic motion. In order to take into account the anisotropy and the inhomogeneity of the environment in which the front propagates, the metric induced by the Euclidian norm is replaced with a so-called Finsler metric. In our context, a Finsler metric Φ is the support function of a given compact set denoted by B Φ • (x) :
Φ(ζ, x) = max{ ζ, ζ * : ζ * ∈ B Φ • (x)}.
The set B Φ • (x) is referred to as the Wulff shape. Here are the assumptions we make concerning Φ and B Φ • (x).
A0. (i) The Wulff shape B Φ • (x) is a compact set that contains the origin in its interior and is symmetric with respect to it;
(ii) Φ ∈ C²(R n \{0} × R n ); (iii) for all x ∈ R n , ζ → [Φ(ζ, x)]² is strictly convex. For a given x ∈ R n , the dual metric Φ° is defined as the support function of the set B Φ (x) = {ζ ∈ R n : Φ(ζ, x) ≤ 1}. In particular, ζ* → Φ°(ζ*, x) is a support function; indeed, a support function is convex and linear along half-lines issued from the origin. Consequently:
D ζζ Φ°(ζ, x) ⪰ 0 and ζ ∈ Ker D ζζ Φ°(ζ, x),
where ⪰ denotes the usual order associated with S n , the space of n × n symmetric matrices. A second example of motion is the following: consider a (Riemannian) metric Φ(ζ, x) = Φ(ζ) = ⟨Gζ, ζ⟩^{1/2}, where G ∈ S n is positive definite. The associated dual metric turns out to be Φ°(ζ*) = ⟨G⁻¹ζ*, ζ*⟩^{1/2}. Finally, let us give a third example in which the inhomogeneity of the environment is taken into account: Φ(ζ, x) = a(x)⟨Gζ, ζ⟩^{1/2}, where a ∈ C²(R n ) and a(x) > 0 for all x ∈ R n . The reader can check that in these three examples the kernel of D ζζ Φ°(ζ, x) coincides with Span{ζ}. We next assume that the Finsler metric verifies such a property.
A1. ∀x ∈ R n , ∀ζ ∈ R n \{0}, KerD ζζ Φ • (ζ, x) = Span{ζ}.
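As an illustrative check (a computation added here, not taken from the original text), consider the isotropic case Φ(ζ, x) = |ζ|, for which Φ°(ζ*, x) = |ζ*|. A direct computation gives
D ζζ Φ°(ζ, x) = (1/|ζ|)(I − ζ ⊗ ζ/|ζ|²), ζ ≠ 0,
which is positive semi-definite with kernel exactly Span{ζ}, so A1 holds; the same computation applied to Φ°(ζ*) = ⟨G⁻¹ζ*, ζ*⟩^{1/2} shows that the Riemannian examples above satisfy A1 as well.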
We also need the following additional assumption.
A2. There exists L > 0 such that for all x, y ∈ R n and all
ζ* ∈ R n , |Φ°(ζ*, y) − Φ°(ζ*, x)| ≤ L |ζ*| |y − x|.
2.2. Semi-jets, P-subgradients and P-normals. We solve (4.1) and (4.2) by using viscosity solutions [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF]. In order to ensure the existence of a solution (using for instance results from [START_REF] Giga | Comparison principle and convexity preserving properties for singular degenerate parabolic equations on unbounded domains[END_REF][START_REF] Nochetto | Numerical analysis of geometric motion of fronts[END_REF]), we assume throughout the paper that the initial front is bounded. Unboundedness of the domain can be handled with results from [START_REF] Barles | Front propagation and phase field theory[END_REF]. The definition of viscosity solutions is based on the notion of semi-jets. Let Ω be a subset of R n and u be a numerical function defined on Ω and x be a point in Ω. A couple (X, p) ∈ S n × R n is a so-called subjet (resp. a superjet) of the function u at x (with respect to Ω) if for all y ∈ Ω :
½⟨X(y − x), y − x⟩ + ⟨p, y − x⟩ ≤ u(y) − u(x) + o(|y − x|²)   (2.2)
resp. ½⟨X(y − x), y − x⟩ + ⟨p, y − x⟩ ≥ u(y) − u(x) + o(|y − x|²),   (2.3)
where o(.) is a function such that o(h)/h → 0 as h → 0 + . The set of all the subjets (resp. superjets) of u at x is denoted by J 2,- Ω u(x) (resp. by J 2,+ Ω u(x)). In order to define viscosity solutions for parabolic equations, one must use so-called parabolic semi-jets P 2,- Ω×[0,T ] u(x, t); see [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF] for their definition.
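For the reader's convenience we recall that definition (following [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF], up to notation): (X, p, α) ∈ P 2,- Ω×[0,T] u(x, t) means that, for all (y, s) ∈ Ω × [0, T],
u(y, s) ≥ u(x, t) + α(s − t) + ⟨p, y − x⟩ + ½⟨X(y − x), y − x⟩ + o(|s − t| + |y − x|²) as (y, s) → (x, t),
parabolic superjets P 2,+ being defined with the reverse inequality.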
A vector p such that there exists X ∈ S n with (X, p) ∈ J 2,- Ω u(x) is a so-called P-subgradient [8] of the function u:
∀y ∈ Ω, ⟨p, y − x⟩ ≤ u(y) − u(x) + O(|y − x|²).
The set of all such vectors is referred to as the proximal subdifferential of the function u and it is denoted by ∂ P u(x). Analogously, a proximal superdifferential (hence P-supergradients) can be defined by ∂ P u(x) = -∂ P (-u)(x). It coincides with the sets of vectors p such that ∃X ∈ S n : (X, p) ∈ J 2,+ Ω u(x). The geometry of a set Ω can be investigated by studying subjets of the function denoted by Zero Ω defined on Ω and that is identically equal to 0. The proximal subdifferential of this function coincides with the proximal normal cone of Ω at x [START_REF] Clarke | Nonsmooth Analysis and Control Theory[END_REF]:
N P (Ω, x) = {p ∈ R n : ∀y ∈ Ω, ⟨p, y − x⟩ ≤ O(|y − x|²)}.
An element of N P (Ω, x) is referred to as a P-normal. If p is a P-normal of Ω at x and λ is a nonnegative number, then λp is still a P-normal. From the geometrical viewpoint, one can say that N P (Ω, x) is a cone, that is to say it is made of half-lines issued from the origin. Crandall, Ishii and Lions [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF] proved that for a set with a C 2 boundary:
J 2,- Ω Zero(x) = {(S(x) − Y, λn(x)) : λ ≥ 0, Y ⪰ 0},
where n(x) denotes the normal vector and S(x) denotes the second fundamental form extended to R n by setting S = 0 along Span{n(x)}. The proximal normal cone is therefore reduced to R + n(x) = {λn(x) : λ ≥ 0}. If Ω is a hyperplane H and n ≠ 0 denotes a normal vector from H ⊥ , then N P (Ω, x) is the whole line Span{n}.
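As a further illustration (a computation added here, not part of the original argument), let Ω = {y ∈ R n : |y| ≤ 1} and |x| = 1. A vector p satisfies ⟨p, y − x⟩ ≤ O(|y − x|²) for all y ∈ Ω precisely when p = λx with λ ≥ 0: tangential components of p are excluded by moving along the sphere (a linear gain against a quadratic bound), and inward-pointing p are excluded by taking y = (1 − t)x. This recovers N P (Ω, x) = R + n(x) with n(x) = x, in agreement with the C² case described above.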
Main results
In this section, we state and prove our main results, namely Theorem 1 and Corollary 1. The proof of Theorem 1 relies on necessary conditions verified by solutions of possibly degenerate elliptic and parabolic quasilinear equations; these conditions are presented in Section 4.
Theorem 1. Consider a Finsler metric Φ satisfying A0, A1 and A2. Then the associated propagating front Γ t , t > 0, has at most one "outward normal direction" (resp. "inward normal direction"), that is to say the proximal normal cone at any point of I t ∪ Γ t or at any point of I t (resp. O t ∪ Γ t or O t ) is at most a line.
Remarks. 1. Assumptions A0 and A2 ensure the existence and uniqueness of the solution u of (1.1). Assumption A1 can be seen as a regularity assumption on the Franck diagram.
2. Theorem 1 remains valid if the front "fattens" (see [START_REF] Souganidis | Front propagation: Theory and applications[END_REF] for details about the fattening phenomena).
Theorem 1 implies the regularity of convex fronts. See also Theorem 5.5 in [START_REF] Evans | Motion of level sets by mean curvature, III[END_REF]. Corollary 1. Let the metric Φ be independent of the position and such that A0, A1 are satisfied. Assume that the initial front Γ 0 is convex. Then the associated propagating front Γ t is also convex and is C 1,1 ; more precisely, I t ∪ Γ t and I t are convex and their boundary is C 1,1 .
Let us now prove these two results.
Proof of Theorem 1. Assumptions A0 and A2 ensure that the assumptions of Theorem 4.9 in [START_REF] Giga | Comparison principle and convexity preserving properties for singular degenerate parabolic equations on unbounded domains[END_REF] are satisfied. Then, there exists a unique solution of (1.1). In order to prove Theorem 1, we must prove that, at a given point of the boundary of I t , two P-normals p 1 and p 2 are colinear. Let us choose λ such that λp 1 + (1 − λ)p 2 ≠ 0. We know [START_REF] Barles | Front propagation and phase field theory[END_REF] that the function Zero It is a supersolution of (1.1). By applying Proposition 1 (see Section 4), we obtain:
p 1 -p 2 ∈ KerD ζζ Φ • (λp 1 + (1 -λ)p 2 ).
Using Assumption A1, we conclude p 1 -p 2 is colinear with λp 1 + (1 -λ)p 2 . We conclude that p 1 and p 2 are colinear.
We proceed analogously with the sets I t , O t ∪ Γ t and O t .
Proof of Corollary 1. The fact that the front is convex for any time t follows from Theorem 3.1 in [START_REF] Giga | Comparison principle and convexity preserving properties for singular degenerate parabolic equations on unbounded domains[END_REF]. Choosing for u 0 the opposite of the signed distance function to Γ 0 , we ensure that the initial datum is Lipschitz and concave. Therefore, Theorem 2.1 in [START_REF] Nochetto | Numerical analysis of geometric motion of fronts[END_REF] implies that u is Lipschitz; this ensures that u has a sublinear growth. By applying Theorem 3.1 in [START_REF] Giga | Comparison principle and convexity preserving properties for singular degenerate parabolic equations on unbounded domains[END_REF], we know that x → u(x, t) is concave, hence I t ∪ Γ t and I t are convex sets. The Hahn-Banach theorem ensures the existence of a normal in the sense of convex analysis. Such a normal is also a P-normal [START_REF] Clarke | Nonsmooth Analysis and Control Theory[END_REF]. Using the fact that Zero It and Zero It∪Γt are supersolutions of (1.1) (see instance [START_REF] Barles | Front propagation and phase field theory[END_REF]), Theorem 1 implies that there is at most one P-normal. Hence there is exactly one normal in the sense of convex analysis and C 1,1 regularity follows.
Necessary conditions for elliptic and parabolic quasilinear equations
In the present section, we state necessary conditions that are verified by viscosity sub-and supersolutions (hence by solutions) of quasilinear elliptic equations on a domain Ω ⊂ R n :
− Σ_{i,j=1}^{n} a_{i,j}(Du, u, x) ∂²u/∂x_i ∂x_j + f(Du, u, x) = 0, ∀x ∈ Ω.   (4.1)
These equations may be degenerate and/or singular at Du = 0. We also study the associated parabolic equations on Ω × [0, T ] :
∂u/∂t − Σ_{i,j=1}^{n} a_{i,j}(Du, u, x, t) ∂²u/∂x_i ∂x_j + f(Du, u, x, t) = 0, ∀(x, t) ∈ Ω × [0, T].
(4.2) In the following, the n × n symmetric matrix with entries (a i,j ) is denoted by A. We assume that (4.1) and (4.2) are degenerate elliptic.
(E) For all p, u, x(, t), A(p, u, x(, t)) ⪰ 0.
In Propositions 1 and 2, we prove that the difference of two P -subgradients (resp. P -supergradients) of a supersolution (resp. of a subsolution) of (4.1) or (4.2) is a degenerate direction, that is to say it lies in the kernel of A.
Proposition 1 (The elliptic case). Consider a supersolution (resp. a subsolution) u of (4.1), a point x ∈ Ω and two subjets (X i , p i ) ∈ J 2,- Ω u(x), i = 1, 2 (resp. two superjets (X i , p i ) ∈ J 2,+ Ω u(x), i = 1, 2). Then for any λ ∈ [0, 1] such that λp 1 + (1 − λ)p 2 ≠ 0, the following holds true:
p 1 -p 2 ∈ KerA(λp 1 + (1 -λ)p 2 , u(x), x).
A straightforward consequence of Proposition 1 is the following result dealing with nondegenerate equations.
Corollary 2. Suppose that the equation (4.1) is nondegenerate, i.e., ⟨A(p, u, x)q, q⟩ > 0 if q ≠ 0.
Then a solution u : Ω → R of (4.1) has "no corners", that is to say the function u has at most one P-subgradient and at most one P-supergradient at any point x ∈ Ω. This corollary applies for instance to the equation associated with the search of minimal surfaces:
div( Du / (1 + |Du|²)^{1/2} ) = 0 ⇔ −∆u + ⟨D²u Du, Du⟩ / (1 + |Du|²) = 0.   (4.3)
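The equivalence in (4.3) is a routine computation, recalled here for completeness: writing w = (1 + |Du|²)^{1/2},
div(Du/w) = ∆u/w − ⟨D²u Du, Du⟩/w³ = (1/w)( ∆u − ⟨D²u Du, Du⟩/(1 + |Du|²) ),
so both equations have the same solutions. The associated matrix A(p) = I − p ⊗ p/(1 + |p|²) has eigenvalues 1 and 1/(1 + |p|²), hence ⟨A(p)q, q⟩ ≥ |q|²/(1 + |p|²) > 0 for q ≠ 0; the same matrix governs equation (4.4) below, so both examples are nondegenerate in the sense of Corollaries 2 and 3.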
Before proving Proposition 1, we state its parabolic version.
Proposition 2 (The parabolic case). Consider a supersolution (resp. a subsolution) u of (4.2), a point (x, t) ∈ Ω × [0, T ] and two parabolic subjets (X i , p i , α i ) ∈ P 2,- Ω×[0,T ] u(x, t), i = 1, 2 (resp. two parabolic superjets (X i , p i , α i ) ∈ P 2,+ Ω×[0,T ] u(x, t), i = 1, 2). Then for any λ ∈ [0, 1] such that λp 1 + (1 − λ)p 2 ≠ 0, the following holds true:
p 1 -p 2 ∈ KerA(λp 1 + (1 -λ)p 2 , u(x), x, t).
Corollary 3. Suppose that the equation (4.2) is nondegenerate, i.e., ⟨A(p, u, x, t)q, q⟩ > 0 if q ≠ 0.
Then a solution u : Ω × [0, T] → R of (4.2) has "no corners", that is to say the function u has at most one P-subgradient and at most one P-supergradient at any point (x, t) ∈ Ω × [0, T].
The Hamilton-Jacobi equation associated with the motion by mean curvature of graphs is an example of nondegenerate quasilinear parabolic equation:
∂u/∂t − ∆u + ⟨D²u Du, Du⟩ / (1 + |Du|²) = 0.   (4.4)
A class of parabolic equations, including (4.4), is studied by a geometrical approach in [START_REF] Barles | Quasilinear parabolic equations, unbounded solutions and geometrical equations I[END_REF]. The proof of Proposition 1 relies on the following technical lemma.
Lemma 1. Consider an arbitrary set Ω and a function u : Ω → R. Let x be a point in Ω and (X i , p i ), i = 1, 2, be two subjets of u at x. Then for any matrix X ∈ S n such that X ⪯ X i , i = 1, 2, any λ ∈ [0, 1] and any M > 0, the following holds true:
(X + M (p 1 -p 2 ) ⊗ (p 1 -p 2 ), λp 1 + (1 -λ)p 2 ) ∈ J 2,- Ω u(x).
Let us show how Lemma 1 implies Proposition 1.
Proof of Proposition 1. Let X ∈ S n be such that X ⪯ X i for i = 1, 2 and consider any λ ∈ [0, 1] and any M > 0. By applying Lemma 1 to the supersolution u of (4.1) and by denoting p the vector λp 1 + (1 − λ)p 2 and q the vector p 1 − p 2 , we conclude that: (X + M q ⊗ q, p) ∈ J 2,- Ω u(x). As u is a supersolution of (4.1) and p ≠ 0, the following holds true:
−tr[A(p, u(x), x)(X + M q ⊗ q)] + f(p, u(x), x) ≥ 0.
Dividing by M and letting M → +∞ yields:
0 ≤ ⟨A(p, u(x), x)q, q⟩ = tr[A(p, u(x), x) q ⊗ q] ≤ 0.
The first inequality follows from the ellipticity of (4.1). We conclude that q ∈ KerA(p, u(x), x).
If the function u is a subsolution, apply the lemma to the function -u and use it analogously.
One can easily give a parabolic version of this lemma and use it to prove Proposition 2. We omit these details and we turn to the proof of Lemma 1.
Proof of Lemma 1. By considering v(y) = u(x + y) − u(x), we may assume that x = 0 and u(x) = 0. Let us denote p = λp 1 + (1 − λ)p 2 and q = p 1 − p 2 . A straightforward computation shows that for any real number r such that |r| ≤ min(2λ/M, 2(1 − λ)/M):
½ M r² ≤ max{(1 − λ)r, −λr}
(indeed, for r ≥ 0 the right-hand side equals (1 − λ)r and the inequality reduces to r ≤ 2(1 − λ)/M, while for r < 0 it equals λ|r| and the inequality reduces to |r| ≤ 2λ/M).
Therefore, for any y such that |⟨q, y⟩| ≤ min(2λ/M, 2(1 − λ)/M), we get:
½ M ⟨q, y⟩² ≤ max{(1 − λ)⟨q, y⟩, −λ⟨q, y⟩}.
Finally, for any y in a neighbourhood of the origin such that x + y ∈ Ω, we get:
½⟨(X + M q ⊗ q)y, y⟩ + ⟨p, y⟩ = ½⟨Xy, y⟩ + ½ M ⟨q, y⟩² + ⟨p, y⟩
≤ max{ ½⟨X 1 y, y⟩ + (1 − λ)⟨q, y⟩ + ⟨p, y⟩, ½⟨X 2 y, y⟩ − λ⟨q, y⟩ + ⟨p, y⟩ }
= max{ ½⟨X 1 y, y⟩ + ⟨p 1 , y⟩, ½⟨X 2 y, y⟩ + ⟨p 2 , y⟩ } ≤ v(y) + o(|y|²),
since p + (1 − λ)q = p 1 , p − λq = p 2 and (X i , p i ) ∈ J 2,- Ω−x v(0). We have therefore proved that (X + M q ⊗ q, p) ∈ J 2,- Ω−x v(0) = J 2,- Ω u(x). Remark. Using Lemma 1, necessary conditions can be derived for any general nonlinear elliptic equation F(D²u, Du, u, x) = 0 if (E) is satisfied and if X → F(X, p, u, x) is positively homogeneous. | 18,289 | [
"9368"
] | [
"543620"
] |
01765230 | en | [
"phys"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01765230/file/1804.02388.pdf | Grigor Nika
Andrei Constantinescu
DESIGN OF MULTI-LAYER MATERIALS USING INVERSE HOMOGENIZATION AND A LEVEL SET METHOD
Keywords: Topology optimization, Level set method, Inverse homogenization, Multi-layer material
This work is concerned with the micro-architecture of multi-layer material that globally exhibits desired mechanical properties, for instance a negative apparent Poisson ratio. We use inverse homogenization, the level set method, and the shape derivative in the sense of Hadamard to identify material regions and track boundary changes within the context of the smoothed interface. The level set method and the shape derivative obtained in the smoothed interface context allows to capture, within the unit cell, the optimal microgeometry. We test the algorithm by computing several multi-layer auxetic micro-structures. The multi-layer approach has the added benefit that contact during movement of adjacent "branches" of the micro-structure can be avoided in order to increase its capacity to withstand larger stresses.
Introduction
The better understanding of the behavior of novel materials with unusual mechanical properties is important in many applications. As it is well known the optimization of the topology and geometry of a structure will greatly impact its performance. Topology optimization, in particular, has found many uses in the aerospace industry, automotive industry, acoustic devices to name a few. As one of the most demanding undertakings in structural design, topology optimization, has undergone a tremendous growth over the last thirty years. Generally speaking, topology optimization of continuum structures has branched out in two directions. One is structural optimization of macroscopic designs, where methods like the Solid Isotropic Method with Penalization (SIMP) [START_REF] Bendsoe | Topology optimization: theory, methods and applications[END_REF] and the homogenization method [START_REF] Allaire | Shape Optimization by the Homogenization Methods[END_REF], [START_REF] Allaire | Shape optimization by the homogenization method[END_REF] where first introduced. The other branch deals with optimization of micro-structures in order to elicit a certain macroscopic response or behavior of the resulting composite structure [START_REF] Bendsoe | Generating optimal topologies in structural design using a homogenization method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF], [START_REF]Sigmund Materials with prescribed constitutive parameters: An inverse homogenization problem[END_REF], [START_REF] Wang | Level-set method for design of multi-phase elastic and thermoelastic materials[END_REF]. The latter will be the focal point of the current work.
In the context of linear elastic material and small deformation kinematics there is quite a body of work in the design of mechanical meta-materials using inverse homogenization. One of the first works in the aforementioned subject was carried out by [START_REF]Sigmund Materials with prescribed constitutive parameters: An inverse homogenization problem[END_REF]. The author used a modified optimality criteria method that was proposed in [START_REF] Rozvany | Layout and Generalized Shape Optimization by Iterative COC Methods[END_REF] to optimize a periodic micro-structure so that the homogenized coefficients attained certain target values.
On the same wavelength the authors in [START_REF] Wang | Level-set method for design of multi-phase elastic and thermoelastic materials[END_REF] used inverse homogenization and a level set method coupled with the Hadamard boundary variation technique [START_REF] Allaire | Conception optimale de structures[END_REF], [START_REF] Allaire | Structural optimization using sensitivity analysis and a level set method[END_REF] to construct elastic and thermo-elastic periodic micro-structures that exhibited certain prescribed macroscopic behavior for a single material and void. More recent work was also done by [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF], where again inverse homogenization and a level set method coupled with the Hadamard shape derivative was used to extend the class of optimized micro-structures in the context of the smoothed interface approach [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF]. Namely, for mathematical or physical reasons a smooth, thin transitional layer of size 2 , where is small, replaces the sharp interface between material and void or between two different material. The theory that [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF] develop in obtaining the shape derivative is based on the differentiability properties of the signed distance function [START_REF] Delfour | Shapes and Geometries. Metrics, Analysis, Differential Calculus, and Optimization, Advances in Design and Control[END_REF] and it is mathematically rigorous.
Topology optimization under finite deformation has not undergone the same rapid development as in the case of small strains elasticity, for obvious reasons. One of the first works of topology optimization in non-linear elasticity appeared as part of the work of [START_REF] Allaire | Structural optimization using sensitivity analysis and a level set method[END_REF] where they considered a non-linear hyper-elastic material of St. Venant-Kirchhoff type in designing a cantilever using a level set method. More recent work was carried out by the authors of [START_REF] Wang | Design of materials with prescribed nonlinear properties[END_REF], where they utilized the SIMP method to design non-linear periodic micro-structures using a modified St. Venant-Kirchhoff model.
The rapid advances of 3D printers have made it possible to print many of these microstructures, that are characterized by complicated geometries, which itself has given way to testing and evaluation of the mechanical properties of such structures. For instance, the authors of [START_REF] Clausen | Topology optimized architectures with programmable Poisson's ratio over large deformations[END_REF], 3D printed and tested a variety of the non-linear micro-structures from the work of [START_REF] Wang | Design of materials with prescribed nonlinear properties[END_REF] and showed that the structures, similar in form as the one in figure 1, exhibited an apparent Poisson ratio between -0.8 and 0 for strains up to 20%. Preliminary experiments by P. Rousseau [START_REF] Rousseau | Design of auxetic metamaterials[END_REF] on the printed structure of figure 1 showed that opposite branches of the structure came into contact with one another at a strain of roughly 25% which matched the values reported in [START_REF] Clausen | Topology optimized architectures with programmable Poisson's ratio over large deformations[END_REF]. To go beyond the 25% strain mark, the author of [START_REF] Rousseau | Design of auxetic metamaterials[END_REF] designed a material where the branches were distributed over different parallel planes (see figure 2). The distribution of the branches on different planes eliminated contact of opposite branches up to a strain of 50%. A question remains whether or not the shape of the unit cell in figure 2 is optimal. We suspect that it is not, however, the novelty of the actual problem lies in its multi-layer character within the optimization framework of a unit cell with respect to two desired apparent elastic tensors. Our goal in this work is to design a multi-layer periodic composite with desired elastic properties. In other words, we need to specify the micro-structure of the material in terms of both the distribution as well as its topology. In section 2 we specify the problem setting, define our objective function that needs to be optimized and describe the notion of a Hadamard shape derivative. In section 3 we introduce the level set that is going to implicitly characterize our domain and give a brief description of the smoothed interface approach. Moreover, we compute the shape derivatives and describe the steps of the numerical algorithm. Furthermore, in Section 4 we compute several examples of multi-layer auxetic material that exhibit negative apparent Poisson ratio in 2D. For full 3D systems the steps are exactly the same, albeit with a bigger computational cost. Notation. Throughout the paper we will be employing the Einstein summation notation for repeated indices. As is the case in linear elasticity, ε ε ε(u u u) will indicate the strain defined by: ε ε ε(u u u) = 1 2 ∇u u u + ∇u u u , the inner product between matrices is denoted by A A A:B B B = tr(A A A B B B) = A ij B ji . Lastly, the mean value of a quantity is defined as M Y (γ) = 1 |Y | Y γ(y y y) dy y y.
Problem setting
We begin with a brief outline of some key results from the theory of homogenization [START_REF] Allaire | Shape Optimization by the Homogenization Methods[END_REF], [START_REF] Bakhvalov | Homogenisation: averaging processes in periodic media: mathematical problems in the mechanics of composite materials[END_REF], [START_REF] Cioranescu | Introduction to Homogenization[END_REF], [START_REF] Mei | Homogenisation methods for multiscale mechanics[END_REF], [START_REF] Sanchez-Palencia | Non-homogeneous media and vibration theory[END_REF], that will be needed to set up the optimization problem. Consider a linear, elastic, periodic body occupying a bounded domain Ω of R N , N = 2, 3 with period that is assumed to be small in comparison to the size of the domain. Moreover, denote by
Y = (−1/2, 1/2)^N the rescaled periodic unit cell. The material properties in Ω are represented by a periodic fourth order tensor A(y), with y = x/ε ∈ Y and x ∈ Ω, carrying the usual symmetries and it is positive definite:
A ijkl = A jikl = A klij for i, j, k, l ∈ {1, . . . , N}.
Figure 3. Schematic of the elastic composite material that is governed by eq. (2.1).
Denoting by f the body force and enforcing a homogeneous Dirichlet boundary condition, the description of the problem is,
−div σ^ε = f in Ω,
σ^ε = A(x/ε) ε(u^ε) in Ω,   (2.1)
u^ε = 0 on ∂Ω.
We perform an asymptotic analysis of (2.1) as the period ε approaches 0 by searching for a displacement u^ε of the form
u^ε(x) = Σ_{i=0}^{+∞} ε^i u^i(x, x/ε).
One can show that u^0 depends only on x and, at order ε⁻¹, we obtain a family of auxiliary periodic boundary value problems posed on the reference cell Y. To begin with, for any m, ℓ ∈ {1, . . . , N} we define E^{ℓm} = ½(e m ⊗ e ℓ + e ℓ ⊗ e m ), where (e k ) 1≤k≤N is the canonical basis of R N . For each E^{ℓm} we have
−div y ( A(y)(E^{ℓm} + ε y (χ^{ℓm})) ) = 0 in Y,
y → χ^{ℓm}(y) Y-periodic,
M Y (χ^{ℓm}) = 0,
where χ^{ℓm} is the displacement created by the mean deformation equal to E^{ℓm}. In its weak form the above equation reads:
Find χ^{ℓm} ∈ V such that ∫ Y A(y)(E^{ℓm} + ε(χ^{ℓm})) : ε(w) dy = 0 for all w ∈ V,   (2.2)
where V = {w ∈ W^{1,2}_per(Y; R N ) | M Y (w) = 0}. Furthermore, matching asymptotic terms at order ε⁰ we can obtain the homogenized equations for u^0,
−div x σ^0 = f in Ω,
σ^0 = A^H ε(u^0) in Ω,   (2.3)
u^0 = 0 on ∂Ω,
where A^H are the homogenized coefficients, which in their symmetric form read
A^H_{ijℓm} = ∫ Y A(y)(E^{ij} + ε y (χ^{ij})) : (E^{ℓm} + ε y (χ^{ℓm})) dy.
The aim of inverse homogenization is to find a micro-structure whose homogenized behaviour matches prescribed elastic properties; we therefore introduce the objective functional
J(S) = ½ ‖A^H − A^t‖²_η with S = (S 1 , . . . , S d ),   (2.5)
where ‖·‖_η is the weighted Euclidean norm, A^t, written here component wise, are the specified elastic tensor values, A^H are the homogenized counterparts, and η are the weight coefficients carrying the same type of symmetry as the homogenized elastic tensor. We define the set of admissible shapes contained in the working domain Y that have a fixed volume by
U_ad = { S i ⊂ Y : S i is open, bounded, and smooth, such that |S i | = V^t_i , i = 1, . . . , d }.
Thus, we can formulate the optimization problem as follows,
inf_{S ⊂ U_ad} J(S), subject to: χ^{ℓm} satisfies (2.2).   (2.6)
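To make the assembly of the homogenized coefficients concrete, the following Python sketch (ours, not part of the original paper) evaluates the formula for A^H by numerical quadrature on a uniform grid of the unit cell; the material tensor field and the corrector strains ε y (χ^{ℓm}) are assumed to be provided by a separate finite element solve of (2.2), for instance with FreeFem++ as used by the authors.

```python
import itertools
import numpy as np

def homogenized_tensor(A, corrector_strains, cell_area):
    """A^H_{ij,lm} = integral over Y of A(y)(E^{ij}+e_y(chi^{ij})) : (E^{lm}+e_y(chi^{lm})) dy.

    A                : (ny, nx, 2, 2, 2, 2) array, material tensor at the quadrature points
    corrector_strains: dict {(i, j): (ny, nx, 2, 2) array} with e_y(chi^{ij})
    cell_area        : quadrature weight of one grid cell
    """
    def E(i, j):                       # constant macroscopic strains E^{ij}
        e = np.zeros((2, 2))
        e[i, j] += 0.5
        e[j, i] += 0.5
        return e

    AH = np.zeros((2, 2, 2, 2))
    pairs = [(0, 0), (0, 1), (1, 1)]   # independent strain directions in 2D
    for (i, j), (l, m) in itertools.product(pairs, pairs):
        eij = E(i, j) + corrector_strains[(i, j)]
        elm = E(l, m) + corrector_strains[(l, m)]
        stress = np.einsum('...pqrs,...rs->...pq', A, elm)   # A : (E^{lm} + e(chi^{lm}))
        val = np.einsum('...pq,...pq->...', eij, stress).sum() * cell_area
        for a, b in ((i, j), (j, i)):                         # fill the usual symmetries
            for c, d in ((l, m), (m, l)):
                AH[a, b, c, d] = val
                AH[c, d, a, b] = val
    return AH
```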
2.2. Shape propagation analysis. In order to apply a gradient descent method to (2.6) we recall the notion of shape derivative. As has become standard in the shape and topology optimization literature we follow Hadamard's variation method for computing the deformation of a shape. The classical shape sensitivity framework of Hadamard provides us with a descent direction. The approach here is due to [START_REF] Murat | Etudes de problmes doptimal design[END_REF] (see also [START_REF] Allaire | Conception optimale de structures[END_REF]). Assume that Ω 0 is a smooth, open, subset of a design domain D. In the classical theory one defines the perturbation of the domain Ω 0 in the direction θ θ θ as
(Id + θ θ θ)(Ω 0 ) := {x x x + θ θ θ(x x x) | x x x ∈ Ω 0 } where θ θ θ ∈ W 1,∞ (R N ; R N )
and it is tangential on the boundary of D. For small enough θ θ θ, (Id + θ θ θ) is a diffeomorphism in R N . Otherwise said, every admissible shape is represented by the vector field θ θ θ. This framework allows us to define the derivative of a functional of a shape as a Fréchet derivative.
Definition Definition Definition 2.2.1. The shape derivative of J(Ω 0 ) at Ω 0 is defined as the Fréchet derivative in W 1,∞ (R N ; R N ) at 0 0 0 of the mapping θ θ θ → J((Id + θ θ θ)(Ω 0 )):
J((Id + θ)(Ω 0 )) = J(Ω 0 ) + J′(Ω 0 )(θ) + o(θ) with lim_{θ→0} |o(θ)| / ‖θ‖_{W^{1,∞}} = 0, and J′(Ω 0 )(θ) a continuous linear form on W^{1,∞}(R N ; R N ).
Remark 1. The above definition is not a constructive computation for J (Ω 0 )(θ θ θ). There are more than one ways one can compute the shape derivative of J(Ω 0 ) (see [START_REF] Allaire | Conception optimale de structures[END_REF] for a detailed presentation). In the following section we compute the shape derivative associated to (2.6) using the formal Lagrangian method of J. Cea [START_REF] Céa | Conception optimale ou identification de formes: calcul rapide de la drive directionnelle de la fonction cout[END_REF].
Level set representation of the shape in the unit cell
Following the ideas of [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Wang | Level-set method for design of multi-phase elastic and thermoelastic materials[END_REF], the d sub-domains in the cell Y labeled S i , i ∈ {1, . . . , d}, can treat up to 2^d distinct phases by considering a partition of the working domain Y denoted by F j , j ∈ {1, . . . , 2^d}, and defined in the following way,
F 1 = S 1 ∩ S 2 ∩ . . . ∩ S d ,
F 2 = S c 1 ∩ S 2 ∩ . . . ∩ S d ,
. . .
Define for i ∈ {1, . . . , d} the level sets φ i ,
φ i (y) = 0 if y ∈ ∂S i , > 0 if y ∈ S c i , < 0 if y ∈ S i .
Moreover, denote by Γ km = Γ mk = F m ∩ F k , where k ≠ m, the interface boundary between the m-th and the k-th partition, and let Γ = ∪_{i,j=1, i≠j}^{2^d} Γ ij denote the collective interface to be displaced. The properties of the material that occupies each phase F j are characterized by an isotropic fourth order tensor
A j = 2 µ j I 4 + (κ j − 2 µ j /N) I 2 ⊗ I 2 , j ∈ {1, . . . , 2^d},
where κ j and µ j are the bulk and shear moduli of phase F j , I 2 is a second order identity matrix, and I 4 is the identity fourth order tensor acting on symmetric matrices.
Remark 2. The expression of the layer F k , 1 ≤ k ≤ 2^d, in terms of the sub-domains S i , 1 ≤ i ≤ d, is simply given by the representation of the number k in base 2. For a number k, its representation in base 2 is a sequence of d digits, 0 or 1. Replacing in position i the digit 0 with S i and the digit 1 with S c i , one can map the expression in base 2 to the expression of the layer F k . In a similar way, one can express the subsequent formulas in a simple way. However, for the sake of simplicity, we shall restrict the developments in the paper to d = 2 and 1 ≤ j ≤ 4.
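The correspondence of Remark 2 is easy to script; the small Python helper below is an illustration of ours, with the indexing convention inferred from (3.3)-(3.4) below (digit i of k − 1 equal to 0 selects S i , equal to 1 selects S i c), and lists the factors whose intersection defines F k .

```python
def layer_expression(k, d):
    """Factors whose intersection defines F_k, for 1 <= k <= 2**d."""
    bits = [(k - 1) >> i & 1 for i in range(d)]   # binary digits of k-1; position i <-> S_{i+1}
    return ["S%d^c" % (i + 1) if b else "S%d" % (i + 1) for i, b in enumerate(bits)]

# For d = 2:  F_1 -> ['S1', 'S2'],   F_2 -> ['S1^c', 'S2'],
#             F_3 -> ['S1', 'S2^c'], F_4 -> ['S1^c', 'S2^c'].
```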
Remark 3. At the interface boundary between the F j 's there exists a jump on the coefficients that characterize each phase. In the sub-section that follows we will change this sharp interface assumption and allow for a smooth passage from one material to the other as in [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF].
3.1. The smoothed interface approach. We model the interface as a smooth, thin transition layer of width 2ε > 0 (see [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF]) rather than a sharp interface. This regularization is carried out in two steps: first by re-initializing each level set φ i to become a signed distance function d S i to the interface boundary, and then by using an interpolation with a Heaviside type of function h ε (t) to pass from one material to the next,
φ i → d S i → h ε (d S i ).
The Heaviside function h ε (t) is defined as,
h ε (t) = 0 if t < −ε, ½(1 + t/ε + (1/π) sin(πt/ε)) if |t| ≤ ε, 1 if t > ε.   (3.1)
Remark 4. The choice of the regularizing function above is not unique, it is possible to use other type of regularizing functions (see [START_REF] Wang | Color" level sets: A multiple multi-phase method for structural topology optimization with multiple materials[END_REF]).
The signed distance function to the domain S i , i = 1, 2, denoted by d S i is obtained as the stationary solution of the following problem [START_REF] Osher | Fronts propagating with curvature dependent speed: algorithms based on hamiltonjacobi formulations[END_REF],
∂d S i /∂t + sign(φ i )(|∇d S i | − 1) = 0 in R + × Y, d S i (0, y) = φ i (y) in Y,   (3.2)
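In practice (3.2) is integrated for only a few pseudo-time steps. A minimal first-order sketch in Python is given below; it is illustrative only — the smoothed sign function, the Godunov upwinding and the time step are our own choices, not the discretization used by the authors.

```python
import numpy as np

def reinitialize(phi0, dx, steps=20):
    """Drive phi towards a signed distance function by integrating (3.2) explicitly."""
    d = phi0.copy()
    s = phi0 / np.sqrt(phi0**2 + dx**2)          # smoothed sign(phi0)
    dt = 0.5 * dx
    for _ in range(steps):
        # one-sided differences (periodic cell, hence np.roll)
        dxm = (d - np.roll(d,  1, axis=1)) / dx
        dxp = (np.roll(d, -1, axis=1) - d) / dx
        dym = (d - np.roll(d,  1, axis=0)) / dx
        dyp = (np.roll(d, -1, axis=0) - d) / dx
        # Godunov upwind approximation of |grad d|
        gp = np.sqrt(np.maximum(np.maximum(dxm, 0)**2, np.minimum(dxp, 0)**2) +
                     np.maximum(np.maximum(dym, 0)**2, np.minimum(dyp, 0)**2))
        gm = np.sqrt(np.maximum(np.minimum(dxm, 0)**2, np.maximum(dxp, 0)**2) +
                     np.maximum(np.minimum(dym, 0)**2, np.maximum(dyp, 0)**2))
        grad = np.where(s > 0, gp, gm)
        d -= dt * s * (grad - 1.0)
    return d
```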
where φ i is the initial level set for the subset S i . Hence, the properties of the material occupying the unit cell Y are then defined as a smooth interpolation between the tensors A j 's j ∈ {1, . . . , 2 d },
A (d S ) = (1 -h (d S 1 )) (1 -h (d S 2 )) A 1 + h (d S 1 ) (1 -h (d S 2 )) A 2 + (1 -h (d S 1 )) h (d S 2 ) A 3 + h (d S 1 ) h (d S 2 ) A 4 . (3.3)
where d S = (d S 1 , d S 2 ). Lastly, we remark that the volume of each phase is written as
Y ι k dy y y = V k
where ι k is defined as follows,
ι 1 = (1 -h (d S 1 )) (1 -h (d S 2 )), ι 2 = h (d S 1 ) (1 -h (d S 2 )), ι 3 = (1 -h (d S 1 )) h (d S 2 ), ι 4 = h (d S 1 ) h (d S 2 ).
(3.4)
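The interpolation (3.1), (3.3) and the weights (3.4) translate directly into Python/NumPy; in the sketch below (ours) the signed distance arrays d1, d2 and the constant phase tensors A1, ..., A4 are assumed to be given.

```python
import numpy as np

def h_eps(t, eps):
    """Regularized Heaviside function of (3.1)."""
    return np.where(t < -eps, 0.0,
           np.where(t > eps, 1.0,
                    0.5 * (1.0 + t / eps + np.sin(np.pi * t / eps) / np.pi)))

def interpolate(d1, d2, A1, A2, A3, A4, eps):
    """Smoothly interpolated tensor of (3.3) and the phase weights iota_k of (3.4)."""
    h1, h2 = h_eps(d1, eps), h_eps(d2, eps)
    iota = [(1 - h1) * (1 - h2), h1 * (1 - h2), (1 - h1) * h2, h1 * h2]
    # broadcast the scalar weight fields against the constant fourth order tensors
    A = sum(w[..., None, None, None, None] * Ak for w, Ak in zip(iota, (A1, A2, A3, A4)))
    return A, iota
```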
Remark 5. Once we have re-initialized the level sets into signed distance functions we can obtain the shape derivatives of the objective functional with respect to each sub-domain S i . In order to do this we require certain differentiability properties of the signed distance function.
Detailed results pertaining to the aforementioned properties can be found in [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF]. We encourage the reader to consult their work for the details. For our purposes, we will make heavy use of Propositions 2.5 and 2.9 in [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF] as well as certain results therein.
Theorem Theorem Theorem 3.1.1. Assume that S 1 , S 2 are smooth, bounded, open subsets of the working domain Y and θ θ θ 1 , θ θ θ 2 ∈ W 1,∞ (R N ; R N ). The shape derivatives of (2.6) in the directions θ θ θ 1 , θ θ θ 2 respectively are,
∂J ∂S 1 (θ θ θ 1 ) = - Γ θ θ θ 1 • n n n 1 η ijk A H ijk -A t ijk A * mqrs (d S 2 )(E k mq + ε mq (χ χ χ k ))(E ij rs + ε rs (χ χ χ ij )) -h * (d S 2 ) dy y y ∂J ∂S 2 (θ θ θ 2 ) = - Γ θ θ θ 2 • n n n 2 η ijk A H ijk -A t ijk A * mqrs (d S 1 ) (E k mq + ε mq (χ χ χ k )) (E ij rs + ε rs (χ χ χ ij )) -h * (d S 1 ) dy y y
where, for i = 1, 2, A * (d S i ), written component wise above, denotes,
A*(d S i ) = A 2 − A 1 + h ε (d S i )(A 1 − A 2 − A 3 + A 4 ),   (3.5)
h*(d S i ) = ℓ 2 − ℓ 1 + h ε (d S i )(ℓ 1 − ℓ 2 − ℓ 3 + ℓ 4 ),   (3.6)
and ℓ j , j ∈ {1, . . . , 4}, are the Lagrange multipliers for the weight of each phase.
Proof. For each k, we introduce the following Lagrangian for (u
u u k , v v v, µ µ µ) ∈ V × V × R 2d associated to problem (2.6), L(S S S, u u u k , v v v, µ µ µ) = J(S S S) + Y A (d S S S ) E E E k + ε ε ε(u u u k ) : ε ε ε(v v v) dy y y + µ µ µ • Y ι ι ι dy y y -V V V t , (3.7)
where µ µ µ = (µ 1 , . . . , µ 4 ) is a vector of Lagrange multipliers for the volume constraint, ι ι ι = (ι 1 , . . . , ι 4 ), and V V V t = (V t 1 , . . . , V t 4 ). Remark 6. Each variable of the Lagrangian is independent of one another and independent of the sub-domains S 1 and S 2 .
Direct problem. Differentiating L with respect to v v v in the direction of some test function w w w ∈ V we obtain,
∂L ∂v v v | w w w = Y A ijrs (d S S S ) (E k ij + ε ij (u u u k )) ε rs (w w w) dy y y,
upon setting this equal to zero we obtain the variational formulation in (2.2).
Adjoint problem. Differentiating L with respect to u u u k in the direction w w w ∈ V we obtain,
∂L ∂u u u k | w w w = η ijk A H ijk -A t ijk Y A mqrs (d S S S ) (E k mq + ε mq (u u u k )) ε rs (w w w) dy y y + Y A mqrs (d S S S ) ε mq (w w w) ε rs (v v v) dy y y.
We immediately observe that the integral over Y on the first line is equal to 0 since it is the variational formulation (2.2). Moreover, if we chose w w w = v v v then by the positive definiteness assumption of the tensor A as well as the periodicity of v v v we obtain that adjoint solution is identically zero, v v v ≡ 0.
Shape derivative. Lastly, we need to compute the shape derivative in directions θ θ θ 1 and θ θ θ 2 for each sub-domain S 1 , S 2 respectively. Here we will carry out computations for the shape derivative with respect to the sub-domain S 1 with calculations for the sub-domain S 2 carried out in a similar fashion. We know (see [START_REF] Allaire | Conception optimale de structures[END_REF]) that
∂J ∂S i (S S S) | θ θ θ i = ∂L ∂S i (S S S, χ χ χ k , 0 0 0, λ λ λ) | θ θ θ i for i = 1, 2. (3.8) Hence, ∂L ∂S 1 (θ θ θ 1 ) = η ijk A H ijk -A t ijk Y d S 1 (θ θ θ 1 ) ∂A mqrs ∂S 1 (d S S S )(E k mq + ε mq (u u u k )) (E ij rs + ε rs (u u u ij ))dy y y + Y d S 1 (θ θ θ 1 ) ∂A ijrs ∂d S 1 (d S S S )(E k ij + e yij (u u u k ))ε rs (v v v)dy y y + 1 Y -d S 1 (θ θ θ 1 ) ∂h (d S 1 ) ∂d S 1 (1 -h (d S 2 ))dy y y + 2 Y d S 1 (θ θ θ 1 ) ∂h (d S 1 ) ∂d S 1 (1 -h (d S 2
)) dy y y
+ 3 Y -d S 1 (θ θ θ 1 ) ∂h (d S 1 ) ∂d S 1 h (d S 2 ) dy y y + 4 Y d S 1 (θ θ θ 1 ) ∂h (d S 1 ) ∂d S 1 h (d S 2
) dy y y.
The term on the second line is zero due to the fact that the adjoint solution is identically zero. Moreover, applying Proposition 2.5 and then Proposition 2.9 from [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF] as well as using the fact that we are dealing with thin interfaces we obtain,
∂L ∂S 1 (θ θ θ 1 ) = -η ijk A H ijk -A t ijk Γ θ θ θ 1 • n n n 1 A * mqrs (d S 2 ) (E k mq + ε mq (u u u k )) (E ij rs + ε rs (u u u ij )) dy y y + 1 Γ θ θ θ 1 • n n n 1 (1 -h (d S 2 )) dy y y -2 Γ θ θ θ 1 • n n n 1 (1 -h (d S 2
)) dy y y
+ 3 Γ θ θ θ 1 • n n n 1 h (d S 2 ) dy y y -4 Γ θ θ θ 1 • n n n 1 h (d S 2 ) dy y y
where n n n 1 denotes the outer unit normal to S 1 . Thus, if we let u u u k = χ χ χ k , the solution to the unit cell (2.2) and collect terms the result follows.
Remark 7. The tensor A * in (3.5) as well h * in (3.6) of the shape derivatives in Theorem 3.1.1 depend on the signed distance function in an alternate way which provides an insight into the coupled nature of the problem. We further remark, that in the smooth interface context, the collective boundary Γ to be displaced in Theorem 3.1.1, is not an actual boundary but rather a tubular neighborhood.
3.2.
The numerical algorithm. The result of Theorem 3.1.1 provides us with the shape derivatives in the directions θ θ θ 1 , θ θ θ 2 respectively. If we denote by,
v 1 = ∂J ∂S 1 (S S S), v 2 = ∂J ∂S 2 (S S S),
a descent direction is then found by selecting the vector field θ θ θ 1 = v 1 n n n 1 , θ θ θ 2 = v 2 n n n 2 . To move the shapes S 1 , S 2 in the directions v 1 , v 2 is done by transporting each level set, φ i , i = 1, 2 independently by solving the Hamilton-Jacobi type equation
∂φ i ∂t + v i |∇φ i | = 0, i = 1, 2. (3.9)
Moreover, we extend and regularize the scalar velocity v i , i = 1, 2 to the entire domain Y as in [START_REF] Allaire | Structural optimization using sensitivity analysis and a level set method[END_REF], [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF]. The extension is done by solving the following problem for i = 1, 2,
-α 2 ∆θ θ θ i + θ θ θ i = 0 in Y, ∇θ θ θ i n n n i = v i n n n i on Γ, θ θ θ i Y-periodic,
where α > 0 is small regularization parameter. Hence, using the same algorithm as in [START_REF] Allaire | Structural optimization using sensitivity analysis and a level set method[END_REF], for i = 1, 2 we have:
3.2.1. Algorithm. We initialize S 0 i ⊂ U ad through the level sets φ i 0 defined as the signed distance function of the chosen initial topology, then 1. iterate until convergence for k ≥ 0: a. Calculate the local solutions χ χ χ m k for m, = 1, 2 by solving the linear elasticity problem
(2.2) on O k := S k 1 ∪ S k 2 .
b. Deform the domain O k by solving the Hamilton-Jacobi equations (3.9) for i = 1, 2.
The new shape O k+1 is characterized by the level sets φ k+1 i solutions of (3.9) after a time step ∆t k starting from the initial condition φ k i with velocity v i k computed in terms of the local problems χ χ χ m k for i = 1, 2. The time step ∆t k is chosen so that J(S S S k+1 ) ≤ J(S S S k ). 2. From time to time, for stability reasons, we re-initialize the level set functions φ k i by solving (3.2) for i = 1, 2.
Numerical examples
For all the examples that follow we have used a symmetric 100 × 100 mesh of P 1 elements. We imposed volume equality constraints for each phase. In the smooth interpolation of the material properties in formula (3.3), we set ε equal to 2∆x where ∆x is the grid size. The parameter ε is held fixed throughout (see [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF] and [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF]). The Lagrange multipliers were updated at each iteration in the following way, ℓ j n+1 = ℓ j n − β( ∫ Y ι j n dy − V t j ), where β is a small parameter. Due to the fact that this type of problem suffers from many local minima that may not result in a shape, instead of putting a stopping criterion in the algorithm we fix, a priori, the number of iterations. Furthermore, since we have no knowledge of what volume constraints make sense for a particular shape, we chose not to strictly enforce the volume constraints for the first two examples. However, for examples 3 and 4 we use an augmented Lagrangian to actually enforce the volume constraints,
L(S S S, µ µ µ, β β β) = J(S S S) - 4 i=1 µ i C i (S S S) + 4 i=1 1 2 β i C 2 i (S S S),
here C i (S S S) are the volume constraints and β is a penalty term. The Lagrange multipliers are updated as before, however, this time we update the penalty term, β every 5 iterations. All the calculations were carried out using the software FreeFem++ [START_REF] Hecht | New development in FreeFem++[END_REF].
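The multiplier and penalty bookkeeping described above can be summarized in a few lines of Python; this is a schematic transcription — the variable names and the factor by which the penalty is increased are our own choices, since the text only states that β is updated every 5 iterations.

```python
def update_multipliers(mu, beta, constraints, it, penalty_growth=1.2):
    """One outer iteration of the augmented Lagrangian bookkeeping.

    mu          : list of Lagrange multipliers mu_i
    beta        : list of penalty parameters beta_i
    constraints : list of current constraint values C_i(S) (volume mismatches)
    it          : current iteration number (penalties are increased every 5 iterations)
    """
    mu = [m - b * c for m, b, c in zip(mu, beta, constraints)]
    if it % 5 == 0:
        beta = [penalty_growth * b for b in beta]
    return mu, beta
```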
Remark 8. We remark that for the augmented Lagrangian we need to compute the new shape derivative that would result. The calculations are similar as that of Theorem 3.1.1 and, therefore, we do not detail them here for the sake of brevity.
Example 1.
The first structure to be optimized is a multilevel material that attains an apparent Poisson ratio of -1. The Young moduli of the four phases are set to E 1 = 0.91, E 2 = 0.0001, E 3 = 1.82, E 4 = 0.0001. Here phase 2 and phase 4 represent void, while phase 3 represents a material that is twice as stiff as the material in phase 1. The Poisson ratio of each phase is set to ν = 0.3 and the volume constraints were set to V t 1 = 30% and V t 3 = 4%.
ijkl        1111    1122    2222
η ijkl      1       30      1
A H ijkl    0.12    -0.09   0.12
A t ijkl    0.1     -0.1    0.1
Table 1. Values of weights, final homogenized coefficients and target coefficients.
From figure 8 we observe that the volume constraint for the stiffer material is not adhered to. In this case the algorithm used roughly 16% of the stiffer material (Young modulus 1.82), while the volume constraint for the weaker material was more or less adhered to.
From figure 11 we observe that, again, the volume constraint for the stiffer material is not adhered to. In this case the algorithm used roughly 15% of the stiffer material (Young modulus 1.82), while the volume constraint for the weaker material was more or less adhered to.
Table 3. Values of weights, final homogenized coefficients and target coefficients.
Again, just as in the previous two examples, we observe that the volume constraint for the stiffer material is not adhered to, even though for this example an augmented Lagrangian was used. In this case the algorithm used roughly 20% of the stiffer material (Young modulus 1.82), while the volume constraint for the weaker material was more or less adhered to.
The fourth structure to be optimized is a multilevel material that attains an apparent Poisson ratio of -0.5. An augmented Lagrangian was used to enforce the volume constraints for this example as well. The Lagrange multiplier was updated the same way as before, as was the penalty parameter β. The Young moduli of the four phases are set to E 1 = 0.91, E 2 = 0.0001, E 3 = 1.82, E 4 = 0.0001. The Poisson ratio of each material is set to ν = 0.3; however, this time we require that the volume constraints be set to V t 1 = 53%.
Conclusions and Discussion
The problem of an optimal multi-layer micro-structure is considered. We use inverse homogenization, the Hadamard shape derivative and a level set method to track boundary changes, within the context of the smooth interface, in the periodic unit cell. We produce several examples of auxetic micro-structures with different volume constraints as well as different ways of enforcing the aforementioned constraints. The multi-layer interpretation suggests a particular way on how to approach the subject of 3D printing the micro-structures. The magenta material is essentially the cyan material layered twice producing a small extrusion with the process repeated several times. This multi-layer approach has the added benefit that some of the contact among the material parts is eliminated, thus allowing the structure to be further compressed than if the material was in the same plane.
The algorithm used does not allow "nucleations" (see [START_REF] Allaire | Structural optimization using sensitivity analysis and a level set method[END_REF], [START_REF] Wang | Level-set method for design of multi-phase elastic and thermoelastic materials[END_REF]). Moreover, due to the non-uniqueness of the design, the numerical results depend on the initial guess. Furthermore, the volume constraints also play a role in the final form of the design.
The results in this work are in the process of being physically realized and tested both for polymer and metal structures. The additive manufacturing itself introduces further constraints into the design process which need to be accounted for in the algorithm if one wishes to produce composite structures.
Figure 1. A 3D printed material with all four branches on the same plane achieving an apparent Poisson ratio of -0.8 with over 20% strain. On sub-figure (a) is the uncompressed image and on sub-figure (b) is the image under compression. Used with permission from [18].
Figure 2. A 3D printed material with two of the branches on a different plane achieving an apparent Poisson ratio of approximately -1.0 with over 40% strain. Sub-figure (a) is the uncompressed image and sub-figure (b) is the image under compression. Used with permission from [18].
Figure 4. Perturbation of a domain in the direction θ.
Figure 5. Representation of different material in the unit cell for d = 2.
Figure 6. The design process of the material at different iteration steps. Young modulus of 1.82, Young modulus of 0.91, void.
Figure 7. On the left we have the unit cell and on the right we have the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio -1.
Figure 8.
Figure 9. The design process of the material at different iteration steps. Young modulus of 1.82, Young modulus of 0.91, void.
Figure 10. On the left we have the unit cell and on the right we have the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio -1.
Figure 11.
Figure 12. The design process of the material at different iteration steps. Young modulus of 1.82, Young modulus of 0.91, void.
Figure 13. On the left we have the unit cell and on the right we have the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio -0.5.
Figure 14.
Figure 15. The design process of the material at different iteration steps. Young modulus of 1.82, Young modulus of 0.91, void.
Figure 16. On the left we have the unit cell and on the right we have the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio -0.5.
Figure 17.
The Poisson ratio of each material is set to ν = 0.3; however, this time we require that the volume constraints be set to V t 1 = 33% and V t 3 = 1%.
ijkl        1111    1122    2222
η ijkl      1       30      1
A H ijkl    0.11    -0.09   0.12
A t ijkl    0.1     -0.1    0.1
Table 2. Values of weights, final homogenized coefficients and target coefficients.
Table 4. Values of weights, final homogenized coefficients and target coefficients.
Acknowledgments
This research was initiated during the sabbatical stay of A.C. in the group of Prof. Chiara Daraio at ETH, under the mobility grant DGA-ERE (2015 60 0009). Funding for this research was provided by the grant "MechNanoTruss", Agence National pour la Recherche, France (ANR-15-CE29-0024-01). The authors would like to thank the group of Prof. Chiara Daraio for the fruitful discussions. The authors are indebted to Grégoire Allaire and Georgios Michailidis for their help and fruitful discussions as well as to Pierre Rousseau who printed and tested the material in figure 1 & figure 2. | 35,214 | [
"736958",
"5596"
] | [
"1167",
"1167"
] |
01765261 | en | [
"shs",
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01765261/file/459826_1_En_15_Chapter.pdf | Jolita Ralyté
email: [email protected]
Michel Léonard
email: [email protected]
Evolution Models for Information Systems Evolution Steering
Keywords: Information Systems Evolution, IS evolution steering, IS evolution structure, IS evolution lifecycle, IS evolution impact
Sustainability of enterprise Information Systems (ISs) largely depends on the quality of their evolution process and the ability of the IS evolution steering officers to deal with complex IS evolution situations. Inspired by Olivé [1] who promotes conceptual schema-centric IS development, we argue that conceptual models should also be the centre of IS evolution steering. For this purpose we have developed a conceptual framework for IS evolution steering that contains several interrelated models. In this paper we present a part of this framework dedicated to the operationalization of IS evolution -the evolution metamodel. This metamodel is composed of two interrelated views, namely structural and lifecycle, that allow to define respectively the structure of a particular IS evolution and its behaviour at different levels of granularity.
Introduction
No matter the type and the size of the organization (public or private, big or small), sustainability of its Information Systems (ISs) is of prime importance to ensure its activity and prosperity. Sustainability of ISs largely depends on the quality of their evolution process and the ability of the officers handling it to deal with complex and uncertain IS evolution situations. There are several factors that make these situations complex, such as: proliferation of ISs in the organization and their overlap, independent evolution of each IS, non-existence of tools supporting IS evolution steering, various IS dimensions to be taken into account, etc. Indeed, during an IS evolution not only its information dimension (the structure, availability and integrity of data) is at stake. IS evolution officers have also to pay attention to its activity dimension (the changes in enterprise business activity supported be the IS), the regulatory dimension (the guarantee of IS compliance with enterprise regulation policies), and the technology dimension (the implementation and integration aspects).
In this context, we claim that there is a need for an informational engineering approach supporting IS evolution steering, making it possible to obtain all the necessary information for an IS evolution at hand, to define and plan the evolution, and to assess its impact on the organization and its ISs. We ground the development of such an approach in conceptual modelling by designing a conceptual framework for IS evolution steering. Some parts of this framework were presented in [START_REF] Opprecht | Towards a framework for enterprise information system evolution steering[END_REF] and [START_REF] Ralyté | Defining the responsibility space for the information systems evolution steering[END_REF]. In this paper we pursue our work and present one of its components -the operationalization of the IS evolution through the metamodel of IS Evolution.
The rest of the paper is organized as follows: in section 2 we overview the context of our work -the conceptual framework that we are developing to support the IS evolution steering. Then, in section 3, we discuss the role and principles of conceptual modelling in handling IS evolution. In sections 4 and 5 we present our metamodel formalizing the IS evolution, its structural and lifecycle views, and illustrate their usage in section 6. Section 7 concludes the paper.
Context: A Framework for IS Evolution Steering
With our conceptual framework for IS evolution steering we aim to face the following challenges: 1) steering the IS evolution requires a thorough understanding of the underpinning IS domain, 2) the impact of IS evolution is difficult to predict and the simulation could help to take evolution decisions, 3) the complexity of IS evolution is due to the multiple dimensions (i.e. activity, regulation, information, technology) to be taken into account, and 4) the guidance for IS evolution steering is almost non-existent, and therefore needs to be developed. As shown in Fig. 1, the framework contains several components each of them taking into account a particular aspect of IS evolution steering and considering the evolution challenges listed above. Let us briefly introduce these components.
The IS Steering Metamodel (IS-SM) is the main component of the framework; its role is to represent the IS domain of an enterprise. Concretely, it makes it possible to formalize the way the enterprise ISs are implemented (their structure in terms of classes, operations, integrity rules, etc.), the way they support enterprise business and management activities (the definition of enterprise units, activities, positions, business rules, etc.), and how they comply with regulations governing these activities (the definition of regulatory concepts, rules and roles). Although IS-SM is not the main subject of this paper (it was presented in [2][3]), it remains the bedrock of the framework, and a brief presentation of it is necessary for a better understanding of the other models and illustrations. IS-SM is also the kernel model for implementing an Informational Steering Information System -ISIS. ISIS is a meta-IS for steering enterprise ISs (an IS upon ISs according to [START_REF] Dinh | Towards a New Infrastructure Supporting Interoperability of Information Systems in Development: the Information System upon Information Systems[END_REF]). While enterprise ISs operate at the business level, ISIS performs at the IS steering level. Therefore, we depict IS-SM in Fig. 2 mainly to make this paper self-explanatory, and we invite the reader to look at [5] for further details. The role of the Evolution Metamodel is to specify IS changes, and to assist the IS steering actor responsible for performing these changes. This metamodel comprises two interrelated views: structural and lifecycle. While the former deals with the extent and complexity of the IS evolution, the latter supports its planning and execution. The Evolution Metamodel is the main subject of this paper, and is detailed in the following sections.
The Impact Space component provides mechanisms to measure the impact of IS changes on the enterprise IS, on the business activities supported by these ISs, and on the compliance with regulations governing enterprise activities. The impact model of a particular IS evolution is defined as a part of the IS-SM including the IS-SM elements that are directly or indirectly concerned by this evolution. An IS-SM element is directly concerned by the evolution if its instances undergo modifications, i.e. one or more instances of this element are created, enabled, disabled, modified, or deleted. An IS-SM element is indirectly concerned by the evolution if there is no modification on its instances but they have to be known to make appropriate decisions when executing the evolution.
The Responsibility Space (Ispace/Rspace) component [START_REF] Ralyté | Defining the responsibility space for the information systems evolution steering[END_REF] helps to deal with responsibility issues related to a particular IS evolution. Indeed, each IS change usually concerns one or several IS actors (i.e. IS users) by transforming their information and/or regulation spaces (Ispace/Rspace). An IS actor can see her information/regulation space be reduced (e.g. some information is not accessible anymore) or, on the contrary, increased (e.g. new information is available, new actions have to be performed, new regulations have to be observed). In both cases the responsibility of the IS actor over these spaces is at stake. The Ispace/Rspace model is defined as a part of IS-SM. It allows for each IS evolution to create subsets of information, extracted from ISIS, that inform the IS steering officer how this evolution affects the responsibility of IS users.
Finally, the Evolution Steering Method provides guidelines to use all these aforementioned models when executing an IS evolution.
3 Modelling IS Evolution: Background and Principles
Background
Most of the approaches dealing with IS and software evolution are based on models and metamodels (e.g. [START_REF] Pons | Model evolution and system evolution[END_REF][START_REF] Burger | A change metamodel for the evolution of mof-based metamodels[END_REF][START_REF] Aboulsamh | Towards a model-driven approach to information system evolution[END_REF][START_REF] Kchaou | A mof-based change meta-model[END_REF][START_REF] Ruiz Carmona | TraceME: Traceability-based Method for Conceptual Model Evolution[END_REF]). They mainly address the structural aspects of IS evolution (for example, changing a hierarchy of classes, adding a new class) [START_REF] Pons | Model evolution and system evolution[END_REF], model evolution and transformations [START_REF] Burger | A change metamodel for the evolution of mof-based metamodels[END_REF], and the traceability of changes [START_REF] Kchaou | A mof-based change meta-model[END_REF][START_REF] Ruiz Carmona | TraceME: Traceability-based Method for Conceptual Model Evolution[END_REF]. They aim to support model-driven IS development, the automation of data migration, the evaluation of the impact of metamodel changes on models, the development of forward-, reverse-, and re-engineering techniques, the recording of models history, etc. The importance and impact of model evolution is also studied in [START_REF] Lehman | Software Evolution[END_REF] where the authors stress that understanding and handling IS evolution requires models, model evolution techniques, metrics to measure model changes and guidelines for taking decisions.
In our work, we also claim that the purpose of conceptual modelling in IS evolution steering is manifold: it includes understanding, building, deciding and realising the intended IS changes. As per [START_REF] Lehman | Evolution as a noun and evolution as a verb[END_REF], the notion of IS evolution has to be considered as a noun and as a verb. As a noun it refers to the question "what" -the understanding of the IS evolution phenomenon and its properties. As a verb, it refers to the question "how" -the theories, languages, activities and tools which are required to evolve software. Our metamodel for IS evolution steering (see Fig. 1) includes two complementary views, namely the structural and lifecycle views, and so serves to cope with complex IS artefacts, which usually have multiple views.
Models are also known to be a good support for decision making. In the case of IS evolution, there are usually several possible ways to realise it, each of them having a different impact on the enterprise ISs and even on its activities. Taking a decision without any appropriate support can be a difficult and very stressful task. Finally, with a set of models, the realisation of IS evolution is assisted at each evolution step and in each IS dimension.
Principles of IS Evolution
The focus of the IS evolution is to transform a current IS schema (ASIS-IS) into a new one (TOBE-IS), and to transfer ASIS-IS objects into TOBE-IS objects. We use ISIS (see the definition in section 2), whose conceptual schema is represented by IS-SM (Fig. 2), as a support to handle IS evolution. Indeed, ISIS provides a thorough, substantial information on the IS structure and usage, which, combined with other information outside of ISIS, is crucial to decide the IS evolution to pursue. Furthermore, ISIS is the centre of the management and the execution of the IS evolution processes both at the organizational and informatics levels. So, one main principle of IS evolution is always to consider these two interrelated levels: the ISIS and IS levels with their horizontal effects concerning only one level, and their vertical effects concerning both levels. In the following, to make a clear distinction between the IS and ISIS levels, we use the concepts of "class" and "object" at the IS schema level, and "element" and "instance" at the ISIS schema level.
IS evolution is a composition of transformation operations, where the simplest ones are called atomic evolution primitives. Obtaining an initial list of atomic evolution primitives for an IS and its ISIS is simple: we have to consider all the elements of the ISIS schema, and, for each of them, all the primitives usually defined over an element: Search, Create, Read, Update, Delete (SCRUD). In the case of IS-SM as ISIS schema, there are 53 elements and so 265 atomic evolution primitives. Since the aim of the paper is to present the principles of our framework for IS evolution steering, we simplify this situation by considering only the most difficult primitives, Create and Delete. Nevertheless, there are still 106 primitives to be considered.
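As a rough sketch of how mechanical this enumeration is (the element names below are only a small hypothetical subset of the 53 IS-SM elements):

```python
from itertools import product

# Hypothetical subset of IS-SM elements; the full metamodel defines 53 of them.
ELEMENTS = ["Class", "Operation", "IntegrityRule", "Role", "Activity"]

# Search, Create, Read, Update, Delete (SCRUD); the evolution process additionally
# relies on Enable, Disable, Block and Unblock, discussed below.
SCRUD = ["Search", "Create", "Read", "Update", "Delete"]

# One atomic evolution primitive per (operation, element) pair.
atomic_primitives = [f"{op} {el}" for op, el in product(SCRUD, ELEMENTS)]
print(len(atomic_primitives))   # 25 here; 5 x 53 = 265 over the full IS-SM
```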
Structural View of IS Evolution
An IS evolution transforms a part of the ASIS-IS ISP into ISP', which is a part of the TOBE-IS, in a way that the TOBE-IS is compliant with:
-the horizontal perspective: the instances of the new ISIS and the objects of TOBE-IS validate all the integrity rules defined respectively over ISIS and TOBE-IS; -the vertical perspective: the TOBE-IS objects are compliant with the instances of the new ISIS.
In a generic way, we consider that an overall IS evolution needs to be decomposed into several IS evolutions, and so the role of the structural view of the IS evolution model (shown in Fig. 3) consists in establishing the schema of each IS evolution as a composition of evolution primitives defined over IS-SM to pursue the undertaken IS evolution.
An evolution primitive represents a kind of elementary particle of an evolution: we cannot split it into several parts without losing qualities in terms of manageable changes and effects, robustness, smartness and performance, introduced in the following paragraphs. The most basic evolution primitives are the atomic evolution primitives: some of them, like Create, Delete and Update, are classic, while the others, Enable, Disable, Block and Unblock, are crucial for the evolution process.
Atomic Evolution Primitives
Since the ISIS schema (i.e. IS-SM) is built only by means of existential dependencies 1 , the starting point of the IS evolution decomposition is very simple -it consists of a list of atomic primitives: create, delete, update, enable, disable, block and unblock an instance of any IS-SM element. We apply the same principle at the IS level, so the IS schema steered by ISIS is also built by using only existential dependencies. Moreover, an instance/object is existentially dependent on its element/class.
These atomic primitives determine a set of possible states that any ISIS instance (as well as IS object) could have, namely created, enabled, blocked, disabled, and deleted. Fig. 4 provides the generic life cycle of an instance/object.
Once an instance is created, it must be prepared to be enabled, and so to be usable at the IS level. For example, a created class can be enabled, and so have objects at the IS level, only if its methods validate all the integrity rules whose contexts contain it. A created instance can be deleted if it belongs to a stopped evolution. Enabled instances are disabled by an evolution when they do not play any role in the targeted TOBE-IS. They are not deleted immediately for two reasons: the first one concerns the fact that data, operations, or rules related to them, which were valid before the evolution, still stay consistent for situations where continuity is mandatory, for instance due to contracts. The second one concerns the evolution itself: if it fails, it is necessary to come back to the ASIS-IS, and so to enable again the disabled instances.
Enabled instances are blocked during a phase of an evolution process when it is necessary to avoid their possible uses at the IS level through objects and/or execution of operations. At the end of this phase they are unblocked (re-enabled). For instance, an activity (an instance of the element Activity) can be blocked temporarily because of the introduction of a new business rule. Finally, when an instance is deleted, it disappears definitively.
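A minimal sketch of this lifecycle (Fig. 4) as a small state machine; the guard conditions (e.g. checking integrity rules before enabling, or checking that the evolution is stopped before deleting a created instance) are only hinted at in the comments:

```python
class InstanceLifecycle:
    """Sketch of the generic lifecycle of an ISIS instance (Fig. 4)."""

    # allowed transitions: state -> {primitive: next state}
    TRANSITIONS = {
        "created":  {"enable": "enabled",
                     "delete": "deleted"},      # deletion allowed only if the evolution is stopped
        "enabled":  {"block": "blocked",
                     "disable": "disabled"},
        "blocked":  {"unblock": "enabled"},
        "disabled": {"enable": "enabled",       # rollback to the ASIS-IS if the evolution fails
                     "delete": "deleted"},
        "deleted":  {},                         # final state: the instance disappears definitively
    }

    def __init__(self):
        self.state = "created"

    def apply(self, primitive):
        allowed = self.TRANSITIONS[self.state]
        if primitive not in allowed:
            raise ValueError(f"'{primitive}' is not allowed in state '{self.state}'")
        self.state = allowed[primitive]
        return self.state


life = InstanceLifecycle()
for p in ["enable", "block", "unblock", "disable", "delete"]:
    print(p, "->", life.apply(p))
```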
Robust Generic Atomic Evolution Rules
The generic atomic evolution rules must be validated to ensure the consistency of the evolution process. Indeed, each atomic evolution primitive has effects on elements other than its targeted elements. For example, deleting an integrity rule has effects on the methods of several classes belonging to the context of this integrity rule. Below, we present two kinds of generic atomic evolution rules: the first concerns the horizontal and vertical evolution effects, while the second deals with the dynamic effects.
Evolution Effects Horizontally and Vertically. An evolution primitive is firstly an atomic operation on the ISIS. So, it must verify the integrity rules defined over the IS-SM model to manage the horizontal effects. For example, if an instance cl of the element Class is deleted, then all the instances clo i of the element Class Operation related with cl must be deleted due to the existential dependency between these two elements (see Fig. 2).
An evolution primitive is also an operation on the IS and has to manage the vertical effects of the conformity rules between ISIS instances and IS objects. For example, deleting cl induces also deleting all its objects in the IS.
Then, since the evolution operations on IS are executed from ISIS, they validate the integrity rules defined over IS, which are instances of ISIS.
Generic Dynamic Evolution Rules. The generic evolution rules concern the states of the ISIS elements produced by the use of atomic evolution primitives (Fig. 4): created, enabled, blocked, disabled, deleted, and especially the interactions between instances of different elements in different states. They must be observed only at the ISIS level.
Some generic rules concerning the states "created" and "deleted" are derived directly from the existential dependencies. Considering the element Einf depending existentially on the element Esup, any instance of Einf may be in the state "created" only if its associated instance esup of Esup is in the state "created", and it must be in the state "deleted" if esup is in the state "deleted".
The generic rules concerning the states "blocked" and "disabled" require considering another relation between the IS-SM elements, called "determined by", defined at the conceptual level and not at the instance level. An element Esecond is strictly/weakly determined by the element Efirst if any instance esecond, to be exploitable in the IS, must/can be associated to one or several instances efirst.
Then there is the following generic dynamic rule: any instance esecond must be in the state disabled/blocked/deleted if at least one of its efirst is in the state respectively disabled/blocked/deleted.
For instance, the element Operation is strictly determined by the element Class, because any operation to be executed at the IS level must be associated to at least one class (see Fig. 2). Then, if an operation is associated to one class in the state disabled/blocked, it also must be in the state disabled/blocked, even if it is also associated to other enabled classes.
The element Integrity Rule is weakly determined by the element Business Rule because integrity rules are not necessarily associated with a business rule. In the same way, all elements, like Class, associated with the Regulatory Element are weakly determined by it, because their instances are not necessarily associated to an instance of Regulatory Element.
Considering the following elements of the IS-SM models (see Fig. 2): Person, Position, Business Process (BP), Activity, Business Rule (BR), Role, Operation, Class, Integrity Rule (IR), Regulatory Element (RE), here is the list of relations strictly determined by (=>): BP => Activity, BR => Activity, Operation => Class, Operation => IR, IR => Class. The list of the relations weakly determined by (->) (in addition to the aforementioned ones with the Regulatory Element) is: IR -> BR, IR -> RE, Class -> RE, Operation -> RE, Role -> RE, BR -> RE, Activity -> RE, Position -> RE, Event -> RE, BP -> RE.
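One possible way to operationalise this rule over ISIS instances is sketched below (the instance data are invented for illustration; the element-level relations are those listed above, e.g. Operation => Class, and the "worst state wins" generalisation is our reading of the rule):

```python
def propagate(states, determined_by):
    """states: {instance: state}; determined_by: {dependent: [determining instances]},
    built from the element-level relations above (e.g. Operation => Class).
    Rule: a dependent instance must be blocked/disabled/deleted as soon as one of
    its determining instances is (here generalised as "the worst state wins")."""
    severity = {"created": 0, "enabled": 0, "blocked": 1, "disabled": 2, "deleted": 3}
    for dependent, determinants in determined_by.items():
        worst = max((states[d] for d in determinants), key=severity.get, default=None)
        if worst is not None and severity[worst] > severity[states[dependent]]:
            states[dependent] = worst
    return states

# Hypothetical instances: operation op1 is associated with classes cl1 (enabled) and cl2 (blocked).
states = {"op1": "enabled", "cl1": "enabled", "cl2": "blocked"}
print(propagate(states, {"op1": ["cl1", "cl2"]}))   # op1 becomes 'blocked'
```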
Robustness. Every aforementioned evolution primitive is robust if it manages all its horizontal and vertical effects and respects all the generic dynamic evolution rules. The use of only existential dependencies at both levels, IS and ISIS, in our approach, facilitates reaching this quality. Nevertheless, at the IS level, such an approach requires that the whole IS schema (including static, dynamic and integrity rule perspectives) be easily evolvable, and that the IT system supporting the IS (e.g. a database management system) provide an efficient set of evolution primitives [START_REF] Andany | Management of schema evolution in databases[END_REF].
Composite Evolution Primitives
The composite primitives are built by composition of the atomic ones (Fig. 3). They are necessary to consider IS evolution at the management level [START_REF] Ralyté | Defining the responsibility space for the information systems evolution steering[END_REF], but also for informational and implementation purposes. For instance, replacing an integrity rule by a new one can be considered logically equivalent to deleting it and then creating the new one. But this logic is not pertinent if we consider the managerial, IS exploitation and implementation perspectives. It is much more efficient to build a composite evolution primitive "replace" from the atomic primitives "create" and "delete".
A composite evolution primitive is robust, if it manages all its horizontal and vertical effects and respects all the generic dynamic evolution rules.
Managerial Effects
The managerial effects consider the effects of the IS evolution at the human level, and so concern the IS-SM elements Role, Activity, Position and Person. The evolution steering officers have to be able to assess whether the proposed evolution has a harmful effect on the organization's activities or not, and to decide whether to continue this evolution. The evolution primitives are smart if they alert these levels by establishing a report of changes to all the concerned roles, activities, positions, and persons. To do that, they will use the responsibility space (Fig. 1) with its two sub-spaces: its informational space (Ispace) and its regulatory space (Rspace). This part was presented in [START_REF] Ralyté | Defining the responsibility space for the information systems evolution steering[END_REF]. In the remainder of the paper, all primitives are smart.
Lifecycle View of IS Evolution
Evolution of an information system is generally a delicate process for an enterprise, for several reasons. First, it cannot be realized by stopping the whole IS, because that would stop all the activities supported by the IS, which is unthinkable in most cases. Second, it has impacts, especially on actors and on the organization of activities. It can even induce the need to reorganize the enterprise. Third, it takes time and often requires setting up a process of adaptation to the changes for all concerned actors, to enable them to perform their activities. Moreover, it concerns a large informational space of IS-SM and needs to be decomposed into partial evolutions called sub-evolutions. So, it requires a coordination model to synchronize all the processes of these sub-evolutions as well as the process of the main evolution. Furthermore, it is a long process, with a large number of actors who work inside the evolution process or whose activities are changed by the evolution. Finally, most evolutions of ASIS-IS into TOBE-IS are nearly irreversible, because it is practically impossible to transform TOBE-IS back into ASIS-IS, for at least two main reasons: (1) some evolution primitives used by the evolution can be irreversible themselves (e.g. the case of an existing integrity rule relaxed by the evolution), and (2) actors, and even a part of the enterprise, can be completely disoriented by going back to ASIS-IS after all the efforts they have made to adapt to TOBE-IS. So, a decision to perform an evolution must be very well prepared to decrease the risks of failure. For this purpose, we explore a generic lifecycle of an evolution, first at the atomic primitive level, then at the composite primitive level and finally at the evolution level.
An atomic primitive can be performed stand-alone in two steps: (1) preparation and (2) execution or abort. They are defined as follows:
-Preparation: prepares the disabling list of ISIS instances and IS objects to be disabled in case of success, the creating list of ISIS instances and IS objects to be created in case of success, the list of reports of changes, and the blocking list of ISIS instances; -Execution: sends the reports of changes, blocks the concerned IS parts, effectively disables/creates the contents of the disabling/creating lists, then unblocks the blocked IS parts.
The work done at the preparation step serves to decide whether the primitive should be executed or aborted. Finally, the execution of the primitive can succeed or fail. For example, it fails if it cannot block an IS part. As an example let us consider the deletion of a role: its creating list is empty and its disabling list contains all the assignments of operations to this role. Blocking these assignments signifies these operations cannot be executed by means of this role during the deletion of the role. It can fail if an IS actor is working through this role.
In the case of the atomic evolution primitive "Create an instance Cl of Class", the preparation step defines:
-how to fill the new class with objects, -how to position it in the IS schema by linking Cl to other IS classes by means of existential dependencies; -how to alert the managers of Role, Operation and Integrity Rule about the Cl creation.
Besides, it is important to create together with Cl its methods and attributes, and even the association classes between Cl and other IS classes. For that, we need a more powerful concept, the composite evolution primitive, as presented below.
A composite primitive is composed of other composite or atomic primitives, which builds a hierarchy of primitives. The top composite primitive is at the root of this hierarchy; the atomic primitives are at its leaves. Every composite primitive has a supervision step, which controls the execution of all its subprimitives. Only the top composite primitive has in addition a coordination step, which takes the same decision to enable or abort for all its sub primitives in the hierarchy. The main steps of a composite primitive life cycle are:
-Preparation: creates all direct sub-primitives of the composite primitive; -Supervision: determines the impacts and the managerial effects from the enabling lists and the creating lists established by the sub-primitives; -Coordination: takes the decision of enabling or aborting primitive processing and transmits it to the sub-primitives; -Training: this is a special step for the top primitive; it concerns training of all actors concerned by the whole evolution. This step is performed thanks to the actors' responsibility spaces.
The top composite primitive is successful if all its sub-primitives are successful; it fails if at least one among its sub-primitives fails. The life cycle of the atomic primitives must be adapted by adding the abort decision and by taking into account that enable/abort decisions are made by a higher level primitive. Fig. 5 illustrates the co-ordination between the composite primitive lifecycle and its sub-primitive lifecycle.
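The following sketch condenses this coordination logic (every step is reduced to a boolean, the training step is omitted, and the class and method names are ours, not part of IS-SM):

```python
class AtomicPrimitive:
    def __init__(self, name):
        self.name = name
        self.prepared = False

    def prepare(self):
        # build the disabling, creating and blocking lists and the reports of changes
        self.prepared = True
        return True

    def execute(self):
        # send reports, block the concerned IS parts, disable/create, unblock;
        # may fail, e.g. if an IS part cannot be blocked
        return self.prepared


class CompositePrimitive:
    def __init__(self, name, sub_primitives):
        self.name = name
        self.subs = sub_primitives

    def prepare(self):
        # preparation step: create / prepare all direct sub-primitives
        return all(s.prepare() for s in self.subs)

    def supervise(self):
        # supervision step: assess impacts and managerial effects from the sub-primitives' lists
        return True

    def coordinate(self, decision):
        # coordination step (top primitive only): the same decision is propagated downwards
        if decision != "enable":
            return False                       # "abort" stops every sub-primitive
        results = [s.execute() if isinstance(s, AtomicPrimitive) else s.coordinate("enable")
                   for s in self.subs]
        return all(results)                    # success only if *all* sub-primitives succeed


top = CompositePrimitive("C-Create Class",
                         [AtomicPrimitive("Create Class"),
                          CompositePrimitive("C-Create Method",
                                             [AtomicPrimitive("Create Method")])])
top.prepare()
print(top.coordinate("enable"))                # True
```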
Thus, from the atomic evolution primitive "Create Class" we build the composite evolution primitive "C-Create Class" with the following sub-primitives:
-Create Class, which is used to create the intended class Cl and also all the new classes to associate Cl with other classes, as mentioned previously, -Create Class Concept, Create Class Attribute, -if necessary, Create Attribute, and Create Domain, -C-Create Method with its sub-primitives Create Method and Create Attribute Method.
Let us now look at the lifecycle of an entire evolution, which is a composition of primitives. During the processing of an evolution, the preparation step consists in selecting the list of composite primitives, whose processing will realize this evolution. Then, from the impacts and the managerial effects determined by the supervision steps of these composite primitives, the supervision step of the evolution determines a plan for processing these primitives. It decides which primitives can/must be executed in parallel and which in sequence. Next, the coordination step launches processing of primitives following the plan. After analyzing their results (success or failure), it decides to launch other primitives and/or to abort some of them. Finally, the evolution is finished and it is time to assess it. Indeed, the evolution processing transforms the enterprise and its ways of working, even if processing of some composite primitives fails. Due to the important complexity, it seems important to place the training step at the evolution level and not at the level of composite primitive. Of course, the training step of a composite primitive must be realized before its execution. But, in this way, it is possible to combine training steps of several composite primitives into one, and to obtain a more efficient training in the context of the evolution. Fig. 6 shows the coordination between the lifecycles of IS evolution and its top composite primitives.
Illustrating Example
To illustrate our approach, we use the example of a hospital. Fig. 7 depicts a small part of the kernel of its IS schema. In this example we will consider:
-one organizational unit: the general medicine department, -two positions: the doctor and the nurse, -two activities of a doctor: a 1 concerning the care of patients (visit, diagnostic, prescription) and a 2 concerning the management of the nurses working in her team. To illustrate an evolution case, let us suppose that now our hospital has to apply new rules for improving patients' safety. To this end, each doctor will be in charge of guaranteeing that the nurses of her team have sufficient competences for administering the drugs she can prescribe. So, the IS of the hospital must evolve, especially by introducing new classes: Nurse Drug, which associates a nurse with a drug for which she is competent according to her doctor, and Doctor Drug, which associates a doctor with a drug that she can prescribe. The TOBE-IS schema is shown in Fig. 8. The IS evolution is then composed of 2 top composite primitives, one around Doctor Drug (DD), the other one around Nurse Drug (ND). The first one is built from the composite primitive C-Create Class to create the instance DD of the ISIS element Class. Its preparation step specifies:
-the DD objects will be obtained from the ASIS-IS class Prescription; -DD will be existentially dependent on the IS classes Doctor and Drug, and Prescription will become existentially dependent on DD and no longer directly dependent on Drug;
-the alerts for Role, Activities, Positions, Persons about the changes, especially in the creation of an object of Prescription, which in TOBE-IS must be related to a DD object; -creation of DD objects, creation or modification of roles for reaching them; -the blocking list for its execution, which includes Doctor, Drug and Prescription.
The second composite primitive is built from the composite evolution primitive C-Create Class to create the instance ND of the ISIS element Class. Its preparation step specifies:
-the ND objects will be obtained from the ASIS-IS class Prescription; -ND will be existentially dependent on the IS classes Nurse and Drug, and Drug Delivery will become existentially dependent on ND; -the alerts for Role, Activities, Positions, Persons about the changes, especially in the creation of an object of Drug Delivery, which must be related to a ND object; -creation of ND objects, creation or modification of roles for reaching them; -the blocking list for its execution, which includes Nurse, Drug and Drug Delivery.
In the case of this example, the execution process of the IS evolution after the training of involved actors is simple: to execute the top evolution composite primitives related to Doctor Drug and then to Nurse Drug.
Conclusion
Handling information systems evolution is a complex task that has to be properly defined, planned and assessed before its actual execution. The result of each IS evolution has an impact on the sustainability of the organization's ISs and also on the efficiency of the organization's activity. So this task is not only complex but also critical.
In this paper, we continue to present our work on a conceptual framework for IS evolution steering that aims to establish the foundation for the development of an Informational Steering Information System (ISIS). In particular, we dedicate this paper to the engineering aspects of the concept of IS evolution, and present its metamodel, which is one of the components in our framework (Fig. 1).
The role of the IS Evolution Metamodel consists in supporting the operationalization of the IS evolution. Therefore, it includes two views: structural and lifecycle. The structural view allows to progressively decompose a complex IS evolution into a set of atomic primitives going through several granularity levels of composite primitives. The obtained primitives are robust because they follow generic evolution rules and take into account horizontal and vertical effects on ISIS and IS. They are also smart because they pay attention to the managerial effects of IS evolution at the human level. The lifecycle view helps to operate IS evolution at its different levels of granularity by providing a set of models and rules for progressing from one step to another.
To complete our framework for IS evolution steering we still need to define the Impact Space component that will provide mechanisms to measure the impact of IS evolution and to take decisions accordingly. With the IS Evolution Metamodel we have prepared the basis for developing the detailed guidance for IS evolution steering, which will complete our work on this conceptual framework.
Fig. 1. Conceptual Framework for IS Evolution Steering
Fig. 2. Simplified version of IS-SM. The right part (in white) shows the information model generic to any IS implementation, the left part (in red) represents enterprise business activity model, the top part (in grey) represents the regulatory model governing enterprise business and IS implementations. The multi-coloured elements represent pivot elements allowing to interconnect the information, activity and regulation models, and so, to capture how ISs support enterprise activities and comply with regulations.
Fig. 3. Structural view of the IS evolution
Fig. 4. Lifecycle of an instance of any element from IS-SM
Fig. 5. Coordination of the lifecycle of a composite primitive (left) with the lifecycles of its sub-primitives (right); * indicates multiple transitions, dashed lines indicate that the step is under the responsibility of the lower or upper level model.
Fig. 6. Coordination of the IS evolution lifecycle with the lifecycles of its top composite primitives
Fig. 7. A small part of the ASIS-IS schema of the hospital
Fig. 8. A part of the TOBE-IS schema of the hospital
A class C2 is existentially dependent on the class C1, if every object o2 of C2 is permanently associated to exactly one object o1 of C1; o2 is said to be existentially dependent on o1. The existential dependency is a transitive relation. One of its particular cases is the specialization. | 36,241 | [
"977600",
"977601"
] | [
"154620",
"154620"
] |
01765318 | en | [
"phys"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01765318/file/bbviv7c.pdf | Eduardo Dur Án Venegas
Stéphane Le Diz Ès
Christophe Eloy
A coupled model for flexible rotors
Rotors are present in various applications ranging from wind turbines to helicopters and propellers. The rotors are often made of flexible materials which implies that their geometry varies when the operational conditions change. The intrinsic difficulty of rotor modeling lies in the strong coupling between the flow generated by the rotor and the rotor itself that can deform under the action of the flow. In this talk, we propose a model where the strong coupling between the flexible rotor and its wake is taken into account. We are particularly interested in configurations where the general momentum theory [START_REF] Sørensen | General momentum theory for horizontal axis wind turbines[END_REF] cannot be used (for example, for helicopters in descent flight).
The wake is described by a generalized Joukowski model. We assume that, for each blade, it is formed of a bound vortex on the blade and two free vortices of opposite circulation and same core size a, emitted at the radial locations R i and R e (see figure 1). These parameters are computed from the circulation profile Γ(r) obtained on the blade by applying locally, at each radial location r, the 2D Kutta-Joukowski formula
Γ(r) = (1/2) C_L(α(r)) U(r) c(r),     (1)
where c(r) is the local chord, C L (α(r)) the lift coefficient of the chosen blade profile, α(r) the angle of attack of the flow, and U (r) the norm of the velocity. The vortex circulation Γ m is the maximum value of Γ(r), and the emission locations R i and R e are the radial distances of the centroid of ∂ r Γ on both sides of the maximum (see figure 1). The wake is computed using a free-vortex method [START_REF] Leishman | Principles of Helicopter Aerodynamics[END_REF]. Each vortex is discretized in small vortex segments for which the induced velocity can be explicitly obtained from the Biot-Savart law [START_REF] Saffman | Vortex Dynamics[END_REF]. We are considering helical wake structures that are stationary in the rotor frame. This frame is rotating at the rotor angular velocity Ω R and translating at a velocity V ∞ corresponding to an external axial wind. For a prescribed rotor of N blades, the wake structure is characterized by five non-dimensional parameters
λ = Ω_R R_b / V_∞,  η = Γ_m / (Ω_R R_b^2),  R_e^* = R_e / R_b,  R_i^* = R_i / R_b,  ε = a / R_b,     (2)
where R b is the blade length. The aerodynamic forces exerted on the blade are calculated using the blade element theory [START_REF] Leishman | Principles of Helicopter Aerodynamics[END_REF]. From the wake solution are deduced the angle of attack and the velocity amplitude at each radial location on the blade in the rotor plane. Then, the loads are deduced from the lift and drag coefficients C L and C D of the considered blade profile. The blade deformation is obtained using a ribbon model for the blade [START_REF] Dias | meet Kirchhoff: A general and unified description of elastic ribbons and thin rods[END_REF]. This 1D model is a beam model that allows to describe the nonlinear coupling between bending and torsion. In the simplest cases, we assume uniform elastic properties of the blades which are characterized by a Poisson ratio ν and a non-dimensional Young modulus
E^* = E / (ρ_b Ω_R^2 R_b^2), where ρ_b is the density of the blade.
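A possible numerical translation of this construction is sketched below (the circulation profile is made up; in practice Γ(r) comes from equation (1) evaluated with the flow induced by the coupled wake solution, and weighting ∂_rΓ by its absolute value on each side of the maximum is our reading of the centroid definition):

```python
import numpy as np

def wake_parameters(r, gamma):
    """Extract (Gamma_m, R_i, R_e) from a discretised circulation profile Gamma(r):
    Gamma_m is the maximum of Gamma, and R_i, R_e are the centroids of dGamma/dr
    (weighted by its magnitude) inboard and outboard of that maximum."""
    dgamma = np.gradient(gamma, r)
    i_max = int(np.argmax(gamma))
    gamma_m = gamma[i_max]

    def centroid(sl):
        w = np.abs(dgamma[sl])
        return np.sum(r[sl] * w) / np.sum(w)

    r_i = centroid(slice(0, i_max + 1))      # inner vortex emission radius
    r_e = centroid(slice(i_max, None))       # outer (tip) vortex emission radius
    return gamma_m, r_i, r_e

# Made-up smooth profile on a blade spanning r/R_b in [0.2, 1]; it stands in for
# Gamma(r) = 0.5 * C_L(alpha(r)) * U(r) * c(r) of equation (1).
r = np.linspace(0.2, 1.0, 200)
gamma = np.sin(np.pi * (r - 0.2) / 0.8)
print(wake_parameters(r, gamma))
```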
A typical example with a simple blade geometry is shown in figure 2. In these figures are shown both the case of a rigid rotor and of a flexible rotor for the same operational conditions (same V ∞ and same Ω R ). We do see the effect of blade flexibility. The blades do bend and twist in the presence of the flow. Moreover, this bending and twisting also affect the wake. When the blade bends, the vortices move streamwise and inward, which impacts the expansion of the wake. The vortex circulation is also slightly modified as η changes from 0.0218 to 0.0216 when the blades bend.
Other examples will be presented and compared to available data. The question of the stability will also be addressed. Both flow instabilities and instabilities associated with the blade flexibility will be discussed.
Figure 1: Generalized Joukowski model. The parameters (Γ m , R i and R e ) of the model are computed from the circulation profile Γ(r) on the blade as explained in the text.
Figure 2: Illustration of the effect of blade flexibility on the wake structure and blade geometry. Dashed lines: wake and blades for the rigid case. Solid lines: wake and blades for the flexible case. The undeformed blade is as illustrated in figure 1: it is a flat plate with a constant twist angle θ = -10° and a linearly decreasing chord from c(r = 0.2R_b) = 0.1R_b to c(r = R_b) = 0.07R_b. The wake parameters of the rigid rotor are λ = 6.67, η = 0.0218, R_e^* = 0.99, R_i^* = 0.24, ε = 0.01. The flexible blades have the characteristics: E^* = 10^6, ν = 0.5. (a) 3D geometry of the rotor and of the wake. Only the deformation and the vortices emitted from a single blade are shown. (b) Locations of the vortices in the plane including a blade and the rotor axis. (c) Twist angle of the blade. (d) Bending of the blade.
"8388",
"7678"
] | [
"196526",
"196526",
"196526"
] |
01765340 | en | [
"shs"
] | 2024/03/05 22:32:13 | 2003 | https://insep.hal.science//hal-01765340/file/160-%20Drafting%20during%20swimming%20improves.pdf | Anne Delextrat
Véronique Tricot
Thierry Bernard
Fabrice Vercruyssen
Christophe Hausswirth
Jeanick Brisswalter
email: [email protected].
Pr Jeanick Brisswalter
Drafting during Swimming Improves Efficiency during Subsequent Cycling
Keywords: TRIATHLETES, HYDRODYNAMIC DRAG, OXYGEN KINETICS, HEMODYNAMICS, CADENCE
triathlon determinants highlighted that the metabolic demand induced by swimming could have detrimental effects on subsequent cycling or running adaptations (e.g., 3).
Experimental studies on the effect of prior swimming on subsequent cycling performance have led to contradictory results. Kreider et al. [START_REF] Kreider | Cardiovascular and thermal responses of triathlon performance[END_REF] have found that an 800-m swimming bout resulted in a significant decrease in power output (17%) during a subsequent 75-min cycling exercise. More recently, Delextrat et al. [START_REF] Delextrat | Effect of wet suit use on energy expenditure during a swim-to-cycle transition[END_REF] have observed a significant decrease in cycling efficiency (17.5%) after a 750-m swim conducted at a sprint triathlon competition pace when compared with an isolated cycling bout. In contrast, Laursen et al. [START_REF] Laursen | The effects of 3000-m swimming on subsequent 3-h cycling performance: implications for ultraendurance triathletes[END_REF] indicated no significant effect of a 3000-m swim performed at a long-distance triathlon competition pace on physiological parameters measured during a subsequent cycling bout. It is therefore suggested that the swimming section could negatively affect the subsequent cycling, especially during sprint triathlon, where the intensity of the swim is higher than during longdistance events.
Within this framework, we showed in a recent study [START_REF] Delextrat | Effect of wet suit use on energy expenditure during a swim-to-cycle transition[END_REF] that decreasing the metabolic load during a 750-m swim by using a wet suit resulted in a 11% decrease in swimming heart rate (HR) values and led to a 12% improvement in efficiency during a subsequent 10-min cycling exercise, when compared with swimming without a wet suit. The lower relative intensity when swimming with a wet suit is classically explained by a decrease in hydrodynamic drag. This decrease in hydrodynamic drag results from an increased buoyancy that allows the subjects to adopt a more horizontal position, thus reducing their frontal area [START_REF] Chatard | Effects of wetsuit use in swimming events[END_REF].
During swimming, hydrodynamic drag could also be reduced when swimming in a drafting position (i.e., swimming directly behind another competitor). The effects of drafting during short swimming bouts have been widely studied in the recent literature [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF][START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Chollet | The effects of drafting on stroking variations during swimming in elite male triathletes[END_REF][START_REF] Millet | Effects of drafting behind a two-or a six-beat kick swimmer in elite female triathletes[END_REF]. The main factor of decreased body drag with drafting seems to be the depression made in the water by the lead swimmer [START_REF] Bentley | Specific aspects of contemporary triathlon[END_REF]. This low pressure behind the lead swimmer decreases the pressure gradient from the front to the back of the following swimmer, hence facilitating his displacement through the water [START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF]. Within this framework, significant decreases in passive drag (i.e., drag forces exerted on subjects passively towed through the water in prone position [START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF]) from 10% to 26% have been reported in a drafting position compared with isolated conditions (for review, (3)). Moreover, swimming in drafting position is associated with significant reductions in oxygen uptake (10%), HR (6.2%), and blood lactate concentration (11-31%) [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF][START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF].
During a multidisciplinary event, such as triathlon, the effect of drafting on subsequent performance has been studied only during the cycling leg. Hausswirth et al. [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF] showed that drafting during the cycle portion of a sprint triathlon led to a significant decrease in cycling energy expenditure (14%) compared with an individual effort, leading to a 4.1% improvement in performance during the subsequent 5-km run. To the best of our knowledge, no similar study has been conducted during a swim-bike trial, in order to evaluate the effects of drafting during swimming on subsequent cycling performance
The objective of the present study was therefore to investigate the effects of drafting during swimming on energy expenditure in the context of a swim-bike trial. We hypothesized that swimming in drafting position would be associated with a lower metabolic load during swimming and would reduce energy expenditure during subsequent cycling.
MATERIALS AND METHODS Subjects
Eight male triathletes competing at interregional or national level (age: 26 ± 6 yr, height: 183 ± 7 cm, weight: 74 ± 7 kg, body fat: 13 ± 3%) participated in this study. They were all familiarized with laboratory testing. Average training distances per week were 6.6 km in swimming, 59 km in cycling, and 34 km in running, which represented 150 min, 135 min, and 169 min for these three disciplines, respectively. This training program included only one cross-training session (cycle-to-run) per week. The low distance covered by the triathletes during training, especially in cycling, could be partly explained by the fact that the experiment was undertaken in winter, when triathletes usually decrease their training load in the three disciplines. Written consent was given by all the subjects before all testing and the ethics committee for the protection of individuals gave their approval of the project before its initiation (Saint-Germain-en-Laye, France).
Protocol
Maximal oxygen uptake (VO 2max ) and maximal aerobic power (MAP) determinations. The first test was a laboratory incremental test on a cycle ergometer to determine VO 2max and MAP. After a 6-min warm-up at 150 W, the power output was increased by 25 W every 2 min until volitional exhaustion. The criteria used for the determination of VO 2max were: a plateau in VO 2 despite the increase in power output, a HR over 90% of the predicted maximal HR, and a respiratory exchange ratio (RER) over 1.15 [START_REF] Howley | Criteria for maximal oxygen uptake: review and commentary[END_REF]. Because VO 2max was used as a simple descriptive characteristic of the population for the present study and was not a primary dependent variable, the attainment of two out of three criteria was considered sufficient [START_REF] Howley | Criteria for maximal oxygen uptake: review and commentary[END_REF]. The ventilatory threshold (VT) was calculated using the criteria of an increase in VE/VO 2 with no concomitant increase in VE/VCO 2 [START_REF] Wasserman | Anaerobic threshold and respiratory gas exchange during exercise[END_REF].
Submaximal sessions. After this first test, each triathlete underwent three submaximal sessions separated by at least 48 h. The experimental protocol is described in Figure 1. All swim tests took place in the outdoor Olympic swimming pool of Hyères (Var, France) and were performed with a neoprene wet suit (integral wet suit Aquaman ® , Pulsar 2000, thickness: shoulders: 1.5 mm, trunk: 4.5 mm, legs: 1.5 mm, arms: 1.5 mm) The cycling tests were conducted adjacent to the swimming pool in order to standardize the duration of the swim-to-cycle transition (3 min). The first test was always a 750-m swim performed alone at a sprint triathlon competition pace (SA trial). It was used to determine the swimming intensity for each subject. The two other tests, presented in a counterbalanced order, comprised one swim-to-cycle transition performed alone (SAC trial) and one swim-to-cycle transition with a swimming bout performed in drafting position (SDC trial).
The SAC trial consisted of a 750-m swim at the pace adopted during SA, followed by a 15-min ride on the bicycle ergometer at 75% of MAP and at a freely chosen cadence (FCC). This intensity was chosen to be comparable with the cycling competition pace during a sprint triathlon reported in subjects of the same level by previous studies (e.g., [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF]). Moreover, it was similar to those used in recent works studying the cycle-to-run transition in trained triathletes (e.g., 27). During the SDC trial, the subjects swam 750 m in drafting position (i.e., swimming directly behind a competitive swimmer in the same lane) at the pace adopted during SA. They then performed the 15-min ride at the same intensity as during SAC. The lead swimmer, who was used for ail the triathletes, was a highly trained swimmer competing at international level. To reproduce the swimming pace adopted during SA, the lead swimmer was informed of his performance every 50 m via visual feedback.
Measured Parameters
Swimming trials. During each swimming trial, the time to cover each 50 m and overall time were recorded. Subjects were instructed to keep the velocity as constant as possible. Stroke frequency (SF), expressed as the number of complete arm cycles per minute, was measured for each 50 m on a 20-m zone situated in the middle of the pool. The stroke length (SL) was calculated by dividing the mean velocity of each 20-m swim by the mean SF of each 20-m swim.
Immediately after each trial, the triathletes were asked to report their perceived exertion (RPE) using the 15-graded Borg scale (from 6 to 20 [START_REF] Borg | Perceived exertion as an indicator of somatic stress[END_REF]).
Blood sampling. Capillary blood samples were collected from subjects' earlobes at the following times: 1 and 3 min after swimming (L1, L2), and at the third and 15th minutes of cycling (L3, L4). Blood lactate concentration (LA, mmol.L -1 ) was then measured by the lactate Pro TM LT-1710 portable lactate analyzer (Arkray, KDK, Japan). The highest of the two postswim (L1, L2) concentrations was considered as the postswim lactate value, because the time delay for lactate to diffuse from the muscles to the blood during the recovery from a swimming exercise has not been precisely established [START_REF] Lepers | Swimming-cycling transition modelisation of a triathlon in laboratory. Influence on lactate kinetics[END_REF].
Measurement of respiratory gas exchange. During the cycling trials, oxygen uptake (VO 2 ), HR, and respiratory parameters (expiratory flow: VE; respiratory frequency: RF) were monitored breath-by-breath and recorded by the Cosmed K4b 2 telemetric system (Rome, Italy).
HR was continuously monitored during swimming and cycling using a cardiofrequency meter (Polar Vantage, Kempele, Finland). Physiological solicitation of cycling was assessed using oxygen kinetics analysis (e.g., 29); energy expenditure was analyzed by gross efficiency calculation [START_REF] Chavarren | Cycling efficiency and pedalling frequency in road cyclists[END_REF].
Curve fitting. Oxygen kinetics were modeled according to the method used by Barstow et al. [START_REF] Barstow | Influence of muscle fiber type and pedal frequency on oxygen uptake kinetics of heavy exercise[END_REF]. Breath-by-breath VO 2 data were smoothed in order to eliminate the outlying breaths (defined as those that were lying outside two standard deviations of the local mean). For each trial (SDC and SAC), the time course of the VO 2 response after the onset of the cycling exercise was described by two different exponential models that were fit to the data with the use of nonlinear regression techniques in which minimizing the sum of squared error was the criterion for convergence.
The first mathematical model was a mono-component exponential model:
VO 2 (t) = VO 2 (b) + A × (1 - e^-(t-TD)/τ) for t ≥ TD.
The second mathematical model was a two-component exponential model:
VO 2 (t) = VO 2 (b) + A 1 × (1 - e^-(t-TD 1 )/τ 1 ) + A 2 × (1 - e^-(t-TD 2 )/τ 2 ), where the second term applies for t ≥ TD 2 .
The use of one of these models depends on the relative exercise intensity [START_REF] Xu | Oxygen uptake kinetics during exercise[END_REF]. The mono-component exponential model characterizes the VO 2 response during an exercise of moderate intensity (i.e., below the lactate threshold). After a time delay corresponding to the transit time of blood flow from the exercising muscle to the lung (TD), VO 2 increases exponentially toward a steady state level. The VO 2 response is characterized by an asymptotic amplitude (A) and a time constant (T) defined as the time to reach 63% of the difference from final plateau value and baseline (V0 2 (b), corresponding to the value recorded on the bicycle before the onset of cycling). At higher intensities, the VO 2 response is modeled by a two-component exponential function. The first exponential term describes the rapid rise in VO 2 previously observed (the parameters TD 1 , A 1 , and τ 1 are identical to TD, A, and T of the monocomponent exponential model), whereas the second exponential term characterizes the slower rise in VO 2 termed "VO 2 slow component" that is superimposed on the rapid phase of oxygen uptake kinetics.
The parameters TD 2 , A2, and τ 2 represent, respectively, the time delay, asymptotic amplitude, and time constant for this exponential term. The computation of best-fit parameters was chosen by a computer program (SigmaPlot 7.0) so as to minimize the sum of the squared differences between the fitted function and the observed response.
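For illustration, the mono-component fit could be reproduced along the following lines (a sketch on synthetic breath-by-breath data; the authors used SigmaPlot 7.0, and the noise level, units and starting values here are arbitrary):

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(t, vo2_b, A, TD, tau):
    """VO2(t) = VO2(b) + A * (1 - exp(-(t - TD)/tau)) for t >= TD, baseline before."""
    return vo2_b + A * (1.0 - np.exp(-(t - TD) / tau)) * (t >= TD)

# synthetic breath-by-breath data (time in s, VO2 in L/min) with added noise
np.random.seed(0)
t = np.linspace(0, 300, 300)
vo2 = mono_exponential(t, 0.8, 2.4, 15.0, 20.0) + np.random.normal(0, 0.05, t.size)

# least-squares fit minimizing the sum of squared differences
popt, _ = curve_fit(mono_exponential, t, vo2, p0=[0.8, 2.0, 10.0, 25.0])
vo2_b, A, TD, tau = popt
print(f"baseline={vo2_b:.2f} L/min, amplitude={A:.2f} L/min, TD={TD:.1f} s, tau={tau:.1f} s")
```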
Determination of cycling gross efficiency. Cycling gross efficiency (GE, %) was calculated as the ratio of work accomplished per minute (kJ•min -1 ) to metabolic energy expended per minute (kJ•min -1 ). Because the relative intensity of the cycling bouts could be above VT, the aerobic contribution to metabolic energy was calculated from the energy equivalents for oxygen (according to the respiratory exchange ratio value) and a possible anaerobic contribution was estimated using the blood lactate increase with time (Δ lactate: 63 J•kg -1 •mM -1 ; 13). For this calculation, the VO 2 and lactate increase was estimated from the difference between the 15th and the third minutes.
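A back-of-the-envelope version of this computation is sketched below (the oxygen energy equivalent as a function of RER is a rough linear interpolation of standard tables, and the numerical values in the example are purely illustrative):

```python
def gross_efficiency(power_w, vo2_lmin, rer, delta_lactate_mM, body_mass_kg, dt_min=12.0):
    """Gross efficiency (%) = mechanical work rate / metabolic rate.
    Aerobic energy from VO2 and the energy equivalent of O2 (roughly 19.6 kJ/L at
    RER = 0.7 up to about 21.1 kJ/L at RER = 1.0); anaerobic estimate from the blood
    lactate increase (63 J per kg body mass per mM, spread over the interval)."""
    rer_clamped = min(max(rer, 0.7), 1.0)
    kj_per_l_o2 = 19.6 + 5.0 * (rer_clamped - 0.7)            # rough linear interpolation
    aerobic_w = vo2_lmin * kj_per_l_o2 * 1000.0 / 60.0         # J/s
    anaerobic_w = 63.0 * delta_lactate_mM * body_mass_kg / (dt_min * 60.0)
    return 100.0 * power_w / (aerobic_w + anaerobic_w)

# illustrative values: 250 W at 75% MAP, VO2 = 3.4 L/min, RER = 0.95, no lactate accumulation
print(round(gross_efficiency(250.0, 3.4, 0.95, 0.0, 74.0), 1), "%")
```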
Pedal rate. All the cycling tests were performed on an electromagnetically braked cycle ergometer (SRM Jülich, Welldorf, Germany) The cycle ergometer was equipped with the triathletes' own pedals, and the handlebars and racing seat were fully adjustable both vertically and horizontally to reproduce conditions known from their own bicycles. The SRM system can maintain a constant power output independent of the pedal rate spontaneously adopted by the subjects.
Statistical Analysis
All the results were expressed as mean and standard deviation (mean ± SD). Differences between the two conditions (swimming alone or in drafting position) in physiological and biomechanical parameters were analyzed using a Student t-test for paired samples. The level of confidence was set at P < 0.05.
RESULTS
Maximal Test
The subjects' physiological characteristics recorded during the incremental cycling test are presented in Table 1. VO 2max values were close to those previously obtained for triathletes of the same level [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling: effect of exercise duration[END_REF][START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF]. From the VT values, it could be observed that the cycling bouts were performed at an intensity close to VT + 2%.
Swimming Trials
Performance. No significant difference in performance was observed between the two swimming trials (respectively for SAC and SDC: 638 ± 38 s and 637 ± 39 s, P > 0.05). The two 750-m swims were therefore performed at a mean velocity of 1.18 m-s -1 . In addition, the stroke characteristics (SR and SL) were not significantly different between SAC and SDC trials (mean SR: 33.2 ± 4.5 cycles•min - 1 vs 33.1 ± 5.1 cycles.min -1 , respectively, for SAC and SDC, P > 0.05; mean SL: 2.13 ± 0.29 rn•cycle -1 vs 2.15 ± 0.30 m•cycle -1 , respectively, for SAC and SDC, P > 0.05). During the SDC trial, the mean distance between the subjects (draftees) and the lead swimmer did not exceed 1 m.
Physiological parameters and RPE.
The HR values recorded during the last 5 min of swimming are presented in Figure 2. The main result shows that swimming in drafting position resulted in a significant mean decrease of 7% in HR values during the last 4 min of swimming in comparison with the isolated swimming bout (160 ± 15 beats.min -1 vs 172 ± 18 beats•min -1 , respectively, for SDC and SAC trials, Fig. 2, P < 0.05). Furthermore, postswim lactate values were significantly lower (29.3%) after the SDC session when compared with the SAC session (5.3 ± 2.1 mmol-L -1 vs 7.5 ± 2.4 mmol•L - 1 , respectively, for SDC and SAC trials, P < 0.05).
Finally, RPE values recorded immediately after swimming indicated that the subjects' perception of effort was significantly lower in the SDC trial than in the SAC trial (13 ± 2 vs 15 ± 1, corresponding to "rather laborious" versus "laborious" respectively for SDC and SAC trials, P < 0.05).
Cycling trials
VO2 kinetics. All VO2 responses were best fitted by a mono-component exponential model, except the VO2 responses of one subject during the SAC trial, which were best described by a two-component exponential function. The occurrence of a slow component in this latter case is representative of a heavy-intensity exercise, whereas the other subjects had exercised in a moderate-intensity domain [START_REF] Xu | Oxygen uptake kinetics during exercise[END_REF]. Therefore, the parameters of the model for this subject are different (two-component exponential model vs mono-component exponential model) and could not be included in the same analysis. Figure 3 shows the breath-by-breath VO2 responses during the SAC and SDC trials for a representative subject (responses best fitted by a mono-component exponential model, Fig. 3A) as well as the breath-by-breath VO2 responses for the subject excluded from the analysis (responses best fitted by a two-component exponential model, Fig. 3B). Statistical analysis shows that baseline VO2 values were not significantly different between the SAC and SDC trials (P > 0.05). However, we observed that during the SAC trial, higher VO2 values at the steady-state level were attained more quickly than during the SDC trial (time constant values for SAC and SDC trials were, respectively, 17.1 ± 7.8 s vs 23.6 ± 10.1 s for the VO2 kinetics, P < 0.05).
Mean physiological parameters and RPE.
The influence of drafting during prior swimming on the mean physiological values measured during subsequent cycling is presented in Table 2. The statistical analysis shows that cycling efficiency was significantly higher in the SDC trial (4.8%) in comparison with the SAC trial (P < 0.05). The VO2, HR, and lactate values measured during cycling were significantly higher when the previous swimming bout was performed alone compared with the drafting condition (Table 2, P < 0.05). However, no significant increase in blood lactate concentration with time was observed, indicating the main contribution of aerobic metabolism [START_REF] Di Prampero | Energetics of muscular exercise[END_REF]. Therefore, the decrease in gross efficiency during the SAC trial is related to higher VO2 values. Furthermore, the subjects' RPE was significantly lower in the SDC trial compared with the SAC trial (15 ± 2 vs 17 ± 2, corresponding to "laborious" vs "very laborious," P < 0.05).
Pedal rate. The statistical analysis indicated a significant difference in pedal rate measured during cycling between the two conditions. A significantly lower pedal rate (5.6%) was observed in the SDC trial in comparison with the SAC trial (Table 2, P < 0.05).
DISCUSSION
The main result of the present study indicated a significant effect of the swimming metabolic load on oxygen kinetics and efficiency during subsequent cycling at competition pace. Within this framework, a prior 750-m swim performed alone resulted in faster oxygen kinetics and a significantly higher global energy expenditure during subsequent cycling, in comparison with an identical swimming bout performed in a drafting position (P < 0.05).
Drafting during swimming and swimming metabolic load. The effects of drafting on energy expenditure during short-or long-distance events have been investigated over a variety of physical activities. Drafting has been shown to significantly reduce the metabolic load during swimming (2), cycling [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF], cross-country skiing [START_REF] Spring | Drag area of a cross-country skier[END_REF], and speed skating [START_REF] Van Ingen Schenau | The influence of air friction in speed skating[END_REF]. The lower energy cost observed in a drafting position is classically attributed to a decrease in aerodynamic or hydrodynamic drag [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF][START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF]. In this context, Bassett et al. [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF] have suggested that the decrease in drag associated with drafting was lower in swimming in comparison with terrestrial activities. This is because of the characteristics of swimming such as the relatively low velocity, the prone position, and the turbulence owing to the kicks of the lead swimmer. Decreases in passive hydrodynamic drag in drafting position from 10% to 26% have been reported in the literature [START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Millet | Effects of drafting behind a two-or a six-beat kick swimmer in elite female triathletes[END_REF]. It should be noted that the active drag experienced by a subject while swimming is approximately 1.5-2 times greater than passive drag [START_REF] Di Prampero | Energetics of swimming in man[END_REF].
In this study, the HR values recorded during the two swimming bouts (Fig. 2) show mean values corresponding, respectively for the SDC and SAC trials, to 84.2% and 90.5% of the HRmax measured during cycling. Consequently, drafting involved a significant 7% decrease in HR during a 750-m swim (P < 0.05). Furthermore, the SDC trial was characterized by significant reductions in postswim lactate values (29.3%) and RPE values (20%) in comparison with the SAC trial.
The main factor classically evoked in the literature to explain the lower swimming energy cost in drafting position is the reduction of hydrodynamic drag owing to the body displacement of the leading swimmer. The extent to which hydrodynamic drag could be reduced in a drafting position depends on several factors, such as swimming velocity and the distance separating the draftee and the lead swimmer. Concerning the distance between the swimmers, there seems to be a compromise between the positive effect of the hydrodynamic wake created by the lead swimmer and the negative effect of the turbulence generated by his kicks [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF][START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Millet | Effects of drafting behind a two-or a six-beat kick swimmer in elite female triathletes[END_REF]. However, during triathlon, the draftee could follow the lead swimmer quite closely because triathletes usually adopt a two-beat kick that does not generate excessive turbulence.
The effects of drafting during short-distance swimming bouts have been well documented in the literature [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF][START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Chollet | The effects of drafting on stroking variations during swimming in elite male triathletes[END_REF]. However, during these experiments, the race conducted in drafting position was performed either at the same relative velocity as the isolated condition (2), or the subjects were asked to swim as fast as possible during the second half of the race [START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Chollet | The effects of drafting on stroking variations during swimming in elite male triathletes[END_REF]. Using a protocol comparable to the present study, Bassett et al. [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF] have observed during a 549-m swim (600 yards) performed at 1.20 m•s-1 (1.18 m•s -1 in the present study) significantly lower HR (6.2%), lactate (31%), and RPE values (21%) when the swimming bout was performed in a drafting position (P < 0.05), compared with an isolated effort. These results are in agreement with this previous study. One interesting result of this study is that the significant effect of drafting previously reported in the literature was observed even though our subjects were wearing a wet suit. It has been reported that the use of wet suit induced significant decreases in energy cost (from 7% to 22%) and active drag (from 12% to 16%) among different speeds (for review, [START_REF] Chatard | Effects of wetsuit use in swimming events[END_REF]). It could be concluded that during triathlon events, where subjects are wearing wet suits, drafting could further increase the reduction in metabolic load during swimming.
Drafting during swimming and cycling exercise. In the present study, the decrease in metabolic load associated with swimming in a drafting position involved two main modifications in physiological parameters during subsequent cycling. First, the VO2 kinetics at the onset of cycling were significantly slowed when the prior swimming bout was performed in a drafting position (longer time constant τVO2) compared with swimming alone (P < 0.05). Second, a significantly higher cycling efficiency, measured at the steady-state level, was observed in the SDC trial versus the SAC trial (+4.8%, P < 0.05).
The modification in VO 2 kinetics observed in the present study is in accordance with previous results reported in the literature. During the last decade, several investigations have analyzed the influence of previous exercise metabolic load on the rate of VO 2 increase at the onset of subsequent exercise. Gerbino et al. [START_REF] Gerbino | Effects of prior exercise on pulmonary gasexchange kinetics during high-intensity exercise in humans[END_REF] have found that VO 2 kinetics during a high-intensity cycling exercise (i.e., greater than the lactate threshold) was significantly increased by a prior high-intensity cycling bout, whereas no effect was reported after a prior low-intensity exercise (i.e., lower than the lactate threshold). In addition, Bohnert et al. ( 4) have observed an acceleration of VO 2 kinetics when a cycling trial was preceded by a high-intensity arm-cranking exercise.
Many studies have been conducted in order to identify the mechanisms underlying the rate of VO 2 increase at the onset of exercise (e.g., [START_REF] Xu | Oxygen uptake kinetics during exercise[END_REF]. Although these mechanisms are not clearly established, two major hypotheses are reported in the literature. Some authors suggest that VO 2 kinetics are limited by the rate of oxygen supply to the active muscle mass, whereas others report that the capacity of muscle utilization is the most important determinant of VO 2 responses at the onset of exercise [START_REF] Xu | Oxygen uptake kinetics during exercise[END_REF]. Concerning the hypothesis of oxygen transport limitation, Hughson et al. [START_REF] Hughson | Kinetics of ventilation and gas exchange during supin and upright cycle exercise[END_REF] investigated the influence of an improved perfusion of active muscle mass during cycling on the rate of VO 2 increases at the onset of exercise. These authors found that VO 2 kinetics at the onset of exercise were significantly faster when the perfusion of active muscle mass was augmented. In our study, several factors could be evoked to increase perfusion in the muscles of the lower limbs during cycling, such as previous metabolic load and pedal rate.
Gerbino et al. [START_REF] Gerbino | Effects of prior exercise on pulmonary gasexchange kinetics during high-intensity exercise in humans[END_REF] suggested that the faster VO 2 kinetics observed during the second bout of two repeated high-intensity cycling exercises could be accounted for by the residual metabolic acidemia from previous high-intensity exercise, involving a vasodilatation and thus an enhancing blood flow to the active muscle mass at the start of subsequent cycling bout. In favor of this hypothesis, a higher metabolic acidemia was observed in the present study immediately after the swimming stage of the SAC trial in comparison with the SDC trial (postswim lactate values: 7.5 ± 2.4 mmol.L -1 vs 5.3 ± 2.1 mmol•L -1 for SAC and SDC trials, respectively, P < 0.05). Therefore, we suggest that the higher contribution of anaerobic metabolism to energy expenditure when swimming alone has involved a better perfusion of active muscular mass at the start of subsequent cycling exercise.
However, in this study, subjects adopted a higher pedal rate after the swimming bout performed alone. There is little information on the effects of pedal rate manipulation on cardiovascular adjustments during cycling. However, Gotshall et al. [START_REF] Gotshall | Cycling cadence alters exercise hemodynamics[END_REF] have indicated an enhanced muscle blood flow with increasing cadences from 70 to 110 rpm. Indeed, the frequency of contraction and relaxation of the muscles of the lower limbs increases at high cadences, improving venous return and therefore heart filling. As a consequence, the skeletal muscle pump is progressively more effective, resulting in a greater perfusion of the active muscle mass [START_REF] Gotshall | Cycling cadence alters exercise hemodynamics[END_REF]. According to this hypothesis, the significantly higher pedal rate reported in the present study in the SAC trial in comparison with the SDC trial (Table 2, P < 0.05) could have involved an increased blood flow to the muscles of the lower limbs. Therefore, both the higher contribution of anaerobic metabolism to energy expenditure during prior swimming and the higher pedal rates adopted during subsequent cycling in the SAC trial could account for the faster VO2 kinetics observed at the onset of cycling in this trial in comparison with the SDC trial.
The second principal result of the present study indicated a significantly higher cycling efficiency during the SDC trial in comparison with the SAC trial (Table 2, P < 0.05). To the best of our knowledge, the effects of drafting during swimming on subsequent cycling adaptation have never been investigated. However, these results were similar to another study from our laboratory showing that wearing a neoprene wet suit reduced the metabolic load at the end of swimming and led to a 12% increase in subsequent cycling efficiency [START_REF] Delextrat | Effect of wet suit use on energy expenditure during a swim-to-cycle transition[END_REF]. In our study, subjects were wearing a wet suit, and our results indicated that drafting could lead to a further improvement of cycling efficiency.
In the context of multidisciplinary events, the effect of drafting on subsequent performance has been mainly studied during the cycle-to-run portion of a simulated sprint triathlon [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF]. For example, Hausswirth et al. [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF] reported that the significant reductions in VO 2 VE, HR, and blood lactate concentration during the cycle stage of a simulated sprint triathlon (0.75-km swim, 20-km cycle, 5-km run), observed when cycling was performed in drafting position in comparison with an isolated cycling stage, were related to significant increases in subsequent running velocity (4.1%). More recently, Hausswirth et al. [START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF] observed that drafting continuously behind a leader during the 20-km stage of a sprint triathlon resulted in a significantly lower cycling metabolic cost, in comparison with alternating drafting and leading every 500 m at the same pace. This lower metabolic cost led to a 4.2% improvement in velocity during a subsequent 5-km run [START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF]. These authors suggested that during the drafting conditions (drafting position vs isolated cycling, or continuous vs alternate drafting), the decrease in energy cost of cycling is the main factor of running performance improvement. In the present study, the cycling bouts were conducted at constant speed. Therefore, no improvement in performance (i.e., velocity) could be observed. However, we recorded a 4.8% increase in cycling efficiency after a swimming bout performed in drafting position compared with an isolated swimming bout. This improvement in cycling efficiency could be mainly accounted for by the lower swimming relative intensity involving a lower state of fatigue in the muscles of the lower limbs at the beginning of subsequent cycling. Consequently, in long-distance events such as triathlon, where performance depends on the capacity to spend the lowest amount of metabolic energy during the whole race [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF], we suggest that the increase in cycling efficiency could lead to an improvement in performance. However, further studies are needed to investigate the effects of this improved cycling efficiency on running and total triathlon performance.
However, it should be noted that the possibility for athletes and coaches to put the results of the present study into practice could be limited by the lack of cycling training of our subjects and by the difference between the intensity and duration of the cycling trials in this study and the metabolic load encountered during a sprint triathlon [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF]. Because cycling experience could lead to a lower variability in the energy cost of locomotion, more training in cycling would be associated with a lower benefit of drafting. Furthermore, even if a measure of actual cycling performance improvements after drafting (such as time or power output) would have been more applicable to competition, the constant power output set in this study allowed the quantification of the modifications in energy expenditure during cycling, which is a main determinant of triathlon performance [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF]. Further studies are necessary to validate the effects observed in this study during a real triathlon event.
In conclusion, the results of the present study show that the metabolic load during swimming could have a significant effect on subsequent cycling performance during a sprint triathlon. In particular, a decrease in swimming relative intensity could lead to a significantly higher efficiency during subsequent cycling. These findings highlight that swimming behind another athlete is beneficial during triathlon events. Within this framework, further studies could include a running session to investigate more precisely the effects of drafting during the swimming bout of a sprint triathlon on total triathlon performance.
FIGURES and TABLES
FIGURE 1-Experimental protocol. L: blood sampling; K4 b2: installation of the Cosmed K4 b2 analyzer.
FIGURE 2-Changes in HR values during the last 5 min of the two swimming trials (SAC and SDC). *Significant difference between SDC and SAC trials, P < 0.05.
FIGURE 3-Breath-by-breath VO2 responses during the SAC and SDC trials for a representative subject (A) and for the subject whose responses were best fitted by a two-component exponential model (B).
TABLE 1. Subjects' physiological characteristics during the cycling incremental test.
VO2max (mL•min⁻¹•kg⁻¹): 66.2 ± 6.8; MAP (W): 343 ± 39; 75% of MAP (W): 262 ± 29; HRmax (beats•min⁻¹): 190 ± 9; RERmax: 1.06 ± 0.05; power output at VT (W): 258 ± 42.
VO2max, maximal oxygen uptake; MAP, maximal aerobic power; HRmax, maximal heart rate; RERmax, maximal respiratory exchange ratio; power output at VT, power output corresponding to the ventilatory threshold.
Table 2. Effect of drafting during prior swimming on mean values of physiological parameters and pedal rate recorded during subsequent cycling exercise (SAC vs SDC).
VO2, oxygen uptake; LA, blood lactate concentration; GE, gross efficiency; HR, heart rate; VE, expiratory flow; RF, respiratory frequency. *Significant difference between SAC and SDC trials; P < 0.05.
The authors acknowledge all the triathletes who took part in the experiment for their high cooperation and motivation. We are also grateful to Rob Suriano for his assistance with the language.
"19845",
"752657",
"1012603"
] | [
"303091",
"303091",
"303091",
"303091",
"441096",
"303091"
] |
01430561 | en | [
"phys"
Francisco J Blanco-Rodríguez
Stéphane Le Dizès
Curvature instability of a curved Batchelor vortex
In this paper, we analyse the curvature instability of a curved Batchelor vortex. We consider this short-wavelength instability when the radius of curvature of the vortex centerline is large compared to the vortex core size. In this limit, the curvature instability can be interpreted as a resonant phenomenon. It results from the resonant coupling of two Kelvin modes of the underlying Batchelor vortex with the dipolar correction induced by curvature. The condition of resonance of the two modes is analysed in detail as a function of the axial jet strength of the Batchelor vortex. Contrarily to the Rankine vortex, only a few configurations involving m = 0 and m = 1 modes are found to become the most unstable. The growth rate of the resonant configurations is systematically computed and used to determine the characteristics of the most unstable mode as a function of the curvature ratio, the Reynolds number, and the axial flow parameter. The competition of the curvature instability with another short-wavelength instability, which was considered in a companion paper [Blanco-Rodríguez & Le Dizès, Elliptic instability of a curved Batchelor vortex, J. Fluid Mech. 804, 224-247 (2016)], is analysed for a vortex ring. A numerical error found in this paper which affects the relative strength of the elliptic instability is also corrected. We show that the curvature instability becomes the dominant instability in large rings as soon as axial flow is present (vortex ring with swirl).
Introduction
Vortices are ubiquitous in nature. They are subject to various instabilities induced by the interaction with their surroundings. In this work, we analyse the so-called curvature instability which is a short-wavelength instability induced by the local curvature of the vortex. We provide theoretical predictions for a curved vortex when the underlying vortex structure is a Batchelor vortex (Gaussian axial velocity and axial vorticity). This work is the follow-up of Blanco-Rodríguez & Le [START_REF] Blanco-Rodríguez | Elliptic instability of a curved Batchelor vortex[END_REF], hereafter BRLD16, where another short-wavelength instability, the elliptic instability, was analysed using the same theoretical framework.
These two instabilities are different from the long-wavelength instabilities which occur in vortex pairs [START_REF] Crow | Stability theory for a pair of trailing vortices[END_REF]) and helical vortices [START_REF] Widnall | The stability of a helical vortex filament[END_REF][START_REF] Quaranta | Long-wave instability of a helical vortex[END_REF]. Their characteristics strongly depend on the internal vortex structure and their wavelength is of the order of the vortex core size. When the vortex is weakly deformed, both instabilities can be understood as a phenomenon of resonance between two (Kelvin) modes of the underlying vortex and a vortex correction. For the elliptic instability, the resonance occurs with a quadripolar correction generated by the background strain field [START_REF] Moore | The instability of a straight vortex filament in a strain field[END_REF], while for the curvature instability, it is associated with a dipolar correction created by the vortex curvature [START_REF] Fukumoto | Curvature instability of a vortex ring[END_REF]. Numerous works have concerned the elliptic instability in the context of straight vortices [START_REF] Tsai | The stability of short waves on a straight vortex filament in a weak externally imposed strain field[END_REF][START_REF] Eloy | Three-dimensional instability of Burgers and Lamb-Oseen vortices in a strain field[END_REF]Fabre & Jacquin 2004a;[START_REF] Lacaze | Elliptic instability in a strained Batchelor vortex[END_REF]. The specific case of the curved Batchelor vortex has been analysed in BRLD16. Contrarily to the elliptic instability, the curvature instability has only been considered for vortices with uniform vorticity [START_REF] Fukumoto | Curvature instability of a vortex ring[END_REF][START_REF] Hattori | Modal stability analysis of a helical vortex tube with axial flow[END_REF].
Both elliptic and curvature instabilities have also been analysed using the local Lagrangian method popularized by [START_REF] Lifschitz | Local stability conditions in fluid dynamics[END_REF] [see [START_REF] Bayly | Three-dimensional instability of elliptical flow[END_REF]; [START_REF] Waleffe | On the three-dimensional instability of strained vortices[END_REF] for the elliptic instability, [START_REF] Hattori | Short-wavelength stability analysis of thin vortex rings[END_REF][START_REF] Hattori | Short-wavelength stability analysis of a helical vortex tube[END_REF][START_REF] Hattori | Effects of axial flow on the stability of a helical vortex tube[END_REF] for the curvature instability]. This method can be used to treat strongly deformed vortices but it provides a local information on a given streamline only. When the vortex is uniform, as the Rankine vortex, the local instability growth rate is also uniform. In that case, a connection can be made between the local results and the global results obtained by analyzing the mode resonances [START_REF] Waleffe | On the three-dimensional instability of strained vortices[END_REF][START_REF] Eloy | Stability of the Rankine vortex in a multipolar strain field[END_REF][START_REF] Fukumoto | The three-dimensional instability of a strained vortex tube revisited[END_REF][START_REF] Hattori | Short-wave stability of a helical vortex tube: the effect of torsion on the curvature instability[END_REF][START_REF] Hattori | Modal stability analysis of a helical vortex tube with axial flow[END_REF]. Le [START_REF] Dizès | Theoretical predictions for the elliptic instability in a twovortex flow[END_REF] used the local prediction at the vortex centre to estimate the global growth rate of the elliptic instability in a non-uniform vortex. Although a good agreement was demonstrated for the Lamb-Oseen vortex, no such link is expected in general.
The goal of the present work is to obtain global estimates for the curvature instability using the framework of [START_REF] Moore | The instability of a straight vortex filament in a strain field[END_REF] for the Batchelor vortex. Such an analysis was performed by [START_REF] Hattori | Modal stability analysis of a helical vortex tube with axial flow[END_REF] for a Rankine vortex. The passage from the Rankine vortex to the Batchelor vortex will turn out not to be trivial. The main reason comes from the different properties of the Kelvin modes in both vortices. In smooth vortices, Kelvin modes are affected by the presence of critical layers (Le Dizès 2004) which introduce singularities and damping [START_REF] Sipp | Widnall instabilities in vortex pairs[END_REF][START_REF] Fabre | The Kelvin waves and the singular modes of the Lamb-Oseen vortex[END_REF]. These singularities have to be monitored and avoided in the complex plane to be able to obtain the properties of the Kelvin modes from the inviscid equations as shown in [START_REF] Lacaze | Elliptic instability in a strained Batchelor vortex[END_REF]. In the present work, we shall also use the asymptotic theory of Le [START_REF] Dizès | An asymptotic description of vortex Kelvin modes[END_REF] to obtain an approximation of the Kelvin mode dispersion relation and analyse the condition of resonance.
The dipolar correction responsible for the curvature instability is also obtained by an asymptotic theory in the limit of small vortex core size [START_REF] Callegari | Motion of a curved vortex filament with decaying vortical core and axial velocity[END_REF]. This correction appears as a first-order correction to the Batchelor vortex. The details of the derivation can be found in [START_REF] Blanco-Rodríguez | Internal structure of vortex rings and helical vortices[END_REF]. As for the elliptic instability, the coupling terms, as well as weak detuning and viscous effects, are computed using an orthogonality condition. The final result is an expression for the growth rate of a given resonant configuration close to the condition of resonance. Each resonant configuration provides a growth rate expression. We shall consider up to 50 resonant configurations to extract the most unstable one. This will allow us to obtain the curvature instability diagram as a function of the curvature ratio and the Reynolds number.
The paper is organized as follows. In §2, the base flow and perturbation equations are provided. In §3, the analysis leading to the growth rate expression of a resonant configuration is presented. The results for the Batchelor vortex are obtained in §4. We first provide the characteristics of the resonant modes, then the stability diagrams for the Batchelor vortex for a few values of the axial flow parameter. Section §5 provides an application of the results to a vortex ring with and without swirl (axial flow). In that section, we analyse the competition of the curvature instability with the elliptic instability using the results of BRLD16. A numerical error affecting the strength of the elliptic instability has been found in this paper. It is corrected in a corrigendum which is presented in appendix D. The last section §6 gives a brief summary of the main results of the paper.
Problem formulation
Base flow
The first description of the base flow was provided by [START_REF] Callegari | Motion of a curved vortex filament with decaying vortical core and axial velocity[END_REF]. Here, as in BRLD16, we mainly follow the presentation given in [START_REF] Blanco-Rodríguez | Internal structure of vortex rings and helical vortices[END_REF]. The vortex is considered in the local Frenet frame (t, n, b) attached to the vortex centerline and moving with the structure. We assume that the vortex is concentrated (i.e. thin), which means that its core size a is small compared to the local curvature radius R c of the vortex centerline and the shortest separation distance δ to other vortex structures. For simplicity, we consider a single small parameter ε = a/R c , and assume that δ = O(R c ).
The internal vortex dynamics is described using the "cylindrical" coordinate system (r, ϕ, s) constructed from the Frenet frame (see Fig. 1).
The velocity-pressure field of the base flow is expanded in power of ε as U = U 0 + εU 1 + • • •. The leading order contribution is the prescribed Batchelor vortex of velocity field U 0 = (0, V (0) (r), W (0) (r), P (0) (r)) with
\[
V^{(0)}(r) = \frac{1 - e^{-r^2}}{r}, \qquad W^{(0)}(r) = W_0\, e^{-r^2}. \tag{2.1}
\]
As in BRLD16, spatial and time scales have been non-dimensionalized using the core size a and the maximum angular velocity of the vortex Ω (0) max = Γ/(2πa 2 ), Γ being the vortex circulation. The axial flow parameter W 0 is defined as the ratio
\[
W_0 = \frac{W^{(0)}_{\max}}{\Omega^{(0)}_{\max}\, a}. \tag{2.2}
\]
We assume that W_0 ≲ 0.5 such that the vortex remains unaffected by the inviscid swirling jet instability [START_REF] Mayer | Viscous and inviscid instabilities of a trailing vortex[END_REF]. We also implicitly assume that the weak viscous instabilities occurring for small values of W_0 (Fabre & Jacquin 2004b; [START_REF] Dizès | Large-Reynolds-number asymptotic analysis of viscous centre modes in vortices[END_REF]) remain negligible in the parameter regime that is considered. In the following, we shall also use the expression of the angular velocity Ω^{(0)}(r) and vorticity ζ^{(0)}(r):
\[
\Omega^{(0)}(r) = \frac{1 - e^{-r^2}}{r^2}, \qquad \zeta^{(0)}(r) = 2\, e^{-r^2}. \tag{2.3}
\]
As explained by [START_REF] Blanco-Rodríguez | Internal structure of vortex rings and helical vortices[END_REF], the first order correction is a dipolar field which can be written as
\[
\boldsymbol{U}_1 \sim \varepsilon\, \mathrm{Re}\!\left[\boldsymbol{U}^{(1)} e^{i\varphi}\right]
= \frac{\varepsilon}{2}
\begin{pmatrix} i U^{(1)}(r) \\ V^{(1)}(r) \\ W^{(1)}(r) \\ P^{(1)}(r) \end{pmatrix} e^{i\varphi} + \mathrm{c.c.}, \tag{2.4}
\]
where expressions for U^{(1)}, V^{(1)}, W^{(1)} and P^{(1)} are provided in appendix A. It is worth emphasizing that these expressions only depend on the local characteristics of the vortex at leading order. In particular, they do not depend on the local torsion. For helices, torsion as well as the Coriolis effects associated with the change of frame appear at second order [START_REF] Hattori | Short-wavelength stability analysis of a helical vortex tube[END_REF]. The above expression then describes the internal structure of both helices and rings up to the order ε. This contrasts with the quadripolar correction responsible for the elliptic instability, which appears at second order. This quadripolar correction varies according to the global vortex geometry and is different for rings and helices even if they have the same local curvature.
Perturbation equations
The perturbation equations are obtained by linearizing the governing equations around the base flow U = U_0 + εU_1 + ⋯.
As shown in BRLD16, if the perturbation velocity-pressure field is written as u = (-iu, v, w, p), we obtain, up to o(ε) terms, a system of the form:
\[
\left( i\partial_t I + i\partial_s P + M \right) \boldsymbol{u}
= \varepsilon \left( e^{i\varphi}\, N^{(1)}_{+} + e^{-i\varphi}\, N^{(1)}_{-} \right) \boldsymbol{u}
+ \frac{i}{Re}\, V \boldsymbol{u}, \tag{2.5}
\]
where the operators I, P, M = M(-i∂_ϕ), N^{(1)}_± = N^{(1)}_±(-i∂_ϕ, -i∂_s) and V = V(-i∂_ϕ, -i∂_s) are defined in Appendix B.
The left-hand side corresponds to the inviscid perturbation equations of the undeformed Batchelor vortex. The first term on the right-hand side is responsible for the curvature instability, while the second term accounts for the viscous effects on the perturbations. By introducing viscous effects in this equation, we implicitly assume that the Reynolds number
\[
Re = \frac{\Omega^{(0)}_{\max}\, a^2}{\nu} = \frac{\Gamma}{2\pi\nu},
\]
with ν the kinematic viscosity, is of order 1/ε.
Instability description
Curvature instability mechanism
The mechanism of the curvature instability is similar to that of the elliptic instability.
The instability results from a resonant coupling of two Kelvin modes of the undeformed axisymmetric vortex with non-axisymmetric corrections. Two Kelvin modes of characteristics (ω A , k A , m A ) and (ω B , k B , m B ) are resonantly coupled via the dipolar correction if they satisfy the condition of resonance (assuming m A < m B )
ω A = ω B , k A = k B , m A = m B -1. (3.1)
Fukumoto ( 2003) further demonstrated that the coupling is destabilizing only if the energy of the modes is opposite or if the frequency vanishes. It leads to a growth of the Kelvin mode combination with a maximum growth rate scaling as ε.
Formal derivation of the growth rate formula
For each resonant configuration, a growth rate expression can be obtained from an orthogonality condition as we did for the elliptic instability (see BRLD16). We consider a combination of two Kelvin modes of azimuthal wavenumbers m_A and m_B = m_A + 1 close to their condition of resonance (3.1):
\[
\boldsymbol{u} = \left[ A\, \tilde{\boldsymbol{u}}_A(r)\, e^{i m_A \varphi} + B\, \tilde{\boldsymbol{u}}_B(r)\, e^{i m_B \varphi} \right] e^{iks - i\omega t}, \tag{3.2}
\]
where
k is close to k A = k B = k c , and ω close to ω A = ω c and ω B = ω c + i Im(ω B ).
We assume that the resonance is not perfect. The mode B will exhibit a weak critical layer damping given by Im(ω B ) (imaginary part of ω B ). The functions ũA (r) and ũB (r) are the eigenfunctions of the Kelvin modes which satisfy
(ω A I -k A P + M(m A )) ũA = 0, (3.3) (ω B I -k B P + M(m B )) ũB = 0, (3.4)
with a prescribed normalisation:
\[
\tilde{p}_A \underset{r \to 0}{\sim} r^{|m_A|}, \qquad \tilde{p}_B \underset{r \to 0}{\sim} r^{|m_B|}. \tag{3.5}
\]
If we plug (3.2) in (2.5), we obtain for the components proportional to e im A ϕ and e im B ϕ :
\[
A \left[ \omega I - k P + M(m_A) - \frac{i}{Re}\, V(m_A, k) \right] \tilde{\boldsymbol{u}}_A = B\, \varepsilon\, N^{(1)}_{-}(m_B, k)\, \tilde{\boldsymbol{u}}_B, \tag{3.6}
\]
\[
B \left[ \omega I - k P + M(m_B) - \frac{i}{Re}\, V(m_B, k) \right] \tilde{\boldsymbol{u}}_B = A\, \varepsilon\, N^{(1)}_{+}(m_A, k)\, \tilde{\boldsymbol{u}}_A. \tag{3.7}
\]
Relations between the amplitudes A and B are obtained by projecting these equations on the subspace of the adjoint Kelvin modes. We define the adjoint eigenfunctions ũ † A and ũ † B of the Kelvin modes as the solutions to the adjoint equations of (3.3)-(3.4) with respect to the scalar product
\[
\langle \boldsymbol{u}_1, \boldsymbol{u}_2 \rangle = \int_0^{\infty} \boldsymbol{u}_1 \cdot \boldsymbol{u}_2\, r\, dr
= \int_0^{\infty} \left( u_1 u_2 + v_1 v_2 + w_1 w_2 + p_1 p_2 \right) r\, dr. \tag{3.8}
\]
We then obtain
\[
\left[ \omega - \omega_c - Q_A (k - k_c) - i \frac{V_A}{Re} \right] A = \varepsilon\, C_{AB}\, B, \tag{3.9}
\]
\[
\left[ \omega - \omega_c - i\, \mathrm{Im}(\omega_B) - Q_B (k - k_c) - i \frac{V_B}{Re} \right] B = \varepsilon\, C_{BA}\, A, \tag{3.10}
\]
where the coefficients of these equations are given by
\[
Q_A = \frac{\langle \tilde{\boldsymbol{u}}^{\dagger}_A, P \tilde{\boldsymbol{u}}_A \rangle}{\langle \tilde{\boldsymbol{u}}^{\dagger}_A, I \tilde{\boldsymbol{u}}_A \rangle}, \qquad
Q_B = \frac{\langle \tilde{\boldsymbol{u}}^{\dagger}_B, P \tilde{\boldsymbol{u}}_B \rangle}{\langle \tilde{\boldsymbol{u}}^{\dagger}_B, I \tilde{\boldsymbol{u}}_B \rangle}, \tag{3.11}
\]
\[
V_A = \frac{\langle \tilde{\boldsymbol{u}}^{\dagger}_A, V \tilde{\boldsymbol{u}}_A \rangle}{\langle \tilde{\boldsymbol{u}}^{\dagger}_A, I \tilde{\boldsymbol{u}}_A \rangle}, \qquad
V_B = \frac{\langle \tilde{\boldsymbol{u}}^{\dagger}_B, V \tilde{\boldsymbol{u}}_B \rangle}{\langle \tilde{\boldsymbol{u}}^{\dagger}_B, I \tilde{\boldsymbol{u}}_B \rangle}, \tag{3.12}
\]
\[
C_{AB} = \frac{\langle \tilde{\boldsymbol{u}}^{\dagger}_A, N^{(1)}_{+}(m_B, k_B)\, \tilde{\boldsymbol{u}}_B \rangle}{\langle \tilde{\boldsymbol{u}}^{\dagger}_A, I \tilde{\boldsymbol{u}}_A \rangle}, \qquad
C_{BA} = \frac{\langle \tilde{\boldsymbol{u}}^{\dagger}_B, N^{(1)}_{-}(m_A, k_A)\, \tilde{\boldsymbol{u}}_A \rangle}{\langle \tilde{\boldsymbol{u}}^{\dagger}_B, I \tilde{\boldsymbol{u}}_B \rangle}. \tag{3.13}
\]
The formula for the complex frequency ω is then finally given by
\[
\left[ \omega - \omega_c - i\, \mathrm{Im}(\omega_B) - Q_B (k - k_c) - i \frac{V_B}{Re} \right]
\left[ \omega - \omega_c - Q_A (k - k_c) - i \frac{V_A}{Re} \right] = -\varepsilon^2 N^2, \tag{3.14}
\]
with
\[
N = \sqrt{-C_{AB}\, C_{BA}}. \tag{3.15}
\]
The right-hand side of (3.14) represents the coupling terms responsible for the curvature instability. The left-hand side of (3.14) gives the dispersion relation of each Kelvin mode close to the resonant point. It is important to mention that none of the coefficients Q_A, Q_B, V_A, V_B and N depends on the normalization chosen for the Kelvin modes.
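As an illustration, a minimal numerical sketch of how (3.14) can be exploited is given below: for a given resonant configuration and a given (ε, Re), the quadratic equation in ω is solved for each k and the root with the largest imaginary part is retained. The dictionary layout and variable names are assumptions; the coefficient values themselves must be taken from the tables and supplementary material.

```python
# Sketch: growth rate of one resonant configuration from equation (3.14).
import numpy as np

def growth_rate(k, eps, Re, cfg):
    """cfg holds omega_c, k_c, Im_omega_B, Q_A, Q_B, V_A, V_B and N for one configuration."""
    a = cfg["omega_c"] + cfg["Q_A"] * (k - cfg["k_c"]) + 1j * cfg["V_A"] / Re
    b = (cfg["omega_c"] + 1j * cfg["Im_omega_B"]
         + cfg["Q_B"] * (k - cfg["k_c"]) + 1j * cfg["V_B"] / Re)
    # (omega - b)(omega - a) = -eps^2 N^2  <=>  omega^2 - (a + b) omega + a b + eps^2 N^2 = 0
    roots = np.roots([1.0, -(a + b), a * b + eps**2 * cfg["N"] ** 2])
    return roots.imag.max()

def max_growth_rate(eps, Re, cfg, dk=0.5, nk=2001):
    # Maximum over the wavenumbers around the resonant point k_c.
    ks = cfg["k_c"] + np.linspace(-dk, dk, nk)
    return max(growth_rate(k, eps, Re, cfg) for k in ks)
```

Repeating max_growth_rate over all resonant configurations and over a grid of (ε, Re) values gives stability diagrams of the type discussed below.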
Instability results for the Batchelor vortex profile
Resonant Kelvin modes
The main difficulty of the analysis is to determine the Kelvin modes that satisfy the condition of resonance (3.1). A similar problem was already addressed in [START_REF] Lacaze | Elliptic instability in a strained Batchelor vortex[END_REF]. The Kelvin modes are here defined from the inviscid equations. Two kinds of Kelvin modes are found to exist: the regular and neutral Kelvin modes, which can easily be obtained by integrating the inviscid perturbation equations in the physical domain, and the singular and damped Kelvin modes, which require a particular monitoring of the singularities of the perturbation equations in the complex plane. We shall see below that the condition of resonance always involves a singular mode.
The singularities of the inviscid perturbation equations are the critical points r_c where ω - kW^{(0)}(r_c) - mΩ^{(0)}(r_c) = 0. When Im(ω) > 0, these singularities are in the complex plane, and do not affect the solution in the physical domain (real r). However, one such critical point may cross the real axis when Im(ω) becomes negative. As explained in Le [START_REF] Dizès | Viscous critical-layer analysis of vortex normal modes[END_REF], the inviscid equations must in that case be integrated on a contour in the complex r plane that avoids the critical point from below (resp. above) if the critical point has moved in the lower (resp. upper) part of the complex plane. On such a contour, the solution remains regular and fully prescribed by the inviscid equations. On the real axis, the inviscid solution is however not regular anymore. As illustrated in [START_REF] Fabre | The Kelvin waves and the singular modes of the Lamb-Oseen vortex[END_REF], it no longer represents the vanishing viscosity limit of a viscous solution in a large interval of the physical domain. The Kelvin mode formed by the contour deformation technique is damped and singular. The inviscid frequency of the mode then possesses a negative imaginary part, which corresponds to what we call the critical layer damping rate. By definition, the critical layer damping rate is independent of viscosity.
A mode cannot be involved in a resonance if it is too much damped. In the asymptotic framework, the growth rate associated with the resonance is expected to be O(ε), so the damping rate of the modes should a priori be asymptotically small of order ε. However, in practice, we shall consider values of ε up to 0.2, and the maximum growth rate will turn out to be around 0.05 ε. We shall then discard all the modes with a damping rate whose absolute value exceeds 0.01.
Predictions from the WKBJ analysis
Le [START_REF] Dizès | An asymptotic description of vortex Kelvin modes[END_REF] showed that information on the spectrum of the Kelvin modes can be obtained using a large-k asymptotic analysis. They applied their theory to the Batchelor vortex and were able to categorize the neutral Kelvin modes into four different types: regular core modes, singular core modes, regular ring modes and singular ring modes. For each m, they provided the region of existence of each type of mode in a (kW_0, ω) plane.

Figure 2. Prediction from the WKBJ analysis of the domains of parameters in the (kW0, ω) plane where resonance between two Kelvin modes (mA, mA + 1) is possible. Only positive frequencies are considered. A symmetrical plot is obtained for negative frequencies.

The energy of the waves can also be deduced from the asymptotic expression of the dispersion relation as shown in Le [START_REF] Dizès | Inviscid waves on a Lamb-Oseen vortex in a rotating stratified fluid: consequences on the elliptic instability[END_REF]. It is immediately found that regular core modes and regular ring modes are always of negative energy, while singular modes have positive energy. The condition of resonance can then easily be analysed. One just needs to superimpose the domains of existence of each pair of modes of azimuthal wavenumbers m and m + 1 to find the regions of possible resonance. The final result is summarized in Fig. 2. For positive frequencies, only three different regions are obtained, corresponding to (m_A, m_B) = (0, 1), (1, 2) and (2, 3) (negative frequencies are obtained by symmetry, changing m → -m and k → -k). No intersection of the domains of existence of the modes m and m + 1 is obtained for m larger than 2. In each region of Fig. 2, we always find that the mode A is a regular core mode of negative energy, while the mode B is a singular core mode of positive energy. Each branch crossing is therefore expected to provide an instability.
As shown in Le [START_REF] Dizès | An asymptotic description of vortex Kelvin modes[END_REF], both types of core modes have an asymptotic dispersion relation of the form
\[
k \int_0^{r_t} \frac{\sqrt{\Delta(r)}}{\Phi(r)}\, dr = \left( |m| + 2l \right) \frac{\pi}{2}, \qquad l = 0, 1, 2, \ldots \tag{4.1}
\]
where
\[
\Delta(r) = 2\,\Omega^{(0)}(r)\,\zeta^{(0)}(r) - \Phi^2(r), \tag{4.2}
\]
\[
\Phi(r) = \omega - m\,\Omega^{(0)}(r) - k\,W^{(0)}(r), \tag{4.3}
\]
and r t is a turning point defined by ∆(r t ) = 0. The integer l is a branch label which measures the number of oscillations of the mode in the vortex core. The larger l, the more oscillating is the mode. Singular modes differ from regular modes by the presence of a critical point r c > r t where Φ(r c ) = 0 in their radial structure. In the WKBJ description, this critical point does not create any damping at leading order. However, it makes the eigenfunction singular. It will justify the use of a complex integration path in the numerical resolution of the mode.
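A numerical sketch of how (4.1)-(4.3) can be evaluated for the Batchelor vortex profiles (2.1)-(2.3) is given below. It assumes a single turning point r_t in the search interval and the square-root reading of (4.1) given above; the function names are illustrative.

```python
# Sketch: WKBJ quantization condition (4.1) for the Batchelor vortex (nondimensional units).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def Omega0(r):            # angular velocity, equation (2.3)
    return (1.0 - np.exp(-r**2)) / r**2

def zeta0(r):             # axial vorticity, equation (2.3)
    return 2.0 * np.exp(-r**2)

def Waxial(r, W0):        # axial velocity, equation (2.1)
    return W0 * np.exp(-r**2)

def wkbj_residual(omega, k, m, l, W0, r_max=10.0):
    Phi = lambda r: omega - m * Omega0(r) - k * Waxial(r, W0)
    Delta = lambda r: 2.0 * Omega0(r) * zeta0(r) - Phi(r)**2
    rt = brentq(Delta, 1e-6, r_max)                    # turning point, Delta(rt) = 0
    integrand = lambda r: np.sqrt(max(Delta(r), 0.0)) / Phi(r)
    integral, _ = quad(integrand, 1e-6, rt, limit=200)
    return k * integral - (abs(m) + 2 * l) * np.pi / 2.0
```

A zero of wkbj_residual in ω at fixed (k, m, l) approximates one branch of the dispersion relation; intersecting an (m_A, l_A) branch with an (m_A + 1, l_B) branch locates candidate resonance points such as those shown in figure 3.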
Numerical determination of the Kelvin modes
The characteristics of the resonant modes are obtained by integrating numerically Eqs. (3.3)-(3.4). The numerical code is based on a Chebyshev spectral collocation method, essentially identical to that used in Fabre & Jacquin (2004b). The eigenvalue problem is solved in a Chebyshev domain (-1,1) on 2(N + 1) nodes which is mapped on a line in the complex-r plane using the mapping
\[
r(x; A_c, \theta_c) = A_c \tanh(x)\, e^{i\theta_c}, \tag{4.4}
\]
where A c is a parameter close to 1 that controls the spreading of the collocation points, and θ c is the small inclination angle of the path in the complex r plane. We typically take θ c ≈ π/10 such that the critical point of the singular mode is avoided. As in Fabre & Jacquin (2004b), we take advantages of the parity properties of the eigenfunctions by expressing for odd m (resp. even m), w and p on odd polynomials (resp. even) and ũ and ṽ on even polynomials (resp. odd). It leads to a discretized eigenvalue problem of order 4N , which is solved using a global eigenvalue method. We also use an Arnoldi algorithm in order to follow specific eigenvalues and easily find the condition of resonance. In most computations, the value N = 200 was found to be adequate. This collocation method was also used to determine the adjoint modes and compute the integrals that define the coefficients in the growth rate equation (3.14).
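The sketch below illustrates the kind of spectral setup described in this section: a Chebyshev differentiation matrix mapped onto the complex path (4.4). It only shows the grid and derivative construction, with illustrative parameter values; the assembly of the discretized eigenvalue problem (3.3)-(3.4) is not reproduced here.

```python
# Sketch: Chebyshev collocation nodes and d/dr on the complex path (4.4).
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and nodes on [-1, 1] (standard construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def mapped_operator(N, A_c=1.0, theta_c=np.pi / 10):
    D, x = cheb(N)
    r = A_c * np.tanh(x) * np.exp(1j * theta_c)              # complex collocation points
    drdx = A_c * (1.0 - np.tanh(x) ** 2) * np.exp(1j * theta_c)
    Dr = np.diag(1.0 / drdx) @ D                              # chain rule: d/dr = (dx/dr)^(-1) d/dx
    return r, Dr
```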
Typical results for the eigenvalues are shown in Fig. 3. In this figure, we compare the numerical results with the theoretical formula (4.1). The good agreement demonstrates the usefulness of the asymptotic approach to obtain valuable estimates for the condition of resonance.
Stability diagram
Lamb-Oseen vortex
In this section, we assume that there is no axial flow. The underlying vortex is then a Lamb-Oseen vortex. For this vortex, Kelvin mode properties have been documented in Le [START_REF] Dizès | An asymptotic description of vortex Kelvin modes[END_REF] and [START_REF] Fabre | The Kelvin waves and the singular modes of the Lamb-Oseen vortex[END_REF] for m = 0, 1, 2, 3. It was shown that the singular core modes become strongly damped as soon as the critical layer singularity moves in the vortex core. This gives a constraint on the frequency of the mode B, which has to be small. As a consequence, we immediately see that the only modes (m_A, m_B) that can possibly resonate are the modes (m_A, m_B) = (0, 1). Moreover, the constraint on the frequency implies that only large branch labels of the mode m_A = 0 will be able to resonate with a weakly damped mode m_B = 1. In figure 4, we show the crossing of the first m_A = 0 and m_B = 1 branches in the (k, ω) plane. Only the modes with a damping rate smaller (in absolute value) than 0.01 are in solid lines. We observe that the branch label of the m_A = 0 modes must be 6 or larger to cross the first m_B = 1 branch in the part where it is only weakly damped. The characteristics of these first resonance points are given in table 1. We also give in this table the values of the coefficients of Eq. (3.14) at each resonant point. For each resonant configuration, we can then plot the growth rate Im(ω) of the curvature instability as a function of the wavenumber k for any Re and ε. An example of such a plot is provided in Fig. 5. In this figure, we have plotted only the first four resonant configurations. Other configurations have been computed, corresponding to labels [2, 6], [2, 7], etc., but their growth rates were found to be much weaker for Re ≲ 10^5. The spatial structure of the most unstable resonant configurations is also shown in Fig. 5. We have plotted the vorticity contours for a particular phase which maximizes the relative amplitude of the Kelvin mode m_A = 0. This mode is then clearly visible in each case. If we had chosen a phase such that e^{iks-iωt} = i, we would have seen the mode m_B = 1 only.
We have systematically computed the maximum growth rate and obtained the most unstable mode characteristics for all ε ≲ 0.22 and Re ≲ 10^5. The result is displayed in Fig. 6 where the maximum growth rate is shown in the (ε, Re) plane. The labels of the most unstable configurations are also shown in this plot. We can note that only 3 resonant configurations can become the most unstable, corresponding to the crossing of the first branch of the Kelvin mode m_B = 1 with the 7th to 9th branch of the Kelvin mode m_A = 0. In particular, the resonant configuration [6, 1] observed in Fig. 5 never becomes the most unstable configuration although this configuration possesses the largest coupling coefficient N (see table 1). This is directly related to the property mentioned above: the critical layer damping rate Im(ω_B) of the mode m_B = 1 is too large.
Figure 6 provides the stability diagram of the Lamb-Oseen vortex with respect to the curvature instability. It is important to emphasize the large value of the Reynolds number needed for instability. Even for a value as large as ε = 0.2, the critical Reynolds number for instability is Re c ≈ 6000. We shall see in the next section that axial flow will strongly decrease this value.
(Figure: growth rate Im(ω) versus k for the resonant configurations [6, 1], [7, 1], [8, 1] and [9, 1].)
Effects of the axial flow
The characteristics of the Kelvin modes strongly vary with the parameter W_0. Additional branch crossings involving smaller branch labels are obtained as W_0 increases. As explained in the previous section, resonances between m_A = 1 and m_B = 2, as well as between m_A = 2 and m_B = 3, become a priori possible (see Fig. 2). However, they involve very high branch labels, which implies that they never become the most unstable modes for moderate Reynolds numbers (Re ≲ 10^5).
(Figure 7: growth rate Im(ω) versus k for W0 = 0.2 and W0 = 0.4, with the labels of the resonant configurations; vorticity contours of the main resonant modes are also shown.)
For the parameters W_0 = 0.1, 0.2, 0.3, 0.4 and 0.5, we have considered the crossing points of the seven first branches of the Kelvin modes m_A = 0 and m_B = 1. Each crossing point corresponds to a mode resonance. At each crossing point, we have computed the coefficients of the growth rate expression. In Fig. 7, we have plotted the growth rate curves obtained from Eq. (3.14) for W_0 = 0.2 and 0.4, and for ε = 0.2 and Re = 5000. Contrary to the Lamb-Oseen vortex, more resonant configurations can now become unstable. Moreover, they involve smaller branch labels. The spatial structure of the main resonant configurations has also been provided in Fig. 7 for this set of parameters. As in Fig. 5, we have plotted the vorticity contours for a particular phase which maximizes the relative amplitude of the Kelvin mode m_A = 0. Note that the spatial structure of the resonant mode [3, 1] is different for W_0 = 0.2 and W_0 = 0.4: this difference is not only associated with the different values of the coefficient B/A obtained from (3.6), but also with an effect of W_0 on the Kelvin modes.
If we take the maximum value of the growth rate over all possible k for each ε and Re, we obtain the plots shown in Fig. 8. The same colormap and contour levels have been used as in Fig. 6 for comparison. We clearly see that the growth rates are larger in the presence of axial flow. The region of instability is also much larger. In these plots, we have indicated the labels of the most unstable modes. As for the Lamb-Oseen vortex, the most unstable configuration changes as ε or Re varies. However, the branch labels of the Kelvin modes are now smaller in the presence of axial flow. This property explains in part the larger growth rates of the configurations with jet. Indeed, the viscous damping of the modes with the smallest labels is the weakest. The impact of viscosity is therefore weaker on these modes. Yet, the resonant configurations with the smallest labels are not necessarily the most unstable because they may also exhibit a larger critical layer damping, or a smaller coupling coefficient N [see equation (3.14)].
In tables 2 and 3 of appendix C, we have provided the characteristics of the main resonant configurations for W 0 = 0.2 and W 0 = 0.4. The data for the other resonant configurations and for other values of W 0 are available as supplementary material.
Competition with the elliptic instability in a vortex ring
The results obtained in §4 can readily be applied to the vortex ring by using ε = a/R where R is the radius of the ring and a the core radius.
As first shown by [START_REF] Widnall | The instability of short waves on a vortex ring[END_REF], the vortex ring is also subject to the elliptic instability. This instability appears at the order ε², so it is a priori smaller. Yet, the short-wavelength instability observed experimentally in a vortex ring without swirl has always been attributed to the elliptic instability (see the review by [START_REF] Shariff | Vortex rings[END_REF]). It is therefore natural to provide a more precise comparison of the growth rates of both instabilities.
In BRLD16, we have obtained theoretical predictions for the elliptic instability in a vortex ring with a Batchelor profile. As for the curvature instability, growth rate contour plots can be obtained for the elliptic instability in a (ε, Re) plane using the data of this paper. It should be noted that an error of a factor 2 was found in some of the coefficients of the elliptic instability growth rate formula. This error, which is corrected in appendix D, does not affect the main conclusion of this paper but modifies the relative importance of the elliptic instability with respect to the curvature instability.
The comparison of the elliptic instability with the curvature instability is shown in Fig. 9 for three values of the axial flow parameter (W_0 = 0, 0.2, 0.4). In this figure, we have plotted the largest value of both instability growth rates in the (ε, Re) plane. We have also indicated where each instability appears and becomes dominant over the other one. Interestingly, we observe that depending on the value of W_0 the region of dominance of the curvature instability changes. For the case without axial flow [Fig. 9(a)], the elliptic instability domain is larger than the curvature instability domain and the elliptic instability is always the dominant instability. For the other two cases W_0 = 0.2 and W_0 = 0.4, the situation is different: there is a balance between both instabilities. For both cases, the curvature instability is dominant over the elliptic instability for small ε while it is the opposite for large ε. Yet, there are some differences between both cases. For W_0 = 0.2, we observe that the curvature instability is the first instability to appear as Re is increased for all ε < 0.2. For W_0 = 0.4, the elliptic instability domain is larger and extends to smaller values of the Reynolds number than for the other two cases. It is also the dominant instability for all Reynolds numbers as soon as ε is larger than 0.1.
These plots have interesting implications. First, it explains why the curvature instability has never been observed in vortex ring without swirl. For such a vortex ring, the elliptic instability is always stronger than the curvature instability. Second, it implies that the curvature instability should be visible in a vortex ring with swirl if ε is smaller than 0.1 and the Reynolds number larger than 10000.
It should also be noted that due to the different inviscid scalings, which are in ε for the curvature instability growth rate and in ε 2 for the elliptic instability growth rate, the curvature instability should always become dominant over the elliptic instability whatever W 0 if ε is sufficiently small and the Reynolds number sufficiently large. This tendency is clearly seen in figures 9(b,c) for W 0 = 0.2 and W 0 = 0.4. For W 0 = 0 (fig. 9(a)), the change of dominance of both instabilities occurs for a much larger Reynolds number.
Conclusion
In this work, we have provided the characteristics of the curvature instability for a Batchelor vortex for several axial flow parameters. We have shown that although the same resonant coupling is active as in the Rankine vortex, the characteristics of the resonant configurations are very different owing to the critical layer damping of many Kelvin modes. We have shown that this effect precludes the resonance of Kelvin modes with azimuthal wavenumbers larger than m = 3. Moreover, when it occurs, the resonance of modes (m_A, m_B) = (1, 2) or (2, 3) involves a Kelvin mode with a very high complexity (large branch label) which is strongly sensitive to viscous effects. For moderate Reynolds numbers (Re ≲ 10^5), we have then found that the most unstable configuration always involves Kelvin modes of azimuthal wavenumbers m_A = 0 and m_B = 1. We have analysed the condition of resonance of the 7 first branches (9 for the Lamb-Oseen vortex) for several axial flow parameters to identify the most unstable configuration.
For the case without axial flow (Lamb-Oseen vortex), we have shown that the most unstable configuration involves the first branch of the Kelvin mode of azimuthal wavenumber m B = 1 and the seventh to nineth branch of the Kelvin mode of azimuthal wavenumber m A = 0, depending on the Reynolds number and ε (for Re 10 5 ). The high value of the branch label implies a larger viscous damping and therefore a weaker growth rate of the curvature instability for this case. In the presence of axial flow, resonant configura-tions with smaller branch labels were shown to become possible. The instability growth rate was then found to be larger than without axial flow. We have presented the characteristics of the most unstable configurations for two axial flow parameters W 0 = 0.2, W 0 = 0.4. The data provided as supplementary material can be used to obtain the instability characteristics for other values of W 0 (W 0 = 0, 0.1, 0.2, 0.3, 0.4, 0.5).
We have applied our results to the vortex ring and analysed the competition of the curvature instability with the elliptic instability. We have shown that the elliptic instability is always dominant without axial flow. However, the situation changes in the presence of axial flow which provides hope in observing this instability in vortex rings with swirl.
The present results can also be applied to helical vortices as they only depend on the local vortex curvature. By contrast, the elliptic instability characteristics in helices depend on the helix pitch and on the number of helices (Blanco-Rodríguez & Le Dizès 2016). Whether the curvature instability dominates the elliptic instability must then be analysed on a case by case basis. All the elements to perform such an analysis are now available.
Our analysis has been limited to a particular model of vortices. In the very large Reynolds number context of aeronautics, other models have been introduced to describe the vortices generated by wing tips [START_REF] Moore | Axial flow in laminar trailing vortices[END_REF][START_REF] Spalart | Airplane trailing vortices[END_REF]. It would be interesting to analyse the occurrence of the curvature instability in these models as well as the competition with the elliptic instability (Fabre & Jacquin 2004a;[START_REF] Feys | Elliptical instability of the Moore-Saffman model for a trailing wingtip vortex[END_REF]. 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0
, P = W (0) 0 0 0 0 W (0) 0 0 0 0 W (0) 1 0 0 -1 0 , ( B 1)
M(-i∂ϕ) = Ω (0) i∂ ϕ -2 Ω (0) 0 ∂ r -ζ (0) Ω (0) i∂ ϕ 0 i r ∂ ϕ -W (0) r 0 Ω (0) i∂ ϕ 0 1 r + ∂ r -i r ∂ ϕ 0 0 , (B 2) V(-i∂ϕ, -i∂ s ) = ∆ - 1 r 2 2i r 2 ∂ ϕ 0 0 2i r 2 ∂ ϕ ∆ - 1 r 2 0 0 0 0 ∆ 0 0 0 0 0 , ( B
N
(1)
± (-i∂ϕ, -i∂ s ) = 1 2 D (1) ± ± U (1) r U (1) r + 2 V (1) r -2W (0) 0 V (1) r + V (1)
r D
(1)
± ± V (1) r ± U (1) r ±2W (0) 0 W (1) r -W (0) ± W (1) r ∓ W (0) D (1) ± ∓ V (0) -ri∂ s 1 ±1 ri∂ s 0 , ( B 4)
where
D
(1)
± = ±U (1) ∂ r - V (1)
r i∂ ϕ -T w i∂ s , T w = W (1) + rW (0) , (B 5) 6)
T v = V (1) + rV (0) , ∆ = ∂ 2 r + 1 r ∂ r + 1 r 2 ∂ 2 ϕ + ∂ 2 s . ( B
Figure 1 .
1 Figure 1. Sketch of the vortex structure and definition of the local Frenet frame (adapted from BRLD16).
Figure 3 .
3 Figure 3. Analysis of the branch crossing for the Batchelor vortex at W0 = 0.2. Plot of Re(ω) versus kW0 of the first branches of the Kelvin modes of azimuthal wavenumber mA (in blue) and mB = mA + 1 (in green). The branch labels are also indicated. Solid lines: numerical results. Dashed lines: WKBJ predictions. The domains shown in figure 2 where branch crossings are expected have also been indicated. (a): (mA, mB) = (0, 1); (b): (mA, mB) = (1, 2).
Figure 4 .
4 Figure 4. Frequency versus wavenumber of the Kelvin modes of the Lamb-Oseen vortex for mA = 0 (blue) and mB = 1 (red) in the frequency-wavenumber domain where resonance exists. The real part of the frequency is plotted in solid lines when |Im(ω)| < 0.01 (neutral or weakly damped modes) and in dotted lines when |Im(ω)| > 0.01 (strongly damped modes).
Figure 5 .Figure 6 .
56 Figure 5. Top: Temporal growth rate of the curvature instability as a function of the axial wavenumber for the Lamb-Oseen vortex (W0 = 0) at ε = 0.1, Re = ∞ (dashed line) and Re = 10 5 . The label [lA, lB] corresponds to the branch indices of the resonant configuration. It means that the resonant configuration is formed of the lAth branch of the Kelvin mode mA = 0 and the lBth branch of the Kelvin mode mB = 1. Bottom: Vorticity contours in a (x, y) cross section of modes [7, 1], [8, 1], and [9, 1] for the parameters indicated by a star on the top graph (that is at k = kc). The vorticity is defined by (3.2) at a time t and location s such that e iks-iωt = 1 with A = 1. The black circle indicates the vortex core radius.
2 4Figure 7 .
27 Figure 7. Top: Temporal growth rate of the curvature instability as a function of the axial wavenumber for the Batchelor vortex at ε = 0.2 and Re = 5000 for W0 = 0.2 (left) and W0 = 0.4 (right). Bottom: Vorticity contours in a (x, y) cross section of modes [3, 1], [4, 1], and [4, 2] for W0 = 0.2 (left) and modes [3,1] and [2.2] for W0 = 0.4 (right). See caption of Fig. 5 for more information.
Figure 8 .Figure 9 .
89 Figure 8. Maximum growth rate contours of the curvature instability in the (ε,Re) plane for the Batchelor vortex. (a): W0 = 0.2; (b): W0 = 0.4. See caption of Fig. 6.
Table 1 .
1 Characteristics of the first resonant configurations (mA, mB) = (0, 1) of label[lA, lB] for the Lamb-Oseen vortex (W0 = 0).
3)
Table 2 .
2 Same as table 1 for the Batchelor vortex with W0 = 0.2.
† Email address for correspondence: [email protected]
Acknowledgments
This work received support from the French Agence Nationale de la Recherche under the A*MIDEX grant ANR-11-IDEX-0001-02, the LABEX MEC project ANR-11-LABX-0092 and the ANR HELIX project ANR-12-BS09-0023-01.
Appendix A. Dipolar correction
The first order correction is given by
where
with
We have used the index r to denote derivative with respect to r (for example
Appendix B. Operators
The operators appearing in equation (2.5) are given by
Appendix C. Tables
In this section, we provide the coefficients of the growth rate formula (3.14) for the dominant instability modes for the three cases W 0 = 0, W 0 = 0.2 and W 0 = 0.4.
Blanco-Rodríguez and Le Dizès
Appendix D. Elliptic instability of a curved Batchelor vortex -Corrigendum
Due to a normalisation mistake, a systematic error has been made in the values of the coefficients R AB and R BA in the dispersion relation (4.7) of Blanco-Rodríguez & Le [START_REF] Blanco-Rodríguez | Elliptic instability of a curved Batchelor vortex[END_REF]. The correct values are twice those indicated in this paper for all the modes. This modifies the values given in table 2 and formulas (C2n-q), (C3n-q), (C4n-q). For instance, in table 2 the correct value of R AB for the mode (-2, 0, 1) at
This error affects the y-scale of the plots (c) and (d) of figure 5 which has to be multiplied by two, and those of figure 6, which has to be divided by two. It also changes all the figures obtained in section 8. The correct figures (available on request) are nevertheless qualitatively similar if we multiply the y-scale of all the plots by a factor 2.
The comparison with [START_REF] Widnall | The instability of the thin vortex ring of constant vorticity[END_REF] done in section 8.1 for a vortex ring is also slightly modified. With the correct normalisation, the inviscid result of [START_REF] Widnall | The instability of the thin vortex ring of constant vorticity[END_REF] for the Rankine vortex is σ max /ε 2 = [(0.428 log(8/ε) -0.455) 2 -0.113] 1/2 while we obtain for the Lamb-Oseen vortex σ max /ε 2 = 0.5171 log(8/ε) -0.9285. The Lamb-Oseen vortex ring is thus less unstable than the Rankine vortex ring as soon as ε > 0.039 for the same reason as previously indicated. | 43,922 | [
"8388"
] | [
"196526",
"196526"
] |
01430563 | en | [
"phys"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01430563/file/article13.pdf | S Le Dizès
E Villermaux
Capillary jet breakup by noise amplification
A liquid jet falling by gravity ultimately destabilizes by capillary forces. Accelerating as it falls, the jet thins and stretches, causing the capillary instability to develop on a spatially varying substrate. We discuss quantitatively the interplay between instability growth, jet thinning and longitudinal stretching for two kinds of perturbations, either solely introduced at the jet nozzle exit, or affecting the jet all along its length. The analysis is conducted for any values of the liquid properties for sufficiently large flow rate. In all cases, we determine the net gain of the most dangerous perturbation for all downstream distances, thus predicting the jet length, the wavelength at breakup and the resulting droplet size.
Introduction
Seemingly simple questions are not always the simplest to answer quantitatively. A canonical illustration of this affirmation is the apparently simple problem of a liquid thread, falling from a nozzle by it own weight under the action of gravity, as shown in figure 1. As it falls, the thread eventually fragments into drops, a fact that we understand because it has locally a columnar shape, and thus suffers a capillary instability. But how far from the nozzle exit does breakup happen ? Even a distracted look at the possible scenarii lets one glimpse the potential difficulties of a precise analysis: a distance z is the product of a velocity u by a time τ z = u τ.
(1.1) Capillary breakup occurs within a time τ which depends on the thread radius h, on the liquid density ρ, viscosity η and surface tension γ, and we know that most of this time is spent at developing an instability about the quasi-columnar shape of the thread, the subsequent phenomena occurring around the pinching instant at the drops separation being comparatively much faster [START_REF] Eggers | Physics of fluid jets[END_REF]. The time τ is either the capillary time ρh 3 /γ when inertia and surface tension are solely at play, or the viscous capillary time ηh/γ if viscous effects dominantly slow down the unstable dynamics. When the jet issues from the nozzle ballistically, keeping its velocity and radius constant, the problem is indeed simple, and amounts to estimate correctly the relevant timescale τ to compute the so-called 'Liquid intact length' of the jet (see the corresponding section in [START_REF] Eggers | Physics of fluid jets[END_REF] for a complete discussion and experimental references, including the case when the jet suffers a shear instability with the surrounding environment). Subtleties arise when the axial velocity of the jet depends on axial distance z.
A jet falling in the direction of gravity accelerates. If fed at a constant flow rate at the nozzle, stationarity implies that the thread radius thins with increasing distances from the exit. Therefore, if both u and h depend on downstream distance, which estimates will correctly represent the breakup distance z in equation (1.1) ? Those at the nozzle exit, those at the breakup distance, or a mixture of the two ? As the radius thins, the instability h(z, t) h 0 u 0 z λ max d max Figure 1: Four successive panels showing a liquid jet (density 950 kg/m 3 , viscosity η = 50×10 -3 Pa s) issuing from a round tube with radius h 0 = 2 mm at velocity u 0 = 1 cm/s, stretching in the gravity field (aligned with the z direction), and thinning as it destabilizes through the growth of bulges separated by λ max at breakup, producing stable drops of diameter d max .
may switch from an inertia to a viscous dominated régime. Then, which timescale τ should be considered to compute z ?
The detailed problem is even more subtle : The capillary instability amplifies preferentially a varicose perturbation, adjacent bulges along the thread feeding on the thinner ligament linking them (figure 1). The most amplified wavelength is proportional to h, the other wavelengths having a weaker growth rate. Since the jet accelerates, mass conservation of the incompressible liquid also implies that the distance between two adjacent instability crests increases with larger distances from the nozzle exit. The capillary instability has thus to compete with another phenomenon, namely jet stretching, characterized by another timescale (∂ z u) -1 . There are thus three timescales which may potentially contribute to τ , and which all depend intrinsically on the distance to the nozzle. Deciding a-priori which one will dominate and how is a hazardous exercise.
Deciphering the relative importance of the coupled effects mentioned above requires an instability analysis accounting for both the substrate deformation (jet stretching), and for the modification of the local instability dispersion relation as the jet thins (to describe the growing relative influence of viscosity). That question has been envisaged in the very viscous limit by [START_REF] Tomotika | Breaking up of a drop of viscous liquid immersed in another viscous fluid which is extending at a uniform rate[END_REF], for the particular case where u increases linearly with z by [START_REF] Frankel | Stability of a capillary jet with linearly increasing axial velocity (with application to shaped charges)[END_REF][START_REF] Schlichting | Boundary Layer Theory[END_REF], and more recently by [START_REF] Senchenko | Shape and stability of a viscous thread[END_REF], [START_REF] Sauter | Stability of initially slow viscous jets driven by gravity[END_REF] and [START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF] for a gravitationally accelerated jet.
These last authors quantified the maximum gain that perturbations can reach at a given location using a local plane wave decomposition (WKBJ approximation). By choosing adequately the gain needed for breakup, they were able to collapse measurements of the breakup distance on a theoretical curve. They also obtained an asymptotic expression in the viscous regime consistent with the anticipated scaling law which compares the viscous capillary timescale based on the current jet radius to the stretching time of the jet.
In the present work, we use a similar approach as [START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF] by searching maximum perturbation gains using WKBJ approximations. In addition to providing much more details, we extend their analysis in several ways. We first consider all the regimes ranging from very viscous to inviscid. We then compare the maximum gain and the most dangerous frequency of the perturbations for two types of excitation: (1) nozzle excitation (the perturbation is introduced at the nozzle only); (2) background noise (the perturbation is present everywhere). We finally provide predictions for the breakup wavelength and the resulting droplet size.
The paper is organized as follows: In §2, we present the mathematical formulation by providing the model for the base flow and the perturbations. An expression of the perturbation gain is derived using the WKBJ framework. In §3, the result of the optimization procedure maximizing the gain is provided for each type of excitation. The break up distance, the most dangerous frequency, the wavelength and the droplet size are analysed as functions of the gain and fluid viscosity (Ohnesorge number Oh). Asymptotic formulas for weak and strong viscosity (small and large Oh) are provided in this section, though their derivation is moved in an appendix at the end of the paper. For nozzle excitation, a peculiar behavior of the optimal perturbation observed for intermediate Ohnesorge numbers (0.1 < Oh < 1) is further discussed in §4. We show that the peak of the breakup wavelength obtained for Oh ≈ 0.3 is related to a property of the local dispersion relation outside the instability band. The results are compared to local predictions in §5 and applied to realistic configurations in §6.
Mathematical formulation
We consider an axisymmetric liquid jet falling vertically by the action of gravity g. The jet has a radius h 0 and a characteristic velocity u 0 at the nozzle (figure 1). The fluid has a density ρ, a viscosity ν = η/ρ, and a surface tension γ. The surrounding environment is considered as evanescent, and is neglected.
Base Flow
Spatial and time variables are non-dimensionalized using the radius h 0 , and the capillary time τ c = ρh 3 0 /γ respectively. The base flow is governed by three parameters
Q = u 0 ρh 0 γ , The flow rate, (2.1a) Oh = ν ρ γh 0 , The Ohnesorge number, (2.1b) Bo = ρgh 2 0 γ
The Bond number.
(2.1c)
One could alternatively use the Weber number We = Q 2 instead of the dimensionless flow rate. The Ohnesorge number is the ratio of the viscous capillary timescale to the capillary timescale. We describe the liquid jet by the one-dimensional model [START_REF] Trouton | On the coefficient of viscous traction and its relation to that of viscosity[END_REF][START_REF] Weber | Zum zerfall eines flüssigkeitsstrahles[END_REF][START_REF] Eggers | Physics of fluid jets[END_REF])
∂A ∂t + ∂ (Au) ∂z = 0, (2.2a) ∂u ∂t + u ∂u ∂z = 3 Oh 1 A ∂ ∂z A ∂u ∂z + ∂K ∂z + Bo, (2.2b) with K = 4AA zz -2A 2 z [4A + A 2 z ] 3/2 - 2 [4A + A 2 z ] 1/2 , (2.3)
where u(z, t) is the local axial velocity, A = h 2 is the square of the local radius h(z, t), z is the axial coordinate oriented downward, t is the time variable, A z and A zz are respectively, the first and second derivative of A with respect to z. The boundary conditions at the nozzle are
A(z = 0, t) = 1, u(z = 0, t) = Q. (2.4)
The stationary base flow satisfies
∂ (A 0 U 0 ) ∂z = 0, (2.5a) U 0 ∂U 0 ∂z = 3 Oh 1 A 0 ∂ ∂z A 0 ∂U 0 ∂z + ∂K 0 ∂z + Bo . (2.5b)
The first equation gives
A 0 U 0 = Q. (2.6)
We will consider the régime where the jet base flow is inertial and given at leading order by
U 0 ∂U 0 ∂z = Bo . (2.7)
This hypothesis amounts to neglect viscous and curvature effects in the jet evolution.
Because it accelerates as it falls, the jet gets thinner and slender. Curvature effects along z thus soon vanish (unless the jet is initially very small, see [START_REF] Rubio-Rubio | On the thinnest steady threads obtained by gravitational stretching of capillary jets[END_REF], and viscous stresses applying on the jet cross section are also soon overcomed by the gravity force (beyond a physical distance from the nozzle of order νu 0 /g, see [START_REF] Clarke | The asymptotic effects of surface tension and viscosity on an axiallysymmetric free jet of liquid under gravity[END_REF]. Equations (2.6) and (2.7) thus give
U 0 (z) = 2 Bo z + Q 2 , (2.8a) A 0 (z) = Q 2 Bo z + Q 2 . (2.8b)
Plugging these expressions in the viscous and curvature terms of equation (2.2b), one observe that they are both decreasing with z. Viscous and curvature terms are therefore negligible along the entire jet, if they are already negligible in the vicinity of the nozzle exit. This is satisfied if the flow rate is sufficiently large, and more precisely if the following conditions are met
Q 1, (2.9a) Q 2 Bo, (2.9b) Q 3 Bo Oh . (2.9c)
Note that if the parameters Q, Bo and Oh are defined from the local values of U 0 and A 0 , conditions (2.9a-c) are always satisfied sufficiently far away from the nozzle (e.g. [START_REF] Sauter | Stability of initially slow viscous jets driven by gravity[END_REF]. Since the phenomena we will describe result from a dynamics which integrates over distances much larger than the jet initial radius, we use here (2.8) as a good approximation of the base flow everywhere.
For simplicity, we assume in the sequel that Q is the only large parameter, Bo and Oh being of order 1 or smaller. Both U 0 and A 0 then vary with respect to the slow variable
Z = z z o + 1, (2.10) as U 0 (Z) = Q √ Z, (2.11a) A 0 (Z) = 1/ √ Z, (2.11b)
where
z o = Q 2 2 Bo (2.12)
is the (large) nondimensionalized variation scale of the base flow.
Perturbations
We now consider linear perturbations (u p , A p ) in velocity and cross-section to the above base flow. These perturbations satisfy the linear system
∂A p ∂t = - ∂(A p U 0 + A 0 u p ) ∂z , (2.13a
)
∂u p ∂t + ∂u p U 0 ∂z = 3 Oh A 0 ∂ ∂z A 0 ∂u p ∂z + A p ∂U 0 ∂z - A p A 0 ∂ ∂z A 0 ∂U 0 ∂z + ∂L(A p ) ∂z , (2.13b)
where L(A p ) is the linear operator obtained by linearizing K -K 0 around A 0 . We want to analyze these perturbations in the 'jetting' regime when the jet is globally stable. More precisely, we do not consider the global transition that leads to dripping and which has been studied elsewhere [START_REF] Dizès | Global modes in falling capillary jets[END_REF][START_REF] Sauter | Stability of initially slow viscous jets driven by gravity[END_REF][START_REF] Rubio-Rubio | On the thinnest steady threads obtained by gravitational stretching of capillary jets[END_REF]. We are interested in the growth of the perturbations that give rise to the formation of droplets far away from the nozzle. In this regime, the jet is convectively unstable: the perturbations are advected downstream as they grow. We expect droplets to form when the perturbation has reached a sufficiently large amplitude. Of particular interest is the maximum amplitude that perturbations can reach at a given location z f from a fixed level of noise. This amounts to calculate the maximum spatial gain that perturbations can exhibit at a given downstream location. For this purpose, we will consider two situations:
(a) Fluctuations are mainly present at the nozzle as in laboratory experiments where the jet nozzle is vibrated for instance [START_REF] Sauter | Stability of initially slow viscous jets driven by gravity[END_REF]. In that case, we are interested in the spatial gain at z f of perturbations generated at the nozzle z = 0.
(b) The jet is subject to a background noise which acts at every z location. In that case, we are interested in the maximum gain at z f of perturbations which originates from anywhere along the jet. In other words, we are interested in the spatial gain between z i and z f , where z i is chosen such that the gain is maximum. Obviously, the gain in that case is larger than in (a), since z = 0 is one particular excitation location among the many possible in that case.
The base flow is stationary; a temporal excitation at a given location with a fixed frequency leads to a temporal response in the whole jet with the same frequency. As the jet can be forced on A or on u, we expect two independent spatial structures associated with each frequency. If we write (u p , A p ) = (ũ, Ã)e -iωt + c.c.,
(2.14) the normalized solution forced in u at the nozzle will satisfy Ã(z = 0) = 0, ũ(z = 0) = 1, while the one forced in A at the nozzle will satisfy Ã(z = 0) = 1, ũ(z = 0) = 0. A linear combination of these two solutions can be used to obtain the normalized solution forced in u or forced in A at any location z i .
We then define a spatial gain in A from z i to z f from the solution forced in
A at z i by G A (z i , z f ) = | Ã(z f )|. Similarly, we define a spatial gain in u from z i to z f from the solution forced in u at z i by G u (z i , z f ) = |ũ(z f )|.
Both U 0 and A 0 depend on the slow spatial variable Z. Anticipating that the typical wavelength will be of order 1, a local plane wave approximation (WKBJ approximation) can be used [START_REF] Bender | Advanced mathematical methods for scientists and engineers[END_REF]. In other words, each time-harmonic perturbation amplitude can be written as a sum of expressions of the form (WKBJ approximation)
(ũ, Ã) = (v(Z), a(Z))e izo Z k(s)ds , (2.15)
where k(Z), v(Z) and a(Z) depend on the slow variation scale of the base flow. With the WKBJ ansatz, the perturbations equations become at leading order in 1/z o
(-iω + ikU 0 )a + ikA 0 v = 0, (2.16a) (-iω + ikU 0 )v = -3 Oh k 2 v + ik 2A 3/2 0 (1 -k 2 A 0 )a. (2.16b)
These two equations can be simultaneously satisfied (by non-vanishing fields) if and only if
(-iω + ikU 0 ) 2 + 3 Oh k 2 (-iω + ikU 0 ) - k 2 2 √ A 0 (1 -k 2 A 0 ) = 0.
(2.17)
This equation provides k as a function of Z. Expressions for v(Z) and a(Z) can be obtained by considering the problem to the next order (see appendix B).
Among the four possible solutions to (2.17), only the two wavenumbers corresponding to waves propagating downstream are allowed. As explained in [START_REF] Bers | Space-time evolution of plasma instabilities-absolute and convective[END_REF] (see also [START_REF] Huerre | Local and global instabilities in spatially developing flows[END_REF], these wavenumbers are the analytic continuations for real ω of functions satisfying m(k) > 0 for large m(ω). They are well-defined in the convective regime that we consider here.
If ω = ωQ with ω = O(1), the wavenumbers associated with the downstream propagating waves can be expanded as
k ∼ k o + k 1 Q (2.18)
where k o is found to be identical for both waves:
k 0 = ω U 0 = ωA 0 . (2.19)
At the order 1/Q, we get
k 1 = -i k 0 A 3/4 0 √ 2 1 -A 0 k 2 0 + 9 Oh 2 √ A 0 k 2 0 2 - 3 Oh k 2 0 A 0 2 .
(2.20)
The two wavenumbers are obtained by considering the two possible values of the square root. Although both waves are needed to satisfy the boundary conditions at the nozzle, the solution is rapidly dominated downstream by a single wave corresponding to the wavenumber with the smallest imaginary part.
Both the solution forced in A and the solution forced in u are thus expected to have a similar WKBJ approximation (2.15). The main contribution to the two gains G A (z i , z f ) and G u (z i , z f ) is therefore expected to be the same and given by the exponential factor
G(z i , z f ) = e S(Zi,Z f ) , (2.21)
where
S(Z i , Z f ) = -z o Z f Zi m(k)(Z) dZ = - z o Q Z f Zi m(k 1 )(Z) dZ. (2.22)
This implicitly assumes that 1), the WKBJ approach remains valid but the gain (2.21) is of same order of magnitude as the variation of v and a. In that case, one should a priori take into account the amplitude v and a provided in Appendix B and apply explicitly the boundary conditions at the forcing location. It leads to different gains for a forcing in velocity and a forcing in radius.
z o /Q = Q/2 Bo is large. When z o /Q = O(
The gain G is associated with the temporal growth of the local perturbation. Indeed, S can be written as
S = z o Z f Zi σ(k l (Z), Oh l (Z)) τ c l (Z)U 0 (Z) dZ, (2.23)
where σ(k, Oh) is the growth rate of the capillary instability for the 1D model:
σ(k, Oh) = k √ 2 1 -k 2 + 9 Oh 2 k 2 2 - 3 Oh k 2 2 .
(2.24)
The local wavenumber k l (Z), local Ohnesorge Oh l (Z) and local capillary time scale τ c l (Z) vary as
k l (Z) = ωZ -3/8 , (2.25a) Oh l (Z) = Oh Z 1/8 , (2.25b) τ c l (Z) = Z -3/8 . (2.25c)
In the following, we write S as
S = z o √ 2Q S(Z f , Z i , Oh, ω), (2.26) with S(Z i , Z f , ω, Oh) = ω Z f Zi z -7/8 1 -ω 2 z -3/2 + 9 Oh 2 ω 2 2 z -5/4 - 3 Oh ω √ 2 z -5/8 dz.
(2.27) Our objective is to find the frequency ω that gives the largest value of S at a given Z f . For the type of perturbations in case (a) (nozzle excitation), Z i = 1, and we are looking for
S (a) max (Z f , Oh) = max ω S(1, Z f , ω, Oh).
(2.28)
For the type of perturbations in case (b) (background noise), the gain is maximized over all Z i between 1 and Z f , so
S (b) max (Z f , Oh) = max ω max 1≤Zi<Z f S(Z i , Z f , ω, Oh).
(2.29) For z > 1, the integrand in the expression of S is always positive when ω < 1. This means that as long as ω
(a)
max ≤ 1, the gain cannot be increased by changing Z i , and we have S
(b) max = S (a) max . When ω (a)
max > 1, the perturbation starts to decrease before increasing further downstream. In that case, the gain can be increased by considering larger Z i . More precisely, Z i has to be chosen such that the integrand starts to be positive which gives Z i = ω 4/3 . In this regime,
S (b) max (Z f , Oh) = max ω S(ω 4/3 , Z f , ω, Oh).
(2.30)
Both S
Quantitative results
The results of the optimization procedure are shown in figure 2 for both nozzle excitation and background noise. Both the maximum gain and the most dangerous frequency are plotted versus the rescaled distance z f /z o to the nozzle for Oh ranging from 10 -4 to 10 3 . The same results are shown as level curves in the (z f /z o , Oh) plane in figure 3. As expected, S max grows as z f /z o increases or Oh decreases (see figure 2(a)). The most dangerous frequency follows the same trend (see figure 2(b)). As already mentioned above, nozzle excitation [case (a)] and background noise [case (b)] provide the same results when ω max ≤ 1. The contour ω max = 1 has been reported in figure 3(a) as a dotted line. On the left of this dotted line, the contours of maximum gain are then the same for both cases. When ω max is larger than 1, background noise gain becomes larger than nozzle excitation gain. The most dangerous frequency for background noise also becomes larger than for nozzle excitation. Note however that significant differences are only observed in an intermediate regime of Oh (typically 10 -2 < Oh < 1 ) for large values of S (S > 5) (see figure 3).
Figure 3 can be used to obtain the distance of the expected transition to jet breakup and droplet formation. Assume that a gain of order G t ≈ e 7 , that is S t = 7 is enough for the transition, a value commonly admitted in boundary layers instabilities [START_REF] Schlichting | Boundary Layer Theory[END_REF]. From (2.26), we can deduce the value of S needed for transition If the fluid collapses in a single drop between two pinch-off, the distance between two droplets is given by the wavelength at breakup λ max = 2π/A 0 (z f )/ω max , deduced from (2.19), and the droplet diameter is
S t = S t √ 2Q/z o = S t 2 √ 2 Bo /Q (3.1) ≈ 20 Bo /Q (3.2) = 20 τ c g/u 0 , (3.3
d max ∼ [6λ max A 0 (z f )] 1/3 ∼ 12π ω max 1/3 . (3.4)
These two quantities are plotted in figure 4 for a few values of S t as a function of Oh.
What is particularly remarkable is that the drop diameter remains mostly constant in the full interval 10 -3 < Oh < 10 2 whatever the noise level for both cases [figure 4(b)]. Yet, in this interval of Oh, the breakup distance z f varies by a factor 1000 [figure 3(a)], while the wavelength varies by a factor 20 or more [figure 4(a)]. In the case of background noise, z f and λ max increase with Oh. We observe the same evolution in the case of noise excitation for small S t . However, the curves of both cases depart from each other for large values of S t (for instance S t = 10) with a surprising local peak for case (a) close to Oh ≈ 0.3. As we shall see in section §4, this peak is associated with a larger damping of the perturbation outside the instability range for moderate Oh.
In figures 3 and 4, we have also plotted the asymptotic behaviors of the different quantities obtained for large Oh and small Oh. The details of the derivation are provided in appendix A. We provide below the final result only. This scaling law, which was also derived by [START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF], expresses that breakup occurs when the local capillary instability growth rate overcomes the stretching rate of the jet. Indeed and coming back to dimensional quantities, the velocity and local radius vary far from the nozzle as U 0 ∼ √ 2gz and h ∼ √ Q * /(2gz) 1/4 , respectively where
Q * = U 0 h 2 is the dimensional flow rate.
The local stretching rate is then given by ∂ z U 0 ∼ g/(2z) while the viscous capillary growth rate based on the current radius is of order γ/(ηh) = γ(2gz) 1/4 /(η √ Q * ). The latter overcomes the former at a distance z f of order (η/γ) 4/3 g 1/3 (Q * ) 2/3 . In terms of dimensionless parameters, this gives
z f /h 0 ∝ Oh 4/3 Bo 1/3 Q 2/3 , (3.6)
which is essentially the scaling deduced from (3.5) if one remembers that S t ∝ Bo /Q and z 0 ∝ Q 2 / Bo. In that viscous regime, the most dangerous frequencies are not the same in cases (a) and (b). This implies that the wavelengths λ max at the point of transition, and the droplet diameter d max are also different. For case (a), we obtain from (A 9) and (3.5) ω (a) max ∼ α a S 2/3 t Oh 1/6 , with α a = 3 3/4 2 7/4 ≈ 0.678, (3.7) which gives λ (a) max ∼ β a Oh 1/2 , with β a = 4π 3 1/4 ≈ 16.54, (3.8a)
d (a) max ∼ γ a S -2/9 t
Oh -1/18 , with γ a = π 1/3 3 1/12 2 15/12 ≈ 3.82.
(3.8b)
For case (b), we obtain from (A 11) and (3.5)
ω (b) max ∼ α b S 8/9 t
Oh 2/9 , with α b = 3 2 7/3 ≈ 0.595, (3.9) which gives
λ (b) max ∼ β b S -2/9 t
Oh 4/9 , with β b = 2 31/12 π ≈ 18.83, (3.10a)
d (b) max ∼ γ b S -8/27 t
Oh -2/27 , with γ b = π 1/3 2 13/9 ≈ 3.99.
(3.10b)
A naive local argument like the one leading to equation (3.6) would predict for λ max the most unstable local wavelength at z f . As it will be shown in section §5, this fails at making the correct predictions, precisely because it ignores the stretching history of the fluid particles, and of the corresponding unstable modes. Equation (3.6) is thus consistent with a local argument, but the local argument does not incorporate the whole truth.
Low viscosity (small Oh)
In the weakly viscous regime (Oh 1), both noise and nozzle excitations are expected to give the same breakup distance z f . This distance is well approximated by
z f /z 0 ≈ η 0 S 8/7 t with η 0 ≈ 3.45, (3.11) when z f /z o > 3.74, that is S t > 1.32.
Again, as in the previous viscous limit, this scaling law expresses that breakup occurs when the local capillary instability growth rate overcomes the stretching rate. The local jet stretching rate is still ∂ z U 0 ∼ g/(2z) while the inviscid capillary growth rate based on the current radius is now of order γ/ρh 3 = γ/ρ(2gz) 3/8 /(Q * ) 3/4 . The latter overcomes the former at a distance of order (Q * ) 6/7 g 1/7 (ρ/γ) 4/7 . In terms of dimensionless parameters, it gives
z f /h 0 ∝ Bo 1/7 Q 6/7 , (3.12)
which is essentially the scaling in equation (3.11) with S t ∝ Bo /Q and z 0 ∝ Q 2 / Bo. In this regime, the most dangerous frequency is also the same in both cases and given by ω max = α 0 S 6/7 t , with α 0 ≈ 0.79, (3.13) which gives
λ max ∼ β 0 S -2/7 t
, with β 0 ≈ 14.82, (3.14a)
d max ∼ γ 0 S -2/7 t
, with γ 0 ≈ 3.63.
(3.14b)
Again and for the same reason, naive local scaling fails at representing these scaling laws adequately.
Comparison with 3D predictions
In this section, we focus on the regime of intermediate values of Oh for which the asymptotic expressions do not apply. We address the peculiar behavior of the optimal perturbation in the case of nozzle excitation in this regime. In figure 4(a,b)), we have seen that for S t = 10 both λ max and d max exhibit a surprising kink around Oh ≈ 0.3. The same non-monotonic behavior has also been observed on the break-up distance z f /z 0 as a function of Oh (see figure 3(a)). These surprising behaviors are associated with the particular properties of the perturbations outside the instability domain. Indeed, for large S t , the optimal perturbation is obtained for ω max > 1. The local wavenumber of the perturbation which is ω at the nozzle is then larger than 1 close to the nozzle, that is in the stable regime [see figure 5(a)]. The optimal perturbation excited from the nozzle is thus first spatially damped before becoming spatially amplified. This damping regime explains the smaller gain obtained by nozzle excitation compared to background noise. It turns out that the strength of this damping is not monotonic with respect to Oh and exhibits a peak for an intermediate value of Oh. Such a peak is illustrated in figure 5 where we have plotted the (local) temporal growth rate of the perturbation versus Oh for a few values of the (local) wavenumber. We do observe that for the values of k satisfying k ≥ 1, that is outside the instability band, the local growth rate exhibits a negative minimum for Oh between 0.1 and 1. The presence of this damping regime naturally questions the validity of our 1D model. The 1D model is indeed known to correctly describe the instability characteristics of 3D axisymmetric modes [START_REF] Eggers | Physics of fluid jets[END_REF]. But, no such results exist in stable regimes. In fact, the 1D dispersion relation departs from the 3D dispersion relation of axisymmetric modes when k > 1. This departure is visible in figure 5 where we have also plotted the local growth rate obtained from the 3D dispersion relation given in [START_REF] Chandrasekhar | Hydrodynamic and hydromagnetic stability[END_REF], p. 541. Significant differences are observed but the 3D growth rates exhibit a similar qualitative behavior as a function of Oh. In particular, there is still a damping rate extremum in the interval 0.1 < Oh < 1. We can therefore expect a similar qualitative behavior of the perturbation outside the instability range with the 3D model.
In figure 6, we compare the optimization results for the nozzle excitation obtained with the 1D model with those obtained using the 3D dispersion relation of Chandrasekhar. This is done by replacing the function S in (2.27) by
S (3D) (Z i , Z f , ω, Oh) = √ 2 Oh Z f Zi y 2 (x, J) -x 2 dz, (4.1)
where
x = x(z, ω) = ω z 3/4 , J = J(z, Oh) = 1 Oh 2 z 1/4 , (4.2)
and y = y(x, J) is given by
2x 2 (x 2 + y 2 ) I 1 (x) I 0 (x) 1 - 2xy x 2 + y 2 I 1 (x)I 1 (y) I 1 (y)I 1 (x) -(x 4 -y 4 ) = J xI 1 (x) I 0 (x) (1 -x 2 ). (4.3)
As expected, differences can be observed between 1D and 3D results for the largest value of S t (S t = 10). However, the trends remain the same. Close to Oh ≈ 0.3, the breakup distance exhibits a plateau, the frequency a minimum, the wavelength and the drop diameter a peak. These peaks have a smaller amplitude for the 3D dispersion relation and are slightly shifted to higher values of Oh. For S t = 1, no difference between both models are observed. This can be understood by the fact that the perturbation does not exhibit a period of damping for such a small value of S t . The 1D model therefore perfectly describes the gain of 3D perturbations, which turns out to be the same as for background noise for Oh < 2 (see figure 3(a)).
Comparison with local predictions
In this section, our goal is to compare the results of the optimization procedure with predictions obtained from the local dispersion relation. We have seen in section 2 that the gain can be related to the local temporal growth rate of the perturbation along the jet [see expression (2.23)]. Both the local capillary time scale τ c l and the Ohnesorge number Oh l vary with Z [see expressions (2.25b,c)]. At a location Z, the maximum temporal growth rate (normalized by the capillarity time at the nozzle) is given by
σ max l (Z) = Z 3/8 2 √ 2 + 6 Oh Z 1/8 , (5.1)
and is reached for the wavelength (normalized by h 0 ) (see [START_REF] Eggers | Physics of fluid jets[END_REF])
λ l (Z) = 2π Z 1/4 2 + 3 √ 2 Oh Z 1/8 . (5.2)
If we form a drop from this perturbation wavelength at location, we would then obtain a drop diameter (normalized by h 0 )
d l (Z) = (12π) 1/3 Z 1/4 2 + 3 √ 2 Oh Z 1/8 1/6 . (5.3)
As the local growth rate increases downstream, a simple upperbound of the gain is then obtained by taking the exponential of the product of the maximum growth rate by the time T i needed to reach the chosen location. The time T i is the free fall time given by
T i = Q Bo ( √ Z -1).
(5.4)
In figure 7, we have plotted the product σ max l T i at the location predicted for the transition assuming that a gain e 7 is needed for such a transition. In figure 7(a), this quantity is plotted as a function of the transition location z f /z o . As expected, we obtain the chosen value for the transition (i. e. 7) for small z f /z o . For large z f /z o , the product σ max l T i also goes to a constant for background noise whatever the Ohnesorge number. However, it has a contrasted behavior for nozzle excitation, with an important increase with z f /z o for the value Oh = 0.3.
In figure 7(b), σ max l T i is plotted as a function of Oh, for different values of S t , that is for different values of the ratio Bo /Q in view of (3.1). For large and small Oh we recover the estimates deduced using (3.5) and (3.11):
σ max l T i ∼ 10.5 as Oh → ∞,
(5.5a)
σ max l T i ∼ 20.68 -11.14 S -4/7 t
as Oh → 0.
(5.5b)
For background noise, σ max l T i varies smoothly between these two extreme values. A completely different evolution is observed for nozzle excitation: a local peak forms between 0.1 < Oh < 1 with an amplitude increasing with S t . This phenomenon is related to the damping of the optimal perturbation discussed in the previous section. We have indeed seen that for nozzle excitation, large gain (that is large t ) are obtained for perturbations exhibiting a damping period prior to their growth. Thus, the growth has to compensate a loss of amplitude. The damping being the strongest for intermediate Oh, the transition is pushed the farthest for these values, explaining the largest growth of the Oh = 0.3 curve in figure 7(a) and the peaks of figure 7(b).
We have seen that the optimal procedure provides a wavelength and a droplet size as a function of Z and Oh only. These quantities are compared to the local estimates (5.2) and (5.3) in figure 8. Both nozzle excitation (solid lines) and background noise (dashed lines) are considered for Oh = 0.01, 0.3 and 10. We observe that the local predictions (dotted lines) always underestimate the wavelength and the drop diameter. For the wavelength, the ratio with the local estimate typically increases with z f /z 0 and Oh. The gap is the strongest for the nozzle excitation case, especially for intermediate Oh (see curve for Oh = 0.3) for which the local estimate is found to underestimate the wavelength by a factor as high as 25 for z f /z 0 = 10 3 .
Contrarily to the wavelength, the drop diameter follows the same trend as the local prediction as a function of z f /z 0 . For both noise excitation and background noise, the diameter decreases with the break-up location.
For large or small Oh, the behaviors of the wavelength and drop diameter obtained by the optimization procedure and local consideration can be directly compared using the results obtained in Appendix A. For large Oh, the local prediction reads
λ l /h 0 ∼ β l Oh 1/2 Z -3/16 f , with β l = 2π2 1/4 √ 3 ≈ 12.94, (5.6a)
d l /h 0 ∼ γ l Oh 1/6 Z -11/49 f , with γ l = (2π) 1/3 (3 √ 2) 1/6 ≈ 2.35, (5.6b)
while the optimization procedure gives
λ (a) max /h 0 ∼ β (a)
Oh 1/2 , with β (a) ≈ 16.54, (5.7a)
λ (b) max /h 0 ∼ β (b) Oh 2/3 Z -1/6 f , with β (b) ≈ 22.83, (5.7b) d (a) max /h 0 ∼ γ (a) Oh 1/6 Z -1/6 f
, with γ (a) ≈ 4.63, (5.7c)
d (b) max /h 0 ∼ γ (b) Oh 2/9 Z -2/9 f
, with γ (b) ≈ 5.16.
(5.7d)
For small Oh, the local estimates are
λ nv l /h 0 ∼ β nv l Z -1/4 f , with β nv l ≈ 2π √ 2 ≈ 8.88, (5.8a)
d nv l /h 0 ∼ γ nv l Z -1/4 f , with γ nv l = (12π) 1/3 (2) 1/6 ≈ 3.76, (5.8b)
while the optimization procedure gives for Z f > 4.74 (see appendix A)
λ max /h 0 ∼ β nv Z -1/4 f
, with β nv ≈ 20.20, (5.9a)
d max /h 0 ∼ γ nv Z -1/4 f
, with γ nv ≈ 4.94.
(5.9b)
Applications
We now apply the results to a realistic configuration obtained from an nozzle of radius h 0 = 1 mm in a gravity field with g = 9.81 m/s 2 . We consider three fluids: water (at 20 • ) for which γ ≈ 72 10 -3 N/m, ν ≈ 10 -6 m 2 /s; and two silicon oils of surface tension γ ≈ 21 10 -3 N/m and of viscosity ν ≈ 5 10 -5 m 2 /s and ν ≈ 3 10 -4 m 2 /s respectively. For these three fluids, we take ρ ≈ 10 3 kg/m 3 as a fair order of magnitude.
For water, we obtain Oh = 3.7 10 -3 , Bo = 0.13 and a parameter Q = 3.72 u 0 with the velocity u 0 at the nozzle expressed in m/s. For the silicon oils, we get Bo = 0.46 and Q = 6.9 u 0 and two values of Oh: Oh = 0.46 and Oh = 2. The conditions of validity (2.9a-c) of the inertial solution then require u 0 to be (much) larger than u c = 0.26 m/s for the water, and u c = 0.15 m/s for the silicon oils.
In figure 9, we have plotted the theoretical predictions for the breakup location, the frequency, the wavelength and the drop diameter as the fluid velocity at the nozzle is varied from u c to 10 u c , that is for Q varying from 1 to 10. We have chosen S t = 7 for the background noise transition, and S t = 4 for the transition by the nozzle excitation. A smaller value of S t has been chosen for the nozzle excitation to describe controlled conditions of forcing. Figure 9(a) shows that for the three fluids the transition by the nozzle excitation can be reached before the background noise transition. The values obtained for the breaking length are comparable to the experimental values reported in [START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF]. They measured a normalized breaking length of order 100-150 for the silicon oil of ν ≈ 5 10 -5 m 2 /s from a nozzle of same diameter for flow rates ranging from Q = 0.5 to Q = 1.3. Figure 9(b) provides the most dangerous frequency of the excitation. For the three cases, the frequency for the nozzle excitation is relatively closed to the neutral frequency Q of the jet at the nozzle. For both silicon oils, this frequency is however much smaller than the frequency obtained by the background noise transition, especially for small Q.
The break-up wavelength shown in figure 9(c) exhibits a different behavior with respect to the flow rate Q for the nozzle excitation and the background noise. It decreases monotonically with Q for the noise excitation while it increases for the background noise up to an extremum before starting decreasing. For the three fluids, noise excitation provides a larger wavelength than background noise for small Q, but the opposite is observed above a critical value of Q which increases with Oh. Note that for small Q, the wavelengths obtained for noise excitation are comparable for both silicon oils. Both curves would even cross if a larger value of S t was considered. This property is related to the non-monotonic behavior of the breakup wavelength already discussed above [see figure 6(c)].
Contrarily to the wavelength, the droplet size [figure 9(d))] is not changing much with Q and is comparable for the three fluids. Nozzle excitation provides larger droplets but this effect is significant for the smallest values of Q only.
Finally note that the differences between the 1D and 3D predictions for the nozzle excitation are barely visible. A very small departure of the wavelength curves can be seen for the silicon oils only. This confirms both the usefulness and the validity of the 1D model.
Conclusion and final remarks
At the end of this detailed study, we are now in position to answer the questions raised in the Introduction: The breakup distance from the orifice of a jet falling by its own weight can indeed be understood by comparing two timescales. The relevant timescales are the capillary destabilization time (viscous, or not) based on the local jet radius, and the inverse of the local jet stretching rate. Breakup occurs, in both viscous and inviscid régimes as discussed in section 3, when the latter overcomes the former, a fact already known [START_REF] Villermaux | The formation of filamentary structures from molten silicates: Pele's hair, angel hair, and blown clinker[END_REF][START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF]. However, we have also learned that this aspect is only a tiny piece of the problem as a whole. This simple local rule, if naively extended to estimate the wavelength of the perturbation breaking the jet would predict that the wavelength is proportional to the local jet radius in the inviscid case for instance. This prediction was found to always underestimate the wavelength at breakup. The most dangerous wavelength and the drop diameter account for the stretching history of the fluid particles as they travel along the jet; this is the reason why their values are different depending on whether the perturbations are introduced at the jet nozzle only, or through a background noise affecting the jet all along its extension. An optimal theory computing the gain of every mode as the jet deforms and accelerates, was thus necessary to answer the -seemingly simple-question of its breakup. It has, in addition, revealed the existence of an unexpected non-monotonic dependency of the most dangerous wavelength λ max with respect to Oh.
We have also provided quantitative results assuming that a spatial gain of e 7 of the linear perturbations was sufficient for breakup. This value of the critical gain is an ad hoc criterion that assumes a particular level of noise and which neglects the possible influence of the nonlinear effects. It would be interested to test this criterion with experimental data.
Our analysis has focused on capillary jet whose base state is in an inertial regime. Close to the nozzle, especially if the flow rate is small, a viscous dominated regime is expected [START_REF] Senchenko | Shape and stability of a viscous thread[END_REF]. We have not considered such a regime here. But a similar WKBJ analysis could a priori be performed with a base flow obtained by resolving the more general equations (2.2) if the jet variation scale remains large compared to the perturbation wavelength. However, far from the nozzle, the jet always becomes inertial. The growth of the perturbation is therefore expected to be the same as described above. For this reason, the optimal perturbation obtained from background noise could be the same. Indeed, we have seen that in order to reach a large gain (S t > 5 or so), the optimal perturbation should be introduced far from the nozzle. If the jet is in the inertial regime at this location, the same gain is then obtained. This point was already noticed in [START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF].
For nozzle excitation, the entire evolution of the jet contributes the optimal perturbation. We have seen that large gains (S t > 5) are obtained by perturbations which exhibit a spatial damping before starting to grow. We have also seen that this damping regime is only qualitatively described by the 1D model. We do not expect a better description if the jet is dominated by viscous effects. Moreover, it is known that in this regime nonparallel effects are also important close to the nozzle (Rubio-Rubio et al. 2013) which invalidates the WKBJ approach. For this regime, it would be interesting to perform an optimal stability analysis using more advanced tools [START_REF] Schmid | Nonmodal stability theory[END_REF] to take into account non-parallel effects and non-modal growth.
Note finally that we have computed the perturbation gain by considering the exponential terms of the WKBJ approximation only. A better estimate could readily be obtained by considering the complete expression of the WKBJ approximation. This expression which has been provided in appendix B involves an amplitude factor which contains all the other contributions affecting the growth of the perturbation. Different expressions are obtained for A and u which in particular implies that different gains are obtained for the velocity and the jet radius. It is important to mention that the other contributions are not limited to a simple correcting factor associated with the local stretching [START_REF] Tomotika | Breaking up of a drop of viscous liquid immersed in another viscous fluid which is extending at a uniform rate[END_REF][START_REF] Eggers | Physics of fluid jets[END_REF]. Other contributions associated with the z-dependence of the local wavenumber and local jet profile are equally important, leading to expressions which are not simple even in the large or small Oh limit. with
S 1 (Z i , Z f , X ω ) = X -3/4 ω Z f Xω ZiXω X -7/8 1 + 9 2X 5/4 - 3 √ 2X 5/8 dX, (A 3a) S 2 (Z i , Z f , X ω )) = X -1/2 ω Z f Xω ZiXω X -19/8 2 1 + 9 2X 5/4 dX. (A 3b)
and
X ω = (Oh ω) -8/5 . (A 4)
When Z f is not too large, we are in a configuration where: (1) Z i X ω 1 and Z f X ω 1. In that case, we can write
S 1 ∼ 2 √ 2 9 (Z 3/4 f -Z 3/4 i ) - X 5/4 ω 108 √ 2 (Z 2 f -Z 2 i ) + O(X 5/2 ω Z 13/4 f ) (A 5a) S 2 ∼ 2 √ 2 9X 5/4 ω 1 Z 3/4 i - 1 Z 3/4 f + O(Z 1/2 f ) (A 5b) which gives S ∼ 2 √ 2 9 Oh (Z 3/4 f -1 -ω 2 + ω 2 Z -3/4 f ) - Z 2 f -1 108 √ 2 Oh 3 ω 2 + O Z 1/2 f Oh 3 , Z 13/4 f Oh 5 ω 4 (A 6)
in case (a) (nozzle excitation) and
S ∼ 2 √ 2 9 Oh (Z 3/4 f -2ω + ω 2 Z -3/4 f ) - Z 2 f -ω 8/3 108 √ 2 Oh 3 ω 2 + O Z 1/2 f Oh 3 , Z 13/4 f Oh 5 ω 4 (A 7)
in case (b) (background noise) with Z i = ω 4/3 . In case (a), the maximum gain is obtained for
ω (a) max ∼ Z 2 f -1 48 Oh 2 (1 -Z -3/4 f ) 1/4 (A 8) that is ω (a) max ∼ Z 1/2 f 2 3 1/4 Oh 1/2 (A 9)
for large Z f , and equals
S (a) max ∼ 2 √ 2Z 3/4 f 9 Oh 1 -Z -3/4 f - Z 1/4 f 2 √ 3 Oh + O Z 1/4 f Oh 2 , Z 5/4 f Oh 3 . (A 10)
In case (b), the maximum gain is obtained for
ω (b) max ∼ Z 2/3 f 48 1/3 Oh 2/3 (A 11)
and equals The condition that Z f X max ω 1 does not give any restriction in case (b). However, it requires in case (a)
S (b) max ∼ 2 √ 2Z 3/4 f 9 Oh 1 - 3 2/3 2 4/3 Oh 2/3 Z 1/12 f . ( A
Z f Oh 4 . ( A 13)
When Z f Oh 4 , another limit has to be considered for case (a): (2) Z i X ω 1 and Z f X ω 1. In this limit, we have This estimate applies only when Z f Oh 4 . The asymptotic formulae are compared to numerical results in figures 10 for case (a) and in figure 11 for case (b). In both cases, we have plotted the maximum gain S max and the most dangerous frequency (the frequency that provides the maximum gain) versus Z f for Oh = 1, 10, 100, 1000. It is interesting to see that in case (a) the maximum gain and the most dangerous frequency both collapse on a single curve when plotted as a function of the variable Z f / Oh 4 with an adequate normalization (see figure 12). When Oh is small, viscous effects come into play if we go sufficiently far away for the nozzle because the local Ohnesorge number increases algebraically with the distance to the nozzle.
S 1 ∼ 8Z 8 f X 5/8 ω - I o X 3/4 ω (A 14a) S 2 ∼ 2 √ 2 9X 5/4 ω Z 3
Here, we shall assume that we remain inviscid in the whole domain of integration, that is 1 -ω 2 z -3/2 + 9 Oh 2 ω 2 2 z -5/4 -3 √ 2 ω Oh z -5/8 ∼ 1 -ω 2 z -3/2 . (A 18) with Y ω = ω -4/3 . Because in the inviscid limit, perturbations are neutral when they do not grow, case (a) and case (b) provide the same gain. For 1 < Z f < Z c f ≈ 4.74, the maximum gain is reached for ω < 1, i.e. Y ω > 1. The location Z c f is given by the vanishing of ∂ Yω S for Y ω = 1 and Z i = 1:
-7 8
Z c f 1 s -7/8 1 -s -3/2 ds + (Z c f ) 1/8 1 -(Z c f ) -3/2 = 0. (A 20)
For Z c f < Z f Oh -8 , the maximum gain is reached for This estimate is compared to numerical values in figure 13. We do observe a convergence of the maximum gain and most dangerous frequency curves toward the inviscid limit as Oh decreases. Note however that the convergence is slower for nozzle excitation (case (a)).
ω max ∼ Z f Z c f 3/4 ≈ 0.311Z
propagating waves can be formed such that at the orifice a = 1 and v = 0 or a = 0 and v = 1.
In the inviscid regime (Oh 1), equation (B 6) can be integrated explicitly for any A 0 as
a (i) (z) = C A 5/2 0 (z) k 1 (z) , ( B 8)
where C is a constant. It is interesting to compare this expression to the expression a ∼ A 0 that would have been obtained by the argument of [START_REF] Tomotika | Breaking up of a drop of viscous liquid immersed in another viscous fluid which is extending at a uniform rate[END_REF], that is by considering the solution as a uniformly stretched fluid cylinder (see [START_REF] Eggers | Physics of fluid jets[END_REF]).
Figure 2 :
2 Figure 2: Maximum gain S max (a) and most dangerous frequency ω max (b) of the perturbations excited from background noise (dashed lines) and at the nozzle (solid lines) as a function of the distance z f /z o = Z f -1 to the nozzle. From bottom to top, Oh takes the values 1000, 100, 10, 1, 0.1, 0.01, 10 -4 .
are obtained using standard Matlab subroutines.
Figure 3 :
3 Figure3: Level curves of the maximum gain S max (a) and of the most dangerous frequency ω max (b) of the perturbations excited from background noise (dashed lines) and at the nozzle (solid lines) in the (z f /z o , Oh) plane. The dashed lines correspond to the asymptotic limits (3.11) and (3.5) for small and large Oh respectively. On the left of the ω max = 1 curve (indicated as a grey line in (a)), solid and dashed lines are superimposed.
) and from figure 3(a) the position z f /z o where such a value of S is reached in case (a) or (b).
Figure 4 :
4 Figure 4: Wavelength at break-up (a) and resulting droplet diameter (b) versus Oh for background noise (dashed lines) and nozzle excitation (solid lines). The different curves correspond to the transition level St = 0.1, 1, 10. The thin dashed lines correspond to the asymptotic expressions for small and large Oh.
Figure 5 :
5 Figure 5: Comparison of 1D and 3D local dispersion relations. Solid line: 1D dispersion relation. Dashed line: 3D dispersion relation for axisymmetric modes. (a) Temporal growth rate versus the wavenumber k for various Oh. (b) Temporal growth rate versus Oh for fixed wavelengths.
Figure 6: Characteristics of the response to nozzle excitation versus Oh for various values of S_t and two different stability models (solid line: 1D; dashed line: 3D axisymmetric). (a) Break-up distance. (b) Most dangerous frequency. (c) Wavelength at break-up. (d) Drop diameter.
Figure 7: Maximum local temporal growth rate σ_l^max normalized by the free fall time T_i at the break-up location, assuming break-up for a gain e^7. Solid line: nozzle excitation; dashed line: background noise. (a) Variation with respect to the break-up location z_f/z_o for different Oh. (b) Variation with respect to Oh for different values of S_t. The dotted lines in (b) are the asymptotic predictions (5.5a,b).
Figure 8: Wavelength at break-up (a) and drop diameter (b) versus the break-up location z_f/z_0 for various values of Oh. Solid line: nozzle excitation; dashed line: background noise.
Figure 9: Characteristics at break-up by nozzle excitation or background noise for a jet of radius h_0 = 1 mm, assuming that break-up occurs when the perturbation gain has reached e^{S_t}. Solid lines: nozzle excitation with S_t = 4. Dot-dash lines: nozzle excitation with S_t = 4 using the 3D dispersion relation. Dashed lines: background noise with S_t = 7. Black lines: water; red lines: silicon oil of ν = 5×10^-5 m^2/s (SO50); green lines: silicon oil of ν = 3×10^-4 m^2/s (SO300). (a) Break-up location; (b) most dangerous frequency; (c) wavelength at break-up; (d) drop diameter.
Figure 10: Maximum gain S_max^(a) (a) and most dangerous frequency ω_max^(a) (b) of the perturbations excited at the nozzle as a function of the distance Z_f to the nozzle. Solid lines: numerical results. Dashed and dotted lines: asymptotic results obtained for large Oh for Z_f ≪ Oh^4 [formulae (A 10) and (A 9)] and Z_f ≫ Oh^4 [formulae (A 17) and (A 16)] respectively. Oh takes the values 1, 10, 100, 1000.
Figure 11: Maximum gain S_max^(b) (a) and most dangerous frequency ω_max^(b) (b) of the perturbations excited from background noise as a function of the distance Z_f to the nozzle. Solid lines: numerical results. Dashed lines: asymptotic results [formulae (A 12) and (A 11)]. From top to bottom, Oh takes the values 1, 10, 100, 1000.
Figure 12: Same plots as figure 10 but with rescaled variables versus Z_f/Oh^4. In (a), the dotted line is (A 17) while the dashed line is the first term of (A 10). In (b), the dotted and dashed lines are (A 16) and (A 9) respectively.
Figure 13: Maximum gain S_max (a) and most dangerous frequency ω_max (b) of the perturbations excited from background noise (dashed lines) and at the nozzle (solid lines) as a function of the distance z_f/z_o = Z_f − 1 to the nozzle. From bottom to top, Oh takes the values 0.1, 0.01, 0.001. The formulae (A 22) and (A 21) are indicated as a solid gray line in (a) and (b) respectively.
Acknowledgments
We acknowledge support from the French Agence Nationale de la Recherche under the ANR FISICS project ANR-15-CE30-0015-03.
Appendix A. Asymptotic regimes
In this appendix, we provide asymptotic expressions for S_max and ω_max in the viscous and inviscid regimes, that is for Oh → ∞ and Oh → 0 respectively.
A.1. Maximum gain in the viscous regime (Oh → ∞)
When Oh → ∞, the expression of the integrand in (2.27) can be simplified, and in the whole domain of integration, we can use the approximation (A 1), such that (2.27) can be written as
Appendix B. WKBJ analysis
In this section, we provide the full expression of the WKBJ approximation of each downward propagative wave. Each wave is searched into the form
where k(Z), v(Z) and a(Z) depend, like the base flow, on the slow spatial variable
These equations give at leading order (2.16a,b), from which we can deduce the dispersion relation (2.17) that defines k(Z). If we now replace v in the right-hand side of (B 2a) by its leading-order expression in terms of a, we obtain an expression for v valid up to
Plugging this expression in (B 2b) with U 0 = Q/A 0 , we obtain the following equation for a(z)
with
This equation is valid for both downward and upward propagating waves.
For large Q, it can be simplified for the downward propagating wavenumbers using ω = ωQ and k 6) where k 1 (Z) is given by (2.20). The amplitude v(Z) is then deduced from a(Z) using (B 3) at leading order in Q:
The two downward propagating waves possess different expressions for k 1 , and thus different amplitudes a and v. This guarantees that a combination of the two downward | 53,718 | [
"8388",
"872888"
] | [
"196526",
"196526"
] |
01554090 | en | [
"phys"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01554090/file/1607.01980.pdf | Simon Labouesse
Marc Allain
Member, IEEE Jérôme Idier
Awoke Negash
Thomas Mangeat
email: [email protected]
Penghuan Liu
Anne Sentenac
Sébastien Bourguignon
Joint reconstruction strategy
Keywords: Super-resolution, fluorescence microscopy, speckle imaging, near-black object model, proximal splitting
I. INTRODUCTION
In Structured Illumination Microscopy (SIM), the sample, characterized by its fluorescence density ρ, is illuminated successively by M distinct inhomogeneous illuminations Im. Fluorescence light emitted by the sample is collected by a microscope objective and recorded on a camera to form an image ym. In the linear regime, and with a high photon counting rate 1 , the dataset {ym} M m=1 is related to the sample ρ via [START_REF] Goodman | Introduction to Fourier Optics[END_REF]
y_m = H ⊗ (ρ × I_m) + ε_m,   m = 1, …, M,   (1)
where ⊗ is the convolution operator, H is the microscope point spread function (PSF) and εm is a perturbation term accounting for (electronic) noise in the detection and modeling errors. Since the spatial spectrum of the PSF [i.e., the optical transfer function (OTF)] is strictly bounded by its cut-off frequency, say, νpsf, if the illumination pattern Im is homogeneous, then the spatial spectrum of ρ that can be retrieved from the image ym is restricted to frequencies below νpsf. When the illuminations are inhomogeneous, frequencies beyond νpsf can be recovered from the low resolution images because the illuminations, acting as carrier waves, downshift part of the spectrum inside the OTF support [START_REF] Heintzmann | Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating[END_REF], [START_REF] Gustafsson | Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy[END_REF]. Standard SIM resorts to harmonic illumination patterns for which the reconstruction of the super-resolved image can be easily done by solving a linear system in the Fourier domain. In this case, the gain in resolution depends on the OTF support, the illumination cut-off frequency and the available signal-to-noise ratio (SNR). The main drawback of SIM is that it requires the knowledge of the illumination patterns and thus a stringent control of the experimental setup. If these patterns are not known with sufficient accuracy [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF], [START_REF] Ayuk | Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm[END_REF], severe artifacts appear in the reconstruction. Specific estimation techniques have been developed for retrieving the parameters of the periodic patterns from the images [START_REF] Orieux | Bayesian estimation for optimized structured illumination microscopy[END_REF]- [START_REF] Wicker | Non-iterative determination of pattern phase in structured illumination microscopy using auto-correlations in Fourier space[END_REF], but they can fail if the SNR is too low or if the excitation patterns are distorted, e.g., by inhomogeneities in the sample refraction index. The Blind-SIM strategy [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF], [START_REF] Ayuk | Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm[END_REF], [START_REF] Negash | Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations[END_REF] has been proposed to tackle this key issue, the principle being to retrieve the sample fluorescence density without the knowledge of the illumination patterns. In addition, speckle illumination patterns are promoted instead of harmonic ones, the latter being much more difficult to generate and control. From the methodological viewpoint, this strategy relies on the simultaneous (joint) reconstruction of the fluorescence density and of the illumination patterns. More precisely, joint reconstruction is achieved through the iterative resolution of a constrained least-squares problem. However, the computational time of such a scheme clearly restricts the applicability of the method.
This paper provides a global re-foundation of the joint Blind-SIM strategy. More specifically, our work develops two specific, yet complementary, contributions:
• The joint Blind-SIM reconstruction problem is first revisited, resulting in an improved numerical implementation with execution times decreased by several orders of magnitude. Such an acceleration relies on two technical contributions. Firstly, we show that the problem proposed in [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF] is equivalent to a fully separable constrained minimization problem, hence bringing the original (large-scale) problem to M sub-problems with smaller scales. Then, we introduce a new preconditioned proximal iteration (denoted PPDS) to efficiently solve each sub-problem. The PPDS strategy is an important contribution of this article: it is provably convergent [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF], easy to implement and, for our specific problem, we empirically observe a superlinear asymptotic convergence rate. With these elements, the joint Blind-SIM reconstruction proposed in this paper is fast and can be highly parallelized, opening the way to real-time reconstructions.
• Beside these algorithmic issues, the mechanism driving superresolution (SR) in this blind context is investigated, and a connection is established with the well-known "Near-black object" effect introduced in Donoho's seminal contribution [START_REF] Donoho | Maximum entropy and the nearly black object[END_REF]. We show that the SR relies on sparsity and positivity constraints enforced by the unknown illumination patterns. This finding helps to understand in which situation super-resolved reconstructions may be provided or not. A significant part of this work is then dedicated to numerical simulations aiming at illustrating how the SR effect can be enhanced. In this perspective, our simulations show that two-photon speckle illuminations potentially increase the SR power of the proposed method.
The pivotal role played by sparse illuminations in this SR mechanism also draws a connexion between joint Blind-SIM and other random activation strategies like PALM [START_REF] Betzig | Imaging intracellular fluorescent proteins at nanometer resolution[END_REF] or STORM [START_REF] Rust | Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)[END_REF]; see also [START_REF] Mukamel | Statistical deconvolution for superresolution fluorescence microscopy[END_REF], [START_REF] Min | FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data[END_REF] for explicit sparsity methods applied to STORM. With PALM/STORM, unparalleled resolutions result from an activation process that is massively sparse and mostly localized on the marked structures. With the joint Blind-SIM strategy, the illumination pattern playing the role of the activation process is not that "efficient" and lower resolutions are obviously expected. Joint Blind-SIM however provides SR as long as the illumination patterns enforce many zero (or almost zero) values in the product ρ × Im: the sparser the illuminations, the higher the expected resolution gain with joint Blind-SIM. Such super resolution can be induced by either deterministic or random patterns. Let us mention that random illuminations are easy and cheap to generate, and that a few recent contributions advocate the use of speckle illuminations for super-resolved imaging, either in fluorescence [START_REF] Min | Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery[END_REF], [START_REF] Oh | Sub-Rayleigh imaging via speckle illumination[END_REF] or in photo-acoustic [START_REF] Chaigne | Super-resolution photoacoustic fluctuation imaging with multiple speckle illumination[END_REF] microscopy. In these contributions, however, the reconstruction strategies are derived from the statistical modeling of the speckle, hence, relying on the random character of the illumination patterns. In comparison, our approach only requires that the illuminations cancel-out the fluorescent object and that their sum is known with sufficient accuracy. Finally, we also note that [START_REF] Labouesse | Fluorescence blind structured illumination microscopy: a new reconstruction strategy[END_REF] corresponds to an early version of this work. Compared to [START_REF] Labouesse | Fluorescence blind structured illumination microscopy: a new reconstruction strategy[END_REF], several important contributions are presented here, mainly: the super-resolving power of Blind-SIM is now studied in details, and a comprehensive presentation of the proposed PPDS algorithm includes a tuning strategy for the algorithm parameter that allows a substantial reduction of the computation time.
The remainder of the paper is organized as follows. In Section II, the original Blind-SIM formulation is introduced and further simplified; this reformulation is then used to get some insight on the mechanism that drives the SR in the method. Taking advantage of this analysis, a penalized Blind-SIM strategy is proposed and characterized with synthetic data in Section III. Finally, the PPDS algorithm developed to cope with the minimization problem is presented and tested in Section IV, and conclusions are drawn in Section V.
II. SUPER-RESOLUTION WITH JOINT BLIND-SIM ESTIMATION
In the sequel, we focus on a discretized formulation of the observation model [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF]. Solving the two-dimensional (2D) Blind-SIM reconstruction problem is equivalent to finding a joint solution ( ρ, { Im} M m=1 ) to the following constrained minimization problem [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF]:
min_{ρ, {I_m}_{m=1}^M}  Σ_m ||y_m − H diag(ρ) I_m||^2   (2a)
subject to  Σ_m I_m = M × I_0   (2b)
and  ρ_n ≥ 0,  I_m;n ≥ 0,  ∀m, n   (2c)
with H ∈ R P ×N the 2D convolution matrix built from the discretized PSF. We also denote ρ = vect(ρn) ∈ R N the discretized fluorescence density, ym = vect(ym;n) ∈ R P the m-th recorded image, and Im = vect(Im;n) ∈ R N the m-th illumination with expected spatial intensity I0 = vect(I0;n) ∈ R N + (this latter quantity may be spatially inhomogeneous but it is supposed to be known). Let us remark that (2) is a biquadratic problem. Block coordinate descent alternating between the object and the illuminations could be a possible minimization strategy, relying on cyclically solving M + 1 quadratic programming problems [START_REF] Jost | Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction[END_REF]. In [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF], a more efficient but more complex scheme is proposed. However, the minimization problem (2) has a very specific structure, yielding a fast and simple strategy, as shown below.
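As an illustration of the discretized model, the sketch below simulates a stack of low-resolution acquisitions y_m = H diag(ρ) I_m + ε_m by FFT-based convolution. It is only a minimal mock-up (not the authors' code); it assumes periodic boundary conditions and that the arrays rho, illums and psf are given on a common grid.

import numpy as np

def simulate_acquisitions(rho, illums, psf, noise_std=0.0, rng=None):
    # rho: (N, N) fluorescence density; illums: (M, N, N) illumination patterns; psf: (N, N).
    rng = np.random.default_rng() if rng is None else rng
    otf = np.fft.fft2(np.fft.ifftshift(psf))                   # transfer function of H
    data = []
    for I_m in illums:
        q_m = rho * I_m                                        # product image rho x I_m
        y_m = np.real(np.fft.ifft2(otf * np.fft.fft2(q_m)))    # H q_m (circular convolution)
        y_m = y_m + noise_std * rng.standard_normal(y_m.shape) # additive Gaussian noise
        data.append(y_m)
    return np.stack(data)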
A. Reformulation of the optimization problem
According to [START_REF] Labouesse | Fluorescence blind structured illumination microscopy: a new reconstruction strategy[END_REF], let us first consider problem (2) without the equality constraint (2b). It is equivalent to M independent quadratic minimization problems:
min_{q_m}  ||y_m − H q_m||^2   (3a)
subject to  q_m ≥ 0,   (3b)
where we set qm := vect(ρn × Im;n). Each minimization problem (3) can be solved in a simple and efficient way (see Sec. IV), hence providing a set of global minimizers { qm} M m=1 . Although the latter set corresponds to an infinite number of solutions ( ρ, { Im} M m=1 ), the equality constraint (2b) defines a unique solution such that qm = vect( ρn × Im;n) for all m:
ρ̂ = Diag(I_0)^{-1} q̄   (4a)
∀m,  Î_m = Diag(ρ̂)^{-1} q̂_m   (4b)
with q̄ := (1/M) Σ_m q̂_m. The solution (4) exists as long as I_0;n ≠ 0 and ρ̂_n ≠ 0, ∀n. The first condition is met if the sample is illuminated everywhere (in average), which is an obvious minimal requirement. For any sample pixel such that ρ̂_n = 0, the corresponding illumination Î_m;n is not defined; this is not a problem as long as the fluorescence density ρ is the only quantity of interest. Let us also note that the following implication holds:
I_0;n ≥ 0,  q̂_m;n ≥ 0  ⟹  Î_m;n ≥ 0 and ρ̂_n ≥ 0.
Because we are dealing with intensity patterns, the condition I_0;n ≥ 0 is always met, hence the positivity of both the density and the illumination estimates, i.e., the positivity constraint (2c), is granted. Indeed, it should be clear that combining the sub-problem solutions (3) with the recombination (4) solves the original minimization problem (2): on the one hand, the equality constraint (2b) is met since
Σ_m Î_m = M Diag(ρ̂)^{-1} q̄ = M I_0   (5)
and on the other hand, the solution (4) minimizes the criterion given in (2a) since it is built from { qm} M m=1 , which minimizes (3a). Finally, it is worth noting that the constrained minimization problem (2) may have multiple solutions. In our reformulation, this ambiguity issue arises in the "minimization step" (3): while each problem (3) is convex quadratic, and thus admits only global solutions (which in turn provide a global solution to problem (2) when recombined according to (4a)-(4b)), it may not admit unique solutions since each criterion (3a) is not strictly convex3 in qm. Furthermore, the positivity constraint (3b) prevents any direct analysis of these ambiguities. The next subsection underlines however the central role of this constraint in the joint Blind-SIM strategy originally proposed in [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF].
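Once the M sub-problems (3) are solved, the recombination (4) is a pixel-wise operation; the snippet below is a sketch of that step (our illustration, not the authors' code), with q_hat the stack of minimizers and I0 a strictly positive known mean intensity.

import numpy as np

def recombine(q_hat, I0):
    # q_hat: (M, N, N) minimizers of the sub-problems (3); I0: (N, N) mean intensity (> 0).
    q_bar = q_hat.mean(axis=0)                         # (1/M) sum_m q_hat_m
    rho_hat = q_bar / I0                               # eq. (4a)
    safe_rho = np.where(rho_hat > 0, rho_hat, 1.0)
    I_hat = np.where(rho_hat > 0, q_hat / safe_rho, 0.0)   # eq. (4b); undefined where rho_hat = 0
    return rho_hat, I_hat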
B. Super-resolution unveiled
Whereas the mechanism that conveys SR with known structured illuminations is well understood (see [START_REF] Gustafsson | Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy[END_REF] for instance), the SR capacity of joint blind-SIM has not been characterized yet. It can be made clear, however, that the positivity constraint (2c) plays a central role in this regard. Let H + be the pseudo-inverse of H [START_REF] Golub | Matrix computation[END_REF]Sec. 5.5.4]. Then, any solution to the problem (2a)-(2b), i.e, without positivity constraints, reads
ρ = Diag(I_0)^{-1} (H^+ ȳ + q̄^⊥)   (6a)
I_m = Diag(ρ)^{-1} (H^+ y_m + q_m^⊥),   (6b)
with ȳ = (1/M) Σ_m y_m and q̄^⊥ = (1/M) Σ_m q_m^⊥, where q_m^⊥ is an arbitrary element of the kernel of H, i.e. with arbitrary frequency components above the OTF cutoff frequency. Hence, the formulation (2a)-(2b) has no capacity to discriminate the correct high frequency components, which means that it has no SR capacity. Under the positivity constraint (2c), we thus expect that the SR mechanism rests on the fact that each illumination pattern I_m activates the positivity constraint on q_m in a frequent manner.
A numerical experiment is now considered to support this assertion. A set of M collected images are simulated following [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF] with the PSF H given by the usual Airy pattern that reads in polar coordinates
H(r, θ) = (k_0^2/π) [ J_1(k_0 NA r) / (k_0 r) ]^2,   r ≥ 0, θ ∈ R,   (7)
where J1 is the first order Bessel function of the first kind, NA is the objective numerical aperture set to 1.49, and k0 = 2π/λ is the free-space wavenumber with λ the emission/excitation wavelength. The ground truth is the 2D 'star-like' fluorescence pattern depicted in Fig. 1(left). The image sampling step for all the simulations involving the star pattern is set 4 to λ/20. For this numerical simulation, the illumination set {Im} M m=1 consists in M = 200 modified speckle patterns, see Fig. 2(A). More precisely, a first set of illuminations is 4 For an optical system modeled by [START_REF] Orieux | Bayesian estimation for optimized structured illumination microscopy[END_REF], the sampling rate of the (diffraction-limited) acquisition is usually the Nyquist rate driven by the OTF cutoff frequency ν psf = 2k 0 NA. A higher sampling rate is obviously needed for the super-resolved reconstruction, the up-sampling factor between the "acquisition" and the "processing" rates being at least equal to the expected SR factor. Here, we adopt a common sampling rate for any simulation involving the star-like pattern (even with diffraction-limited images), as it allows a direct comparison of the reconstruction results. obtained by adding a positive constant (equal to 3) to each speckle pattern, resulting in illuminations that never activate the positivity constraint in (3). On the contrary, the second set of illuminations is built by subtracting a small positive constant (equal to 0.2) to each speckle pattern, the negative values being set to zero. The resulting illuminations are thus expected to activate the positivity constraint in [START_REF] Goodman | Introduction to Fourier Optics[END_REF]. For both illumination sets, low-resolution microscope images are simulated and corrupted with Gaussian noise; in this case, the standard deviation was chosen so that the SNR of the total dataset is 40 dB. Corresponding reconstructions of the first product image q1 obtained via the resolution of ( 3) is shown in Fig. 2(B), while the retrieved sample (4a) is shown in Fig. 2(C); for each reconstruction, the spatial mean I0 in (4a) is set to the statistical expectation of the corresponding illumination set. As expected, the reconstruction with the first illumination set is almost identical to the deconvolution of the wide-field image shown in Fig. 1(upper-right), i.e., there is no SR in this case. On the contrary, the second set of illuminations produces a super-resolved reconstruction, hence establishing the central role of the positivity constraint in the original joint reconstruction problem (2).
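For completeness, a discretized Airy PSF following (7) can be generated as in the sketch below; the grid size and sampling step are our own choices (e.g. λ/20 as in the text), not the authors' implementation, and the Bessel function J_1 is taken from scipy.

import numpy as np
from scipy.special import j1

def airy_psf(n_pix, step, wavelength=1.0, NA=1.49):
    # n_pix x n_pix grid with sampling 'step' (same units as 'wavelength'), eq. (7)
    k0 = 2.0 * np.pi / wavelength
    x = (np.arange(n_pix) - n_pix // 2) * step
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.hypot(X, Y)
    r = np.where(r == 0.0, 1e-12 * step, r)      # avoid the 0/0 at the origin
    h = (k0**2 / np.pi) * (j1(k0 * NA * r) / (k0 * r))**2
    return h / h.sum()                           # normalize to unit integral

# example: PSF sampled at lambda/20 on a 128 x 128 grid
psf = airy_psf(128, step=1.0 / 20.0, wavelength=1.0, NA=1.49)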
III. A PENALIZED APPROACH FOR JOINT BLIND-SIM
As underlined in the beginning of Subsection II-B, there is an ambiguity issue concerning the original joint Blind-SIM reconstruction problem. A simple way to enforce unicity is to slightly modify (3) by adding a strictly convex penalization term. We are thus led to solving
min_{q_m ≥ 0}  ||y_m − H q_m||^2 + ϕ(q_m).   (8)
Another advantage of such an approach is that ϕ can be chosen so that robustness to the noise is granted and/or some expected features in the solution are enforced. In particular, the analysis conveyed above suggests that favoring sparsity in each q_m is suited since speckle or periodic illumination patterns tend to frequently cancel or nearly cancel the product images q_m. For such illuminations, the Near-Black Object introduced in Donoho's seminal paper [START_REF] Donoho | Maximum entropy and the nearly black object[END_REF] is an appropriate modeling and, following this line, we found that the separable ℓ1 + ℓ2 penalty 5 provides super-resolved reconstructions:
ϕ(q_m) := α Σ_n |q_m;n| + β ||q_m||^2,   α ≥ 0, β > 0.   (9)
With properly tuned (α, β), our joint Blind-SIM strategy is expected to bring SR if "sparse" illumination patterns Im are used, i.e., if they enforce qm;n = 0 for most (or at least many) n. More specifically, it is shown in [START_REF] Donoho | Maximum entropy and the nearly black object[END_REF]Sec. 4] that SR occurs if the number of non-zero Im;n (i.e., the number of non-zero components to retrieve in qm) divided by N is lower than 1 2 R/N , with R/N the incompleteness ratio and R the rank of H. In addition, the resolving power is driven by the spacing between the components to retrieve that, ideally, should be greater than the Rayleigh distance λ 2 NA , see [12, pp. 56-57]. These conditions are rather stringent and hardly met by illumination patterns that can be reasonably considered in practice. These illumination patterns are usually either deterministic harmonic or quasi-harmonic 6 patterns, or random speckle patterns, these latter illuminations being much easier to generate [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF]. Nevertheless, in both cases, a SR effect is observed in joint Blind-SIM. Moreover, one can try to maximize this effect via the tuning of some experimental parameters that are left to the designer of the setup. Such parameters are mainly: the period of the light grid and the number of grid shifts for harmonic patterns, the spatial correlation length and the point-wise statistics of the speckle patterns. Investigating the SR properties with respect to these parameters on a theoretical ground seems out of reach. However, a numerical analysis is possible and some illustrative results are now provided that address this question. Reconstructions shown in the sequel are built from (4a) via the numerical resolution of ( 8)- [START_REF] Wicker | Non-iterative determination of pattern phase in structured illumination microscopy using auto-correlations in Fourier space[END_REF]. For sake of clarity, all the algorithmic details concerning this minimization problem are reported in Sec. IV. These simulations were performed with low-resolution microscope images corrupted by additive Gaussian noise such that the signal-to-noise ratio (SNR) of the dataset {ym} M m=1 is 40 dB. In addition, we note that this penalized joint Blind-SIM strategy requires an explicit tuning of some hyper-parameters, namely α and β in the regularization function [START_REF] Wicker | Non-iterative determination of pattern phase in structured illumination microscopy using auto-correlations in Fourier space[END_REF]. Further details concerning these parameters are reported in Sec. III-D.
A. Regular and distorted harmonic patterns
We first consider unknown harmonic patterns defining a "standard" SIM experiment with M = 18 patterns. More precisely, the illuminations are harmonic patterns of the form I(r) = 1 + cos(2πν t r + φ) where φ is the phase shift, and with r = (x, y) t and ν = (νx, νy) t the spatial coordinates and the spatial frequencies of the harmonic 5 The super-resolved solution in [START_REF] Donoho | Maximum entropy and the nearly black object[END_REF] is obtained with a positivity constraint and a 1 separable penalty. However, ambiguous solutions may exist in this case since the criterion to minimize is not strictly convex. The 2 penalty in ( 9) is then mostly introduced for the technical reason that a unique solution exists for problem [START_REF] Wicker | Phase optimisation for structured illumination microscopy[END_REF]. 6 Dealing with distorted patterns is of particular practical importance since it allows to cope with the distortions and misalignments induced by the instrumental uncertainties or even by the sample itself [START_REF] Ayuk | Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm[END_REF], [START_REF] Jost | Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction[END_REF]. function, respectively. Distorted versions of these patterns (deformed by optical aberrations such as astigmatism and coma) were also considered. Three distinct orientations θ := tan -1 (νy/νx) ∈ {0, 2π/3, 4π/3}, for each of which six phase shifts of one sixth of the period, were considered. The frequency of the harmonic patterns ||ν|| := (ν 2 x + ν 2 y ) 1/2 is set to 80% of the OTF cutoff frequency, i.e., it lies inside the OTF support. One regular and one distorted pattern are depicted in Fig. 3(A) and the penalized joint Blind-SIM reconstructions are shown in Fig. 3(B). For both illumination sets, a clear SR effect occurs, which is similar to the one obtained with the original approach presented in [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF]. As expected, however, the reconstruction quality achieved in this blind context is lower than what can be obtained with standard harmonic SIM -for the sake of comparison, see Fig. 1(B). In addition, we note that some artifacts may appear if the number of phase shifts for each orientation is decreased, see Fig. 3(C-left). If we keep in mind that the retrieved
B. Speckle illumination patterns
We now consider second-order stationary speckle illuminations Im with known first order statistics I0;n = I0, ∀n. Each one of these patterns is a fully-developed speckle drawn from the pointwise intensity of a correlated circular Gaussian random field. The correlation is adjusted so that the pattern Im exhibits a spatial correlation of the form (7) but with "numerical aperture" parameter NAill that sets the correlation length to λ 2 NA ill within the random field. As an illustration, the speckle pattern shown in Fig. 4(A-left) was generated in the standard case 7 NAill = NA. From this set of regular (fully-developed) speckle patterns, we also consider another set of random illumination patterns built by squaring each speckle pattern, see Fig. 4(A). These "squared" patterns are considered hereafter because they give a deeper insight about the SR mechanism at work in joint Blind-SIM. Moreover, we discuss later in this subsection that 7 It is usually considered that NA ill = NA if the illumination and the collection of the fluorescent light are performed through the same optical device. these patterns can be generated with other microscopy techniques, hence extending the concept of random illumination microscope to other optical configurations. From a statistical viewpoint, the probability distribution function (pdf) of "standard" and "squared" speckle patterns differ. For instance, the pdf of the squared speckle intensity is more concentrated around zero8 than the exponential pdf of the standard speckle intensity. In addition, the spatial correlation is also changed since the power spectral density of the "squared" random field spans twice the initial support of its speckle counterpart [START_REF] Denk | Two-photon laser scanning fluorescence microscopy[END_REF]. As a result, the "squared" speckle grains are sharper, and they enjoy larger spatial separation. According to previous SR theoretical results [12, p. 57] (see also the beginning of Sec. III), these features may bring more SR in joint Blind-SIM than standard speckle patterns. This assumption was indeed corroborated by our simulations. For instance, the reconstructions in Fig. 4(B) were obtained from a single set of M = 1000 speckle patterns such that NAill = NA: in this case, the "squared" illuminations (obtained by squaring the speckle patterns) provide a higher level of SR than the standard speckle illuminations.
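Fully-developed and "squared" speckle patterns of this kind can be mocked numerically as the pointwise intensity of a correlated circular Gaussian field obtained by low-pass filtering complex white noise; the sketch below is one standard way to do it (not the exact generator used for the figures), with the pupil radius NA_ill/λ setting the correlation length.

import numpy as np

def speckle_patterns(M, n_pix, step, wavelength=1.0, NA_ill=1.49, squared=False, rng=None):
    # Returns M speckle intensity patterns of unit mean on an n_pix x n_pix grid.
    rng = np.random.default_rng() if rng is None else rng
    f = np.fft.fftfreq(n_pix, d=step)
    FX, FY = np.meshgrid(f, f, indexing="ij")
    pupil = np.hypot(FX, FY) <= NA_ill / wavelength        # disc of radius NA_ill / lambda
    patterns = []
    for _ in range(M):
        g = rng.standard_normal((n_pix, n_pix)) + 1j * rng.standard_normal((n_pix, n_pix))
        field = np.fft.ifft2(pupil * np.fft.fft2(g))       # correlated circular Gaussian field
        I = np.abs(field)**2                               # fully-developed speckle intensity
        if squared:
            I = I**2                                       # "squared" (two-photon like) pattern
        patterns.append(I / I.mean())
    return np.stack(patterns)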
Figure 5 shows how the reconstruction quality varies with the number of illumination patterns. With very few illuminations, the sample is retrieved in the few places that are activated by the "hot spots" of the speckle patterns. This actually illustrates that the joint Blind-SIM approach is also an "activation" strategy in the spirit of PALM [START_REF] Betzig | Imaging intracellular fluorescent proteins at nanometer resolution[END_REF] and STORM [START_REF] Rust | Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)[END_REF]. With our strategy, the activation process is nevertheless enforced by the structured illumination patterns and (A) not by the fluorescent markers staining the sample. This effect is more visible with the squared illumination patterns and, with these somehow sparser illuminations, the number of patterns needs to be increased so that the fluctuations in m Im is moderate, hence making the equality (2b) a legitimate constraint. We also stress that these simulations corroborate the empirical statement that M ≈ 9 harmonic illuminations and M ≈ 200 speckle illuminations produce comparable super-resolved reconstructions, see Fig. 3(Cleft) and Fig. 5(B-left). Obviously, imaging with random speckle patterns remains an attractive strategy since it is achieved with a very simple experimental setup, see [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF] for details. For both random patterns, we also note that increasing the correlation length above the Rayleigh distance λ 2 NA (i.e., setting NAill < NA) deteriorates the SR whereas, conversely, taking NAill = 2NA enhances it, see Fig. 6-(A,B). However, the resolving power of the joint Blind-SIM estimate deteriorates if the correlation length is further decreased; for instance, uncorrelated speckle patterns are finally found to hardly produce any SR, see Fig. 6-(C). Indeed, with arbitrary small correlation lengths, many "hot spots" tend to be generated within a single Rayleigh distance, leading to this loss in the resolving power. Obviously, the "squared" speckle patterns are less sensitive to this problem because they are inherently sparser.
Finally, the experimental relevance of the simulations involving "squared" speckle illuminations needs to be addressed. Since a twophoton (2P) fluorescence interaction is sensitive to the square of the intensity [START_REF] Gu | Comparison of three-dimensional imaging properties between two-photon and single-photon fluorescence microscopy[END_REF], most of these simulations can actually be considered as wide-field 2P structured illumination experiments. Unlike one-photon (i.e., fully-developed) speckle illuminations 9 , though, a 2P interaction requires an excitation wavelength λill ∼ 1000 nm that is roughly twice the one of the collected fluorescence λdet ∼ 500 nm. The lateral 2P correlation length being λ ill 4NA ill , epi-illumination setups with onephoton (1P) and 2P illuminations provide similar lateral correlation lengths. This 2P instrumental configuration is simulated in Fig. 6(Aright), which does not show any significant SR improvement with respect to 1P epi-illumination interaction shown in Fig. 5(C-left). The increased SR effect driven by "squared" illumination patterns can nevertheless be obtained with 2P interactions if the excitation and the collection are performed with separate objectives. For instance, the behaviors shown in Fig. 5(C-right) and in Fig. 6(B-right) can be obtained if the excitation NA is, respectively, twice and four times the collection NA. With these configurations, the 2P excitation exhibits a correlation length which is significantly smaller than the one driven by the objective PSF, and a strong SR improvement is observed in simulation by joint Blind-SIM. The less spectacular simulation shown in Fig. 6(C-right) can also be considered as a 2P excitation, in the "limit" case of a very low collection NA. The 1P simulation shown in Fig. 6(C-left) rather mock a photo-acoustic imaging experiment [START_REF] Chaigne | Super-resolution photoacoustic fluctuation imaging with multiple speckle illumination[END_REF], an imaging technique for which the illumination lateral correlation length is negligible with respect to the PSF width.
As a final remark, we stress that 2P interactions are not the only way to generate sparse illumination patterns for the joint Blind-SIM. In particular, super-Rayleigh speckle patterns [START_REF] Bromberg | Generating non-Rayleigh speckles with tailored intensity statistics[END_REF] are promising candidates for that purpose.
C. Some reconstructions from real and mock data
The star test-pattern used so far is a simple and legitimate mean to evaluate the resolving power of our strategy [START_REF] Horstmeyer | Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)[END_REF], but it hardly provides a convincing illustration of what can be expected with real data. Therefore, we now consider the processing of more realistic datasets with joint Blind-SIM. In this section, the microscope acquisitions are all designed so that the spatial sampling rate is equal or slightly above the Nyquist rate λ 4NA . As a consequence, a preliminary up-sampling step of the camera acquisitions is performed so that their sampling rate reaches that of the super-resolved reconstruction.
As a first illustration, we consider a real dataset resulting from a test sample composed of fluorescent beads with diameters of 100 nm. A set of 100 one-photon speckle patterns is generated by a spatial light modulator and a laser source operating at λill = 488 nm. The fluorescent light at λcoll = 520 nm is collected through an objective with NA = 1.49 and recorded by a camera. The excitation and the collection of the emitted light are performed through the same objective, i.e., the setup is in epi-illumination mode. The total number of photons per camera pixels is about 65 000. In the perspective of further processing, this set of camera acquisitions is first up-sampled with a factor of two. Figure 7(A-left) shows the sum of these (upsampled) acquisitions, which is similar to a wide-field image. Wiener deconvolution of this image can be performed so that all spatial frequencies transmitted by the OTF are equivalently contributing in a diffraction-limited image of the beads, see Figure 7(A-middle). The processing of the dataset by the joint Blind-SIM strategy shown in Figure 7(A-right) reveals several beads that are otherwise unresolved on the diffraction-limited images, hence demonstrating a clear SR effect. In this case, the distance between the closest pair of resolved beads provides an upper bound for the final resolution, that is λcoll/5. 9 With one-photon interactions, the Stokes shift [START_REF] Lakowicz | Principles of Fluorescence Spectroscopy[END_REF] implies that the excitation and the fluorescence wavelengths are not strictly equivalent. The difference is however negligible in practice (about 10%), hence our assumption that one-photon interactions occur with identical wavelengths for both the excitation and the collection. The experimental demonstration above does not involve any biological sample, and we now consider a simulation designed to be close to a real-world biological experiment. More specifically, the STORM reconstruction of a marked neuron 10 is used as a ground truth to simulate a series of microscope acquisitions generated from onephoton speckle illuminations. Our simulation considers 300 illuminations and acquisitions, both performed through the same objective, at λ = 488 nm and with NA = 1. Each low-resolution acquisition is finally plagued with Poisson noise, the total photon budget being equal to 50 000 so that it fits to the one of a standard fluorescence wide-field image. The sample (ground truth) shown in Figure 7(Bleft) interestingly exhibits a lattice structure with a 190 nm periodicity (in average) that is not resolved by the diffracted-limited image shown in Figure 7(B-middle). The joint Blind-SIM reconstruction in Figure 7(A-right) shows a significant improvement of the resolution, which reveals some parts of the underlying structure.
D. Tuning the regularization parameters
The tuning of parameters α and β in ( 9) is a pivotal issue since inappropriate values result in deteriorated reconstructions. On the one hand, the quadratic penalty in [START_REF] Wicker | Non-iterative determination of pattern phase in structured illumination microscopy using auto-correlations in Fourier space[END_REF] was mostly introduced to ensure that the minimizer defined by ( 8) is unique (via strict convexity of the criterion). However, because high-frequency components in qm are progressively damped as β increases, the latter parameter can 10 A rat hippocampal neuron in culture labelled with an anti-βIV-spectrin primary and a donkey anti-rabbit Alexa Fluor 647 secondary antibodies, imaged by STORM and processed similarly to [START_REF] Leterrier | Nanoscale architecture of the axon initial segment reveals an organized and robust scaffold[END_REF]. also be adjusted in order to prevent an over-amplification of the instrumental noise. A trade-off should nevertheless be sought since large values of β prevent super-resolution to occur. For a given SNR, β is then maintained to a fixed (usually small) value. For instance, we chose β = 10 -6 for all the simulations involving the star pattern in this paper since they were performed with a rather favorable SNR. On the other hand, the quality of reconstruction crucially depends on parameter α. More precisely, larger values of α will provide sparser solutions qm, and thus a sparser reconstructed object ρ. Fig. 8 shows an example of under-regularized and overregularized solutions, respectively corresponding to a too small and a too large value of α. The prediction of the appropriate level of sparsity to seek for each qm, or equivalently the tuning of the regularization parameter α, is not an easy task. Two main approaches can be considered. One relies on automatic tuning. For instance, a simple method called Morozov's discrepancy principle considers that the least-squares terms ym -H qm 2 should be chosen in proportion with the variance of the additive noise, the latter being assumed known [START_REF] Morozov | Methods for Solving Incorrectly Posed Problems[END_REF]. Other possibilities seek a trade-off between ym -H qm 2 and ϕ( qm). This is the case with the L-curve [START_REF] Hansen | Analysis of discrete ill-posed problems by means of the L-curve[END_REF], but also with the recent contribution [START_REF] Song | Regularization parameter estimation for non-negative hyperspectral image deconvolution[END_REF], which deals with a situation comparable to ours. Another option relies on a Bayesian interpretation of qm as a maximum a posteriori solution, which opens the way to the estimation of α marginally of qm. In this setting, Markov Chain Monte Carlo sampling [START_REF] Lucka | Fast Markov chain Monte Carlo sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors[END_REF] or variational Bayes methods [START_REF] Babacan | Variational Bayesian blind deconvolution using a total variation prior[END_REF] could be employed. An alternate approach to automatic tuning consists in relying on a calibration step. It amounts to consider that similar acquisition conditions, applied to a given type of biological samples, lead to similar ranges of admissible values for the tuning of α. The validation of such a principle is however outside the scope of this article as it requires various experimental acquisitions from biological samples with known structures (or, at least, with some calibrated test patterns). 
Concerning the examples proposed in the present section, the much simpler strategy consisted in selecting the reconstruction which is visually the "best" among the reconstructed images with varying α.
IV. A NEW PRECONDITIONED PROXIMAL ITERATION
We now consider the algorithmic issues involved in the constrained optimization problem ( 8)-( 9). For sake of simplicity, the subscript m in ym and qm will be dropped. The reader should however keep in mind that the algorithms presented below only aim at solving one of the M sub-problems involved in the final joint Blind-SIM reconstruction. Moreover, we stress that all simulations presented in this article are performed with a convolution matrix H with a blockcirculant with circulant-block (BCCB) structure. The more general case of block-Toeplitz with Toeplitz-block (BTTB) structure is shortly addressed at the end of Subsection IV-C.
At first, let us note that (8)-(9) is an instance of the more general problem
min_{q ∈ R^N}  [ f(q) := g(q) + h(q) ]   (10)
where g and h are closed-convex functions that may not share the same regularity assumptions: g is supposed to be a smooth function with a L-Lipschitz continuous gradient ∇g, but h does not need to be smooth. Such a splitting aims at solving constrained non-smooth optimization problems by proximal (or forward-backward) iterations.
The next subsection presents the basic proximal algorithm and the well-known FISTA that usually improves the convergence speed.
A. Basic proximal and FISTA iterations
We first present the resolution of (10) in a general setting, then the penalized joint Blind-SIM problem (8) is addressed as our particular case of interest.
1) General setting: Let q^(0) be an arbitrary initial guess; the basic proximal update k → k+1 for minimizing the convex criterion f is [START_REF] Combettes | Signal recovery by proximal forwardbackward splitting[END_REF]-[START_REF] Combettes | Proximal splitting methods in signal processing[END_REF]
q^(k+1) ← P_γh( q^(k) − γ ∇g(q^(k)) )   (11)
where P γh is the proximity operator (or Moreau envelope) of the function γh [37, p.339]
P_γh(q) := argmin_{x ∈ R^N}  { h(x) + (1/(2γ)) ||x − q||^2 }.   (12)
[Caption fragment of Fig. 9] (a) FISTA, 10 iterations. For all these simulations, the initial guess is q^(0) = 0 and the regularization parameters are set to (α = 0.3, β = 10^-6). The PPDS iteration implements the preconditioner of Sec. IV-C with C = H^t H and a = 1.
Although this operator defines the update implicitly, an explicit form is actually available for many of the functions met in signal and image processing applications, see for instance [36, Table 10.2].
The Lipschitz constant L granted to ∇g plays an important role in the convergence of iterations [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF]. In particular, global convergence toward a solution of (10) occurs as long as the step size γ is chosen such that 0 < γ < 2/L. However, the convergence speed is usually very low and the following accelerated version named FISTA [START_REF] Beck | A fast iterative shrinkage-thresholding algorithm for linear inverse problems[END_REF] is usually preferred
q^(k+1) ← P_γh( ω^(k) − γ ∇g(ω^(k)) )   (13a)
ω^(k+1) ← q^(k+1) + ((k−1)/(k+2)) ( q^(k+1) − q^(k) ).   (13b)
The convergence speed toward minq f (q) achieved by ( 13) is O(1/k 2 ), which is often considered as a substantial gain compared to the O(1/k) rate of the basic proximal iteration. It should be noted however that this "accelerated" form may not always provide a faster convergence speed with respect to its standard counterpart, see for instance [START_REF] Combettes | Proximal splitting methods in signal processing[END_REF]Fig. 10.2]. FISTA was nevertheless found to be faster for solving the constrained minimization problem involved in joint Blind-SIM, see Fig. 11. We finally stress that convergence of ( 13) is granted for 0 < γ < 1/L [START_REF] Beck | A fast iterative shrinkage-thresholding algorithm for linear inverse problems[END_REF].
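In practice, iterations (11) and (13) only require two callables, the gradient of g and the proximity operator of γh; the skeleton below is a direct transcription of (13) (a sketch with user-supplied callables, not the authors' implementation).

import numpy as np

def fista(grad_g, prox_h, q0, gamma, n_iter=200):
    # grad_g(q): gradient of the smooth term g; prox_h(q, gamma): proximity operator of gamma*h.
    # gamma: step size, to be chosen in (0, 1/L) with L the Lipschitz constant of grad_g.
    q = q0.copy()
    w = q0.copy()
    for k in range(n_iter):
        q_next = prox_h(w - gamma * grad_g(w), gamma)          # eq. (13a)
        w = q_next + (k - 1.0) / (k + 2.0) * (q_next - q)      # eq. (13b)
        q = q_next
    return q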
2) Solution of the m-th joint Blind-SIM sub-problem: For the penalized joint Blind-SIM problem considered in this paper, the minimization problem [START_REF] Wicker | Phase optimisation for structured illumination microscopy[END_REF] [equipped with the penalty ( 9)] takes the form [START_REF] Negash | Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations[END_REF] with
g(q) = ||y − Hq||^2 + β ||q||^2   (14a)
h(q) = α Σ_n φ(q_n)   (14b)
where φ : R → R ∪ {+∞} is such that
φ(u) := u if u ≥ 0, and +∞ otherwise.   (14c)
The gradient of the regular part in the splitting,
∇g(q) = 2 ( H^t (Hq − y) + β q ),   (15)
is L-Lipschitz-continuous with L = 2 λmax(H t H) + β where λmax(A) denotes the highest eigenvalue of the matrix A. Furthermore, the proximity operator [START_REF] Donoho | Maximum entropy and the nearly black object[END_REF] with h defined by (14b) leads to the well-known soft-thresholding rule [START_REF] Moulin | Analysis of multiresolution image denoising schemes using generalized Gaussian and complexity priors[END_REF], [START_REF] Figueiredo | An EM algorithm for waveletbased image restoration[END_REF] P γh (q) = vect (max{qn -γα, 0}) .
From a practical perspective, both the basic iteration [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF] and its accelerated counterpart [START_REF] Betzig | Imaging intracellular fluorescent proteins at nanometer resolution[END_REF] are easily implemented at a very low computational cost 11 from equations ( 15) and ( 16). For our penalized joint Blind-SIM approach, however, we observed that both algorithms exhibit similar convergence behavior in terms of visual aspect of the current estimate. The convergence speed is also significantly slow: several hundreds of iterations are usually required for solving the M = 200 sub-problems involved in the joint Blind-SIM reconstruction shown in Fig. 5(B). In addition, Fig. 9(ac) shows the reconstruction built with ten, fifty and one thousand FISTA iterations. Clearly, we would like that this latter quality of reconstruction is reached in a reasonable amount of time. The next subsection introduces a preconditioned primal-dual splitting strategy that achieves a much higher convergence speed, as illustrated by Fig. 9(right).
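For the sub-problem (8)-(9), the two ingredients required by a forward-backward skeleton such as the one sketched after (13) are the gradient (15), computable by FFT since H is a convolution, and the soft-thresholding rule (16); the sketch below is our illustration (not the authors' code), assuming periodic boundary conditions and a precomputed OTF array otf.

import numpy as np

def make_grad_g(y, otf, beta):
    # gradient (15): grad g(q) = 2 * (H^t (H q - y) + beta * q), with H applied by FFT
    def grad_g(q):
        Hq = np.real(np.fft.ifft2(otf * np.fft.fft2(q)))
        Ht_residual = np.real(np.fft.ifft2(np.conj(otf) * np.fft.fft2(Hq - y)))
        return 2.0 * (Ht_residual + beta * q)
    return grad_g

def prox_l1_pos(q, gamma, alpha):
    # soft-thresholding with positivity, eq. (16)
    return np.maximum(q - gamma * alpha, 0.0)

# usage sketch (arrays y and otf assumed given, alpha and beta chosen as in the text):
#   L = 2.0 * ((np.abs(otf) ** 2).max() + beta)          # Lipschitz constant of (15)
#   q_hat = fista(make_grad_g(y, otf, beta), lambda q, g: prox_l1_pos(q, g, alpha),
#                 q0=np.zeros_like(y), gamma=0.9 / L)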
B. Preconditioned primal-dual splitting
The preconditioning technique [42, p. 69] is formally equivalent to addressing the initial minimization problem (10) via a linear transformation q := P v, where P ∈ R N ×N is a symmetric positive-definite matrix. There is no formal difficulty in defining a preconditioned version of the proximal iteration [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF]. However, if one excepts the special case of diagonal matrices P [43]- [START_REF] Raguet | Preconditioning of a generalized forwardbackward splitting and application to optimization on graphs[END_REF], the proximity operator of H(v) := h(P v) cannot be obtained explicitly and needs to be computed approximately. As a result, solving a nested optimization problem is required at each iteration, hence increasing the overall computational cost of the algorithm and raising a convergence issue since the sub-iterations must be truncated in practice [START_REF] Becker | A quasi-Newton proximal splitting method[END_REF], [START_REF] Chouzenoux | Variable metric forwardbackward algorithm for minimizing the sum of a differentiable function and a convex function[END_REF]. Despite this difficulty, the preconditioning is widely accepted as a very effective way for accelerating proximal iterations. In the sequel, the versatile primal-dual splitting technique introduced in [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF], [START_REF] Vũ | A splitting algorithm for dual monotone inclusions involving cocoercive operators[END_REF], [START_REF] Combettes | Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators[END_REF] is used to propose a new preconditioned proximal iteration, without any nested optimization problem.
This new preconditioning technique is now presented for the generic problem [START_REF] Negash | Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations[END_REF]. At first, we express the criterion f with respect to the transformed variables
f(Pv) = G(v) + h(Pv)   (17)
11 Since H is a convolution matrix, the computation of the gradient (15) can be performed by fast Fourier transform and vector dot-products, see for instance [START_REF] Vogel | Computational Methods for Inverse Problems[END_REF]Sec. 5.2.3].
with G(v) := g(P v). Since the criterion above is a particular case of the form considered in [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF]Eq. (45)], it can be optimized by a primal-dual iteration [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF]Eq. (55)] that reads
v^(k+1) ← v^(k) − θτ d^(k)   (18a)
ω^(k+1) ← ω^(k) + θ Δ^(k)   (18b)
with
d^(k) := ∇G(v^(k)) + P ω^(k)   (19a)
Δ^(k) := P_{σh*}( ω^(k) + σ P (v^(k) − 2τ d^(k)) ) − ω^(k)   (19b)
where the proximal mapping applied to h*, the Fenchel conjugate function of h, is easily obtained from
P_{σh*}(ω) = ω − σ P_{h/σ}(ω/σ).   (20)
The primal update (18a) can also be expressed with respect to the untransformed variables q:
q^(k+1) ← q^(k) − θτ B ζ^(k)   (21)
with ζ (k) := ∇g(q (k) )+ω (k) and B := P P . Since the update ( 21) is a preconditioned primal step, we expect that a clever choice of the preconditioning matrix B will provide a significant acceleration of the primal-dual procedure. In addition, we note that the quantity
a^(k) := ω^(k) + σ P ( v^(k) − 2τ d^(k) )
involved in the dual step via (19b) also reads
a^(k) := ω^(k) + σ ( q^(k) − 2τ B ζ^(k) ).   (22)
Hereafter, the primal-dual updating pair (18b) and ( 21) is called a preconditioned primal-dual splitting (PPDS) iteration. Following [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF]Theorem 5.1], the convergence of these PPDS iterations is granted if the following conditions are met for the parameters (θ, τ, σ):
σ > 0,  τ > 0,  θ > 0   (23a)
γ_{τ,σ} ∈ [1; 2)   (23b)
γ_{τ,σ} > θ   (23c)
with γ_{τ,σ} := 2 − (τ L/2) [1 − τσ λ_max(B)]^{-1}
, where L is the Lipschitz-continuity constant of ∇G, see Eq. [START_REF] Min | Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery[END_REF]. Within the convergence domain ensured by [START_REF] Denk | Two-photon laser scanning fluorescence microscopy[END_REF], the practical tuning of the parameter set (θ, τ, σ) is tedious as it may impair the convergence speed. We propose the following tuning strategy, which appeared to be very efficient. At first, we note that the step length τ relates only to the primal update (18a) whereas σ relates only to the dual update (18b) via ∆ (k) . In addition, the relaxation parameter θ scales both the primal and the dual steps [START_REF] Oh | Sub-Rayleigh imaging via speckle illumination[END_REF]. Considering only under-relaxation (i.e., θ < 1), (23c) is unnecessary and (23b) is equivalent to the following bound
σ ≤ σ̄  with  σ̄ := (1/τ − L/2) λ_max(B)^{-1}.   (24)
This relation defines an admissible domain for (τ, σ) under the condition θ < 1, see Fig. 10. Our strategy defines τ as the single tuning parameter of our PPDS iteration, the parameter σ being adjusted so that the dual step is maximized:
0 < τ < τ̄,  σ = σ̄  and  θ = 0.99,   (25)
with τ̄ := 2/L. We set θ arbitrarily close to 1 since practical evidence indicates that under-relaxing θ slows down the convergence rate. The numerical evaluation of the bounds τ̄ and σ̄ is application-dependent since they depend on L and λ_max(B).
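Once L and λ_max(B) are evaluated, the tuning rule (24)-(25) reduces to a few lines; the helper below is a sketch of that computation (not the paper's code), with the fraction of the bound τ̄ left as a user choice.

def ppds_parameters(L, lam_max_B, tau_fraction=0.5, theta=0.99):
    # tuning rule (25): 0 < tau < 2/L, sigma set to its upper bound (24), theta just below 1
    tau = tau_fraction * (2.0 / L)
    sigma = (1.0 / tau - L / 2.0) / lam_max_B
    return tau, sigma, theta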
Fig. 10. Admissible domain for (τ, σ) ensuring the global convergence of the PPDS iteration with θ ∈ (0; 1), see Equation [START_REF] Gu | Comparison of three-dimensional imaging properties between two-photon and single-photon fluorescence microscopy[END_REF].
C. Resolution of the joint Blind-SIM sub-problem
For our specific problem, the implementation of the PPDS iteration requires first the conjugate function [START_REF] Labouesse | Fluorescence blind structured illumination microscopy: a new reconstruction strategy[END_REF]: with h defined by (14b), the Fenchel conjugate is easily found and reads
P_{σh*}(ω) = vect( min{ω_n, α} ).   (26)
The updating rule for the PPDS iteration then reads
q^(k+1) ← q^(k) − θτ B ζ^(k)   (27a)
ω^(k+1) ← ω^(k) + θ Δ^(k)   (27b)
with Δ^(k) = vect( min{a_n^(k), α} ) − ω^(k) and a_n^(k) the nth component of the vector a^(k) defined in (22). We note that the positivity constraint is not enforced in the primal update (27a). Primal feasibility (i.e. positivity) therefore occurs only asymptotically thanks to the global convergence of the PPDS iterates toward the minimizer of the penalized criterion (8). Compared to FISTA, this behavior may be considered as a drawback of the PPDS iteration. However, we do believe that the ability of the PPDS iteration to "transfer" the hard constraint from the primal to the dual step is precisely the cornerstone of the acceleration provided by preconditioning. Obviously, such an acceleration requires that the preconditioner B is wisely chosen. For our joint Blind-SIM problem, the preconditioning matrix is derived from the Geman and Yang semi-quadratic construction [START_REF] Geman | Nonlinear image recovery with half-quadratic regularization[END_REF], [51, Eq. (6)]
B = (1/2) (C + β I_d /a)^−1   (28)
where I_d is the identity matrix and a > 0 is a free parameter of the preconditioner. We choose C in the class of positive semidefinite matrices with a BCCB structure [START_REF] Vogel | Computational Methods for Inverse Problems[END_REF]Sec. 5.2.5]. This choice enforces that B is also BCCB, which considerably helps in reducing the computational burden: (i) B can be stored efficiently 12 and (ii) the matrix-vector product Bζ^(k) in (27a) can be computed with O(N log N) complexity by the bidimensional fast Fourier transform (FFT) algorithm. Obviously, if the observation model H is also a BCCB matrix built from the discretized OTF, the choice C = H^tH in (28) leads to B = (∇²g)^−1 for a = 1. Such a preconditioner is expected to bring the fastest asymptotic convergence since it corrects the curvature anisotropies induced by the regular part g in the criterion [START_REF] Negash | Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations[END_REF].
The PPDS pseudo-code for solving the joint Blind-SIM problem is given in Algorithm 1.

Algorithm 1: Pseudo-Code of the joint Blind-SIM PPDS algorithm, assuming that H is a BCCB matrix and C = H^tH. The symbols ⊙ and ⊘ are the component-wise product and division, respectively. For the sake of simplicity, this pseudo-code implements a very simple stopping rule based on a maximum number of minimizing steps, see line 11. In practice, a more elaborate stopping rule could be used by monitoring the norm ||ζ^(k)|| defined by [START_REF] Jost | Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction[END_REF] since it tends towards 0 as q^(k) asymptotically reaches the constrained minimizer of the m-th nested problem.

1  Given quantities:
2    PSF h, Dataset {ym}_{m=1..M}, Average intensity I0 ∈ R^N_+;
3    Regularization parameters: β, α ∈ R+;
4    PPDS parameters: a ∈ R+; θ ∈ (0, 1); τ ∈ (0, 2/L); kmax ∈ N;
6  ρ ← 0; σ ← σ̄ [see (24)];
7  ĥ ← FFT(h); γ̂ ← ĥ* ⊙ ĥ; b̂ ← (2γ̂ + 2β/a);
8  // The outer loop: processing each view ym...
9  for m = 1 ... M do
10   ŷ ← FFT(ym); q̂^(0) ← FFT(q^(0)_m); ω̂^(0) ← FFT(ω^(0)_m);
11   // The inner loop: PPDS minimization...
12   for k = 0 ... kmax do
13     // The primal step (Fourier domain)...
14     d̂^(k) ← (ω̂^(k) − 2(ĥ* ⊙ ŷ − (γ̂ + β) ⊙ q̂^(k))) ⊘ b̂;
15     q̂^(k+1) ← q̂^(k) − θτ d̂^(k);
16     // The dual step (direct domain)...
17     a^(k) ← FFT^−1(ω̂^(k) + σ(q̂^(k) − 2τ d̂^(k)));
18     ω^(k+1) ← (1 − θ) ω^(k) + θ vect(min{a^(k)_n, α});
19     // Prepare next PPDS iteration...
20     q̂^(k) ← q̂^(k+1); ω̂^(k) ← FFT(ω^(k+1));
21   end
22   // Building-up the joint Blind-SIM estimate...
23   ρ ← ρ + (1/M) FFT^−1(q̂^(k)) ⊘ I0;
24 end
25 Final result: The joint Blind-SIM estimate is stored in ρ

This pseudo-code requires that L and λmax(B) are given for the tuning (25): we get
λmax(B) = 1/λmin(B^−1) = a (2β)^−1   (29a)
since H is rank deficient in our context, and the Lipschitz constant, which reads L = λmax(B∇²g), can be further simplified as
L = a if a ≥ 1, and L = (γ̂max + β)(γ̂max + β/a)^−1 otherwise,   (29b)
with γ̂max the maximum of the square magnitude of the OTF components. From the pseudo-code, we also note that the computation of the primal update (27a) remains in the Fourier domain during the PPDS iteration, see line 14. With this strategy (possible because ∇g is a linear function), the computational burden per PPDS iteration 13 is dominated by one single forward/inverse FFT pair, i.e., PPDS and FISTA have equivalent computational burden per iteration. We now illustrate the performance of the PPDS iterations for minimizing the penalized criterion involved in the joint Blind-SIM reconstruction problem shown in Fig. 9-(right). These simulations
were performed with a standard MATLAB implementation of the pseudo-code shown in Algorithm 1. We set a = 1 so that the preconditioner B is the inverse of the Hessian of g in [START_REF] Rust | Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)[END_REF]. With this tuning, we expect that the PPDS iterations exhibit a very favorable convergence rate as long as the set of active constraints is correctly identified. Starting from initial guess q^(0) = 0 (the dual variables being set accordingly to ω^(0) = −∇g(q^(0)), see for instance [START_REF] Bertsekas | Nonlinear programming[END_REF]Sec. 3.3]), the criterion value of the PPDS iteration depicted in Fig. 11 exhibits an asymptotic convergence rate that can be considered super-linear. Other tunings for a (not shown here) were tested and found to slow down the convergence speed. The pivotal role of the preconditioning in the convergence speed is also underlined since the PPDS algorithm becomes as slow as the standard proximal iteration when we set B = I_d, see the "PDS" curve in Fig. 11. In addition, one can note from the reconstructions shown in Fig. 9 that the high-frequency components (i.e., the SR effect) are brought in the very early iterations. Actually, once PPDS is properly tuned, we always found that it offers a substantial acceleration with respect to the FISTA (or the standard proximal) iterates.
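To make the structure of Algorithm 1 concrete, here is a minimal Python/NumPy sketch of one nested sub-problem (our own illustration, not the authors' MATLAB code). It assumes a BCCB observation model given by its OTF and a quadratic smooth term g(q) = ||y − Hq||² + β||q||², which is consistent with the constants (28)-(29); variable names mirror the listing.

import numpy as np

def ppds_subproblem(y, otf, I0_m, alpha, beta, a=1.0, theta=0.99, n_iter=1000):
    """PPDS minimization of one joint Blind-SIM sub-problem (sketch of Algorithm 1)."""
    gamma = np.abs(otf) ** 2                    # |OTF|^2: eigenvalues of H^t H
    b = 2.0 * gamma + 2.0 * beta / a            # eigenvalues of B^{-1}, see (28)
    lam_max_B = a / (2.0 * beta)                # (29a)
    g_max = gamma.max()
    L = a if a >= 1 else (g_max + beta) / (g_max + beta / a)   # (29b)
    tau = 1.0 / L                               # any value in (0, 2/L), see (25)
    sigma = (1.0 / tau - L / 2.0) / lam_max_B   # (24)
    y_f = np.fft.fft2(y)
    q_f = np.zeros_like(y_f)                                 # primal variable (Fourier domain)
    w = np.real(np.fft.ifft2(2.0 * np.conj(otf) * y_f))      # dual variable, set to -grad g(0)
    w_f = np.fft.fft2(w)
    for _ in range(n_iter):
        # Primal step (Fourier domain): d = B (grad g(q) + w)
        d_f = (w_f - 2.0 * (np.conj(otf) * y_f - (gamma + beta) * q_f)) / b
        q_f = q_f - theta * tau * d_f
        # Dual step (direct domain): clipped, relaxed update
        a_k = np.real(np.fft.ifft2(w_f + sigma * (q_f - 2.0 * tau * d_f)))
        w = (1.0 - theta) * w + theta * np.minimum(a_k, alpha)
        w_f = np.fft.fft2(w)
    q = np.real(np.fft.ifft2(q_f))
    return q / I0_m                             # contribution of this view to the sample estimate

Looping such a routine over the M acquisitions and averaging the returned maps reproduces the outer loop of Algorithm 1; this is only a sketch under the stated assumptions, not a drop-in replacement for the reference implementation.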
Finally, let us recall that the numerical simulations were performed with a BCCB convolution matrix H. In some cases, the implicit periodic boundary assumption 14 enforced by such matrices is not appropriate and a convolution model with a zero boundary assumption is preferable, which results in a matrix H with a BTTB structure. In such a case, the product of any vector by H^tH can still be performed efficiently in O(N log N) via the FFT algorithm, see for instance [START_REF] Vogel | Computational Methods for Inverse Problems[END_REF]Sec. 5.2.3]. This applies to the computation of ∇g(q^(k)) in the primal step [START_REF] Jost | Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction[END_REF], according to [START_REF] Mukamel | Statistical deconvolution for superresolution fluorescence microscopy[END_REF].
14 Let us recall that the matrix-vector multiplication Hq with H a BCCB matrix corresponds to the circular convolution of q with the convolution kernel that defines H.
In contrast,
exact system solving as required by ( 21) cannot be implemented in O(N log N ) anymore if matrix H is only BTTB (and not BCCB). In such a situation, one can define C as a BCCB approximation of H t H, so that the preconditioning matrix B = (C +βI d ) -1 remains BCCB, while ensuring that B(H t H +βI d ) has a clustered spectrum around 1 as the size N increases [START_REF] Chan | Conjugate gradient methods for Toeplitz systems[END_REF]Th. 4.6].
Finally, another practical issue arises from the numerical evaluation of L. No direct extension of (29b) is available when H is BTTB but not BCCB. However, according to (25), global convergence of the PPDS iterations is still guaranteed if τ < 2/L̃ with L ≤ L̃. For instance, L̃ := λmax(B)(||H||∞ ||H||1 + β) is an easy-to-compute upper bound of L.
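A hedged sketch of that upper bound is given below (our own helper, not the paper's code); it uses the additional — and slightly looser — bound ||H||1, ||H||∞ ≤ Σ|h|, which holds for a convolution matrix built from the kernel h.

import numpy as np

def lipschitz_upper_bound(psf, beta, lam_max_B):
    """Easy-to-compute upper bound of L when H is BTTB (not BCCB).

    Implements L_tilde = lam_max(B) * (||H||_inf * ||H||_1 + beta), with both
    operator norms bounded here by the l1-norm of the convolution kernel.
    """
    h1 = np.abs(psf).sum()
    return lam_max_B * (h1 * h1 + beta)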
V. CONCLUSION
The speckle-based fluorescence microscope proposed in [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF] holds the promise of a super-resolved optical imager that is cheap and easy to use. The SR mechanism behind this strategy, that was not explained, is now properly linked with the sparsity of the illumination patterns. This readily relates joint Blind-SIM to localization microscopy techniques such as PALM [START_REF] Betzig | Imaging intracellular fluorescent proteins at nanometer resolution[END_REF] where the image sparsity is indeed brought by the sample itself. This finding also suggests that "optimized" random patterns can be used to enhance SR, one example being the two-photon excitations proposed in this paper. Obviously, even with such excitations, the massively sparse activation process at work with PALM/STORM remains unparalleled and one may not expect a resolution with joint Blind-SIM greater than twice or three times the resolution of a wide-field microscope. We note, however, that this analysis of the SR mechanism is only valid when the sample and the illumination patterns are jointly retrieved. In other words, this article does not tell anything about the SR obtained from marginal estimation techniques that estimates the sample only, see for instance [START_REF] Min | Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery[END_REF]- [START_REF] Chaigne | Super-resolution photoacoustic fluctuation imaging with multiple speckle illumination[END_REF]. Indeed, the SR properties of such "marginal" techniques are rather distinct [START_REF] Idier | A theoretical analysis of the super-resolution capacity of imagers using unknown speckle illuminations[END_REF].
From a practical perspective, the joint Blind-SIM strategy should be tested shortly with experimental datasets. One expected difficulty arising in the processing of real data is the strong background level induced in the focal plane by the out-of-focus light. This phenomenon prevents the local extinction of the excitation intensity, hence destroying the expected SR in joint Blind-SIM. A natural approach would be to solve the reconstruction problem in its 3D structure, which is numerically challenging, but remains a mandatory step to achieve 3D speckle SIM reconstructions [START_REF] Negash | Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations[END_REF]. The modeling of the out-of-focus background with a very smooth function is possible [START_REF] Orieux | Bayesian estimation for optimized structured illumination microscopy[END_REF] and will be considered for a fast 2D reconstruction of the sample in the focal plane.
Another important motivation of this work is the reduction of the computational time in joint Blind-SIM reconstructions. The reformulation of the original (large-scale) minimization problem is a first pivotal step as it leads to M sub-problems, all sharing the same structure, see Sec. II-A. The new preconditioned proximal iteration proposed in Sec. IV-B is also decisive as it efficiently tackles each sub-problem. In our opinion, this "preconditioned primal-dual splitting" (PPDS) technique is of general interest as it yields preconditioned proximal iterations that are easy to implement and provably convergent. For our specific problem, the criterion values are found to converge much faster with the PPDS iteration than with the standard proximal iterations (e.g., FISTA). We do believe, however, that PPDS deserves further investigations, both from the theoretical and the experimental viewpoints. This minimization strategy should be tested with other observation models and prior models. For example, as a natural extension of this work, we will consider shortly the Poisson distribution in the case of image acquisitions with low photon counting rates. The global and local convergence properties of PPDS should be explored extensively, in particular when the preconditioning matrix varies over the iterations. This issue is of importance if one aims at defining quasi-Newton proximal iterations with PPDS in a general context.
1 Fig. 1. [Row A] Lower-right quarter of the (160×160 pixels) groundtruth fluorescence pattern considered in [1] (left) and deconvolution of the corresponding wide-field image (right). The dashed (resp. solid) lines corresponds to the spatial frequencies transmitted by the OTF support (resp. twice the OTF support). [Row B] Positivity-constrained reconstruction from known illumination patterns: (left) M = 9 harmonic patterns and (right) M = 200 speckle patterns. The distance units along the horizontal and vertical axes are given in wavelength λ.
2 Fig. 2. [Row A] One product image qm = vect(ρn ×Im;n) built from one of the 200 illumination patterns used for generating the dataset: (left) a positive constant is added to the standard speckle patterns so that the lowest value is much greater that zero; (right) a positive constant is subtracted to the standard speckle patterns and negative values are set to zero. [Row B] Reconstruction of the product image qm that corresponds to the one shown above. [Row C] Final reconstruction ρ achieved with the whole set of illuminations -see Subsection II-B for details.
3 Fig. 3. Harmonic patterns: [Row A] One illumination pattern Im drawn from the set of regular (left) and distorted (right) harmonic patterns. [Row B] Corresponding penalized joint Blind-SIM reconstructions. [Row C] (left) Decreasing the number of phase shifts from 6 to 3 brings some reconstruction artifacts, see (B-left) for comparison. (right) Increasing the modulation frequency ||ν|| of the harmonic patterns above the OTF cutoff frequency prevents the super-resolution to occur. [Row D] Low-resolution image ym drawn from the dataset for a modulation frequency ||ν|| lying inside (left) and outside (right) the OTF domain-see Sec. III-A for details.
4 Fig. 4. Speckle patterns: [Row A] One speckle illumination such that NA ill = NA (left) and its "squared" counterpart (right). [Row B] Corresponding penalized joint Blind-SIM reconstructions from M = 1000 speckle (left) and "squared" speckle (right) patterns
5 Fig. 5. Speckle patterns (continued): Penalized joint Blind-SIM reconstructions from standard speckle (left) and "squared" speckle (right) patterns. The number of illumination patterns considered for reconstruction is M = 10 (A), M = 200 (B) and M = 10000 (C).
6 Fig. 6. Speckle patterns (continued): The correlation length of speckle and "squared" speckle patterns drives the level of super-resolution in the penalized joint Blind-SIM reconstruction: [Rows A] reconstruction from M = 10000 speckle patterns with NA ill = 0.5 NA (left) and from the corresponding "squared" random-patterns (right). [Rows B] idem with NA ill = 2 NA. [Rows C] idem with uncorrelated patterns.
7 Fig. 7. Processing of real and mock data: [Row A] Fluorescent beads with diameters of 100 nm are illuminated by 100 fully-developed (i.e., onephoton) speckle patterns through an illumination/collection objective (NA = 1.49). The sum of the acquisitions of the fluorescent light (left) and its Wiener deconvolution (middle) provide diffraction limited images of the beads. The joint Blind-SIM reconstruction performed with the hyper-parameters set to β = 5 × 10 -5 and α = 0.4 is significantly more resolved (right). The sampling rate used in these images is 32.5 nm, corresponding to an up-sampling factor of two with respect to the camera sampling. [Row B] STORM reconstruction of a marked rat neuron showing a lattice structure with a 190-nm periodicity (left). Deconvolution of the simulated wide-field image (middle). Joint Blind-SIM reconstruction of the sample obtained from 300 (one-photon) speckle patterns; the hyper-parameters are set to β = 2 × 10 -5 and α = 1.5 (right). The sampling rate of the STORM ground-truth image is 11.4 nm. The sampling rate of the joint Blind-SIM reconstruction is 28.5 nm, corresponding to an up-sampling factor of four with respect to the camera sampling. The distance units along the horizontal and vertical axes are given in wavelength λ coll , i.e., 520 nm in row A and 488 nm in row B.
8 Fig.8. Penalized Blind-SIM reconstructions from the dataset used to generate the super-resolved reconstruction shown in Fig.4(B-left). The hyper-parameter β was set to 10 -6 in any case, and α was set to 10 -3 (left) and 0.9 (right). For the sake of comparison, our tuning for the reconstruction shown in Fig.4(B-left) is β = 10 -6 and α = 0.3.
9 Fig. 9. Harmonic joint Blind-SIM reconstruction of the fluorescence pattern achieved by the minimization of the criterion (8) with 10, 50 or 1000 FISTA (abc) or PPDS (def) iterations. For all these simulations, the initial-guess is q (0) = 0 and the regularization parameters is set to (α = 0.3, β = 10 -6 ). The PPDS iteration implements the preconditioner given in[START_REF] Leterrier | Nanoscale architecture of the axon initial segment reveals an organized and robust scaffold[END_REF] with C = H t H and a = 1, see Sec IV-C for details.
12 Any BCCB matrix B reads B = F†ΛF with F the unitary discrete Fourier transform matrix, '†' the transpose-conjugate operator, and Λ := Diag(b̂) where b̂ := vect(b̂_n) are the eigenvalues of B, see for instance [41, Sec. 5.2.5]. As a result, the storage requirement reduces to the storage of b̂.
Fig. 11. Criterion value (upper plots) and distance to the minimizer (lower plots) as a function of the PPDS iterations for the reconstruction problem considered in Fig. 9. The chosen initial-guess is q^(0) = 0 for the primal variables and ω^(0) = −∇g(q^(0)) for the dual variables. The preconditioning parameter is set to a = 1 and (θ, τ, σ) were set according to the tuning rule (25). For the sake of completeness, the curves of the FISTA iterations and the PDS iterations (i.e., the PPDS equipped with the identity preconditioning matrix B = I_d) are also reported.
Whenever ρn = 0, the corresponding entry in the illumination pattern estimates (4b) can be set to Im;n = I 0;n /M for all m, hence preserving the positivity (2c) and the constraint (2b).
A constrained quadratic problem such as (3) is strictly convex if and only if the matrix H is full rank. In our case, however, H is rank deficient since its spectrum is the OTF that is strictly support-limited.
Assuming a fully-developed speckle, the fluctuation in Im;n is driven by an exponential pdf with parameter I 0 whereas the pdf of the "squared" pointwise intensity Jm;n := I 2 m,n is a Weibull distribution with shape parameter k = 0.5 and scale parameter λ = I 2 0 .
The MATLAB implementation of the PPDS pseudo-code Algorithm 1 requires less than 6 ms per iteration on a standard laptop (Intel Core M 1.3 GHz). For the sake of comparison, one FISTA iteration takes almost 5 ms on the same laptop.
ACKNOWLEDGMENTS
The authors are grateful to the anonymous reviewers for their valuable comments, and to Christophe Leterrier for the STORM image used in Section III.
Agence Nationale de la Recherche (ANR-12-BS03-0006 | 72,647 | [
"6837",
"15503",
"1123807",
"16797"
] | [
"199338",
"445088",
"199338",
"1088564",
"199338",
"217752",
"473973",
"199338",
"473973"
] |
01460742 | en | [
"chim",
"sde"
] | 2024/03/05 22:32:13 | 2017 | https://univ-rennes.hal.science/hal-01460742/file/Causse%20et%20al.%20-%20Direct%20DOC%20and%20nitrate%20determination%20in%20water%20usin.pdf | Jean Causse
Olivier Thomas
Aude-Valérie Jung
Marie-Florence Thomas
email: [email protected]
Direct DOC and nitrate determination in water using dual pathlength and second derivative UV spectrophotometry
Keywords: UV spectrophotometry, second derivative, nitrate, DOC, freshwaters, dual optical pathlength
spectrophotometric measurement of raw samples (without filtration) coupling a dual pathlength for spectra acquisition and the second derivative exploitation of the signal is proposed in this work. The determination of nitrate concentration is carried out from the second derivative of the absorbance at 226nm corresponding at the inflexion point of nitrate signal decrease. A short optical pathlength can be used considering the strong absorption of nitrate ion around 210nm. For DOC concentration determination the second derivative absorbance at 295nm is proposed after nitrate correction. Organic matter absorbing slightly in the 270-330nm window, a long optical pathlength must be selected in order to increase the sensitivity. The method was tested on several hundred of samples from small rivers of two agricultural watersheds located in Brittany, France, taken during dry and wet periods. The comparison between the proposed method and the standardised procedures for nitrate and DOC measurement gave a good adjustment for both parameters for ranges of 2-100 mg/L NO3 and 1-30 mg/L DOC.
Introduction
Nutrient monitoring in water bodies is still a challenge. The knowledge of nutrient concentrations as nitrate and dissolved organic carbon (DOC) in freshwater bodies is important for the assessment of the quality impairment of water resources touched by eutrophication or harmful algal blooms for example. The export of these nutrients in freshwater is often characterized on one hand, by a high spatio-temporal variability regarding seasonal change, agricultural practices, hydrological regime, tourism and on the other hand, by the nature and mode of nutrient sources (punctual/diffuse, continuous/discontinuous) [START_REF] Causse | Variability of N export in water: a review[END_REF]. In this context the monitoring of nitrate and DOC must be rapid and
easy to use on the field and UV spectrophotometry is certainly the best technique for that, given the great number of works, applications and systems proposed in the last decades.
Nitrate monitoring with UV sensing is a much more mature technique than DOC assessment by UV because nitrate ion has a specific and strong absorption. Several methods are available for drawing a relationship between UV absorbance and nitrate concentration using wavelength(s) around 200-220 nm, usually after sample filtration to eliminate interferences from suspended solids. Considering the presence of potential interferences such as dissolved organic matter (DOM) in real freshwater samples, the use of at least two wavelengths increases the quality of adjustment. The absorbance measurement at 205 and 300 nm was proposed by [START_REF] Edwards | Determination of Nitrate in Water Containing[END_REF] and the second derivative absorbance (SDA) calculated from three wavelengths was promoted by Suzuki and Kuroda (1987) and [START_REF] Crumpton | Nitrate and organic N analysis with 2nd-derivative spectroscopy[END_REF]. A comparison of the two methods (two wavelengths and SDA), carried out on almost 100 freshwater samples from different stations in a 35 km2 watershed, gave data comparable with ion chromatography analysis (Olsen, 2008). Other methods based on the exploitation of the whole UV spectrum were also proposed in the last decades, namely for wastewater and sea water, with the aim of a better treatment of interferences. Several multiwavelength methods were thus designed, such as the polynomial modelisation of UV responses of organic matter and colloids (Thomas et al., 1990), a semi-deterministic approach including reference spectra (nitrate) and experimental spectra of organic matter, suspended solids and colloids (Thomas et al., 1993), or a partial least squares regression (PLSR) method built into a field portable UV sensor (Langergraber et al., 2003). Kröckel et al. (2011) proposed a combined method of exploitation (multi-component analysis (MCA) integrating reference spectra and a polynomial modelisation of humic acids), associated with a miniaturized spectrophotometer with a capillary flow cell. More recently, a comparison between two different commercial in situ spectrophotometers, a double wavelength spectrophotometer (DWS) and a multiwavelength one (MWS) with PLSR resolution, was carried out by Huebsch et al. (2015) for groundwater monitoring. The findings were that the MWS offers more possibilities for calibration and error detection, but requires more expertise compared with the DWS.
Contrary to UV measurement of nitrate in water, DOC is associated with a bulk of dissolved organic matter (DOM) with UV absorption properties less known and defined than nitrate.
The study of the relation between absorbing DOM (chromophoric DOM or CDOM) and DOC has given numerous works on the characterisation of CDOM by UV spectrophotometry or fluorescence on one hand, and on the assessment of DOC concentration from the measurement of UV parameters on the other hand. Historically the absorbance at 254nm (A254), 254 nm being the emission wavelength of the low pressure mercury lamp used in the first UV systems, was the first proxy for the estimation of Total organic carbon [START_REF] Dobbs | The use of ultra-violet absorbance for monitoring the total organic carbon content of water and wastewater[END_REF], and was standardised in 1995 [START_REF] Eaton | Measuring UV absorbing organics: a standard method[END_REF]. Then the specific UV absorbance, the ratio of the absorbance at 254 nm (A254) divided by the DOC value was also standardized ten years after (Potter and Wimsatt, 2005). Among the more recent works, Spencer et al. (2012) shown strong correlations between CDOM absorption (absorbance at 254 and 350 nm namely) and DOC for a lot of samples from 30 US watersheds. [START_REF] Carter | Freshwater DOM quantity and quality from a two-component model of UV absorbance[END_REF] proposed a two component model, one absorbing strongly and representing aromatic chromophores and the other absorbing weakly and associated with hydrophilic substances. After calibration at 270 and 350 nm, the validation of the model for DOC assessment was quite satisfactory for 1700 filtered surface water samples from North America and the UK. This method was also used for waters draining upland catchments and it was found that both a single wavelength proxy (263 nm or 230 nm) and a two wavelengths model performed well for both pore water and surface water (Peacock et al., 2014). Besides these one or two wavelengths methods, the use of chemometric ones was also proposed at the same time as nitrate determination from a same spectrum acquisition (Thomas et al., 1993); (Rieger et al., 2004) [START_REF] Avagyan | Application of high-resolution spectral absorbance measurements to determine dissolved organic carbon concentration in remote areas[END_REF] from the signal of a UV-vis submersible sensor with the recommendation to create site-specific calibration models to achieve the optimal accuracy of DOC quantification.
Among the above methods proposed for UV spectra exploitation, the second derivative of the absorbance (SDA) is rather few considered even if SDA is used in other application fields to enhance the signal and resolve the overlapping of peaks [START_REF] Bosch Ojeda | Recent applications in derivative ultraviolet/visible absorption spectrophotometry: 2009-2011[END_REF]. Applied to the exploitation of UV spectra of freshwaters SDA is able to suppress or reduce the signal linked to the physical diffuse absorbance of colloids and particles and slight shoulders can be revealed (Thomas and Burgess, 2007). If SDA was proposed for nitrate [START_REF] Crumpton | Nitrate and organic N analysis with 2nd-derivative spectroscopy[END_REF], its use for DOC has not been yet reported as well as a simultaneous SDA method for nitrate and DOC determination of raw sample (without filtration). This can be explained by the difficulty to obtain a specific response of organic matter and nitrate, in particular in the presence of high concentration of nitrate or high turbidity that cause spectra saturation and interferences. In this context, the aim of this work is to propose a new method to optimize the simultaneous measurement of DOC and nitrate using both dual optical pathlength and second derivative UV spectrophotometry.
Material and methods
Water samples
Water samples were taken from the Ic and Frémur watersheds (Brittany, France) through very different conditions during the hydrological year 2013-2014. These two rural watersheds of 86 km² and 77 km² respectively are concerned by water quality alteration with risks of green algae tides, closures of some beaches and contamination of seafood at their outlet. 580 samples were taken from spot-sampling (342 samples) on 32 different subwatersheds by dry or wet weather (defined for 5 mm of rain or more, 24 h before sampling) and by auto-
sampling (233 samples) during flood events on 3 subwatersheds. For wet weather, sampling was planned by spot-sampling before the rain, or programmed according to the local weather forecast to ensure a sample collection proportional to the flow. Samples were collected in 1L polyethylene bottles (24 for auto-sampler ISCO 3700) following the best available practices.
Samples were transported to the laboratory in a cooler and stored at 5 ± 3 °C (NF EN ISO 5667-3, 2013).
Data acquisition
Nitrate concentration was analyzed according to NF EN ISO 13395 standard thanks to a continuous flow analyzer (Futura Alliance Instrument). Dissolved organic carbon (DOC) was determined by thermal oxidation coupled with infrared detection (Multi N/C 2100, Analytik Jena) following acidification with HCl (standard NF EN 1484). Samples were filtered prior to the measurement with 0.45 µm HA Membrane Filters (Millipore®).
Turbidity (NF EN ISO 7027, 2000) was measured in situ for each sample, with a multiparameter probe (OTT Hydrolab MS5) for spot-sampling and with an Odeon probe (Neotek-Ponsel, France) for auto-sampling stations.
Finally discharge data at hydrological stations were retrieved from the database of the national program of discharge monitoring.
UV measurement
Spectra acquisition
UV spectra were acquired with a Perkin Elmer Lambda 35 UV/Vis spectrophotometer, between 200 and 400 nm with different Suprasil® quartz cells (acquisition step: 1 nm, scan speed: 1920 nm/min). Two types of quartz cells were used for each sample. A short path length cell of 2 mm was firstly used to avoid absorbance saturation in the wavelength domain strongly influenced by nitrate below 240 nm (linearity limited to 2.0 a.u.). On the contrary, a
longer pathlength cell (20 mm) was used in order to increase the signal for wavelengths outside of the influence of nitrates (> 240 nm approx.). Regarding a classic UV spectrophotometer with a pathlength cell of 10 mm, these dual pathlength devices act as a spectrophotometric dilution/concentration system, adapted to a high range of variation of nitrate concentrations in particular.
Preliminary observation
Before explaining the proposed method, a qualitative relation between UV spectra shape and water quality can be reminded. Figure 1 shows two spectra of raw freshwaters with the same nitrate concentration (9.8 mgNO3/L) taken among samples of the present work. These spectra are quite typical of freshwaters. If the nitrate signal is well identified with the half Gaussian below 240 nm, the one of organic matter responsible for DOC is very weak with a very slight shoulder above 250 nm. In this context, the use of SDA already proposed for nitrate determination [START_REF] Crumpton | Nitrate and organic N analysis with 2nd-derivative spectroscopy[END_REF] and giving a maximum for any inflexion point in the decreasing part of the signal after a peak or a shoulder can be useful. However, given the absorbance values above 250 nm, the use of a longer optical pathlength is recommended in order to increase the sensitivity of the method.
Methodology
The general methodology is presented in Figure 2. Firstly, a UV spectrum is obtained directly from a raw sample (without filtration or pretreatment) with a 2 mm pathlength (PL) cell. If the absorbance value at 210 nm (A210) is greater than 2 a.u., a dilution with distilled water must be carried out. If not, the second derivative of the absorbance (SDA) at 226 nm is used for nitrate determination. The SDA value at a given wavelength λ is calculated according to equation 1 (Thomas and Burgess 2007):
SDA_λ = k (A_{λ−h} + A_{λ+h} − 2 A_λ) / h²   [1]
where A λ is the absorbance value at wavelength λ, k is an arbitrary constant (chosen here equal to 1000) and h is the derivative step (here set at 10 nm).
Given the variability between successive SDA values linked to the electronic noise of the spectrophotometer, a smoothing step of the SDA spectrum is sometimes required, particularly when the initial absorbance values are low (< 0.1 a.u.). This smoothing step is based on the Savitzky-Golay method (Savitzky and Golay, 1964).
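As an illustration of equation 1 and of the optional smoothing step, a minimal Python implementation could be written as follows (the function names and the use of SciPy's Savitzky-Golay filter are our choices and only a sketch, not code from this study):

import numpy as np
from scipy.signal import savgol_filter

def second_derivative_absorbance(wavelengths, absorbance, h=10, k=1000,
                                 smooth=False, window=11, polyorder=3):
    """Second derivative of a UV spectrum following equation 1.

    wavelengths : 1-D array (nm), regularly spaced (1 nm step in this work),
    absorbance  : 1-D array of absorbance values (a.u.),
    h           : derivative step (nm), k : arbitrary scaling constant.
    """
    A = np.asarray(absorbance, dtype=float)
    step = int(round(h / (wavelengths[1] - wavelengths[0])))
    sda = np.full_like(A, np.nan)
    sda[step:-step] = k * (A[:-2 * step] + A[2 * step:] - 2.0 * A[step:-step]) / h**2
    if smooth:  # optional Savitzky-Golay smoothing of the SDA spectrum
        sda[step:-step] = savgol_filter(sda[step:-step], window, polyorder)
    return sda

def sda_at(wavelengths, sda, target_nm):
    """Convenience accessor, e.g. SDA at 226 nm or 295 nm."""
    idx = int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))
    return sda[idx]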
For DOC measurement, the SDA value at 295 nm is used if A250 is greater than 0.1 a.u.. If A250 is lower than 0.1, the intensity of absorbances must be increased with the use of a 20mm pathlength cell. After the SDA295 calculation, a correction from the value of SDA226 linked to the interference of nitrate around 300nm is carried out. This point will be explained in the DOC calibration section. From the results of SDA values and the corresponding concentration of nitrate, a calibration is obtained for nitrate concentration ranging up to 100 mgNO 3 /L (Figure 4). This high value of nitrate concentration is possible thanks to the use of the 2 mm pathlength cell. Deduced from the calibration line, the R2 value is very close to 1 and the limit of detection (LOD) is 0.32 mgNO 3 /L. For DOC calibration, the procedure is different from the one for nitrate given the absence of standard solution for DOC, covering the complexity of dissolved organic matter. A test set of 49 samples was chosen among samples described hereafter, according to their DOC concentration up to 20 mgC/L. The choice of the SDA value at 295 nm is deduced from the examination of the second derivative spectra of some samples of the test set (Figure 5). Two peaks can be observed, the first one around 290 nm, and the second one less defined, around 330 nm (Figure 5a). The maximum of the first peak is linked to the DOC content, but its position shifts between 290 and 300 nm, because of the relation between DOC and nitrate concentration, with relatively more important SDA values when DOC is low (Thomas et al.
2014).
Considering that the measurement is carried out with a long optical pathlength for DOC, and that nitrate also absorbs in this region, its presence must be taken into account. In Figure 5a, the second derivative spectrum of a 50 mgNO3/L nitrate solution presents a valley (negative peak) around 310 nm and a small but broad peak around 330 nm. Based on this observation, a correction is proposed for the SDA of the different samples (equation 2):
SDA* = SDA_sample − SDA_nitrate   [2]
where SDA* is the corrected SDA, SDA_sample is the SDA calculated from the spectrum acquisition of a given sample and SDA_nitrate is the SDA value corresponding to the nitrate concentration of that sample.
After correction, the second derivative spectra show only a slight shift of the first peak around 300 nm, and the peak around 330 nm is no longer present (Figure 5b). From this observation, the SDA value at 295 nm is chosen for DOC assessment.
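A sketch of the resulting computation chain (nitrate from SDA at 226 nm, nitrate correction of SDA at 295 nm following equation 2, then DOC) could look as follows. The calibration coefficients are placeholders to be fitted on standard nitrate solutions and on samples of known DOC, not values from this work, and the linearity of the nitrate contribution at 295 nm is an assumption of the sketch.

def estimate_nitrate(sda226, slope_no3, intercept_no3=0.0):
    """Nitrate concentration (mg NO3/L) from SDA at 226 nm (linear calibration)."""
    return slope_no3 * sda226 + intercept_no3

def corrected_sda295(sda295_sample, nitrate_mgL, sda295_per_mg_nitrate):
    """Equation 2: subtract the SDA contribution of nitrate at 295 nm."""
    return sda295_sample - nitrate_mgL * sda295_per_mg_nitrate

def estimate_doc(sda295_star, slope_doc, intercept_doc=0.0):
    """DOC concentration (mg C/L) from the nitrate-corrected SDA at 295 nm."""
    return slope_doc * sda295_star + intercept_doc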
Samples characteristics
For this work, a great number of samples were necessary for covering the different subwatersheds characteristics and the variability of hydrometeorological conditions all along the hydrological year. 580 samples were taken from 32 stations and the majority of samples were taken in spring and summer time with regard to the principal land use of the two watersheds and the corresponding agricultural practices, namely fertilization (Figure 7).
Nitrate and DOC concentrations ranged respectively from 2.9 to 98.5 mgNO3/L and from 0.7 to 28.9 mgC/L. The river flows were between 0.8 L/s and 6299 L/s and turbidity between 0.1 and 821 NTU, after the rainy periods.
Validation on freshwater samples
The validation of the method for nitrate determination was carried out on 580 samples (Figure 8). The adjustment between measured and estimated values of nitrate concentration gave a R 2 greater than 0.99 and a RMSE of 2.32 mgNO 3 /L. The slope is close to 1 and the ordinate is slightly negative (-1.68) which will be explained in the discussion section. The validation of the method for DOC determination was carried out on 580 samples (Figure 9). The adjustment between measured and estimated values of DOC concentration gave a R 2 greater than 0.95 and a RMSE close to 1 mgC/L. The slope is close to 1 and the ordinate is low (0.086 mgC/L).
Interferences
Except for DOC assessment for which the value of SDA at 295 nm must be corrected by the presence of nitrate, different interferences have to be considered in nitrate measurement.
Since nitrate absorbs in the first exploitable window of UV spectrophotometry (measurement below 200 nm being quite impossible given the strong absorption of dioxygen), the presence of nitrite, with a maximum of absorption at 213 nm, could be a problem. However, the molar absorption coefficient of nitrite is equal to half the value of nitrate (Thomas and Burgess, 2007) and usual concentrations of nitrite are much lower in freshwaters than nitrate ones (Raimonet et al., 2015). For other interferences linked to the presence of suspended solids or colloids (for raw samples) and organic matter for nitrate determination, Figure 10 shows some examples of spectral responses for samples of the test set taken under contrasted conditions (dry and wet weather) and corrected from nitrate absorption (i.e., the contribution of nitrate absorption is deduced from the initial spectrum of each sample). The spectral shape is mainly explained by the combination of physical (suspended solids and colloids) and chemical (DOM) responses. Suspended solids are responsible for a linear decrease of absorbance up to 350 nm and more, colloids for an exponential decrease between 200 and 240 nm, and the main effect of the presence of DOM is the shoulder shape between 250 and 300 nm, the intensity of which is linked to the DOC content. Thus the spectral shape is not linear around the inflexion point of the nitrate spectrum (226 nm), and the corresponding second derivative values, being low at 226 nm, give a theoretical concentration under 2 mgNO3/L at maximum. This observation explains the slight negative ordinate of the validation curve (Figure 8). In order to confirm the need for nitrate correction of SDA at 295 nm for DOC determination, the adjustment between DOC and SDA at 295 nm without nitrate correction was carried out for the same set of samples as for DOC calibration (Figure 6). Compared to the characteristics of the corrected calibration line, the determination coefficient is lower (0.983 against 0.996) and the slope is greater (1.2 times), as well as the ordinate (7.9 times), the RMSE (5.6 times) and the LOD (5.4 against 1.1 mgC/L). These observations can be explained by the shift of the peak (around 290-295 nm) and the hypochromic effect of nitrate on the SDA value of the sample at 295 nm (see Figure 5), showing the importance of the nitrate correction for DOC determination.
Another interfering substance can be free residual chlorine absorbing almost equally at 200nm and 291nm with a molar absorption coefficient of 7.96*10 4 m 2 /mol at 291nm (Thomas O. and Burgess C., 2007), preventing the use of the method for chlorinated drinking waters.
Optical pathlength influence for NO3 and DOC
Two optical pathlengths are proposed for the method, a short one (2 mm) for nitrate determination and a longer one (20 mm) for DOC (Figure 2). However, considering that the optimal spectrophotometric range of UV spectrophotometers, between 0.1 and 2.0 a.u. (Thomas and Burgess, 2007), must be respected, other optical pathlengths can be chosen for some water samples depending on their UV response. If the absorbance value of a sample is lower than 0.1 a.u. at 200 nm with the 2 mm optical pathlength, a 20 mm quartz cell must be used. Similarly, if the absorbance value is lower than 0.1 at 300 nm with the proposed optical pathlength of 20 mm, a 100 mm one must be used. This can be the case when nitrate or DOC concentration is very low, given the inverse relationship often existing between these two parameters (Thomas et al., 2014). A comparison of the use of different optical pathlengths for DOC estimation gives an R2 value of 0.70 for the short pathlength (2 mm) against 0.96 for the recommended one (20 mm). Finally, the choice of a dual pathlength measurement was recently proposed by [START_REF] Chen | Development of variable pathlength UV-vis spectroscopy combined with partial-least-squares regression for wastewater chemical oxygen demand (COD) monitoring[END_REF] to successfully improve the chemical oxygen demand estimation in wastewater samples by using a PLS regression model applied to the two spectra.
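The pathlength rules described above can be summarised in a small helper (an illustrative sketch only; the thresholds follow the text):

def choose_pathlengths(A210_2mm, A200_2mm, A300_20mm):
    """Suggest optical pathlengths (mm) and a dilution flag from quick absorbance checks."""
    dilute = A210_2mm > 2.0                    # nitrate region saturated: dilute with distilled water
    nitrate_cell = 20 if A200_2mm < 0.1 else 2
    doc_cell = 100 if A300_20mm < 0.1 else 20
    return nitrate_cell, doc_cell, dilute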
Conclusion
A simple and rapid method for the UV determination of DOC and nitrate in raw freshwater samples, without filtration, is proposed in this work:
-Starting from the acquisition UV absorption spectra with 2 optical pathlengths (2 and 20 mm), the second derivative values at 226 and 295 nm are respectively used for nitrate and DOC measurement.
- After a calibration step with standard solutions for nitrate and known DOC content samples for DOC, LODs of 0.3 mgNO3/L for nitrate and 1.1 mgC/L were obtained for ranges up to 100 mgNO3/L and 0-25 mgC/L.
- Given its simplicity, this method can be handled without chemometric expertise and adapted on site with field portable UV sensors or spectrophotometers.
It is the first UV procedure based on the use of the second derivative absorbance at 295nm for DOC determination, and calculated after correction of nitrate interference from the acquisition of the UV absorption spectrum with a long optical pathlength (20mm or more). This is a simple way to enhance the slight absorption shoulder around 280-300nm due to the presence of organic matter. Moreover the interferences of suspended matter and colloids being negligible on the second derivative signal, the measurement can be carried out for both parameters on raw freshwater samples without filtration. Finally, even if the validation of the method was carried out on a high number of freshwater samples covering different hydrological conditions, further experimentations should be envisaged in order to check the applicability of the method to the variability of DOM nature.
Figure 1: Example of UV spectra of raw freshwaters with the same concentration of nitrate
Figure 2 : General methodology
Figure 3 shows spectra and second derivatives of standard nitrate solutions. Nitrate strongly absorbs around 200-210 nm with a molar absorption coefficient of 8.63×10^5 m²/mol at 205.6 nm.
Figure 3: Spectra of standard solutions of nitrate (raw absorbances left, and SDA right)
Figure 4: Calibration line for nitrate determination from SDA at 226 nm (R² being equal to 1)
Figure5: Second derivative spectra of water samples without nitrate correction (5a with a
Fig 6: Calibration between SDA at 295 nm corrected by nitrate (SDA*295) and DOC
Figure 7: Relevance of samples in relation to land use, seasonality and physico-chemical parameters
Fig 8: Relation between measured and estimated (from SDA226) NO3 concentrations for 580 samples
Fig 9: Relation between measured and estimated (by SDA*295) DOC concentrations for 580 samples
Figure 10: Spectra of freshwater samples corrected from nitrate absorbance. Nitrate, DOC and
The peak around 295nm for the second derivative spectra reveals the existence of an inflexion point at the right part of the slight shoulder of the absorbance spectrum, between 250 and 300 nm. This observation can be connected with the use of the spectral slope between 265 and 305 nm(Galgani et al., 2011) to study the impact of photodegradation and mixing processes on the optical properties of dissolved organic matter (DOM) in the complement of fluorescence in two Argentine lakes. Fichot and Benner (2012) also used the spectral slope between 275 and 295 nm for CDOM characterisation and its use as tracers of the percent terrigenous DOC in river-influenced ocean margins.Helms et al. (2008) propose to consider two distinct spectral slope regions (275-295 nm and 350-400 nm) within log-transformed absorption spectra in order to compare DOM from contrasting water types. The use of the logtransformed spectra was recently proposed byRoccaro et al. (2015) for raw and treated drinking water and the spectral slopes between 280 and 350nm were shown to be correlated to the reactivity of DOM and the formation of potential disinfection by-products. Finally a very recent study(Hansen et al., 2016) based on the use of DOM optical properties for the discrimination of DOM sources and processing (biodegradation, photodegradation), focused on the complexity of DOM nature made-up of a mixture of sources with variable degrees of microbial and photolytic processing and on the need for further studies on optical properties of DOM. Thus, despite the high number of samples considered for this work and the contrasted hydrological conditions covered, the relevance of DOM nature as representing all types of DOM existing in freshwaters is not ensured. The transposition of the method, at least for DOC assessment, supposes to verify the existence of the second derivative peak at 290-300 nm and the quality of the relation between the SDA value at 295 (after nitrate correction), and the DOC content.
- The method validation was carried out for around 580 freshwater samples representing different hydrological conditions in two agricultural watersheds.
Acknowledgement
The authors wish to thank the Association Nationale de la Recherche et de la Technologie (ANRT), and Coop de France Ouest for their funding during the PhD of Jean Causse (PhD grant and data collection), the Agence de l'Eau Loire-Bretagne and the Conseil Régional de Bretagne for their financial support (project C&N transfert).
Etheridge, J.R., Birgand, F., Osborne, J.A., Osburn, C.L., Burchell II, M.R., Irving, J., 2014. Using in situ ultraviolet-visual spectroscopy to measure nitrogen, carbon, phosphorus, and suspended solids concentrations at a high frequency in a brackish tidal marsh. Limnol. Oceanogr. Methods 12, 10-22.
Fichot, C.G., Benner, R., 2012. The spectral slope coefficient of chromophoric dissolved organic matter (S275-295) as a tracer of terrigenous dissolved organic carbon in river-influenced ocean margins. Limnol. Oceanogr. 57, 1453-1466.
Galgani, L., Tognazzi, A., Rossi, C., Ricci, M., Angel Galvez, J., Dattilo, A.M., Cozar, A., Bracchini, L., Loiselle, S.A., 2011. Assessing the optical changes in dissolved organic
"774902",
"759195"
] | [
"301986",
"182194",
"10127",
"301986",
"182194",
"226143",
"10127"
] |
01765751 | en | [
"sdv"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01765751/file/Ferrarini2018.pdf | M G Ferrarini
S G Mucha
D Parrot
G Meiffren
J F R Bachega
G Comt
A Zaha
email: [email protected]
M F Sagot
Hydrogen peroxide production and myo-inositol metabolism as important traits for virulence of Mycoplasma hyopneumoniae
Keywords:
Mycoplasma hyopneumoniae is the causative agent of enzootic pneumonia. In our previous work, we reconstructed the metabolic models of this species along with two other mycoplasmas from the respiratory tract of swine: Mycoplasma hyorhinis, considered less pathogenic but which nonetheless causes disease and Mycoplasma flocculare, a commensal bacterium. We identified metabolic differences that partially explained their different levels of pathogenicity. One important trait was the production of hydrogen peroxide from the glycerol metabolism only in the pathogenic species. Another important feature was a pathway for the metabolism of myo-inositol in M. hyopneumoniae. Here, we tested these traits to understand their relation to the different levels of pathogenicity, comparing not only the species but also pathogenic and attenuated strains of M. hyopneumoniae. Regarding the myo-inositol metabolism, we show that only M. hyopneumoniae assimilated this carbohydrate and remained viable when myo-inositol was the primary energy source. Strikingly, only the two pathogenic strains of M. hyopneumoniae produced hydrogen peroxide in complex medium. We also show that this production was dependent on the presence of glycerol. Although further functional tests are needed, we present in this work two interesting metabolic traits of M. hyopneumoniae that might be directly related to its enhanced virulence.
Contents
Introduction
The notion that the lungs are sterile is frequently stated in textbooks; however, no modern studies have provided evidence for the absence of microorganisms in this environment [START_REF] Dickson | The lung microbiome: New principles for respiratory bacteriology in health and disease[END_REF]. Several bacteria colonize the respiratory tract of swine.
Mycoplasma hyopneumoniae, Mycoplasma flocculare, and Mycoplasma hyorhinis are some of the most important species identified so far [START_REF] Mare | New species: Mycoplasma hyopneumoniae; a causative agent of virus pig pneumonia[END_REF][START_REF] Meyling | Serological identification of a new porcine Mycoplasma species, Mycoplasma flocculare[END_REF][START_REF] Rose | Taxonomy of some swine Mycoplasmas: Mycoplasma suipneumoniae goodwin et al. 1965, a later, objective synonym of Mycoplasma hyopneumoniae mare and switzer 1965, and the status of Mycoplasma flocculare meyling and friis 1972[END_REF][START_REF] Siqueira | Microbiome overview in swine lungs[END_REF]. M. hyopneumoniae is widespread in pig populations and is the causative agent of enzootic pneumonia [START_REF] Maes | Enzootic pneumonia in pigs[END_REF]; M. hyorhinis, although not as pathogenic as M. hyopneumoniae, has already been found as the sole causative agent of pneumonia, polyserositis and arthritis in pigs [START_REF] Kobisch | Swine mycoplasmoses[END_REF][START_REF] Davenport | Polyserositis in pigs caused by infection with Mycoplasma[END_REF][START_REF] Whittlestone | Porcine mycoplasmas[END_REF][START_REF] Thacker | Mycoplasmosis[END_REF]. M. flocculare, on the other hand, has high prevalence in swine herds worldwide, but up to now, is still considered a commensal bacterium [START_REF] Kobisch | Swine mycoplasmoses[END_REF].
Because of the genomic resemblance of these three Mycoplasma species [START_REF] Stemke | Phylogenetic relationships of three porcine mycoplasmas, Mycoplasma hyopneumoniae, Mycoplasma flocculare, and Mycoplasma hyorhinis, and complete 16S rRNA sequence of M. flocculare[END_REF][START_REF] Siqueira | New insights on the biology of swine respiratory tract mycoplasmas from a comparative genome analysis[END_REF], it remains unclear why M. hyopneumoniae can become highly virulent if compared with the other two. It is also essential to understand that the simple presence or absence of each species is not in itself a determinant factor in the development of enzootic pneumonia: most piglets are thought to be vertically infected with M. hyopneumoniae at birth [START_REF] Maes | Enzootic pneumonia in pigs[END_REF][START_REF] Fano | Assessment of the effect of sow parity on the prevalence of Mycoplasma hyopneumoniae in piglets at weaning[END_REF][START_REF] Sibila | Current perspectives on the diagnosis and epidemiology of Mycoplasma hyopneumoniae infection[END_REF] and many can become carriers of the pathogen throughout their entire life without developing acute pneumonia. Moreover, M. hyopneumoniae also persists longer in the respiratory tract, either in healthy animals or even after successful treatment of the disease [START_REF] Thacker | Interaction between Mycoplasma hyopneumoniae and swine influenza virus[END_REF][START_REF] Ruiz | Mycoplasma hyopneumoniae colonization of pigs sired by different boars[END_REF][START_REF] Fano | Dynamics and persistence of Mycoplasma hyopneumoniae infection in pigs[END_REF][START_REF] Overesch | Persistence of Mycoplasma hyopneumoniae sequence types in spite of a control program for enzootic pneumonia in pigs[END_REF].
To make it even more complex, different strains of each species bear different levels (or even lack) of pathogenicity. For instance, M. hyopneumoniae has six sequenced strains, two of which are known to be attenuated by culture passages [START_REF] Zielinski | Effect of growth in cell cultures and strain on virulence of Mycoplasma hyopneumoniae for swine[END_REF][START_REF] Liu | Comparative genomic analyses of Mycoplasma hyopneumoniae pathogenic 168 strain and its high-passaged attenuated strain[END_REF]. These strains cannot cause the clinical symptoms of pneumonia in vivo and up to now it is not clear why.
In contrast to other pathogenic bacteria, and as revealed by the analysis of the sequenced genomes from several mycoplasmas [START_REF] Himmelreich | Complete sequence analysis of the genome of the bacterium Mycoplasma pneumoniae[END_REF][START_REF] Chambaud | The complete genome sequence of the murine respiratory pathogen Mycoplasma pulmonis[END_REF][START_REF] Minion | The genome sequence of Mycoplasma hyopneumoniae strain 232, the agent of swine mycoplasmosis[END_REF][START_REF] Vasconcelos | Swine and poultry pathogens: the complete genome sequences of two[END_REF][START_REF] Siqueira | New insights on the biology of swine respiratory tract mycoplasmas from a comparative genome analysis[END_REF], pathogenic Mycoplasma species seem to lack typical primary virulence factors such as toxins, invasins, and cytolysins [START_REF] Pilo | A metabolic enzyme as a primary virulence factor of Mycoplasma mycoides subsp. mycoides small colony[END_REF][START_REF] Maes | Update on Mycoplasma hyopneumoniae infections in pigs: Knowledge gaps for improved disease control[END_REF]. For this reason, classical concepts of virulence genes are usually problematic and a broader concept for virulence is used for these species. In this way, a virulence gene in mycoplasmas is described as any non essential gene for in vitro conventional growth, which is essential for the optimal survival (colonization, persistence or pathology) inside the host [START_REF] Browning | Identification and characterization of virulence genes in mycoplasmas[END_REF].
There have been many different types of virulence factors described so far in several Mycoplasma species, most of them related to adhesion [START_REF] Razin | Mycoplasma adhesion[END_REF], invasion [START_REF] Burki | Virulence, persistence and dissemination of Mycoplasma bovis[END_REF], cytotoxicity [START_REF] Vilei | Genetic and biochemical characterization of glycerol uptake in Mycoplasma mycoides subsp. mycoides SC: its impact on H(2)O(2) production and virulence[END_REF][START_REF] Hames | Glycerol metabolism is important for cytotoxicity of Mycoplasma pneumoniae[END_REF], host-evasion [START_REF] Simmons | How Some Mycoplasmas evade host immune responses[END_REF] and host-immunomodulation [START_REF] Katz | Comparison of mitogens from Mycoplasma pulmonis and Mycoplasma neurolyticum[END_REF][START_REF] Waites | Mycoplasma pneumoniae and its role as a human pathogen[END_REF].
As for M. hyopneumoniae and M. hyorhinis, adhesion factors such as antigen surface proteins and the ability of these organisms to produce a capsular polysaccharide have already been described in the literature [START_REF] Whittlestone | Porcine mycoplasmas[END_REF][START_REF] Tajima | Interaction of Mycoplasma hyopneumoniae with the porcine respiratory epithelium as observed by electron microscopy[END_REF][START_REF] Citti | Elongated versions of Vlp surface lipoproteins protect Mycoplasma hyorhinis escape variants from growth-inhibiting host antibodies[END_REF][START_REF] Djordjevic | Proteolytic processing of the Mycoplasma hyopneumoniae cilium adhesin[END_REF][START_REF] Seymour | Mhp182 (P102) binds fibronectin and contributes to the recruitment of plasmin(ogen) to the Mycoplasma hyopneumoniae cell surface[END_REF]. However, while the diseases caused by these swine mycoplasmas have been extensively studied, only recently has their metabolism been explored from a mathematical and computational point of view, by our group [START_REF] Ferrarini | Insights on the virulence of swine respiratory tract mycoplasmas through genome-scale metabolic modeling[END_REF]. We are well aware that metabolism does not fully explain the pathologies caused by either of them. However, adhesion proteins, classically related to virulence in mycoplasmas, cannot account for the different levels of pathogenicity between M. hyopneumoniae and M. flocculare. Both species harbor similar sets of adhesion proteins [START_REF] Siqueira | Unravelling the transcriptome profile of the swine respiratory tract mycoplasmas[END_REF] and have been shown to adhere to cilia in a similar way [START_REF] Young | A tissue culture system to study respiratory ciliary epithelial adherence of selected swine mycoplasmas[END_REF]. Thus, it remains unclear what prevents M. flocculare from causing disease in this context.
In our previous work [START_REF] Ferrarini | Insights on the virulence of swine respiratory tract mycoplasmas through genome-scale metabolic modeling[END_REF], we compared the reconstructed metabolic models of these three Mycoplasma species, and pointed out important metabolic differences that could partly explain the different levels of pathogenicity between the three species. The most important trait was related to the glycerol metabolism, more specifically the turnover of glycerol-3-phosphate into dihydroxyacetone-phosphate (DHAP) by the action of glycerol-3-phosphate oxidase (GlpO, EC 1.1.3.21), which was only present in the genomes of M. hyorhinis and M. hyopneumoniae. This would allow the usage of glycerol as a primary energy source, with the production of highly toxic hydrogen peroxide in the presence of molecular oxygen. The metabolism of glycerol and the subsequent production of hydrogen peroxide by the action of GlpO are essential for the cytotoxicity of lung pathogens Mycoplasma pneumoniae [START_REF] Hames | Glycerol metabolism is important for cytotoxicity of Mycoplasma pneumoniae[END_REF] and Mycoplasma mycoides subsp. mycoides [START_REF] Vilei | Genetic and biochemical characterization of glycerol uptake in Mycoplasma mycoides subsp. mycoides SC: its impact on H(2)O(2) production and virulence[END_REF]. Moreover, the Mycoplasma hominis group is not the only one where hydrogen peroxide production via glpO has been reported. In some Spiroplasma species (specifically Spiroplasma taiwanense) and within the pneumoniae group (for instance in Mycoplasma penetrans), the presence of this enzyme was also associated with virulence [START_REF] Kannan | Hemolytic and hemoxidative activities in Mycoplasma penetrans[END_REF][START_REF] Lo | Comparison of metabolic capacities and inference of gene content evolution in mosquitoassociated Spiroplasma diminutum and S. taiwanense[END_REF].
Another major difference between our previous models was related to the presence of a complete transcriptional unit (TU) encoding proteins for the uptake and metabolism of myo-inositol in M. hyopneumoniae (with the exception of one enzyme). This could be another important trait underlying the enhanced virulence of this species compared with the other two. Here, we studied this pathway in more detail, in order to search for the missing enzyme and for possible reasons why natural selection retained these genes only in this Mycoplasma species.
In a recent review, Maes and collaborators [START_REF] Maes | Update on Mycoplasma hyopneumoniae infections in pigs: Knowledge gaps for improved disease control[END_REF] emphasize the need for further investigation of the role of glycerol and myo-inositol metabolism and their contribution to virulence in M. hyopneumoniae. Here, we experimentally tested these two traits to show how they might be related to the different levels of pathogenicity, by comparing not only the species themselves but also different strains of M. hyopneumoniae. Contrary to what we anticipated, only the two pathogenic strains of M. hyopneumoniae were able to produce hydrogen peroxide in complex medium, and we confirmed that this production was dependent on the presence of glycerol. The myo-inositol metabolism, in turn, was tested with the aid of deuterated myo-inositol in Friis medium. We were able to detect by mass spectrometry (MS) a slight decrease in the marked myo-inositol concentration over time, indicating the ability of M. hyopneumoniae to uptake this carbohydrate. We also show here that only the M. hyopneumoniae strains remained viable when myo-inositol was the primary energy source.
We present here two metabolic traits specific to M. hyopneumoniae that might be directly related to its enhanced virulence, especially to its ability to successfully outgrow the other two Mycoplasma species in the respiratory tract of swine, persist longer in this environment and possibly cause disease.
Results
Comparative genomics of glpO from glycerol metabolism
Highly conserved homolog genes to glpO from M. mycoides subsp. mycoides (EC 1.1.3.21) were found only in the genomes of M. hyopneumoniae and M. hyorhinis. Despite the annotation as a dehydrogenase in both M. hyopneumoniae and M. hyorhinis, we propose this enzyme to act as glycerol-3-phosphate oxidase (GlpO), using molecular oxygen as the final electron acceptor and producing DHAP and hydrogen peroxide. We therefore refer to the encoded protein in M. hyopneumoniae and M. hyorhinis as GlpO, rather than GlpD. The high similarity between these predicted proteins (Supplementary Figure S1A) may be an indication that this trait might be essential for the pathogenicity of these Mycoplasma species.
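For reference, the overall reaction expected for a glycerol-3-phosphate oxidase (EC 1.1.3.21) can be summarised as:

sn-glycerol 3-phosphate + O2 → dihydroxyacetone phosphate (DHAP) + H2O2

in contrast to GlpD-type glycerol-3-phosphate dehydrogenases (EC 1.1.5.3), which transfer the electrons to the quinone pool instead of releasing hydrogen peroxide.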
In particular, the cytotoxicity of M. mycoides subsp. mycoides is considered to be related to the translocation of hydrogen peroxide into the host cells [START_REF] Bischof | Cytotoxicity of Mycoplasma mycoides subsp. mycoides small colony type to bovine epithelial cells[END_REF]. This is presumably possible because of the close proximity to the host cells combined with the integral membrane location of GlpO [START_REF] Pilo | A metabolic enzyme as a primary virulence factor of Mycoplasma mycoides subsp. mycoides small colony[END_REF][START_REF] Pilo | Molecular mechanisms of pathogenicity of Mycoplasma mycoides subsp. mycoides SC[END_REF]. Different transmembrane prediction programs [START_REF] Hofmann | TMbase -A database of membrane spanning proteins segments[END_REF][START_REF] Krogh | Predicting transmembrane protein topology with a hidden Markov model: application to complete genomes[END_REF][START_REF] Combet | NPS@: network protein sequence analysis[END_REF][START_REF] Kahsay | An improved hidden Markov model for transmembrane protein detection and topology prediction and its applications to complete genomes[END_REF] identified putative transmembrane portions in the GlpO proteins from M. hyopneumoniae and M. hyorhinis (Supplementary Figure S1B). Similar results were reported for the homolog enzyme in M. mycoides subsp. mycoides [START_REF] Pilo | A metabolic enzyme as a primary virulence factor of Mycoplasma mycoides subsp. mycoides small colony[END_REF], and a recent proteomic study detected GlpO from M. hyopneumoniae in surface-enriched extracts through LC-MS/MS (Personal communication from H. B. Ferreira, [START_REF] Machado | Comparative surface proteomic approach reveals qualitative and quantitative differences of two Mycoplasma hyopneumoniae strains and Mycoplasma flocculare[END_REF]).
Pathogenic M. hyopneumoniae strains produce hydrogen peroxide from glycerol
Contrary to what we had anticipated, we were only able to detect the production of hydrogen peroxide from the two pathogenic strains of M. hyopneumoniae (7448 and 7422) in Friis medium, as can be seen in Figure 1A. The attenuated strain from the same species (M. hyopneumoniae strain J), along with M. hyorhinis and M. flocculare did not produce detectable quantities of this toxic product. In order to verify if the amount of hydrogen peroxide produced was comparable between strains, we also counted the number of cells for each replicate. In this way, the two pathogenic strains produced approximately the same amount of hydrogen peroxide and had cell counts of the same order of magnitude (available in Supplementary Table S1).
We also show (Figure 1B) that the hydrogen peroxide produced by the M. hyopneumoniae strains 7448 and 7422 was dependent on the presence of glycerol in the incubation buffer.
Levels of glpO transcripts do not differ from pathogenic to attenuated strains of M. hyopneumoniae
We tested the three M. hyopneumoniae strains (7448, 7422 and J) in order to compare the mRNA expression levels of glpO gene by RT-qPCR. Since the transcript levels of normalizer genes were not comparable between strains, we used relative quantification normalized against unit mass; in our case, the initial amount of RNA. We chose one of the replicates from strain 7448 as the calibrator, and we were able to show (Figure 2 and Supplementary Table S2) that there was no significant difference in the transcript levels of glpO in all tested strains from M. hyopneumoniae.
Enzymes for the uptake and catabolism of myo-inositol are specific to M. hyopneumoniae strains
M. hyopneumoniae is the only reported species among the Mollicutes that contains genes involved in the catabolism of myo-inositol. Since Mycoplasma species seem to maintain a minimum set of essential metabolic capabilities, we decided to further investigate this pathway and the influence of its presence on the metabolism and pathogenicity of M. hyopneumoniae. The degradation of inositol can feed glycolysis with DHAP and also produces an acetyl coenzyme-A (AcCoA) (Figure 3). A TU for the myo-inositol catabolism is present in all M. hyopneumoniae strains, with the exception of the gene that codes for the enzyme 6-phospho-5-dehydro-2-deoxy-D-gluconate aldolase (IolJ, EC 4.1.2.29), responsible for the turnover of 6-phospho-5-dehydro-2-deoxy-D-gluconate (DKGP) into malonate semialdehyde (MSA).
The gene encoding IolJ in other organisms is similar to the one coding for the glycolytic enzyme fructose-bisphosphate aldolase (Fba, EC 4.1.2.13).
There are two annotated copies of the gene fba in M. hyopneumoniae (fba and fba-1, Supplementary Table S3). We performed homology and gene context analyses, 3D comparative modeling and protein-ligand interaction analysis to check if either of them would be a suitable candidate for this activity.
The gene context and protein sequence alignment for 15 selected Fba homologs in Mollicutes can be seen in Supplementary Figures S2 and S3. Comparative models for both copies of Fba from M. hyopneumoniae and the previously characterized IolJ and Fba from Bacillus subtilis [START_REF] Yoshida | myo-Inositol catabolism in Bacillus subtilis[END_REF] were constructed based on available structures of Fba in PDB [START_REF] Berman | The Protein Data Bank[END_REF] (Figure 4 and Supplementary Table S4).
Fba structures from Escherichia coli and Giardia intestinalis were used to gather more information about substrate binding (Supplementary Figure S3). The alignment shows a highly conserved zinc binding site (residues marked as '*'), essential for substrate binding and catalysis. Positions 'a', 'b', 'c', 'd' and 'e' surround the substrate cavity. The structural analysis suggests that the interaction mode of DKGP (substrate of IolJ) with the zinc ion of the active site is similar to that observed for FBP (fructose-1,6-bisphosphate, substrate of Fba).
Nevertheless, the substrate specificity is strongly dependent on the residues that form the substrate cavity. While there seem to be several features common to Fba and IolJ (residues 'c', 'd', 'e' and '*'), residue 'a' appears to be essential for substrate interaction in IolJ. This position is generally occupied by an arginine (R52) in several putative IolJs from other organisms (Supplementary Figure S4), and is absent in all predicted Fbas analysed in this study. From the predicted structures, the presence of this positively charged arginine in IolJ seems to disfavour the interaction with the phosphate group of FBP, whilst it is complementary to the carboxyl group of DKGP.
In this way, the predicted structure of Fba-1 from M. hyopneumoniae more closely resembles the experimentally solved Fba structures from B. subtilis, E. coli and G. intestinalis. The annotated Fba from M. hyopneumoniae, on the other hand, seems to be more similar to the IolJ structure from B. subtilis. Although functional studies are needed to test this hypothesis, we propose that all enzymes needed for the myo-inositol catabolism are present in M. hyopneumoniae.
M. hyopneumoniae is able to uptake myo-inositol from the culture medium
In order to ascertain the ability of different bacteria to uptake myo-inositol, we used two different approaches. The first was the use of marked myo-inositol in complex medium and analysis by MS, and the second was to check the viability of cells (through ATP production) whenever myo-inositol was used as primary energy source.
When we tested whether the cells were able to uptake the marked myo-inositol over the course of 48 h, we found no significant difference for M. flocculare and M. hyorhinis compared to the control medium (CTRL), as observed in Figure 5A. As expected, the concentrations of myo-inositol for both strains of M. hyopneumoniae after 48 h of growth were lower than in the control medium. We also collected two extra time points for M. hyopneumoniae strain 7448 and CTRL: 8 h and 24 h of growth (Figure 5B). At all time points, there was a significant difference between the residual marked myo-inositol and the control medium, which implies that M. hyopneumoniae is able to uptake this carbohydrate from the medium. MS peak data are available in Supplementary Table S5.
Since glucose and glycerol were present in the complex medium analysed by MS, we also wanted to check whether the viability of the different strains and species changed when myo-inositol was the primary energy source. For this, we incubated cells in myo-inositol defined medium (depleted of glucose and glycerol) for 8 hours and measured the amount of ATP these cells were able to produce, which is directly related to the number of viable cells after cultivation in the medium tested. Because we do not know the energetic yield and efficiency of each strain and species, we could not directly compare the amount of ATP produced between different organisms. For this reason, growth in regular defined medium (with glucose) for each strain was used as a normalization control, and the ratio of ATP production in the two media was used to compare the viable cells between strains. Since there was no other energy source available in the medium, and in accordance with our previous predictions and results, only M. hyopneumoniae cells remained viable (ranging from 75% to 280%) when compared to their control growth in the regular defined medium (Figure 6 and Supplementary Table S6). The viability of the other species in this medium was 11.5% for M. hyorhinis and 0.2% for M. flocculare. We also obtained similar results when comparing growth in myo-inositol defined medium versus Friis medium (Supplementary Figure S5).
Discussion
In this study, we wanted to find possible differences between pathogenic and attenuated strains of M. hyopneumoniae and also compare them with M. hyorhinis and M. flocculare and assess possible links to the enhanced virulence of M. hyopneumoniae. While M. hyopneumoniae strains 7422 and 7448 are considered pathogenic, strain J became attenuated after serial passages of in vitro culture; M. hyorhinis strain ATCC 17981 was isolated from swine but, to our knowledge, its level of pathogenicity has not been tested in vivo; and even though M. flocculare is not considered pathogenic, strain ATCC 27399 was isolated from a case of swine pneumonia (strain ATCC 27716 is derived from this strain). In our previous study [START_REF] Ferrarini | Insights on the virulence of swine respiratory tract mycoplasmas through genome-scale metabolic modeling[END_REF], through mathematical modeling, we predicted two traits of M. hyopneumoniae in silico that could be associated with its enhanced virulence: the myo-inositol catabolism and the link between the glycerol and the glycolysis metabolism, with the production of highly toxic hydrogen peroxide (by the activity of the GlpO enzyme). In this work, we tested whether these species indeed differed from each other regarding their ability (i) to produce hydrogen peroxide in vitro and whether this was related to the availability of glycerol, (ii) to uptake myo-inositol, and (iii) to remain viable in a defined medium with myo-inositol as the primary energy source. While the uptake of myo-inositol might be a general feature of M. hyopneumoniae, the production of hydrogen peroxide in complex medium seems to be specific to pathogenic strains of this species.
Glycerol metabolism and hydrogen peroxide production
Even though the GlpO enzyme was previously detected in proteomes from both pathogenic and attenuated strains of M. hyopneumoniae (232 and J) [START_REF] Pinto | Comparative proteomic analysis of pathogenic and non-pathogenic strains from the swine pathogen Mycoplasma hyopneumoniae[END_REF][START_REF] Pendarvis | Proteogenomic mapping of Mycoplasma hyopneumoniae virulent strain 232[END_REF], only the pathogenic strains tested in our study (7448 and 7422) were able to produce detectable amounts of hydrogen peroxide in Friis medium (Figure 1). To our knowledge, no other study up to now was able to show that M. hyopneumoniae strains were able to produce this toxic product in vitro [START_REF] Maes | Update on Mycoplasma hyopneumoniae infections in pigs: Knowledge gaps for improved disease control[END_REF]. We also show here that the production of hydrogen peroxide in the pathogenic strains of M. hyopneumoniae is dependent on the presence of glycerol (Figure 1B).
The metabolism of glycerol and the formation of hydrogen peroxide were described as essential for the cytotoxicity of lung pathogens M. mycoides subsp. mycoides [START_REF] Vilei | Genetic and biochemical characterization of glycerol uptake in Mycoplasma mycoides subsp. mycoides SC: its impact on H(2)O(2) production and virulence[END_REF] and M. pneumoniae [START_REF] Hames | Glycerol metabolism is important for cytotoxicity of Mycoplasma pneumoniae[END_REF]. Moreover, although both M. hyopneumoniae and M. flocculare can adhere to the cilia of tracheal epithelial cells in a similar way, only the adhesion of M. hyopneumoniae causes tissue damage [START_REF] Young | A tissue culture system to study respiratory ciliary epithelial adherence of selected swine mycoplasmas[END_REF].
We showed that the difference in enzyme activity was not related to the expression levels of the glpO gene in the strains tested (Figure 2). We did not find any major differences in their amino acid sequences either (Supplementary Figure S3). This could indicate that this enzyme undergoes post-translational modifications in order to be active and/or that the intracellular availability of the substrate (glycerol) is a limiting step for its activity. Post-translational modifications have been extensively reported experimentally for several proteins of M. hyopneumoniae [START_REF] Djordjevic | Proteolytic processing of the Mycoplasma hyopneumoniae cilium adhesin[END_REF][START_REF] Burnett | P159 is a proteolytically processed, surface adhesin of Mycoplasma hyopneumoniae: defined domains of P159 bind heparin and promote adherence to eukaryote cells[END_REF][START_REF] Pinto | Proteomic survey of the pathogenic Mycoplasma hyopneumoniae strain 7448 and identification of novel posttranslationally modified and antigenic proteins[END_REF][START_REF] Seymour | A processed multidomain Mycoplasma hyopneumoniae adhesin binds fibronectin, plasminogen, and swine respiratory cilia[END_REF][START_REF] Tacchi | Post-translational processing targets functionally diverse proteins in Mycoplasma hyopneumoniae[END_REF]. From transcriptomic and proteomic literature data, we were not able to find any enlightening differences in this pathway between strains or species (Supplementary Table S7).
As for the availability of intracellular glycerol, our previous metabolic models predicted differences in glycerol metabolism among the three Mycoplasma species (Supplementary Figure S6). While M. hyopneumoniae has five different routes for acquiring glycerol (dehydrogenation of glyceraldehyde, ABC transport of glycerol and glycerol-phosphate, and import of glycerophosphoglycerol and glycerophosphocholine), the other two species lack at least two of these reactions.
This might also limit the rate of production of hydrogen peroxide in each species.
In this way, the enhanced pathogenicity of M. hyopneumoniae over M. hyorhinis and M. flocculare may therefore also be due to hydrogen peroxide formation resulting from a higher uptake of glycerol as an energy source. Similarly, one reason that could partially explain why M. mycoides subsp. mycoides is highly pathogenic in comparison with the less pathogenic M. pneumoniae might be the greater intracellular availability of glycerol due to the presence of a specific and very efficient ABC transporter in M. mycoides subsp. mycoides.
Since the production of hydrogen peroxide was not reported as essential to the in vivo virulence of Mycoplasma gallisepticum [START_REF] Szczepanek | Hydrogen peroxide production from glycerol metabolism is dispensable for virulence of Mycoplasma gallisepticum in the tracheas of chickens[END_REF], more studies are needed to better understand the importance of this metabolism in M. hyopneumoniae. Moreover, future biochemical and functional studies are needed to prove that GlpO is indeed responsible for the activity proposed here and to check if the enzyme in attenuated strains/species is functional.
Myo-inositol uptake and catabolism
M. hyopneumoniae is the only Mycoplasma species with a sequenced genome that has the genes for the catabolism of myo-inositol. Myo-inositol is an essential precursor for the production of inositol phosphates and inositol phospholipids in all eukaryotes [START_REF] Gonzalez-Salgado | Myo-Inositol uptake is essential for bulk inositol phospholipid but not glycosylphosphatidylinositol synthesis in Trypanosoma brucei[END_REF]. Myo-inositol is also widespread in the bloodstream of mammals [START_REF] Reynolds | Strategies for acquiring the phospholipid metabolite inositol in pathogenic bacteria, fungi and protozoa: making it and taking it[END_REF], which would make it a suitable energy source for bacteria in the extremely vascularized respiratory system. Previously, Mycoplasma iguanae was described to produce acid from inositol [START_REF] Brown | Mycoplasma iguanae sp. nov., from a green iguana (Iguana iguana) with vertebral disease[END_REF], but the methods used in that study are not clear and no complete genome is available for this organism, which prevents us from drawing any conclusions. Based on sequence homology, orthology, synteny and tridimensional analyses, we proposed a possible candidate for the missing enzyme IolJ in M. hyopneumoniae, namely a duplication of the fba gene from glycolysis. This functional divergence after duplication is particularly interesting in bacteria whose evolution was mostly driven by genome reduction. Another reported example of this event is the duplication of the trmFO gene in Mycoplasma capricolum and, more recently, in Mycoplasma bovis. The duplicated TrmFO in M. capricolum was reported to catalyze the methylation of 23S rRNA [START_REF] Lartigue | The flavoprotein Mcap0476 (RlmFO) catalyzes m5U1939 modification in Mycoplasma capricolum 23S rRNA[END_REF] while the duplicated copy in M. bovis has been described to act as a fibronectin-binding adhesin [START_REF] Guo | TrmFO, a Fibronectin-Binding Adhesin of Mycoplasma bovis[END_REF].
We showed here that M. hyopneumoniae was able to uptake marked myoinositol from a complex culture medium (Figure 5); in addition this was the only species that remained viable whenever myo-inositol was used as the primary energy source (Figure 6). From our metabolic model predictions [START_REF] Ferrarini | Insights on the virulence of swine respiratory tract mycoplasmas through genome-scale metabolic modeling[END_REF], the use of myo-inositol would be much more costly than the uptake and metabolism of glucose, which corroborates the small uptake of myo-inositol in Friis medium (glucose-rich) (Figure 5). This basal uptake of myo-inositol could also be an indication that this pathway is important not only for energetic yield. Supporting this idea, microarray studies on strain 232 showed that several genes (if not all) from the myo-inositol catabolism were differentially expressed during stress treatments: heat shock (downregulated) [START_REF] Madsen | Transcriptional profiling of Mycoplasma hyopneumoniae during heat shock using microarrays[END_REF], iron depletion (upregulated) [START_REF] Madsen | Transcriptional profiling of Mycoplasma hyopneumoniae during iron depletion using microarrays[END_REF], and norepinephrine (downregulated) [START_REF] Oneal | Global transcriptional analysis of Mycoplasma hyopneumoniae following exposure to norepinephrine[END_REF]. Moreover, a previous transcriptome profiling of M. hyopneumoniae [START_REF] Siqueira | Unravelling the transcriptome profile of the swine respiratory tract mycoplasmas[END_REF] showed that all genes from the myo-inositol catabolism were transcribed under normal culture conditions. Furthermore, three genes from the pathway (iolB, iolC and iolA) belonged to the list of the 20 genes with the highest number of transcript reads. Besides the transcription of these genes, proteomic studies of M. hyopneumoniae strains 232 [START_REF] Pendarvis | Proteogenomic mapping of Mycoplasma hyopneumoniae virulent strain 232[END_REF], 7422, 7448 and J [START_REF] Pinto | Comparative proteomic analysis of pathogenic and non-pathogenic strains from the swine pathogen Mycoplasma hyopneumoniae[END_REF][START_REF] Reolon | Survey of surface proteins from the pathogenic Mycoplasma hyopneumoniae strain 7448 using a biotin cell surface labeling approach[END_REF] (Supplementary Table S7) showed that several enzymes from this pathway were present in normal culture conditions. Indeed, myo-inositol has been extensively reported in several organisms as a signaling molecule [START_REF] Downes | Myo-inositol metabolites as cellular signals[END_REF][START_REF] Gillaspy | The cellular language of myo-inositol signaling[END_REF]. Moreover, the myo-inositol catabolism has been experimentally described as a key pathway for competitive host nodulation in the plant symbiont and nitrogen-fixing bacterium Sinorhizobium meliloti [START_REF] Kohler | Inositol catabolism, a key pathway in Sinorhizobium meliloti for competitive host nodulation[END_REF]. Host nodulation is a specific symbiotic event between a host plant and a bacterium. Kohler and collaborators (2010) showed that whenever inositol catabolism is disrupted (by single gene knockouts from the inositol operon), the mutants are outcompeted by the wild type for nodule occupancy. This means that genes for the catabolism of inositol are required for a successful competition in this particular symbiosis. Moreover, the authors were not able to find a suitable candidate for the IolJ activity. 
In our case, we proposed that the activity of the missing enzyme IolJ is taken over by a duplication of fba. We were able to find a similar duplication (also not located inside the myo-inositol cluster) in the genome of S. meliloti 1021 (SM_b21192 and SM_b20199, both annotated as fructose-bisphosphate aldolase, EC 4.1.2.13). This means that in at least one other symbiont carrying the myo-inositol catabolism genes, a putative IolJ may exist outside the myo-inositol cluster, just as we propose here.
Whether this entire pathway is functional in M. hyopneumoniae is yet to be tested and further experiments should take place to support this hypothesis. However, the ability of M. hyopneumoniae to persist longer in the swine lung if compared to the other two mycoplasmas might come from the fact that this species is able to uptake and process myo-inositol. Furthermore, the ability of M. hyopneumoniae to grow in diverse sites [START_REF] Carrou | Persistence of Mycoplasma hyopneumoniae in experimentally infected pigs after marbofloxacin treatment and detection of mutations in the parC gene[END_REF] if compared to M. flocculare might also be due to this specific trait.
Concluding remarks
It is important to remember that even though M. hyopneumoniae is considered highly pathogenic, the three Mycoplasma species studied here are widespread in pig populations and can easily be found in healthy hosts [START_REF] Fano | Dynamics and persistence of Mycoplasma hyopneumoniae infection in pigs[END_REF][START_REF] Pieters | An experimental model to evaluate Mycoplasma hyopneumoniae transmission from asymptomatic carriers to unvaccinated and vaccinated sentinel pigs[END_REF]. However, the main question permeating this fact is: what causes the switch from a non-pathogenic Mycoplasma community to a pathogenic one? And what makes some strains pathogenic while others inflict no harm on the host cells? Some strains of M. hyopneumoniae become less pathogenic in broth culture and, after serial passages, they lose their ability to produce gross pneumonia in pigs [START_REF] Whittlestone | Porcine mycoplasmas[END_REF]. In a proteomic study comparing strains 232 and J, researchers described that the attenuated strain J switches its focus to metabolism: it has developed better capabilities to profit from the rich culture medium, while the ability to infect host cells becomes less important, so that adhesion-related genes are downregulated [START_REF] Li | Proteomic comparative analysis of pathogenic strain 232 and avirulent strain J of Mycoplasma hyopneumoniae[END_REF]. This might be related to the fact that here we detected a higher production of ATP in this attenuated strain when compared to the pathogenic strains 7448 and 7422. Liu and collaborators [START_REF] Liu | Comparative genomic analyses of Mycoplasma hyopneumoniae pathogenic 168 strain and its high-passaged attenuated strain[END_REF] investigated genetic variations between M. hyopneumoniae strain 168 and its attenuated derivative 168-L and found that almost all reported Mycoplasma adhesins were affected by mutations. Tajima and Yagihashi [START_REF] Tajima | Interaction of Mycoplasma hyopneumoniae with the porcine respiratory epithelium as observed by electron microscopy[END_REF] reported that capsular polysaccharides from M. hyopneumoniae play a key role in the interaction between pathogen and host. Indeed, in several bacterial species it has been reported that the amount of capsular polysaccharide is a major factor in their virulence [START_REF] Corbett | The role of microbial polysaccharides in hostpathogen interaction[END_REF] and that it decreases significantly with in vitro passages [START_REF] Kasper | Capsular polysaccharides and lipopolysaccharides from two Bacteroides fragilis reference strains: chemical and immunochemical characterization[END_REF]. In this way, it is likely that the difference in pathogenicity between M. hyopneumoniae strains does not depend solely on their metabolism, but also on their ability to adhere to the host.
A recent metagenomic analysis of community composition [START_REF] Siqueira | Microbiome overview in swine lungs[END_REF] described M. hyopneumoniae as by far the most prevalent species in both healthy and diseased hosts. The difficult isolation of Mycoplasma species from diseased lung extracts is due to the fact that, in culture, fast-growing bacteria outgrow the slow-growing mycoplasmas [START_REF] Mckean | Evaluation of diagnostic procedures for detection of mycoplasmal pneumonia of swine[END_REF]. This means that, in vitro, the competition for an energy source between fast- and slow-growing bacteria usually ends with an overpopulation of the fast-growing ones. Given that mycoplasmas survive for long periods inside the host even in competition with other bacteria [START_REF] Fano | Dynamics and persistence of Mycoplasma hyopneumoniae infection in pigs[END_REF][START_REF] Overesch | Persistence of Mycoplasma hyopneumoniae sequence types in spite of a control program for enzootic pneumonia in pigs[END_REF], we must assume that other factors exist that are usually not mimicked in cell culture.
While M. hyopneumoniae might cause no harm, depending mostly on the environment, the characteristics of the host, and the composition of this dynamic lung microbiome, any imbalance in this system is probably capable of turning a non-pathogenic community into a pathogenic one. The final conclusion is that the disease is a multifactorial process depending on several elements that include intra-species mechanisms, community composition, host susceptibility and environmental factors. One possibility is that the competition with fast-growing species could result in a lower carbohydrate concentration and that M. hyopneumoniae might have to overcome this environmental starvation by taking up glycerol or myo-inositol. Since the uptake of myo-inositol does not lead to the production of any toxic metabolite, it is more advantageous for persistence in the long run. Other bacteria will strongly compete for glucose and other related carbohydrates, while M. hyopneumoniae will have the entire supply of myo-inositol for itself. The uptake of glycerol as an energy source, on the other hand, will probably lead to the production of toxic hydrogen peroxide, as reported in other Mycoplasma species. This toxic product, combined with other toxins from the external bacteria in the system, would most probably recruit immune system effectors. Since M. hyopneumoniae has efficient mechanisms of host evasion [START_REF] Fano | Dynamics and persistence of Mycoplasma hyopneumoniae infection in pigs[END_REF][START_REF] Maes | Update on Mycoplasma hyopneumoniae infections in pigs: Knowledge gaps for improved disease control[END_REF], the newly introduced, fast-growing bacteria might be eliminated faster, and M. hyopneumoniae would in this way be able to persist longer than other species inside the host (as reported in vivo).
As mentioned before, virulence factors in Mycoplasma species cover a broader concept if compared to other species: they are genes not essential for in vitro conventional growth that are instead essential for optimal survival in vivo. From our M. hyopneumoniae metabolic models, neither the GlpO activity nor the uptake and metabolism of myo-inositol seem to be essential features for in vitro growth. However, we were able to show that they might be two metabolic traits important for the enhanced virulence of M. hyopneumoniae when compared to M. hyorhinis and M. flocculare and could be essential for its survival in vivo and directly affect its pathogenicity.
Experimental Procedures
Mycoplasma cultivation
We used the following strains for experimental validation: M. hyopneumoniae strains 7448, 7422 (field isolates) and J (ATCC 25934), M. hyorhinis ATCC 17981 and M. flocculare ATCC 27716. Cells were cultivated in Friis medium [START_REF] Friis | Some recommendations concerning primary isolation of Mycoplasma suipneumoniae and Mycoplasma flocculare a survey[END_REF] at 37 °C for varying periods of time with gentle agitation in a roller drum.
Hydrogen peroxide detection
Hydrogen peroxide was detected in culture medium by the Amplex® Red Hydrogen Peroxide/Peroxidase Assay Kit (Invitrogen Cat. No A22188), according to the manufacturer's manual. M. hyopneumoniae, M. hyorhinis and M. flocculare were cultivated for 48 h in modified Friis medium (with no Phenol Red) and thereafter centrifuged. The supernatant was used for the hydrogen peroxide readings compared to a standard curve (Supplementary Figure S7). The medium without bacterial growth was used as negative control. We used biological and technical triplicates to infer the average amount of hydrogen peroxide produced, and the concentration was standardized based on the average number of cells from each culture. Statistical analyses were performed using GraphPad Prism 6 software by one-way ANOVA followed by Dunnett's multiple comparison test considering M. flocculare as a control (p < 0.05).
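To illustrate how kit readings of this type are translated into hydrogen peroxide concentrations, the sketch below shows the kind of calculation involved; the fluorescence values, standard-curve points and cell count are hypothetical placeholders, and the actual analysis relied on the standard curve shown in Supplementary Figure S7.

import numpy as np

# Hypothetical Amplex Red standard curve: fluorescence readings for known H2O2 standards
std_conc_um = np.array([0.0, 1.0, 2.5, 5.0, 10.0])            # H2O2 standards (uM)
std_fluo = np.array([55.0, 310.0, 720.0, 1400.0, 2750.0])     # fluorescence (arbitrary units)

# Linear fit of the standard curve: fluorescence = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc_um, std_fluo, 1)

def h2o2_um(fluorescence):
    """Convert a blank-corrected fluorescence reading into uM H2O2 via the standard curve."""
    return (fluorescence - intercept) / slope

sample_fluo = np.array([1250.0, 1310.0, 1180.0])   # hypothetical technical replicates
cells_per_ml = 2.0e8                               # hypothetical count from flow cytometry

mean_conc = h2o2_um(sample_fluo).mean()
print(f"{mean_conc:.2f} uM H2O2, i.e. {mean_conc / (cells_per_ml / 1e8):.2f} uM per 1e8 cells/mL")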
In order to determine whether the hydrogen peroxide production was dependent on the glycerol metabolism, we used the Merckoquant Peroxide Test (Merck Cat. No 110011) with a detection range of 0.5 to 25 μg of peroxide per mL of solution (as described in [START_REF] Hames | Glycerol metabolism is important for cytotoxicity of Mycoplasma pneumoniae[END_REF]). Fifteen-mL cultures of M. hyopneumoniae strains 7448 and 7422 were grown for 48 h in Friis medium, harvested by centrifugation at 3360 g and washed twice in the incubation buffer (67.7 mM HEPES pH 7.3, 140 mM NaCl, 7 mM MgCl2). Cells were resuspended in 4 mL of incubation buffer and aliquots of 1 mL were incubated for 1 h at 37 °C. To induce hydrogen peroxide production, either glycerol or glucose (final concentration 100 μM or 1 mM) was added to the cell suspension and samples were incubated at 37 °C for an additional 2 h. Hydrogen peroxide levels were measured using colorimetric strips according to the manufacturer's instructions. Aliquots without any added carbon source served as an incubation control. The statistical significance of the results was calculated using one-way ANOVA followed by Dunnett's multiple comparison test (p < 0.05). The results represent four biological replicates with at least two technical replicates each.
Mycoplasma cell count with flow cytometry
Mycoplasma cells cultivated for hydrogen peroxide detection were sedimented at 3360 g for 20 min at 4 °C and washed three times with 0.9% NaCl (1× at 3360 g for 20 min and 2× at 3360 g for 4 min). Cells were resuspended in 1 mL of 0.9% NaCl and diluted 1:30 for flow cytometry readings in a Guava EasyCyte cytometer (Millipore, USA). Cells were characterized by side-angle scatter (SSC) and forward-angle scatter (FSC) on a four-decade logarithmic scale. Absolute cell counting was performed up to 5000 events and the samples were diluted until the cell concentration was below 500 cells/μL. The number of counts obtained was then converted to cells/mL.
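As a simple illustration of this conversion from cytometer readings to cells/mL (the numbers are hypothetical; the actual counts are given in Supplementary Table S1):

# Hypothetical example: converting a cytometer reading to a cell concentration
events_per_ul = 350          # cells/uL reported by the cytometer (kept below 500 cells/uL)
dilution_factor = 30         # sample diluted 1:30 before the reading
cells_per_ml = events_per_ul * dilution_factor * 1000
print(f"{cells_per_ml:.2e} cells/mL")  # 1.05e+07 cells/mL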
Transcript levels of glpO with the use of real-time quantitative RT-PCR
Total RNA was isolated from 20 mL cultures of M. hyopneumoniae strains 7448, 7422 and J grown at 37 °C for 24 h. Cells were harvested by centrifugation at 3360 g for 15 min, resuspended in 1 mL of TRIzol (Invitrogen, USA) and processed according to the manufacturer's instructions, followed by DNA digestion with 50 U of DNase I (Fermentas, USA). The absence of DNA in the RNA preparations was monitored by PCR assays. The extracted RNA was analysed by gel electrophoresis and quantified with the Qubit TM system (Invitrogen, USA).
A first-strand cDNA synthesis reaction was conducted by adding 500 ng of total RNA to 500 ng of pd(N)6 random hexamer (Promega, USA) and 10 mM deoxynucleotide triphosphates. The mixture was heated at 65 °C for 5 min and then incubated on ice for 5 min. First-strand buffer (Invitrogen, USA), 0.1 M dithiothreitol and 200 U of M-MLV RT (Moloney Murine Leukemia Virus Reverse Transcriptase; Invitrogen, USA) were then added to a total volume of 20 μL. The reaction was incubated at 25 °C for 10 min and at 37 °C for 50 min, followed by 15 min at 70 °C for enzyme inactivation. A negative control was prepared in parallel, differing only by the absence of the RT enzyme. Quantitative PCR (qPCR) assays were performed using 1:2.5 diluted cDNA as template and Platinum SYBR Green qPCR SuperMix-UDG with ROX (Invitrogen, USA) with specific primers for glpO (5'GGTCGGGAACCTGCTAAAGC3' and 5'CCAGACGGAAACATCTTAGTTGG3') on a StepOne Real-Time PCR System (Applied Biosystems, USA). The qPCR reactions were carried out at 90 °C for 2 min and 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min. A melting curve analysis was done to verify the specificity of the synthesized products and the absence of primer dimers. The amplification efficiency was calculated with the LinRegPCR software application [START_REF] Ruijter | Amplification efficiency: linking baseline and bias in the analysis of quantitative PCR data[END_REF]. A relative quantification normalized against unit mass (500 ng of total RNA) was used to analyse the expression data with the equation Ratio (test/calibrator) = 2^ΔCT, where ΔCT = CT(calibrator) − CT(test) [80], and MHP_7448 (Replicate 2) was chosen as the calibrator. Statistical analyses were performed using GraphPad Prism 6 software by one-way ANOVA followed by Tukey's multiple comparison test (p < 0.05).
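As an illustration of this relative quantification, the snippet below applies the 2^ΔCT formula to hypothetical CT values; it assumes an amplification efficiency of 2 (perfect doubling per cycle), whereas the actual efficiencies were estimated with LinRegPCR.

# Hypothetical CT values for glpO; MHP_7448 (replicate 2) is the calibrator
ct_values = {"MHP_7448_rep2": 21.4, "MHP_7422_rep1": 21.9, "MHP_J_rep1": 21.6}
calibrator = "MHP_7448_rep2"

for sample, ct_test in ct_values.items():
    delta_ct = ct_values[calibrator] - ct_test   # deltaCT = CT(calibrator) - CT(test)
    ratio = 2 ** delta_ct                        # relative expression vs. the calibrator
    print(f"{sample}: relative glpO expression = {ratio:.2f}")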
Comparative modeling and protein-ligand interaction analysis of Fba and IolJ
The SWISS-MODEL server [START_REF] Schwede | SWISS-MODEL: An automated protein homology-modeling server[END_REF][START_REF] Biasini | SWISS-MODEL: modelling protein tertiary and quaternary structure using evolutionary information[END_REF] was used for template search and comparative modeling of all Fba and IolJ proteins in this study. The best homology models were selected according to coverage, sequence identity, Global Model Quality Estimation (GMQE) and QMEAN statistical parameters [START_REF] Benkert | QMEAN server for protein model quality estimation[END_REF][START_REF] Benkert | Toward the estimation of the absolute quality of individual protein structure models[END_REF]. The Fba from M. hyopneumoniae, along with IolJ and Fba from B. subtilis, were modeled using the crystal structure of fructose 1,6-bisphosphate aldolase from Bacillus anthracis in complex with 1,3-dihydroxyacetone phosphate (PDB 3Q94), while Fba-1 from M. hyopneumoniae was modeled using the fructose-1,6-bisphosphate aldolase from Helicobacter pylori in complex with phosphoglycolohydroxamic acid (PDB 3C52). Both selected templates have the same resolution range (2.30 Å). Experimentally solved Fba structures from E. coli [START_REF] Hall | The crystal structure of Escherichia coli class II fructose-1, 6-bisphosphate aldolase in complex with phosphoglycolohydroxamate reveals details of mechanism and specificity[END_REF] and G. intestinalis [START_REF] Galkin | Structural insights into the substrate binding and stereoselectivity of Giardia fructose-1,6-bisphosphate aldolase[END_REF] were used to include information about substrate binding in the active site. The DKGP and FBP ligands were drawn in Avogadro version 1.1.1 [START_REF] Hanwell | Avogadro: an advanced semantic chemical editor, visualization, and analysis platform[END_REF] by editing the tagatose-1,6-bisphosphate (TBP) molecule complexed with the Fba structure of G. intestinalis (PDB 3GAY). Each model was submitted to 500 steps of an energy minimization protocol using the universal force field (UFF). The DKGP and FBP molecules were inserted into the substrate binding sites of the models obtained by superposition of the models with the Fba structure of G. intestinalis.
Detection of marked myo-inositol through mass spectrometry
Acetonitrile and formic acid (Optima LC/MS Grade) were purchased from Fisher Scientific (Loughborough, UK). MilliQ water was obtained from a Direct-Q 5UV system (Merck Millipore, Billerica, Massachusetts, USA). Deuterated myo-inositol-1,2,3,4,5,6-d6 was purchased from CIL (C/D/N Isotopes Inc. Cat No. D-3019, Canada).
Cultivation in the presence of marked myo-inositol
Cells were cultivated in Friis medium supplemented with 0.25 g/L of deuterated myo-inositol-1,2,3,4,5,6-d6 (C/D/N Isotopes Inc. Cat No. D-3019). Cultures were interrupted after 8 h, 24 h and 48 h of cultivation for mass spectrometry analysis.
Sample preparation
All samples were filtered and concentrated with Amicon Ultra 3 kDa filters (Merck Millipore Cat. No. UFC200324). After this step, samples were dried in a miVac sample concentrator (Genevac, Ipswich, UK) for approximately 45 min at 50 °C. All samples were resuspended in ultrapure water to a final concentration of 10 g/L and were subsequently submitted to mass spectrometry.
Mass spectrometry
Aqueous extracts of Mycoplasma sp. and commercial deuterated myo-inositol-1,2,3,4,5,6-d6 were analysed using an Accurate-Mass Q-TOF LCMS 6530 with LC 1290 Infinity system and Poroshell 120 Hilic column (3x100 mm, 2.7 μm) (Agilent Technologies, Santa Clara, USA). The extracts were dissolved in water (10 g/L) and injection volume was 3 μL. A binary mobile phase system (A: 0.4% formic acid in milliQ-water and B: acetonitrile) was pumped at a flow rate of 0.9 mL/min at the following gradient: 0-3.5 min, 90% B; 3.5-7 min, 90% to 0% B; 7-9.5 min, 0% B; 9.5-10 min 0% to 90% B; 10-15 min, 90% B (total run: 15 min).
MS and MS/MS spectra were obtained in negative mode, with the following conditions: nebulization gas (nitrogen) at 310 °C, at a flow of 10 L/min and 40 psig pressure. The capillary tension was 3600 V and gave an ionisation energy of 100 eV. In targeted MS/MS mode, the collision energy was set at 18 eV. The acquisition range was m/z 50-500. MassHunter Qualitative Analysis Software (version B.07.00) was used for data analysis.
Data analysis
Deuterated myo-inositol-1,2,3,4,5,6-d6 was quantified in all aqueous extracts by HPLC-MS. For that, a calibration curve (based on peak area) of commercial myo-inositol was performed from 0.001 g/L to 0.05 g/L in replicate (4 times during the batch analysis). Statistical analyses were performed using GraphPad Prism 6 software. One-way ANOVA followed by Dunnett's multiple comparison test was used to test for differences in residual marked myo-inositol in culture after bacterial growth of all tested strains for 48 h (p < 0.05). A two-tailed unpaired t-test was used to compare the residual marked myo-inositol between M. hyopneumoniae 7448 and the control medium at the two extra time points: 8 and 24 h (p < 0.05).
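The quantification against the calibration curve amounts to a simple linear regression of peak area on concentration; the sketch below illustrates the idea with hypothetical calibration points and peak areas (the real peak data are those in Supplementary Table S5).

import numpy as np

# Hypothetical calibration curve: HPLC-MS peak areas of commercial myo-inositol standards
std_conc_g_l = np.array([0.001, 0.005, 0.010, 0.025, 0.050])     # g/L
std_area = np.array([1.2e4, 6.1e4, 1.19e5, 3.05e5, 6.00e5])      # integrated peak areas

slope, intercept = np.polyfit(std_conc_g_l, std_area, 1)

def residual_myo_inositol(peak_area):
    """Convert a peak area into a residual marked myo-inositol concentration (g/L)."""
    return (peak_area - intercept) / slope

# Hypothetical samples: control medium vs. medium after 48 h of M. hyopneumoniae growth
print(f"CTRL:     {residual_myo_inositol(2.9e5):.4f} g/L")
print(f"MHP_7448: {residual_myo_inositol(2.4e5):.4f} g/L")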
Determination of cell viability of M. hyopneumoniae in myo-inositol defined medium
All available strains were grown in Friis medium at 37 °C for 48 h, sedimented by centrifugation at 3360 g for 20 min at 4 °C, washed twice with ice-cold PBS and inoculated either in regular glucose defined medium (described in [START_REF] Ferrarini | Insights on the virulence of swine respiratory tract mycoplasmas through genome-scale metabolic modeling[END_REF], supplemented with 5 g/L of succinate) or in myo-inositol defined medium (regular defined medium depleted of glucose and glycerol and supplemented with 0.5 g/L of myo-inositol). Viability of the cells was measured through the ATP production of live cells recovered after 8 h of growth in either medium, using the BacTiter-Glo TM Microbial Cell Viability Assay Kit (Promega, USA) according to the manufacturer's manual.
Luminescence was recorded in a SpectraMax MiniMax 300 Imaging Cytometer (Molecular Devices, USA) with an integration time of 0.5 s in an opaque-walled multiwell plate. Average ATP production was calculated from biological duplicates and technical triplicates. The ATP production of each strain was compared between the regular defined medium and the myo-inositol defined medium to determine the ratio of viable cells and to allow a comparison between strains. A 10-fold serial dilution of ATP was used as a standard curve (Supplementary Figure S8). Statistical analyses were performed using GraphPad Prism 6 software by one-way ANOVA followed by Tukey's multiple comparison test (p < 0.05).
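Because absolute ATP values are not comparable across organisms with different energetic yields, viability was expressed as a ratio between the two media; a minimal sketch of that normalization, with hypothetical ATP estimates, is shown below.

import numpy as np

# Hypothetical ATP estimates (nM, read off the BacTiter-Glo standard curve) after 8 h
atp_regular_medium = np.array([182.0, 175.0, 190.0])    # glucose defined medium (control)
atp_inositol_medium = np.array([141.0, 150.0, 137.0])   # myo-inositol defined medium

# Viability is the ratio of ATP production between the two media for the same strain
viability = atp_inositol_medium.mean() / atp_regular_medium.mean()
print(f"Viable cells in myo-inositol medium: {100 * viability:.0f}% of the control growth")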
Figure legends
Fig. 1 Hydrogen peroxide production by swine mycoplasmas. A. In Friis medium after bacterial growth: hydrogen peroxide was only detected in growth media from the pathogenic strains (field isolates) of M. hyopneumoniae 7448 (MHP_7448) and 7422 (MHP_7422). Neither the attenuated strain J (MHP_J) nor the other species M. hyorhinis (MHR) and M. flocculare (MFL) produced detectable amounts of this toxic product. The concentration was also standardized based on the average number of cells from each culture. Data are presented as mean and standard deviation of three independent samples and statistical analysis was performed considering M. flocculare as a control strain (since it lacks the glpO gene). B. In the presence of different carbon sources: pathogenic M. hyopneumoniae strains were used to test hydrogen peroxide production in incubation buffer supplemented with either glycerol or glucose after 2 h of incubation. Both strains were able to produce significant amounts of the toxic product in the presence of glycerol.
Fig. 2 Expression levels of the glpO gene in M. hyopneumoniae strains. We did not find any significant difference in the transcript levels of glpO among the tested strains. Bars show the average relative quantification normalized against unit mass (500 ng of total RNA); replicate 2 from strain 7448 was used as the calibrator. Average expression levels were calculated with independent biological triplicates (p < 0.05).
Fig. 3 Myo-inositol catabolism pathway in all M. hyopneumoniae strains and its transcriptional unit in M. hyopneumoniae strain 7448. Metabolites are depicted in dark green and enzymatic activities present in M. hyopneumoniae are shown in pink. Metabolite abbreviations are as follows: MI (myo-inositol), 2KMI (2-keto-myo-inositol), THcHDO (3D-(3,5/4)-trihydroxycyclohexane-1,2-dione), 5DG (5-deoxy-D-glucuronate), DKG (2-deoxy-5-dehydro-D-gluconate), DKGP (6-phospho-5-dehydro-2-deoxy-D-gluconate), MSA (malonate semialdehyde), AcCoA (acetyl coenzyme-A), DHAP (dihydroxyacetone phosphate).
Fig. 4 Substrate cavity prediction for Fba and Fba-1 from M. hyopneumoniae strain 7448. Cavities from the comparative models of Fba and Fba-1 from M. hyopneumoniae in comparison to the models constructed for Fba and IolJ from B. subtilis. The specificity for DKGP in IolJ seems to be strongly associated with the presence of a conserved arginine in position 'a' (R52 in Fba-1 from M. hyopneumoniae). In contrast, Fbas generally bear glycines in this position (for a complete explanation see Supplementary Figures S3 and S4). While Fba-1 from M. hyopneumoniae more closely resembles the experimentally solved Fba enzymes from B. subtilis, E. coli and G. intestinalis, the predicted structure of Fba from M. hyopneumoniae is more similar to the IolJ structure from B. subtilis.
Fig. 5 Deuterated myo-inositol-1,2,3,4,5,6-d6 uptake in complex medium. A. Comparison after 48 h of growth of M. hyopneumoniae J ATCC 25934 (MHP_J) and field isolate 7448 (MHP_7448), M. flocculare ATCC 27716 (MFL) and M. hyorhinis ATCC 17981 (MHR). While there is no significant difference in the concentrations between MFL, MHR and the control medium (CTRL), both M. hyopneumoniae strains seem to be able to uptake myo-inositol. B. We also collected two extra time points for MHP_7448 and CTRL: 8 h and 24 h of growth. At all time points there is a significant difference between the residual marked myo-inositol and the control medium. Data are presented as mean and standard deviation of 4 independent biological replicates. Asterisks indicate statistically significant differences in residual marked myo-inositol (*p < 0.05; **p < 0.01).
Fig. 6 Viability of M. hyopneumoniae, M. hyorhinis and M. flocculare after 8 hours of incubation in myo-inositol defined medium. The viability of cells in myo-inositol defined medium was measured by ATP production in comparison to inoculation in regular defined medium (glucose-containing medium). Data are represented as the ratio between ATP production in the two media. There is a significant decrease of ATP production in M. hyorhinis and M. flocculare, whereas at least 75% of the cells from M. hyopneumoniae remained viable after cultivation in the myo-inositol defined medium (***p < 0.001; ****p < 0.0001).
Acknowledgments
This work was supported by grants from CAPES-COFECUB 782/13 and Inria. MGF was granted post doctoral fellowship funded by the European Research Council under the European Community's Seventh Framework Programme (FP7 / 2007-2013)/ ERC grant agreement no. [247073]10. SGM was the recipient of a CAPES doctoral fellowship. DP was granted post doctoral fellowship funded by the European Union Framework Program 7, Project BacHbERRY number FP7-613793. JFRB is a recipient of a CAPES postdoctoral fellowship. The mass spectrometry analysis was carried out in the Centre d'Etude des Substances Naturelles at the University of Lyon.
Authors' contributions
MGF, SGM, MFS and AZ conceived and designed the work. MGF and SGM performed most of the experimental work. DP, GM and GC collaborated in the mass spectrometry experiments and analysis. JFB performed tridimensional analysis of proteins. All authors collaborated in the analysis of all data. MGF and SGM wrote the manuscript with inputs from the other authors. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Supplementary material
Table S1: Cytometry cell counts and replicate readings for the calculation of hydrogen peroxide production.
Table S2: Replicate readings of real-time RT-qPCR for glpO transcript relative expression.
Table S3: Gene locus tags for the genes from the uptake and metabolism of myo-inositol in M. hyopneumoniae strains.
Table S4: Comparative modeling summary.
Table S5: Average peak surface for marked myo-inositol from mass spectrometry experiments.
Table S6: ATP production replicates and average for each sample.
Table S7: Literature experimental data available for genes important for the glycerol and myo-inositol metabolism.
File S1: Statistical analyses results.
64,303 | [748311, 757318, 982157, 170068] | [417639, 10025, 262280, 417639, 543494, 194495, 262280, 194495, 262280, 417639, 543494]
01467553 | en | [sdv] | 2024/03/05 22:32:13 | 2017 | https://univ-rennes.hal.science/hal-01467553/file/FDR-controlled%20metabolite%20annotation--accepted.pdf
Andrew Palmer
Prasad Phapale
Ilya Chernyavsky
Regis Lavigne
Dominik Fay
Artem Tarasov
Vitaly Kovalev
Jens Fuchser
Sergey Nikolenko
Charles Pineau
Michael Becker
Theodore Alexandrov
email: [email protected]
FDR-controlled metabolite annotation for high-resolution imaging mass spectrometry
High-mass-resolution (HR) MS that discriminates metabolites differing by a few mDa promises to achieve unprecedented reliability of metabolite annotation. However, no bioinformatics exists for automated metabolite annotation in HR imaging MS. This has restricted this powerful technique mainly to targeted imaging of a few metabolites only [START_REF] Spengler | Mass spectrometry imaging of biomolecular information[END_REF]. Existing approaches either need visual examination or are based on exact-mass filtering, which is known to produce false positives even for ultra-HR MS [START_REF] Kind | Metabolomic database annotations via query of elemental compositions: mass accuracy is insufficient even at less than 1 ppm[END_REF]. This gap can be explained by the novelty of the field and by the high requirements on the algorithms, which should be robust to strong pixel-to-pixel noise and efficient enough to mine 10-100 gigabyte datasets.
An additional obstacle is the lack of a metabolomics-compatible approach for estimating False Discovery Rate (FDR) [START_REF] Benjamini | Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing[END_REF][START_REF] Storey | A direct approach to false discovery rates[END_REF] . FDR is defined as the ratio of false positives in a set of annotations. FDR is a cornerstone of quantifying quality of annotations in genomics, transcriptomics, and proteomics [START_REF] Käll | Assigning significance to peptides identified by tandem mass spectrometry using decoy databases[END_REF] . The proteomics target-decoy FDR-estimation is not directly applicable in metabolomics where there is no equivalent of a decoy database of implausible peptide sequences. An FDR-estimate in metabolomics was proposed earlier [START_REF] Matsuda | Assessment of metabolome annotation quality: a method for evaluating the false discovery rate of elemental composition searches[END_REF] but is limited to phytochemical metabolites, has not found widespread use and cannot be applied to imaging MS as it does not allow incorporating spatial information. An alternative approach to estimate FDR is to use a phantom sample with controlled molecular content but it is inherently complex and narrowed to a specific protocol.
We have addressed this double challenge and developed a comprehensive bioinformatics framework for FDR-controlled metabolite annotation for HR imaging MS. Our open-source framework (https://github.com/alexandrovteam/pySM) is based on the following principles: database-driven annotation by screening for metabolites with known sum formulas, an original Metabolite-Signal Match (MSM) score combining spectral and spatial measures, a novel target-decoy FDR-estimation approach with a decoy set generated by using implausible adducts.
Our framework takes as input: 1) an HR imaging MS dataset in the imzML format, 2) a database of metabolite sum formulas in a CSV format (e.g., HMDB [START_REF] Wishart | HMDB 3.0--The Human Metabolome Database in 2013[END_REF] ), 3) an adduct of interest (e.g., +H, +Na, +K). For a specified FDR level (e.g., 0.1), the framework provides metabolite annotations: metabolites from the database detected as present in the sample. The framework cannot resolve isomeric metabolites; the provided putative molecular annotations are on the level of sum formulas [START_REF] Sumner | Proposed minimum reporting standards for chemical analysis Chemical Analysis Working Group (CAWG) Metabolomics Standards Initiative (MSI)[END_REF] .
Our novel MSM score quantifies the likelihood of the presence of a metabolite with a given sum formula in the sample (Figure 1; Supplementary Note 1, Figure S2). For an ion (sum formula plus ion adduct, e.g., +H), we generate its isotopic pattern accounting for the instrument resolving power with isotopic fine structure if resolvable. Then, we sample from the imaging MS dataset an ion signal, namely, the ion images for all isotopic peaks with predicted intensity greater than 0.01% of the principal peak (Supplementary Note 1, Figure S1). MSM is computed by multiplying the following measures. (1) Measure of spatial chaos quantifies spatial informativeness within the image of the principal peak [START_REF] Alexandrov | Testing for presence of known and unknown molecules in imaging mass spectrometry[END_REF] . We introduce an improved measure of spatial chaos (Algorithm OM1) which outperforms earlier proposed measures [START_REF] Alexandrov | Testing for presence of known and unknown molecules in imaging mass spectrometry[END_REF][START_REF] Wijetunge | EXIMS: an improved data analysis pipeline based on a new peak picking method for EXploring Imaging Mass Spectrometry data[END_REF] in both speed and accuracy (Supplementary Note 1). ( 2) Spectral isotope measure quantifies spectral similarity between a theoretical isotopic pattern and relative sampled isotopic intensities. (3) Spatial isotope measure quantifies spatial co-localization between isotopic ion images. The MSM score of 1 indicates the maximal likelihood of the signal to correspond to the ion.
Our novel FDR-estimate helps select an MSM cutoff so that the ions with MSM scores above the cutoff will confidently correspond to metabolites from the sample (Figure 1; Supplementary Note 1, Figure S2). According to the target-decoy approach [START_REF] Käll | Assigning significance to peptides identified by tandem mass spectrometry using decoy databases[END_REF] , we propose to construct a decoy set as follows. We define a target set as ions from a metabolite database with a given ion adduct (e.g., +H). We define the decoy set as ions for the same sum formulas but with the following implausible adducts. For each sum formula, we randomly select an implausible adduct from the CIAAW 2009 list of the elements (e.g., +B, +Db, +Ag) excluding plausible adducts. MSM scores are calculated for target and decoy ions. For any MSM cutoff, FDR is estimated as the ratio between the numbers of decoy false positives (the decoy ions with MSM scores above the cutoff, FP D ) and target positives (the target ions with MSM scores above the cutoff). Here, we approximate the number of target false positives (FP T ) by FP D assuming the target and decoy sets to be similar. The sampling of implausible adducts is repeated, averaging the resulting FDR-estimate. FDR-controlled metabolite annotation is performed by specifying the desired value of FDR (e.g., 0.1) and choosing the smallest MSM cutoff providing the desired FDR (Figure 1; Supplementary Note 1, Figure S2). FDR-controlling provides annotations of a given confidence independently on the MSM cutoff, dataset, MS settings and operator, and can be used for comparative and inter-lab studies.
We evaluated the proposed FDR-estimation (Supplementary Note 1). First, we studied the similarity between the decoy and target ions required to fulfill FP D ≈ FP T . Relative intensities of isotopic patterns for target and decoy ions were found to be similar (Figure 2a), although the decoy ions have higher relative intensities for heavier isotopic peaks due to more complex isotopic patterns. The target and decoy ions were also found to be similar in the m/z- and mass defect-space (Figure 2b), with a positive offset in m/z for decoy adducts, which typically contain heavier elements. Second, we compared the estimated and true FDR for a simulated dataset with a known ground truth (Figure 2c; Supplementary Note 1). Although there is some difference in the low-values region, the estimated FDR follows the true FDR overall. Finally, negative control experiments using each of the implausible adducts as a target adduct showed that FDR values for implausible adducts are characteristically higher (Figure 2d; Supplementary Note 1).
We showcased our framework on HR imaging MS datasets from two (a1 and a2) female adult wild-type mice (Supplementary Note 1). The brains were extracted, snap-frozen, and sectioned using a cryostat. Five coronal sections were collected from each brain: 3 serial sections (s1-s3) at the Bregma 1.42 mm, s4 at -1.46 mm and s5 at -3.88 mm. The sections were imaged using a 7T MALDI-FTICR mass spectrometer solariX XR (Bruker Daltonics) in the positive mode with 50 µm raster size. The datasets were 20-35 gigabytes in size each. FDR-controlled annotation was performed with the desired level of FDR=0.1 for metabolites from HMDB with +H, +Na, +K adducts, and an m/z-tolerance of 2.5 ppm (Figure 2e-i). Venn diagrams of annotated metabolites (Figure 2e) show a high reproducibility between sections from the same animal (especially between the serial sections from a2, where 51 of 73 sum formulas were annotated in all three sections), and between the animals (only two sum formulas were annotated in animal a1 only). The numbers of detected adducts were similar (Figure 2f). Exemplary molecular images of annotations illustrate a high reproducibility between technical replicates and animals (Figure 2g). Mostly phospholipids were detected (PCs, PEs, SMs, PAs; Supplementary Note 1, Table S5 and Figure S10), as is typical for MALDI imaging MS of brain tissue using the HCCA matrix [START_REF] Gode | Lipid imaging by mass spectrometry --a review[END_REF] . Of the overall 103 annotations, 16 representative ones were validated with LC-MS/MS, either by using authentic standards or by assigning fragment structures to MS/MS data (Supplementary Note 3).
We demonstrated the potential of using FDR curves in two examples. First, we showed that MSM outperforms the individual measures (Figure 2h; Supplementary Note 1, Figure S8). The exact mass filtering performs significantly worse, achieving the lowest FDR=0.25 for 10 annotations (vs. FDR=0 for the same number of annotations when using MSM). Second, we demonstrated that the number of FDR-controlled annotations decreases with the decreasing mass resolving power (Figure 2i; Supplementary Note 1, Figure S9). For this, we artificially reduced mass resolving power by using different m/z-tolerances when sampling m/z-signals: 1, 2.5 (default), 5, 30, 100, 1000, and 5000 ppm. This indicates that a high mass accuracy and resolution are essential for confident metabolite annotation.
Our framework is directly applicable to other types of HR imaging MS with FTICR or Orbitrap analyzers (MALDI-, DESI-, SIMS-, IR-MALDESI-, etc.; with proper adducts to be selected for each source) and other types of samples (plant tissue, cell culture, agar plate, etc.) for which a proper metabolite database can be selected.
Accession Codes
MTBLS313: imaging mass spectrometry data from mouse and rat brain samples, MTBLS317: simulated imaging mass spectrometry data and MTBLS378: LC-MS/MS data from mouse brain samples.
Tables N/A
Online Methods
Imaging mass spectrometry
1.1 Imaging mass spectrometry data from mouse brain samples
Samples
Two female adult wild-type C57 mice (a1, a2) were obtained from Inserm U1085 - Irset Research Institute (University of Rennes 1, France). Animals were aged 60 days and were reared under ad-lib conditions. Care and handling of all animals complied with EU directive 2010/63/EU on the protection of animals used for scientific purposes. The whole brain was excised from each animal immediately post-mortem, rapidly but loosely wrapped in aluminum foil to preserve its morphology, and snap-frozen in liquid nitrogen. Frozen tissues were stored at -80 °C until use to avoid degradation.
Sample preparation
For each animal, five coronal 12 µm-thick brain sections were collected on a cryomicrotome CM3050S (Leica, Wetzlar, Germany) as follows. Three consecutive sections were acquired at the Bregma distance of 1.42 mm (sections s1, s2, s3) and two further sections were acquired at the Bregma distances of -1.46 and -3.88 mm (datasets s4 and s5). The sections were thaw-mounted onto indium tin oxide (ITO) coated glass slides (Bruker Daltonics, Bremen, Germany) and immediately desiccated. Alpha-Cyano-4-hydroxycinnamic acid (HCCA) MALDI-matrix was applied using the ImagePrep matrix deposition device (Bruker Daltonics). The method for matrix deposition was set as described: after an initialization step consisting in between 10-15 cycles with a spray power at 15%, an incubation time of 15 s and a drying time of 65 s, 3 cycles were performed under sensor control with a final voltage difference at 0.07 V, a spray power at 25%, an incubation time of 30 s and a drying time under sensor control at 20% and a safe dry of 10 s; then 6 cycles were performed under sensor control with a final voltage difference at 0.07 V, a spray power at 25%, an incubation time of 30 s and a drying time under sensor control at 20% and a safe dry of 15 s; 9 cycles were performed under sensor control with a final voltage difference at 0.2 V, a spray power at 15%, an incubation time of 30 s and a drying time under sensor control at 20% and a safe dry of 50 s; finally 20 cycles were performed under sensor control with a final voltage difference at 0.6 V (+/-0.5 V), a spray power at 25%, an incubation time of 30 s and a drying time under sensor control at 40% and a safe dry of 30 s.
Imaging mass spectrometry
For MALDI-MS measurements the prepared slides were mounted into a slide adapter (Bruker Daltonics) and loaded into the dual source of a 7T FTICR mass spectrometer solariX XR (Bruker Daltonics) equipped with a Paracell, at the resolving power R=130000 at m/z 400. The x-y raster width was set to 50 µm using smartbeam II laser optics with the laser focus setting 'small' (20-30 µm). For a pixel, a spectrum was accumulated from 10 laser shots. The laser was running at 1000 Hz and the ions were accumulated externally (hexapole) before being transferred into the ICR cell for a single scan. For animal a1, each spectrum was internally calibrated by one-point correction using a known phospholipid with the ion C 42 H 82 NO 8 P+K + , at the m/z 798.540963. For animal a2, every spectrum was internally calibrated by several-point correction using: the matrix cluster of HCCA [C 20 H 14 N 2 O 6 +H + , m/z 379.092462] if present, and known phospholipids present in the mouse brain [C 40 H 80 NO 8 P+H + , m/z 734.569432] and [C 42 H 82 NO 8 P+K + , m/z 798.540963]. Data was acquired for the mass range 100 < m/z < 1200 followed by a single zero filling and a sine apodization. Online feature reduction was performed in the ftmsControl software, version 2.1.0 (Bruker Daltonics) to return only the peak centroids and intensities.
Signal processing
Centroid data was exported into the imzML format using the SCiLS Lab software, version 2016a (SCiLS, Bremen, Germany). Ion images were generated with a tolerance of ±2.5 ppm. Hot-spot removal was performed for each image independently by setting the values of the 1% highest-intensity pixels to the value of the 99th percentile, followed by an edge-preserving denoising using a median 3x3-window filter.
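A minimal sketch of this per-image post-processing (hot-spot removal at the 99th percentile followed by 3x3 median filtering) using NumPy and SciPy; the function name is illustrative and this is not the SCiLS/pySM code itself.

import numpy as np
from scipy.ndimage import median_filter

def postprocess_ion_image(img):
    """Hot-spot removal (clip to the 99th percentile) followed by a 3x3 median filter."""
    img = np.asarray(img, dtype=float).copy()
    p99 = np.percentile(img, 99)       # intensity value of the 99th percentile
    img[img > p99] = p99               # set the 1% highest-intensity pixels to that value
    return median_filter(img, size=3)  # denoising with a median 3x3-window filter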
Data availability
The imaging mass spectrometry data is publicly available at the MetaboLights repository under the accession numbers MTBLS313.
Simulated imaging mass spectrometry data
An imaging MS dataset was simulated that contained 300 sum formulas from the HMDB metabolite database, version 2.5, and 300 randomly generated formulas not contained in HMDB. To each sum formula, either a +H, +Na, or +K adduct was randomly assigned. Random sum formulas were generated such that the probability distributions of the number of CHNOPS atoms, the C-H ratio, and the C-O ratio are the same as for all formulas from HMDB. Isotope patterns were generated for each formula at a resolving power of R=140000 at m/z 400. Each isotope pattern was multiplied by a random intensity in the range [0.2-1.0]. The patterns were assigned to one of two partially overlapping square regions: one with sum formulas from HMDB, the other with sum formulas not from HMDB. Additionally, 700 peaks at randomly selected m/z-values were added independently to each spectrum, so that a spectrum inside one of the squares would have 3500 ± 127 peaks. The resulting line spectra were then convolved with a Gaussian function with sigma equal to 0.015.
Data availability
The simulated imaging mass spectrometry data is publicly available at the MetaboLights repository under the accession numbers MTBLS317.
Metabolite-Signal Match score
Individual measures used in the Metabolite-Signal Match (MSM) score were defined based on the ion images generated from each peak within the isotope pattern for a particular sum formula and adduct. Isotope envelopes were predicted for an ion (sum formula plus adduct) at the mass resolution of the dataset and peak centroids were detected.
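As an illustration of the centroid-detection step, the sketch below picks local maxima of an already predicted profile-mode envelope and keeps peaks above a relative-intensity threshold; the envelope prediction itself (which depends on the resolving-power model) is not shown and the names are illustrative.

import numpy as np
from scipy.signal import find_peaks

def centroid_envelope(mzs, intensities, min_rel_intensity=1e-4):
    """Return (m/z, relative intensity) centroids of a profile-mode isotope envelope,
    keeping peaks above min_rel_intensity (e.g. 0.01% of the principal peak)."""
    mzs = np.asarray(mzs, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    peaks, _ = find_peaks(intensities)         # indices of local maxima
    if peaks.size == 0:
        return np.array([]), np.array([])
    rel = intensities[peaks] / intensities[peaks].max()
    keep = rel >= min_rel_intensity
    return mzs[peaks][keep], rel[keep]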
Measure of spatial chaos
The measure of spatial chaos (Algorithm OM1) quantifies whether the principal ion image is informative (structured) or non-informative (noise). This approach was previously proposed by us for image-based peak picking [START_REF] Alexandrov | Testing for presence of known and unknown molecules in imaging mass spectrometry[END_REF] but here we developed an improved measure based on the concept of level sets, earlier applied for image segmentation [START_REF] Vese | A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model[END_REF] . For an ion image, its range of intensities is split into a number of levels. For each level, a level set is calculated as a 0-or-1-valued indicator image having value 1 for pixels with intensities above the level. Then, the number of closed 1-valued objects (connected areas of 1-valued pixels) in the produced level set is computed. Images with structure tend to exhibit a small number of objects that simply shrink in size as the threshold increases, whilst images with a noisy distribution produce a great number of objects, as the pixels above the threshold level are randomly spatially distributed (see Figure S3a). The algorithm was inspired by a concept of computational topology called persistent homology [START_REF] Edelsbrunner | Persistent homology-a survey[END_REF] . The proposed measure of spatial chaos returns a value between zero and one which is high for spatially-structured images and low for noisy images.
The computational complexity of the level-sets algorithm is O(N_lev · n), where n is the number of pixels and N_lev is the number of levels. The number-of-levels parameter controls the smoothness of the curve seen in Figure S3b, and above a certain granularity the value of the measure stabilises to a constant for a particular image. A moderate number of levels was found to be sufficient to provide stable results for both the test images from 2 and random noise (data not shown).
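A schematic Python rendering of the level-sets computation (rescale, threshold at a sequence of levels, fill single-pixel holes, count 4-connected objects with scipy's label). The hole-filling is approximated here by a single morphological closing, the default of 30 levels is an assumption, and the conversion of the per-level object counts into the final score in [0, 1] is not reproduced; treat this as an illustration rather than the reference implementation.

import numpy as np
from scipy.ndimage import label, binary_closing

FOUR_CONNECTIVITY = np.array([[0, 1, 0],
                              [1, 1, 1],
                              [0, 1, 0]])

def objects_per_level(img, n_levels=30):
    """Count 4-connected objects in the level sets of a [0, 1]-rescaled ion image."""
    img = np.asarray(img, dtype=float)
    rng = img.max() - img.min()
    img = (img - img.min()) / rng if rng > 0 else np.zeros_like(img)
    counts = []
    for level in np.linspace(0.0, 1.0, n_levels, endpoint=False):
        mask = img > level                              # level set at the current threshold
        mask = binary_closing(mask, FOUR_CONNECTIVITY)  # fill isolated single-pixel holes
        _, n_objects = label(mask, FOUR_CONNECTIVITY)   # count disconnected objects
        counts.append(n_objects)
    return counts   # many objects across levels -> noisy image; few -> structured image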
Spatial isotope measure
The spatial isotope measure quantifies the spatial similarity between the ion images of the isotopic peaks composing a signal for a sum formula. It is calculated as a weighted average linear correlation between the ion image of the most intense (principal) isotope peak and the ion images of all the other isotope peaks, where the peaks considered are the theoretically predicted isotope peak centroids for a particular sum formula and adduct with an intensity greater than 1% of the principal (largest) peak. Each image is weighted by the relative abundance of the theoretical isotope peak height. Negative correlation values are set to zero, so the spatial isotope measure returns a value between zero and one; higher values imply a better match.
Equation OM1. Spatial isotope measure quantifying the spatial similarity of each isotope peak image to the principal peak image:
rho_spatial = ( sum_{i=2..n} a_i * max(corr(v_1, v_i), 0) ) / ( sum_{i=2..n} a_i ),
where corr returns Pearson's correlation coefficient, v_i is the vector of intensities from the ion image of the i'th isotope peak, a_i is the theoretical relative abundance of the i'th isotope peak, and n is the number of isotope peaks considered.
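A sketch of this measure as described: a weighted average of the non-negative Pearson correlations between the principal-peak image and each other isotopic-peak image, weighted by the theoretical abundances. Names and the exact normalisation are assumptions to be checked against the reference implementation.

import numpy as np

def spatial_isotope_measure(ion_images, theor_intensities):
    """ion_images: flattened isotope-peak images, principal (most intense) peak first;
    theor_intensities: matching theoretical isotopic abundances."""
    principal = np.asarray(ion_images[0], dtype=float)
    corrs, weights = [], []
    for img, a in zip(ion_images[1:], theor_intensities[1:]):
        r = np.corrcoef(principal, np.asarray(img, dtype=float))[0, 1]
        if not np.isfinite(r):
            r = 0.0                      # constant image -> undefined correlation
        corrs.append(max(r, 0.0))        # negative correlations are set to zero
        weights.append(a)
    if not weights or sum(weights) == 0:
        return 0.0
    return float(np.average(corrs, weights=weights))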
Spectral isotope measure
The spectral isotope measure quantifies the spectral similarity between a predicted isotope pattern and the measured intensities. It is calculated from the average difference between the normalised predicted isotope ratios and the normalised measured intensities, reported so that larger values imply a better match.
Equation OM2. Spectral isotope measure quantifying the spectral similarity between a predicted isotope pattern and the measured intensities of a signal:
rho_spectral = 1 - (1/n) * sum_{i=1..n} | x_i / ||x|| - a_i / ||a|| |.
In Equation OM2, x is the vector containing the mean image intensity of each isotopic ion image over the pixels with non-zero intensity values, and a is the vector of theoretical isotope peak intensities. This can be considered as projecting both the theoretical and the empirical isotope patterns onto a sphere and then calculating one minus the average coordinate difference.
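Following the "one minus the average coordinate difference after projection onto a sphere" reading of Equation OM2, a minimal sketch (L2 normalisation is assumed):

import numpy as np

def spectral_isotope_measure(measured_means, theor_intensities):
    """measured_means: mean image intensity per isotopic peak (over non-zero pixels);
    theor_intensities: theoretical isotopic peak intensities."""
    x = np.asarray(measured_means, dtype=float)
    a = np.asarray(theor_intensities, dtype=float)
    x = x / np.linalg.norm(x)   # project both patterns onto the unit sphere
    a = a / np.linalg.norm(a)
    return float(1.0 - np.mean(np.abs(x - a)))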
Metabolite-Signal Match score
The Metabolite-Signal Match (MSM) score quantifies the similarity between the theoretical signal of a sum formula and its measured counterpart, with a higher value corresponding to higher similarity. It is calculated according to Equation OM3 as a product of the individual measures (measure of spatial chaos, spatial isotope measure and spectral isotope measure). This puts an equal weighting on all measures whilst penalizing any annotation that gets a low value for any of the measures.
Equation OM3. Metabolite-Signal Match (MSM) score quantifying similarity between a theoretical signal of a sum formula and its counterpart sampled from the dataset:
MSM = (measure of spatial chaos) x (spatial isotope measure) x (spectral isotope measure).
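Using the measures sketched above, the combination of Equation OM3 is a one-liner:

def msm_score(spatial_chaos, spatial_isotope, spectral_isotope):
    # product of the three measures: a low value of any single measure penalises the score
    return spatial_chaos * spatial_isotope * spectral_isotope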
Section OM3. False Discovery Rate-controlled metabolite annotation
Molecular annotation
First, we consider all unique sum formulas from a metabolite database of interest. We used the Human Metabolome Database (HMDB), v. 2.5, considering only 7708 carbon-containing sum formulas [START_REF] Wishart | HMDB 3.0--The Human Metabolome Database in 2013[END_REF] . Then, we select a list of potential ion adducts. The adducts +H, +Na and +K were used as the adducts commonly detected during tissue MALDI imaging MS in the positive mode 27 . Then, we perform molecular annotation of an imaging MS dataset for each ion (combination of a sum formula and an adduct) independently as described in Algorithm OM2. Note that in this algorithm the MSM threshold needs to be specified; for the updated algorithm selecting the MSM threshold in an FDR-controlled way, please see Algorithm OM3.
Calculation of the False Discovery Rate
To calculate the False Discovery Rate among the molecular annotations provided using Algorithm OM2 with a given MSM threshold, we developed a target-decoy approach similar to (Elias and Gygi 2007) 28 . The innovative part of this development is in applying the target-decoy approach in the spatial metabolomics context by defining a decoy set appropriate for metabolomics.
A target set was defined as a set of molecular ions for the sum formulas from a metabolite database (e.g. HMDB), with a given ion adduct type (e.g. +H, +Na, +K). A decoy set was defined as a set of implausible ions for the same sum formulas but with implausible ion adduct types. For each sum formula, an implausible elemental adduct is randomly chosen from the CIAAW 2009 list of isotopic compositions of the elements [START_REF] Berglund | Isotopic compositions of the elements 2009 (IUPAC Technical Report)[END_REF] excluding the plausible adducts, namely from He, Li, Be, B, C, N, O, F, Ne, Mg, Al, Si, P, S, Cl, Ar, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, Ge, As, Se, Br, Kr, Rb, Sr, Y, Zr, Nb, Mo, Ru, Rh, Pd, Ag, Cd, In, Sn, Sb, Te, I, Xe, Cs, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Ir, Th, Pt, Pu, Os, Yb, Lu, Bi, Pb, Re, Tl, Tm, U, W, Au, Er, Hf, Hg, Ta. Once the target and decoy sets are defined, the MSM scores are calculated for all target and decoy ions.
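A sketch of the decoy generation: for each sum formula, one implausible elemental adduct is drawn at random from the list quoted above (abbreviated here to a few elements); repeated draws produce the different decoy sets used below.

import random

PLAUSIBLE_ADDUCTS = {"H", "Na", "K"}
# abbreviated; the full list above enumerates the CIAAW 2009 elements minus the plausible adducts
IMPLAUSIBLE_ADDUCTS = [a for a in ("He", "Li", "Be", "B", "F", "Ne", "Si", "Sc",
                                   "Ti", "V", "Cr", "Mn", "Ag", "Cd")
                       if a not in PLAUSIBLE_ADDUCTS]

def make_decoy_set(sum_formulas, rng=random):
    """Return one (sum_formula, implausible_adduct) decoy ion per sum formula."""
    return [(formula, rng.choice(IMPLAUSIBLE_ADDUCTS)) for formula in sum_formulas]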
The MSM cutoff is a key parameter of the molecular annotation. Setting the MSM cutoff changes the number of molecular annotations made. For any MSM cutoff, we define positives as the ions with MSM scores above the cutoff and negatives as the ions with MSM scores below the cutoff. We define FP_D as the positive hits from the decoy set. Since any decoy ion is constructed to be implausible, all decoy ions detected as positive are false positives. Then, we estimate FDR with FDR' according to Equation OM4.
Equation OM4. Definition of FDR and the proposed estimate of FDR (FDR'):
FDR = FP_T / (FP_T + TP_T),   FDR' = FP_D / P_T,
where FP and TP are false positives and true positives respectively, and P_T and FP_D are the numbers of annotations from the target and decoy sets for the given MSM cutoff.
Similar to the approach of FDR calculation in genome-wide studies proposed by (Storey & Tibshirani, 2003) [START_REF] Storey | Statistical significance for genomewide studies[END_REF] and picked up later in proteomics, Equation OM4 proposes an approximation of the true FDR defined as FP_T / (FP_T + TP_T). This approach relies on having a high similarity between false positives in the target set and the decoy set. The decoy set must be the same size as the target set and share the same statistical distributions with respect to the measures used in the annotation. If these assumptions are satisfied, then the number of false positives from the decoy (FP_D) approximates the number of false positives from the target (FP_T), while the denominator (the number of target positives, P_T) is the same in FDR and FDR'.
As the decoy generation is a randomized process, with one decoy search formed by a sampling of implausible adducts from all possible implausible adducts, FDR calculation is a repeated sampling process. We propose to repeat it (20 times for the presented results) and calculate the median of the observed FDR values. We favored median over mean for its general robustness to outliers and for providing integer values that can be translated into the numbers of annotations.
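A sketch of the FDR estimate of Equation OM4 for a given MSM cutoff, with the median taken over the repeated decoy samplings as described (illustrative names, not the reference implementation):

import numpy as np

def estimate_fdr(target_msm, decoy_msm, cutoff):
    """FDR' of Equation OM4: (# decoy ions above the cutoff) / (# target ions above the cutoff)."""
    n_target_pos = int(np.sum(np.asarray(target_msm) >= cutoff))
    n_decoy_pos = int(np.sum(np.asarray(decoy_msm) >= cutoff))
    return n_decoy_pos / n_target_pos if n_target_pos else 0.0

def median_fdr(target_msm, decoy_msm_samplings, cutoff):
    """Median FDR' over repeated decoy samplings (20 repetitions in the text)."""
    return float(np.median([estimate_fdr(target_msm, d, cutoff) for d in decoy_msm_samplings]))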
FDR-controlled molecular annotation
The term FDR-controlled molecular annotation means that the parameters of molecular annotation are optimized so that the set of provided annotations has a desired level of FDR. This is the most widely used approach in proteomics for choosing parameters of molecular identification [START_REF] Choi | Significance analysis of spectral count data in label-free shotgun proteomics[END_REF] . We employed this approach in Algorithm OM3 for selecting a key parameter of the molecular annotation, the MSM cutoff. This was performed similarly to (Zhang et al., 2012) [START_REF] Zhang | De Novo Sequencing Assisted Database Search for Sensitive and Accurate Peptide Identification[END_REF] by simultaneously sorting the MSM values for the target and decoy ions, decreasing the MSM cutoff, thus one-by-one increasing the number of target ions annotated, recalculating the FDR after every new ion is annotated, and selecting the maximal number of annotations that provide an FDR below the desired value (see Figure 1 in the main text). This process is repeated 20 times, with the decoy adducts every time randomly sampled from the set of all considered implausible adducts, and an observed cutoff value recorded. After the repetitions, the final MSM cutoff value is set at the median of the observed values. The final set of molecular annotations is the set of target ions with MSM scores above the median cutoff value.
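A sketch of this cutoff selection: target MSM scores are scanned in descending order, the FDR estimate is recomputed as the cutoff is lowered, the smallest cutoff whose estimated FDR stays below the desired level is kept per decoy sampling, and the median over samplings is used (illustrative, not the pySM code):

import numpy as np

def select_msm_cutoff(target_msm, decoy_msm, desired_fdr=0.1):
    """Smallest MSM cutoff whose estimated FDR is still below desired_fdr (one decoy sampling)."""
    target_sorted = np.sort(np.asarray(target_msm, dtype=float))[::-1]
    decoy_msm = np.asarray(decoy_msm, dtype=float)
    best = float("inf")                     # 'annotate nothing' if no cutoff qualifies
    for cutoff in target_sorted:            # lowering the cutoff annotates one more target ion
        n_t = int(np.sum(target_sorted >= cutoff))
        n_d = int(np.sum(decoy_msm >= cutoff))
        fdr = n_d / n_t if n_t else 0.0
        if fdr <= desired_fdr:
            best = float(cutoff)            # keep the lowest qualifying cutoff (most annotations)
    return best

def fdr_controlled_cutoff(target_msm, decoy_samplings, desired_fdr=0.1):
    """Median of the per-sampling cutoffs, as in Algorithm OM3."""
    return float(np.median([select_msm_cutoff(target_msm, d, desired_fdr)
                            for d in decoy_samplings]))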
LC-MS/MS validation of annotations
5.1 Samples
Mouse brain sample
One female adult wild-type C57 mouse aged 10 weeks was obtained from the European Molecular Biology Laboratory animal resource (EMBL-LAR, Heidelberg, Germany). The animal was reared under ad-lib conditions within the specific pathogen-free facility. Care and handling of the animal complied with EU directive 2010/63/EU on the protection of animals used for scientific purposes. The whole brain was excised immediately post-mortem and rapidly cryo-frozen in CO 2 -cooled isopentane. Tissue was stored at -80 °C until use.
Authentic lipid standards and chemicals
All lipid standards used for validation of annotations were purchased from Sigma Chemicals (Sigma-Aldrich Co., St. Louis, MO) and Avanti Polar Lipids (Alabaster, LA, USA). The LC-MS grade buffers and other reagents were purchased from Sigma Chemical. All mass spectrometry grade solvents and MiliQ grade water was used throughout the analysis.
Sample preparation
20 mg of brain tissue was extracted using the Bligh and Dyer extraction method [START_REF] Bligh | A rapid method of total lipid extraction and purification[END_REF] . The dried extract was reconstituted with 100 µL of methanol and isopropanol (1:1), and 10 µL of this sample solution was injected into the LC-MS system for each run. Lipid standards were prepared in the same solvent at a concentration of 100 ng/mL each.
LC-MS/MS methods
The separation of lipids was carried out on an Agilent 1260 liquid chromatography (LC) system with an Ascentis® Express C 18 column (100 x 2.1 mm; 2.7 µm particle size), and lipids were detected with high-resolution mass spectrometry (Q Exactive Plus MS, Thermo Scientific).
Three LC-MS/MS methods were used: Positive: ESI positive mode using 'buffer 1'. Negative 1: ESI negative mode using 'buffer 1'. Negative 2: ESI negative mode using 'buffer 2'. LC was run at a flow rate of 0.25 ml/min, with solvent A consisting of acetonitrile-water (6:4) and solvent B of isopropyl alcohol-acetonitrile (9:1), both buffered with either 10 mM ammonium formate and 0.1% formic acid (buffer 1) or 10 mM ammonium acetate (buffer 2). MS parameters (Tune, Thermo Scientific) were set as: spray voltage of 4 kV, sheath gas 30 and auxiliary gas 10 units, S-Lens 65 eV, capillary temperature 280 °C, and vaporisation temperature of the auxiliary gas 280 °C. Data was acquired in full scan mode in the mass range of 150-900 m/z (resolving power R=70000), and data-dependent tandem mass spectra (MS/MS) were obtained for all precursors from an inclusion list (resolving power R=35000). Tandem mass spectra (MS/MS) were acquired using higher-energy collisional dissociation (HCD) with normalized collision energies of 10, 20 and 30 units. The inclusion list was composed of all annotations provided from the imaging MS analysis and detected in all three serial sections (s1, s2, s3 at the Bregma 1.42) for either of the two animals. We considered adducts relevant for LC-MS (+H, +NH4, +Na for the Positive method; -H, -H+HCOOH for the Negative methods).
LC-MS/MS validation strategy
LC-MS/MS validation of lipid annotations was performed differently for annotations when lipid standards are available and for other annotations. When lipid standards were available, LC-MS/MS information in particular the LC retention time (RT), MS and MS/MS (MS2) was used to compare the data from a standard with the data from a sample (both acquired using exactly the same LC-MS method and precursor selection range). First, extracted ion chromatograms (XICs) were evaluated for all possible adducts to confirm the presence of the ion of the sum formula obtained from imaging data. As for the tolerance value for XICs: for data with standards we used the 5 ppm; for data with no standards we selected the best fitting tolerance value from 2, 3, and 5 ppm. We considered possible adducts for each metabolite (+H, +Na, +NH4 for the 'Positive' method; -H, +FA-H for the 'Negative' methods, FA stands for the formic acid) and selected the best matching adduct as follows. The precursor delta m/z was calculated for the sample both in MS1 and MS/MS data. The matching MS/MS spectrum was searched within the elution profile and manually interpreted for fragments corresponding to head-group and fatty acid side chains. Only precursor and fragments with accuracy <6 ppm were considered for structural interpretation to identify possible lipid species. The lipid class was confirmed by the presence of head-group fragment or its neutral loss (e.g. MS/MS fragment with m/z 184.0735 corresponds to the phosphocholine head-group). Since lipids from the classes of phosphatidylcholines (PC) and sphingomyelins (SM) have the same head-group (m/z 184.0735), given a sum formula, we searched in HMDB and SwissLipids to rule out a possibility of the sum formula to correspond to a lipid from another class other than annotated by our framework. Further to confirm the fatty acid side chains, the 'Negative' LC-MS methods were used (e.g. fatty acid fragments for phosphocholines were obtained after fragmentation of formate ion precursors using the 'Negative' LC-MS method). The collision energy was selected as best representing the precursor and the expected fragments. When standards were available, the RT, precursor m/z and MS/MS fragments corresponding to head-groups and fatty acid chains from the sample were matched with spectra from the corresponding standard. When standards were not available the fragments were manually interpreted. Finally, structural annotation of the matching peaks in the MS/MS spectra was performed with the help of the HighChem MassFrontier software (Thermo Scientific). The MS, MS/MS and RT (for standards) data is presented in Supplementary Note 3 and summarized in Table S5.
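To make the tolerance arithmetic concrete, a small helper for the ppm windows used when extracting ion chromatograms for candidate adducts; the adduct mass shifts are standard reference values (singly charged ions assumed) and the function is illustrative rather than part of the described workflow.

# Common adduct mass shifts in Da relative to the neutral molecule; verify against your calibration.
ADDUCT_SHIFTS = {"+H": 1.007276, "+Na": 22.989218, "+K": 38.963158,
                 "+NH4": 18.033823, "-H": -1.007276, "+FA-H": 44.998201}

def xic_window(neutral_mass, adduct, ppm=5.0):
    """Return the (low, high) m/z window for an extracted ion chromatogram."""
    mz = neutral_mass + ADDUCT_SHIFTS[adduct]
    tol = mz * ppm * 1e-6
    return mz - tol, mz + tol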
Figures
Figure 1. The proposed framework for metabolite annotation for HR imaging MS; […]
Figure 2. Evaluation of the proposed framework: a) intensities of the highest four peaks in the target and decoy isotopic patterns, […] (see Supplementary Note 1, Table S5 for a breakdown of the annotations), f) overlaps between adducts of the annotations, g) examples of molecular ion images for annotations validated using LC-MS/MS (cf. Supplementary Note 2, Figures S11 and S12; Supplementary Note 3), as well as FDR curves illustrating h) superiority of MSM as compared to individual measures for a2s3, +K (see Supplementary Note 1, Figure S8 for other datasets and adducts), and i) decrease of the number of annotations when simulating lower mass resolution/accuracy for a1s3, +K (cf. Supplementary Note 1, Figure S9).
Input: real-valued ion image, number of levels
Output: measure of spatial chaos
Algorithm:
// scale image intensity range to [0 1]
1. rescale the image intensities to the range [0, 1]
// main part
2. For each level n in 1, ..., number of levels:
// threshold image at a current level
3.    set the current threshold level
4.    compute the level set: a 0-or-1-valued image with 1 for pixels above the current level
// fill single-pixel holes
5.    apply the hole-filling operation to the level set
// count separate objects with 4-connectivity
6.    count the number of disconnected 1-valued objects in the level set
7. aggregate the object counts over all levels
8. return the measure of spatial chaos derived from the aggregated object counts (a value between zero and one, high for structured images and low for noisy images)
Algorithm OM1. The level-sets based algorithm for calculating the measure of spatial chaos of an ion image. The hole-filling operation 'fills in' isolated missing pixels that can happen in HR imaging MS (and avoids overestimating the number of objects); it consists of a sequence of morphological operations with structuring elements 24 . The object-counting operation uses the label function from scipy 25 with 4-connectivity and returns the number of disconnected objects in an image.
Input: metabolite sum formula, adduct, charge, resolving power of the spectra, imaging MS dataset, MSM threshold
Output: decision whether the ion is present in the dataset
Algorithm:
// Predict isotopic patterns
1. Predict the isotope envelope* for the ion (sum formula plus adduct) at the given resolving power
2. Detect centroids of the isotope envelope*, their exact m/z's and relative intensities
// Generate and score signals from the dataset
3. For each predicted isotopic peak i:
4.    Generate an ion image for the i'th isotopic peak at its m/z
5. Calculate the measure of spatial chaos, the spatial isotope measure and the spectral isotope measure according to Algorithm OM1, Equation OM1, and Equation OM2, respectively
6. Calculate the MSM score according to Equation OM3
// Annotate the data
7. If the MSM score is not smaller than the MSM threshold:
8.    the ion is annotated as being present in the dataset
Algorithm OM2. MSM-based molecular annotation determining whether a metabolite ion is present in an imaging MS dataset.
Input: metabolite database, resolving power of the mass spectrometer used, imaging MS dataset, ion charge, target adduct, decoy adducts, desired FDR level, number of decoy samplings
Output: A set of molecular annotations (ions from the metabolite database detected as present in the dataset)
Algorithm:
// Predict and score all metabolite signals
1. For each sum formula in the metabolite database:
2.    form the target ion (sum formula plus target adduct)
3.    Calculate the MSM score of the target ion according to Algorithm OM2
4.    form the decoy ion (sum formula plus decoy adduct), where the decoy adduct is randomly chosen from the list of decoy adducts
5.    Calculate the MSM score of the decoy ion according to Algorithm OM2.(1-3)
// Calculate the MSM cutoff corresponding to the desired FDR level
6. Form a combined vector of the target and decoy MSM values
// Find the maximal number of annotations providing FDR below the desired level
7. Sort the combined MSM values in descending order.
8. set the MSM cutoff to the largest value in the sorted vector
9. While the estimated FDR does not exceed the desired level:
10.    lower the MSM cutoff to the next value in the sorted vector, annotating one more ion
11.    update the numbers of target and decoy positives above the cutoff
12.    Calculate FDR according to Equation OM4
13. record the MSM cutoff observed for this decoy sampling
14. …
15. Repeat steps 1-14 according to the number of decoy samplings
16. set the final MSM cutoff to the median of the observed cutoff values
// Perform the MSM-based molecular annotation with the calculated cutoff
17. For each target ion:
    a. If its MSM score is not smaller than the final cutoff, then add the ion into the list of molecular annotations
Algorithm OM3. FDR-controlled molecular annotation that screens for metabolite ions present in an imaging MS dataset, with the desired FDR level.
Acknowledgements
We thank Olga Vitek (Northeastern University), Alexander Makarov (ThermoFisher Scientific) and Mikhail Savitski (EMBL) for discussions on FDR and Dmitry Feichtner-Kozlov (University of Bremen) for discussions on computational topology. We acknowledge funding from the European Union's Horizon2020 and FP7 programmes under the grant agreements No. 634402 (AP, RL, AT, VK, SN, CP, TA), 305259 (IC, RL, CP), and from the Russian Government Program of Competitive Growth of Kazan Federal University (SN). We thank EMBL Core Facilities for instrumentation for LC-MS/MS analysis. TA thanks Pieter Dorrestein (UCSD) and Peter Maass (University of Bremen) for providing a stimulating environment as well as for discussions on mass spectrometry and image analysis during the years of this work.
Data Availability Statement
The data is publicly available at the MetaboLights repository under the following accession numbers: MTBLS313: imaging mass spectrometry data from mouse and rat brain samples, MTBLS378: LC-MS/MS data from mouse brain samples, and MTBLS317: simulated imaging mass spectrometry data.
Code availability
The reference implementation of the developed framework is freely available at https://github.com/alexandrovteam/pySM as open source under the permissive license Apache 2.0.
Data availability
The LC-MS/MS data from mouse brain samples is publicly available at the MetaboLights repository under the accession numbers MTBLS378.
Author Contributions
AP and TA conceived the study, AP, IC, DF, AT, VK implemented the algorithms, RL, JF, CP, MB provided imaging data, AP and TA analyzed imaging data, PP collected LC-MS/MS data, PP and TA performed LC-MS/MS validation, AP and TA wrote manuscript, with feedback from all other coauthors, TA coordinated the project.
Competing Financial Interest Statements
Theodore Alexandrov is the scientific director and a shareholder of SCiLS GmbH, a company providing software for imaging mass spectrometry. During the work presented in the paper, Michael Becker was an employee of Bruker Daltonik GmbH, a company providing instrumentation and software for imaging mass spectrometry. | 39,015 | [
"181338",
"1006829"
] | [
"30303",
"30303",
"89565",
"182194",
"30303",
"30303",
"30303",
"249812",
"30303",
"182194",
"249812"
] |
01766144 | en | [
"shs"
] | 2024/03/05 22:32:13 | 2011 | https://hal.science/hal-01766144/file/1.%20TJSEAS-LH%28%C2%8C%21%29-20110916.pdf | Laurence Husson
email: [email protected]
[…] beyond analysing the 'push' factors that compel people to go abroad to earn a living, we must also not overlook the migration regimes created by political factors, particularly in the three Asian island nations discussed above.
Is a Unique Culture of Labour Migration Emerging in the Island Nations of Asia?
Keywords: female migrant workers, Indonesia, Philippines, Sri Lanka
In the space of three decades, three countries, the Philippines, Indonesia and Sri Lanka have become the main exporters of labor on a worldwide scale.
Are island nations, such as the two archipelagos of the Philippines, Indonesia and the island of Sri Lanka, predisposed to the current large exodus of female migrant workers? Another record shared by these three countries: the very high percentage of women making up these migrant workers. This paper will analyze the principal factors of geography, population and the international labour market that explain this massive exportation of female migrant workers together with the state policies that are actively encouraging female migration.
Beyond the determining geographical factor and the need to leave an overpopulated land to earn a living, we should indeed take into consideration the presence of a political will that contributed to the formation of a system of migration which is possibly particular to the island nations of Asia.
The emergence of a globalised labour market has encouraged the free movement of people. Instead of weakening the links between the place of origin and the place of living and working, migratory movements reinforce connections. Real networks are created that organize the way people relocate. These networks also contribute to maintaining the collective identity links beyond national borders. Considered from the identity point of view, labour-based migrations illustrate the delicate connection between local and global contexts and show how individuals practice their dual relationship between their country of origin and the country where they find employment.
Migrations due to work, voluntary or forced, supervised or spontaneous, have a long tradition in Asia. After the abolition of slavery in the late 1800s, European colonial powers introduced the labor contracts that led to the extensive Chinese and Indian diasporas.
The vast continent of Asia comprises 60% of the world's population and two-thirds of the world's workforce. This labour market is expected to remain a very mobile zone for a long time [START_REF] Hugo | The Demographic Underpinnings of Current and Future International Migration in Asia[END_REF]. The two archipelagos of Indonesia and the Philippines are at the crossroads of the trade routes between China and India. It appears that this geographic region, made of archipelagos, peninsulas and straits is an eastern equivalent of the "Mediterranean sea" that encourages mobility and flows in all kinds of exchanges.
In the space of three decades, the Philippines, Indonesia and Sri Lanka have become the world's leading exporters of labour. Another feature that these two archipelagos and one island nation share is the record high percentage of women making up these migrant workers. These two striking facts have been the catalyst for this paper, in which we will analyze the principal factors that explain this massive exportation of female migrant workers.
Hania [START_REF] Zlotnik | The Global Dimensions of Female Migration[END_REF] estimates that women represent 47% of migrants in Asia.
However, in Sri Lanka, the Philippines and Indonesia, the proportion is higher than 70%. Are the islands of Asia predisposed to such a large exodus? Why are so many women leaving for foreign countries?
Beyond the determining geographical factors and the need to leave overpopulated islands to earn a living, state policies have played a determining role in the gender pattern of migration and have contributed to the formation of a system of labour migration which is possibly unique to Asia. The Philippines and Indonesia form the primary focus of this paper. The consideration of Sri Lankan worker migration is used as a comparison.
A Significant Recent Development
The rise of Asian migrations has followed global trends in migration. In 1965 the world accounted for 75 million international migrants. Twenty years later it was 105 million, and in 2000 it was 175 million. From the 1980s, the growth rate of the world population declined to 1.7% per year while international migration rose considerably to 2.59% per year (IOM 2003).
It was not until 1973, at the time of the extreme petroleum price escalations, that the large scale immigration of workers to the Gulf States began, firstly from Southern Asia and then from South-East Asia. The oil-rich states of the Arabian Peninsula 2000,2004).
The Gulf War (1990-1991) as well as the Asian financial crisis of 1997 provoked a massive, albeit temporary, return of migrant workers. Since then migrant flows have resumed. Maruja Asis noticed that "unlike male migration, the demand for female migration is more constant and resilient during economic swings. The 1997 crisis in Asia was instructive in this regard. While the demand for migrant workers in the construction and manufacturing sectors declined, no such change was observed for domestic workers" [START_REF] Asis | When Men and Women Migrate: Comparing Gendered Migration in Asia[END_REF].
However, available statistical figures seem to be contradictory and are therefore difficult to analyze with confidence. For example, [START_REF] Stalker | Workers without Frontiers -The Impact of Globalization on International Migration[END_REF] stated that in 1997 there were up to 6.5 million Asian migrant workers in Japan, South Korea, Malaysia, Singapore, Thailand, Hong Kong and Taiwan. While, [START_REF] Huguet | International Migration and Development: Opportunities and Challenges for Poverty Reduction[END_REF] estimated that at the end of 2000 approximately 5.5 million foreign workers were living in a host East and Southeast Asian country. The misleading implication from comparing these two estimates is that the number of Asian migrants has decreased during those three years.
However, a more careful consideration of these estimates suggests that the collection and analysis of migration statistics in this part of the world are not yet sufficiently reliable due to movement complexities. What may be discerned is that the global circulation of information, capital, ideas and labor, and wider access to air travel, have increased mobility and overcome the problem of large geographical distances. During this time, the main destination for female Asian migrants shifted from the Middle East to the other Asian countries, whose booming economies needed additional migrant workers to fill labor shortages.
A Growing Feminization
Since the 1980s, the massive participation of women in the international labour market has been a phenomenon without precedent in the history of human migrations.
While most researchers agree that global restructuring increasingly forces a larger number of women in developing countries to participate in the international labour market, Nana [START_REF] Oishi | Women in Motion. Globalization, State Policies and Labor Migration in Asia[END_REF] demonstrated the need to investigate the differential impacts of globalization, state policies, individual autonomy and social factors.
In the past migration flows, women have been the wives, mothers, daughters or sisters of male migrants. In contrast, since the 1990s women, with or without work contracts, have become active participants in the international labour market, and not just to join or accompany a male migrant. Since this period, these female migrants have became fully integrated into the host country's job market. This phenomenon is referred as "the feminization of migration".
In addition to the feminisation of migration, the other significant change has been a new level of awareness on the part of migration scholars and policy-makers as to the significance of female migration, the role of gender in shaping migratory processes and, most importantly, the increasingly important role of women as remittance senders (Instraw 2007). The trend seems now to be irreversible. The inclusion of the gender perspective in the analysis of migration has illuminated the new geographic mobility of women. The development in recent years of feminists studies has allowed female migrations to be understood as a different social phenomenon to the mobility TJSEAS 118 of men. Applying a gender lens to migration patterns can help identify ways to enhance the positive aspects of migration and to reduce the negative ones.
In 1990, the United Nations estimated that the total number of migrants living outside their native countries at 57 million, that is to say, 48% on the global scale.
According to an estimate by the ILO (International Labour Organisation) in 1996, at least 1.5 million female Asian workers were employed outside their country of origin.
Each year, almost 800,000 Asian women leave their own country for an employment under contract in the UAE (United Arab Emirates), Singapore, Hong Kong, Taiwan, Korea or Malaysia, where they will reside for a minimum of two years [START_REF] Lim | International Labor Migration of Asian Women: Distinctive Characteristics and Policy Concerns[END_REF]. The migrations of female workers henceforth constitute the majority of the migrant work-flow under contract. Indeed, the Philippines and Indonesia export the largest number of migrant workers in Southeast Asia and are also the world's top exporters of female workers. The females of these two archipelagos are far more numerous than their male counterparts, as women represent 60% to 70% of workers sent abroad by these two countries. Sri Lanka created an office for foreign employment (SLBFE) with the express objective to promote and develop its export of workers, and especially female workers. The number of Sri Lankan women leaving under contract to work abroad, in particular to Saudi Arabia, the United Arab Emirates and Kuwait, grew from 9,000 in 1988 to 42,000 in 1994 and to 115,000 in 1996 (UNO 2003). Even though Sri Lanka started sending its domestic assistants to the Gulf States later than Bangladesh, Pakistan or India, it remains the only country to continue doing so. The proportion is one male migrant to three female migrants, of which more than 60% work as domestics almost exclusively in one of the six member states of the Gulf Cooperation Council. Besides the numerical importance of these flows and their visibility, another feature of these Asian female migrations is their disproportionate concentration in a very limited number of jobs.
It seems that female labor is mainly related to medical care, private domestic services and commercial sexual services. The categories of typical female employment returns to the stereotype roles of women as maids, waitresses, prostitutes and nurses [START_REF] Chin | Service and Servitude: Foreign Female Domestic Workers and the Mlaysian Modernity Project[END_REF][START_REF] Parrenas | Servants of Globalization: Women, Migration and Domestic Work[END_REF]). In the eyes of employers, Asian women are traditionally perceived to be discrete, subservient, docile, gentle, ready to please and serve, particularly suited to these subordinate employments as carers, nurses, domestic servants and as sex workers. The female migrants are forced into a limited number of trades due to a clear segregation of the sexes in the international labor market. They are concentrated in the service sector, domestic house-work, and a large number of entertainment trades that are a thinly disguised form of prostitution.
We will now compare the female migrations of Indonesia and the Philippines and consider the initiatives introduced in the Phillipines and Sri Lanka to protect vulnerable female migrant workers who emigrate to the Arabian Peninsula and Eastern Asia.
With 7.8 million migrant workers, the Philippines is an example of how to improve and defend the rights of migrant workers. Indonesia, has the largest total number of migrant workers and the majority of these are women who are recruited as maids. Indonesia has tried to protect them by providing training and by fighting against the non-official recruitment agencies. Repeated press articles on the vulnerability of foreign workers, where migrant workers and locals are rarely treated on an equal footing means that the states concerned can no longer remain insensitive to this problem.
We will use the following abbreviations: TKW (tenaga kerja wanita) in Indonesia to designate the female migrants workers and OFW (Overseas Filipino Workers) for the Philippines.
Over representation of female Asian migrants in the international labor market
The reasons for the strong participation of Asian women in the international labor flow are numerous and different in order: psychological, religious, economic, and political. In general, the offer of employment for female workers was as a result of supply and demand. These women were able to leave their families at short notice and for a set time in order to earn income at a time when the demand for male workers had diminished in the Gulf States. They profited from a demand for female workers.
This was even easier as the jobs for women generally did not need a diploma or qualifications and appeared very attractive due to the big differences in wages and salaries between the countries of departure and the countries of arrival.
In South East Asia, women have a certain level of freedom of movement and autonomy in decision making, and are used to working away from home. In several ethnic groups in Indonesia, women have traditionally had a significant role in the generation of household income, through productive work both within and outside the household (Williams 1990: 50). In 2005 the ILO (2006: 13) stated that up to 53% of women in Indonesia participate in the work force compared to 87.1% of men.
In the Philippines, by contrast, the proportions were 56.7% for women and 70.7% for men. In families that need supplementary income, the women know how to cope with bringing money home, which obviously favors emigration. However, many Islamic countries, including Pakistan and Bangladesh, have forbidden sending female workers overseas, as it was considered that young women travelling outside their homes without a male escort was against the teachings of the Koran. India has also put a brake on the export of female workers following too many denunciations by the national press.
To explain this massive participation of women in labour migration, it is tempting to invoke traditional Asian values and the notion of family responsibility.
Family responsibility is a foundational concept and is not a question of filial devotion.
The sending of a woman abroad to earn money is related to the idea that she will remain committed to her family and that she will willingly sacrifice her own wellbeing and send all her savings home. A man in the same position may be more inclined to gamble, drink and spend his savings. Overall, men remit more than women because they earn more, though women tend to remit a larger proportion of their earnings. A study carried out in Thailand confirms that despite their salaries being lower than men's, female migrants who work abroad to help their families managed to save more and sent most of their savings to their families [START_REF] Osaki | Economic Interactions of Migrants and their Households of Origin: Are Women More Reliable Supporters?[END_REF]. Women, especially if they are mothers, do not leave home easily, and conversely families certainly preferred to see the men go abroad rather than the women. In general, an extended family always found it more difficult to replace a mother than a father. She would often carry out a great number of tasks and is hard to replace, especially when it concerns the needs of very young children and elderly parents. But there is little choice when the only employment offered is for females. The woman, whether she is a mother, daughter or sister, accepts temporary emigration and the family left behind will have to deal with her staying abroad.
Another possible factor, that needs to be verified with specific studies, is the incidence of bad conjugal relationships preceding female emigration. The researchers Gunatillake and Perera (Gunatillake and Perera 1995: 131) showed that in Sri Lanka, female worker migration is often a form of disguised divorce, in a society where divorce is still negatively perceived. The divorce is then not officially announced, but the emigration of the wife marks her economic independence and their physical separation as a couple. In Indonesia, while married women constitute the largest proportion of migrants, divorced women, single mothers and widows are over represented. It is clear that the departure is a solution to survive, a way of forgetting or escaping a situation of failure.
Women can also be "trapped" into migration. Low wages, financial difficulties, irresponsible spending, spousal infidelity, estrangement from children, and many others personal factors at home, may compel women to stay in, or return to the same country, or find another labor contract elswhere.
It is necessary to add that working abroad has become more commonplace and less harrowing as the costs of transport and of overseas communications via phone, Internet, MSN or Skype have been considerably reduced. With quicker and cheaper exchanges, effective distances have been shortened and the social-emotional separation from family has become more acceptable to the women. For example, in certain Javanese villages, the migration of women is so frequent, so usual, that it has become normal and, as one might say, constitutes a standard.
An additional religious factor is often proposed by Indonesians wanting to work in the Arabian Peninsula, especially in Saudi Arabia as it allows them to make the pilgrimage to Mecca at a reduced cost. It has been attested by researchers that Sri Lankan Muslim female workers were able to gain respectability and self-esteem by working in the Arabian peninsula, acquiring material assets, adopting Arab customs in fashion, cooking, interior decoration, and obtaining religious education for them and their relatives (Thangarajah 2003: 144). As pertinent as they are, these reasons alone cannot explain these growing flows.
It is also necessary to take into account the actions of governments in order to augment foreign-exchange revenues and private organizations that promote the export of female labour, and who apply pressure on family and social networks. Authorised Asian female migration would never have known such growth without an initial worker migration industry and the setting up of labour market channels and networks.
The Multiple Channels And Networks
Migrant workers have become commercial objects that constitute a valuable resource. In Asia, the recruitment of foreign workers has become a lucrative activity.
The agencies, whether public or private, legal or illegal, cover the financial cost of migration in order to maximise the profits on each departing candidate. In the Philippines, there are approximately 2,876 foreign employment agencies, amongst which 1,400 are regarded as reliable (POEA 2004). Because of the competition, these agencies tend to specialize in particular destinations or types of employment.
In Indonesia, 412 employment/placement agencies were listed in 2000 (Cohen 2000). Sri Lanka had 524 licensed recruitment agencies by the end of 2002 (plus many more illegal operators), which placed 204,000 workers abroad in that year [START_REF] Abella | Social Issues in the Management of Labour Migration in Asia and the Pacific[END_REF]. These numbers clearly show the commercial and profitable character of worker migration. A large number of temporary work migrations are thus orchestrated by these paid recruitment agencies. The ILO (International Labour Organization, 1997) points out "their intervention in 80% of all movements of the Asian work-force to the Arab States, one of the biggest migrant flows in the world". In the same press release, the ILO added: "In Indonesia and in the Philippines, the private agencies dominate the organization of migrant workers, placing 60% to 80% of migrants". [START_REF] Kassim | Labour Market Developments and Migration Movements and Policy in Malaysia[END_REF] has pointed out that both the heavy burden of the formal bureaucratic procedure and the high financial costs involved may induce Indonesian migrant workers to look for irregular recruitment channels in order to get a job in Malaysia.
The Journey Of The Migrant Through Legal Or Illegal Channels
For example, an Indonesian who wants to work in Malaysia has three possible choices. The first is to contact a legal placement agency (PJTKI, Perusahaan Jasa Tenega Kerja Indonesia, Office for Indonesian workers) situated in town and endure a long and costly administrative procedure, made worse by civil servants demanding bribes.
The second is to go to a local intermediary/recruiter (known locally as calo, boss, taikong, mandor or patron/sponsor), often a notably rich and respected man who has performed the hadj (pilgrimage to Mecca) and who serves as intermediary between the candidate and an official agency based in Jakarta. His services are equally costly, but they have the advantage of reassuring the candidate, as the calo is a well-known person.
In the ports of embarkation for foreign countries, situated mostly in Sumatra (for example Medan, Tanjung Pinang, Dumai, Batam, Tanjung Balai and Pekanbaru), migrant candidates may turn to a third kind of intermediary: unscrupulous ferrymen who try to transport them to Malaysia on the first available boat. In the Indonesian archipelago the official legal procedure is generally badly perceived and does not prove to be more secure, less expensive or any more effective than the private recruitment agencies. Corruption exists at every stage of the migratory cycle.
Despite these drawbacks, these intermediaries do enable low-qualified and poorly informed women to go through the procedures required to find work abroad. The useful services they provide include connecting employee and employer, giving training, finding board and lodging, supervising work contracts, organizing the trip, lending money, and organizing the migrant's return journey.
The downside is that these agencies tend to claim excessively high and unjustified fees, commissions and gratuities that force applicants into debt.
This debt can create a relationship of dependence and abuse between the female migrant worker and her recruitment agency. Such abuses are more pronounced where the female migrant worker is caught up in illegal or mafia networks.
Outside these recruitment channels, migrant workers forge their own networks through which information circulates, sometimes allowing supply and demand to meet. The combination of formal agencies and informal networks ends up creating chain migration.
Indonesian workers prove to be more exposed than Filipino workers to extortion and abuse before departure. This system, along with the tariffs applied by the agents and their high interest rates, means that Indonesian women are often indebted for several months of their pay. In 2003, the Indonesian Ministry of Work and Emigration recognized that 80% of the problems, such as falsification of documents and the various extortions suffered by migrants, take place before departure [START_REF] Dursin | Would be migrants chafe against ban on unskilled labor[END_REF].
The periods immediately before departure and immediately upon return are the critical moments when Indonesian and Sri Lankan female migrants are most at risk of being robbed.
Every day, almost 800 migrants pass through Terminal 3 of Jakarta's airport, while in Sri Lanka around 300 women a day return to their home country. These female migrants, who return loaded with packages, presents and money, are targeted by civil servants, servicemen, policemen, porters and bus-drivers who seek to take their money.
The Philippines And Sri-Lanka Provide Two Female Labour-Export Models For Indonesia?
Massive labor-exporting countries like the Philippines, Indonesia and Sri Lanka are confronted with the dilemma between promoting female labour emigration and protecting their national workers abroad. To benefit from the Filipino example, Indonesia should:
-increase the level of general education of its population;
-"clean up" its recruiter system; -train the migrant candidates better, particularly in language;
-diversify the countries of destination.
Following the example of the NGOs that advocate for Filipino women migrant workers, Indonesian NGOs could similarly press the Indonesian government to help the migrant labor force through better national and international coordination, information networks providing accurate information on all aspects of migration, and stricter regulation of the recruitment industry to help prevent abuses and malpractice.
Towards An Industry And A Culture Of Migration In Asia
As we have seen, the global trend towards the feminization of migratory flows and the demand for female migrant workers is likely to increase. This phenomenon is particularly accentuated in Asia, where the proportion of women in the total number of migrant workers approaches 70%. Even though we cannot speak of a migratory system peculiar to the island nations of Asia, the migratory flows of these three countries present common and distinctive characteristics.
Within three decades, Indonesia, the Philippines and Sri Lanka have come to account for a majority of the world's emigrant workers.
These nations have put in place policies favoring the emigration of workers in order to reduce poverty and unemployment and to increase foreign-currency remittances from migrant workers.
These two archipelagos and the island nation of Sri Lanka share high unemployment rates and chronic under-employment. The respective unemployment rates in Indonesia and the Philippines remain high, at 9.9% and 10.1%, and under-employment is considerable. While there remains a large inequality in wages between men and women, women will continue to make up the majority of poor workers.
These countries, via their recruitment agencies, filled an opportune labour-market niche left vacant by others. The agencies could satisfy the growing demand for domestic personnel, nannies and home nurses. These positions had the advantage, for South East Asian migrant women, of not requiring any particular qualifications.
Encouraged by the state, a culture of migration has emerged in Indonesia, the Philippines and Sri Lanka, together with a solid "migration industry" built on a network of agents and intermediaries. Advisors, recruiters, travel agents and trainers work together at all stages of the migration process to supply as much labor as possible to the greatest number of foreign employers. The Philippines has a great deal of experience in this field.
Another common characteristic of these three labor exporters lies in the creation of formal and informal migrant networks and channels through which important information is disseminated to future migrant workers. Maruja M. B. Asis (2005: 36) suggests that the numerical growth of Philippine domestic personnel in various countries partly reflects the "multiplying effect" of the informal migrant information network, with relatives and friends going to the same destination and working in the same niche markets.
This suggests the possibility of direct job placement without having to go through the employment agencies. Moreover, the conditions for migrant workers
Abella (2005) has emphasized the role played by private fee-charging job brokers in organizing labour migration in those countries. According to him, "recruitment and placement have been left largely in the hands of commercially-motivated recruitment agencies because few labour-importing states in the region have shown any interest in organizing labour migration on the strength of bilateral labour agreements. As a consequence, over the years, the organization of migration has emerged as a big business in both countries of origin and of employment". Nevertheless, the governments of labor-exporting countries play a major role. Sending migrant workers abroad is a solution to national unemployment, a way to avoid social unrest, and a means to gain foreign-exchange reserves. As early as 1974, the Philippine government recognized the importance of labour migration to the national economy by establishing the Philippine Overseas Employment Administration in order to promote labour export. Sri Lankan and Indonesian authorities have followed this example. Female labor migration is a demand-driven, rather than a supply-driven, phenomenon. To respond to demand patterns in the host countries, labor-exporting countries have to promote both male and female overseas contract workers and to face increased competition for a good position in the labor-export market. To achieve this goal, they have created, within their Ministries of Labor, offices or agencies for foreign employment whose aim is to promote, control and organize the recruitment and export of workers: AKAN in Indonesia, the Philippine Overseas Employment Administration (POEA) in the Philippines, and the Sri Lanka Bureau for Foreign Employment (SLBFE) in Sri Lanka. The AKAN Indonesian Office of Foreign Employment was created in 1984 under the supervision of the Ministry of Labor; a year later, in 1985, the SLBFE adopted the same objectives. The government objectives were to reduce national unemployment and to increase the savings of migrants. Transfers of funds from migrant workers made up 8.2% of GNP in the Philippines, or more than $7 billion, 6.3% in Sri Lanka [START_REF] Cesap | Fifth Asian and Pacific Population Conference: Report and Plan of Action on Population and Poverty[END_REF], and 4.7% in Indonesia. The three million Indonesians who work abroad bring in approximately $1 billion (ILO 2006b).
Labor-exporting countries are confronted with the dilemma between promoting female labour emigration and protecting their national workers abroad. The Philippines has substantial experience in labour export. The government, anxious to protect its "new heroes" (Bagong Bayani in Tagalog) who allow the country to prosper, created two distinct institutions: the POEA, whose mission is to promote the export of the work-force, and the Overseas Workers Welfare Administration (OWWA), established to defend and protect the rights of migrants. In the same spirit, the archipelago adopted an official charter, the Migrant Workers and Overseas Filipinos Act, voted in June 1995, so that migrants are aware of their rights and of the duties they must respect. The government enacted this charter to slow down the export of the least qualified workers, who were the most vulnerable. Filipino NGOs, civil society and the Catholic Church have a long history of activism, campaigns and debates to improve the lives, conditions and rights of migrant workers. Civil society is better organized in the Philippines: more than a hundred NGOs are listed as very active in the fight to protect migrant workers. In comparison, Indonesia counts only about 15 NGOs, and Sri Lanka a similar number. Cassettes, training modules, self-defense courses, handbooks and information booklets produced by Filipino NGOs inform migrants of the dangers of looking for employment abroad. Sri Lankan NGOs are rapidly following their example with pre-departure orientation and training programs. One of the major problems is the lack of clear, precise and reliable information explaining each stage of the migration process. This information is often not provided to Indonesian workers, resulting in frequent misdirection, error and fraud, with damaging consequences. In general, female Filipino migrants, with better education and training and a good command of English, have fewer communication problems than Indonesian migrants.
could progressively improve. The International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families provides a normative framework for transnational labor migration. It has been ratified by 34 countries, including the Philippines and Sri Lanka, and came into effect in 2003. These three countries are pulled between the desire to increase the export of their work-force and their duty to protect it. By sending so many female workers abroad, in conditions that may in some ways put them in danger, they have highlighted the global question of human rights within the general framework of migration and labor legislation. This positive outcome puts into perspective the criticism sometimes addressed to them: that by exporting their own work-force in such great numbers, they fail to create local jobs and thereby sidestep the internal problem of unemployment. Perhaps the years to come will show that returning migrants can help create local employment thanks to their remittances, their ideas and their new skills.
Table 1 Estimated numbers of Asian (origin) workers in the Middle East
Nationality    Year   Number
Filipinos      2003   1,471,849
Indonesians    2000   425,000 *
Sri Lankans    2003   900,000
* for Saudi Arabia only
Source: Hugo (2005: 10).
The Gulf states sought Asian workers for construction and general laboring because they were thought to be more docile and cheaper.
Having risen from 1 to 5 million between 1975 and 1990, the number of migrant workers reached almost 10 million in 2000 in Saudi Arabia, the United Arab Emirates (UAE) and Kuwait. Table 1 above illustrates the attraction that the Gulf States continue to exert on Asian labourers.
After 1980, South-East Asian labor migration became more diversified. The Gulf States continued to absorb large numbers of South-East Asian laborers, and notably women (Filipinas, Sri Lankans and Indonesians), to meet the increased demand for house-workers (maids and domestics) and the growth in service jobs.
By the mid-1980s, intra-Asian mobility developed rapidly. Japan, Taiwan, South Korea, Malaysia, Singapore, Hong Kong and Brunei became preferred destinations for migrant workers. During this period, migrant flows became more complex, as numerous countries became simultaneously importers and exporters of labor (cf. Table 2).
Table 2 Estimated number of annual departures
Country of origin   Year   Annual departures   Estimated number of illegal migrants   Destinations
Indonesia           2002   480,393             + 50,000                               Malaysia, UAE
Philippines         2002   265,000             + 25,000                               Asia, OECD, UAE
Sri Lanka           2003   192,000             + 16,000                               UAE, Singapore
Source: ILO (2006).
Table 3 Proportion of female labor migrants
Origin        Year   Number of migrants under contract   Percentage of women
Philippines   2003   651,938                             72.5%
Indonesia     2003   293,674                             72.8%
Sri Lanka     2003   203,710                             65.3%
Source: Hugo (2005: 18).
Table 4 Female migration in Asia: proportion of females (percent of total number of migrants)
Regions             1960    1970    1980    1990    2000
South Asia          46.3%   46.9%   45.9%   44.4%   44.4%
East and SE Asia    46.1%   47.6%   47%     48.5%   50.1%
West part of Asia   45.2%   46.6%   47.2%   47.9%   48.3%
Source: Jolly and Narayanaswany (2003: 7).
Keiko Yamanaka and Nicola Piper (2005: 9) have calculated the number of women as a percentage of total migrant workers in the Asian labour-importing countries for the early 2000s (Singapore: 43.8%; Malaysia: 20.5%; Thailand: 43%; Hong Kong SAR: 95%; Taiwan: 56%; Korea: 35.1%).
One must emphasize that these statistics cover only official labor migrants who are legally permitted to work abroad. They do not take into account people who leave their own country to study, travel or get married and who subsequently work in the destination country, nor illegal entrants or people who work without a work permit. It is probable, therefore, that the number of migrant workers would be much higher if those who migrate clandestinely were included. It is estimated that at least a third of all labor migration in Asia is unauthorized.
| 36,765 | [
"10026"
] | [
"191048"
] |
01764226 | en | [
"phys"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01764226/file/2018-058.pdf | Carole Lecoutre-Chabot
Samuel Marre
Yves Garrabos
Daniel Beysens
Inseob Hahn
C Lecoutre
email: [email protected]
Near-critical density filling of the SF6 fluid cell for the ALI-R-DECLIC experiment in weightlessness
Keywords: slightly off-critical sulfur-hexafluoride, liquid-gas density diameter, liquid-gas coexisting densities
Introduction
Thermodynamic and transport properties show singularities asymptotically close to the critical points of many different systems. The current theoretical paradigm on critical phenomena using renormalization group (RG) approach [START_REF] Wilson | The renormalization group: Critical phenomena and the Kondo problem[END_REF] has ordered these systems in well-defined universality classes [START_REF] Zinn-Justin | Quantum Field Theory and Critical Phenomena[END_REF] and has characterized the asymptotic singularities in terms of power laws of only two relevant scaling fields [START_REF] Fisher | Correlation Functions and the Critical Region of Simple Fluids[END_REF] in order to be conform to the scaling hypothesis. Simple fluids are then assumed similar [START_REF] Garrabos | Crossover equation of state models applied to the critical behavior of xenon[END_REF] to the O(1) symmetric (Φ 2 ) 2 field theory and the N=1vector model of three-dimensional (3D) Ising-like systems ( [START_REF] Zinn-Justin | Quantum Field Theory and Critical Phenomena[END_REF], [START_REF] Barmatz | Critical phenomena in microgravity: Past, present, and future[END_REF]). Their study in weightlessness condition is well-recommended to test the two-scalefactor universality approaching their critical point. However, for the case of the gas-liquid critical point of simple fluids, some additional difficulties can occur as the order parameter -the fluctuating local densityshows a noticeable asymmetry, as for instance the well-known rectilinear diameter form of the liquid-gas coexisting density curve first evidenced by Cailletet and Mathias [START_REF] Cailletet | Recherches sur les densités des gaz liquéfiés et de leurs vapeurs saturées[END_REF]. This linear asymmetry was largely confirmed in the subsequent literature (see for instance Ref. [START_REF] Singh | Rectilinear diameters and extended corresponding states theory[END_REF]). Such asymmetrical effects cannot be accounted for from the symmetrical uniaxial 3D Ising model and its induced standard fluid-like version, i.e., the symmetrical lattice-gas model.
An alternative theoretical way to introduce the fluid asymmetry nature in the scaling approach consists in extending the number of the physical fields contributing explicitly to the relevant scaling fields, the so-called complete scaling phenomenological hypothesis ( [START_REF] Fisher | The Yang-Yang anomaly in fluid criticality: experiment and scaling theory[END_REF]- [START_REF] Wang | Nature of vapor-liquid asymmetry in fluid criticality[END_REF]). For example, in a recent work [START_REF] Cerdeirina | Soluble model fluids with complete scaling and Yang-Yang features[END_REF], Yang-Yang and singular diameter critical anomalies arise in exactly soluble compressible cell gas models where complete scaling includes pressure mixing. The predictions of complete scaling have been tested against experiments and simulations at finite distance from the critical point, increasing the complexity in the fundamental quest of a true asymptotic fluid behavior. The latter remains a conundrum to the scientists who have for objective to check it by performing an experiment closer and closer to the critical point with the required precision. De facto, the asymmetrical contributions, the analytical backgrounds, and the classical-to-critical crossover behavior due to the mean-field-like critical point, further hindered the test of the asymptotic Ising-like fluid behavior. Such difficulties are intrinsically ineludible, even along the true critical paths where the crossover contribution due to one additional non-relevant field [START_REF] Wegner | Corrections to scaling laws[END_REF] can be accounted for correctly in the field theory framework ([15]- [START_REF] Garrabos | Master crossover functions for one-component fluids[END_REF]).
Moreover, the experiments are never exactly on these critical paths, adding paradoxically a new opportunity to investigate the theoretical expectations related to the non-symmetrical behaviors. Indeed, even though the temperature can be made very close to 𝑇𝑇 𝑐𝑐 , the mean density of the fluid cell is never at its exact critical density value [START_REF] Lecoutre | Weightless experiments to probe universality of fluid critical behavior[END_REF]. The error-bar related to this latter critical parameter was never contributing to the discussion of the Earth's based results in terms of true experimental distance to the critical point. Nevertheless, from the above experimental facts and the theoretical expectations, it appears that the related non-symmetrical effects can be unambiguously viewed in a slightly off-critical (liquid-like) cell. Indeed, in such a closed liquid-like cell, it is expected that the meniscus position crosses the median volumetric plane at a single finite temperature distance below the coexistence temperature. From the symmetrical lattice-gas model, we recall that the meniscus of any liquid-like cell is expected to be visible always above this median volumetric plane in the two-phase temperature range.
Therefore, the academic interest of using the gravity field to horizontally stabilize the position of the liquid-gas meniscus in eight different cell positions is precisely investigated during the pre-flight determination of the off-critical mean density of a fluid cell, before its use in a weightless environment. More specifically, we would like to check whether SF6 remains similar, or not, to the 1974 standard SF6 fluid ([18]- [START_REF] Ley-Koo | Revised and extended scaling for coexisting densities of SF6[END_REF]) which supports the fluid asymmetry resulting from the complete scaling hypothesis ([10]- [START_REF] Wang | Nature of vapor-liquid asymmetry in fluid criticality[END_REF]). Our experimental challenge is then to detect the previously observed significant hook (of 0.5% amplitude) in the rectilinear density diameter when the relative uncertainty in the filling density value is controlled with 0.1% precision, along a non-critical path which exceeds the exact critical isochore by ∼0.2%. This experimental challenge is illustrated in Fig. 1. The high optical and thermal performance of the ALI-R insert used in the DECLIC facility Engineering Model allows the observation of the meniscus position behavior, governed by the density diameter behavior, as the temperature of highly symmetrical test cells is changed. Each of these test cells consists of a quasi-perfect disk-shaped cylindrical fluid volume observed in light transmission, surrounded by two opposite, small, and similar dead volumes. The latter volumes define the single remaining transverse (non-cylindrical) axis of the fluid volume due to the cell in-line filling setup. Here, the selected test cell ALIR5 [START_REF]ALIR5 test cell was selected among a series of 10 identical ALIR n cell (with n=1 to 10). The series have provided statistical evaluation of the fluid volume and fluid mass uncertainties (0.05% and 0.1%, respectively)[END_REF] was filled at a liquid-like mean density ⟨δρ̃⟩ = (ρ/ρ_c) − 1 very close to the critical density ρ_c of SF6. The relative off-critical density ⟨δρ̃⟩ = +0.20 ± 0.04 % of ALIR5 was measured with great accuracy during our Earth-based filling and checking processes (see § 6 below and Ref. [START_REF] Morteau | Proceedings of 2nd European Symposium Fluids in Space[END_REF]). The fluid under study is SF6 of electronic quality, corresponding to 99.995% purity (from Alpha Gaz - Air Liquide). The meniscus behavior could be analyzed in eight cell configurations. Such analyses provide an accurate experimental evaluation of the relative effects of (i) the complete cell design, (ii) the cell displacement in front of the CCD camera, (iii) the meniscus optical observations through gravitational stratification and liquid wettability, (iv) the cell filling mean density, and finally (v) the behavior of the coexisting density diameter. Only evaluations (iv) and (v) are treated hereafter.
Experimental set-up and methods
Highly-symmetrical cell design
The essential characteristic of the ALIR5 cell (see Fig. 2(a)) is its highly symmetrical design with respect to any median plane of the observed cylindrical fluid volume. The main part of the fluid sample consists of a fluid layer of thickness e_f = (2.510 ± 0.002) mm and diameter d_f = 2R = (10.606 ± 0.005) mm. This fluid layer is confined between two flat, parallel, transparent sapphire windows of thickness e_w = 8.995 mm and external diameter d_w = 12 mm. An engraved circle of 10 mm diameter and 30 μm thickness is deposited on each external sapphire surface, completing a pancake cell design [START_REF] Zappoli | Heat Transfers and Related Effects in Supercritical Fluids[END_REF]. The experiment is performed in four cell directions θ = {-23.2°; 0°; +22.9°; +90°} of the single fill-line axis, with respect to two reverse orientations of the Earth's gravity vector. This permits an analysis of the potential systematic errors associated with the cell dead volume. Throughout the paper, each cell configuration is labeled ⟦i, X⟧, where the digit i denotes the two reverse gravity orientations (g↓ for i = 1 and g↑ for i = 2) and the letter X describes the four directions of the fill-line axis of the fluid cell (X = H for θ = 0°, X = V for θ = +90°, X = T for θ = +22.9°, and X = Z for θ = -23.2°). The corresponding cross-sectional shape of the cell is schematically pictured in Fig. 2(b), illustrating the relative positions of the meniscus and the dead fluid volumes with respect to the Earth's gravity vector.
Phase transition temperature.
The laser light transmission measurements of the EM-DECLIC facility [24] and the wide field-of-view observation of the fluid sample are combined to observe the phase separation process during the cell cooling down. Each temperature quench step crossing the transition temperature (noted T_coex) is -1 mK. The exact value of T_coex is not essential for the following discussion. The temperature results for each experimental configuration are therefore reported with reference to the lowest temperature (noted T_1φ) of the monophasic range. The resulting true SF6 coexistence temperature T_coex is such that 0 < T_1φ − T_coex < 1 mK, noting in addition the high reproducibility (2 mK range) of T_1φ (here from 318.721 K to 318.723 K) over the eight experimental runs. Moreover, the shift T_c − T_coex ≃ 0.4 µK [START_REF]For ALIR5, ⟨δρ̃⟩_Tc = 0.20% and[END_REF] due to the off-critical density of the test cell is neglected, i.e., T_c ~ T_coex. Finally, wide field-of-view images of the meniscus position are recorded once thermal equilibration is achieved at each temperature difference (T_1φ − T) ~ (T_c − T). T_c − T then follows a logarithmic-like scale to cover the experimental temperature range 0 < T_c − T ≤ 15000 m°C (here with T_1φ > T_c = 45573 ± 1 m°C).
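As an illustration of such a quench schedule, the short sketch below (a hypothetical helper, not the flight-software routine) generates logarithmically spaced temperature distances T_c − T over the quoted range and converts them to absolute setpoints in m°C.

```python
import numpy as np

# Hypothetical reconstruction of a logarithmic-like quench schedule.
T_c = 45573                  # critical temperature in m degC (45.573 degC)
dT_min, dT_max = 5, 15000    # smallest/largest distance T_c - T in m degC
n_steps = 20

# Log-spaced temperature distances, rounded to the 1 m degC resolution
dT = np.unique(np.round(np.logspace(np.log10(dT_min), np.log10(dT_max), n_steps)))
setpoints = T_c - dT         # absolute setpoints in m degC

for d, s in zip(dT, setpoints):
    print(f"T_c - T = {d:7.0f} m degC  ->  setpoint T = {s:.0f} m degC")
```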
Cell imaging and Image processing
Cell imaging of the meniscus position.
The liquid-gas meniscus is observed from optical transmission imaging through the cell, using LED illumination and cell view observation with a CCD camera (1024×1024 pixels). The pixel size corresponds to 12 μm in the wide field-of-view imaging, well controlled from the two engraved circles on the external surface of each sapphire window. Additional small field-of-view (microscopy) imaging with 1.0 μm -pixel resolution of a typical object area 1×1 mm 2 in the central part of the fluid sample are also performed but not reported here. The cell images are also made by tuning the focal plane between the two window internal surfaces to control small optical perturbative effects related to any nonlinear light bending situations, e.g., due to a nonparallelism between tilted windows, wetting layerlensing effects, compressibility effects, or displacement of the optical axis of imaging lenses versus the exact center axis of the cylindrical fluid cell volume.
Image processing of the cell position.
Before the determination of the meniscus position, the image processing needs the exact pixel coordinates of the viewed fluid cell volume to be determined inside the images recorded for each ⟦𝑖𝑖, 𝑋𝑋⟧ configuration. The picture given in Fig. 3 for the ⟦1, 𝑉𝑉⟧ configuration at 𝑇𝑇 = 45473 m°C is chosen to briefly summarize the method that uses the line profile analysis provided by the NI Vision Assistant 2012 software. Each pixel point is characterized by its x (horizontal)-y (vertical) raw coordinates where the axis origin takes place on the top-left corner of the picture. Therefore, the line profiles provide the x-y coordinates of the selected borderline points between the fluid and body cell (see A, B, C, T, & R points in Fig. 3). The resulting position of the cell borderline (here a quasi-circle of ∼10.380-10.464 mm diameter, i.e., ∼865-872 pixels) can be controlled by the comparison with the position of the two engraved circles (10±0.01 mm or 833.3/833.4 pixels of diameter) on the external surface of the input and output windows. As an essential result, the (horizontal and vertical) pixel coordinates of the apparent center point O are the intrinsic characteristics parameters of the fluid volume position whatever each cell picture. The resulting estimation of the maximum error on the absolute position of any characteristic point of each profile line of each picture is ± 0.5 pixels. The last step optimizes the matching of the selected characteristic points for two reversed similar configuration (⟦1, 𝑉𝑉⟧ and ⟦2, 𝑉𝑉⟧ for the chosen case). Indeed, the changes of the facility positions under the Earth's gravitational acceleration field induce small mechanical relative displacements due to the intrinsic clearance between the different (optical and mechanical) components. The present concern involves the cell (housed in the insert) in front of the video camera (located in the optical box of the facility). Therefore, this cell image matching step leads to the determination of the (horizontal and vertical) pixel shifts (∼2 to 6 pixels, typically) between two reversed images of the viewed fluid cell volume.
Image processing of the meniscus position.
For each ⟦i, X⟧ case, the line profile analyses are then applied to the horizontal (or vertical) lines that are closest to the related O point (see for example the line DE in Fig. 3). The details of these analyses are not reported here; only their main results are illustrated in Fig. 4 (for the ⟦1, V⟧ and ⟦2, V⟧ configurations). The line profiles along DE give access to the position and shape of the meniscus at each temperature. Taking then as reference the x-y position of a characteristic point of the viewed cell volume (such as point B in the selected case of Fig. 3), the bare pixel distance of the meniscus position can be estimated. The temperature dependences of these bare distances are reported in Fig. 4, which illustrates (i) the well-defined crossing of the meniscus at a finite temperature distance from the transition temperature, and (ii) the well-defined position of the volumetric median plane of the fluid cell.
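The meniscus appears as a sharp dark band in the transmitted-light intensity along a vertical profile such as DE. A minimal sketch of how such a profile could be reduced to a meniscus pixel position is given below; the smoothing choice and the sub-pixel interpolation are illustrative assumptions, not the parameters of the NI Vision analysis actually used.

```python
import numpy as np

def meniscus_pixel(profile, smooth=3):
    """Return the sub-pixel position of the darkest band of a 1-D
    intensity profile taken across the liquid-gas meniscus.

    profile : 1-D array of grey levels along a vertical line (e.g. DE).
    smooth  : half-width of a simple moving-average filter (pixels).
    """
    p = np.asarray(profile, dtype=float)
    kernel = np.ones(2 * smooth + 1) / (2 * smooth + 1)
    p_s = np.convolve(p, kernel, mode="same")      # denoise
    i0 = int(np.argmin(p_s))                       # darkest pixel
    # parabolic interpolation around the minimum for sub-pixel accuracy
    if 0 < i0 < len(p_s) - 1:
        a, b, c = p_s[i0 - 1], p_s[i0], p_s[i0 + 1]
        denom = a - 2 * b + c
        if denom != 0:
            return i0 + 0.5 * (a - c) / denom
    return float(i0)

# toy profile: bright fluid with a dark meniscus band near pixel 412
y = np.arange(900)
toy = 200 - 150 * np.exp(-((y - 412.3) / 4.0) ** 2)
print(round(meniscus_pixel(toy), 2))   # ~412.3
```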
Additional important results are also obtained, such as the amplitude and shape of the capillary rise, the symmetrical matching of the meniscus position by fine tuning (±0.1 pixel, typically) of the apparent median plane for the two (slightly shifted) apparent cells, and the resulting noticeably symmetrical behavior of the capillary rise over the complete temperature range. Finally, the essential feature of the image analyses reflects the combination of the highly symmetrical cell design, the small off-critical density of the cell filling and the wide field-of-view cell imaging. Such a combination leads to an estimate of the absolute position of the volumetric median plane of the cell and of the meniscus position with ±0.5 pixel (i.e., ±6 μm) resolution. Such a resolution is obtained whatever the ⟦i, X⟧ configuration, thanks to the similarity of the meniscus behavior and temperature crossing for two reverse positions under the gravity field. One noticeable remark concerns the (gas or liquid) filling of one half of the dead volume (i.e., 7.0 mm³). Such a non-viewed fluid volume is equivalent to a viewed fluid median layer of thickness ⟨δh⟩_fd = (1/2)V_fd/A_fv ≃ 263 µm (i.e., ≃ 21.91 pixels). Possible non-symmetrical effects related to the phase behavior in each windowless fluid volume can then easily be detected from the related viewed change of the meniscus position (see § 4), while the minimum ±0.5 pixel variation corresponds to only ±0.0675% of the total fluid volume, in conformity with the density precision requirements.
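The equivalence quoted above between the non-viewed dead volume, the viewed median fluid layer and the pixel resolution can be checked with the simple arithmetic below (a consistency check only, using the cell dimensions listed in § 2.1).

```python
# Consistency check of the dead-volume / pixel-resolution equivalence.
V_fd_half = 7.0        # mm^3, one half of the windowless (dead) volume
A_fv      = 26.621     # mm^2, viewed median cross-sectional area (2*R*e_f)
V_f       = 235.70     # mm^3, total fluid volume
pixel     = 0.012      # mm, wide field-of-view pixel size (12 um)

dh_fd = V_fd_half / A_fv                 # equivalent viewed layer thickness
print(f"<dh>_fd = {dh_fd*1e3:.0f} um = {dh_fd/pixel:.2f} pixels")
# -> ~263 um, ~21.9 pixels

dV_half_pixel = 0.5 * pixel * A_fv       # volume of a 0.5-pixel layer
print(f"0.5 pixel <-> {100*dV_half_pixel/V_f:.4f} % of the total volume")
# -> ~0.068 %, consistent with the +/-0.0675 % quoted above
```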
Results
Figure 4 shows that the pixel coordinate of the symmetrized meniscus positions, i.e., one half (noted h_{i,X}) of the difference between the related bare pixel distances, can be estimated with reference to the volumetric median plane of the cell in the selected configurations. The temperature behaviors of h_{i,X} are reported in Fig. 5 for the eight ⟦i, X⟧ configurations. Except for the ⟦i, T⟧ and ⟦i, H⟧ cases (see below), the temperature crossing of the volumetric median plane of the cell occurs in the range 44673 ≤ T_crossm (m°C) ≤ 44973 (i.e., T = T_c − {600; 900} m°C), accounting for the ±0.5 pixel (|δh_{i,X}| ≤ ±6 µm) uncertainty. The meniscus behaviour for the ⟦i, T⟧ cases is clearly affected by a significant non-symmetrical effect of the wetting liquid phase inside the dead cell volume. Indeed, this is the only configuration in which the expected gas-like dead volume occupies a position that can easily be connected to the liquid side of the cell by capillary effects. Accounting for the above remark about ⟨δh⟩_fd, it seems that 1/3 of this gas-like dead volume can be filled by liquid around T_crossm. Obviously, this excess liquid trapping decreases with temperature, since the meniscus position is lowered more and more below the corresponding fill-in channel.
Such plausible non-symmetrical liquid wetting effects can also occur for the ⟦i, H⟧ cases, particularly at low temperatures (T ≃ 43573 m°C) where capillary condensation can exist in the small fill-in channel. Conversely, only a very small part (2/10) of the dead volumes seems to be responsible for the h_{i,H} differences compared with the ⟦i, V⟧ or ⟦i, Z⟧ configuration cases.
Modeling
The following modeling starts from the initial result given in Ref. [START_REF] Morteau | Proceedings of 2nd European Symposium Fluids in Space[END_REF] for an ideal constant cylindrical volume of the fluid sample with radius R, filled at a small liquid-like off-critical density ⟨δρ̃⟩ > 0. The horizontal position h ≪ R of the liquid-gas meniscus, referenced to the horizontal cell median plane, is written as follows

\frac{h}{R} = \frac{\pi}{4}\,\frac{\langle\delta\tilde{\rho}\rangle - \Delta\tilde{\rho}_{d}}{(\Delta\tilde{\rho})_{LV}}    (1)

where

\Delta\tilde{\rho}_{d} = \frac{\rho_{L} + \rho_{V}}{2\rho_{c}} - 1    (2)

(\Delta\tilde{\rho})_{LV} = \frac{\rho_{L} - \rho_{V}}{2\rho_{c}}    (3)

ρ_L and ρ_V are the coexisting liquid and vapor densities at temperature T < T_c, respectively. In this ideal cylindrical cell, the fluid compressibility and capillary effects are neglected, and only simple geometrical considerations are used to define the liquid-vapor distribution, which results from fluid mass conservation at any T.
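For the ideal cylindrical cell, Eqs. (1)-(3) can be evaluated directly; the short function below is a sketch of that evaluation (the coexisting densities are passed in explicitly, so any equation of state or correlation for ρ_L and ρ_V can be used).

```python
import math

def meniscus_height_ideal(delta_rho_mean, rho_L, rho_V, rho_c):
    """Reduced meniscus position h/R of an ideal cylindrical cell, Eqs. (1)-(3).

    delta_rho_mean : <delta rho~> = rho_mean/rho_c - 1 (filling off-density)
    rho_L, rho_V   : coexisting liquid and vapor densities at T < T_c
    rho_c          : critical density
    """
    diam = (rho_L + rho_V) / (2.0 * rho_c) - 1.0        # Eq. (2)
    half_width = (rho_L - rho_V) / (2.0 * rho_c)        # Eq. (3)
    return (math.pi / 4.0) * (delta_rho_mean - diam) / half_width   # Eq. (1)

# Illustrative numbers only (not measured SF6 data): a 0.20% liquid-like
# filling with a symmetric coexistence curve of half-width 5% gives
print(meniscus_height_ideal(0.0020, rho_L=1.05, rho_V=0.95, rho_c=1.0))
# ~0.031, i.e. the meniscus sits ~3% of the radius above the median plane
```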
For the present ALIR5 case, the additional Γ-like symmetrical windowless fluid volume is accounted for by rewriting the total volume V_f as V_f = πR²e_f(1 + x), with x = V_fd/V_fv ≃ 0.063. As a direct consequence, the fluid (gas or liquid) filling of one half of the dead volume is measured by the ratio ⟨δh⟩_fd/R = ±πx/4 ≃ ±0.0496. ⟨δh⟩_fd is the above-mentioned viewed change of the meniscus position around the fluid median plane (i.e., ≃ ±21.91 pixels or ≃ ±263 μm).
The thermal effects are accounted for by replacing the ⟨δρ̃⟩ term of Eq. (1) by

\langle\delta\tilde{\rho}\rangle_{T} = \langle\delta\tilde{\rho}\rangle_{T_{c}}\,(1 + 3\alpha_{T} T_{c}\,\Delta\tau^{*}) + 3\alpha_{T} T_{c}\,\Delta\tau^{*}    (4)
The above temperature dependence of ⟨δρ̃⟩_T is obtained from a linear change of the cell mean density, written as ⟨ρ⟩_T = ⟨ρ⟩_{T_c}(1 + 3α_T T_c Δτ*). α_T = 1.8 × 10⁻⁵ K⁻¹ is the linear thermal dilatation coefficient of the CuCo2Be alloy. In the temperature range T_c − T ≤ 2 K, the cell thermal dilatation effect is lower than 5.5%. These effects reach 29% at T_c − T ≃ 10 K. Such effects need to be accounted for in the cell filling process at laboratory temperature (they reach ≃60% in the temperature range T_lab ≃ 20-25 °C).
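The magnitude of this correction can be checked with the quick sketch below; it assumes (our reading, not a statement taken from the original analysis) that the quoted percentages compare the volumic correction 3α_T(T_c − T) with the 0.20% filling off-density, and it uses α_T ≈ 1.8 × 10⁻⁵ K⁻¹, typical of CuBe alloys.

```python
# Order-of-magnitude check of the thermal dilatation correction.
alpha_T = 1.8e-5          # K^-1, linear expansion of the CuCo2Be cell body
drho_fill = 0.0020        # filling off-density <delta rho~>_Tc

for dT in (2.0, 10.0, 22.5):            # K below T_c (22.5 K ~ lab filling)
    corr = 3.0 * alpha_T * dT           # relative volume/density correction
    print(f"T_c - T = {dT:5.1f} K : correction = {corr:.2e} "
          f"({100*corr/drho_fill:.0f}% of the filling off-density)")
# -> ~5%, ~27%, ~61%, in line with the <5.5%, ~29% and ~60% quoted above
```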
The liquid wettability effects are estimated in the form of an equivalent downward shift ⟨δh⟩_ca of the meniscus position,

\langle\delta h\rangle_{ca} \propto l_{ca}^{2}\,\left(1 - \frac{\pi}{4}\right)\frac{2R + e_{f}}{R\,e_{f}}    (5)
⟨δh⟩_ca corresponds to the thickness of a horizontal planar liquid layer having the same volume as the total liquid volume wetting the sapphire windows and the cell body. To derive Eq. (5), an ellipsoidal shape is assumed for the capillary rise of the meniscus, such that the product of its characteristic size parameters is proportional to the squared capillary length l_ca² (the so-called capillary constant a²), i.e., l_ca² = l_0²|Δτ*|^(2ν−β), with 2ν − β = 0.935 and asymptotic amplitude l_0² ≃ 3.84 mm² [START_REF] Garrabos | Master singular behavior for the Sugden factor of one-component fluids near their gas-liquid critical point[END_REF]. This l_ca² behavior compares well with the effective singular behavior a² = (3.94 mm²)|Δτ*|^0.944 [27] [START_REF] Moldover | Capillary rise, wetting layers, and critical phenomena in confined geometry[END_REF]. The validity of l_ca² ∝ a² is controlled from the apparent size of the meniscus thickness due to the capillary rise. Finally, only the proportionality amplitude of Eq. (5) (of value 1.44, see § 6) remains as an adjustable parameter at large temperature distances from T_c. However, in the temperature range T_c − T ≤ 3 K, ⟨δh⟩_ca remains lower than one half pixel (< 6 µm), i.e., ⟨δh⟩_ca/R < 1.2 × 10⁻³. In this case, it is also important to note the large value of the ratio ⟨δh⟩_fd/⟨δh⟩_ca ≃ 50. The final functional form of h then writes

\frac{h}{R} = \frac{\pi}{4}\,\frac{\langle\delta\tilde{\rho}\rangle_{T} - \Delta\tilde{\rho}_{d}}{(\Delta\tilde{\rho})_{LV}}\,(1 + x) - \frac{\langle\delta h\rangle_{ca}}{R}    (6)

where only the fluid compressibility effects remain neglected. These latter effects can be observed from the grid deformation and the related local turbidity on both sides of the vapor-liquid meniscus, and are only noticeable in the temperature range T_c − T ≤ 5 m°C.
Discussion
When the capillary rise effects are negligible (i.e., in the temperature range T_c − T ≤ 3 K), Eq. (6) shows that the meniscus in a cell with a finite positive value of ⟨δρ̃⟩_T crosses the cell median plane at a single temperature T_crossm where h = 0, i.e., ⟨δρ̃⟩_{T_crossm} = Δρ̃_d. A first approach considers the linear functional form of ρ̃_d, such that

\tilde{\rho}_{d} = 1 + a_{d}\,|\Delta\tau^{*}|    (7)
where the value a_d = 0.84 ± 0.015 of the slope of the rectilinear diameter results from the coexisting density data over the complete two-phase range. The above central estimate T_crossm = 44823 m°C then leads to ⟨δρ̃⟩_{T_crossm} ≅ ⟨δρ̃⟩_{T_c} = 0.20%. The Earth-based visualization of the meniscus behavior in the ALIR5 cell is thus an academic benchmark experiment in which pixel-level resolution of the image processing is of prime interest for an accurate determination of the mean filling density of the fluid cell. This experiment allows a preliminary check (without accounting for the compressibility effects) of the validity of the expected singular top shape of the coexisting density curve and of its related singular diameter, presumably satisfying the different theoretical functional forms found in the literature. The singular top shape of the coexistence curve (Δρ̃)_LV(|Δτ*|) for |Δτ*| ≤ 10⁻² can be predicted without adjustable parameter [START_REF] Garrabos | Crossover equation of state models applied to the critical behavior of xenon[END_REF] from the theoretical master crossover functions estimated from the massive renormalization scheme. Nevertheless, any other effective power law describing (Δρ̃)_LV(|Δτ*|) of SF6 (such as, for instance, (Δρ̃)_LV = 1.7147|Δτ*|^0.3271 + 0.8203|Δτ*|^0.8215 − 1.4396|Δτ*|^1.2989 from Ref. [START_REF] Ley-Koo | Revised and extended scaling for coexisting densities of SF6[END_REF]) does not modify the following analysis, especially in the temperature range 0.03 ≤ T_c − T < 3 K where |δh_{i,X}| ≤ 100 µm (i.e., |δh_{i,X}| ≤ 8 pixels).
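With Eq. (7), the crossing condition h = 0 reduces to ⟨δρ̃⟩_T = a_d|Δτ*|, so the crossing temperature follows directly from the filling off-density. The short sketch below (which neglects the small thermal-dilatation correction of Eq. (4)) shows that a 0.20 ± 0.04% filling places the crossing roughly 0.6-0.9 K below T_c, consistent with the window observed in § 4.

```python
# Crossing temperature expected from the rectilinear diameter, Eq. (7),
# neglecting the small thermal-dilatation correction of Eq. (4).
T_c  = 318.723           # K (45.573 degC)
a_d  = 0.84              # slope of the rectilinear diameter
drho = 0.0020            # liquid-like filling off-density <delta rho~>

dtau_cross = drho / a_d                 # |Delta tau*| at which h = 0
dT_cross = dtau_cross * T_c             # T_c - T_crossm in kelvin
print(f"T_c - T_crossm ~ {dT_cross*1e3:.0f} mK")   # ~760 mK

# propagate the +/-0.04% uncertainty on the filling density
for d in (0.0016, 0.0024):
    print(f"  filling {100*d:.2f}% -> {d / a_d * T_c * 1e3:.0f} mK")
```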
The second approach instead introduces the singular functional form of Δρ̃_d,

\Delta\tilde{\rho}_{d} = \frac{A_{\beta}|\Delta\tau^{*}|^{2\beta} + A_{\alpha}|\Delta\tau^{*}|^{1-\alpha} + A_{1}\,\Delta\tau^{*} + A_{\Delta}|\Delta\tau^{*}|^{x_{\Delta}}}{1 + a_{\Delta}|\Delta\tau^{*}|^{\Delta}}    (8)
Equation (8) results from the various complete field mixing (CFM) models predicting the singular asymmetry with adjustable amplitudes. The amplitude sets obtained from the fitting of Weiner's data [START_REF] Kim | Singular coexistence-curve diameters: Experiments and simulations[END_REF] are given in Table 1, with α = 0.109, β = 0.326, Δ = 0.52, and x_Δ = 1 − α + Δ. Any adjustment of (at least three) free amplitudes appears compatible with Weiner's data whatever the additive forms of the power laws and exponents involved in Eq. (8). The corresponding estimations of (h/R) as a function of T_c − T are illustrated in Fig. 6, where the value ⟨δρ̃⟩_{T_c} = 0.20% is fixed. Only the experimental results for the ⟦i, V⟧ (full blue circles) and ⟦i, Z⟧ (full green triangles) configurations are used. For the rectilinear diameter case, the dotted and full blue curves correspond to the use of Eqs. (6)-(7), without or with the capillary correction term, in which the corresponding predictions of Ref. [START_REF] Garrabos | Crossover equation of state models applied to the critical behavior of xenon[END_REF] are introduced (see above). For the critical hook case, the brown, green, and pink full curves correspond to the use of Eqs. (6) and (8) with the parameters of columns 2, 3, and 5 (there is no visible difference between the green curve (column 3 case) and the green circles (column 4 case)). In addition, the published ρ_L/ρ_c and ρ_V/ρ_c Weiner data (Table X of [START_REF] Weiner | Breakdown of the law of rectilinear diameter[END_REF]), which earlier supported the diameter deviation of SF6, can be used directly to estimate the various terms of Eq. (6). In Fig. 6, the orange full diamonds correspond to the meniscus positions calculated at Weiner's experimental T_c − T values. Clearly, only the (h/R) calculations for the linear density diameter case are in good agreement with the experimental data, especially in the two temperature decades 25 ≤ T_c − T < 2500 mK of prime interest regarding the neglected effects. In addition to an intrinsic questioning of Weiner's measurements of the SF6 diameter deviation [START_REF] Moldover | Capillary rise, wetting layers, and critical phenomena in confined geometry[END_REF], the noticeable inconsistency observed in Fig. 6 can be attributed to a non-realistic estimation of the uncertainty on the (ρ_L/ρ_c) + (ρ_V/ρ_c) Weiner values (at least one decade larger than the maximum amplitude (0.5%) of the hook-like deviation), especially close to the critical temperature (see Fig. 1). More generally, the systematically large dispersion of Weiner's data over the complete temperature range seems mainly due to Weiner's values of the critical parameters ρ_c (density), ε_c (dielectric constant), and thus CM_c = (1/ρ_c)(ε_c − 1)/(ε_c + 2) (Clausius-Mossotti constant), which differ significantly (by -1.5%, -10.9%, and -3.3%, respectively) from the literature ones [START_REF]Weiner's values are 𝜌𝜌 𝑐𝑐 = (0.731 ± 0.001) g.cm -3 , 𝜀𝜀 𝑐𝑐 = 0.262 ± 0.010[END_REF].
Conclusions
The (h/R) modeling from Eqs. (6) and (7) is comparable (in amplitude and uncertainty) with the Earth-based measurements. Along the off-critical thermodynamic path of ⟨δρ̃⟩ = +0.20 ± 0.04%, the careful imaging analysis of the SF6 two-phase domain appears well understood without the supplementary addition of any singular hook-shaped deviation in the rectilinear density diameter. The main part of the uncertainty in the rectilinear density diameter remains due to the actual level of precision (0.21%) of the SF6 critical density value. Within such an uncertainty range, the cell thermal dilatation, the fluid compressibility, the fluid coexisting densities and the liquid wettability effects can be well controlled thanks to the highly symmetrical ALIR5 cell design. The slope of the SF6 linear density diameter seems to be the only remaining adjustable parameter, leading to a questionable applicability of complete field mixing to the simple-fluid case. Future modeling approaches [START_REF] Garrabos | Liquid-vapor rectilinear diameter revisited[END_REF] will focus on the estimation of the fluid compressibility effects using an upgraded version of the universal parametric equation of state. Moreover, ongoing experimental work will account for the possible contribution of the SF6 purity to the critical parameters, before the debate about the density diameter behavior close to the critical point can be closed with confidence.
Fig. 1. Experimental critical hook of the linear density diameter of SF6 reported in [10], and related temperature T_c − T_crossm at which the meniscus position must collapse onto the volumetric median plane of a cylindrical cell filled at the liquid-like off-critical density of 0.20%. Green lines: expected from the singular hook of the density diameter. Red lines: expected from the rectilinear density diameter.
This pancake cell design leads to the ALIR5 viewed cylindrical fluid volume V_fv = πR²e_f = (221.7 +0.20/-0.70) mm³ and to the cross-sectional area A_fv = 2Re_f = (26.621 ± 0.020) mm² for any viewed median volumetric plane, except around the direction of the fill-line setup, as detailed below. The cell body is made of a machined CuCo2Be parallelepipedic block of external dimensions L(=25) × l(=27) × h(=24) mm³. This body contains two similar fill-line dead volumes, each in the Γ-like form of two perpendicularly crossed cylindrical holes, located in the median plane of the fluid thickness. These two Γ-like volumes are symmetrical about the central optical axis used as the rotation axis of the cell. The resulting windowless fluid volume is (1/2)V_fd = (7.0 ± 0.2) mm³ on each side of the observed cylindrical fluid volume. Therefore, V_f = V_fv + V_fd = (235.70 +0.5/-1.0) mm³ is the total fluid sample volume. All the above dimensional values come from mechanical measurements performed at 20 °C. The common axis of the two small opposite cylindrical holes opened into the main cylindrical fluid volume defines the single particular direction of the common median plane. In this plane occurs the maximum fluid area (A_fv + A_fd = A_fv + (17.8 ± 1.0) mm²) crossing the complete fluid volume. The horizontal position of this median plane is chosen as the zero angle (θ = 0°) of the cell rotation (or, equivalently, θ = 180° for the opposite configuration of the cell with respect to the direction of the gravity vector). With reference to this cell direction, the maximum tilted angle θ_m that overlaps the Γ-like configuration of the dead volume is θ_m ≳ 28°. The ±θ_m directions are not equivalent with respect to the liquid (or gas) positioning under gravity inside each dead volume (see § 6 below).
Fig. 2. (a) Picture of the ALIR5 cell. (b) Schematic cross section of the ALIR5 cell, illustrating the four relative directions of the meniscus θ = {-23.2°; 0°; +22.9°; +90°} and the direction θ_m ≳ 28° that overlaps the Γ-like forms of the two symmetrical dead volumes associated with the fill-line direction. Red area: fluid volume; blue area: diffusion windows, filling screws and stoppers; green area: cell body; external circle: dimensional scale.
Fig. 3. Video picture of the ALIR5 cell for the ⟦1, V⟧ configuration at temperature T = 45473 m°C. The selected borderline points A, B, C, T and R between the fluid and the cell body are used in the image processing to define the position of the fluid sample cell (especially the apparent center point O) within the picture. The vertical line DE is used to analyse the meniscus position and shape as functions of temperature. The next step compares similar line profiles obtained at different temperatures to probe the absence of thermal effects at the pixel level during the temperature timeline of each facility configuration.
Fig. 4. Bare pixel distances of the meniscus as functions of temperature for the ⟦1, V⟧ (open red circles) and ⟦2, V⟧ (open blue circles) configurations, together with the related bare pixel distance of the volumetric median plane of the cell, referenced to the y pixel coordinate of point B in Fig. 3.
Fig. 5. Temperature dependence of the symmetrical pixel shift of the meniscus position, referenced to the corresponding volumetric median plane, for the eight ⟦i, X⟧ configurations. 1 pixel = 12 µm.
Fig. 6. Comparison between experimental and modelling results for (h/R) as a function of T_c − T, fixing ⟨δρ̃⟩_{T_c} = 0.20%.
Table 1. SF6 parameters for Eq. (8)
       [7]      [7]      [7]      [9]
Aβ     0        1.124    1.0864   0.46028392
Aα     6.365    -9.042   -7.990   -0.6778981
A1     -10.13   11.37    9.770    0.13516245
A∆     8.080    -3.354   0        0
a∆     0        0        3.318    0
Acknowledgements
We thank all the CNES, CNES-CADMOS, NASA, and associated industrial teams involved in the DECLIC facility project. CL, SM, YG and DB are grateful to CNES for financial support. They are also grateful to Philippe Bioulez and Hervé Burger for their operational support at CADMOS. The research of I.H. was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.
"1234985",
"734715",
"749499",
"841527"
] | [
"525101",
"525101",
"525101",
"1159",
"61129"
] |
01695557 | en | [
"chim"
] | 2024/03/05 22:32:13 | 2017 | https://univ-rennes.hal.science/hal-01695557/file/Experimental%20and%20Computational%20Investigations%20on%20Highly%20Syndioselective_accepted.pdf | Elisa Louyriac
Eva Laur
Alexandre Welle
Aurélien Vantomme
Olivier Miserque
Jean-Michel Brusson
Laurent Maron
email: [email protected]
Jean-François Carpentier
email: [email protected]
Evgueni Kirillov
email: [email protected]
Experimental and Computational Investigations on Highly Syndioselective Styrene-Ethylene Copolymerization Catalyzed by Allyl ansa-Lanthanidocenes
Introduction
Syndiotactic polystyrene (sPS) is an attractive engineering plastic potentially usable for many industrial applications due to its fast crystallization rate, low permeability to gases, low dielectric constant and good chemical and temperature resistance. [START_REF] Ishihara | Stereospecific Polymerization of Styrene Giving the Syndiotactic Polymer[END_REF][START_REF] Malanga | Syndiotactic Polystyrene Materials[END_REF][START_REF] Schellenberg | Syndiotactic Polystyrene: Process and Applications[END_REF] However, its high melting point (270 °C) and its brittleness are the two main drawbacks limiting its processability. To tackle this issue, several strategies have been envisaged: blending or post-modification of sPS, polymerization of functionalized styrene derivatives, or copolymerization of styrene with other monomers. [START_REF] Zinck | Functionalization of Syndiotactic Polystyrene[END_REF][START_REF] Jaymand | Recent Progress in the Chemical Modification of Syndiotactic Polystyrene[END_REF] The latter approach was found effective and versatile for fine-tuning the properties of sPS, [START_REF] Laur | Engineering of Syndiotactic and Isotactic Polystyrene-Based Copolymers via Stereoselective Catalytic Polymerization[END_REF] more particularly via syndioselective copolymerization of styrene with ethylene. [START_REF] Rodrigues | Groups 3 and 4 Single-Site Catalysts for Styrene-Ethylene and Styrene-α-olefin Copolymerization[END_REF] The copolymerization of those two monomers is quite challenging due to their strikingly different reactivity. As a result, most of the group 4 catalysts active for sPS production only provided "ethylene-styrene interpolymers" (ESI), featuring no stereoregularity and amounts of incorporated styrene below 50 mol%. Those issues were overcome by the development of group 3 catalysts, independently disclosed by our group [START_REF] Rodrigues | Allyl ansa-lanthanidocenes: Single-Component, Single-Site Ccatalysts for Controlled Syndiospecific Styrene and Styrene-Ethylene (Co)Polymerization[END_REF] and by Hou and co-workers. [START_REF] Luo | Scandium Half-Metallocene-Catalyzed Syndiospecific Styrene Polymerization and Styrene-Ethylene Copolymerization: Unprecedented Incorporation of Syndiotactic Styrene-Styrene Sequences in Styrene-Ethylene Copolymers[END_REF] Yet, the number of effective catalytic systems for sPSE synthesis remains quite limited to date. [START_REF] Li | Aluminum Effects in the Syndiospecific[END_REF] Very recently, we reported on the synthesis and catalytic investigation of a new series of neutral ansa-lanthanidocene catalysts for the production of sPS; 11 a thorough DFT study of these systems highlighted the different factors governing the formation of sPS. 11 In this new contribution, we describe the syndioselective copolymerization of styrene with ethylene using this latter series of complexes and demonstrate that some of them feature improved catalytic performance as compared to the current state of the art (Scheme 1). For the first time, the parameters that control syndioselective styrene-ethylene copolymerization were also investigated by DFT computations. These calculations contributed to a better understanding of the
Results and Discussion
Styrene-Ethylene Copolymerizations Catalyzed by Allyl Ansa-Lanthanidocenes.
Styrene/ethylene copolymerizations catalyzed by complexes 1-Nd-K-allyl, 2-Nd-7-Nd, 2-Sc, 2-La, 2-Sm and 2-Pr were first screened under similar conditions (Table 1, entries 1-11). As already described for styrene homopolymerization, 11 the reactions were best conducted using a few equiv of (nBu)2Mg as scavenger, to prevent catalyst decomposition by trace impurities, especially at low catalyst loading and high temperature (vide infra). This dialkylmagnesium appeared to be a poor chain-transfer agent under those conditions and affected neither the reaction mechanism nor the properties of the produced sPSE copolymers. The complexes bearing bulkier fluorenyl substituents showed lower productivities and afforded sPSE copolymers with a higher ethylene content (thus affecting the calculation of syndioselectivity, which appeared, at first sight, lower due to more abundant St-E enchainments) (entries 2 and 6-7). The latter observation suggests that introduction of substituents bulkier than tBu, namely cumyl or Ph2MeC-, at the 2,7-positions of the fluorenyl ligand favors insertion of the small monomer ethylene rather than styrene.
The nature of the metal center also played a key role. Complex 2-Sc was nearly inactive whereas 2-Pr and 2-La afforded sPSE copolymer with productivities of ca. 300 kg•mol(Ln)-1•h-1. Under those non-discriminating conditions, full styrene conversion was reached when using complex 2-Sm, as observed with its neodymium analogue 2-Nd. Substantial improvement of the productivity values was obtained under more forcing and demanding copolymerization conditions (entries 12-17). Increasing both the temperature of polymerization up to 140 °C and the monomer-to-catalyst ratio up to 40 000 allowed productivities above 1,000 kg(sPSE)•mol(Ln)-1•h-1 to be reached. 2-Nd gave 1 400-1 700 kg(sPSE)•mol(Nd)-1•h-1, affording a highly syndiotactic copolymer ([r]5 = 54-61%) with a relatively narrow dispersity value (ÐM = 2.4), despite the elevated polymerization temperature (entries 12 and 13). Similar results were observed using 2-Pr, even though it appeared to be somewhat less active and stereoselective than its Nd analogue. Better productivities in the range 1 867-2 265 kg(sPSE)•mol(Sm)-1•h-1 were observed with 2-Sm but the syndiotacticity of the copolymer significantly dropped ([r]5 = 32-35%). Such a marked discrepancy between the stereoselectivity of 2-Nd and 2-Sm was not observed for the copolymerizations performed at 60 °C (compare entries 2, 10 and 11 with entries 12-17). Overall, these results are in line with those already described for syndioselective styrene homopolymerization, 11 and highlight the remarkable stability of 2-Ln catalytic systems under such drastic conditions (thanks to (nBu)2Mg as scavenger). The productivities of these systems are comparable with those of the most active cationic scandium-based systems reported for syndiospecific styrene/ethylene copolymerization. [START_REF] Rodrigues | Allyl ansa-lanthanidocenes: Single-Component, Single-Site Ccatalysts for Controlled Syndiospecific Styrene and Styrene-Ethylene (Co)Polymerization[END_REF][START_REF] Li | Aluminum Effects in the Syndiospecific A c c e p t e d m a n u s c r i p t 21[END_REF] The most productive and syndioselective catalyst, 2-Nd, was tested on a 10-fold larger production scale (i.e., on a half-kg of styrene) in bulk conditions at 100 °C in a closed reactor;
five different experiments with variable amounts of ethylene (vide infra) were conducted and returned improved productivities in the range 2 730-5 430 kg(sPSE)•mol(Nd)-1•h-1 (entries 18-22). Under these bulk conditions, molecular weights of the resulting copolymers were somewhat higher than those obtained at a lower (bench) scale in 50:50 v/v mixtures of styrene/hydrocarbon solvent (Mn = 43 000-62 000 g•mol-1 vs. Mn = 33 000 g•mol-1, respectively) and the polydispersities were also narrower (ÐM = 1.4-2.5). These data highlight the significant impact of the process conditions on both the catalytic system productivities and the characteristics of the polymers.
The initial styrene-to-ethylene ratio was also varied by changing the amount of ethylene introduced at the beginning of the polymerization (entries 18-22; see Experimental part). The ethylene content in the copolymer can hence be easily tuned, allowing the production of a range of sPSE materials containing from 1.1 to 10 mol% of ethylene. The 13C{1H} NMR spectra were recorded using an inverse-gated-decoupling sequence in order to accurately determine the amount of ethylene incorporated. As only isolated units of ethylene were detected, the amount of ethylene incorporated was determined by integrating the signal of the ipso carbon (polystyrene sequences, δ 145.8 ppm) and the signals at 37-38 ppm corresponding to the secondary carbons Sαγ.
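One plausible way to turn these two integrals into an ethylene content, assuming strictly isolated ethylene units and the integral normalization stated in the comments (our assumption, not a formula quoted from the paper), is sketched below.

    # Hypothetical sketch: mol% ethylene from inverse-gated 13C integrals, assuming
    # one ipso carbon per styrene unit and two S_alpha_gamma carbons per isolated E unit.
    def ethylene_mol_percent(i_ipso, i_s_alpha_gamma):
        styrene_units = i_ipso
        ethylene_units = i_s_alpha_gamma / 2.0
        return 100.0 * ethylene_units / (styrene_units + ethylene_units)

    print(ethylene_mol_percent(100.0, 4.2))  # ~2.1 mol% ethylene for these example integrals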
The analysis of the spectral area corresponding to the resonance of the secondary carbon Sα of PS sequences also allowed quantifying the syndiotacticity at the hexad level (Figure 3). The relative intensity of the rrrrr hexad signal was obtained after deconvolution and integration of all the signals in this area. This means that not only the presence of the other hexads mrrrr, rmrrr and rrmrr but also the presence of other unassigned sequences (in particular, those that are the consequence of S-E junctions, and presumably as well hexads with meso diads) were considered for the calculation of [r]5. The values measured in the present case ([r]5 = 32-78%) are similar to those previously reported in the case of sPSE materials obtained with I (Pr > 0.81, [r]5 > 35%; depending on the ethylene content). [START_REF] Rodrigues | Allyl ansa-lanthanidocenes: Single-Component, Single-Site Ccatalysts for Controlled Syndiospecific Styrene and Styrene-Ethylene (Co)Polymerization[END_REF]
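A minimal sketch of this bookkeeping, assuming the deconvoluted integrals are available as a simple mapping (a hypothetical helper, not the authors' processing script), is:

    # [r]5 as the fraction of the rrrrr hexad integral over all deconvoluted signals
    # in the S_alpha region (other hexads, S-E junctions, unassigned peaks).
    def r5_percent(integrals):
        return 100.0 * integrals["rrrrr"] / sum(integrals.values())

    integrals = {"rrrrr": 7.8, "mrrrr": 0.9, "rmrrr": 0.5, "rrmrr": 0.4, "other": 0.4}
    print(r5_percent(integrals))  # ~78 %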
DFT investigation of styrene-ethylene copolymerization catalyzed by {Me2C(C5H4)(Flu)}Nd(C3H5) (I).
Complex I, which is highly effective to copolymerize styrene with ethylene while maintaining a high syndiotacticity, [START_REF] Rodrigues | Allyl ansa-lanthanidocenes: Single-Component, Single-Site Ccatalysts for Controlled Syndiospecific Styrene and Styrene-Ethylene (Co)Polymerization[END_REF] was selected as a benchmark for our theoretical study. Subsequently, it will allow us to highlight the influence of catalyst substituents present on the allyl and fluorenyl ligands on styrene-ethylene copolymerization.
i) First styrene vs. ethylene insertion. Energy profiles were computed for the first ethylene (3-E) and the 2,1-down-re (3d-re) styrene 11 insertions (Figure 4). From a kinetic point of view, transition state 3-E is more stable (by 3.3 kcal mol-1) than 3d-re, but this is included within the error range of the method. 14,15 The first insertion is likely more thermodynamically controlled and in favor of styrene insertion by 4.6 kcal mol-1. The energy difference between 3d-re/3-E is mainly due to the steric hindrance around the metal center (Figure S9). Product 4-E obtained following the intrinsic reaction coordinate is extra-stabilized by a resulting interaction between the terminal double bond of the allyl ligand and the metal center (Figure S10). The relaxation of the polymer chain leads to an endothermic product 4-E-relaxed (by 0.2 kcal mol-1), which is consistent with the fact that formation of an alkyl- from an allyl-complex is thermodynamically unfavorable.
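To put such energy differences in perspective, a difference in activation barriers can be translated, very roughly and neglecting entropic contributions, into a relative rate through a Boltzmann factor; the snippet below is only an illustrative back-of-the-envelope conversion, not part of the reported DFT protocol.

    import math

    R = 1.987e-3  # kcal mol-1 K-1

    def relative_rate(ddH_kcal, T=298.15):
        # k_rel = exp(-ddH / RT); ignores entropy, tunnelling and solvent dynamics
        return math.exp(-ddH_kcal / (R * T))

    # A 3.3 kcal mol-1 preference at 298 K corresponds to roughly a 260:1 rate ratio,
    # whereas differences within the ~2-3 kcal mol-1 error bar of the method
    # cannot be considered discriminating.
    print(1.0 / relative_rate(3.3))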
ii) Second styrene vs. ethylene insertion. As the 2,1-down-re styrene insertion is thermodynamically favored in the first step, second insertions were computed from the product 4d-re. The energy profiles were calculated for the stationary (6-E) and migratory (6-E') ethylene insertions and for the 2,1-up-si (6u-si) stationary styrene insertion (Figure 5).
Chart 2. Numbering used for carbon atoms in the allyl ligand.
As regards the second-insertion products, the presence of a π-coordination between the phenyl ring of the first inserted styrene and the metal center further stabilizes the migratory-insertion ethylene product 7-E', by 5.0 kcal mol-1 relative to the stationary-insertion product 7-E (Figure S12). Hence, at the second insertion stage, the ethylene monomer will be inserted preferentially, according to the "migratory" mechanism.
iii) Third styrene vs. ethylene insertion. Energy profiles were computed for the ethylene (9-E) insertion and for the 2,1-down-re (9d-re) and 2,1-up-si (9u-si) styrene insertions (Figure 6). At the third insertion step, there is a slight kinetic preference for insertion of ethylene (9d-re/9-E = 4.1 kcal mol-1) (dark-green), probably related to the decrease of the steric hindrance around the metal center (Figure S13). In all products, the growing chains feature the same orientation, which may explain the same range of their energies (Figure S14).
To obtain further information about the nature of the resulting copolymer, it was crucial to investigate the reaction pathways after insertion of two ethylene units and, more generally, after two units of the same monomer were consecutively inserted.
Ethylene-Ethylene-Ethylene (E-E-E) vs. Ethylene-Ethylene-Styrene (E-E-S).
The energy profiles were computed for the third ethylene (9-E) and 2,1-down-re (9d-re) styrene
insertions, in the case where two ethylene monomers were inserted according to the "migratory" insertion mechanism (Figure 7). At this stage, there is no significant kinetic preference between styrene and ethylene insertions (9d-re/9-E = 3.2 kcal mol-1). The energy difference between the two insertion products 10-E and 10d-re is 2.5 kcal mol-1, which is also within the error range of the method. This is reflected in the product structures, in which the growing chains are similarly oriented (Figure S16).
Figure 7. Energetic profiles for the third ethylene insertion in {Me2C(C5H4)(Flu)}Nd(C3H5) (I), after two ethylene insertions. The third 2,1-down-re styrene insertion is plotted in blue.
The computations were similarly performed for the third 2,1-down-re (9d-re) styrene and ethylene (9-E) insertions, after insertion of two styrene monomers (Figure 8). The results match those obtained for the above E-E-E vs. E-E-S study: (i) the energy difference 9d-re/9-E (1.3 kcal mol-1) is within the error range of the method, and (ii) the growing chains appear to be similar in the transition state and product structures (Figures S17 and S18). This is confirmed by the lack of thermodynamic preference between the two monomers (10d-re/10-E = 0.5 kcal mol-1).
Overall, the above calculations indicate that, once two units of the same monomer have been inserted, there will be no selectivity at the next stage. In other words, I tends to form random styrene-ethylene copolymers.
DFT investigation of styrene-ethylene copolymerization catalyzed by {Me2C(C5H4)(Flu)}Nd(1,3-C3H3(SiMe3)2) (1-Nd).
In order to obtain information on the influence of SiMe3 substituents of the allyl ligand on styrene-ethylene copolymerization as well as on the nature of the sPSE copolymer obtained, the same study as that for I was carried out for the putative 1-Nd catalyst. The computational results are similar to those highlighted for the non-substituted catalyst I (all reaction profiles and structures are available in the Supporting Information; Figures S19-S33): (i) at the first step, a 2,1-down-re styrene insertion is preferred, followed by an ethylene insertion, and then, a slight preference for this latter monomer at the third step; (ii) after insertion of two same monomer units, there is no clear kinetic or thermodynamic preference between the two monomers.
Hence, the above calculations indicate that the presence of the bulky substituents in the allyl initiating group does not affect the chemistry and the nature of the obtained copolymer: the 1-Nd catalyst also tends to form random styrene-ethylene copolymers. This is consistent with an initiating group which is progressively rejected at the end of the growing polymer chain. It should be noted, however, that the bulky substituents on the allyl ligand induce an increase in the energy of the first insertion barriers (for example 24.5 vs. 14.5 kcal mol-1 for the first ethylene insertion), as already observed for styrene homopolymerization. 11 This is again due to charge localization on the "wrong" carbon atom of the allyl ligand, that is, the one that ensures the interaction with the metal center and therefore provides the nucleophilic assistance, rather than the one that is involved in the C-C coupling.
Furthermore, it is noteworthy that at the first ethylene insertion step in 1-Nd, the alkyl product 4-E-relaxed is thermodynamically favorable (by 10.3 kcal mol-1). This is not consistent with the usual trend that the formation of an alkyl from an allyl compound is thermodynamically disfavored. A charge analysis at the NBO level was then carried out in order to obtain some information about the nature of the allyl ligand in 1-Nd. The charges on the carbon atoms in the (1,3-C3H3(SiMe3)2) allyl ligand are [C1(allyl) (1.05), C2(allyl) (0.23), C3(allyl) (1.11)] in 1-Nd, whereas those obtained in the case of the unsubstituted allyl (C3H5) in I are [C1(allyl) (0.79), C2(allyl) (0.26), C3(allyl) (0.81)]. Thus, the sterically hindered allyl leads to a charge relocalization at the C3(allyl) carbon atom. Therefore, this is not a standard allyl in 1-Nd but rather a masked alkyl, explaining why formation of 4-E-relaxed is thermodynamically favorable (10.3 kcal mol-1 vs. +0.2 kcal mol-1 in the case of the (C3H5) allyl in I).
DFT investigation of styrene-ethylene copolymerization catalyzed by {Me2C(C5H4)(2,7-tBu2Flu)}Nd(1,3-C3H3(SiMe3)2) (2-Nd).
It has been experimentally found that complex 2-Nd, bearing tBu groups in the 2,7-positions of the fluorenyl ligand, exhibits a high productivity of up to 5,430 kg(sPSE)•mol(Nd)-1•h-1 for styrene-ethylene copolymerization.
The microstructure of the sPSE copolymers shares the same features as those observed for copolymers obtained with I. DFT calculations were performed to rationalize this influence of the 2,7-tBu2 groups on the Flu ligand on the reactivity and on the copolymer obtained.
The first and second styrene insertions were computed (see ESI, Figures S34 and S35) and it was found that, unlike for complex I, the migratory styrene insertion is preferred for the 2-Nd catalyst over the stationary insertion found for I. This can be attributed to the presence of bulky substituents on the allyl ligand, which leads to a change in the polymerization mechanism in order to minimize steric repulsion. From a kinetic point of view, there is a clear preference for 3-E, which is more stable by 6.8 kcal mol-1 than 3d-re. This energy difference is due to a repulsion between the tBu groups and the Ph ring of the incoming styrene, which tends to destabilize 3d-re compared to 3-E. Moreover, the ethylene insertion barrier is intermediate (ΔH# = 20.4 kcal mol-1) between those calculated for I and 1-Nd (ΔH# = 14.5 and 24.5 kcal mol-1, respectively). Indeed, the incorporation of tBu substituents counteracts the effect of the SiMe3 on the allyl ligand, which reduces the activation barrier and makes the catalyst more reactive towards ethylene. This is reflected in the NBO charge analysis. The charges on the carbon atoms in the allyl ligand in 3-E are [C1(allyl) (0.98), C2(allyl) (0.29), C3(allyl) (0.83)] for 2-Nd, [C1(allyl) (0.89), C2(allyl) (0.23), C3(allyl) (0.95)] for 1-Nd and [C1(allyl) (0.66), C2(allyl) (0.23), C3(allyl) (0.61)] for I. In complex 2-Nd, the carbon C3(allyl) is repulsed by an interaction between the tBu and the SiMe3 groups and cannot ensure the nucleophilic assistance. The C1(allyl) carries the negative charge in order to induce a reaction with the carbon atom of the ethylene monomer and maintains the interaction with the metal center (nucleophilic assistance). This implies that ethylene moves away from the metal center and thus has a less activated C=C double bond (C=C = 1.40 Å vs. 1.42 Å in 1-Nd and in I). This charge localization effect allows decreasing the activation barrier compared to the case of complex 1-Nd.
In terms of thermodynamics, the alkyl product 4-E-relaxed is favorable (by 12.5 kcal mol-1), which, as pointed out above for 1-Nd, is related to the charge of the carbon atoms in the allyl ligand [C1(allyl) (1.03), C2(allyl) (0.23), C3(allyl) (1.15)] in 2-Nd. In this case, 4-E-relaxed is more stable by 4.2 kcal mol-1 compared to 4d-re-relaxed; therefore, ethylene would be preferentially inserted.
ii) Second styrene vs. ethylene insertion. Second insertions were computed after a 2,1-down-re styrene insertion. The corresponding energy profiles were calculated for the stationary (6-E) and migratory (6-E') ethylene insertions and for the 2,1-up-re (6u'-re) migratory styrene insertion (Figure 10). The results for the second insertion with 2-Nd are similar to those obtained for the two previous catalysts. Indeed, after a 2,1-down-re styrene insertion, migratory ethylene insertion is kinetically preferred (6u'-re/6-E' = 10.4 and 6-E/6-E' = 9.5 kcal mol-1).
These results are quite similar to those obtained for the 1-Nd and I catalysts, suggesting the formation of random copolymers. This conclusion is further strengthened by the results obtained for the third steps (Figures S40 andS41), as no selectivity was found, in line with the formation of random styrene-ethylene copolymers.
In summary, DFT calculations allowed us to rationalize the nature of the copolymer obtained as well as the influence of the substituents of the catalyst. The 1,3-trimethylsilyl substituents on the allyl ligand cause (i) a modification of the distribution of the charges on the allylic carbon atoms, which makes the first ethylene insertion product thermodynamically favorable, and (ii) an increase in the insertion barriers, related to the steric hindrance and the charge distribution. On the other hand, bulky 2,7-tert-butyl groups on the fluorenyl ligand tend to promote ethylene insertion for the second insertion. This is also related to a charge localization effect.
Finally, for the three catalytic systems studied, no modification in the nature of the obtained copolymer is observed, that is the formation of random styrene-ethylene copolymers with a high syndiotacticity in the PS sequences.
Conclusions
The performance of a series of allyl ansa-lanthanidocenes of the general formula {R2C(C5H4)(R'R'Flu)}Ln(1,3-C3H3(SiMe3)2)(THF)x was assessed in styrene-ethylene copolymerization. By using forcing copolymerization conditions, that is, a low catalyst loading and a relatively high temperature, a high productivity of 5,430 kg(sPSE)•mol(Nd)-1•h-1 was achieved with 2-Nd on a half-kilogram scale, which is comparable with the most active scandium half-sandwich complexes. [START_REF] Luo | Scandium Half-Metallocene-Catalyzed Syndiospecific Styrene Polymerization and Styrene-Ethylene Copolymerization: Unprecedented Incorporation of Syndiotactic Styrene-Styrene Sequences in Styrene-Ethylene Copolymers[END_REF] The sPSE copolymers thus obtained feature a random microstructure with single ethylene units distributed in highly syndiotactic PS sequences. The ethylene content, and thus the thermal properties of the materials, can be tuned by the initial comonomer feed.
Theoretical DFT studies allowed rationalizing the random nature of the obtained styrene-ethylene copolymers catalyzed by complexes I, 1-Nd and 2-Nd. The calculations showed that: (i) SiMe3 substituents on the allyl ligand have an influence on the nature of the first insertion product and notably on the stability of the ethylenic product, and (ii) those on the fluorenyl ligand either make the catalyst more ethylene-reactive at the second insertion (2,7-substitution) or block the reactivity (3,6-substitution); this last point is essential to explain the good productivity of catalyst 2-Nd for styrene-ethylene copolymerization.
Experimental Section
Typical procedure for bench-scale styrene-ethylene copolymerization. In a typical experiment (Table 1, entry 1), a 300 mL glass high-pressure reactor (TOP-Industrie) was charged with 50 mL of solvent (cyclohexane or n-dodecane) under argon flush and heated at the appropriate temperature by circulating water or oil in a double mantle. Under an ethylene flow, styrene (50 mL), a solution of (nBu)2Mg (0.5 mL of a 1.0 M solution in heptane) and a solution of pre-catalyst in toluene (ca. 43 mg in 2 mL) were introduced. The gas pressure in the reactor was set at 2 atm and kept constant with a back regulator, and the reaction medium was mechanically stirred. At the end of the polymerization, the reaction was cooled, vented, and the copolymer was precipitated in methanol (ca. 500 mL); after filtration, it was washed with methanol and dried under vacuum at 60 °C until constant weight.
Typical procedure for half-kg-scale styrene-ethylene copolymerizations in a closed reactor. In a typical experiment (Table 1, entry 18), a 1 L high-pressure reactor was charged with 500 mL of styrene (degassed under nitrogen, stored in the fridge on 13X molecular sieves and eluted through an alumina column prior to use) under nitrogen flush and heated at the appropriate temperature by circulating oil in a double mantle. An exact amount of ethylene was introduced in one shot in the reactor using an injecting system equipped with a pressure gauge, followed by a solution of (nBu)2Mg (2.5 mL of a 1.0 M solution in heptane)
and the pre-catalyst (ca. 45 mg). The reactor was closed and the reaction mixture was mechanically stirred. At the end of the polymerization, the reaction mixture was cooled, vented, and the copolymer was precipitated in isopropanol (ca. 2 L); after filtration, it was washed with isopropanol. Polymer samples were dried under vacuum in an oven heated at 200 °C.
Computational Details. The calculations were performed at the DFT level of theory using the hybrid functional B3PW91. 16,17 Neodymium was treated with a large-core Stuttgart-Dresden relativistic effective core potential (RECP) in which the 4f electrons are included in the core. The RECP was used in combination with its adapted basis set, augmented by a set of f polarization functions (α = 1.000). 18 A 6-31+G(d,p) double-ζ quality basis set was used for carbon and hydrogen atoms. The Si atoms were described with a Stuttgart-Dresden relativistic effective core potential in combination with its optimized basis set, with the addition of a d polarization function (α = 0.284). 19,20 Toluene was chosen as solvent. The model that was used to take into account solvent effects is the SMD solvation model. The solvation energies are evaluated by a self-consistent reaction field (SCRF) approach based on accurate numerical solutions of the Poisson-Boltzmann equation. 21 All the calculations were carried out with the Gaussian 09 program. 22 Electronic energies and enthalpies were calculated at T = 298 K. Geometry optimizations were computed without any symmetry constraints, and analytical frequency calculations were used to assess the nature of the extrema.
The connectivity of the optimized transition states was determined by performing Intrinsic Reaction Coordinate (IRC) calculations. Activation barriers ΔH# are defined depending on the sign of ΔHcoord (see Figure 11). 13 Electronic charges were obtained by using Natural Population Analysis (NPA). [START_REF] Reed | Intermolecular interactions from a natural bond orbital, donor-acceptor viewpoint[END_REF] NBO analysis [START_REF] Reed | Intermolecular interactions from a natural bond orbital, donor-acceptor viewpoint[END_REF] of the neodymium system was done by applying the method of Clark et al. [START_REF] Clark | DFT study of tris(bis(trimethylsilyl)methyl)lanthanum and samarium[END_REF]
Scheme 1. Allyl {Cp/Flu} ansa-lanthanidocenes used as single-component catalysts for styrene-ethylene copolymerization. 11
DSC measurements showed that the melting transition temperature and the glass transition temperature of those materials are closely related to the quantity of ethylene incorporated, decreasing almost linearly with it (Figure 1).
Figure 1. Melting (Tm) and glass (Tg) transition temperatures of sPSE materials prepared in this work, as a function of the ethylene content.
Figure 2. Aliphatic region of the 13C{1H} NMR spectra (125 MHz, 130 °C, C6H3Cl3/C6D6) of sPSE copolymers: (top) Table 1, entry 2; (bottom) 98 mol% styrene (entry 9).
Computational studies. In the previous study, 11 DFT calculations including a solvent model in the styrene homopolymerization catalyzed by {Me2C(C5H4)(Flu)}Nd(C3H5) (I), the putative {Me2C(C5H4)(Flu)}Nd(1,3-C3H3(SiMe3)2) (1-Nd) and the most effective [{Me2C(C5H4)(2,7-tBu2Flu)}Nd(1,3-C3H3(SiMe3)2)] (2-Nd) allowed identification of the factors which influence the styrene insertion according to the 2,1-pathway (which is the most favored mode). By using the method of Castro et al., 13 styrene and ethylene insertions were computed in order to evaluate the effectiveness of catalysts I, 1-Nd and 2-Nd in styrene-ethylene copolymerization and the topology of the obtained sPSE copolymer. At each step, the preference between ethylene and styrene insertions has been examined. Moreover, two chain-end stereocontrol mechanisms were also considered computationally. For the sake of clarity, the following definitions are considered: insertions that occur on the same enantiotopic site of coordination are denoted as the "stationary" mechanism whereas "migratory" insertions refer to the switch of coordination site at each step (Chart 1).
Chart 1. Nomenclature and orientation modes used for styrene insertion with respect to the ancillary ligand. In this representation, only down-re and up-si styrene coordination modes are depicted, corresponding to the enantiomer of the metal catalyst used for "stationary" insertions. The opposite configurations have been employed for "migratory" insertions, viz. down-si and up-re.
Figure 4. Energetic profiles for the first ethylene (black) and 2,1-down-re styrene (blue) insertions in {Me2C(C5H4)(Flu)}Nd(C3H5) (I).
Figure 5. Energetic profiles for the second ethylene (stationary, black, and migratory, red) insertions in {Me2C(C5H4)(Flu)}Nd(C3H5) (I).
Figure 6. Energetic profiles for the third insertions in {Me2C(C5H4)(Flu)}Nd(C3H5) (I), after a 2,1-down-re styrene first insertion and a migratory ethylene second insertion.
Figure 8. Energetic profiles for the third ethylene insertion in {Me2C(C5H4)(Flu)}Nd(C3H5) (I), after two styrene insertions according to the "stationary" mode. The third 2,1-down-re styrene insertion (the most stable found in the homopolymerization case) is plotted in blue.
Figure 9. Energetic profiles for the first ethylene (black) and 2,1-down-re styrene (blue) insertions in 2-Nd.
Figure 10. Energetic profiles for the second ethylene (stationary, black, and migratory, red) insertions in 2-Nd.
General considerations. All experiments were performed under a dry argon atmosphere, using a glovebox or standard Schlenk techniques. Complexes 1-Nd-K-allyl, 2-Nd-7-Nd, 2-Sc, 2-Y, 2-La, 2-Pr and 2-Sm were synthesized as reported before. 11 Cyclohexane and n-dodecane were distilled from CaH2 and stored over 3 Å MS. Styrene (Fisher Chemical, general purpose grade, stabilized with 10-15 ppm of tert-butylcatechol) was eluted through neutral alumina, stirred and heated over CaH2, vacuum-distilled and stored over 3 Å MS at -30 °C under argon. The (nBu)2Mg solution (1.0 M in heptane, Sigma-Aldrich) was used as received. Ethylene (Air Liquide, N35) was used without further purification.
Instruments and measurements. 13C{1H} NMR and GPC analyses of sPSE samples were performed at the research center of Total Raffinage-Chimie in Feluy (Belgium). 13C{1H} NMR analyses were run on a Bruker Avance III 500 MHz spectrometer equipped with a cryoprobe HTDUL in 10 mm tubes (1,2,4-trichlorobenzene/C6D6, 2:0.5 v/v). GPC analyses were performed in 1,2,4-trichlorobenzene at 135 °C using PS standards for calibration. Differential scanning calorimetry (DSC) analyses were performed on a Setaram DSC 131 apparatus, under continuous flow of helium and using aluminum capsules. Crystallization temperatures were measured during the first cooling cycle (10 °C/min), and glass and melting transition temperatures were measured during the second heating cycle (10 °C/min).
Figure 11. Definition of ΔH# depending on the sign of ΔHcoord. 13
If MgR2 is not introduced to scavenge the reaction medium, uncontrolled radical (thermally self-initiated) polymerization can take place (see ref 11).
Table 1. Styrene-ethylene copolymerizations catalyzed by 1-Nd-K-allyl, 2-Nd-7-Nd and 2-Sc, 2-La, 2-Sm, 2-Pr.a
Column headings: Entry; Complex; [St]0 [M]; [St]0/[Ln]; [Mg]/[Ln]; Tpolym (Tmax) [°C]; Time [min]; Ethylene (bar or g); Prod.b [kg•mol-1•h-1]; C2 inc.c [mol%]; Tm d [°C]; Tc d [°C]; Tg d [°C]; ΔHm d [J•g-1]; Mn×10^3 [g•mol-1] e; ÐM e; [r]5.
Figure 3. Methylene region of the 13 C{ 1 H} NMR spectrum (125 MHz, 130 °C, C6H3Cl3/C6D6) of a sPSE copolymer (98 mol% styrene; Table 1, entry 9).
(Peak assignments in Figures 2 and 3: Sαα, Sββ, Sαγ, Tββ, Tβδ and Tδδ carbons of SSSS, SSSE and SES sequences; hexads rrrrr, rrmrr, rrrrm, rrrmr.)
13 Castro, L.; Kirillov, E.; Miserque, O.; Welle, A.; Haspeslagh, L.; Carpentier, J.-F.; Maron, L. Are solvent and dispersion effects crucial in olefin polymerization DFT calculations? Some insights from propylene coordination and insertion reactions with group 3 and 4 metallocenes. ACS Catal. 2015, 5, 416-425.
14 Schultz, N. E.; Zhao, Y.; Truhlar, D. G. Benchmarking approximate density functional theory for s/d excitation energies in 3d transition metal cations. J. Comput. Chem. 2008, 29, 185-189.
15 Zhao, Y.; Truhlar, D. G. Density functionals with broad applicability in chemistry. Acc. Chem. Res. 2008, 41, 157-167.
16 Becke, A. D. Density-functional thermochemistry. III. The role of exact exchange. J. Chem. Phys. 1993, 98, 5648-5652.
17 Burke, K.; Perdew, J. P.; Wang, Y. In Electronic Density Functional Theory: Recent Progress and New Directions; Dobson, J. F., Vignale, G., Das, M. P., Eds.; Plenum: New York, 1998.
18 Dolg, M.; Stoll, H.; Savin, A.; Preuss, H. Energy-adjusted pseudopotentials for the rare earth elements. Theor. Chim. Acta 1989, 75, 173-194.
19 Bergner, A.; Dolg, M.; Küchle, W.; Stoll, H.; Preuss, H. Ab initio energy-adjusted pseudopotentials for elements of groups 13-17. Mol. Phys. 1993, 80, 1431-1441.
20 A set of d-polarization functions for pseudo-potential basis sets of the main group elements Al-Bi and f-type polarization functions for Zn, Cd, Hg. Chem. Phys. Lett. 1993, 208, 237-240.
The presence of atactic PS is likely a result of thermally self-initiated polymerization (Mayo's mechanism). Some of us have previously investigated this process in presence (or absence) of dialkylmagnesium reagents (see: Bogaert, S.; Carpentier, J.-F.; Chenal, T.; Mortreux, A.; Ricart, G. Macromol. Chem. Phys., 2000, 201, 1813-1822) under conditions which are comparable to those reported in the current manuscript, in particular with styrene purified the same way (it is noteworthy that thoroughly purified styrene is much less prone to radical polymerization as there is no "residual" initiator) and reactions performed in bulk styrene. We observed that, at 105 °C, only 18-20% of atactic PS formed after 5 h; with 5 mmol of MgR2 for 175 mmol of styrene, the resulting aPS had a high molecular weight (typically Mn = 150 kg.mol 1 , PDI = 2.4). The reactions reported in the current manuscript were conducted at higher temperatures, but over shorter reactions times. We hence did not expect the formation of significant amounts of atactic PS. This is corroborated by GPC analyses which showed monomodal traces with Mn values in the typical range 12-45 kg/mol; no significant presence of high MW PS was observed (see the Supp. Info.). Yet, the possible presence of a few % of aPS in the essentially sPS(E) materials cannot be discarded. Note that during homopolymerization of styrene with the same neodymocene catalysts, it was noted that if MgR2 is not | 37,583 | [
"1211343"
] | [
"43574",
"194938",
"309770"
] |
01766236 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01766236/file/ILP_2017_paper_60.pdf | Yin Jun Phua
email: [email protected]
Tony Ribeiro
email: [email protected]
Sophie Tourret
email: [email protected]
Katsumi Inoue
email: [email protected]
Learning Logic Program Representation for Delayed Systems With Limited Training Data
Keywords: dynamical systems, Boolean networks, attractors, learning from interpretation transition, delayed systems
Understanding the influences between components of dynamical systems such as biological networks, cellular automata or social networks provides insights to their dynamics. Influences of such dynamical systems can be represented by logic programs with delays. Logical methods that learn logic programs from observations have been developed, but their practical use is limited since they cannot handle noisy input and need a huge amount of data to give accurate results. In this paper, we present a method that learns to distinguish different dynamical systems with delays based on Recurrent Neural Network (RNN). This method relies on Long Short-Term Memory (LSTM) to extract and encode features from input sequences of time series data. We show that the produced high dimensional encoding can be used to distinguish different dynamical systems and reproduce their specific behaviors.
Introduction
Being able to learn the dynamics of an environment purely by observing has many applications. For example, in multi-agent systems where learning other agents' behavior without direct access to their internal state can be crucial for decision making [START_REF] Jennings | A roadmap of agent research and development[END_REF]. In system biology, learning the interaction between genes can greatly help in the creation of drugs to treat sicknesses [START_REF] Ribeiro | Learning multi-valued biological models with delayed influence from time-series observations[END_REF].
Problem Statement
Having an understanding of the dynamics of a system allows us to produce predictions of the system's behavior. Being able to produce predictions means that we can weigh between different options and evaluate their outcome from a given state without taking any action. In this way, learning about the dynamics of a system can aid in planning [START_REF] Martínez | Relational reinforcement learning for planning with exogenous effects[END_REF].
In most real world systems, we do not have direct access to the rules that govern the systems. What we do have, however, is the observation of the systems' state at a certain time step, or a series of observations if we look long enough. Therefore, the problem is to learn the dynamics of systems purely from the observations that we are able to obtain.
Several learning algorithms have been proposed that learn rules for a system, provided that the observations given cover every case that can happen within the system. However, in most real world systems, particularly in systems biology, obtaining data for even a short amount of time is difficult, time consuming and expensive. Therefore most current learning algorithms, while complete, are not practical in the biology setting. In addition to that, most real world observations that can be obtained are often full of random noise. Therefore, dealing with noise is also an integral part of solving this problem. The focus of this paper is therefore on being able to learn the rules despite some of the rules not having manifested in the observations. We also consider the setting in which actions from past states are able to have a delayed influence on the current state. In addition, our proposed model can also deal with noise within the data, which no previous approach dealt with, as shown in the experiments section.
Proposed Approach
In this paper, we propose an approach to this problem utilizing Recurrent Neural Networks (RNN) to learn a logic program representation from a series of boolean state transitions. Our method is based on a framework called Learning from Interpretation Transition (LFIT) [START_REF] Inoue | Learning from interpretation transition[END_REF]. LFIT is an unsupervised learning algorithm, that can learn logic programs describing fully the dynamics of the system, purely by observing state transitions. In our approach, we construct two neural networks, one for encoding the observed state transitions, and another one of which to produce the logic program representation for the system. The idea behind this is that given a series of state transitions with a large enough length, it should be possible to uniquely identify the system. Therefore we can transform this into a classification problem, in which we attempt to classify which logic program a specific series of state transition belongs to. Neural networks are known to be good at performing classification, which makes them suitable tools for our proposed approach.
Our proposed approach works well even with a limited amount of data. This is possible because the neural network used in our model is not trained to model the dynamical system itself, but rather to output a classification of different systems. Therefore, it can be trained on artificial data prior to being applied to real data. Thus it is easy to see that the amount of data obtained has no direct relation with the performance of our model.
The rest of the paper is organized as follows. We cover some of the prior researches in Section 2, following by introducing the logical and neural network background required in Section 3. Then we present the RNN-LFIT approach in Section 4. We pursue by presenting an experimental evaluation demonstrating the validity of an approach in Section 5 before concluding the paper in Section 6.
Related Work
Standard LFIT
One way of implementing the LFIT algorithm is by relying on a purely logical method. In [START_REF] Inoue | Learning from interpretation transition[END_REF], such an algorithm is introduced. It constructs an NLP by doing bottom-up generalization for all the positive examples provided in the input state transition. An improved version of this algorithm, utilizing binary decision diagrams as internal data structures, was introduced in [START_REF] Ribeiro | A BDD-based algorithm for learning from interpretation transition[END_REF]. These methods, while proven to be theoritically correct, generate rules from every positive examples. The resulting NLP has been proven to be non-minimal, and thus not very humanfriendly. To allow practical use of the resulting NLP, a method for learning minimal NLP was introduced in [START_REF] Ribeiro | Learning prime implicant conditions from interpretation transition[END_REF]. In [START_REF] Ribeiro | Learning delayed influences of biological systems[END_REF], an algorithm that learns delayed influences, that is cause/effect relationship that may be dependent on the previous k time steps, is introduced. Another recent development in the prolongation of the logical approach to LFIT is the introduction of an algorithm which deals with continuous values [START_REF] Ribeiro | Inductive learning from state transitions over continuous domains[END_REF].
This class of algorithms that utilizes logical methods, are proven to be complete and sound, however a huge disadvantage with these methods is that the resulting NLP is only representable of the observations that have been fed to the algorithm thus far. Any observations that did not appear in the input, will be predicted as either to be always true or always false depending on the algorithm used.
NN-LFIT
To deal with the shortcomings stated in the previous paragraph, an algorithm that utilizes neural networks (NN) was proposed [START_REF] Gentet | Learning from interpretation transition using feed-forward neural network[END_REF]. This method starts by training a feed-forward NN to model the system that is being observed. The NN, when fully trained, should predict the next state of the system when provided with the current state observation. Then, there is a pruning phase where weak connections inside the NN are removed in a manner that doesn't affect the prediction accuracy. After the pruning phase, the algorithm extracts rules from the network based on the remaining connections within the NN. To do so, a truth table is constructed for each variable. The truth table contains variables only based on observing the connections from the outputs to the inputs of the trained and pruned NN. A simplified rule is then constructed from each truth table. In [START_REF] Gentet | Learning from interpretation transition using feed-forward neural network[END_REF], it is shown that despite reducing the amount of training data, the resulting NLP is still surprisingly accurate and representative of the observed system. However, this approach does not deal with systems that have inherent delays.
Other NN-based Approaches
There are also several other approaches attempting to tie NNs with logic programming [START_REF] Garcez | Symbolic knowledge extraction from trained neural networks: A sound approach[END_REF][START_REF] Garcez | The connectionist inductive learning and logic programming system[END_REF]. In [START_REF] Garcez | Symbolic knowledge extraction from trained neural networks: A sound approach[END_REF], the authors propose a method to extract logical rules from trained NNs. The method proposed deals directly with the NN model, and thus imposes some restrictions on the NN architecture. In particular, it was not made to handle delayed influences in the system. In [START_REF] Garcez | The connectionist inductive learning and logic programming system[END_REF], a method for constructing NNs from logic program is proposed, along with a method for constructing RNNs. However this approach requires background knowledge, or a certain level of knowledge about the observed system (such as an initial NLP to improve on) before being applicable.
In [START_REF] Khan | Construction of gene regulatory networks using recurrent neural networks and swarm intelligence[END_REF], the authors proposed a method for constructing models of dynamical systems using RNNs. However, this approach suffers from its important need of training data, which increases exponentially as the number of variables grow. This is a well-known computational problem called the curse of dimensionality [START_REF] Donoho | High-dimensional data analysis: The curses and blessings of dimensionality[END_REF].
In contrast to these methods, the method proposed in this paper does not assume there exists a direct relation between the trained RNN model and the observed system. Our model aims at classifying a series of state transition to the system that generated it, whereas each of the NN based approaches listed above aims to train a NN model that predicts the next state of the observed system.
Background
LFIT
The main goal of LFIT is to learn a normal logic program (NLP) describing the dynamics of the observed system. NLP is a set of rules of the form
A ← A 1 ∧ A 2 • • • ∧ A m ∧ ¬A m+1 ∧ • • • ∧ ¬A n (1)
where A and A i are propositional atoms, n ≥ m ≥ 0. ¬ and ∧ are the symbols for logical negation and conjunction. For any rule R of the form 1, the atom A is called the head of R and is denoted as h(R). The conjunction to the right of ← is called the body of R. We represent the set of literals in the body of R as b(R) = {A 1 , . . . , A m , ¬A m+1 , . . . , ¬A n }. The set of all propositional atoms that appear in a particular Boolean system is denoted as the Herbrand base B.
An Herbrand interpretation I is a subset of B. For a logic program P and an Herbrand interpretation I, the immediate consequence operator (or T_P operator) is the mapping T_P : 2^B → 2^B:
T_P(I) = {h(R) | R ∈ P, b^+(R) ⊆ I, b^-(R) ∩ I = ∅},   (2)
where b^+(R) and b^-(R) denote the sets of atoms occurring positively and negatively in b(R).
Given a set of Herbrand interpretations E and {T P (I) | I ∈ E}, the LFIT algorithm outputs a logic program P which completely represents the dynamics of E.
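As an illustration, a naive propositional implementation of the T_P operator (our own sketch, not the implementation used in the LFIT literature) can be written as:

    # A rule is (head, positive_body, negative_body); an interpretation is a set of atoms.
    def t_p(program, interpretation):
        return {head
                for (head, pos, neg) in program
                if pos <= interpretation and not (neg & interpretation)}

    # Example: a <- b, not c ;  b <- (fact)
    program = [("a", {"b"}, {"c"}), ("b", set(), set())]
    print(t_p(program, {"b"}))  # {'a', 'b'}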
In the case of Markov(k) systems (i.e. systems with delayed effects of at most k time steps), we can define the timed Herbrand base of a logic program P , denoted by B k , as follows:
B_k = ⋃_{i=1}^{k} { v_{t-i} | v ∈ B },   (3)
where t is a constant term which represents the current time step. Given a Markov(k) system S, if all rules R ∈ S are such that h(R) ∈ B and every atom appearing in b(R) belongs to B_k, then we represent S as a logic program P with Herbrand base B_k. A trace of execution T of S is a finite sequence of states of S, defined as T = (x_0, ..., x_n), n ≥ 1, x_i ∈ 2^B. Thus a k-step interpretation transition is a pair (I, J) where I ⊆ B_k and J ⊆ B.
Neural Network
A multi-layer perceptron (MLP) is a type of feed-forward neural network. An MLP usually consists of one input layer, one or more hidden layer and an output layer. Each layer is fully connected, and the output layer is activated by a non-linear function. MLPs can be trained using backpropagation by gradient descent. The other neural network that we use to learn the system's dynamics is Long Short-Term Memory (LSTM) [START_REF] Hochreiter | Long short-term memory[END_REF]. LSTM is a form of RNN that, contrary to earlier RNNs, can learn long term dependencies and do not suffer from the vanishing gradient problem. It has been popular in many sequence to sequence mapping application such as machine translation [START_REF] Sutskever | Sequence to sequence learning with neural networks[END_REF]. An LSTM consists of a memory cell for each time step, and each memory cell has an input gate i t , an output gate o t and a forget gate f t . When a sequence of n X time steps X = {x 1 , x 2 , . . . , x n X } is given as input, LSTM calculates the following for each time step:
i_t = σ( W_i · [h_{t-1}, x_t] )
f_t = σ( W_f · [h_{t-1}, x_t] )
o_t = σ( W_o · [h_{t-1}, x_t] )
l_t = tanh( W_l · [h_{t-1}, x_t] )
c_t = f_t • c_{t-1} + i_t • l_t
h_t = o_t • c_t
where W is a weight matrix, h t is the output of each memory cell, c t is the hidden state of each memory cell and l t is the input to each memory cell. σ is the sigmoid function. The input gate decides how much of the input influences the hidden state. The forget gate decides how much of the past hidden state influences the current hidden state. The output gate is responsible for deciding how much of the current hidden state influences the output. A visual illustration of a single LSTM memory cell is shown in Figure 1.
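For concreteness, a single step of such a memory cell can be sketched in NumPy as follows (an illustrative re-implementation of the equations above, not the code used in our experiments):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W):
        # W maps the concatenated [h_{t-1}, x_t] to the stacked pre-activations of
        # the input, forget and output gates and of the cell input l_t.
        z = W @ np.concatenate([h_prev, x_t])
        n = h_prev.shape[0]
        i_t, f_t, o_t = (sigmoid(z[k * n:(k + 1) * n]) for k in range(3))
        l_t = np.tanh(z[3 * n:4 * n])
        c_t = f_t * c_prev + i_t * l_t     # forget part of the past state, add new input
        h_t = o_t * c_t                    # output gate modulates the exposed state
        return h_t, c_t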
LSTM networks can be trained by performing backpropagation through time (BPTT) [START_REF] Graves | Framewise phoneme classification with bidirectional lstm and other neural network architectures[END_REF]. In BPTT, the LSTM is trained by unfolding across time steps, and then performing gradient descent to update the weights, as illustrated in Figure 2. A direct consequence of BPTT is that the LSTM can only be trained on fixed-length data. One way of overcoming this is by using truncated BPTT [START_REF] Williams | An efficient gradient-based algorithm for on-line training of recurrent network trajectories[END_REF]. In truncated BPTT, the sequence is truncated into subsequences, and backpropagation is performed on the subsequences.
It can easily be seen that the connections in an LSTM model are complex, and it can be very complicated to attempt to extract or derive relations from the inner architecture of the network. Therefore we forgo the approach of extracting rules from the model, and propose a different method which instead utilizes the LSTM to classify the different inputs depending on the system that generated them.
Model
In this section, we propose an architecture for performing LFIT. It consists of an encoder and decoder for the state transitions, and a neural network for performing LFIT. A visualization of the architecture is shown in Figures 3 and 4. The input for the whole model is the sequence of state transitions obtained from observing the target system. The output of the model is an encoding of an approximation of the logic program representation in matrix form. However, we will be performing the evaluation of the model based on the predicted state.
Fig. 3: A visualization of the architecture, where an encoder LSTM receives a series of state vectors and encodes them into a single vector; the logic program representation matrix is then multiplied with this vector to produce a vector, which is decoded by an MLP into the predicted state.
Given a series of state transitions X T = (x 1 , x 2 , . . . , x T ), where x t ∈ [0, 1] represents the state of the system at time t, our goal is to predict x T +1 . Note that to be able to deal with noise and continuous values, we are not restricting the domain of x t to Z 2 . If we obtain a representation of X T in the form of a vector x, we can learn a matrix P, with which we can perform matrix multiplication as Px = x T +1 . This can be thought of as performing the T P operator in algebraic space.
The training objective function of the model is defined as:
min_W  (1/n) Σ_{i=1}^{n} ( x^{(i)}_{T+1} - y^{(i)}_{T+1} )² + λ ||W||_2^2,   (4)
where W is the set of neural network weights, x_{T+1} is the prediction of the model, y_{T+1} is the true state, and ||W||_2^2 is the weight decay regularization term [START_REF] Krogh | A simple weight decay can improve generalization[END_REF] with hyperparameter λ.
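In TensorFlow r1.x style, this objective amounts to a mean squared error plus an L2 penalty on the weights; the sketch below uses placeholder names of our own choosing rather than the exact variable names of our implementation.

    import tensorflow as tf

    # `predicted` and `target` are (batch, num_vars) tensors; `weights` is the list
    # of trainable variables of the encoder, LFIT network and decoder.
    def lfit_loss(predicted, target, weights, lam=0.2):
        mse = tf.reduce_mean(tf.square(predicted - target))
        l2 = tf.add_n([tf.nn.l2_loss(w) for w in weights])
        return mse + lam * l2

    # train_op = tf.train.AdamOptimizer().minimize(lfit_loss(predicted, target, weights))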
The input state transition is fed to both the encoder and the LFIT model, as can be seen in the figure. We describe the responsibilities of the three neural network models in the following sections.
Fig. 4: A visualization of the LSTM model that is responsible for performing the LFIT. It receives as input a matrix p_0 and a series of state vectors x_1, ..., x_t, and outputs a matrix.
Autoencoder
The autoencoder for the input sequences is responsible for encoding discrete time series into a feature vector that can later be manipulated by the neural network. This sequence of vectors is then encoded into one feature vector of dimension 2 × k a × l a , where k a denotes the number of memory cell units in the autoencoder LSTM and l a denotes the number of LSTM layers. This amount is doubled because both c and h, which represent the state of the memory cell, are considered.
LFIT Network
This LSTM network can be thought of as performing LFIT. This network takes as input the state transitions and an initial program encoding and outputs a program encoding that is consistent with the observations, which is the same as the definition of the LFIT algorithm. Although in practice, this network is responsible for classifying the series of state transition to the corresponding logic program representation.
The produced output is the representation of the normal logic program. The observations are the same input sequence as that given to the autoencoder. The dimensions of the matrix output by this network is
(2 × l l × k l , 2 × l a × k a ),
where k l denotes the number of memory cell units in this network and l l denotes the number of layers.
In this work, the initial program is always set to ∅ and the LSTM network is trained to produce the complete normal logic program representation. In future work, it could be easily extended so as to accept background knowledge.
Decoder
The decoder is responsible for mapping the product of the NLP matrix and the state transition vector into a state vector that represents the predicted following state. The decoder can theoretically be any function that maps a continuous vector into a binary vector. We detail the model used in Section 4.4.
The goal of the architecture is to produce an encoding of past states, and an encoding of a normal logic program, that can then be multiplied together to predict the next state transition. This multiplication is a matrix × vector multiplication and produces a vector of R n where n is the number of features in the logic program representation. This can be thought of as performing the T p operator within linear geometric space. A MLP then decodes this vector into the desired boolean state vector.
With the encoding of the state transition and an initial program, the LFIT network learns to produce an encoded program based on the observed state transitions. This encoded program can then be used for prediction, and in future work we plan to decode it into a normal logic program thus making it possible to reason with it.
Model Details
In our experiment, the autoencoder takes a series of 10 state transitions, where each state is a 10 dimensional vector which represents the state of each variable within the system. The autoencoder LSTM model we trained has 2 layers, each with 512 memory cell units. The produced state representation is then multiplied by a (2 × 2 × 512, 128) matrix, to produce a 128 dimension feature vector that represents the series of state transitions.
The LFIT model takes the same input as the encoder model, but the LSTM model has 4 layers, 4 being the dimension of the resulting feature vector for the predicted state, and has 1,024 hidden units which is twice the number of hidden units of the autoencoder model. The produced logic program representation is then transformed into (4, 128) matrix by multiplying it with a (2 × 4 × 1024, 4 × 128) matrix and then reshaping.
The decoder model takes the resulting feature vector for the predicted state, which is a vector of 4 dimensions, and outputs a vector of 10 dimensions, each dimension representing the state of one variable within the system. The decoder model consists of an MLP with 1 hidden layer, where each layer has 8 hidden units. Each hidden layer is activated by ReLU (Rectified Linear Unit), a function that outputs 0 for all inputs less than 0 and is linear for inputs larger than 0. The final output layer is activated by a sigmoid function, defined as σ(x) = 1/(1 + exp(-x)). The sigmoid function has a range of [0, 1], which is suitable for our use, where we want the MLP to output a boolean vector with noise. The decoder model is kept simple to avoid overfitting in the decoder, which would prevent the LFIT model and the encoder model from learning.
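Putting the three parts together, the forward pass can be sketched as follows; the dimensions are those given in the text, while the layer wiring and helper names are our own illustrative choices rather than the exact implementation.

    import tensorflow as tf

    def forward(states):                      # states: (batch, 10, 10) sequence of 10-dim states
        # Encoder: 2-layer LSTM with 512 units; its final (c, h) states form a
        # 2*2*512 vector that is projected to a 128-dim transition encoding.
        enc_cells = tf.nn.rnn_cell.MultiRNNCell(
            [tf.nn.rnn_cell.LSTMCell(512) for _ in range(2)])
        _, enc_state = tf.nn.dynamic_rnn(enc_cells, states, dtype=tf.float32, scope="enc")
        enc_vec = tf.concat([tf.concat(s, axis=1) for s in enc_state], axis=1)   # (batch, 2048)
        x = tf.layers.dense(enc_vec, 128)                                        # (batch, 128)

        # LFIT network: 4-layer LSTM with 1024 units, projected and reshaped into
        # the (4, 128) logic program representation matrix P.
        lfit_cells = tf.nn.rnn_cell.MultiRNNCell(
            [tf.nn.rnn_cell.LSTMCell(1024) for _ in range(4)])
        _, lfit_state = tf.nn.dynamic_rnn(lfit_cells, states, dtype=tf.float32, scope="lfit")
        lfit_vec = tf.concat([tf.concat(s, axis=1) for s in lfit_state], axis=1) # (batch, 8192)
        P = tf.reshape(tf.layers.dense(lfit_vec, 4 * 128), [-1, 4, 128])

        # "T_P in algebraic space": P x, then the small MLP decoder (8 ReLU units, sigmoid output).
        px = tf.squeeze(tf.matmul(P, tf.expand_dims(x, -1)), -1)                 # (batch, 4)
        hidden = tf.layers.dense(px, 8, activation=tf.nn.relu)
        return tf.layers.dense(hidden, 10, activation=tf.nn.sigmoid)             # predicted state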
Evaluation
We applied our model to learn the dynamics of Boolean networks from continuous time series. The Boolean network used in this experiment is adapted from Dubrova and Teslenko [START_REF] Dubrova | A sat-based algorithm for finding attractors in synchronous boolean networks[END_REF] and represents the cell cycle regulation of mammalians. The Boolean network is first encoded as a logic program. Each dataset represents a time series generated from an initial state vector of continuous values. The performance of the model is measured by taking the root mean-squared error (RMSE) between the predicted state and the true subsequent state. RMSE is defined as following:
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (ŷ_i - y_i)² ),   (5)
where ŷi denotes the predicted value and y i is the actual value.
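In code this is a direct transcription of Eq. (5), for instance:

    import numpy as np

    def rmse(y_pred, y_true):
        y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
        return np.sqrt(np.mean((y_pred - y_true) ** 2))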
The initial state vector is generated by giving each of the 10 variables a random value between 0 and 1. Generated states are then mapped back to real values: 0 becomes 0.25 + ε and 1 becomes 0.75 + ε, where ε ∈ (-0.25, 0.25), chosen randomly, simulates the measurement noise. We used the following training parameters for our experiment:
- Training steps: 10^4
- Batch size: 100
- Gradient descent optimizer: Adam; learning rate and various other parameters are left at the defaults for Tensorflow r1.2
- Dropout: probability of 0.3 per training step
- Regularization hyperparameter λ of 0.2
The model was implemented on Tensorflow r1.2 [START_REF] Abadi | TensorFlow: Large-scale machine learning on heterogeneous systems[END_REF], and all experiments were done on Intel Xeon E5-2630 with 64 GiB of RAM and GTX 1080 Ti.
Training data is generated randomly by first randomly generating logic rules and grouping them together as NLPs. Then the initial state is set as the zero vector, and we continuously apply the T_P operator to generate all the consequent states. Variables referring to delays before the initial state are assumed to be 0. In order to ensure effective training of the model, we only train on data that varies a lot. We do so by calculating the standard deviation of all states that are generated from a certain NLP, and only keeping those with standard deviation greater than or equal to 0.4. We show some of the accepted NLPs in Table 3.
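A possible generation loop, reusing the T_P idea sketched in the background section and a hypothetical helper for sampling random delayed rules (the exact sampling distribution used in our experiments may differ), is:

    import random
    import numpy as np

    VARS = list("abcdefghij")  # 10 variables, delays up to k = 5

    def random_rule(k=5, max_body=9):
        candidates = [(v, d) for v in VARS for d in range(1, k + 1)]
        body = random.sample(candidates, random.randint(1, max_body))
        signs = [random.random() < 0.2 for _ in body]          # mostly negative literals
        return (random.choice(VARS),
                {lit for lit, pos in zip(body, signs) if pos},
                {lit for lit, pos in zip(body, signs) if not pos})

    def trajectory(program, length=500, k=5):
        history = [set() for _ in range(k)]   # delays before the initial state are 0
        states = []
        for _ in range(length):
            window = {(v, d) for d in range(1, k + 1) for v in history[-d]}
            nxt = {h for (h, pos, neg) in program
                   if pos <= window and not (neg & window)}
            history.append(nxt)
            states.append([1.0 if v in nxt else 0.0 for v in VARS])
        return np.array(states)

    def acceptable(states):
        # simplified reading of the acceptance criterion used in the experiments
        return states.std() >= 0.4

    def add_noise(states):
        eps = np.random.uniform(-0.25, 0.25, states.shape)
        return np.where(states > 0.5, 0.75, 0.25) + eps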
Here, we consider two methods for training the model: one by training the model on data without noise, that is, training data strictly in Z_2; the other by training on data with added noise. Each model is trained with 50 acceptable NLPs, generating 500 data points from each NLP, and training for a total of 4 hours. We evaluate each method in the following section.
Results
Table 1 shows the RMSE of the prediction made by the proposed model. Each dataset represents 50 datapoints from the same NLP, generated from different initial states. The results show that there is little difference in accuracy between the datasets with and without noise, which shows the robustness of our model to the presence of noise in the input. In Table 2, we show the performance of the model trained on data with noise. Comparing with Table 1, the presence of noise in the training data does not affect the performance: both models are equally robust in dealing with noise in the test data. The results obtained appear a little bit skewed due to being produced from the same system. We are planning to test the model on various other datasets when we can get access to them.
Figure 5 shows the learned representations for 8 different randomly generated NLPs, visualized with principal component analysis (PCA). PCA is a popular technique for visualizing high-dimensional embeddings [START_REF] Wold | Principal component analysis[END_REF]. As in the previous experiment, the model is fed with state transitions generated from the NLPs. The logic representation obtained from our model is a 4 × 128 matrix. The graph in Figure 5 is obtained by applying PCA to these matrices, which extracts the 3 directions that separate the data the most. Each dot in the graph is a representation learned separately from different state transitions of a logic program. Note that learned representations from different logic programs are clearly separated. Each dot plotted on the graph is in fact several overlapping dots, corresponding to different initial states generated from the same NLP. The apparent overlap between NLP 7 and NLP 8 is due to the 2D projection of a 3D graph.
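One plausible reading of this projection step, sketched with scikit-learn's PCA: each learned 4 × 128 representation is flattened and the whole collection is projected to 3 components. The placeholder data and names are illustrative.

```python
# Project learned 4 x 128 logic representations to 3 principal components (illustrative).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
representations = [rng.normal(size=(4, 128)) for _ in range(8 * 10)]  # placeholder data

X = np.array([r.ravel() for r in representations])  # one 512-dim row per learned representation
coords = PCA(n_components=3).fit_transform(X)       # 3D coordinates used for the scatter plot
```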
In this experiment, we observe that the model is able to identify the dynamics of the system solely from a sequence of state transitions. We further expect that the accuracy of the predictions can be improved by tweaking the neural network architecture.

6 Conclusion and Future Work

In this paper we propose a method for learning a matrix representation of dynamical systems with delays. One of the interesting aspects of this approach is that it produces a logic program representation in matrix form which, when multiplied with a feature vector of the past states, computes a vector that represents the predicted state. This could lead to future work such as reasoning and performing induction purely in the algebraic space.
The main contribution of this work is to devise a method for modeling systems where only limited amounts of data can be collected. Without a sufficient amount of data, purely logical methods cannot provide useful information, and attempts at training neural networks to model the system result in overfitting. We therefore speculate that generating artificial data in order to train a more generalized neural network may be a more successful approach in such cases. We also showed that the devised method is resilient to noise, which purely logical methods are not able to handle.
As future work, we are planning to adapt the current method to take a partial program as background knowledge to the network, and to decode the NLP representation into logical form so that humans can reason with it. We also hope to compare the predictions made by this model against other similar models.

Table 3: Example NLPs that are randomly generated and used for training

a t ← f t-5 ∧ ¬d t-4 ∧ ¬i t-1 ∧ ¬g t-1 ∧ ¬g t-4 ∧ ¬d t-1
b t ← ¬d t-1 ∧ ¬d t-5
c t ← ¬b t-1
d t ← ¬c t-1 ∧ ¬i t-5 ∧ ¬f t-3 ∧ ¬c t-2 ∧ ¬i t-1 ∧ ¬h t-1 ∧ ¬a t-1 ∧ ¬d t-3 ∧ ¬d t-5
e t ← e t-2 ∧ ¬e t-1 ∧ ¬a t-3 ∧ ¬f t-4 ∧ ¬j t-5
f t ← b t-2 ∧ g t-1 ∧ h t-5 ∧ ¬i t-2 ∧ ¬f t-2
g t ← ¬d t-1 ∧ ¬g t-1
h t ← ¬i t-1
i t ← ¬e t-4 ∧ ¬j t-1 ∧ ¬d t-2 ∧ ¬g t-5 ∧ ¬c t-2 ∧ ¬i t-5 ∧ ¬g t-3 ∧ ¬j t-2 ∧ ¬i t-1
j t ← b t-3 ∧ c t-4 ∧ ¬j t-2 ∧ ¬c t-3

a t ← b t-3 ∧ g t-2 ∧ f t-4 ∧ j t-4 ∧ ¬c t-5 ∧ ¬e t-2 ∧ ¬a t-4 ∧ ¬h t-3 ∧ ¬i t-3 ∧ ¬h t-1 ∧ ¬e t-3 ∧ ¬c t-1 ∧ ¬c t-2 ∧ ¬a t-5
b t ← ¬e t-4 ∧ ¬c t-3 ∧ ¬i t-3 ∧ ¬f t-3 ∧ ¬b t-2 ∧ ¬i t-5 ∧ ¬i t-3 ∧ ¬a t-5 ∧ ¬f t-5
c t ← ¬c t-3 ∧ ¬b t-1 ∧ ¬c t-5 ∧ ¬j t-2 ∧ ¬b t-5 ∧ ¬i t-2 ∧ ¬a t-5 ∧ ¬b t-3
d t ← ¬e t-4 ∧ ¬a t-5 ∧ ¬e t-4
e t ← ¬g t-1
f t ← h t-1 ∧ e t-3 ∧ c t-3 ∧ ¬a t-2 ∧ ¬g t-4
g t ← ¬f t-5
h t ← ¬e t-5
i t ← ¬j t-3 ∧ ¬a t-5 ∧ ¬i t-4
j t ← ¬g t-5 ∧ ¬e t-5 ∧ ¬d t-1

a t ← ¬e t-2
b t ← ¬b t-4
c t ← j t-1 ∧ f t-1 ∧ f t-2 ∧ d t-1 ∧ h t-5 ∧ ¬g t-3 ∧ ¬c t-5
d t ← ¬g t-3 ∧ ¬b t-5 ∧ ¬c t-3 ∧ ¬b t-5 ∧ ¬j t-3 ∧ ¬h t-2 ∧ ¬f t-5 ∧ ¬d t-2 ∧ ¬c t-5
e t ← g t-2 ∧ g t-4 ∧ f t-5 ∧ j t-3 ∧ e t-1 ∧ ¬j t-1 ∧ ¬a t-1 ∧ ¬f t-1 ∧ ¬e t-4
f t ← f t-5 ∧ b t-5 ∧ g t-5 ∧ ¬j t-2 ∧ ¬c t-5 ∧ ¬i t-5 ∧ ¬g t-4 ∧ ¬g t-5 ∧ ¬f t-2 ∧ ¬f t-3 ∧ ¬h t-4
g t ← a t-2 ∧ d t-3 ∧ ¬g t-2 ∧ ¬c t-3
h t ← ¬j t-5 ∧ ¬e t-4 ∧ ¬g t-5 ∧ ¬f t-1
i t ← ¬e t-4
j t ← ¬i t-5
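To make the semantics of the delayed rules in Table 3 concrete, here is a small illustrative evaluator (not the authors' code): a rule fires at time t when every positive body literal held, and every negated literal did not hold, at the indicated delays.

```python
# Evaluate one delayed rule against a state history (illustrative encoding).
def holds(history, var, delay, t):
    """Value of `var` at time t - delay; states before t = 0 are taken as 0."""
    return history[t - delay][var] if t - delay >= 0 else 0

def eval_rule(history, t, positives, negatives):
    """positives/negatives are lists of (variable, delay) pairs."""
    return (all(holds(history, v, d, t) == 1 for v, d in positives) and
            all(holds(history, v, d, t) == 0 for v, d in negatives))

# Example: the rule  j_t <- b_{t-3} AND c_{t-4} AND NOT j_{t-2} AND NOT c_{t-3}
rule_j = ([("b", 3), ("c", 4)], [("j", 2), ("c", 3)])
```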
Fig. 1: An LSTM memory cell
Fig. 2: Unfolding of an LSTM network for BPTT training
Fig. 4: A visualization of the LSTM model that is responsible for performing the LFIT. It receives as input a matrix p_0, a series of state vectors x_1, ..., x_t, and outputs a matrix.
Fig. 5: PCA plot of the learned representation for NLPs based on input time series
Table 1: Results of the RMSE of the prediction made by the proposed model trained on non-noisy data on various datasets

Dataset   RMSE (Original)   RMSE (Noisy)
1         0.27              0.28
2         0.27              0.28
3         0.26              0.26
4         0.27              0.26
5         0.27              0.28
6         0.27              0.27
7         0.27              0.28
8         0.27              0.28
9         0.27              0.27
10        0.27              0.27
Table 2: Results of the RMSE of the prediction made by the proposed model trained on noisy data on various datasets

Dataset   RMSE (Original)   RMSE (Noisy)
1         0.27              0.28
2         0.27              0.27
3         0.27              0.28
4         0.27              0.28
5         0.28              0.28
6         0.27              0.28
7         0.28              0.28
8         0.27              0.27
9         0.27              0.27
10        0.27              0.27
| 30,493 | [
"1028387",
"770524"
] | [
"473973",
"1041966",
"6501"
] |
01766300 | en | [
"math"
] | 2024/03/05 22:32:13 | 2021 | https://hal.science/hal-01766300/file/Lovignenko_Hermite_10avril18.pdf | Karine Beauchard
email: [email protected]
Philippe Jaming
email: [email protected]
Karel Pravda-Starov
email: [email protected]
Karel Pravda
SPECTRAL INEQUALITY FOR FINITE COMBINATIONS OF HERMITE FUNCTIONS AND NULL-CONTROLLABILITY OF HYPOELLIPTIC QUADRATIC EQUATIONS
Keywords: 2010 Mathematics Subject Classification. 93B05, 42C05, 35H10 Uncertainty principles, Logvinenko-Sereda type estimates, Hermite functions, Null-controllability, observability, quadratic equations, hypoellipticity, Gelfand-Shilov regularity
Spectral inequality for finite combinations of Hermite functions and null-controllability of hypoelliptic quadratic equations
Introduction
The classical uncertainty principle was established by Heisenberg. It points out the fundamental problem in quantum mechanics that the position and the momentum of particles cannot be both determined explicitly, but only in a probabilistic sense with a certain uncertainty. More generally, uncertainty principles are mathematical results that give limitations on the simultaneous concentration of a function and its Fourier transform. When using the following normalization for the Fourier transform (1.1)
f̂(ξ) = ∫_{R^n} f(x) e^{−i x·ξ} dx,  ξ ∈ R^n,
the mathematical formulation of Heisenberg's uncertainty principle can be stated in a directional version as follows
(1.2)  inf_{a∈R} ∫_{R^n} (x_j − a)^2 |f(x)|^2 dx · inf_{b∈R} (1/(2π)^n) ∫_{R^n} (ξ_j − b)^2 |f̂(ξ)|^2 dξ ≥ (1/4) ‖f‖^4_{L^2(R^n)},
for all f ∈ L 2 (R n ) and 1 ≤ j ≤ n, and shows that a function and its Fourier transform cannot both be arbitrarily localized. Moreover, the inequality (1.2) is an equality if and only if f is of the form f (x) = g(x 1 , ..., x j-1 , x j+1 , ..., x n )e -ibx j e -α(x j -a) 2 , where g is a function in L 2 (R n-1 ), α > 0, and a and b are real constants for which the two infima in (1.2) are achieved. There are various uncertainty principles of different nature.
We refer in particular the reader to the survey article by Folland and Sitaram [START_REF] Folland | The uncertainty principle: a mathematical survey[END_REF], and the book of Havin and Jöricke [START_REF] Havin | The uncertainty principle in harmonic analysis[END_REF] for detailed presentations and references for these topics. Another formulation of uncertainty principles is that a non-zero function and its Fourier transform cannot both have small supports. For instance, a non-zero L 2 (R n )-function whose Fourier transform is compactly supported must be an analytic function with a discrete zero set and therefore a full support. This leads to the notion of weak annihilating pairs as well as the corresponding quantitative notion of strong annihilating pairs: Definition 1.1 (Annihilating pairs). Let S, Σ be two measurable subsets of R n .
- The pair (S, Σ) is said to be a weak annihilating pair if the only function f ∈ L^2(R^n) with supp f ⊂ S and supp f̂ ⊂ Σ is the zero function f = 0.
- The pair (S, Σ) is said to be a strong annihilating pair if there exists a positive constant C = C(S, Σ) > 0 such that for all f ∈ L^2(R^n),

(1.3)  ∫_{R^n} |f(x)|^2 dx ≤ C ( ∫_{R^n \ S} |f(x)|^2 dx + ∫_{R^n \ Σ} |f̂(ξ)|^2 dξ ).
It can be readily checked that a pair (S, Σ) is a strong annihilating pair if and only if there exists a positive constant D = D(S, Σ) > 0 such that for all f ∈ L^2(R^n) with supp f̂ ⊂ Σ,

(1.4)  ‖f‖_{L^2(R^n)} ≤ D ‖f‖_{L^2(R^n \ S)}.
As already mentioned above, the pair (S, Σ) is a weak annihilating one if S and Σ are compact sets. More generally, Benedicks has shown in [START_REF] Benedicks | On Fourier transforms of functions supported on sets of finite Lebesgue measure[END_REF] that (S, Σ) is a weak annihilating pair if S and Σ are sets of finite Lebesgue measure |S|, |Σ| < +∞. Under this assumption, the result of Amrein-Berthier [START_REF] Amrein | On support properties of L p -functions and their Fourier transforms[END_REF] actually shows that the pair (S, Σ) is a strong annihilating one. The estimate C(S, Σ) ≤ κe κ|S||Σ| (which is sharp up to numerical constant κ > 0) has been established by Nazarov [START_REF] Nazarov | Local estimates for exponential polynomials and their applications to inequalities of the uncertainty principle type[END_REF] in dimension n = 1. This result was extended in the multi-dimensional case by the second author [START_REF] Jaming | Nazarov's uncertainty principles in higher dimension[END_REF], with the quantitative estimate C(S, Σ) ≤ κe κ(|S||Σ|) 1/n holding if in addition one of the two subsets of finite Lebesgue measure S or Σ is convex.
An exhaustive description of all strong annihilating pairs seems for now totally out of reach. We refer the reader for instance to the works [START_REF] Amit | On the annihilation of thin sets[END_REF][START_REF] Bourgain | Fourier dimension and spectral gaps for hyperbolic surfaces[END_REF][START_REF] Bourgain | Spectral gaps without the pressure condition, to appear in Ann. Math[END_REF][START_REF] Demange | Uncertainty principles associated to non-degenerate quadratic forms[END_REF][START_REF] Dyatlov | Dolgopyat's method and the fractal uncertainty principle[END_REF][START_REF] Shubin | Some harmonic analysis questions suggested by Anderson-Bernoulli models[END_REF] for a large variety of results and techniques available, as well as for examples of weak annihilating pairs that are not strong annihilating ones. However, there is a complete description of all the support sets S forming a strong annihilating pair with any bounded spectral set Σ. This description is given by the Logvinenko-Sereda theorem [START_REF] Logvinenko | Equivalent norms in spaces of entire functions of exponential type[END_REF]: Theorem 1.2 (Logvinenko-Sereda). Let S, Σ ⊂ R^n be measurable subsets with Σ bounded. Denoting S̃ = R^n \ S, the following assertions are equivalent:
- The pair (S, Σ) is a strong annihilating pair.
- The subset S̃ is thick, that is, there exists a cube K ⊂ R^n with sides parallel to the coordinate axes and a positive constant 0 < γ ≤ 1 such that

∀x ∈ R^n,  |(K + x) ∩ S̃| ≥ γ|K| > 0,

where |A| denotes the Lebesgue measure of the measurable set A.
It is noticeable to observe that if (S, Σ) is a strong annihilating pair for some bounded subset Σ, then S makes up a strong annihilating pair with every bounded subset Σ, but the above constants C(S, Σ) > 0 and D(S, Σ) > 0 do depend on Σ. In order to be able to use this remarkable result in the control theory of partial differential equations, it is essential to understand how the positive constant D(S, Σ) > 0 depends on the Lebesgue measure of the bounded set Σ. This question was answered by Kovrijkine [START_REF] Kovrijkine | Some results related to the Logvinenko-Sereda Theorem[END_REF]Theorem 3] who established the following quantitative estimates : Theorem 1.3 (Kovrijkine). There exists a universal positive constant C n > 0 depending only on the dimension n ≥ 1 such that if S is a γ-thick set at scale L > 0, that is, for all x ∈ R n ,
(1.5)  |S ∩ (x + [0, L]^n)| ≥ γ L^n,

with 0 < γ ≤ 1, then we have for all R > 0 and f ∈ L^2(R^n) with supp f̂ ⊂ {ξ = (ξ_1, ..., ξ_n) ∈ R^n : ∀j = 1, ..., n, |ξ_j| ≤ R},

(1.6)  ‖f‖_{L^2(R^n)} ≤ (C_n/γ)^{C_n(1+LR)} ‖f‖_{L^2(S)}.
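As a purely illustrative aside (not part of the original text), the thickness condition (1.5) is easy to check numerically in dimension one on a sampled indicator function; the toy set below keeps 30% of every unit interval.

```python
# Approximate check of gamma-thickness at scale L on a uniform grid (illustrative).
import numpy as np

def is_thick(indicator, dx, L, gamma):
    """indicator: 0/1 samples of the set; every window of length L must contain >= gamma*L."""
    w = int(round(L / dx))                               # samples per window of length L
    window_measure = np.convolve(indicator, np.ones(w), mode="valid") * dx
    return window_measure.min() >= gamma * L             # grid-aligned windows only: a sketch

x = np.arange(0, 100, 0.01)
omega = ((x % 1.0) < 0.3).astype(float)                  # keeps 30% of every unit interval
print(is_thick(omega, 0.01, L=1.0, gamma=0.25))          # True: omega is 0.25-thick at scale 1
```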
Thanks to this explicit dependence of the constant with respect to the parameter R > 0 in the estimate (1.6), Egidi and Veselic [START_REF] Egidi | Sharp geometric condition for null-controllability of the heat equation on R d and consistent estimates on the control cost[END_REF], and Wang, Wang, Zhang and Zhang [START_REF] Wang | Observable set, observability, interpolation inequality and spectral inequality for the heat equation in R n[END_REF] have independently established the striking result that the heat equation
(1.7) (∂ t -∆ x )f (t, x) = u(t, x)1l ω (x) , x ∈ R n , t > 0, f | t=0 = f 0 ∈ L 2 (R n ),
is null-controllable in any positive time T > 0 from a measurable control subset ω ⊂ R n if and only if this subset ω is thick in R n . The notion of null-controllability is defined as follows:
Definition 1.4 (Null-controllability). Let P be a closed operator on L 2 (R n ) which is the infinitesimal generator of a strongly continuous semigroup (e -tP ) t≥0 on L 2 (R n ), T > 0 and ω be a measurable subset of R n . The equation
(1.8) (∂ t + P )f (t, x) = u(t, x)1l ω (x) , x ∈ R n , t > 0, f | t=0 = f 0 ∈ L 2 (R n ),
is said to be null-controllable from the set ω in time T > 0 if, for any initial datum
f 0 ∈ L 2 (R n ), there exists u ∈ L 2 ((0, T ) × R n ), supported in (0, T ) × ω, such that the mild (or semigroup) solution of (1.8) satisfies f (T, •) = 0.
By the Hilbert Uniqueness Method, see [START_REF] Coron | Control and nonlinearity[END_REF]Theorem 2.44] or [START_REF] Lions | Contrôlabilité exacte, perturbations et stabilisation de systèmes distribués[END_REF], the null-controllability of the equation (1.8) is equivalent to the observability of the adjoint system (1.9)
(∂ t + P * )g(t, x) = 0 , x ∈ R n , g| t=0 = g 0 ∈ L 2 (R n ).
The notion of observability is defined as follows: Definition 1.5 (Observability). Let T > 0 and ω be a measurable subset of R n . Equation (1.9) is said to be observable from the set ω in time T > 0 if there exists a positive constant C T > 0 such that, for any initial datum g 0 ∈ L 2 (R n ), the mild (or semigroup) solution of (1.9) satisfies
(1.10) R n |g(T, x)| 2 dx ≤ C T T 0 ω |g(t, x)| 2 dx dt .
Following [START_REF] Egidi | Sharp geometric condition for null-controllability of the heat equation on R d and consistent estimates on the control cost[END_REF] or [START_REF] Wang | Observable set, observability, interpolation inequality and spectral inequality for the heat equation in R n[END_REF], the necessity of the thickness property of the control subset for the null-controllability in any positive time is a consequence of a quasimodes construction; whereas the sufficiency is derived in [START_REF] Egidi | Sharp geometric condition for null-controllability of the heat equation on R d and consistent estimates on the control cost[END_REF] from an abstract observability result obtained by an adapted Lebeau-Robbiano method and established by the first and third authors with some contributions of Luc Miller 1 : Theorem 1.6. [4, Theorem 2.1]. Let Ω be an open subset of R n , ω be a measurable subset of Ω, (π k ) k∈N * be a family of orthogonal projections defined on L 2 (Ω), (e -tA ) t≥0 be a strongly continuous contraction semigroup on L 2 (Ω); c 1 , c 2 , a, b, t 0 , m > 0 be positive constants with a < b. If the following spectral inequality
(1.11) ∀g ∈ L 2 (Ω), ∀k ≥ 1, π k g L 2 (Ω) ≤ e c 1 k a π k g L 2 (ω) ,
and the following dissipation estimate
(1.12) ∀g ∈ L 2 (Ω), ∀k ≥ 1, ∀0 < t < t 0 , (1 -π k )(e -tA g) L 2 (Ω) ≤ 1 c 2 e -c 2 t m k b g L 2 (Ω) ,
hold, then there exists a positive constant C > 1 such that the following observability estimate holds
(1.13) ∀T > 0, ∀g ∈ L 2 (Ω), e -T A g 2 L 2 (Ω) ≤ C exp C T am b-a T 0 e -tA g 2 L 2 (ω) dt.
In the statement of [4, Theorem 2.1], the subset ω is supposed to be an open subset of Ω. However, the proof given in [START_REF] Beauchard | Null-controllability of hypoelliptic quadratic differential equations[END_REF] works as well when the subset ω is only assumed to be measurable. Notice that the assumptions in the above statement do not require that the orthogonal projections (π k ) k≥1 are spectral projections onto the eigenspaces of the infinitesimal generator A, which is allowed to be non-selfadjoint. According to the above statement, there are two key ingredients to derive a result of null-controllability, or equivalently a result of observability, while using Theorem 1.6: a spectral inequality (1.11) and a dissipation estimate (1.12). For the heat equation, the orthogonal projections used are the frequency cutoff operators given by the orthogonal projections onto the closed vector subspaces
(1.14) E k = f ∈ L 2 (R n ) : supp f ⊂ {ξ = (ξ 1 , ..., ξ n ) ∈ R n : |ξ j | ≤ k, 1 ≤ j ≤ n} ,
for k ≥ 1. With this choice, the dissipation estimate readily follows from the explicit formula
(1.15) (e t∆x g)(t, ξ) = g(ξ)e -t|ξ| 2 , t ≥ 0, ξ ∈ R n ,
whereas the spectral inequality is given by the sharpened formulation of the Logvinenko-Sereda theorem (1.6). Notice that the power 1 for the parameter R in (1.6) and the power 2 for the term |ξ| in (1.15) account for the fact that Theorem 1.6 can be applied with the parameters a = 1, b = 2 that satisfy the required condition 0 < a < b. It is therefore essential that the power of the parameter R in the exponent of the estimate (1.6) is strictly less than 2. As there is still a gap between the cost of the localization (a = 1) given by the spectral inequality and its compensation by the dissipation estimate (b = 2), it is interesting to notice that we could have expected that the null-controllability of the heat equation could have held under weaker assumptions than the thickness property on the control subset, by allowing some higher costs for localization with some parameters 1 < a < 2, but the Logvinenko-Sereda theorem actually shows that this is not the case. Notice that Theorem 1.6 does not only apply with the use of frequency cutoff projections and a dissipation estimate induced by some Gevrey type regularizing effects. Other regularities than the Gevrey regularity can be taken into account. In the previous work by the first and third authors [START_REF] Beauchard | Null-controllability of hypoelliptic quadratic differential equations[END_REF], Theorem 1.6 is used for a general class of accretive hypoelliptic quadratic operators q w generating some strongly continuous contraction semigroups (e -tq w ) t≥0 enjoying some Gelfand-Shilov regularizing effects. The definition and standard properties related to Gelfand-Shilov regularity are recalled in Appendix (Section 4.3). As recalled in this appendix, the Gelfand-Shilov regularity is characterized by specific exponential decays of the functions and their Fourier transforms; and in the symmetric case, can be read on the exponential decay of the Hermite coefficients of the functions in theirs expansions in the L 2 (R n )-Hermite basis (Φ α ) α∈N n . Explicit formulas and some reminders of basic facts about Hermite functions are given in Appendix (Section 4.1). The class of hypoelliptic quadratic operators whose description will be given in Section 2.2 enjoys some Gelfand-Shilov regularizing effects ensuring that the following dissipation estimate holds [4, Proposition 4.1]:
(1.16) ∃C 0 > 1, ∃t 0 > 0, ∀t ≥ 0, ∀k ≥ 0, ∀f ∈ L 2 (R n ), (1 -π k )(e -tq w f ) L 2 (R n ) ≤ C 0 e -δ(t)k f L 2 (R n ) , with (1.17) δ(t) = inf(t, t 0 ) 2k 0 +1 C 0 ≥ 0, t ≥ 0, 0 ≤ k 0 ≤ 2n -1,
where (1.18)
P k g = α∈N n |α|=k g, Φ α L 2 (R n ) Φ α , k ≥ 0, with |α| = α 1 + • • • + α n ,
denotes the orthogonal projection onto the k th energy level associated with the harmonic oscillator
H = -∆ x + |x| 2 = +∞ k=0 (2k + n)P k ,
and
(1. [START_REF] Ganzburg | Polynomial inequalities on measurable sets and their applications[END_REF])
π k = k j=0 P j , k ≥ 0,
denotes the orthogonal projection onto the (k + 1) th first energy levels. In order to apply Theorem 1.6, we need a spectral inequality for finite combinations of Hermite functions of the type
(1.20) ∃C > 1, ∀k ≥ 0, ∀f ∈ L 2 (R n ), π k f L 2 (R n ) ≤ Ce Ck a π k f L 2 (ω) ,
with a < 1, where π k is the orthogonal projection (1.19). In [4, Proposition 4.2], such a spectral inequality is established with a = 1 2 when the control subset ω is an open subset of R n satisfying the following geometrical condition: In the present work, we study under which conditions on the control subset ω ⊂ R n , the spectral inequality
(1.21) ∃δ, r > 0, ∀y ∈ R n , ∃y ′ ∈ ω, B(y ′ , r) ⊂ ω, |y -y ′ | < δ,
(1.22) ∀k ≥ 0, ∃C k (ω) > 0, ∀f ∈ L 2 (R n ), π k f L 2 (R n ) ≤ C k (ω) π k f L 2 (ω) ,
holds and how the geometrical properties of the set ω relate to the possible growth of the positive constant C k (ω) > 0 with respect to the energy level when k → +∞. The main results contained in this article provide some quantitative upper bounds on the positive constant C k (ω) > 0 with respect to the energy level for three different classes of measurable subsets :
-non-empty open subsets in R n , -measurable sets in R n verifying the condition
(1.23) lim inf R→+∞ |ω ∩ B(0, R)| |B(0, R)| = lim R→+∞ inf r≥R |ω ∩ B(0, r)| |B(0, r)| > 0,
where B(0, R) denotes the open Euclidean ball in R n centered in 0 with radius R > 0, -thick measurable sets in R n . We observe that in the first two classes, the measurable control subsets are allowed to have gaps containing balls with radii tending to infinity, whereas in the last class there must be a bound on such radii. We shall see that the quantitative upper bounds obtained for the two first classes (Theorem 2.1, estimates (i) and (ii)) are not sufficient to obtain any result of null-controllability for the class of hypoelliptic quadratic operators studied in Section 2.2. Regarding the third one, the quantitative upper bound (Theorem 2.1, estimate (iii)) is a noticeable analogue of the Logvinenko-Sereda theorem for finite combinations of Hermite functions. As an application of this third result, we extend in Theorem 2.2 the result of null-controllability for parabolic equations associated with accretive quadratic operators with zero singular spaces from any thick set ω ⊂ R n in any positive time T > 0.
Statements of the main results
2.1. Uncertainty principles for finite combinations of Hermite functions. Let (Φ α ) α∈N n be the n-dimensional Hermite functions and
(2.1)
E N = Span C {Φ α } α∈N n ,|α|≤N ,
be the finite dimensional vector space spanned by all the Hermite functions Φ α with |α| ≤ N , whose definition is recalled in Appendix (Section 4.1).
As the Lebesgue measure of the zero set of a non-zero analytic function on C is zero, the L 2 -norm • L 2 (ω) on any measurable set ω ⊂ R of positive measure |ω| > 0 defines a norm on the finite dimensional vector space E N . As a consequence of the Remez inequality, we check in Appendix (Section 4.4) that this result holds true as well in the multi-dimensional case when ω ⊂ R n , with n ≥ 1, is a measurable subset of positive Lebesgue measure |ω| > 0. By equivalence of norms in finite dimension, for any measurable set ω ⊂ R n of positive Lebesgue measure |ω| > 0 and all N ∈ N, there therefore exists a positive constant C N (ω) > 0 depending on ω and N such that the following spectral inequality holds
(2.2)  ∀f ∈ E_N,  ‖f‖_{L^2(R^n)} ≤ C_N(ω) ‖f‖_{L^2(ω)}.
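As an illustrative numerical aside (not from the paper), the best constant in (2.2) can be approximated in dimension one: writing f = Σ c_k φ_k, one has ‖f‖²_{L²(R)} = cᵀAc and ‖f‖²_{L²(ω)} = cᵀBc for the Gram matrices A, B of the Hermite functions, so C_N(ω) is the inverse square root of the smallest generalized eigenvalue of (B, A). The grid, the set ω and the value of N below are arbitrary choices.

```python
# Numerical estimate of C_N(omega) in (2.2) for n = 1 (illustrative).
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.linalg import eigh
from math import factorial, pi, sqrt

x = np.linspace(-30, 30, 60001); dx = x[1] - x[0]

def phi(k):
    c = np.zeros(k + 1); c[k] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2) / sqrt(2.0**k * factorial(k) * sqrt(pi))

N = 15
P = np.array([phi(k) for k in range(N + 1)])        # rows: phi_0, ..., phi_N sampled on the grid
omega = np.abs(x) > 1.0                             # control set: complement of (-1, 1)
A = (P * dx) @ P.T                                  # Gram matrix over R (close to the identity)
B = (P[:, omega] * dx) @ P[:, omega].T              # Gram matrix over omega
lam_min = eigh(B, A, eigvals_only=True)[0]          # smallest generalized eigenvalue of (B, A)
print(1.0 / sqrt(lam_min))                          # numerical value of C_N(omega)
```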
We aim at studying how the geometrical properties of the set ω relate to the possible growth of the positive constant C N (ω) > 0 with respect to the energy level. The main results of the present work are given by the following uncertainty principles for finite combinations of Hermite functions:
Theorem 2.1. With E N the finite dimensional vector space spanned by the Hermite functions (Φ α ) |α|≤N defined in (2.1), the following spectral inequalities hold:
(i) If ω is a non-empty open subset of R^n, then there exists a positive constant C = C(ω) > 1 such that

∀N ∈ N, ∀f ∈ E_N,  ‖f‖_{L^2(R^n)} ≤ C e^{(1/2) N ln(N+1) + C N} ‖f‖_{L^2(ω)}.

(ii) If the measurable subset ω ⊂ R^n satisfies the condition (1.23), then there exists a positive constant C = C(ω) > 1 such that

∀N ∈ N, ∀f ∈ E_N,  ‖f‖_{L^2(R^n)} ≤ C e^{C N} ‖f‖_{L^2(ω)}.

(iii) If the measurable subset ω ⊂ R^n is γ-thick at scale L > 0 in the sense defined in (1.5), then there exist a positive constant C = C(L, γ, n) > 0 depending on the dimension n ≥ 1 and the parameters γ, L > 0, and a universal positive constant κ = κ(n) > 0 only depending on the dimension such that

∀N ∈ N, ∀f ∈ E_N,  ‖f‖_{L^2(R^n)} ≤ C (κ/γ)^{κ L √N} ‖f‖_{L^2(ω)}.
According to the above result, the control on the growth of the positive constant C N (ω) > 0 with respect to the energy level for an arbitrary non-empty open subset ω of R n , or when the measurable subset ω ⊂ R n satisfies the condition (1.23), is not sufficient to satisfy the estimates (1.20) needed to obtain some results of null-controllability and observability for the parabolic equations associated to the class of hypoelliptic quadratic operators studied in Section 2.2. As the one-dimensional harmonic heat equation is known from [13, Proposition 5.1], see also [START_REF] Miller | Unique continuation estimates for sums of semiclassical eigenfunctions and nullcontrollability from cones[END_REF], to not be null-controllable, nor observable, in any time T > 0 from a half-line and as the harmonic oscillator obviously belongs to the class of hypoelliptic quadratic operators studied in Section 2.2, we observe that spectral estimates of the type
∃0 < a < 1, ∃C > 1, ∀N ∈ N, ∀f ∈ E N , f L 2 (R n ) ≤ Ce CN a f L 2 (ω) ,
cannot hold for an arbitrary non-empty open subset ω of R n , nor when the measurable subset ω ⊂ R n satisfies the condition (1.23), since Theorem 1.6 together with (1.16) would then imply the null-controlllability and the observability of the one-dimensional harmonic heat equation from a half-line. This would be in contradiction with the results of [START_REF] Duyckaerts | Resolvent conditions the control of parabolic equations[END_REF][START_REF] Miller | Unique continuation estimates for sums of semiclassical eigenfunctions and nullcontrollability from cones[END_REF].
On the other hand, when the measurable subset ω ⊂ R n is γ-thick at scale L > 0, the above spectral inequality (iii) is an analogue for finite combinations of Hermite functions of the sharpened version of the Logvinenko-Sereda theorem proved by Kovrijkine in [30, Theorem 3] with a similar dependence of the constant with respect to the parameters 0 < γ ≤ 1 and L > 0 as in (1.6). Notice that the growth in √ N is of the order of the square root of the largest eigenvalue of the harmonic oscillator H = -∆ x + |x| 2 on the spectral vector subspace E N , whereas the growth in R in (1.6) is also of order of the square root of the largest spectral value of the Laplace operator -∆ x on the spectral vector subspace
E R = f ∈ L 2 (R n ) : supp f ⊂ {ξ = (ξ 1 , ..., ξ n ) ∈ R n : ∀j = 1, ..., n, |ξ j | ≤ R .
This is in agreement with what is usually expected for that type of spectral inequalities, see [START_REF] Rousseau | Applications to unique continuation and control of parabolic equations[END_REF].
The spectral inequality (i) for arbitrary non-empty open subsets is proved in Section 3.1. Its proof uses some estimates on Hermite functions together with the Remez inequality. The spectral inequality (ii) for measurable subsets satisfying the condition (1.23) is proved in Section 3.2 and follows from similar arguments as the ones used in Section 3.1. The spectral inequality (iii) for thick sets is proved in Section 3.3. This proof is an adaptation of the proof of the sharpened version of the Logvinenko-Sereda theorem given by Kovrijkine in [30, Theorem 1]. As in [START_REF] Kovrijkine | Some results related to the Logvinenko-Sereda Theorem[END_REF], the proof is only written with full details in the onedimensional case with hints for its extension to the multi-dimensional one following some ideas of Nazarov [START_REF] Nazarov | Local estimates for exponential polynomials and their applications to inequalities of the uncertainty principle type[END_REF], the proof given in Section 3.3 is therefore more specifically inspired by the proof of the Logvinenko-Sereda theorem in the multi-dimensional setting given by Wang, Wang, Zhang and Zhang in [49, Lemma 2.1].
2.2.
Null-controllability of hypoelliptic quadratic equations. This section presents the result of null-controllability for parabolic equations associated with a general class of hypoelliptic non-selfadjoint accretive quadratic operators from any thick set ω of R n in any positive time T > 0. We begin by recalling few facts about quadratic operators.
2.2.1.
Miscellaneous facts about quadratic differential operators. Quadratic operators are pseudodifferential operators defined in the Weyl quantization
(2.3) q w (x, D x )f (x) = 1 (2π) n R 2n e i(x-y)•ξ q x + y 2 , ξ f (y)dydξ, by symbols q(x, ξ), with (x, ξ) ∈ R n × R n , n ≥ 1, which are complex-valued quadratic forms q : R n x × R n ξ → C (x, ξ) → q(x, ξ).
These operators are non-selfadjoint differential operators in general; with simple and fully explicit expression since the Weyl quantization of the quadratic symbol x α ξ β , with (α, β) ∈ N 2n , |α + β| = 2, is the differential operator
x α D β x + D β x x α 2 , D x = i -1 ∂ x .
Let q w (x, D x ) be a quadratic operator defined by the Weyl quantization (2.3) of a complexvalued quadratic form q on the phase space R 2n . The maximal closed realization of the quadratic operator q w (x, D x ) on L 2 (R n ), that is, the operator equipped with the domain
(2.4) D(q w ) = f ∈ L 2 (R n ) : q w (x, D x )f ∈ L 2 (R n ) ,
where q w (x, D x )f is defined in the distribution sense, is known to coincide with the graph closure of its restriction to the Schwartz space [28, pp. 425-426],
q w (x, D x ) : S (R n ) → S (R n ).
Let q : R n x × R n ξ → C be a quadratic form defined on the phase space and write q(•, •) for its associated polarized form. Classically, one associates to q a matrix F ∈ M 2n (C) called its Hamilton map, or its fundamental matrix. With σ standing for the standard symplectic form
(2.5) σ((x, ξ), (y, η)) = ξ, y -x, η = n j=1 (ξ j y j -x j η j ), with x = (x 1 , ..., x n ), y = (y 1 , ...., y n ), ξ = (ξ 1 , ..., ξ n ), η = (η 1 , ..., η n ) ∈ C n ,
the Hamilton map F is defined as the unique matrix satisfying the identity
(2.6) ∀(x, ξ) ∈ R 2n , ∀(y, η) ∈ R 2n , q((x, ξ), (y, η)) = σ((x, ξ), F (y, η)).
We observe from the definition that
F = (1/2) ( ∇_ξ∇_x q    ∇^2_ξ q
            −∇^2_x q   −∇_x∇_ξ q ),

where the matrices ∇^2_x q = (a_{i,j})_{1≤i,j≤n}, ∇^2_ξ q = (b_{i,j})_{1≤i,j≤n}, ∇_ξ∇_x q = (c_{i,j})_{1≤i,j≤n}, ∇_x∇_ξ q = (d_{i,j})_{1≤i,j≤n} are defined by the entries

a_{i,j} = ∂^2_{x_i,x_j} q,  b_{i,j} = ∂^2_{ξ_i,ξ_j} q,  c_{i,j} = ∂^2_{ξ_i,x_j} q,  d_{i,j} = ∂^2_{x_i,ξ_j} q.
The notion of singular space was introduced in [START_REF] Hitrik | Spectra and semigroup smoothing for non-elliptic quadratic operators[END_REF] by Hitrik and the third author by pointing out the existence of a particular vector subspace in the phase space S ⊂ R 2n , which is intrinsically associated with a given quadratic symbol q. This vector subspace is defined as the following finite intersection of kernels
(2.7)  S = ( ⋂_{j=0}^{2n−1} Ker[ Re F (Im F)^j ] ) ∩ R^{2n},
where Re F and Im F stand respectively for the real and imaginary parts of the Hamilton map F associated with the quadratic symbol q,
Re F = (1/2)(F + F̄),   Im F = (1/(2i))(F − F̄).
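As an illustrative numerical aside (not from the paper), the Hamilton map and the singular space can be computed directly from the Hessian of q via F = (1/2) J Q and (2.7). The example below uses the Weyl symbol of the Kramers-Fokker-Planck operator (2.14) with V(x) = a x²/2, namely q(x, v, ξ, η) = η² + v²/4 + i(v ξ − a x η), in the variable order (x, v, ξ, η); for a ≠ 0 the computed singular space is {0}, in agreement with Section 2.2.

```python
# Compute the Hamilton map F = (1/2) J Q and the singular space (2.7) numerically (illustrative).
import numpy as np
from scipy.linalg import null_space

n, a = 2, 1.0                               # two "position" variables (x, v)
Q = np.array([                              # Hessian of q: Q[j, k] = d^2 q / dX_j dX_k
    [0,        0,    0,   -1j * a],
    [0,        0.5,  1j,   0     ],
    [0,        1j,   0,    0     ],
    [-1j * a,  0,    0,    2     ]])
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n),       np.zeros((n, n))]])
F = 0.5 * J @ Q                             # Hamilton map, cf. (2.6)

ReF, ImF = F.real, F.imag
stacked = np.vstack([ReF @ np.linalg.matrix_power(ImF, j) for j in range(2 * n)])
S_basis = null_space(stacked)               # basis of the joint kernel in (2.7)
print(S_basis.shape[1])                     # 0 when a != 0, i.e. S = {0}
```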
As pointed out in [START_REF] Hitrik | Spectra and semigroup smoothing for non-elliptic quadratic operators[END_REF][START_REF] Hitrik | Short-time asymptotics of the regularizing effect for semigroups generated by quadratic operators[END_REF][START_REF] Hitrik | From semigroups to subelliptic estimates for quadratic operators[END_REF][START_REF] Ottobre | Exponential return to equilibrium for hypoelliptic quadratic systems[END_REF][START_REF] Pravda-Starov | Subelliptic estimates for quadratic differential operators[END_REF][START_REF] Pravda-Starov | Propagation of Gabor singularities for Schrödinger equations with quadratic Hamiltonians[END_REF][START_REF] Viola | Spectral projections and resolvent bounds for partially elliptic quadratic differential operators[END_REF], the notion of singular space plays a basic role in the understanding of the spectral and hypoelliptic properties of the (possibly) nonelliptic quadratic operator q w (x, D x ), as well as the spectral and pseudospectral properties of certain classes of degenerate doubly characteristic pseudodifferential operators [START_REF] Hitrik | Semiclassical hypoelliptic estimates for non-selfadjoint operators with double characteristics[END_REF][START_REF] Hitrik | Eigenvalues and subelliptic estimates for non-selfadjoint semiclassical operators with double characteristics[END_REF][START_REF] Viola | Resolvent estimates for non-selfadjoint operators with double characteristics[END_REF][START_REF] Viola | Non-elliptic quadratic forms and semiclassical estimates for non-selfadjoint operators[END_REF]. In particular, the work [23, Theorem 1.2.2] gives a complete description for the spectrum of any non-elliptic quadratic operator q w (x, D x ) whose Weyl symbol q has a non-negative real part Re q ≥ 0, and satisfies a condition of partial ellipticity along its singular space S,
(2.8) (x, ξ) ∈ S, q(x, ξ) = 0 ⇒ (x, ξ) = 0.
Under these assumptions, the spectrum of the quadratic operator q w (x, D x ) is shown to be composed of a countable number of eigenvalues with finite algebraic multiplicities. The structure of this spectrum is similar to the one known for elliptic quadratic operators [START_REF] Sjöstrand | Parametrices for pseudodifferential operators with multiple characteristics[END_REF]. This condition of partial ellipticity is generally weaker than the condition of ellipticity, S R 2n , and allows one to deal with more degenerate situations. An important class of quadratic operators satisfying condition (2.8) are those with zero singular spaces S = {0}.
In this case, the condition of partial ellipticity trivially holds. More specifically, these quadratic operators have been shown in [39, Theorem 1.2.1] to be hypoelliptic and to enjoy global subelliptic estimates of the type
(2.9) ∃C > 0, ∀f ∈ S (R n ), (x, D x ) 2(1-δ) f L 2 (R n ) ≤ C( q w (x, D x )f L 2 (R n ) + f L 2 (R n ) ), where (x, D x ) 2 = 1 + |x| 2 + |D x | 2
, with a sharp loss of derivatives 0 ≤ δ < 1 with respect to the elliptic case (case δ = 0), which can be explicitly derived from the structure of the singular space.
When the quadratic symbol q has a non-negative real part Re q ≥ 0, the singular space can be also defined in an equivalent way as the subspace in the phase space where all the Poisson brackets
H k Imq Re q = ∂Im q ∂ξ • ∂ ∂x - ∂Im q ∂x • ∂ ∂ξ k Re q, k ≥ 0, are vanishing S = X = (x, ξ) ∈ R 2n : (H k Imq Re q)(X) = 0, k ≥ 0 .
This dynamical definition shows that the singular space corresponds exactly to the set of points X ∈ R 2n , where the real part of the symbol Re q under the flow of the Hamilton vector field H Imq associated with its imaginary part (2.10) t → Re q(e tH Imq X), vanishes to infinite order at t = 0. This is also equivalent to the fact that the function (2.10) is identically zero on R.
In this work, we study the class of quadratic operators whose Weyl symbols have nonnegative real parts Re q ≥ 0, and zero singular spaces S = {0}. According to the above description of the singular space, these quadratic operators are exactly those whose Weyl symbols have a non-negative real part Re q ≥ 0, becoming positive definite
(2.11) ∀ T > 0, Re q T (X) = 1 2T T -T (Re q)(e tH Imq X)dt ≫ 0,
after averaging by the linear flow of the Hamilton vector field associated with its imaginary part. These quadratic operators are also known [23, Theorem 1.2.1] to generate strongly continuous contraction semigroups (e -tq w ) t≥0 on L 2 (R n ), which are smoothing in the Schwartz space for any positive time
∀t > 0, ∀f ∈ L 2 (R n ), e -tq w f ∈ S (R n ).
In the recent work [27, Theorem 1.2], these regularizing properties were sharpened and these contraction semigroups were shown to be actually smoothing for any positive time in the Gelfand-Shilov space
S 1/2 1/2 (R n ): ∃C > 0, ∃t 0 > 0, ∀f ∈ L 2 (R n ), ∀α, β ∈ N n , ∀0 < t ≤ t 0 , (2.12) x α ∂ β x (e -tq w f ) L ∞ (R n ) ≤ C 1+|α|+|β| t 2k 0 +1 2 (|α|+|β|+2n+s) (α!) 1/2 (β!) 1/2 f L 2 (R n ) ,
where s is a fixed integer verifying s > n/2, and where 0 ≤ k 0 ≤ 2n -1 is the smallest integer satisfying (2.13)
k 0 j=0 Ker Re F (Im F ) j ∩ R 2n = {0}.
The definition and few facts about the Gelfand-Shilov regularity are recalled in Appendix (Section 4.3). Thanks to this Gelfand-Shilov smoothing effect (2.12), the first and third authors have established in [4, Proposition 4.1] that, for any quadratic form q : R 2n x,ξ → C with a non-negative real part Re q ≥ 0 and a zero singular space S = {0}, the dissipation estimate (1.16) holds with 0 ≤ k 0 ≤ 2n -1 being the smallest integer satisfying (2.13). Let ω ⊂ R n be a measurable γ-thick set at scale L > 0. We can then deduce from Theorem 1.6 with the following choices of parameters:
(i) Ω = R n , (ii) A = -q w (x, D x ), (iii) a = 1 2 , b = 1, (iv) t 0 > 0 as in (1. [START_REF] Erdélyi | The Remez inequality on the size of polynomials[END_REF]) and (1.17), (v) m = 2k 0 + 1, where k 0 is defined in (2.13), (vi) any constant c 1 > 0 satisfying for all
∀k ≥ 1, C κ γ κL √ k ≤ e c 1 √ k ,
where the positive constants [START_REF] Erdélyi | The Remez inequality on the size of polynomials[END_REF]) and (1.17), the following observability estimate in any positive time
C = C(L, γ, n) > 0 and κ = κ(n) > 0 are defined in Theorem 2.1 (formula (iii)), (vii) c 2 = 1 C 0 > 0, where C 0 > 1 is defined in (1.
∃C > 1, ∀T > 0, ∀f ∈ L 2 (R n ), e -T q w f 2 L 2 (R n ) ≤ C exp C T 2k 0 +1 T 0 e -tq w f 2 L 2 (ω) dt.
We therefore obtain the following result of null-controllability:
Theorem 2.2. Let q : R n x × R n ξ → C be a complex-valued quadratic form with a non negative real part Re q ≥ 0, and a zero singular space S = {0}. If ω is a measurable thick subset of R n , then the parabolic equation
∂ t f (t, x) + q w (x, D x )f (t, x) = u(t, x)1l ω (x) , x ∈ R n , f | t=0 = f 0 ∈ L 2 (R n ),
with q w (x, D x ) being the quadratic differential operator defined by the Weyl quantization of the symbol q, is null-controllable from the set ω in any positive time T > 0.
As in [START_REF] Beauchard | Null-controllability of hypoelliptic quadratic differential equations[END_REF], this new result of null-controllability given by Theorem 2.2 applies in particular for the parabolic equation associated to the Kramers-Fokker-Planck operator
(2.14) K = -∆ v + v 2 4 + v∂ x -∇ x V (x)∂ v , (x, v) ∈ R 2 ,
with a quadratic potential
V (x) = 1 2 ax 2 , a ∈ R * ,
which is an example of accretive quadratic operator with a zero singular space S = {0}. It also applies in the very same way to hypoelliptic Ornstein-Uhlenbeck equations posed in weighted L 2 -spaces with respect to invariant measures, or to hypoelliptic Fokker-Planck equations posed in weighted L 2 -spaces with respect to invariant measures. We refer the reader to the works [START_REF] Beauchard | Null-controllability of hypoelliptic quadratic differential equations[END_REF][START_REF] Ottobre | Exponential return to equilibrium for hypoelliptic quadratic systems[END_REF] for detailed discussions of various physics models whose evolution turns out to be ruled by accretive quadratic operators with zero singular space and to which therefore apply the above result of null-controllability.
Proof of the spectral inequalities
This section is devoted to the proof of Theorem 2.1. We recall from (2.2) that
(3.2) ∀N ∈ N, ∃C N (ω) > 0, ∀f ∈ E N , f L 2 (R n ) ≤ C N (ω) f L 2 (ω) .
On the other hand, it follows from Lemma 4.2 that
(3.3) ∀N ∈ N, ∀f ∈ E N , f L 2 (R n ) ≤ 2 √ 3 f L 2 (B(0,cn √ N +1)) .
Let N ∈ N and f ∈ E N . According to (4.1) and (4.6), there exists a complex polynomial function P ∈ C[X 1 , ..., X n ] of degree at most N such that
(3.4) ∀x ∈ R n , f (x) = P (x)e -|x| 2 2
. We observe from (3.3) and (3.4) that
(3.5) f 2 L 2 (R n ) ≤ 4 3 B(0,cn √ N +1) |P (x)| 2 e -|x| 2 dx ≤ 4 3 P 2 L 2 (B(0,cn √ N +1))
and
(3.6) P 2 L 2 (B(x 0 ,r)) = B(x 0 ,r) |P (x)| 2 e -|x| 2 e |x| 2 dx ≤ e (|x 0 |+r) 2 f 2 L 2 (B(x 0 ,r)) .
We aim at deriving an estimate of the term P L 2 (B(0,cn
√ N +1)) by P L 2 (B(x 0 ,r)) when N ≫ 1 is sufficiently large. Let N be an integer such that c n √ N + 1 > 2|x 0 | + r. It implies the inclusion B(x 0 , r) ⊂ B(0, c n √ N + 1).
To that end, we may assume that P is a non-zero polynomial function. By using polar coordinates centered at x 0 , we notice that
B(x 0 , r) = {x 0 + tσ : 0 ≤ t < r, σ ∈ S n-1 } and (3.7) P 2 L 2 (B(x 0 ,r)) = S n-1 r 0 |P (x 0 + tσ)| 2 t n-1 dtdσ.
As c n √ N + 1 > 2|x 0 | + r, we notice that there exists a continuous function ρ N : S n-1 → (0, +∞) such that
(3.8) B(0, c n √ N + 1) = {x 0 + tσ : 0 ≤ t < ρ N (σ), σ ∈ S n-1 } and (3.9) ∀σ ∈ S n-1 , 0 < |x 0 | + r < c n √ N + 1 -|x 0 | < ρ N (σ) < c n √ N + 1 + |x 0 |.
It follows from (3.8) and (3.9) that (3.10) P 2
L 2 (B(0,cn √ N +1)\B(x 0 , r 2 )) = S n-1 ρ N (σ) r 2 |P (x 0 + tσ)| 2 t n-1 dtdσ ≤ (c n √ N + 1 + |x 0 |) n-1 S n-1 ρ N (σ) r 2 |P (x 0 + tσ)| 2 dtdσ.
By noticing that
t → P x 0 + ( ρ N (σ) 2 + r 4 )σ + tσ ,
is a polynomial function of degree at most N , we deduce from (3.9) and Lemma 4.4 used in the one-dimensional case n = 1 that
ρ N (σ) r 2 |P (x 0 + tσ)| 2 dt = ρ N (σ) 2 -r 4 -( ρ N (σ) 2 -r 4 ) P x 0 + ρ N (σ) 2 + r 4 σ + tσ 2 dt (3.11) ≤ 2 4N +2 3 4(ρ N (σ) -r 2 ) r 2 2 - r 2 4(ρ N (σ)-r 2 ) r 2 4(ρ N (σ)-r 2 ) 2N -ρ N (σ) 2 + 3r 4 -( ρ N (σ) 2 -r 4 ) P x 0 + ρ N (σ) 2 + r 4 σ + tσ 2 dt ≤ 2 4N +2 3 4(ρ N (σ) -r 2 ) r 2 2 - r 2 4(ρ N (σ)-r 2 ) r 2 4(ρ N (σ)-r 2 ) 2N r r 2 |P (x 0 + tσ)| 2 dt ≤ 2 12N +n+4 3r 2N +n c n √ N + 1 + |x 0 | - r 2 2N +1 r r 2 |P (x 0 + tσ)| 2 t n-1 dt.
It follows from (3.10) and (3.11) that (3.12) P 2
L 2 (B(0,cn √ N +1)\B(x 0 , r 2
)) ≤ (c n √ N + 1 + |x 0 |) n-1 × 2 12N +n+4 3r 2N +n c n √ N + 1 + |x 0 | - r 2 2N +1 S n-1 r r 2 |P (x 0 + tσ)| 2 t n-1 dt, implying that (3.13) P 2 L 2 (B(0,cn √ N +1)) ≤ 1 + (c n √ N + 1 + |x 0 |) n-1 × 2 12N +n+4 3r 2N +n c n √ N + 1 + |x 0 | - r 2 2N +1 P 2 L 2 (B(x 0 ,r)) ,
thanks to (3.7). We deduce from (3.13) that there exists a positive constant C = C(x 0 , r, n) > 1 independent on the parameter N such that (3.14)
P L 2 (B(0,cn √ N +1)) ≤ Ce
f L 2 (R n ) ≤ 2 √ 3 Ce 1 2 (|x 0 |+r) 2 e 1 2 N ln(N +1)+CN f L 2 (B(x 0 ,r)) .
The two estimates (3.2) and (3.15) allow to prove the assertion (i) in Theorem 2.1.
3.2.
Case when the control subset is a measurable set satisfying the condition (1.23). Let ω ⊂ R n be a measurable subset satisfying the condition
(3.16) lim inf R→+∞ |ω ∩ B(0, R)| |B(0, R)| = lim R→+∞ inf r≥R |ω ∩ B(0, r)| |B(0, r)| > 0,
where B(0, R) denotes the open Euclidean ball in R n centered in 0 with radius R > 0. It follows that there exist some positive constants R 0 > 0 and δ > 0 such that
(3.17) ∀R ≥ R 0 , |ω ∩ B(0, R)| |B(0, R)| ≥ δ > 0.
We recall from (2.2) that
(3.18) ∀N ∈ N, ∃C N (ω) > 0, ∀f ∈ E N , f L 2 (R n ) ≤ C N (ω) f L 2 (ω)
and as in the above section, it follows from Lemma 4.2 that
(3.19) ∀N ∈ N, ∀f ∈ E N , f L 2 (R n ) ≤ 2 √ 3 f L 2 (B(0,cn √ N +1)) .
Let N ∈ N be an integer satisfying
c n √ N + 1 ≥ R 0 and f ∈ E N . It follows from (3.17) that (3.20) |ω ∩ B(0, c n √ N + 1)| ≥ δ|B(0, c n √ N + 1)| > 0.
According to (4.1) and (4.6), there exists a complex polynomial function P ∈ C[X 1 , ..., X n ] of degree at most N such that
(3.21) ∀x ∈ R n , f (x) = P (x)e -|x| 2 2
. We observe from (3.19) and (3.21) that
(3.22) f 2 L 2 (R n ) ≤ 4 3 B(0,cn √ N +1) |P (x)| 2 e -|x| 2 dx ≤ 4 3 P 2 L 2 (B(0,cn √ N +1))
and
(3.23) P 2 L 2 (ω∩B(0,cn √ N +1)) = ω∩B(0,cn √ N +1) |P (x)| 2 e -|x| 2 e |x| 2 dx ≤ e c 2 n (N +1) f 2 L 2 (ω∩B(0,cn √ N +1))
. We deduce from Lemma 4.4 and (3.20) that
(3.24) P 2 L 2 (B(0,cn √ N +1)) ≤ 2 4N +2 3 4|B(0, c n √ N + 1)| |ω ∩ B(0, c n √ N + 1)| F |ω ∩ B(0, c n √ N + 1)| 4|B(0, c n √ N + 1)| 2N P 2 L 2 (ω∩B(0,cn √ N +1)) ,
with F the decreasing function
∀0 < t ≤ 1, F (t) = 1 + (1 -t) 1 n 1 -(1 -t) 1 n ≥ 1.
By using that F is a decreasing function, it follows from (3.20) and (3.24) that (3.25)
P 2 L 2 (B(0,cn √ N +1)) ≤ 2 4N +4 3δ F δ 4 2N P 2 L 2 (ω∩B(0,cn √ N +1)) .
Putting together (3.22), (3.23) and (3.25), we deduce that there exists a positive constant
C = C(δ, n) > 0 such that for all N ∈ N with c n √ N + 1 ≥ R 0 and for all f ∈ E N , (3.26) f 2 L 2 (R n ) ≤ 2 4N +6 9δ F δ 4 2N e c 2 n (N +1) f 2 L 2 (ω∩B(0,cn √ N +1)) ≤ C 2 e 2CN f 2 L 2 (ω) .
The two estimates (3.18) and (3.26) allow to prove the assertion (ii) in Theorem 2.1.
3.3.
Case when the control subset is a thick set. Let ω be a measurable subset of R n . We assume that ω is γ-thick at scale L > 0,
(3.27) ∃0 < γ ≤ 1, ∃L > 0, ∀x ∈ R n , |ω ∩ (x + [0, L] n )| ≥ γL n .
The following proof is an adaptation of the proof of the sharpened version of the Logvinenko-Sereda theorem given by Kovrijkine in [30, Theorem 1] in the one-dimensional setting, and the one given by Wang, Wang, Zhang and Zhang in [49, Lemma 2.1] in the multidimensional case.
3.3.1.
Step 1. Bad and good cubes. Let N ∈ N be a non-negative integer and f ∈ E N \ {0}.
For each multi-index α = (α 1 , ..., α n ) ∈ (LZ) n , let
Q(α) = x = (x 1 , ..., x n ) ∈ R n : ∀1 ≤ j ≤ n, |x j -α j | < L 2 .
Notice that
∀α, β ∈ (LZ) n , α = β, Q(α) ∩ Q(β) = ∅, R n = α∈(LZ) n Q(α),
where Q(α) denotes the closure of Q(α). It follows that for all f ∈ L 2 (R n ),
f 2 L 2 (R n ) = R n |f (x)| 2 dx = α∈(LZ) n Q(α) |f (x)| 2 dx.
Let δ > 0 be a positive constant to be chosen later on. We divide the family of cubes (Q(α)) α∈(LZ) n into families of good and bad cubes. A cube Q(α), with α ∈ (LZ) n , is said to be good if it satisfies (3.28)
∀β ∈ N n , Q(α) |∂ β x f (x)| 2 dx ≤ e eδ -2 8δ 2 (2 n + 1) |β| (|β|!) 2 e 2δ -1 √ N Q(α) |f (x)| 2 dx.
On the other hand, a cube Q(α), with α ∈ (LZ) n , which is not good, is said to be bad, that is,
(3.29) ∃β ∈ N n , |β| > 0, Q(α) |∂ β x f (x)| 2 dx > e eδ -2 8δ 2 (2 n + 1) |β| (|β|!) 2 e 2δ -1 √ N Q(α) |f (x)| 2 dx.
If Q(α) is a bad cube, it follows from (3.29) that there exists
β 0 ∈ N n , |β 0 | > 0 such that (3.30) Q(α) |f (x)| 2 dx ≤ e -eδ -2 8δ 2 (2 n + 1) |β 0 | (|β 0 |!) 2 e 2δ -1 √ N Q(α) |∂ β 0 x f (x)| 2 dx ≤ β∈N n ,|β|>0 e -eδ -2 8δ 2 (2 n + 1) |β| (|β|!) 2 e 2δ -1 √ N Q(α) |∂ β x f (x)| 2 dx.
By summing over all the bad cubes, we deduce from (3.30) and the Fubini-Tonelli theorem that
(3.31) bad cubes Q(α) |f (x)| 2 dx = bad cubes Q(α) |f (x)| 2 dx ≤ β∈N n ,|β|>0 e -eδ -2 8δ 2 (2 n + 1) |β| (|β|!) 2 e 2δ -1 √ N bad cubes Q(α) |∂ β x f (x)| 2 dx ≤ β∈N n ,|β|>0 e -eδ -2 8δ 2 (2 n + 1) |β| (|β|!) 2 e 2δ -1 √ N R n |∂ β x f (x)| 2 dx.
By using that the number of solutions to the equation
β 1 + ... + β n = k, with k ≥ 0, n ≥ 1 and unknown β = (β 1 , ..., β n ) ∈ N n , is given by k+n-1 k
, we obtain from the Bernstein type estimates in Proposition 4.3 (formula (i)) and (3.31) that
(3.32) bad cubes Q(α) |f (x)| 2 dx ≤ β∈N n ,|β|>0 1 2(2 n + 1) |β| f 2 L 2 (R n ) = +∞ k=1 k + n -1 k 1 2 k (2 n + 1) k f 2 L 2 (R n ) ≤ 2 n-1 +∞ k=1 1 (2 n + 1) k f 2 L 2 (R n ) = 1 2 f 2 L 2 (R n ) , since (3.33) k + n -1 k ≤ k+n-1 j=0 k + n -1 j = 2 k+n-1 .
By writing
f 2 L 2 (R n ) = good cubes Q(α) |f (x)| 2 dx + bad cubes Q(α) |f (x)| 2 dx, it follows from (3.32) that (3.34) f 2 L 2 (R n ) ≤ 2 good cubes Q(α) |f (x)| 2 dx.
3.3.2.
Step 2. Properties on good cubes. As any cube Q(α) satisfies the cone condition, the Sobolev embedding
W n,2 (Q(α)) ֒-→ L ∞ (Q(α)),
see e.g. [1, Theorem 4.12] implies that there exists a universal positive constant C n > 0 depending only on the dimension n ≥ 1 such that
(3.35) ∀u ∈ W n,2 (Q(α)), u L ∞ (Q(α)) ≤ C n u W n,2 (Q(α))
.
By translation invariance of the Lebesgue measure, notice in particular that the constant C n does not depend on the parameter α ∈ (LZ) n . Let Q(α) be a good cube. We deduce from (3.28) and (3.35) that for all β ∈ N n ,
∂ β x f L ∞ (Q(α)) ≤ C n β∈N n ,| β|≤n ∂ β+ β x f 2 L 2 (Q(α)) 1 2 (3.36) ≤ C n e eδ -2 2 e δ -1 √ N β∈N n ,| β|≤n 8δ 2 (2 n + 1) |β|+| β| (|β| + | β|)! 2 1 2 f L 2 (Q(α)) ≤ Cn (δ) 32δ 2 (2 n + 1) |β| 2 |β|!e δ -1 √ N f L 2 (Q(α)) , with (3.37) Cn (δ) = C n e eδ -2 2 β∈N n ,| β|≤n 32δ 2 (2 n + 1) | β| (| β|!) 2 1 2 > 0, since (|β| + | β|)! ≤ 2 |β|+| β| |β|!| β|!.
Recalling that f is a finite combination of Hermite functions, we deduce from the continuity of the function f and the compactness of Q(α) that there exists
x α ∈ Q(α) such that (3.38) f L ∞ (Q(α)) = |f (x α )|.
By using spherical coordinates centered at x α ∈ Q(α) and the fact that the Euclidean diameter of the cube Q(α) is √ nL, we observe that
|ω ∩ Q(α)| = +∞ 0 S n-1 1l ω∩Q(α) (x α + rσ)dσ r n-1 dr (3.39) = √ nL 0 S n-1 1l ω∩Q(α) (x α + rσ)dσ r n-1 dr = n n 2 L n 1 0 S n-1 1l ω∩Q(α) (x α + √ nLrσ)dσ r n-1 dr,
where 1l ω∩Q(α) denotes the characteristic function of the measurable set ω ∩ Q(α). By using the Fubini theorem, we deduce from (3.39) that
(3.40) |ω ∩ Q(α)| ≤ n n 2 L n 1 0 S n-1 1l ω∩Q(α) (x α + √ nLrσ)dσ dr = n n 2 L n S n-1 1 0 1l ω∩Q(α) (x α + √ nLrσ)dr dσ = n n 2 L n S n-1 1 0 1l Iσ (r)dr dσ = n n 2 L n S n-1 |I σ |dσ,
where (3.41)
I σ = {r ∈ [0, 1] : x α + √ nLrσ ∈ ω ∩ Q(α)}.
The estimate (3.40) implies that there exists σ 0 ∈ S n-1 such that
(3.42) |ω ∩ Q(α)| ≤ n n 2 L n |S n-1 ||I σ 0 |.
By using the thickness property (3.27), it follows from (3.42) that
(3.43) |I σ 0 | ≥ γ n n 2 |S n-1 | > 0. 3.3.3.
Step 3. Recovery of the L 2 (R)-norm. We first notice that f L 2 (Q(α)) = 0, since f is a non-zero entire function. We consider the entire function
(3.44) ∀z ∈ C, φ(z) = L n 2 f (x α + √ nLzσ 0 ) f L 2 (Q(α))
.
We observe from (3.38) that
|φ(0)| = L n 2 |f (x α )| f L 2 (Q(α)) = L n 2 f L ∞ (Q(α)) f L 2 (Q(α)) ≥ 1.
Instrumental in the proof is the following lemma proved by Kovrijkine in [30, Lemma 1]:
sup x∈[0,1] |f (x α + √ nLxσ 0 )| f L 2 (Q(α)) ≤ C |I σ 0 | ln M ln 2 L n 2 sup x∈Iσ 0 |f (x α + √ nLxσ 0 )| f L 2 (Q(α)) , with (3.46) M = L n 2 sup |z|≤4 |f (x α + √ nLzσ 0 )| f L 2 (Q(α)) .
It follows from (3.43) and (3.45) that
(3.47) sup x∈[0,1] |f (x α + √ nLxσ 0 )| ≤ Cn n 2 |S n-1 | γ ln M ln 2 sup x∈Iσ 0 |f (x α + √ nLxσ 0 )| ≤ M 1 ln 2 ln( Cn n 2 |S n-1 | γ ) sup x∈Iσ 0 |f (x α + √ nLxσ 0 )|.
According to (3.41), we notice that
(3.48) sup x∈Iσ 0 |f (x α + √ nLxσ 0 )| ≤ f L ∞ (ω∩Q(α)) .
On the other hand, we deduce from (3.38) that
(3.49) f L ∞ (Q(α)) = |f (x α )| ≤ sup x∈[0,1] |f (x α + √ nLxσ 0 )|.
It follows from (3.47), (3.48) and (3.49) that
(3.50) f L ∞ (Q(α)) ≤ M 1 ln 2 ln( Cn n 2 |S n-1 | γ ) f L ∞ (ω∩Q(α)) .
By using the analyticity of the function f , we observe that
(3.51) ∀z ∈ C, f (x α + √ nLzσ 0 ) = β∈N n (∂ β x f )(x α ) β! σ β 0 n |β| 2 L |β| z |β| .
By using that Q(α) is a good cube, x α ∈ Q(α) and the continuity of the functions ∂ β x f , we deduce from (3.36) and (3.51) that for all |z| ≤ 4,
(3.52) |f (x α + √ nLzσ 0 )| ≤ β∈N n |(∂ β x f )(x α )| β! (4 √ nL) |β| ≤ Cn (δ)e δ -1 √ N β∈N n |β|! β! δL 2 9 n(2 n + 1) |β| f L 2 (Q(α))
.
By using anew that the number of solutions to the equation
β 1 + ... + β n = k, with k ≥ 0, n ≥ 1 and unknown β = (β 1 , ..., β n ) ∈ N n ,
β∈N n |β|! β! δL 2 9 n(2 n + 1) |β| ≤ β∈N n δL 2 9 n 3 (2 n + 1) |β| = +∞ k=0 k + n -1 k δL 2 9 n 3 (2 n + 1) k ≤ 2 n-1 +∞ k=0 δL 2 11 n 3 (2 n + 1) k .
For now on, the positive parameter δ > 0 is fixed and taken to be equal to The positive constant C > 1 given by Lemma 3.1 may be chosen such that
(3.56) Cn n 2 |S n-1 | > 1.
With this choice, we deduce from (3.50) and (3.55) that
(3.57) f L ∞ (Q(α)) ≤ Cn n 2 |S n-1 | γ ln((4L) n 2 Cn(δ -1 n L -1 )) ln 2 + δn ln 2 L √ N f L ∞ (ω∩Q(α)) .
Recalling from the thickness property (3.27) that |ω ∩ Q(α)| ≥ γL n > 0 and setting
(3.58) ωα = x ∈ ω ∩ Q(α) : |f (x)| ≤ 2 |ω ∩ Q(α)| ω∩Q(α) |f (x)|dx , we observe that (3.59) ω∩Q(α) |f (x)|dx ≥ (ω∩Q(α))\ωα |f (x)|dx ≥ 2|(ω ∩ Q(α)) \ ωα | |ω ∩ Q(α)| ω∩Q(α) |f (x)|dx.
By using that the integral
ω∩Q(α) |f (x)|dx > 0,
is positive, since f is a non-zero entire function and |ω ∩ Q(α)| > 0, we obtain that
|(ω ∩ Q(α)) \ ωα | ≤ 1 2 |ω ∩ Q(α)|, which implies that (3.60) |ω α | = |ω ∩ Q(α)| -|(ω ∩ Q(α)) \ ωα | ≥ 1 2 |ω ∩ Q(α)| ≥ 1 2 γL n > 0,
thanks anew to the thickness property (3.27). By using again spherical coordinates as in (3.39) and (3.40), we observe that
(3.61) |ω α | = |ω α ∩ Q(α)| = n n 2 L n 1 0 S n-1 1l ωα∩Q(α) (x α + √ nLrσ)dσ r n-1 dr ≤ n n 2 L n S n-1 | Ĩσ |dσ, where (3.62) Ĩσ = {r ∈ [0, 1] : x α + √ nLrσ ∈ ωα ∩ Q(α)}.
As in (3.42), the estimate (3.61) implies that there exists σ 0 ∈ S n-1 such that
(3.63) |ω α | ≤ n n 2 L n |S n-1 || Ĩσ 0 |. We deduce from (3.60) and (3.63) that (3.64) | Ĩσ 0 | ≥ γ 2n n 2 |S n-1 | > 0. Applying anew Lemma 3.1 with I = [0, 1], E = Ĩσ 0 ⊂ [0, 1] verifying |E| = | Ĩσ 0 | > 0,
sup x∈[0,1] |f (x α + √ nLxσ 0 )| f L 2 (Q(α)) ≤ C | Ĩσ 0 | ln M ln 2 L n 2 sup x∈ Ĩσ 0 |f (x α + √ nLxσ 0 )| f L 2 (Q(α)) ,
where M denotes the constant defined in (3.46). It follows from (3.64) and (3.65) that (3.66) sup
x∈[0,1] |f (x α + √ nLxσ 0 )| ≤ 2Cn n 2 |S n-1 | γ ln M ln 2 sup x∈ Ĩσ 0 |f (x α + √ nLxσ 0 )| ≤ M 1 ln 2 ln( 2Cn n 2 |S n-1 | γ ) sup x∈ Ĩσ 0 |f (x α + √ nLxσ 0 )|.
According to (3.62), we notice that
(3.67) sup x∈ Ĩσ 0 |f (x α + √ nLxσ 0 )| ≤ f L ∞ (ωα∩Q(α)) .
It follows from (3.49), (3.66) and (3.67) that
(3.68) f L ∞ (Q(α)) ≤ M 1 ln 2 ln( 2Cn n 2 |S n-1 | γ ) f L ∞ (ωα∩Q(α)) .
On the other hand, it follows from (3.58)
(3.69) f L ∞ (ωα∩Q(α)) ≤ 2 |ω ∩ Q(α)| ω∩Q(α) |f (x)|dx.
We deduce from (3.68), (3.69) and the Cauchy-Schwarz inequality that
f L 2 (Q(α)) ≤ L n 2 f L ∞ (Q(α)) (3.70) ≤ 2L n 2 |ω ∩ Q(α)| M 1 ln 2 ln( 2Cn n 2 |S n-1 | γ ) ω∩Q(α) |f (x)|dx ≤ 2L n 2 |ω ∩ Q(α)| 1 2 M 1 ln 2 ln( 2Cn n 2 |S n-1 | γ ) f L 2 (ω∩Q(α)) .
By using the thickness property (3.27), it follows from (3.55), (3.56) and (3.70)
(3.71) f 2 L 2 (Q(α)) ≤ 4 γ M 2 ln 2 ln( 2Cn n 2 |S n-1 | γ ) f 2 L 2 (ω∩Q(α)) ≤ 4 γ (4L) n 2 Cn (δ -1 n L -1 )e δnL √ N 2 ln 2 ln( 2Cn n 2 |S n-1 | γ ) f 2 L 2 (ω∩Q(α)) . With (3.72) κ n (L, γ) = 2 3 2 γ 1 2 2Cn n 2 |S n-1 | γ ln((4L) n 2 Cn(δ -1 n L -1 )) ln 2
> 0, we deduce from (3.71) that there exists a positive universal constant κn > 0 such that for any good cube Q(α),
(3.73) f 2 L 2 (Q(α)) ≤ 1 2 κ n (L, γ) 2 κn γ 2κnL √ N f 2 L 2 (ω∩Q(α)) .
It follows from (3.34) and (3.73) that
(3.74) f 2 L 2 (R n ) ≤ 2 good cubes Q(α) |f (x)| 2 dx = 2 good cubes f 2 L 2 (Q(α)) ≤ κ n (L, γ) 2 κn γ 2κnL √ N good cubes f 2 L 2 (ω∩Q(α)) ≤ κ n (L, γ) 2 κn γ 2κnL √ N ω∩( good cubes Q(α)) |f (x)| 2 dx ≤ κ n (L, γ) 2 κn γ 2κnL √ N f 2 L 2 (ω) .
This ends the proof of assertion (iii) in Theorem 2.1.
Appendix
4.1. Hermite functions. This section is devoted to set some notations and recall basic facts about Hermite functions. The standard Hermite functions (φ k ) k≥0 are defined for x ∈ R,
(4.1)  φ_k(x) = ((−1)^k / √(2^k k! √π)) e^{x^2/2} (d^k/dx^k)(e^{−x^2}) = (1 / √(2^k k! √π)) (x − d/dx)^k (e^{−x^2/2}) = a_+^k φ_0 / √(k!),

where a_+ is the creation operator

a_+ = (1/√2) (x − d/dx).
The Hermite functions satisfy the identity
(4.2)  ∀ξ ∈ R, ∀k ≥ 0,  φ̂_k(ξ) = (−i)^k √(2π) φ_k(ξ),
when using the normalization of the Fourier transform (1.1). The L 2 -adjoint of the creation operator is the annihilation operator
a -= a * + = 1 √ 2 x + d dx .
The following identities hold
(4.3)  [a_−, a_+] = Id,   −d^2/dx^2 + x^2 = 2 a_+ a_− + 1,
(4.4)  ∀k ∈ N,  a_+ φ_k = √(k+1) φ_{k+1},   ∀k ∈ N,  a_− φ_k = √k φ_{k−1} (= 0 if k = 0),
(4.5)  ∀k ∈ N,  (−d^2/dx^2 + x^2) φ_k = (2k + 1) φ_k,

where N denotes the set of non-negative integers. The family (φ_k)_{k∈N} is an orthonormal basis of L^2(R). We set, for α = (α_j)_{1≤j≤n} ∈ N^n and x = (x_j)_{1≤j≤n} ∈ R^n,

(4.6)  Φ_α(x) = ∏_{j=1}^{n} φ_{α_j}(x_j).
The family (Φ α ) α∈N n is an orthonormal basis of L 2 (R n ) composed of the eigenfunctions of the n-dimensional harmonic oscillator
(4.7)  H = −Δ_x + |x|^2 = Σ_{k≥0} (2k + n) P_k,   Id = Σ_{k≥0} P_k,

where P_k is the orthogonal projection onto Span_C{Φ_α}_{α∈N^n, |α|=k}, with |α| = α_1 + ... + α_n.
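A quick numerical sanity check (illustrative only) of the facts recalled above, using NumPy's physicists' Hermite polynomials: the functions φ_k of (4.1) are orthonormal and satisfy (−d²/dx² + x²) φ_k = (2k+1) φ_k up to discretization error.

```python
# Build one-dimensional Hermite functions and check orthonormality and (4.5) numerically.
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

x = np.linspace(-15, 15, 20001)
dx = x[1] - x[0]

def phi(k):
    coeffs = np.zeros(k + 1); coeffs[k] = 1.0
    norm = sqrt(2.0**k * factorial(k) * sqrt(pi))
    return hermval(x, coeffs) * np.exp(-x**2 / 2) / norm

p3, p5 = phi(3), phi(5)
print((p3 * p3).sum() * dx, (p3 * p5).sum() * dx)      # ~1 and ~0: orthonormality

k = 5
lhs = -np.gradient(np.gradient(p5, dx), dx) + x**2 * p5
print(np.max(np.abs(lhs - (2 * k + 1) * p5)))          # ~0 up to finite-difference error: (4.5)
```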
The following estimates on Hermite functions are a key ingredient for the proof of the spectral inequalities (i) and (ii) in Theorem 2.1. This result was established by Bonami, Karoui and the second author in the proof of [6, Theorem 3.2], and is recalled here for the sake of completeness of the present work.
Lemma 4.1. The one-dimensional Hermite functions (φ k ) k∈N defined in (4.1) satisfy the following estimates:
∀k ∈ N, ∀a ≥ √(2k+1),   ∫_{|x|≥a} |φ_k(x)|² dx ≤ (2^{k+1} / (k! √π)) a^{2k-1} e^{-a²}.
Proof. For any k ∈ N, the k th Hermite polynomial function
(4.8) H k (x) = (-1) k e x 2 d dx k (e -x 2 ),
has degree k and is an even (respectively odd) function when k is an even (respectively odd) non-negative integer. The first Hermite polynomial functions are given by (4.9)
H 0 (x) = 1, H 1 (x) = 2x, H 2 (x) = 4x 2 -2.
The k th Hermite polynomial function H k admits k distinct real simple roots. More specifically, we recall from [44, Section 6.31] that the k roots of
H k denoted -x [ k 2 ],k , ..., -x 1,k , x 1,k , ..., x [ k 2 ],k , satisfy (4.10) - √ 2k + 1 ≤ -x [ k 2 ],k < ... < -x 1,k < 0 < x 1,k < ... < x [ k 2 ],k ≤ √ 2k + 1, with [ k 2 ]
the integer part of k 2 , when k ≥ 2 is an even positive integer. On the other hand, the k roots of
H k denoted -x [ k 2 ],k , ..., -x 1,k , x 0,k , x 1,k , ..., x [ k 2 ],k , satisfy (4.11) - √ 2k + 1 ≤ -x [ k 2 ],k < ... < -x 1,k < x 0,k = 0 < x 1,k < ... < x [ k 2 ],k ≤ √ 2k + 1,
when k is an odd positive integer. We denote by z k the largest non-negative root of the k th Hermite polynomial function H k , that is, with the above notations
z_k = x_{[k/2],k}, when k ≥ 1. Relabel temporarily as (a_j)_{1≤j≤k} the k roots of H_k, ordered so that a_1 < a_2 < ... < a_k. The classical formula
(4.12)  ∀k ∈ N*,  H'_k(x) = 2k H_{k-1}(x),
see e.g. [44, Section 5.5], together with Rolle's Theorem imply that H k-1 admits exactly one root in each of the k -1 intervals (a j , a j+1 ), with 1 ≤ j ≤ k -1, when k ≥ 2. According to (4.9), (4.10) and (4.11), it implies in particular that for all k ≥ 1,
(4.13) 0 = z 1 < z 2 < ... < z k ≤ √ 2k + 1.
Next, we claim that
(4.14) ∀k ≥ 1, ∀|x| ≥ z k , |H k (x)| ≤ 2 k |x| k .
To that end, we first observe that
(4.15) ∀k ≥ 1, ∀x ≥ z k , H k (x) ≥ 0, since the leading coefficient of H k ∈ R[X]
is given by 2 k > 0. As the polynomial function H k is an even or odd function, we notice from (4.15) that it is actually sufficient to establish that
(4.16) ∀k ≥ 1, ∀x ≥ z k , H k (x) ≤ 2 k x k ,
to prove the claim. The estimates (4.16) are proved by induction on k ≥ 1. Indeed, we observe from (4.9) that ∀x ≥ z_1 = 0, H_1(x) = 2x.
Let k ≥ 2 be such that the estimate (4.16) is satisfied at rank k - 1. It follows from (4.12) that for all x ≥ z_k, (4.17)
H k (x) = H k (x) -H k (z k ) = x z k H ′ k (t)dt = 2k x z k H k-1 (t)dt ≤ 2k x z k 2 k-1 t k-1 dt = 2 k (x k -z k k ) ≤ 2 k x k , since 0 ≤ z k-1 < z k .
This ends the proof of the claim (4.14). We deduce from (4.9), (4.13) and (4.14) that
(4.18) ∀k ∈ N, ∀|x| ≥ √ 2k + 1, |H k (x)| ≤ 2 k |x| k .
It follows from (4.1), (4.8) and (4.18) that
(4.19) ∀k ∈ N, ∀|x| ≥ √ 2k + 1, |φ k (x)| ≤ 2 k 2 √ k!π 1 4 |x| k e -x 2 2 .
We observe that
(4.20) ∀a > 0, +∞ a e -t 2 dt ≤ a -1 e -a 2 2 +∞ a te -t 2 2 dt = a -1 e -a 2 and (4.21) ∀α > 1, ∀a > √ α -1, +∞ a t α e -t 2 dt ≤ a α-1 e -a 2 2 +∞ a te -t 2 2 dt = a α-1 e -a 2 ,
as the function (a, +∞) ∋ t → t α-1 e -t 2 2 ∈ (0, +∞) is decreasing on (a, +∞). We deduce from (4.19), (4.20) and (4.21) that
(4.22) ∀k ∈ N, ∀a ≥ √ 2k + 1, |x|≥a |φ k (x)| 2 dx ≤ 2 k k!π 1 2 |x|≥a x 2k e -x 2 dx = 2 k+1 k!π 1 2 x≥a x 2k e -x 2 dx ≤ 2 k+1 k!π 1 2 a 2k-1 e -a 2 .
This ends the proof of Lemma 4.1.
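The following short numerical check is not part of the original article; it is a minimal sketch, assuming NumPy, that evaluates the Hermite functions of (4.1) through the standard three-term recurrence and compares the tail integrals appearing in Lemma 4.1 with the bound 2^{k+1}(k!)^{-1}π^{-1/2} a^{2k-1} e^{-a²}.

```python
import math
import numpy as np

def hermite_functions(n_max, x):
    """Normalized Hermite functions phi_0, ..., phi_{n_max} of (4.1) on the grid x,
    computed with the stable recurrence
    phi_{k+1} = sqrt(2/(k+1)) x phi_k - sqrt(k/(k+1)) phi_{k-1}."""
    phi = np.zeros((n_max + 1, x.size))
    phi[0] = np.pi ** (-0.25) * np.exp(-x ** 2 / 2.0)
    if n_max >= 1:
        phi[1] = np.sqrt(2.0) * x * phi[0]
    for k in range(1, n_max):
        phi[k + 1] = (np.sqrt(2.0 / (k + 1)) * x * phi[k]
                      - np.sqrt(k / (k + 1.0)) * phi[k - 1])
    return phi

x = np.linspace(-30.0, 30.0, 200001)
phi = hermite_functions(10, x)
for k in (0, 2, 5, 10):
    a = math.sqrt(2 * k + 1)   # smallest admissible threshold in Lemma 4.1
    tail = np.trapz(np.where(np.abs(x) >= a, phi[k] ** 2, 0.0), x)
    bound = (2.0 ** (k + 1) / (math.factorial(k) * math.sqrt(math.pi))
             * a ** (2 * k - 1) * math.exp(-a ** 2))
    print(k, tail <= bound, tail, bound)
```

The comparison is only a sanity check on a finite grid; it does not replace the proof above.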
We consider E N = Span C {Φ α } α∈N n ,|α|≤N the finite dimensional vector space spanned by all the Hermite functions Φ α with |α| ≤ N . The following lemma is also instrumental in the proof of Theorem 2.1 : Lemma 4.2. There exists a positive constant c n > 0 depending only on the dimension n ≥ 1 such that
∀N ∈ N, ∀f ∈ E N , |x|≥cn √ N +1 |f (x)| 2 dx ≤ 1 4 f 2 L 2 (R n ) ,
where | • | denotes the Euclidean norm on R n .
Proof. Let N ∈ N. We deduce from Lemma 4.1 and the Cauchy-Schwarz inequality that the one-dimensional Hermite functions (φ k ) k∈N satisfy for all 0 ≤ k, l ≤ N and a ≥ √ 2N + 1, (4.23)
|t|≥a |φ k (t)φ l (t)|dt ≤ |t|≥a |φ k (t)| 2 dt 1 2 |t|≥a |φ l (t)| 2 dt 1 2 ≤ 2 k+l 2 +1 √ π √ k! √ l! a k+l-1 e -a 2 .
In order to extend these estimates in the multi-dimensional setting, we first notice that for all a > 0, α, β
|x j |≥ a √ n |Φ α (x)Φ β (x)|dx = |x j |≥ a √ n |φ α j (x j )φ β j (x j )|dx j 1≤k≤n k =j R |φ α k (x k )φ β k (x k )|dx k ≤ |x j |≥ a √ n |φ α j (x j )φ β j (x j )|dx j 1≤k≤n k =j φ α k L 2 (R) φ β k L 2 (R) ,
implies that for all a ≥ √ n
√ 2N + 1, α, β ∈ N n ,
γ α γ β |x|≥a Φ α (x)Φ β (x)dx ≤ |α|≤N |β|≤N |γ α ||γ β | |x|≥a |Φ α (x)Φ β (x)|dx ≤ 2 n π e -a 2 n a |α|≤N, |β|≤N 1≤j≤n
|γ α ||γ β | α j ! β j ! 2 n a α j +β j .
For any α = (α 1 , ..., α n ) ∈ N n , we denote α ′ = (α 2 , ..., α n ) ∈ N n-1 when n ≥ 2. We observe that (4.27)
|α|≤N |β|≤N |γ α ||γ β | √ α 1 ! √ β 1 ! 2 n a α 1 +β 1 = |α ′ |≤N |β ′ |≤N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ ||γ β 1 ,β ′ | √ α 1 ! √ β 1 ! 2 n a α 1 +β 1 and
(4.28)
0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ ||γ β 1 ,β ′ | √ α 1 ! √ β 1 ! 2 n a α 1 +β 1 ≤ 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ | 2 |γ β 1 ,β ′ | 2 1 2 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | ( 2a 2 n ) α 1 +β 1 α 1 !β 1 ! 1 2 ,
thanks to the Cauchy-Schwarz inequality. On the other hand, we notice that (4.29)
0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | ( 2a 2 n ) α 1 +β 1 α 1 !β 1 ! 1 2 ≤ 4 N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | ( a 2 2n ) α 1 +β 1 α 1 !β 1 ! 1 2 ≤
|γ α ||γ β | √ α 1 ! √ β 1 ! 2 n a α 1 +β 1 ≤ 4 N e a 2 2n |α ′ |≤N |β ′ |≤N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ | 2 |γ β 1 ,β ′ | 2 1 2 .
The Cauchy-Schwarz inequality implies that (4.31)
|α ′ |≤N |β ′ |≤N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ | 2 |γ β 1 ,β ′ | 2 1 2 ≤ |α ′ |≤N |β ′ |≤N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ | 2 |γ β 1 ,β ′ | 2 1 2 |α ′ |≤N |β ′ |≤N 1 1 2 .
By using that the family (Φ α ) α∈N n is an orthonormal basis of L 2 (R n ) and that the number of solutions to the equation α 2 + ... + α n = k, with k ≥ 0, n ≥ 2 and unknown α ′ = (α 2 , ..., α n ) ∈ N n-1 , is given by k+n-2 n-2 , we deduce from (4.31) that (4.32)
|α ′ |≤N |β ′ |≤N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ | 2 |γ β 1 ,β ′ | 2 1 2 ≤ |α|≤N |γ α | 2 |α ′ |≤N 1 = N k=0 k + n -2 n -2 f 2 L 2 (R n ) ≤ 2 n-2 N k=0 2 k f 2 L 2 (R n ) ≤ 2 N +n-1 f 2 L 2 (R n ) , since k+n-2 n-2 ≤ k+n-
|γ α ||γ β | √ α 1 ! √ β 1 ! 2 n a α 1 +β 1 ≤ 2 n-1 8 N e a 2 2n f 2 L 2 (R n ) ,
when n ≥ 2. Notice that the very same estimate holds true as well in the one-dimensional case n = 1. We deduce from (4.26) and (4.33) that for all
N ∈ N, f ∈ E N and a ≥ √ n √ 2N + 1, (4.34) |x|≥a |f (x)| 2 dx ≤ 2 n n 3 2 √ π e -a 2 2n a 8 N f 2 L 2 (R n ) .
It follows from (4.34) that there exists a positive constant c n > 0 depending only on the dimension n ≥ 1 such that
∀N ∈ N, ∀f ∈ E N , |x|≥cn √ N +1 |f (x)| 2 dx ≤ 1 4 f 2 L 2 (R n ) .
This ends the proof of Lemma 4.2.
4.2. Bernstein type and weighted L 2 estimates for Hermite functions. This section is devoted to the proof of the following Bernstein type and weighted L 2 estimates for Hermite functions:
Proposition 4.3. With E N the finite dimensional vector space spanned by the Hermite functions (Φ α ) |α|≤N defined in (2.1), finite combinations of Hermite functions satisfy the following estimates:
(i) ∀N ∈ N, ∀f ∈ E N , ∀0 < δ ≤ 1, ∀β ∈ N n , ∂ β x f L 2 (R n ) ≤ e e 2δ 2 (2δ) |β| |β|!e δ -1 √ N f L 2 (R n ) . (ii) ∀N ∈ N, ∀f ∈ E N , ∀0 < δ < 1 32n , ∀β ∈ N n , e δ|x| 2 ∂ β x f L 2 (R n ) + e δ|Dx| 2 x β f L 2 (R n ) ≤ 2 n 1 -32nδ 2 N 2 2 3 2 |β| |β|! f L 2 (R n ) .
Proof. We notice that (4.35)
x j = 1 √ 2 (a j,+ + a j,-), ∂ x j = 1 √ 2 (a j,--a j,+ ), with a j,+ = 1 √ 2 (x j -∂ x j ), a j,-= 1 √ 2 (x j + ∂ x j ).
By denoting (e j ) 1≤j≤n the canonical basis of R n , we obtain from (4.4) and (4.35) that for all N ∈ N and f ∈ E N ,
a j,+ f 2 L 2 (R n ) = a j,+ |α|≤N f, Φ α L 2 Φ α 2 L 2 (R n ) = |α|≤N α j + 1 f, Φ α L 2 Φ α+e j 2 L 2 (R n ) = |α|≤N (α j + 1)| f, Φ α L 2 | 2 ≤ (N + 1) |α|≤N | f, Φ α L 2 | 2 = (N + 1) f 2 L 2 (R n ) and a j,-f 2 L 2 (R n ) = a j,- |α|≤N f, Φ α L 2 Φ α 2 L 2 (R n ) = |α|≤N √ α j f, Φ α L 2 Φ α-e j 2 L 2 (R n ) = |α|≤N α j | f, Φ α L 2 | 2 ≤ N |α|≤N | f, Φ α L 2 | 2 = N f 2 L 2 (R n ) .
It follows that for all N ∈ N and f ∈ E N , (4.36)
x j f L 2 (R n ) ≤ 1 √ 2 ( a j,+ f L 2 (R n ) + a j,-f L 2 (R n ) ) ≤ √ 2N + 2 f L 2 (R n ) and (4.37) ∂ x j f L 2 (R n ) ≤ 1 √ 2 ( a j,+ f L 2 (R n ) + a j,-f L 2 (R n ) ) ≤ √ 2N + 2 f L 2 (R n ) .
We notice from (4.4) and (4.35) that
∀N ∈ N, ∀f ∈ E N , ∀α, β ∈ N n , x α ∂ β x f ∈ E N +|α|+|β| , with x α = x α 1 1 ...x αn n and ∂ β x = ∂ β 1 x 1 ...∂ βn xn .
We deduce from (4.36) that for all N ∈ N, f ∈ E N , and α, β ∈ N n , with α 1 ≥ 1,
x α ∂ β x f L 2 (R n ) = x 1 ( x α-e 1 ∂ β x f ∈E N+|α|+|β|-1 ) L 2 (R n ) ≤ √ 2 N + |α| + |β| x α-e 1 ∂ β x f L 2 (R n ) .
By iterating the previous estimates, we readily obtain from (4.36) and (4.37) that for all
N ∈ N, f ∈ E N and α, β ∈ N n , (4.38) x α ∂ β x f L 2 (R n ) ≤ 2 |α|+|β| 2 (N + |α| + |β|)! N ! f L 2 (R n ) .
We recall the following basic estimates,
(4.39)  ∀k ∈ N*, k^k ≤ e^k k!,   ∀t, A > 0, t^A ≤ A^A e^{t-A},   ∀t > 0, ∀k ∈ N, t^k ≤ e^t k!,
≤ (2δ) |α|+|β| (δ -1 √ N ) |α|+|β| ≤ (2δ) |α|+|β| (|α| + |β|) |α|+|β| e δ -1 √ N -|α|-|β| ≤ (2δ) |α|+|β| (|α| + |β|)!e δ -1 √ N .
It follows from (4.38), (4.40) and (4.41) that for all N ∈ N, f ∈ E N and α, β ∈ N n , (4.42)
x α ∂ β x f L 2 (R n ) ≤ e e 2δ 2 (2δ) |α|+|β| (|α| + |β|)!e δ -1 √ N f L 2 (R n ) .
It provides in particular the following Bernstein type estimates
(4.43) ∀N ∈ N, ∀f ∈ E N , ∀0 < δ ≤ 1, ∀β ∈ N n , ∂ β x f L 2 (R n ) ≤ e e 2δ 2 (2δ) |β| |β|!e δ -1 √ N f L 2 (R n ) .
On the other hand, we deduce from (4.38) that for all N ∈ N, f ∈ E N and α, β ∈ N n , (4.44)
x α ∂ β x f L 2 (R n ) ≤ 2 |α|+|β| 2 (N + |α| + |β|)! N ! f L 2 (R n ) ≤ 2 N 2 2 |α|+|β| (|α| + |β|)! f L 2 (R n ) , since (k 1 + k 2 )! k 1 !k 2 ! = k 1 + k 2 k 1 ≤ k 1 +k 2 j=0 k 1 + k 2 j = 2 k 1 +k 2 .
We observe from (4.44) that for all N ∈ N, f ∈ E N , δ > 0 and α, β ∈ N n , (4.45)
δ |α| x 2α α! ∂ β x f L 2 (R n ) ≤ 2 N 2 δ |α| 2 2|α|+|β| α! (2|α| + |β|)! f L 2 (R n ) ≤ 2 N 2 δ |α| 2 4|α|+ 3 2 |β| |α|! α! |β|! f L 2 (R n ) ≤ 2 N 2 (16nδ) |α| 2 3 2 |β| |β|! f L 2 (R n ) , since (2|α| + |β|)! ≤ 2 2|α|+|β| (2|α|)!|β|! ≤ 2 4|α|+|β| (|α|!) 2 |β|! and (4.46) |α|! ≤ n |α| α!.
The last estimate is a direct consequence of the generalized Newton formula
∀x = (x 1 , ..., x n ) ∈ R n , ∀N ∈ N, n j=1 x j N = α∈N n ,|α|=N N ! α! x α .
By using that the number of solutions to the equation α 1 + ... + α n = k, with k ≥ 0, n ≥ 1 and unknown α = (α 1 , ..., α n ) ∈ N n , is given by k+n-1 n-1 , it follows from (4.45) that for all
N ∈ N, f ∈ E N , 0 < δ < 1 32n and β ∈ N n , e δ|x| 2 ∂ β x f L 2 (R n ) ≤ α∈N n δ |α| x 2α α! ∂ β x f L 2 (R n ) (4.47) ≤ 2 N 2 α∈N n (16nδ) |α| 2 3 2 |β| |β|! f L 2 (R n ) = 2 N 2 +∞ k=0 k + n -1 n -1 (16nδ) k 2 3 2 |β| |β|! f L 2 (R n ) ≤ 2 n-1 1 -32nδ 2 N 2 2 3 2 |β| |β|! f L 2 (R n ) , since k+n-1 n-1 ≤ k+n-1 j=0 k+n-1 j = 2 k+n-1
N ∈ N, f ∈ E N , 0 < δ < 1 32n and β ∈ N n , (4.48) e δ|Dx| 2 x β f L 2 (R n ) = 1 (2π) n 2 e δ|ξ| 2 ∂ β ξ f L 2 (R n ) ≤ 1 (2π) n 2 2 n-1 1 -32nδ 2 N 2 2 3 2 |β| |β|! f L 2 (R n ) = 2 n-1 1 -32nδ 2 N 2 2 3 2 |β| |β|! f L 2 (R n ) .
This ends the proof of Proposition 4.3.
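As an aside (not in the original article), the intermediate Bernstein-type bound ‖∂_{x_j} f‖_{L²} ≤ √(2N+2) ‖f‖_{L²} obtained in (4.37) is easy to probe numerically in dimension n = 1. The sketch below, assuming NumPy, draws a random element of E_N and compares the two sides; the grid, the degree N and the random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-25.0, 25.0, 100001)
N = 12

# Normalized Hermite functions phi_0 .. phi_N on the grid (three-term recurrence).
phi = np.zeros((N + 1, x.size))
phi[0] = np.pi ** (-0.25) * np.exp(-x ** 2 / 2.0)
phi[1] = np.sqrt(2.0) * x * phi[0]
for k in range(1, N):
    phi[k + 1] = np.sqrt(2.0 / (k + 1)) * x * phi[k] - np.sqrt(k / (k + 1.0)) * phi[k - 1]

c = rng.standard_normal(N + 1)   # random f = sum_k c_k phi_k, an element of E_N (n = 1)
f = c @ phi
df = np.gradient(f, x)           # numerical derivative f'

lhs = np.sqrt(np.trapz(df ** 2, x))
rhs = np.sqrt(2.0 * N + 2.0) * np.sqrt(np.trapz(f ** 2, x))
print(lhs, "<=", rhs)
```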
4.3. Gelfand-Shilov regularity. We refer the reader to the works [START_REF] Gelfand | Generalized Functions II[END_REF][START_REF] Gramchev | Classes of degenerate elliptic operators in Gelfand-Shilov spaces[END_REF][START_REF] Nicola | Global pseudo-differential calculus on Euclidean spaces, Pseudo-Differential Operators[END_REF][START_REF] Toft | Decompositions of Gelfand-Shilov kernels into kernels of similar class[END_REF] and the references therein for extensive expositions of the Gelfand-Shilov regularity theory. The Gelfand-Shilov spaces S µ ν (R n ), with µ, ν > 0, µ+ν ≥ 1, are defined as the spaces of smooth functions f ∈ C ∞ (R n ) satisfying the estimates
∃A, C > 0, |∂ α x f (x)| ≤ CA |α| (α!) µ e -1 A |x| 1/ν , x ∈ R n , α ∈ N n , or, equivalently ∃A, C > 0, sup x∈R n |x β ∂ α x f (x)| ≤ CA |α|+|β| (α!) µ (β!) ν , α, β ∈ N n .
These Gelfand-Shilov spaces S µ ν (R n ) may also be characterized as the spaces of Schwartz functions f ∈ S (R n ) satisfying the estimates
∃C > 0, ε > 0, |f (x)| ≤ Ce -ε|x| 1/ν , x ∈ R n , | f (ξ)| ≤ Ce -ε|ξ| 1/µ , ξ ∈ R n .
In particular, we notice that Hermite functions belong to the symmetric Gelfand-Shilov space S
1/2 1/2 (R n ). More generally, the symmetric Gelfand-Shilov spaces S µ µ (R n ), with µ ≥ 1/2, can be nicely characterized through the decomposition into the Hermite basis (Φ α ) α∈N n , see e.g. [45, Proposition 1.2],
f ∈ S µ µ (R n ) ⇔ f ∈ L 2 (R n ), ∃t 0 > 0, f, Φ α L 2 exp(t 0 |α| 1 2µ ) α∈N n l 2 (N n ) < +∞ ⇔ f ∈ L 2 (R n ), ∃t 0 > 0, e t 0 H 1 2µ f L 2 (R n ) < +∞,
where H = -∆ x + |x| 2 stands for the harmonic oscillator.
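The Hermite-coefficient characterization above can be illustrated numerically (a sketch added here, not part of the original article, assuming NumPy): the Gaussian f(x) = e^{-x²} belongs to S^{1/2}_{1/2}(R), so one expects its coefficients ⟨f, φ_k⟩_{L²} to decay exponentially in k, i.e. roughly geometrically.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 100001)
K = 16

# Normalized Hermite functions phi_0 .. phi_K on the grid (three-term recurrence).
phi = np.zeros((K + 1, x.size))
phi[0] = np.pi ** (-0.25) * np.exp(-x ** 2 / 2.0)
phi[1] = np.sqrt(2.0) * x * phi[0]
for k in range(1, K):
    phi[k + 1] = np.sqrt(2.0 / (k + 1)) * x * phi[k] - np.sqrt(k / (k + 1.0)) * phi[k - 1]

f = np.exp(-x ** 2)                     # a Gaussian, element of S^{1/2}_{1/2}(R)
coeff = np.array([np.trapz(f * phi[k], x) for k in range(K + 1)])

even = np.abs(coeff[0::2])              # odd-order coefficients vanish by parity
print(even)
print(even[1:] / even[:-1])             # roughly constant ratio, i.e. exponential decay
```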
4.4. Remez inequality. The classical Remez inequality [START_REF] Remez | Sur une propriété des polynômes de Tchebycheff[END_REF], see also [START_REF] Erdélyi | The Remez inequality on the size of polynomials[END_REF][START_REF] Erdélyi | Remez-type inequalities and their applications[END_REF], bounds the maximum of the absolute value of an arbitrary real polynomial function P ∈ R[X] of degree d on [-1, 1] by the maximum of its absolute value on any measurable subset E ⊂ [-1, 1] of positive Lebesgue measure 0 < |E| < 2; this is the estimate (4.49). The Remez inequality was extended to the multi-dimensional case in [START_REF] Brudnyi | A certain extremal problem for polynomials in n variables, (Russian)[END_REF], see also [START_REF] Ganzburg | Polynomial inequalities on measurable sets and their applications[END_REF]Formula (4.1)] and [START_REF] Kroó | Some extremal problems for multivariate polynomials on convex bodies[END_REF], as follows: for all convex bodies2 K ⊂ R n , measurable subsets E ⊂ K of positive Lebesgue measure 0 < |E| < |K| and real polynomial functions P ∈ R[X 1 , ..., X n ] of degree d, the supremum of |P| on K is bounded by a constant depending only on d, n and the ratio |E|/|K| times the supremum of |P| on E; this is the estimate (4.55) used below. Thanks to this estimate, we can prove that the L 2 -norm ‖ · ‖ L 2 (ω) on any measurable subset ω ⊂ R n , with n ≥ 1, of positive Lebesgue measure |ω| > 0 defines a norm on the finite dimensional vector space E N defined in (2.1). Indeed, let f be a function in E N verifying ‖f‖ L 2 (ω) = 0, with ω ⊂ R n a measurable subset of positive Lebesgue measure |ω| > 0. According to (4.1) and (4.6), there exists a complex polynomial function P ∈ C[X 1 , ..., X n ] such that ∀(x 1 , ..., x n ) ∈ R n , f(x 1 , ..., x n ) = P(x 1 , ..., x n ) e^{-(x 1 ²+...+x n ²)/2}. The condition ‖f‖ L 2 (ω) = 0 first implies that f = 0 almost everywhere in ω, and therefore that P = 0 almost everywhere in ω. We deduce from (4.55) that the polynomial function P has to be zero on any convex body K verifying |K ∩ ω| > 0, and therefore is zero everywhere. We conclude that the L 2 -norm ‖ · ‖ L 2 (ω) actually defines a norm on the finite dimensional vector space E N .
On the other hand, the Remez inequality is a key ingredient in the proof of the following instrumental lemma needed for the proof of Theorem 2.1: Lemma 4.4. Let R > 0 and ω ⊂ R n be a measurable subset verifying |ω ∩ B(0, R)| > 0. Then, the following estimate holds for all complex polynomial functions P ∈ C[X 1 , ..., X n ] of degree d,
P L 2 (B(0,R)) ≤ 2 2d+1 √ 3 4|B(0, R)| |ω ∩ B(0, R)| 1 + (1 -|ω∩B(0,R)| 4|B(0,R)| ) 1 n 1 -(1 -|ω∩B(0,R)| 4|B(0,R)| ) 1 n d P L 2 (ω∩B(0,R)) ,
where B(0, R) denotes the open Euclidean ball in R n centered in 0 with radius R > 0.
Proof. Let P ∈ C[X 1 , ..., X n ] be a non-zero complex polynomial function of degree d and R > 0. One considers the sublevel sets of |P| in B(0, R) introduced in (4.56) and applies the multi-dimensional Remez inequality (4.55); the resulting chain of estimates (4.56)-(4.66) provides the bound stated in Lemma 4.4. This ends the proof of Lemma 4.4.
[Fragments displaced by the text extraction: pieces of Section 1 and of Sections 3.1-3.2 (including the statement of Lemma 3.1 and the choice of the constant δ_n), the classical and multi-dimensional Remez inequalities (4.49)-(4.55) together with the Chebyshev polynomials T_d and U_d, the body of the proof of Lemma 4.4 (estimates (4.56)-(4.66)), and displayed estimates (4.24)-(4.48) from the proofs of Lemma 4.2 and Proposition 4.3.]
A compact convex subset of R n with non-empty interior. | 72,695 | [
"961400",
"2156",
"963516"
] | [
"75",
"27730",
"75"
] |
01766354 | en | [
"sdu"
] | 2024/03/05 22:32:13 | 2012 | https://hal.science/hal-01766354/file/doc00028875.pdf | Daniel R H O'connell
Jon P Ake
Fabian Bonilla
Pengcheng Liu
Roland Laforge
Dean Ostenaa
Strong Ground Motion Estimation
Introduction
At the time of its founding, only a few months after the great 1906 M 7.7 San Francisco Earthquake, the Seismological Society of America noted in their timeless statement of purpose "that earthquakes are dangerous chiefly because we do not take adequate precautions against their effects, whereas it is possible to insure ourselves against damage by proper studies of their geographic distribution, historical sequence, activities, and effects on buildings." Seismic source characterization, strong ground motion recordings of past earthquakes, and physical understanding of the radiation and propagation of seismic waves from earthquakes provide the basis to estimate strong ground motions to support engineering analyses and design to reduce risks to life, property, and economic health associated with earthquakes. When a building is subjected to ground shaking from an earthquake, elastic waves travel through the structure and the building begins to vibrate at various frequencies characteristic of the stiffness and shape of the building. Earthquakes generate ground motions over a wide range of frequencies, from static displacements to tens of cycles per second [Hertz (Hz)]. Most structures have resonant vibration frequencies in the 0.1 Hz to 10 Hz range. A structure is most sensitive to ground motions with frequencies near its natural resonant frequency. Damage to a building thus depends on its properties and the character of the earthquake ground motions, such as peak acceleration and velocity, duration, frequency content, kinetic energy, phasing, and spatial coherence. Strong ground motion estimation must provide estimates of all these ground motion parameters as well as realistic ground motion time histories needed for nonlinear dynamic analysis of structures to engineer earthquake-resistant buildings and critical structures, such as dams, bridges, and lifelines. Strong ground motion estimation is a relatively new science. Virtually every M > 6 earthquake in the past 35 years that provided new strong ground motion recordings produced a paradigm shift in strong motion seismology. The 1979 M 6.9 Imperial Valley, California, earthquake showed that rupture velocities could exceed shear-wave velocities over a significant portion of a fault, and produced a peak vertical acceleration > 1.5 g [START_REF] Spudich | Direct observation of rupture propagation during the 1979 Imperial Valley earthquake using a short baseline accelerometer array[END_REF]Archuleta;[START_REF] Archuleta | A faulting model for the 1979 Imperial Valley earthquake[END_REF]. The 1983 M 6.5 Coalinga, California, earthquake revealed a new class of seismic sources, blind thrust faults [START_REF] Stein | Seismicity and geometry of a 110-km-long blind thrust fault: 2. Synthesis of the 1982-1985 California earthquake sequence[END_REF]. The 1985 M 6.9 Nahanni earthquake produced horizontal accelerations of 1.2 g and a peak vertical acceleration > 2 g (Weichert et al., 1986). 
The 1989 M 7.0 Loma Prieta, California, earthquake occurred on an unidentified steeply-dipping fault adjacent to the San Andreas fault, with reverse-slip on half of the fault [START_REF] Hanks | The 1989 Loma Prieta, California, earthquake and its effects: Introduction to the Special Issue[END_REF], and produced significant damage > 100 km away related to critical reflections of shear-waves off the Moho [START_REF] Somerville | The influence of critical Moho reflections on strong ground motions recorded in San Francisco and Oakland during the 1989 Loma Prieta earthquake[END_REF][START_REF] Catchings | Reflected seismic waves and their effect on strong shaking during the 1989 Loma Prieta, California, earthquake[END_REF]. The 1992 M 7.0 Petrolia, California, earthquake produced peak horizontal accelerations > 1.4 g [START_REF] Oglesby | A faulting model for the 1992 Petrolia earthquake: Can extreme ground acceleration be a source effect?[END_REF]. The 1992 M 7.4 Landers, California, earthquake demonstrated that multisegment fault rupture could occur on fault segments with substantially different orientations that are separated by several km [START_REF] Li | Fine structure of the Landers fault zone; segmentation and the rupture process[END_REF]. The 1994 M 6.7 Northridge, California, earthquake produced a then world-record peak horizontal velocity (> 1.8 m/s) associated with rupture directivity (O'Connell, 1999a), widespread nonlinear soil responses [START_REF] Field | Nonlinear ground-motion amplification by sediments during the 1994 Northridge earthquake[END_REF][START_REF] Cultera | Nonlinear soil response in the vicinity of the Van Norman Complex following the 1994 Northridge, California, earthquake[END_REF], and resulted in substantial revision of existing ground motion-attenuation relationships [START_REF] Abrahamson | Overview[END_REF]. The 1995 M 6.9 Hyogoken Nanbu (Kobe) earthquake revealed that basin-edge generated waves can strongly amplify strong ground motions [START_REF] Kawase | The cause of the damage belt in Kobe: "The basin-edge effect," constructive interference of the direct S-wave with the basin-induced diffracted/Rayleigh waves[END_REF][START_REF] Pitarka | Three-dimensional simulation of the near-fault ground motions for the 1995 Hyogo-Nanbu (Kobe), Japan, earthquake[END_REF] and provided ground motion recordings demonstrating time-dependent nonlinear soil responses that amplified and extended the durations of strong ground motions [START_REF] Archuleta | Nonlinearity in observed and computed accelerograms[END_REF]. The 1999 M > 7.5 Izmit, Turkey, earthquakes produced asymmetric rupture velocities, including rupture velocities ~40% faster than shear-wave velocities, which may be associated with a strong velocity contrast across the faults [START_REF] Bouchon | How Fast is Rupture during an Earthquake? New Insights from the 1999 Turkey Earthquakes[END_REF]. The 1999 M 7.6 Chi-Chi, Taiwan, earthquake produced a world-record peak velocity > 3 m/s with unusually low peak accelerations [START_REF] Shin | A preliminary report of the 1999 Chi-Chi (Taiwan) earthquake[END_REF]. The 2001 M 7.7 Bhuj India demonstrated that M > 7.5 blind thrust earthquakes can occur in intraplate regions. The M 6.9 2008 Iwate-Miyagi, Japan, earthquake produced a current world-record peak vector acceleration > 4 g, with a vertical acceleration > 3.8 g (Aoi et al., 2008). 
The 2011 M 9.1 Tohoku, Japan, earthquake had a world-record peak slip on the order of 60 m (Shao et al., 2011) and produced a world-record peak horizontal acceleration of 2.7 g at > 60 km from the fault [START_REF] Nied | Off the Pacific Coast of Tohoku Earthquake, Strong Ground Motion[END_REF]. This progressive sequence of ground motion surprises suggests that the current state of knowledge in strong motion seismology is probably not adequate to make unequivocal strong ground motion predictions. However, with these caveats in mind, strong ground motion estimation provides substantial value by reducing risks associated with earthquakes and engineered structures. We present the current state of earthquake ground motion estimation. We start with seismic source characterization, because this is the most important and challenging part of the problem. To better understand the challenges of developing ground motion prediction equations (GMPE) using strong motion data, we present the physical factors that influence strong ground shaking. New calculations are presented to illustrate potential pitfalls and identify key issues relevant to ground motion estimation and future ground motion research and applications. Particular attention is devoted to probabilistic implications of all aspects of ground motion estimation.
Seismic source characterization
The strongest ground shaking generally occurs close to an earthquake fault rupture because geometric spreading reduces ground shaking amplitudes as distance from the fault increases. Robust ground motion estimation at a specific site or over a broad region is predicated on the availability of detailed geological and geophysical information about locations, geometries, and rupture characteristics of earthquake faults. These characteristics are not random, but are dictated by the physical properties of the upper crust including rock types, pre-existing faults and fractures, and strain rates and orientations. Because such information is often not readily available or complete, the resultant uncertainties of source characterization can be the dominant contributions to uncertainty in ground motion estimation. [START_REF] Lettis | Empirical observations regarding reverse earthquakes, blind thrust faults, and Quaternary deformation: Are blind thrust faults truly blind[END_REF] showed that intraplate blind thrust earthquakes with moment magnitudes up to 7 have occurred in intraplate regions where often there was no previously known direct surface evidence to suggest the existence of the buried faults. This observation has been repeatedly confirmed, even in plate boundary settings, by numerous large earthquakes of the past 30 years including several which have provided rich sets of ground motion data from faults for which neither the locations, geometries, or other seismic source characterization properties were known prior to the earthquake. Regional seismicity and geodetic measurements may provide some indication of the likely rate of earthquake occurrence in a region, but generally do not demonstrate where that deformation localizes fault displacement. Thus, an integral and necessary step in reducing ground motion estimation uncertainties in most regions remains the identification and characterization of earthquake source faults at a sufficiently detailed scale to fully exploit the full range of ground motion modelling capabilities. In the absence of detailed source characterizations, ground motion uncertainties remain large, with the likely consequence of overestimation of hazard at most locations, and potentially severe underestimation of hazard in those few locations where a future earthquake ultimately reveals the source characteristics of a nearby, currently unknown fault. The latter case is amply demonstrated by the effects of the 1983 M 6.5 Coalinga, 1986M 6.0 Whittier Narrows, 1989M 6.6 Sierra Madre, 1989M 7.0 Loma Prieta, 1992M 7.4 Landers, 1994M 6.7 Northridge, 1999 M 7.6 Chi-Chi Taiwan, 2001 M 7.7 Bhuj, India, 2010 M 7.0 Canterbury, New Zealand, and 2011 M 6.1 Christchurch, New Zealand, earthquakes. The devastating 2011 M 9.1 Tohoku, Japan, earthquake and tsunami were the result of unusually large fault displacement over a relatively small fault area (Shao et al., 2011), a source characteristic that was not forseen, but profoundly influenced strong ground shaking [START_REF] Nied | Off the Pacific Coast of Tohoku Earthquake, Strong Ground Motion[END_REF] and tsunami responses (SIAM News, 2011). All these earthquakes occurred in regions where the source faults were either unknown or major source characteristics were not recognized prior to the occurrence of these earthquakes.
Physical basis for ground motion prediction
In this section we present the physical factors that influence ground shaking in response to earthquakes. A discrete representation is used to emphasize the discrete building blocks or factors that interact to produce strong ground motions. For simplicity, we start with linear stress-strain. Nonlinear stress-strain is most commonly observed in soils and evaluated in terms of site response. This is the approach we use here; nonlinear site response is discussed in Section 4. The ground motions produced at any site by an earthquake are the result of seismic radiation associated with the dynamic faulting process and the manner in which seismic energy propagates from positions on the fault to a site of interest. We assume that fault rupture initiates at some point on the fault (the hypocenter) and proceeds outward along the fault surface. Using the representation theorem [START_REF] Spudich | Techniques for earthquake ground-motion calculation with applications to source parameterization to finite faults[END_REF]
u_k(t) = Σ_{i=1}^{n} Σ_{j=1}^{m} s_{ij}(t) * g_{kij}(t)    (1)
where k is the component of ground motion, ij are the indices of the discrete fault elements, n is the number of fault elements in the strike direction and m is the number of elements in the dip direction (Figure 3.1). We use the notation F(ω) to indicate the modulus of the Fourier transform of f(t). It is instructive to take the Fourier transform of (1) and pursue a discussion similar to [START_REF] Hutchings | Empirical Green's functions from small earthquakes -A waveform study of locally recorded aftershocks of the San Fernando earthquakes[END_REF] and [START_REF] Hutchings | Kinematic earthquake models and synthesized ground motions using empirical Green's functions[END_REF]
using,
U_k(ω) = Σ_{i=1}^{n} Σ_{j=1}^{m} S_{ij}(ω) e^{iΦ_{ij}(ω)} G_{kij}(ω) e^{iθ_{kij}(ω)}    (2)
where at each element ij, S_ij(ω) is the source slip-velocity amplitude spectrum, Φ_ij(ω) is the source phase spectrum, G_kij(ω) is the Green's function amplitude spectrum, and θ_kij(ω) is the Green's function phase spectrum. The maximum peak ground motions are produced by a combination of factors that produce constant or linear phase variations with frequency over a large frequency band. While the relations in (1) and (2) are useful for synthesizing ground motions, they don't provide particularly intuitive physical insights into the factors that contribute to produce specific ground motion characteristics, particularly large peak accelerations, velocities, and displacements. We introduce isochrones as a fundamental forensic tool for understanding the genesis of ground motions. Isochrones are then used to provide simple geometric illustrations of how directivity varies between dipping dip-slip and vertical strike-slip faults.
Isochrones analysis of rupture directivity
Bernard and Madariaga (1984) and [START_REF] Spudich | Use of ray theory to calculate high-frequency radiation from earthquake sources having spatially variable rupture velocity and stress drop[END_REF]1987) developed the isochrone integration method to compute near-source ground motions for finite-fault rupture models. Isochrones are all the positions on a fault that contribute seismic energy that arrives at a specific receiver at the same time. By plotting isochrones projected on a fault, times of large amplitudes in a ground motion time history can be associated with specific regions and characteristics of fault rupture and healing. A simple and reasonable way to employ the isochrone method for sites located near faults is to assume that all significant seismic radiation from the fault consists of first shear-wave arrivals. A further simplification is to use a simple trapezoidal slip-velocity pulse. Let f(t) be the slip function; for simplicity we assume that slip at each point on the fault begins at the rupture time t_r and ends at the healing time t_h. Then, all seismic radiation from a fault can be described with rupture and healing isochrones. Ground velocities (v) and accelerations (a) produced by rupture or healing of each point on a fault can be calculated from [START_REF] Spudich | Use of ray theory to calculate high-frequency radiation from earthquake sources having spatially variable rupture velocity and stress drop[END_REF]Zeng et al., 1991;Smedes and Archuleta, 2008)
v(x, t) = ∫_{y(t,x)} s G c dl    (3)

a(x, t) = ∫_{y(t,x)} [ c² G (∂s/∂q) + c² s (∂G/∂q) + c s G (∂c/∂q) + κ c² s G ] dl    (4)
where c is isochrone velocity, s is slip velocity (either rupture or healing), G is a ray theory Green function, x are position vectors, y(t,x) are isochrones, κ is the curvature of the isochrone, dl denotes the isochrone line integral integration increment, and ∂/∂q denotes a spatial derivative. Since isochrones are central to understanding ground motions, we provide explicit expressions for rupture and healing isochrones to illustrate how source and propagation factors can combine to affect ground motions. The arrival times of rupture at a specific receiver are
T_r(x, ξ) = t(x, ξ) + t_r(ξ)    (5)
where x is the receiver position, ξ are all fault positions, t(x, ξ) are shear-wave propagation times between the receiver and all fault positions, and t_r(ξ) are rupture times at all fault positions. The arrival times of healing at a specific receiver are
T_h(x, ξ) = T_r(x, ξ) + R(ξ)    (6)
where R are the rise times (the durations of slip) at all fault positions. [START_REF] Archuleta | A faulting model for the 1979 Imperial Valley earthquake[END_REF] showed that variations in rupture velocity had pronounced effects on calculated ground motions, whereas variations in rise times and slip-rate amplitudes cause small or predictable changes on calculated ground motions. The effect of changing slipvelocity amplitudes on ground motions is strongly governed by the geometrical attenuation (1/r for far-field terms). Any change in the slip-velocity amplitudes affects most the ground motions for sites closest to the region on the fault where large slip-velocities occurred [START_REF] Spudich | Techniques for earthquake ground-motion calculation with applications to source parameterization to finite faults[END_REF]. This is not the case with rupture velocity or rise time; these quantities influence ground motions at all sites. However, as [START_REF] Anderson | Comparison of strong ground motion from several dislocation models[END_REF] showed, it takes a 300% change in rise time to compensate for a 17% change in rupture time. [START_REF] Spudich | Dense seismograph array observations of earthquake rupture dynamics[END_REF] show why this is so. Spatial variability of rupture velocity causes the integrand in (3) to become quite rough, thereby adding considerable highfrequency energy to ground motions. The roughness of the integrand in ( 3) is caused by variations of isochrone velocity c, where
c = |∇_s T_r|^{-1}    (7)
where T_r are the isochrones from (5) and ∇_s is the surface gradient operator. Variations of T_r on the fault surface associated with supershear rupture velocities, or regions on the fault where rupture jumps discontinuously, can cause large or singular values of c, called critical points by [START_REF] Farra | Fast near source evaluation of strong ground motion for complex source models[END_REF]. [START_REF] Spudich | Use of ray theory to calculate high-frequency radiation from earthquake sources having spatially variable rupture velocity and stress drop[END_REF] showed that the reciprocal of c, the isochrone slowness, is equivalent to the seismic directivity function in the two-dimensional case. Thus, by definition, critical points produce rupture directivity, and as is shown with simulations later, need not be associated strictly with forward rupture directivity, but can occur for any site located normal to a portion of a fault plane where rupture velocities are supershear. It is useful to interpret (3) and (4) in the context of the discrete point-source summations in (1) and (2). When isochrone velocities become large on a substantial area of a fault it simply means that all the seismic energy from that portion of the fault arrives at nearly the same time at the receiver; the summation of a finite, but large number of band-limited Green's functions means that peak velocities remain finite, but potentially large. Large isochrone velocities or small isochrone slownesses over a significant region of a fault are diagnostic of ground motion amplification associated with rupture directivity; the focusing of a significant fraction of the seismic energy radiated from a fault at a particular site in a short time interval. In this way isochrones are a powerful tool to dissect ground motions in relation to various characteristics of fault rupture. Times of large ground motion amplitudes can be directly associated with the regions of the fault that have corresponding large isochrone velocities or unusually large slip velocities. From (5) and (6) it is clear that both fault rupture variations and shear-wave propagation time variations combine to determine isochrones and isochrone velocities.
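To make (5)-(7) concrete, the sketch below (added here as an illustration, not part of the original chapter; NumPy, a homogeneous shear-wave speed, and all geometry and parameter values are assumptions) grids a vertical strike-slip fault, evaluates the rupture and healing isochrone times of (5)-(6) for a surface site, and estimates the isochrone velocity c of (7) by finite differences on the fault plane.

```python
import numpy as np

beta = 3.5                     # shear-wave speed, km/s (assumed)
vr = 0.8 * beta                # rupture speed (assumed)
rise = 1.0                     # uniform rise time, s (assumed)
dx = 0.1                       # fault grid spacing, km

strike = np.arange(0.0, 20.0 + dx, dx)   # along-strike coordinate on the fault, km
depth = np.arange(0.0, 10.0 + dx, dx)    # depth coordinate on the fault, km
S, D = np.meshgrid(strike, depth, indexing="ij")

hypo = (0.0, 7.5)              # hypocenter (along-strike, depth), assumed
site = (22.0, 2.0, 0.0)        # site: 22 km along strike, 2 km off the fault trace, at the surface

# Rupture times t_r (circular rupture front) and shear-wave travel times t to the site.
t_rup = np.hypot(S - hypo[0], D - hypo[1]) / vr
t_trav = np.sqrt((S - site[0]) ** 2 + site[1] ** 2 + (D - site[2]) ** 2) / beta

T_r = t_trav + t_rup           # rupture isochrone times, eq. (5)
T_h = T_r + rise               # healing isochrone times, eq. (6)

# Isochrone velocity c = 1 / |grad_s T_r| on the fault plane, eq. (7).
dT_dstrike, dT_ddip = np.gradient(T_r, dx, dx)
c = 1.0 / np.sqrt(dT_dstrike ** 2 + dT_ddip ** 2)

print("rupture isochrone times span (s):", T_r.min(), T_r.max())
print("healing isochrone times span (s):", T_h.min(), T_h.max())
print("isochrone velocity range (km/s):", c.min(), c.max())
```

Contouring T_r and c over the fault, for several site positions, reproduces the kind of forensic maps described in the text: regions of large c identify the parts of the fault whose radiation arrives nearly simultaneously at the site.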
3.1.1
The fundamental difference between strike-slip and dip-slip directivity [START_REF] Boore | The effect of directivity on the stress parameter determined from ground motion observations[END_REF] and [START_REF] Joyner | Directivity for nonuniform ruptures[END_REF] discussed directivity using a simple line source model. A similar approach is used here to illustrate how directivity differs between vertical strike-slip faults and dipping dip-slip faults. To focus on source effects, we consider unilateral, one-dimensional ruptures in a homogenous half-space (Figure 3.2). The influence of the free surface on amplitudes is ignored. The rupture velocity is set equal to the shearwave velocity to minimize time delays and to maximize rupture directivity. To eliminate geometric spreading, stress drops increase linearly with distance from the site in a manner that produces uniform ground motion velocity contribution to the surface site for all points on the faults. Healing is ignored; only the rupture pulse is considered. Thrust dip-slip faulting is used to produce coincident rake and rupture directions. Seismic radiation is simplified to triangular slip-velocity pulses with widths of one second. For the strike-slip fault, the fault orientation and rupture directional are coincident. But, as fault rupture approaches the site, takeoff angles increase, so the radiation pattern reduces amplitudes, and total propagation distances (rupture length plus propagation distance) increase to disperse shear-wave arrivals in time (Figures 3.2a and 3.2b). The surface site located along the projection of the thrust fault to the surface receives all seismic energy from the fault at the same time, and c is infinity because the fault orientation, rupture, and shearwave propagation directions are all coincident for the entire length of the fault (Figures 3.2c and 2d). Consequently, although the strike-slip fault is 50% longer than the thrust fault, the thrust fault produces a peak amplitude 58% larger than the strike-slip fault. The thrust fault site receives maximum amplitudes over the entire radiated frequency band. High-frequency amplitudes are reduced for the strike-slip site relative to the thrust fault site because shearwaves along the strike-slip fault become increasingly delayed as rupture approaches the site, producing a broadened ground motion velocity pulse. The geometric interaction between dip-slip faults and propagation paths to surface sites located above those faults produces a kinematic recipe for maximizing both isochrone velocities and radiation patterns for surface sites that is unique to dip-slip faults. In contrast, [START_REF] Schmedes | Near-source ground motion along strike-slip faults: Insights into magnitude saturation of PGV and PGA[END_REF] use kinematic rupture simulations and isochrone analyses to show why directivity becomes bounded during strike-slip fault along long faults. 
[START_REF] Schmedes | Near-source ground motion along strike-slip faults: Insights into magnitude saturation of PGV and PGA[END_REF] consider the case of subshear rupture velocities and use critical point analyses with (3) and (4) to show that for long strike-slip ruptures there is a saturation effect for peak velocities and accelerations at sites close to the fault located at increasing distances along strike relative to the epicenter, consistent with empirical observations (Cua, 2004;Abrahamson and Silva, 2008;[START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF][START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF][START_REF] Chiou | An NGA Model for the Average Horizontal Component of Peak Ground Motion and Response Spectra[END_REF]. Dynamic fault rupture processes during dip-slip rupture complicate dip-slip directivity by switching the region of maximum fault-normal horizontal motion from the hangingwall to the footwall as fault dips increase from 50 to 60 [START_REF] O'connell | Influence of dip and velocity heterogeneity on reverse-and normal-faulting rupture dynamics and near-fault ground motions[END_REF]. Typically, seismic velocities increase with depth, which changes positions of maximum rupture directivity compared to Figure 3.2. For dip-slip faults, the region of maximum directivity is moved away from the projection of the fault to the surface, toward the hanging wall. This bias is dependent on velocity gradients, and the dip and depth of the fault. For strike-slip faults, a refracting velocity geometry can increase directivity by reducing takeoff angle deviations relative to the rupture direction for depth intervals that depend on the velocity structure and position of the surface site (Smedes and [START_REF] Schmedes | Near-source ground motion along strike-slip faults: Insights into magnitude saturation of PGV and PGA[END_REF]. When the two-dimensional nature of finite-fault rupture is considered, rupture directivity is not as strong as suggested by this one-dimensional analysis [START_REF] Bernard | Modeling directivity of heterogeneous earthquake ruptures[END_REF], but the distinct amplitude and frequency differences between ground motions produced by strike-slip and dip-slip faulting directivity remain. Full two-dimensional analyses are presented in a subsequent section. A more complete discussion of source and propagation factors influencing ground motions is presented next to provide a foundation for discussion of amplification associated with rupture directivity. The approach here is to discuss ground motions separately in terms of source and propagation factors and then to discuss how source and propagation factors can jointly interact to strongly influence ground motion behavior.
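The geometric argument of this subsection can also be restated numerically. The following sketch (an illustration with assumed geometry and wave speed, not the calculation behind Figure 3.2; NumPy assumed) compares the spread of shear-wave arrival times at a site for a buried strike-slip rupture propagating along strike toward the site with that for a thrust rupture propagating up-dip directly toward a site located at the fault's up-dip surface projection, with rupture velocity equal to the shear-wave velocity in both cases.

```python
import numpy as np

beta = 3.5                        # shear-wave speed, km/s (assumed)
vr = beta                         # rupture speed equal to the shear-wave speed, as in the text
L = np.linspace(0.0, 10.0, 501)   # distance of each fault point from the hypocenter, km

# Strike-slip case: horizontal rupture at 8 km depth toward a surface site 15 km along strike.
site_ss = np.array([15.0, 0.0])
pos_ss = np.column_stack([L, np.full_like(L, -8.0)])
t_ss = L / vr + np.linalg.norm(site_ss - pos_ss, axis=1) / beta

# Thrust case: rupture propagates up-dip (30 degree dip) directly toward the surface site,
# which sits at the up-dip projection of the fault, so rupture and ray paths stay collinear.
dip = np.radians(30.0)
pos_th = np.column_stack([L * np.cos(dip), -10.0 + L * np.sin(dip)])
site_th = np.array([10.0 / np.tan(dip), 0.0])
t_th = L / vr + np.linalg.norm(site_th - pos_th, axis=1) / beta

print("strike-slip arrival-time spread (s):", t_ss.max() - t_ss.min())
print("up-dip thrust arrival-time spread (s):", t_th.max() - t_th.min())
```

The thrust geometry collapses the arrival times to essentially a single instant, while the strike-slip geometry disperses them, which is the one-dimensional essence of the amplitude and frequency contrast discussed above.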
Seismic source amplitude and phase factors
Tables 3.1 and 3.2 summarize the source factors that influence the amplitude spectra S_ij and the phase spectra of the radiated ground motions.
The flat portion of an amplitude spectrum is composed of the frequencies less than a corner frequency, c , which is defined as the intersection of low-and high-frequency asymptotes following [START_REF] Brune | Tectonic stress and the spectra of seismic shear waves from earthquakes[END_REF]. The stress drop, , defined as the difference between an initial stress, 0 , minus the dynamic frictional stress, f , is the stress available to drive fault slip [START_REF] Aki | Strong-motion seismology[END_REF]. Rise time, R, is the duration of slip at any particular point on the fault. Rise times are heterogeneous over a fault rupture surface. Because the radiation pattern for seismic phases such as body waves and surface waves are imposed by specification of rake (slip direction) at the source and are a function of focal mechanism, radiation pattern is included in the source discussion. Regressions between moment and fault area [START_REF] Wells | New empirical relationships amoung magnitude, rupture length, rupture width, rupture area, and surface displacement[END_REF][START_REF] Somerville | Characterizing crustal earthquake slip models for the prediction of strong ground motion[END_REF]Leonard, 2010) show that uncertainties in moment magnitude and fault area are sufficient to produce moment uncertainties of 50% or more for any particular fault area. Consequently, the absolute scaling of synthesized ground motions for any faulting scenario have about factor of two uncertainties related to seismic moment (equivalently, average stress drop) uncertainties. Thus, moment-fault area uncertainties introduce a significant source of uncertainty in ground motion estimation. [START_REF] Andrews | A stochastic fault model, 2, Time-dependent case[END_REF] and [START_REF] Frankel | High-frequency spectral falloff of earthquakes, fractal dimension of complex rupture, b value, and the scaling strength on faults[END_REF] showed that correlated-random variations of stress drop over fault surfaces that produce self-similar spatial distributions of fault slip are required to explain observed ground motion frequency amplitude responses. [START_REF] Somerville | Characterizing crustal earthquake slip models for the prediction of strong ground motion[END_REF] showed that a self-similar slip model can explain inferred slip distributions for many large earthquakes and they derive relations between many fault rupture parameters and seismic moment. Their results provide support for specifying fault rupture models using a stochastic spatially varying stress drop where stress drop amplitude decays as the inverse of wavenumber to produce self-similar slip distributions. They assume that mean stress drop is independent of seismic moment. Based on their analysis and assumptions, [START_REF] Somerville | Characterizing crustal earthquake slip models for the prediction of strong ground motion[END_REF] provide recipes for specifying fault rupture parameters such as slip, rise times, and asperity dimensions as a function of moment. [START_REF] Mai | Source scaling properties from finite-fault-rupture models[END_REF] showed that 5.3 < M < 8.1 magnitude range dip-slip earthquakes follow self-similar scaling as suggest by [START_REF] Somerville | Characterizing crustal earthquake slip models for the prediction of strong ground motion[END_REF]. However, for strike-slip earthquakes, as moment increases in this magnitude range, they showed that seismic moments scale as the cube of fault length, but fault width saturates. 
Thus, for large strike-slip earthquakes average slip increases with fault rupture length, stress drop increases with magnitude, and self-similar slip scaling does not hold. The large stress drops observed for the M 7.7 1999 Chi-Chi, Taiwan thrust-faulting earthquake [START_REF] Oglesby | The three-dimensional dynamics of dipping faults[END_REF] suggest that self-similar slip scaling relations may also break down at larger moments for dip-slip events.
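The moment-slip-area arithmetic invoked above is easy to make concrete. The following short sketch (illustrative values only, not taken from the chapter; NumPy assumed) converts moment magnitude to seismic moment with the standard relation of Hanks and Kanamori (1979) and divides by μA to obtain the average slip, so a factor-of-two spread in the moment assigned to a fixed fault area maps directly into a factor-of-two spread in average slip.

```python
import numpy as np

def moment_from_magnitude(Mw):
    """Seismic moment in N*m from moment magnitude (Hanks and Kanamori, 1979)."""
    return 10.0 ** (1.5 * Mw + 9.05)

mu = 3.3e10                    # crustal shear modulus, Pa (assumed representative value)
area = 40e3 * 15e3             # 40 km x 15 km rupture area, m^2 (assumed)

for Mw in (6.5, 7.0):
    M0 = moment_from_magnitude(Mw)
    avg_slip = M0 / (mu * area)
    print(f"Mw {Mw}: M0 = {M0:.2e} N*m, average slip = {avg_slip:.2f} m")
```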
Table 3.1. Source factors that influence the amplitude spectra S_ij.
Factor: Moment rate, S_ij^{M0}
Influence: Moment rate scales peak velocities and accelerations. Moment determines the average slip for a fixed fault area and known shear moduli.
Factor: Stress drop, S_ij^{Δσ}
Influence: Since S_ij ...
Factor: S_ij^{C}
Influence: Diffraction at the crack tip introduces a frequency dependent amplitude to the radiation pattern [START_REF] Madariaga | High-frequency radiation from crack (stress-drop) models of earthquake faulting[END_REF][START_REF] Boatwright | A dynamic model for far-field acceleration[END_REF][START_REF] Fukuyama | Integral equation method for plane crack with arbitary shape in 3D elastic medium[END_REF].
Factor: Dynamics, S_ij^{D}
Influence: Fault rupture in heterogeneous velocity structure can produce anisotropic slip velocities relative to rupture direction [START_REF] Harris | Effects of a low-velocity zone on dynamic rupture[END_REF] and slip velocities and directivity are a function of rake and dip for dip-slip faults [START_REF] Oglesby | Earthquakes on dipping faults:The effects of broken symmetry[END_REF]2000;[START_REF] O'connell | Influence of dip and velocity heterogeneity on reverse-and normal-faulting rupture dynamics and near-fault ground motions[END_REF]. Frictional heating, fault zone fluids, and melting may also influence radiated energy (Kanamori and Brodsky, 2001;Andrews, 200X).

Table 3.2. Source factors that influence the phase spectra Φ_ij.
Factor: Rupture velocity, Φ_ij^{Vr}
Influence: High rupture velocities increase directivity. Rupture velocities interact with stress drops and rise times to modify the amplitude spectrum. Supershear rupture velocities can increase directivity far from the fault (Andrews, 2010).
Factor: Healing velocity, Φ_ij^{Vh}
Influence: High healing velocities increase amplification associated with directivity. Healing velocities interact with stress drop and rise time variations to modify the amplitude spectrum, although to a smaller degree than rupture velocities, since rupture slip velocities are typically several times larger than healing slip velocities.
Factor: Rake, Φ_ij^{A}
Influence: Rake and spatial and temporal rake variations scale amplitudes as a function of azimuth and take-off angle. Rake spatial and temporal variations over a fault increase the spatial complexity of radiation pattern amplitude variations and produce frequency-dependent amplitude variability.
Factor: Rise time, Φ_ij^{R}
Influence: Since ω_c ∝ R^{-1}, ... Diffraction at the crack tip introduces a frequency dependent amplitude to the radiation pattern [START_REF] Madariaga | High-frequency radiation from crack (stress-drop) models of earthquake faulting[END_REF][START_REF] Boatwright | A dynamic model for far-field acceleration[END_REF][START_REF] Fukuyama | Integral equation method for plane crack with arbitary shape in 3D elastic medium[END_REF].
Factor: Dynamics, Φ_ij^{D}
Influence: The same dynamic processes identified in Table 3.1 produce corresponding phase variability.

Oglesby et al. (1998;2000) showed that stress drop behaviors are fundamentally different between dipping reverse and normal faults. These results suggest that stress drop may be focal mechanism and magnitude dependent. There are still significant uncertainties as to the appropriate specifications of fault rupture parameters to simulate strong ground motions, particularly for larger magnitude earthquakes. [START_REF] O'connell | Influence of dip and velocity heterogeneity on reverse-and normal-faulting rupture dynamics and near-fault ground motions[END_REF] used dynamic rupture simulations to show that, in homogeneous and weakly heterogeneous half-spaces with faults dipping ≲50°, maximum fault-normal peak velocities occurred on the hanging wall. However, for fault dips ≳50°, maximum fault-normal peak velocities occurred on the footwall. Their results indicate that simple amplitude parameterizations based on the hanging wall and/or footwall and the fault normal and/or fault parallel currently used in ground motion prediction relations may not be appropriate for some faults with dips > 50°. Thus, the details of appropriate spatial specification of stress drops and/or slip velocities as a function of focal mechanism, magnitude, and fault dip are yet to be fully resolved. [START_REF] Day | Three-dimensional simulation of spontaneous rupture: The effect of nonuniform prestress[END_REF] showed that intersonic rupture velocities (β < V_r < α) can occur during earthquakes, particularly in regions of high prestress (asperities), and that peak slip velocity is strongly coupled to rupture velocity for non-uniform prestresses. While average rupture velocities typically remain subshear, high-stress asperities can produce local regions of supershear rupture combined with high slip velocities. Supershear rupture velocities have been observed or inferred to have occurred during several earthquakes, including the M 6.9
1979 Imperial Valley strike-slip earthquake (Olson and Apsel, 1982;[START_REF] Spudich | Direct observation of rupture propagation during the 1979 Imperial Valley earthquake using a short baseline accelerometer array[END_REF][START_REF] Archuleta | A faulting model for the 1979 Imperial Valley earthquake[END_REF], the M 6.9 1980 Irpinia normal-faulting earthquake [START_REF] Belardinelli | Redistribution of dynamic stress during coseismic ruptures: Evidence for fault interaction and earthquake triggering[END_REF], the M 7.0 1992 Petrolia thrust-faulting earthquake [START_REF] Oglesby | A faulting model for the 1992 Petrolia earthquake: Can extreme ground acceleration be a source effect?[END_REF], the M 7.3 Landers strike-slip earthquake [START_REF] Olsen | Three-dimensional dynamic simulation of the 1992 Landers earthquake[END_REF][START_REF] Bouchon | Stress field associated with the rupture of the 1992 Landers, California, earthquake and its implications concerning the fault strenght at the onset of the earthquake[END_REF][START_REF] Hernandez | Contribution of radar interfermetry to a two-step inversion of the kinematic process of the 1992 Landers earthquake[END_REF] the M 6.7 1994 Northridge thrust-faulting earthquake [START_REF] O'connell | Possible super-shear rupture velocities during the 1994 Northridge earthquake[END_REF], and the 1999 M 7.5 Izmit and M 7.3 Duzce Turkey strike-slip earthquakes [START_REF] Bouchon | How Fast is Rupture during an Earthquake? New Insights from the 1999 Turkey Earthquakes[END_REF]. Bouchon et al. (2010) find that surface trace of the portions of strike-slip faults with inferred supershear rupture velocities are remarkably linear, continuous and narrow, that segmentation features along these segments are small or absent, and the deformation is highly localized. [START_REF] O'connell | Possible super-shear rupture velocities during the 1994 Northridge earthquake[END_REF] postulates that subshear rupture on the faster footwall in the deeper portion of the Northridge fault relative to the hangingwall produced supershear rupture in relation to hangingwall velocities and contributed to the large peak velocities observed on the hangingwall. [START_REF] Harris | Effects of a low-velocity zone on dynamic rupture[END_REF] showed that rupture velocities and slip-velocity functions are significantly modified when a fault is bounded on one side by a low-velocity zone. The lowvelocity zone can produce asymmetry of rupture velocity and slip velocity. This type of velocity heterogeneity produces an asymmetry in seismic radiation pattern and abrupt and/or systematic spatial variations in rupture velocity. These differences are most significant in regions subject to rupture directivity, and may lead to substantially different peak ground motions occurring at either end of a strike slip fault [START_REF] Bouchon | How Fast is Rupture during an Earthquake? New Insights from the 1999 Turkey Earthquakes[END_REF]. Thus, the position of a site relative to the fast and slow sides of a fault and rupture direction may be significant in terms of the dynamic stress drops and rupture velocities that are attainable in the direction of the site. Observations and numerical modeling show that the details of stress distribution on the fault can produce complex rupture velocity distributions and even discontinuous rupture, factors not typically accounted for in kinematic rupture models used to predict ground motions (e.g. 
[START_REF] Somerville | Simulations of strong ground motions recorded during the Michoacan, Mexico and Valparaiso, Chile, earthquakes[END_REF][START_REF] Schneider | Ground motion model for the 1989 M 6.9 Loma Prieta earthquake including effects of source, path, and site[END_REF][START_REF] Hutchings | Kinematic earthquake models and synthesized ground motions using empirical Green's functions[END_REF][START_REF] Tumarkin | Scaling relations for composite earthquake models[END_REF][START_REF] Zeng | A composite source model for computing realistic strong ground motions[END_REF][START_REF] Beresnev | Modeling finite-fault radiation from the n spectrum[END_REF]O'Connell, 1999c). Even if only smooth variations of subshear rupture velocities are considered (0.6 Vs < Vr < 1.0 Vs), rupture velocity variability introduces ground motion estimation uncertainties of at least a factor of two [START_REF] Beresnev | Modeling finite-fault radiation from the n spectrum[END_REF], and larger uncertainties for sites subject to directivity. Rupture direction may change due to strength or stress heterogeneities on a fault. [START_REF] Beroza | Linearized inversion for fault rupture behavior: Application to the 1984 Morgan Hill, California, earthquake[END_REF] inferred that rupture was delayed and then progressed back toward the hypocenter during the M 6.2 1984 Morgan Hill earthquake. [START_REF] Oglesby | A faulting model for the 1992 Petrolia earthquake: Can extreme ground acceleration be a source effect?[END_REF] inferred that arcuate rupture of an asperity may have produced accelerations > 1.40 g at Cape Mendocino during the M 7.0 1992 Petrolia earthquake. These results are compatible with numerical simulations of fault rupture on a heterogeneous fault plane. [START_REF] Das | A numerical study of two-dimensional spontaneous rupture propagation[END_REF] modeled rupture for a fault plane with high-strength barriers and found that rupture could occur discontinuously beyond strong regions, which may subsequently rupture or remain unbroken. [START_REF] Day | Three-dimensional simulation of spontaneous rupture: The effect of nonuniform prestress[END_REF] found that rupture was very complex for the case of nonuniform prestress and that rupture jumped beyond some points on the fault, leaving unbroken areas behind the rupture which subsequently ruptured. In the case of a slip-resistant asperity, [START_REF] Das | Breaking of a single asperity: Rupture process and seismic radiation[END_REF] found that when rupture began at the edge of the asperity, it proceeded first around the perimeter and then failed inward in a "double pincer movement". Thus, even the details of rupture propagation direction are not truly specified once a hypocenter position is selected. [START_REF] Guatteri | Coseismic temporal changes of slip direction: The effect of absolute stress on dynamic rupture[END_REF] showed that time-dependent dynamic rake rotations on a fault become more likely when low absolute stresses on a fault are combined with heterogeneous distributions of stress and nearly complete stress drops. [START_REF] Pitarka | Simulation of near-fault strong-ground motion using hybrid Green's functions[END_REF] found that eliminating radiation pattern coherence between 1 Hz and 3 Hz reproduced observed ground motions for the 1995 M 6.9 Hyogo-ken Nanbu (Kobe) earthquake.
[START_REF] Spudich | Use of fault striations and dislocation models to infer tectonic shear stress during the 1995 Hyogo-ken Nanbu (Kobe) earthquake[END_REF] used fault striations to infer that the Nojima fault slipped at low stress levels with substantial rake rotations occurring during the 1995 Hyogo-ken Nanbu earthquake. This dynamic rake rotation can reduce radiation-pattern coherence at increasing frequencies by increasingly randomizing rake directions for decreasing time intervals near the initiation of slip at each point on a fault, for increasingly complex initial stress distributions on faults. [START_REF] Vidale | Influence of focal mechanism on peak accelerations of strong motions of the Whittier Narrows, California, earthquake and an aftershock[END_REF] showed that the standard double-couple radiation pattern is observable to 6 Hz based on analysis of the mainshock and an aftershock from the Whittier Narrows, California, thrust-faulting earthquake sequence. In contrast, [START_REF] Liu | The 23:19 aftershock of the 15 October 1979 Imperial Valley earthquake: More evidence for an asperity[END_REF] found that a double-couple radiation pattern was only discernible for frequencies extending to 1 Hz based on analysis of the 1979 Imperial Valley earthquake and an aftershock. [START_REF] Bent | Source complexity of the October 1, 1987, Whittier Narrows earthquake[END_REF] estimate a stress drop of 75 MPa for the 1987 Whittier Narrows M 6.1 thrust-faulting earthquake, but allow for a stress drop as low as 15.5 MPa. The case of high initial, nearly homogeneous stresses that minimize rake rotations may produce high-frequency radiation pattern coherence as observed by [START_REF] Vidale | Influence of focal mechanism on peak accelerations of strong motions of the Whittier Narrows, California, earthquake and an aftershock[END_REF]. These results suggest that there may be a correlation between the maximum frequency of radiation pattern coherence, initial stress state on a fault, focal mechanism, and stress drop. Three-dimensional simulations and dense-array observations also show that basin structure can strongly amplify and extend the duration of strong ground motions (e.g., [START_REF] Frankel | A three-dimensional simulation of seimic waves in the Santa Clara Valley, California, from a Loma Prieta aftershock[END_REF][START_REF] Frankel | Three-dimensional simulations of ground motions in the San Bernardino Valley, California, for hypothetical earthquakes on the San Andreas fault[END_REF][START_REF] Olsen | Three-dimensional simulation of earthquakes on the Los Angeles fault system[END_REF][START_REF] Wald | The seismic response of the Los Angeles Basin, California[END_REF][START_REF] Archuleta | Direct observation of nonlinear soil response in acceleration time histories[END_REF][START_REF] Frankel | Three-dimensional simulations of ground motins in the Seattle region for earthquakes in the Seattle fault zone[END_REF][START_REF] Koketsu | Propagation of seismic ground motion in the Kanto Basin, Japan[END_REF][START_REF] Frankel | Observations of basin ground motions from a dense seismic array in San Jose, California[END_REF]).
Basin-edge waves can substantially amplify strong ground motions in basins [START_REF] Liu | Array analysis of the ground velocities and accelerations from the 1971 San Fernando, California, earthquake[END_REF][START_REF] Frankel | High-frequency spectral falloff of earthquakes, fractal dimension of complex rupture, b value, and the scaling strength on faults[END_REF][START_REF] Phillips | Basin-induced Love waves observed using the strong motion array at Fuchu, Japan[END_REF][START_REF] Spudich | The seismic coda, site effects, and scattering in alluvial basins studied using aftershocks of the 1986 North Palm Springs, California, earthquakes as source arrays[END_REF][START_REF] Kawase | The cause of the damage belt in Kobe: "The basin-edge effect," constructive interference of the direct S-wave with the basin-induced diffracted/Rayleigh waves[END_REF][START_REF] Pitarka | Three-dimensional simulation of the near-fault ground motions for the 1995 Hyogo-Nanbu (Kobe), Japan, earthquake[END_REF][START_REF] Frankel | Observations of basin ground motions from a dense seismic array in San Jose, California[END_REF]. This is a particular concern for fault-bounded basins where rupture directivity can constructively interact with basin-edge waves to produce extended zones of extreme ground motions [START_REF] Kawase | The cause of the damage belt in Kobe: "The basin-edge effect," constructive interference of the direct S-wave with the basin-induced diffracted/Rayleigh waves[END_REF][START_REF] Pitarka | Three-dimensional simulation of the near-fault ground motions for the 1995 Hyogo-Nanbu (Kobe), Japan, earthquake[END_REF], a topic revisited later in the paper. Even smaller scale basin or lens structures on the order of several kilometers in diameter can produce substantial amplification of strong ground motions [START_REF] Alex | Lens-effect in Santa Monica?[END_REF][START_REF] Graves | Ground motion amplification in the Santa Monica area: Effects of shallow basin structure[END_REF][START_REF] Davis | Northridge earthquake damage caused by geologic focusing of seismic waves[END_REF]. Basin-edge waves can be composed of both body and surface waves [START_REF] Spudich | The seismic coda, site effects, and scattering in alluvial basins studied using aftershocks of the 1986 North Palm Springs, California, earthquakes as source arrays[END_REF][START_REF] Meremonte | Urban seismology: Northridge aftershocks recorded by multiscale arrays of portable digital seismographs[END_REF][START_REF] Frankel | Observations of basin ground motions from a dense seismic array in San Jose, California[END_REF], which provides a rich wavefield for constructive interference phenomena over a broad frequency range. Critical reflections off the Moho can produce amplification at distances > ~75-100 km [START_REF] Somerville | The influence of critical Moho reflections on strong ground motions recorded in San Francisco and Oakland during the 1989 Loma Prieta earthquake[END_REF][START_REF] Catchings | Reflected seismic waves and their effect on strong shaking during the 1989 Loma Prieta, California, earthquake[END_REF]. The depth to the Moho, hypocentral depth, direction of rupture (updip versus downdip), and focal mechanism determine the amplification and the distance range over which Moho reflections may be important.
For instance, [START_REF] Catchings | New Madrid and central California apparent Q values as determined from seismic refraction data[END_REF] showed that Moho reflections amplify ground motions in the > 100 km distance range in the vicinity of the New Madrid seismic zone in the central United States.
Seismic wave propagation amplitude and phase factors
Geometric spreading, G_kij^r: Amplitudes decrease with distance as 1/r, 1/r^2, and 1/r^4 for body waves and as 1/sqrt(r) for surface waves. The 1/r term has the strongest influence on high-frequency ground motions. The 1/sqrt(r) term can be significant for locally generated surface waves.
Large-scale velocity structure, G_kij^3DV: Horizontal and vertical velocity gradients and velocity discontinuities can increase or decrease amplitudes and durations. Low-velocity basins can amplify and extend ground motion durations. Abrupt changes in lateral velocity structure can induce basin-edge waves in the lower-velocity material that amplify ground motions.
Near-surface resonant responses, G_kij^L: Low-velocity near-surface materials can amplify ground motions through resonant responses.
Frequency-independent attenuation, G_kij^Q: Linear hysteretic behavior that reduces amplitudes of the form exp(-pi f r / (Q beta)).
High-frequency attenuation, G_kij^kappa: Strong attenuation of high frequencies in the shallow crust of the form exp(-pi kappa f).
Scattering, G_kij^S: Scattering tends to reduce amplitudes on average, but introduces high-amplitude caustics and low-amplitude shadow zones and produces nearly log-normal distributions of amplitudes (O'Connell, 1999a).
Anisotropy, G_kij^A: Complicates shear-wave amplitudes, modifies radiation pattern amplitudes, and can introduce frequency-dependent amplification based on direction of polarization.
Topography, G_kij^T: Can produce amplification near topographic highs and introduces an additional source of scattering.
Table 3. Seismic Wave Propagation Amplitude Factors (G_kij)
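To make the two attenuation entries in Table 3 concrete, the short Python sketch below evaluates the whole-path anelastic factor exp(-pi f r / (Q beta)) and the near-site high-frequency diminution exp(-pi kappa f); the numerical values of Q, beta, r, and kappa are illustrative assumptions, not values taken from any particular study.

```python
import numpy as np

def anelastic_attenuation(f, r_km, Q, beta_kms):
    """Whole-path attenuation factor exp(-pi*f*r / (Q*beta))."""
    return np.exp(-np.pi * f * r_km / (Q * beta_kms))

def kappa_diminution(f, kappa_s):
    """Near-surface high-frequency diminution factor exp(-pi*kappa*f)."""
    return np.exp(-np.pi * kappa_s * f)

f = np.linspace(0.1, 20.0, 200)                       # frequency (Hz)
G_Q = anelastic_attenuation(f, r_km=30.0, Q=200.0, beta_kms=3.5)
G_kappa = kappa_diminution(f, kappa_s=0.04)
print(G_Q[-1], G_kappa[-1])                           # high-frequency losses at 20 Hz
```

Both factors are near unity at low frequencies and decay strongly above a few hertz, which is why they matter most for peak accelerations rather than long-period motions.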
Numerous studies have demonstrated that the seismic velocities in the upper 30 to 60 m can greatly influence the amplitudes of earthquake ground motions at the surface (e.g. [START_REF] Borcherdt | Progress on ground motion predictions for the San Francisco Bay region, California, in Progress on Seismic Zonation in the San Francisco Bay Region[END_REF][START_REF] Joyner | The effect of Quaternary alluvium on strong ground motion in the Coyote Lake, California earthquake of 1979[END_REF][START_REF] Seed | The Mexico earthquake of September 19, 1985 -relationships between soil conditions and earthquake ground motions[END_REF]). [START_REF] Williams | Surface seismic measurements of near-surface P-and S-wave seismic velocities at earthquake recording stations[END_REF] showed that significant resonances can occur for impedance boundaries as shallow as 7-m depth. Boore and Joyner (1997) compared the amplification of generic rock sites with that of very hard rock sites using velocities averaged over the upper 30 m. They defined very hard rock sites as sites that have shear-wave velocities at the surface > 2.7 km/s and generic rock sites as sites where shear-wave velocities at the surface are ~0.6 km/s and increase to > 1 km/s at 30 m depth. Boore and Joyner (1997) found that amplifications on generic rock sites can be in excess of 3.5 at high frequencies, in contrast to amplifications of less than 1.2 on very hard rock sites. Considering the combined effect of attenuation and amplification, amplification for generic rock sites peaks between 2 and 5 Hz at a maximum value less than 1.8 (Boore and Joyner, 1997).
Geometric spreading, φ_kij^r: Introduces frequency-dependent propagation delays.
Large-scale velocity structure, φ_kij^3DV: Horizontal and vertical velocity and density gradients and velocity and density discontinuities produce frequency-dependent phase shifts.
Near-surface resonant responses, φ_kij^L: Interactions of shear-wave arrivals of varying angles of incidence and directions produce frequency-dependent phase shifts.
Nonlinear soil responses, φ_kij^N(u) (equivalent linear) and φ_kij^N(u,t) (fully nonlinear): Depending on the dynamic soil properties and pore pressure responses, nonlinear responses can increase or reduce phase dispersion. In the case of pore pressure coupled with dilatant materials, nonlinear responses can collapse phase, producing intermittent amplification caustics.
Frequency-independent attenuation, φ_kij^Q: Linear hysteretic behavior produces frequency-dependent velocity dispersion that produces frequency-dependent phase variations.
Scattering, φ_kij^S: The scattering strength and scattering characteristics determine the propagation distances required to randomize the phase of shear waves as a function of frequency.
Anisotropy, φ_kij^A: Complicates shear-wave polarizations and modifies radiation pattern polarizations.
Topography, φ_kij^T: Complicates phase as a function of topographic length scale and near-surface velocities.
Table 4. Seismic Wave Propagation Phase Factors (φ_kij)
A common site-response estimation method is the horizontal-to-vertical (H/V) spectral ratio of shear waves [START_REF] Lermo | Site effect evaluation using spectral ratios with only one station[END_REF], used to test for site resonances. The H/V method is similar to the receiver-function method of [START_REF] Langston | Structure under Mount Ranier, Washington, inferred from teleseismic body waves[END_REF].
Several investigations have shown that the H/V approach provides robust estimates of resonant frequencies (e.g., [START_REF] Field | A comparison and test of various site response estimation techniques including threee that are not reference site dependent[END_REF][START_REF] Castro | S-wave site-response estimates using horizontal-to-vertical spectra ratios[END_REF][START_REF] Tsubio | Verification of horizontal-to-vertical spectral-ratio technique for estimate of site response using borehole seismographs[END_REF]), although absolute amplification factors are less well resolved ([START_REF] Castro | S-wave site-response estimates using horizontal-to-vertical spectra ratios[END_REF]Bonilla et al., 1997).
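A minimal sketch of the H/V spectral-ratio computation described above, assuming three equal-length acceleration components sampled at interval dt; a production implementation would add S-wave windowing, instrument correction, and averaging over many events, all omitted here.

```python
import numpy as np

def hv_ratio(ns, ew, ud, dt, smooth_pts=11):
    """Horizontal-to-vertical spectral ratio of co-recorded components."""
    n = len(ud)
    freq = np.fft.rfftfreq(n, dt)
    taper = np.hanning(n)
    spec = lambda x: np.abs(np.fft.rfft(x * taper))
    horiz = np.sqrt(spec(ns) ** 2 + spec(ew) ** 2)    # combined horizontal amplitude
    vert = spec(ud)
    kernel = np.ones(smooth_pts) / smooth_pts         # simple moving-average smoothing
    horiz = np.convolve(horiz, kernel, mode="same")
    vert = np.convolve(vert, kernel, mode="same")
    return freq, horiz / np.maximum(vert, 1e-12)
```

Peaks in the returned ratio identify candidate site resonant frequencies; the absolute level of the ratio should be interpreted with caution, consistent with the studies cited above.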
One-dimensional site-response approaches may fail to quantify site amplification in cases when upper-crustal three-dimensional velocity structure is complex. In southern California, [START_REF] Field | A modified ground-motion attenaution relationship for southern California that accounts for detailed site classification and a basin-depth effect[END_REF] found that the basin effect had a stronger influence on peak acceleration than the detailed geology used to classify site responses. [START_REF] Hartzell | Variability of site response in Seattle, Washington[END_REF] found that site amplification characteristics at some sites in the Seattle region cannot be explained using 1D or 2D velocity models, but that 3D velocity structure must be considered to fully explain local site responses. [START_REF] Chavez-Garcia | Lateral propagation effects observed at Parkway, New Zealand. A case history to compare 1D versus 2D site effects[END_REF] showed that laterally propagating basin-generated surface waves cannot be differentiated from 1D site effects using frequency-domain techniques such as H/V ratios or reference-site ratios. The ability to conduct site-specific ground motion investigations is predicated on the existence of geological, geophysical, and geotechnical engineering data to realistically characterize earthquake sources, crustal velocity structure, local site structure and conditions, and to estimate the resultant seismic responses at a site. Lack of information about 3D variations in local and crustal velocity structure is a serious impediment to ground motion estimation.
It is now recognized that correlated-random 3D velocity heterogeneity is an intrinsic property of Earth's crust (see [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF] for a discussion). Correlated-random means that random velocity fluctuations are dependent on surrounding velocities, with the dependence being inversely proportional to distance. Weak (standard deviation, σ, of ~5%), random fractal crustal velocity variations are required to explain observed short-period (T < 1 s) body-wave travel time variations, coda amplitudes, and coda durations for ground motions recorded over length scales of tens of kilometers to tens of meters [START_REF] Frankel | Finite difference simulations of seismic scattering: implications for the propagation of short-period seismic waves in the crust and models of crustal heterogeneity[END_REF], most well-log data [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF], the frequency dependence of shear-wave attenuation [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF], and envelope broadening of shear waves with distance [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF]. As a natural consequence of energy conservation, the excitation of coda waves in the crust means that direct waves (particularly direct shear waves that dominate peak ground motions) that propagate along the minimum travel-time path from the source to the receiver lose energy with increasing propagation distance as a result of the dispersion of energy in time and space. Following [START_REF] Frankel | Finite difference simulations of seismic scattering: implications for the propagation of short-period seismic waves in the crust and models of crustal heterogeneity[END_REF], fractal, self-similar velocity fluctuations are described with an autocorrelation function, P, of the form,
P(k_r) = a^n / (1 + k_r^n a^n)    (8)
where a is the correlation distance, k_r is radial wavenumber, n=2 in 2D, and n=3 in 3D. When n=4 an exponential power law results [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF]. Smoothness increases with distance as a increases in (8), and overall smoothness is proportional to n in (8). This is a more realistic model of spatial geologic material variations than completely uncorrelated, spatially independent, random velocity variations. "Correlated-random" is shortened here to "random" for brevity. Let λ denote wavelength. Forward scattering dominates when λ << a [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF]. The situation is complicated in self-similar fractal media when considering a broad frequency range relevant to strong motion seismology (0.1 to 10 Hz) because λ spans the range λ >> a to λ << a and both forward scattering and backscattering become important, particularly as n decreases in (8). Thus, it is difficult to develop simple rigorous expressions to quantify amplitude and phase terms associated with wave propagation through the heterogeneous crust (see [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF]). O'Connell (1999a) showed that direct shear-wave scattering produced by P-SV-wave coupling associated with vertical velocity gradients typical of southern California, combined with 3D velocity variations with n=2 and a standard deviation of velocity variations of 5% in (8), reduces high-frequency peak ground motions for sediment sites close to earthquake faults. O'Connell (1999a) showed that crustal scattering could substantially influence the amplification of near-fault ground motions in areas subjected to significant directivity. Scattering also determines the propagation distances required to randomize phase, as discussed later in this paper. Dynamic reduction of soil moduli and increases in damping with increasing shear strain can substantially modify ground motion amplitudes as a function of frequency [START_REF] Ishihara | Soil Behavior in Earthquake Geotechnics[END_REF]. While there has been evidence of nonlinear soil response in surface strong motion recordings [START_REF] Field | Nonlinear ground-motion amplification by sediments during the 1994 Northridge earthquake[END_REF][START_REF] Cultera | Nonlinear soil response in the vicinity of the Van Norman Complex following the 1994 Northridge, California, earthquake[END_REF], interpretation of these surface records solely in terms of soil nonlinearity is intrinsically non-unique (O'Connell, 1999a). In contrast, downhole strong motion arrays have provided definitive evidence of soil nonlinearity consistent with laboratory testing of soils [START_REF] Chang | Development of shear modulus reduction curves based on Lotung downhole ground motion data[END_REF]Wen et al., 1995, Ghayamghamain and[START_REF] Ghayamghamain | On the characteristics of non-linear soil response and dynamic soil properties using vertical array data in Japan[END_REF][START_REF] Satoh | Nonlinear behavior of soil sediments identified by using borehole records observed at the Ashigara Valley, Japan[END_REF][START_REF] Satoh | Nonlinear behavior of scoria evaluated from borehole records in eastern Shizuoka prefecture, Japan[END_REF][START_REF] Satoh | Inversion of strain-dependent nonlinear characteristics of soils using weak and strong motions observed by borehole sites in Japan[END_REF].
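As an illustration of equation (8), the following sketch synthesizes a one-dimensional correlated-random velocity perturbation whose power spectrum follows P(k_r) = a^n/(1 + k_r^n a^n) with n = 2, a correlation distance of 5 km, and a ~5% standard deviation; the grid spacing, random seed, and scaling are arbitrary choices for demonstration only.

```python
import numpy as np

def random_velocity_profile(npts=4096, dx_km=0.1, a_km=5.0, n=2, sigma=0.05, seed=0):
    """1D correlated-random fractional velocity perturbation with
    power spectrum P(k) ~ a**n / (1 + (k*a)**n), cf. equation (8)."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(npts, dx_km) * 2.0 * np.pi        # radial wavenumber (rad/km)
    power = a_km ** n / (1.0 + (k * a_km) ** n)
    phase = rng.uniform(0.0, 2.0 * np.pi, len(k))
    spec = np.sqrt(power) * np.exp(1j * phase)
    spec[0] = 0.0                                         # enforce a zero-mean perturbation
    dv = np.fft.irfft(spec, npts)
    return sigma * dv / dv.std()                          # rescale to the target std deviation

dv = random_velocity_profile()
print(dv.std())   # ~0.05, i.e. ~5% velocity fluctuations
```

Larger a produces smoother profiles and larger n steepens the high-wavenumber falloff, consistent with the smoothness remarks above.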
Idriss and Seed (1968a, b) introduced the "equivalent linear method" to calculate nonlinear soil response, which is an iterative method based on the assumption that the response of soil can be approximated by the response of a linear model whose properties are selected in relation to the average strain that occurs at each depth interval in the model during excitation. [START_REF] Joyner | Calculation of nonlinear ground response in earthquakes[END_REF] used a direct nonlinear stress-strain relationship method to demonstrate that the equivalent linear method may significantly underestimate short-period motions for thick soil columns and large input motions. [START_REF] Archuleta | Nonlinearity in observed and computed accelerograms[END_REF] and [START_REF] Bonilla | Computation of linear and nonlinear response for near field ground motion[END_REF] demonstrated that dynamic pore-pressure responses can substantially modify nonlinear soil response and actually amplify and extend the durations of strong ground motions for some soil conditions. When a site is situated on soil it is critical to determine whether soil response will decrease or increase ground amplitudes and durations, and to compare the expected frequency dependence of the seismic soil responses with the resonant frequencies of the engineered structure(s). When soils are not saturated, the equivalent linear method is usually adequate with consideration of the caveats of [START_REF] Joyner | Calculation of nonlinear ground response in earthquakes[END_REF]. When soils are saturated and interbedded sands and/or gravels between clay layers are prevalent, a fully nonlinear evaluation of the site that accounts for dynamic pore pressure responses may be necessary [START_REF] Archuleta | Nonlinearity in observed and computed accelerograms[END_REF]. [START_REF] Lomnitz | Seismic coupling of interface modes in sedimentary bains: A recipe for distaster[END_REF] showed that for the condition 0.91 β1 < α0, where β1 is the shear-wave velocity of the low-velocity material beneath saturated soils and α0 is the acoustic (compressional-wave) velocity in the near-surface material, a coupled mode between Rayleigh waves propagating along the interface and compressional waves in the near-surface material propagates with phase velocity α0. This mode can propagate over large distances with little attenuation. [START_REF] Lomnitz | Seismic coupling of interface modes in sedimentary bains: A recipe for distaster[END_REF] note that this set of velocity conditions provides a "recipe" for severe earthquake damage on soft ground when combined with a large contrast in Poisson's ratio between the two layers, and when the resonant frequencies of the mode and engineering structures coincide. Linear 2D viscoelastic finite-difference calculations demonstrate the existence of this wave mode at small strains, but nonlinear 2D finite-difference calculations indicate that long-distance propagation of this mode is strongly attenuated [START_REF] O'connell | Influence of 2D Soil Nonlinearity on Basin and Site Responses[END_REF]. Anisotropy complicates polarizations of shear waves. [START_REF] Coutant | Observations of shallow anisotropy on local earthquake records at the Garner Valley, Southern California, downhole array[END_REF] showed that shallow (< 200 m) shear-wave anisotropy strongly influences surface polarizations of shear waves for frequencies < 30 Hz. [START_REF] Chapman | Ray tracing in azimuthally anisotropic media-II.
Quasi-shear wave coupling[END_REF] show that quasi-shear (qS) wave polarizations typically twist along ray paths through gradient regions in anisotropic media, causing frequency-dependent coupling between the qS waves. They show that this coupling is much stronger than the analogous coupling between P and SV waves in isotropic gradients because of the small difference between the qS-wave velocities. [START_REF] Chapman | Ray tracing in azimuthally anisotropic media-II. Quasi-shear wave coupling[END_REF] show that in some cases, far-field excitation of both quasi-shear waves and shear-wave splitting will result from an incident wave composed of only one of the quasi-shear waves. The potential for stronger coupling of quasi-shear waves suggests that the influence of anisotropy on shear-wave polarizations and peak ground motion may be significant in some cases. While the influence of anisotropy on strong ground motions is unknown, it is prudent to avoid suggesting that only a limited class of shear-wave polarizations is likely for a particular site based on isotropic ground motion simulations or ground motion observations at other sites. Velocity anisotropy in the crust can substantially distort the radiation pattern of body waves, with shear-wave polarization angles diverging from those in an isotropic medium by as much as 90 degrees or more near directions where group velocities of quasi-SH and SV waves deviate from corresponding phase velocities [START_REF] Kawasaki | Radiation pattern of body waves due to the seismic dislocation occurring in an anisotropic source medium[END_REF].
Thus, anisotropy has the potential to influence radiation pattern coherence as well as ground motion polarization. A common approach is to assume the double-couple radiation pattern disappears over a transition frequency band extending from 1 Hz to 3 Hz [START_REF] Pitarka | Simulation of near-fault strong-ground motion using hybrid Green's functions[END_REF] or up to 10 Hz [START_REF] Zeng | Evaluation of numerical procedures for simulating nearfault long-period ground motions using the Zeng method[END_REF]. The choice of frequency cutoff for the radiation pattern significantly influences estimates of peak response in regions prone to directivity for frequencies close to and greater than the cutoff frequency. This is a very important parameter for stiff (high-frequency) structures such as buildings that tend to have natural frequencies in the 0.5 to 5 Hz frequency band (see the discussion in [START_REF] Frankel | How does the ground shake[END_REF]). Topography can substantially influence peak ground motions [START_REF] Boore | A note of the effect of simple topography on seismic SH waves[END_REF][START_REF] Boore | The effect of simple topography on seismic waves: Implications for the acceleration recorded at Pacoima Dam, San Fernando Valley, California[END_REF]. [START_REF] Schultz | Enhanced backscattering of seismic waves from irregular interfaces[END_REF] showed that an amplification factor of 2 can be easily achieved near the flanks of hills relative to the flatter portions of a basin and that substantial amplification and deamplification of shear-wave energy in the 1 to 10 Hz frequency range can occur over short distances. [START_REF] Bouchon | Effect of three-dimensional topography on seismic motion[END_REF] showed that shear-wave amplifications of 50% to 100% can occur in the 1.5 Hz to 20 Hz frequency band near the tops of hills, consistent with observations from the 1994 Northridge earthquake [START_REF] Spudich | Directional topographic site response at Tarzana observed in aftershocks of the 1994 Northrige, Calfornia, earthquake: Implications for mainshock motions[END_REF]. Topography may also contribute to amplification in adjacent basins, as well as contributing to differential ground motions with dilatational strains on the order of 0.003 [START_REF] Hutchings | Ground-motion variability at the Highway 14 and I-5 interchange in the northern San Fernando Valley[END_REF]. Topography has a significant influence on longer-period amplification and ground-shaking durations. [START_REF] O'connell | Influence of dip and velocity heterogeneity on reverse-and normal-faulting rupture dynamics and near-fault ground motions[END_REF] showed that topography of the San Gabriel Mountains scatters the surface waves generated by rupture on the San Andreas fault, leading to less-efficient excitation of basin-edge-generated waves and natural resonances within the Los Angeles Basin and reducing peak ground velocity in portions of the basin by up to 50% for frequencies of 0.5 Hz or less. These discussions of source and propagation influences on amplitudes and phase are necessarily abbreviated and are not complete, but they do provide an indication of the challenges of ground motion estimation and of developing relatively simple but sufficient ground motion prediction equations based on empirical strong ground motion data.
Systematically evaluating all the source and wave propagation factors influencing site-specific ground motions is a daunting task, particularly since it's unlikely that one can know all the relevant source and propagation factors. Often, insufficient information exists to quantitatively evaluate many ground motion factors. Thus, it is useful to develop a susceptibility checklist for ground motion estimation at a particular site. The list would indicate available information for each factor on a scale ranging from ignorance to strong quantitative information and indicate how this state of information could influence ground motions at the site. The result of such a checklist would be a susceptibility rating for potential biases and errors for peak motion and duration estimates of site-specific ground motions.
Nonlinear site response
Introduction
The near-surface geological site conditions in the upper tens of meters are one of the dominant factors controlling the amplitude and variation of strong ground motion, and the damage patterns that result from large earthquakes. It has long been known that soft sediments amplify earthquake ground motion. Superficial deposits, especially alluvium, are responsible for a remarkable modification of the seismic waves. The amplification of the seismic ground motion basically originates from the strong contrast between the rock and soil physical properties (e.g. [START_REF] Kramer | Geothechnical Earthquake Engineering[END_REF]). At small deformations, the soil response is linear: strain and stress are related linearly by the rigidity modulus independently of the strain level (Hooke's law). Mainly because most of the first strong motion observations seemed to be consistent with linear elasticity, seismologists generally accept a linear model of ground motion response to seismic excitation even at the strong motion level. However, according to laboratory studies (e.g. [START_REF] Seed | Influence of soil conditions on ground motions during earthquakes[END_REF]), Hooke's law breaks down at larger strains and the nonlinear relation between strain and stress may significantly affect the strong ground motion at soil sites near the source of large earthquakes. Since laboratory conditions are not the same as those in the field, several authors have tried to find field data to understand nonlinear soil behavior. In order to isolate the local site effects, the transfer function of seismic waves in soil layers has to be estimated by calculating the spectral ratio between the motion at the surface and the underlying soil layers. Variations of these spectral ratios between strong and weak motion have been actively sought in order to detect nonlinearity. For example, [START_REF] Darragh | The site response of two rock and soil station pairs to strong and weak ground motion[END_REF] observed an amplification reduction at the Treasure Island soft soil site in San Francisco. [START_REF] Beresnev | Nonlinear site response -a reality?[END_REF] also reported a decrease of amplification factors for the array data in the Lotung valley (Taiwan). Such a decrease has also been observed at different Japanese sites including the Port Island site (e.g. Satoh et al., 1997, Aguirre and [START_REF] Aguirre | Nonlinearity, Liquefaction, and Velocity Variation of Soft Soil Layers in Port Island, Kobe, during the Hyogo-ken Nanbu Earthquake[END_REF]). On the other hand, [START_REF] Darragh | The site response of two rock and soil station pairs to strong and weak ground motion[END_REF] also reported quasi-linear behavior for a stiff soil site over the whole range from 0.006 g to 0.43 g. According to these results, there is a need to specify the thresholds corresponding to the onset of nonlinearity and the maximum strong-motion amplification factors according to the nature and thickness of soil deposits [START_REF] Field | Nonlinear sediment response during the 1994 Northridge earthquake: observations and finite-source simulations[END_REF]. Nevertheless, the use of surface ground motion alone does not allow direct calculation of the transfer function and these variations. Rock outcrop motion is then usually used to estimate the motion at the bedrock and to calculate sediment amplification for both weak and strong motion (e.g.
Celebi et al., 1987;Singh et al., 1988;[START_REF] Darragh | The site response of two rock and soil station pairs to strong and weak ground motion[END_REF][START_REF] Field | Nonlinear ground-motion amplification by sediments during the 1994 Northridge earthquake[END_REF][START_REF] Beresnev | Nonlinearity at California generic soil sites from modeling recent strongmotion data[END_REF]). The accuracy of this approximation strongly depends on near-surface rock weathering or topographic complexity [START_REF] Steidl | What is A Reference Site? Bull[END_REF]. Moreover, the estimate of site response can be biased by any systematic difference in path effects between stations located on soil and rock. An additional complication is due to finite-source effects such as directivity. In the case of large earthquakes, waves arriving from different locations may interfere, causing source effects to vary with site location [START_REF] Oglesby | A faulting model for the 1992 Petrolia earthquake: Can extreme ground acceleration be a source effect?[END_REF]. Since these finite-source effects strongly depend on the source size, they could mimic the observations cited as evidence for soil nonlinearity. Finally, O'Connell (1999) and Hartzell et al. (2005) show that in the near-fault region of M > 6 earthquakes, linear wave propagation in weakly heterogeneous, random three-dimensional crustal velocity structure can mimic observed, apparently nonlinear, sediment response in regions with large vertical velocity gradients that persist from near the surface to several km depth, making it difficult to separate nonlinear soil responses from other larger-scale linear wave propagation effects solely using surface ground motion recordings. Because of these difficulties, the most effective means for quantifying the modification in ground motion induced by soil sediments is to record the motion directly in boreholes that penetrate these layers. Using records from vertical arrays it is possible to separate the site from source and path effects and therefore clearly identify the nonlinear behavior and changes of the soil physical properties during shaking (e.g. [START_REF] Zeghal | Analysis of Site Liquefaction Using Earthquake Records[END_REF][START_REF] Aguirre | Nonlinearity, Liquefaction, and Velocity Variation of Soft Soil Layers in Port Island, Kobe, during the Hyogo-ken Nanbu Earthquake[END_REF][START_REF] Satoh | Inversion of strain-dependent nonlinear characteristics of soils using weak and strong motions observed by borehole sites in Japan[END_REF][START_REF] Assimaki | Inverse analysis of weak and strong motion borehole array data from the Mw7.0 Sanriku-Minami earthquake[END_REF][START_REF] Assimaki | A Wavelet-based Seismogram Inversion Algorithm for the In Situ Characterization of Nonlinear Soil Behavior[END_REF][START_REF] Bonilla | Nonlinear site response evidence of K-NET and KiK-net records from the 2011 off the Pacific coast of Tohoku Earthquake[END_REF]).
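The weak- versus strong-motion comparison described above can be sketched as a simple surface-to-borehole spectral ratio. The hypothetical helper below assumes co-registered surface and downhole acceleration records of a single event, with smoothing reduced to a moving average; computing it separately for weak events and for a mainshock exposes the deamplification and resonance shift discussed later.

```python
import numpy as np

def surface_borehole_ratio(acc_surface, acc_borehole, dt, smooth_pts=15):
    """Empirical borehole transfer function |FFT(surface)| / |FFT(borehole)|."""
    n = min(len(acc_surface), len(acc_borehole))
    taper = np.hanning(n)
    freq = np.fft.rfftfreq(n, dt)
    s = np.abs(np.fft.rfft(acc_surface[:n] * taper))
    b = np.abs(np.fft.rfft(acc_borehole[:n] * taper))
    kernel = np.ones(smooth_pts) / smooth_pts          # moving-average smoothing
    s = np.convolve(s, kernel, mode="same")
    b = np.convolve(b, kernel, mode="same")
    return freq, s / np.maximum(b, 1e-12)
```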
Nonlinear soil behavior
For years, it has been established in geotechnical engineering that soils behave nonlinearly. This fact comes from numerous experiments with cyclic loading of soil samples. The stress-strain curve has a hysteretic behavior, which produces a reduction of the shear modulus as well as an increase in the damping factor. Figure 4.1 shows a typical stress-strain curve with a loading phase and the consequent hysteretic behavior for the subsequent loading process. There have been several attempts to describe mathematically the shape of this curve, and among those models the hyperbolic is one of the easiest to use because of its mathematical formulation as well as the small number of parameters necessary to describe it [START_REF] Ishihara | Soil Behavior in Earthquake Geotechnics[END_REF][START_REF] Kramer | Geothechnical Earthquake Engineering[END_REF][START_REF] Beresnev | Nonlinear site response -a reality?[END_REF]
τ(γ) = G_0 γ / (1 + (G_0/τ_max)|γ|)
where G_0 is the undisturbed (small-strain) shear modulus and τ_max is the maximum stress that the material can support in the initial state.
G_0 is also known as G_max because it is the highest value of the shear modulus, attained at low strains. In order to have the hysteretic behavior, the model follows the so-called Masing rule, which in its basic form translates the origin and expands the horizontal and vertical axes by a factor of 2. Thus,
(τ - τ_r)/2 = F_bb((γ - γ_r)/2)
where (γ_r, τ_r) is the reversal point for the unloading and reloading curves, and F_bb denotes the hyperbolic backbone curve above. This behavior produces two changes in the elastic parameters of the soil. First, the larger the maximum strain, the lower the secant shear modulus, obtained as the slope of the line between the origin and the reversal point of the hysteresis loop. Second, hysteresis shows a loss of energy in each cycle, and, as mentioned above, the energy is proportional to the area of the loop. Thus, the larger the maximum strain, the larger the damping factor. How can the changes in the elastic parameters be detected when looking at transfer functions? We know that the resonance frequencies of a soil layer are proportional to (2n+1)Vs/(4H) (the fundamental frequency corresponds to n = 0), where Vs is the shear-wave velocity and H is the soil thickness. Thus, if the shear modulus is reduced then the resonance frequencies are also reduced, because Vs = sqrt(G/ρ), where ρ is the material density. In other words, in the presence of nonlinearity the transfer function shifts the resonance frequencies toward lower frequencies. In addition, increased dissipation reduces soil amplification. Figure 4.2 shows an example of nonlinear soil behavior at station TTRH02 (Vs30 = 340 m/s), a KiK-net station that recorded the M_JMA 7.3 October 2000 Tottori earthquake in Japan. The orange shaded region represents the 95% confidence region of the borehole transfer function computed using events having a PGA less than 10 cm/s^2. Conversely, the solid line is the borehole transfer function obtained using the data from the Tottori mainshock. One can clearly see the difference between these two estimates of the transfer function, namely a broadband deamplification and a shift of resonance frequencies to lower values. The fact that the linear estimate is computed at the 95% confidence limits means that we are confident that this site underwent nonlinear site response at the 95% probability level. However, nonlinear effects can also be seen directly in acceleration time histories. Figure 4.3 shows acceleration records, surface and downhole, of the 1995 Kobe earthquake at Port Island (left) and the 1993 Kushiro-Oki earthquake at Kushiro Port (right). Both sites have shear-wave velocity profiles relatively close to each other, except in the upper 30 meters. Yet, their responses are completely different. Port Island is a man-made site composed of loose sands that liquefied during the Kobe event [START_REF] Aguirre | Nonlinearity, Liquefaction, and Velocity Variation of Soft Soil Layers in Port Island, Kobe, during the Hyogo-ken Nanbu Earthquake[END_REF]. Practically no energy remains after the S-wave train in the record at the surface. Conversely, Kushiro Port is composed of dense sands and shows, in the accelerometer located at ground level, large acceleration spikes that are even higher than their counterparts at depth. [START_REF] Iai | Response of a dense sand deposit during 1993 Kushiro-Oki Earthquake[END_REF], [START_REF] Archuleta | Direct observation of nonlinear soil response in acceleration time histories[END_REF][START_REF] Bonilla | Hysteretic and Dilatant Behavior of Cohesionless Soils and Their Effects on Nonlinear Site Response: Field Data Observations and Modeling[END_REF] showed that the appearance of large acceleration peak values riding on a low-frequency carrier is an indicator of soil nonlinearity known as cyclic mobility. Laboratory studies show that the physical mechanism that produces such a phenomenon is the dilatant nature of cohesionless soils, which introduces the partial recovery of the shear strength under cyclic loads. This recovery translates into the ability to produce large deformations followed by large and spiky shear stresses. The spikes observed in the acceleration records are directly related to these periods of dilatancy and generation of pore pressure. These examples indicate that nonlinear soil phenomena are complex. The effects of nonlinear soil behavior appear not only in the transfer function but also in the acceleration time histories. Capturing them involves solving the wave equation by integrating nonlinear soil rheologies in the time domain, the subject treated in the next section.
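A small numerical sketch of the frequency-shift argument: with f_n = (2n+1)Vs/(4H) and Vs = sqrt(G/ρ), reducing the shear modulus to a quarter of its small-strain value halves every resonant frequency; the layer thickness, density, and modulus values below are arbitrary examples.

```python
import numpy as np

def resonant_freqs(G_pa, rho, H_m, n_modes=3):
    """f_n = (2n+1) * Vs / (4H), with Vs = sqrt(G / rho)."""
    vs = np.sqrt(G_pa / rho)
    return np.array([(2 * n + 1) * vs / (4.0 * H_m) for n in range(n_modes)])

G0, rho, H = 200e6, 1900.0, 30.0            # small-strain modulus (Pa), density, thickness
print(resonant_freqs(G0, rho, H))            # linear (small-strain) resonances
print(resonant_freqs(0.25 * G0, rho, H))     # modulus reduced to 25%: frequencies halve
```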
The strain space multishear mechanism model
The multishear mechanism model [START_REF] Towhata | Modeling Soil Behavior Under Principal Axes Rotation[END_REF] is a plane strain formulation to simulate pore pressure generation in sands under cyclic loading and undrained conditions. Iai et al. (1990a, 1990b) modified the model to account for the cyclic mobility and dilatancy of sands. This method has the following strong points:
- It is relatively easy to implement.
- It has few parameters, which can be obtained from simple laboratory tests that include pore pressure generation.
- It represents the effect of rotation of principal stresses during cyclic behavior of anisotropically consolidated sands.
- Since the theory is a plane strain condition, it can be used to study problems in two dimensions, e.g. embankments, quay walls, among others.
In two-dimensional Cartesian coordinates and using vectorial notation, the effective stress σ′ and strain ε tensors can be written as
{σ′} = {σ′_x  σ′_y  τ_xy}^T ,   {ε} = {ε_x  ε_y  γ_xy}^T
where the superscript T represents the vector transpose operation; σ′_x, σ′_y, ε_x, and ε_y represent the effective normal stresses and strains in the horizontal and vertical directions; and τ_xy and γ_xy are the shear stress and shear strain, respectively. The multiple mechanism model relates the stress and strain through the following incremental equation (Iai et al., 1990a, 1990b),
{dσ′} = [D] ({dε} - {dε_p})
where the curly brackets represent vector notation; {dε_p} is the volumetric strain increment produced by the pore pressure, and [D] is the tangential stiffness matrix given by
[D] = K {n^(0)}{n^(0)}^T + Σ_{i=1}^{I} R^(i) {n^(i)}{n^(i)}^T
The first term is the volumetric mechanism represented by the bulk modulus K. The second part is the shear mechanism represented by the tangential shear moduli R^(i), idealized as a collection of springs (Figure 4.4). Each spring follows the hyperbolic stress-strain model [START_REF] Konder | A hyperbolic stress-strain formulation for sands[END_REF] during the loading and unloading hysteresis process. The shear mechanism may also be considered as a combination of pure shear and shear by differential compression.
In addition,
{n^(0)} = {1  1  0}^T ,   {n^(i)} = {cos θ_i   -cos θ_i   sin θ_i}^T ,   θ_i = (i - 1) Δθ
where Δθ = π/I is the angle between each spring, as shown in Figure 4.4. [START_REF] Towhata | Modeling Soil Behavior Under Principal Axes Rotation[END_REF] found, using laboratory data, that the pore pressure excess is correlated with the cumulative shear work produced during cyclic loading. Iai et al. (1990a, 1990b) developed a mathematical model that needs five parameters, called hereafter dilatancy parameters, to take this correlation into account. These parameters represent the initial and final phases of dilatancy, p1 and p2; the overall dilatancy, w1; and the threshold limit and ultimate limit of dilatancy, c1 and S1. These parameters are obtained by fitting laboratory data, either from undrained stress-controlled cyclic shear tests or from cyclic stress ratio curves. Details of this constitutive model can be found in Iai et al. (1990a, 1990b).
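A minimal sketch of how the mechanism direction vectors and the tangential stiffness matrix above can be assembled, assuming the bulk modulus K and the per-mechanism tangential shear moduli R^(i) are already known from the hyperbolic relation and the current loading state; the loading/unloading logic and the dilatancy (pore pressure) terms of the full model are omitted.

```python
import numpy as np

def mechanism_vectors(I):
    """Direction vectors n(0) and n(i), i = 1..I, with dtheta = pi / I."""
    n0 = np.array([1.0, 1.0, 0.0])
    dtheta = np.pi / I
    thetas = [(i - 1) * dtheta for i in range(1, I + 1)]
    ni = [np.array([np.cos(t), -np.cos(t), np.sin(t)]) for t in thetas]
    return n0, ni

def tangent_stiffness(K, R, n0, ni):
    """[D] = K n(0)n(0)^T + sum_i R_i n(i)n(i)^T (plane strain, vector notation)."""
    D = K * np.outer(n0, n0)
    for R_i, n in zip(R, ni):
        D += R_i * np.outer(n, n)
    return D

n0, ni = mechanism_vectors(I=12)
D = tangent_stiffness(K=5e8, R=[1e7] * 12, n0=n0, ni=ni)   # example moduli (Pa)
print(D.shape)   # (3, 3) stiffness relating {dsigma'} to {deps}
```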
At this point, this formulation provides only the backbone curve. The hysteresis is now taken into account by using the generalized Masing rules. In fact, they are not simple rules but a state equation that describes hysteresis given a backbone curve [START_REF] Bonilla | Computation of linear and nonlinear response for near field ground motion[END_REF]. They are called generalized Masing rules because their formulation contains the model of [START_REF] Pyke | Nonlinear soil model for irregular cyclic loadings[END_REF] and the original Masing model as special cases. Furthermore, this formulation allows, by controlling the hysteresis scale factor, the reshaping of the backbone curve as suggested by [START_REF] Ishihara | Modelling of stress-strain relations of soils in cyclic loading[END_REF] so that the hysteresis path follows a prescribed damping ratio.
The generalized Masing rules
In previous sections we used the hyperbolic model to describe the stress-strain behavior of soil materials subjected to cyclic loads. In the hyperbolic model, the nonlinear relation can be written as
G(γ) = G_0 / (1 + |γ/γ_ref|)
where γ_ref = τ_max/G_0 is the reference strain. Introducing the equation above into τ = G(γ)γ, where τ is the shear stress and γ is the shear strain, and adding the hysteresis operator, we have
τ = F_bb(γ) = G_0 γ / (1 + |γ/γ_ref|)
where F_bb is the backbone curve and H(.) is the hysteresis operator (the application of the generalized Masing rules). Hysteresis behavior can be implemented in a phenomenological manner with the help of the Masing and extended Masing rules [START_REF] Vucetic | Normalized behavior of clay under irregular cylic loading[END_REF][START_REF] Kramer | Geothechnical Earthquake Engineering[END_REF]. However, these rules are not enough to constrain the shear stress τ to values not exceeding the strength τ_max. This happens when the time behavior of the shear strain departs from simple cyclic behavior, and of course, noncyclic time behavior is common in seismic signals. The inadequacy of the Masing rules to describe the hysteretic behavior of complicated signals has already been pointed out and some remedies have been proposed (e.g. [START_REF] Pyke | Nonlinear soil model for irregular cyclic loadings[END_REF][START_REF] Li | Dynamic skeleton curve of soil stress-strain relation under irregular cyclic loading Earthquake research in China[END_REF]). The Masing rules consist of a translation and dilatation of the original law governing the strain-stress relationship. While the initial loading of the material is given by the backbone curve F_bb(γ), for the subsequent loading and unloading the strain-stress relationship is given by:
(τ - τ_r)/κ = F_bb((γ - γ_r)/κ)
where the coordinates (γ_r, τ_r) correspond to the reversal points in the strain-stress space, and κ is the so-called hysteresis scale factor [START_REF] Archuleta | Nonlinearity in observed and computed accelerograms[END_REF]. In Masing's original formulation, the hysteresis scale factor is equal to 2. A first extension of the Masing rules can be obtained by releasing the constraint κ = 2. This parameter controls the shape of the loop in the stress-strain space (Bonilla et al., 1998). However, numerical simulations suggest spurious behavior of κ for irregular loading and unloading processes, even when extended Masing rules are used. A further generalization of the Masing rules is obtained by choosing the value of κ in such a way as to assure that the path, at a given unloading or reloading, will cross the backbone curve in the strain-stress space and remain bounded by the maximum strength of the material τ_max. This can be achieved by imposing the following condition,
lim_{γ→γ_m} sign(γ̇) [ τ_rj + κ_j F_bb((γ - γ_rj)/κ_j) ] ≤ τ_max ,   γ_rj ≤ |γ_m| ≤ ∞
where γ_m is the specified finite or infinite strain condition; (γ_rj, τ_rj) and κ_j correspond to the turning point and the hysteresis scale factor at the jth unloading or reloading; and sign(γ̇) is the sign of the strain rate. Imposing equality in this condition and replacing the functional form of the backbone (the hyperbolic model), we have, after some algebra,
κ_j = (τ_max - sign(γ̇) τ_rj) |γ_m - γ_rj| / { γ_ref [ G_0 |γ_m - γ_rj| - (τ_max - sign(γ̇) τ_rj) ] } ,   γ_rj ≤ |γ_m| ≤ ∞
The equation above represents a general constraint on the hysteresis scale factor, so that the computed stress does not exceed τ_max for the chosen maximum deformation that the material is thought to resist. The limit γ_m → ∞ corresponds to the Cundall-Pyke hypothesis [START_REF] Pyke | Nonlinear soil model for irregular cyclic loadings[END_REF], while a finite γ_m is similar to some extent to a method discussed in [START_REF] Li | Dynamic skeleton curve of soil stress-strain relation under irregular cyclic loading Earthquake research in China[END_REF]. In the following section, we will see an example of application of this soil constitutive model ([START_REF] Towhata | Modeling Soil Behavior Under Principal Axes Rotation[END_REF]Iai et al., 1990a, 1990b) together with the Generalized Masing hysteresis operator [START_REF] Bonilla | Computation of linear and nonlinear response for near field ground motion[END_REF].
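The sketch below implements the hyperbolic backbone and a hysteresis scale factor derived from the bounding condition as reconstructed here (the exact published expression should be checked against the original references); the modulus, strength, and reversal state are example values, and a simple fallback to the original Masing factor of 2 is used when the bound cannot be reached before the target strain.

```python
import numpy as np

def backbone(gamma, G0, tau_max):
    """Hyperbolic backbone: tau = G0*gamma / (1 + |gamma|/gamma_ref)."""
    gamma_ref = tau_max / G0
    return G0 * gamma / (1.0 + np.abs(gamma) / gamma_ref)

def kappa_bounded(gamma_m, gamma_r, tau_r, sgn, G0, tau_max):
    """Scale factor making the unload/reload branch reach tau_max at gamma_m
    (closed form reconstructed from the bounding condition stated above)."""
    gamma_ref = tau_max / G0
    A = abs(gamma_m - gamma_r)
    T = tau_max - sgn * tau_r
    if G0 * A <= T:          # bound not reachable before gamma_m: fall back to Masing
        return 2.0
    return (A * T) / (gamma_ref * (G0 * A - T))

def branch(gamma, gamma_r, tau_r, kappa, G0, tau_max):
    """Generalized Masing unload/reload branch."""
    return tau_r + kappa * backbone((gamma - gamma_r) / kappa, G0, tau_max)

G0, tau_max = 60e6, 100e3                      # example modulus (Pa) and strength (Pa)
gamma_r = 2e-3
tau_r = backbone(gamma_r, G0, tau_max)         # reversal point on the backbone
kappa = kappa_bounded(gamma_m=-0.05, gamma_r=gamma_r, tau_r=tau_r, sgn=-1.0,
                      G0=G0, tau_max=tau_max)
print(kappa, branch(-0.05, gamma_r, tau_r, kappa, G0, tau_max))  # stress ~ -tau_max
```

For this example the branch reaches -τ_max exactly at the target strain, and letting γ_m tend to infinity recovers the Cundall-Pyke scale factor (τ_max - sign(γ̇)τ_r)/τ_max.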
Analysis of the 1987 Superstition Hills Earthquake
On 24 November 1987, the M_L 6.6 Superstition Hills earthquake was recorded at the Wildlife Refuge station. This site is located in southern California in the seismically active Imperial Valley. In 1982 it was instrumented by the U.S. Geological Survey with downhole and surface accelerometers and piezometers to record ground motions and pore water pressures during earthquakes [START_REF] Holzer | Dynamics of liquefaction during the 1987 Superstition Hills, California, earthquake[END_REF]. The Wildlife site is located in the flood plain of the Alamo River, about 20 m from the river's western bank. In situ investigations have shown that the site stratigraphy consists of a shallow silt layer approximately 2.5 m thick underlain by a 4.3 m thick layer of loose silty sand, which is in turn underlain by a stiff to very stiff clay. The water table fluctuates around 2-m depth [START_REF] Matasovic | Analysis of seismic records obtained on November 24, 1987 at the Wildlife Liquefaction Array[END_REF]. This site historically provided a direct in situ observation of nonlinearity in borehole data. The Wildlife Refuge liquefaction array recorded acceleration at the surface and at 7.5-m depth, and pore pressure on six piezometers at various depths [START_REF] Holzer | Dynamics of liquefaction during the 1987 Superstition Hills, California, earthquake[END_REF]. The acceleration time histories for the Superstition Hills events at GL-0 m and GL-7.5 m, respectively, are shown in Figure 4.5 (left). Note how the acceleration changes abruptly for the record at GL-0 m after the S wave. Several sharp peaks are observed; they are very close to the peak acceleration for the whole record. In addition, these peaks have lower frequency than the previous part of the record (the beginning of the S wave, for instance). [START_REF] Zeghal | Analysis of Site Liquefaction Using Earthquake Records[END_REF] used the Superstition Hills earthquakes to estimate the stress and strain from borehole acceleration recordings. They approximated the shear stress τ(h, t) at depth h, and the mean shear strain γ̄(t) between the two sensors, as follows,
τ(h, t) = (1/2) ρ h [a(0, t) + a(h, t)]
a(h, t) = a(0, t) + (h/H) [a(H, t) - a(0, t)]
γ̄(t) = [u(h, t) - u(0, t)] / h
where a(0, t) is the horizontal acceleration at the ground surface; a(h, t) is the acceleration at depth h (evaluated through linear interpolation); a(H, t) is the acceleration at the bottom of the layer; u(h, t) and u(0, t) are the displacement histories obtained by integrating twice the corresponding acceleration histories; H is the thickness of the layer; and ρ is the density. Using this method, the stress and strain at GL-2.9 m were computed (Figure 4.5). This figure clearly shows the large nonlinearity developed during the Superstition Hills event. The stress-strain loops form an S-shape and the strains are as large as 1.5%. At this depth, there is a piezometer (P5 according to [START_REF] Holzer | Dynamics of liquefaction during the 1987 Superstition Hills, California, earthquake[END_REF]). With this information it is also possible to reconstruct the stress path (bottom right of Figure 4.5). Note that some of the pore pressure pulses are correlated with episodes of high shear stress development. The stress path shows a strong contractive phase followed by dilatancy when the effective mean stress is close to 15 kPa. Fig. 4.5. Wildlife Refuge station that recorded the 1987 Superstition Hills earthquake, with both acceleration and pore pressure time histories (left). Computed stress and strain time histories according to [START_REF] Zeghal | Analysis of Site Liquefaction Using Earthquake Records[END_REF], stress-strain loops, and stress path history reconstruction (right).
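A direct transcription of these stress and strain estimates into a short sketch; it assumes equal-length surface and downhole acceleration arrays and uses a crude cumulative-sum double integration in place of the filtered integration a real analysis would require.

```python
import numpy as np

def seismic_stress_strain(acc0, accH, dt, h, H, rho):
    """First-order stress/strain estimates between a surface and a downhole sensor
    (after Zeghal and Elgamal): shear stress at depth h and mean shear strain."""
    acc_h = acc0 + (h / H) * (accH - acc0)                   # linear interpolation to depth h
    tau = 0.5 * rho * h * (acc0 + acc_h)                     # shear stress at depth h
    integrate = lambda x: np.cumsum(np.cumsum(x) * dt) * dt  # crude double integration
    u0, uh = integrate(acc0), integrate(acc_h)
    gamma = (uh - u0) / h                                    # mean shear strain
    return tau, gamma
```

Plotting tau against gamma from such estimates reproduces the S-shaped loops and the dilatant stress path described above when applied to the Wildlife records.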
Using the stress and strain time histories at GL-2.9 m computed earlier, [START_REF] Bonilla | Hysteretic and Dilatant Behavior of Cohesionless Soils and Their Effects on Nonlinear Site Response: Field Data Observations and Modeling[END_REF] performed a trial-and-error procedure in order to obtain the dilatancy parameters that best reproduce these observations. Figure 4.6 compares the computed shear stress time history with the observed one at GL-2.9 m. The stress-strain hysteresis loops are also shown. We observe that the shear stress is well simulated; the stress-strain space also shows the same dilatant behavior (S-shaped hysteresis loops) as the observed data.
Once the model parameters were determined, they proceeded to compute the acceleration time history at GL-0 m using the north-south record at GL-7.5 m as input motion.
NOAH2D 2D P-SV analyses of the maximum observed peak acceleration
Current nonlinear formulations generally reproduce all first-order aspects of nonlinear soil responses. To illustrate this point, we present a nonlinear analysis of the largest peak ground acceleration recorded to date (> 4 g), which includes a peak vertical acceleration of 3.8 g (Aoi et al., 2008). Aoi et al. (2008) analyzed ground motions recorded by the Kyoshin Network (Kik-net) during the M 6.9 2008 Iwate-Miyagi earthquake that included one soil-surface site that recorded a vertical acceleration of 3.8 g (station IWTH25). The horizontal borehole and surface motions reported in Aoi et al. (2008) for station IWTH25 are generally consistent with the soil reducing surface horizontal accelerations at high frequencies, as is widely observed at soil sites [START_REF] Field | Nonlinear ground-motion amplification by sediments during the 1994 Northridge earthquake[END_REF][START_REF] Archuleta | Direct observation of nonlinear soil response in acceleration time histories[END_REF][START_REF] Seed | Analyses of ground motions at Union Bay, Seattle, during earthquakes and distant nuclear blasts[END_REF][START_REF] Beresnev | Nonlinear soil amplification: Its corroboration in Taiwan[END_REF][START_REF] Beresnev | Properties of vertical ground motions[END_REF]. The 2D nonlinear finite-difference approach used here ([START_REF] Bonilla | 1D and 2D linear and non linear site response in the Grenoble area[END_REF] 2006) uses a plane-strain model (Iai et al., 1990a, 1990b). In this section we show that this model could explain the first-order soil responses observed at station IWTH25 using a fairly generic approximation to the site's nonlinear soil properties. The P-SV nonlinear rheology developed by Iai et al. (1990a, 1990b) was used in the [START_REF] Bonilla | 1D and 2D linear and non linear site response in the Grenoble area[END_REF] implementation of 2D nonlinear wave propagation. The constitutive equation implemented corresponds to the strain space multishear mechanism model developed by [START_REF] Towhata | Modeling Soil Behavior Under Principal Axes Rotation[END_REF] and Iai et al. (1990a, 1990b), with its backbone characterized by the hyperbolic equation (Hardin and Drnevich, 1972). The multishear mechanism model is a plane strain formulation to simulate cyclic mobility of sands under undrained conditions. In the calculations of this study, a total stress rheology (pore pressure was ignored) was used in the second-order staggered-grid P-SV plane-strain finite-difference code. Perfectly matched layer (PML) absorbing boundary conditions were used to approximate elastic (transmitting) boundary conditions at the bottom and side edges, using an implementation adapted for finite differences from Ma and Liu (2006). Linear hysteretic damping (Q) was implemented using the method of [START_REF] Liu | Efficient modeling of Q for 3D numerical simulation of wave propagation[END_REF]. The horizontal- and vertical-component plane waves are inserted in the linear viscoelastic portion of the 2D model with a user-selectable range of incident angles. The Kyoshin Network, or Kik-net, in Japan (Fujiwara et al., 2005), has recorded numerous earthquakes with ground motion data recorded at the surface and at depth in underlying rock and soil. We use the recording at Kik-net station IWTH25, where a 3.8 g peak vertical acceleration was recorded (Aoi et al., 2008). Analyses of the combined downhole and surface ground motions from IWTH25 provide an opportunity to evaluate several strategies to estimate vertical ground motions, since a P- and S-wave velocity profile is available to the bottom of the borehole at 260 m (Aoi et al., 2008).
Station IWTH25 is located in a region of rugged topography adjacent to a high-gradient stream channel on a fluvial terrace. The rugged topography reflects the station's hanging-wall location relative to the reverse fault. Station IWTH25 is located near a region of large slip along strike and updip of the hypocenter. Consequently, IWTH25 is subjected to significant rupture directivity and near-fault radiation associated with strong gradients of slip and rupture velocity on the portions of the fault close to the station (Miyazaki et al., 2009). The IWTH25 ground motion has been of particular interest because of the extreme peak vertical acceleration (3.8 g) and the peculiar asymmetric amplitude distribution of the vertical accelerations (Aoi et al., 2008; [START_REF] O'connell | Assessing Ground Shaking[END_REF]; Hada et al., 2009; Miyazaki et al., 2009; Yamada et al., 2009a); the upward vertical acceleration is much larger than the downward acceleration, although in the borehole record at a depth of 260 m at the same site the upward and downward accelerations have symmetric amplitudes (Aoi et al., 2008). The geologic environment at station IWTH25 will clearly produce lateral changes in shallow velocity structure. In particular, the hanging-wall uplift associated with repeated faulting similar to the 2008 earthquake will produce a series of uplifted terraces adjacent to the stream next to station IWTH25, with the lowest shallow velocities being found on the lowest terrace adjacent to the stream, where station IWTH25 is located. The width of the stream and lowest terrace is about 100 m near station IWTH25. We constructed a 2D velocity model by including a region 100 m wide with a surface Vs = 300 m/s layer 2 m deep, and extended Vs = 500 m/s to the free surface in the region surrounding the 100-m-wide low-velocity surface layer. Station IWTH25 is assumed to be located relatively close (4-5 m) to the lateral velocity change within the lowest-velocity portion of the 2D velocity model because the geologic log from station IWTH25 indicates only 1-2 m of young terrace deposits (Aoi et al., 2008), but the youngest terrace probably extends across and encompasses the stream channels and their margins. The dominant large-amplitude arrivals in the borehole motions are associated with large slip regions below and just south of station IWTH25. Consequently, a plane wave incident at 80 degrees from the south was used to propagate the borehole motion to the surface in the 2D model. It is important to mention some factors that are not explicitly accounted for in the approach of [START_REF] Bonilla | 1D and 2D linear and non linear site response in the Grenoble area[END_REF]. Goldberg (1960) was among the first to theoretically show the interaction between P and S waves in an elastic medium for large-amplitude seismic waves. His solution yielded the following results: (1) P- and S-waves couple, (2) S waves induce P waves, (3) the induced waves have a dominant frequency twice the S-wave frequency, and (4) the induced P waves propagate ahead with the P-wave velocity.
Ground motion prediction equations based on empirical data
Ground motion observations are the result of a long history of instrument development and deployment, instigated primarily by earthquake engineers, to acquire data to develop an empirical foundation to understand and predict earthquake ground motions for use in the design of engineered structures. Strong motion instruments usually record time histories of ground acceleration that can be post-processed to estimate ground velocities and displacements. Particularly useful derived quantities for engineering analyses are response spectra, which are the maximum amplitudes of modestly damped resonant responses of single-degree-of-freedom oscillators (an idealization of simple building response) to a particular ground motion time history, as a function of natural period or natural frequency. While peak accelerations are always of concern for engineering analyses, peak ground velocity is now recognized as a better indicator of damage potential for large structures than is peak ground acceleration (EERI, 1994). Engineering analyses often consist of linear approaches to determine if structures reach their linear strength limits. Ground motion estimation quantities required for linear analyses are peak accelerations and velocities and associated response spectra. Nonlinear engineering analyses require estimates of future acceleration time histories. The discussion presented in this section focuses on empirical ground motion parameter estimation methods. Ground motion estimation methods required for nonlinear engineering analyses are presented in subsequent sections.
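As an illustration of how a response spectrum is obtained from an acceleration time history, the sketch below computes 5%-damped pseudo-spectral accelerations of single-degree-of-freedom oscillators in the frequency domain. The input motion is a synthetic placeholder rather than a recorded accelerogram, and the routine is a simplified stand-in for production response-spectrum codes.

```python
import numpy as np

def response_spectrum(acc, dt, periods, damping=0.05):
    """Pseudo-spectral acceleration of damped SDOF oscillators (frequency-domain solution)."""
    n = len(acc)
    nfft = int(2 ** np.ceil(np.log2(2 * n)))          # zero-pad to limit wrap-around
    ag = np.fft.rfft(acc, nfft)
    omega = 2.0 * np.pi * np.fft.rfftfreq(nfft, dt)
    sa = []
    for T in periods:
        wn = 2.0 * np.pi / T
        # relative-displacement transfer function for base-acceleration input
        h = -1.0 / (wn**2 - omega**2 + 2j * damping * wn * omega)
        u = np.fft.irfft(ag * h, nfft)[:n]
        sa.append(wn**2 * np.max(np.abs(u)))           # pseudo-spectral acceleration
    return np.array(sa)

# Example with a synthetic decaying-sine "ground motion" (0.3 g at 2 Hz)
dt = 0.005
t = np.arange(0.0, 20.0, dt)
acc = 0.3 * 9.81 * np.sin(2.0 * np.pi * 2.0 * t) * np.exp(-0.3 * t)
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
print(response_spectrum(acc, dt, periods) / 9.81)      # PSA in g
```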
Historically, the estimation of ground motion parameters such as peak acceleration, velocity, and displacement, response spectral ordinates, and duration has been based on regression relationships developed from strong motion observations. These ground motion prediction equations (GMPEs) strive to interpolate and extrapolate existing ground motion measurements to serve the needs of design for seismic loads.
Functional form of GMPEs for regression
In their simplest form, these empirical GMPEs predict peak ground motions based on a limited parametric description of earthquake and site characteristics. Peak ground motion amplitudes generally increase with increasing magnitude up to a threshold magnitude range where peak accelerations saturate, i.e., only slightly increase or stay nearly constant above the threshold magnitude range [START_REF] Campbell | Near-source attenuation of peak horizontal acceleration[END_REF][START_REF] Boore | Estimation of response spectra and peak accelerations from western North American earthquakes: An interim report[END_REF]. Similarly, observed peak ground motion amplitudes decrease with increasing distance from the earthquake fault, but saturate at close distances to faults such that the decrease in amplitudes with increasing distance is small within several km of faults. These GMPEs relate specific ground motion parameters to earthquake magnitude, reduction (attenuation) of ground motion amplitudes with increasing distance from the fault (geometric spreading), and local site characteristics using either site classification schemes or a range of quantitative measures of shallow to deeper velocity averages or thresholds. The 30-m-average shear-wave velocity (Vs30) is most commonly used to account for first-order influences of shallow site conditions. Depths to shear-wave velocities of 1.0, 1.5, and 2.5 km/s (Z1.0 in Abrahamson and Silva (2008) and [START_REF] Chiou | An NGA Model for the Average Horizontal Component of Peak Ground Motion and Response Spectra[END_REF], Z1.5 in Choi et al. (2005) and [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF], and Z2.5 in Campbell and Bozorgnia (2008), respectively) are sometimes used to account for influences of larger-scale crustal velocity structure on ground motions. The "Next Generation Attenuation" (NGA) Project was a collaborative research program with the objective of developing updated GMPEs (attenuation relationships) for the western U.S. and other worldwide active shallow tectonic regions. These relationships have been widely reviewed and applied in a number of settings [START_REF] Stafford | An Evaluation of the Applicability of the NGA Models to Ground Motion Prediction in the Euro-Mediterranean Region[END_REF][START_REF] Shoja-Taheri | A Test of the Applicability of the NGA Models to the Strong-Motion Data in the Iranian Plateau[END_REF]. Five sets of updated GMPEs were developed by teams working independently but interacting throughout the NGA development process. The individual teams all had previous experience in the development of GMPEs. The individual teams all had access to a comprehensive, updated ground motion database that had been consistently processed (Chiou et al., 2008). Each team was free to identify portions of the database to either include or exclude from the development process. A total of 3551 recordings were included in the PEER-NGA database. The number of records actually used by the developers varied from 942 to 2754.
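For reference, the sketch below shows the standard Vs30 calculation: 30 m divided by the shear-wave travel time through the uppermost 30 m of a layered profile. The example profile is hypothetical.

```python
def vs30(thickness_m, vs_mps):
    """Time-averaged shear-wave velocity over the top 30 m of a layered profile."""
    travel_time, remaining = 0.0, 30.0
    for h, vs in zip(thickness_m, vs_mps):
        use = min(h, remaining)
        travel_time += use / vs
        remaining -= use
        if remaining <= 0.0:
            break
    if remaining > 0.0:                       # extend the deepest layer if the profile is shallow
        travel_time += remaining / vs_mps[-1]
    return 30.0 / travel_time

# Hypothetical terrace-over-rock profile: 2 m at 300 m/s, 8 m at 500 m/s, rock below
print(vs30([2.0, 8.0, 50.0], [300.0, 500.0, 1500.0]))
```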
The individual GMPEs are described in Abrahamson and Silva (2008), [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF], [START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF], Chiou and Youngs (2008), and [START_REF] Idriss | An NGA Empirical Model for Estimating the Horizontal Spectral Values Generated by Shallow Crustal Earthquakes[END_REF]. These models are referred to as AS08, BA08, CB08, CY08, and I08, respectively, below. The NGA GMPE developers derived equations for the orientation-independent average horizontal component of ground motions [START_REF] Boore | GMRotD and GMRotI: Orientation-Independent Measures of Ground Motion[END_REF].
The NGA GMPEs account for these ground-motion factors using the general form,
ln Y = A_1 + A_2 M + A_3 (M − M_REF)^N + A_4 ln(R + C_SOURCE) + A_5 R + A_6 F_source + A_7 F_site + A_8 F_HW + A_9 F_main    (9)

and,

σ_lnY = A_10(M, Vs30)    (10)
where Y is the ground motion parameter of interest (peak acceleration, velocity, displacement, response spectral ordinate, etc.), M is magnitude, R is a distance measure, M_REF and C_SOURCE are magnitude and distance terms that define the change in amplitude scaling, and the F_[source, site, HW, main] are indicator variables for source type, site type, hanging wall geometry, and mainshock discrimination. The A_i are coefficients to be determined by the regression. Not all of the five NGA GMPEs utilize all of these F indicator variables. The σ_lnY term represents the estimate of the period-dependent standard deviation of the parameter ln Y at the magnitude and distance of interest. The NGA models use different source parameters and distance measures. Some of the models include the depth to top of rupture (TOR) as a source parameter. This choice was partially motivated by research [START_REF] Somerville | Differences in earthquake source and ground motion characteristics between surface and buried earthquakes[END_REF] that suggested a systematic difference in the ground motion for earthquakes with buried ruptures, which produce larger short-period ground motions than earthquakes with surface rupture. Large reverse-slip earthquakes tend to be buried ruptures more often than large strike-slip earthquakes, so the effect of buried ruptures may be partially incorporated in the style-of-faulting factor. Not all the NGA developers found the inclusion of TOR to be a statistically significant factor. All of the models except for I08 use the time-averaged S-wave velocity in the top 30 m of a site, Vs30, as the primary site response parameter. I08 is defined only for a reference rock outcrop with Vs30 = 450-900 m/s. Approximately two thirds of the recordings in the PEER-NGA database were obtained at sites without measured values of shear-wave velocity. Empirical correlations between the surface geology and Vs30 were developed (Chiou and others, 2008) and used with assessments of the surface geology to estimate values of Vs30 at the sites without measured velocities. The implications of the use of estimated Vs30 on the standard deviation (σ_T) were evaluated and included by AS08.
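A toy evaluation of the general form of Eqs. (9) and (10) is sketched below. The coefficients, reference magnitude, saturation exponent, and sigma values are invented for illustration only; they are not the published coefficients of any NGA model.

```python
import numpy as np

def ln_median(M, R, F_source=0, F_site=0, F_HW=0, F_main=0,
              A=(-3.5, 0.9, -0.1, -1.2, -0.003, 0.1, 0.2, 0.1, 0.0),
              M_ref=6.75, C_source=6.0, N=2):
    """Median ln(Y) for the generic form of Eq. (9); coefficient values are made up."""
    A1, A2, A3, A4, A5, A6, A7, A8, A9 = A
    return (A1 + A2 * M + A3 * max(M - M_ref, 0.0) ** N
            + A4 * np.log(R + C_source) + A5 * R
            + A6 * F_source + A7 * F_site + A8 * F_HW + A9 * F_main)

def sigma_lnY(M, vs30):
    """Illustrative magnitude-dependent aleatory standard deviation, Eq. (10)."""
    return float(np.interp(M, [5.0, 7.0], [0.70, 0.55]))

M, R = 7.0, 10.0
print("median Y:", np.exp(ln_median(M, R)))
print("sigma_lnY:", sigma_lnY(M, vs30=760.0))
```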
All of the relationships that model site response incorporate nonlinear site effects. Two different metrics for the strength of the shaking are used to quantify nonlinear site response effects. AS08, BA08, and CB08 use the median estimate of PGA on a reference rock outcrop in the nonlinear site response term. CY08 uses the median estimate of spectral acceleration on a reference rock outcrop at the period of interest. The definition of "reference rock" varies from Vs30 = 535 m/s (I08) to Vs30 = 1130 m/s (CY08). A very small fraction of the strong-motion data in the PEER-NGA data set was obtained at sites with Vs30 > 900 m/s. Depths to shear-wave velocities of 1.0, 1.5, and 2.5 km/s (Z1.0 in Abrahamson and Silva (2008) and [START_REF] Chiou | An NGA Model for the Average Horizontal Component of Peak Ground Motion and Response Spectra[END_REF], Z1.5 in Choi et al. (2005) and [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF], and Z2.5 in [START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF], respectively) are sometimes used to account for influences of larger-scale crustal velocity structure on ground motions. The implications of the methodology chosen to represent larger-scale crustal velocity structure on ground motions are discussed in more detail below.
The standard deviation or aleatory variability, often denoted sigma (σ_T), exerts a very strong influence on the results of probabilistic seismic hazard analysis (PSHA) [START_REF] Bommer | Why do modern probabilistic seismic-hazard analyses often lead to increased hazard estimates?[END_REF]. For this reason it is important to note that the total aleatory uncertainties, as well as the intra- and inter-event uncertainties, are systematically larger for the new NGA equations relative to previous relationships (Boore et al., 1997; [START_REF] Sadigh | Atenuation relationships for shallow crustal earthquakes based on California strong motion data[END_REF][START_REF] Campbell | Empirical near-source attenuation relationships for horizontal and vertical components of peak ground acceleration, peak ground velocity, and pseudo-absolute acceleration response spectra[END_REF]). Three of the NGA models incorporate a magnitude dependence in the standard deviation. For magnitudes near 7, the five NGA models have similar standard deviations. However, for M < 5.5, there is a large difference in the standard deviations, with the three magnitude-dependent models exhibiting much larger standard deviations (σ_T > 0.7) than the magnitude-independent models (σ_T ~ 0.54). The three models that include a magnitude-dependent standard deviation (AS08, CY08, and I08) all included aftershocks, whereas the two models that used a magnitude-independent standard deviation (BA08 and CB08) excluded them. Including aftershocks greatly increases the number of small-magnitude earthquakes. However, there is a resulting trade-off of significantly larger variability in predicted ground motions than if only large-magnitude mainshocks are used. Significant differences in the standard deviations are also noted for soil sites at short distances; this is most likely due to the inclusion or exclusion of nonlinear site effects on the standard deviation.
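The sketch below illustrates why sigma exerts such a strong influence in PSHA: for a lognormal ground-motion model, the probability of exceeding a target level well above the median grows quickly with the aleatory standard deviation. The median and target values are illustrative.

```python
import math

def prob_exceed(target, median, sigma_ln):
    """P(Y > target) for a lognormal ground-motion distribution."""
    z = (math.log(target) - math.log(median)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))

median_pga = 0.3                          # g, hypothetical median prediction
for sigma in (0.54, 0.70):                # magnitude-independent vs. small-magnitude NGA sigmas
    print(f"sigma={sigma}: P(PGA > 0.6 g) = {prob_exceed(0.6, median_pga, sigma):.3f}")
```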
In general, the NGA models predict similar median values (within a factor of ~1.5) for vertical strike-slip earthquakes with 5.5 < M < 7.5. The largest differences are for small magnitudes (M < 5.5), for very large magnitudes (M = 8), and for sites located over the hanging wall of dipping faults (Abrahamson et al., 2008). As more data has become available to the GMPE developers the number of coefficients in the relationships has increased significantly (>20 in some cases). However, the aleatory variability values ( T ) have not decreased through time (J. Bommer, pers. comm.). Since empirical GMPEs, including NGA GMPEs, are by necessity somewhat generic compared to the wide range of seismic source, crustal velocity structure, and site conditions encountered in engineering applications, there are cases when application of empirical GMPEs is difficult and most importantly, more uncertain. In the context of PSHA, these additional epistemic (knowledge) uncertainties, when quantified, are naturally incorporated into the probabilistic estimation of ground motion parameters. We present two situations of engineering interest, where the application of empirical GMPEs is challenging, to illustrate the difficulties and suggest a path forward in the ongoing process to update and improve empirical GMPEs.
Application of NGA GMPEs for near-fault Vs30 > 900 m/s sites
Independent analyses of the performance of the NGA GMPEs against post-NGA earthquake ground motion recordings demonstrate that use of measured site Vs30 characteristics leads to greatly improved ground motion predictions, with lower performance for sites where Vs30 is inferred instead of directly measured [START_REF] Kaklamanos | Model validations and comparisons of the next generation attenuation of ground motions (NGA-West) project[END_REF]. Thus, the use of Vs30 represents a significant improvement over previous generations of GMPEs that use a simple qualitative site classification scheme. [START_REF] Kaklamanos | Model validations and comparisons of the next generation attenuation of ground motions (NGA-West) project[END_REF] suggest that development of better site characteristics than Vs30 may also improve the prediction accuracy of GMPEs. In this section we illustrate the challenges presented by the use of Vs30 in the NGA GMPE regressions and by the application of the NGA GMPEs to "rock" sites.
It is becoming more common to need ground motion estimates for "rock" site conditions to specify inputs for engineering analyses that include both structures and shallow lower-velocity materials within the analysis model. In this section we consider the challenges in estimating ground motions for site conditions of Vs30 > 900 m/s close to strike-slip faults. The problem is challenging for empirical GMPEs because most of the available recordings of near-fault strike-slip ground motions are from sites with Vs30 on the order of 300 m/s. The NGA GMPEs that implement Vs30 used empirical and/or synthetic amplification functions that involve modifying the observed ground motions prior to regression. In this section we discuss some of the challenges of this approach as it applies to estimating ground motions at rock (Vs30 > 900 m/s) sites that are typical of foundation conditions for many large and/or deeply embedded structures. The four NGA GMPEs that implement Vs30 use deterministic ("constrained") amplification coefficients to remap the observed near-fault strike-slip strong-motion data, which have an average Vs30 of 299 m/s (Table 5), prior to regression. In contrast, Boore et al. (1997) applied nonlinear multi-stage regression using the observed data directly; the observed ground motion values were employed in their regression with no remapping of values due to site characteristics. [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF] used the Choi and Stewart (2005) linear amplification coefficients to remap observed response spectra to a reference Vs30 = 760 m/s. [START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF] used 1D nonlinear soil amplification simulation results of [START_REF] Walling | Nonlinear site amplification factors for constraining the NGA models[END_REF] to deterministically fix nonlinear amplification and remap all response spectra with Vs30 < 400-1086 m/s, depending on period, to create the response spectral "data" input into the nonlinear multi-stage regression. Abrahamson and Silva (2008) use an approach similar to [START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF]. [START_REF] Chiou | An NGA Model for the Average Horizontal Component of Peak Ground Motion and Response Spectra[END_REF] do not explicitly specify how the coefficients for linear and nonlinear amplification were constrained or obtained.
Thus, [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF] remap observed response spectra prior to regression using the linear coefficients from Choi and Stewart (2005), [START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF] and Abrahamson and Silva (2008) remap observed response spectra prior to regression using the nonlinear coefficients from [START_REF] Walling | Nonlinear site amplification factors for constraining the NGA models[END_REF], and it is not clear what [START_REF] Chiou | An NGA Model for the Average Horizontal Component of Peak Ground Motion and Response Spectra[END_REF] did. We use [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF] and [START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF] to illustrate how the observed response spectral data for sites with Vs30 = 300 m/s are changed to create the actual "data" used in the regression to estimate Vs30 = 915 m/s ground motions. It is instructive to compare the approaches and resulting near-fault ground motion predictions. For [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF], amplification normalized by the GMPE's longest-period (10 s) amplification is shown in Fig. 5.1 to clearly illustrate the scale of the a priori deterministic linear amplification as a function of period. The a priori deterministic linear-amplification normalization (Fig. 5.1a) takes the original median near-fault response spectra, which have a peak amplitude at about 0.65 s (Fig. 5.1), and creates response spectra with peak amplitude at 0.2 s that are used as the "observed data" (red curve in Fig. 5.1b) in the nonlinear multi-stage GMPE regression. For [START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF], the nonlinear Vs30 amplification coefficients are fixed and create the deterministic nonlinear amplification function (Fig. 5.2a) that is always applied to Vs30 < 400 m/s PSA at all periods to create the "data" (red curve in Fig. 5.2b) used in the nonlinear multi-stage GMPE regression. In the case of nonlinear deterministic amplification it is necessary to specify a reference PGA. We use 0.45 g for the reference PGA for illustration, since this is close to the median ground motion case for sites about 2 km from strike-slip faults and M > 6; use of a higher reference PGA would increase the nonlinear amplification in Fig. 5.2a. The use of a single deterministic amplification function for Vs30, whether linear or nonlinear, assumes that there is a one-to-one deterministic mapping of period-dependent amplification to Vs30, which Idriss (2008) suggests is not likely; a single Vs30 can be associated with a wide variety of amplification functions.
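The following schematic shows the remapping step described above, in which observed soil-site response spectra are divided by a prescribed deterministic amplification function to create the reference-rock "data" used in regression. The spectral ordinates and amplification factors are placeholders, not the Choi and Stewart (2005) or Walling et al. (2008) values, but they reproduce the period shift of the spectral peak discussed above.

```python
import numpy as np

periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
observed_psa = np.array([0.60, 0.80, 0.95, 0.70, 0.35, 0.10, 0.03])   # g, soil-site spectrum (hypothetical)
# prescribed amplification of a Vs30 ~ 300 m/s site relative to reference rock (hypothetical)
prescribed_amp = np.array([1.2, 1.3, 1.8, 2.2, 2.4, 2.0, 1.6])

remapped_psa = observed_psa / prescribed_amp      # becomes the regression "data"
peak_obs = periods[np.argmax(observed_psa)]
peak_remap = periods[np.argmax(remapped_psa)]
print("observed spectral peak at", peak_obs, "s; remapped peak at", peak_remap, "s")
```

With these illustrative numbers the observed spectral peak near 0.5 s moves to 0.2 s after remapping, which is the behavior the deterministic amplification functions impose on the data prior to regression.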
Further, in the case of nonlinear amplification (Campbell and Bozorgnia, 2008, and Abrahamson and Silva, 2008), a single deterministic nonlinear amplification function is used to account for modulus reduction and damping that vary widely as a function of soil materials, as discussed in Section 4 and in [START_REF] Bonilla | Hysteretic and Dilatant Behavior of Cohesionless Soils and Their Effects on Nonlinear Site Response: Field Data Observations and Modeling[END_REF]. In both cases, the deterministic amplification functions remap the observed, predominantly soil-site, near-fault response spectra (Chiou et al., 2008) to have a peak acceleration response at about 0.2 s prior to regression. Thus, in hindsight it may not be a surprise that the NGA response spectra maintain a strong bias to peak at 0.2 s period, which in large part is the result of the deterministic amplification modifications to the observed data prior to nonlinear multi-stage regression. What is remarkable is that all four NGA GMPEs that implement Vs30, as well as [START_REF] Idriss | An NGA Empirical Model for Estimating the Horizontal Spectral Values Generated by Shallow Crustal Earthquakes[END_REF], predict that spectral accelerations normalized by peak ground acceleration always peak at about 0.2 s, virtually independent of magnitude for M > 6; the overall shape of the [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF] response spectra normalized by peak ground acceleration in Fig. 5.3a is representative of all NGA GMPE response spectral shapes in terms of overall spectral shape and the 0.2 s period of maximum response. Boore et al. (1997) obtained a quite different result, with the period of peak spectral amplitude shifting to longer periods as magnitude increases above M 6.6 (Fig. 5.3b). The few near-fault data from sites with Vs30 > 900 m/s are listed in Table 6 and shown in Fig. 5.3c. Ground motion acceleration at high frequency scales in proportion to dynamic stress drop [START_REF] Boore | Stochastic simulation of high-frequency ground motions based on seismological models of the radiated spectra[END_REF]. Average slip is proportional to the product of average dynamic stress drop and average rise time. Dynamic stress drop averaged over the entire fault plane is generally found to remain relatively constant with magnitude [START_REF] Aki | Strong-motion seismology[END_REF][START_REF] Shaw | Constant stress drop from small to great earthquakes in magnitude-area scaling[END_REF]. Thus, as average slip increases with magnitude [START_REF] Somerville | Characterizing crustal earthquake slip models for the prediction of strong ground motion[END_REF][START_REF] Mai | Source scaling properties from finite-fault-rupture models[END_REF][START_REF] Mai | On scaling of fracture energy and stress drop in dynamic rupture models: Consequences for near-source ground motions, Earthquakes: Radiated Energy and the Physics of Faulting[END_REF], average rise time must also increase with increasing magnitude. [START_REF] Somerville | Magnitude scaling of the near fault rupture directivity pulse[END_REF] notes that the period of the dominant-amplitude near-fault motions is related to source parameters such as the rise time and the fault dimensions, which generally increase with magnitude.
[START_REF] Mai | On scaling of fracture energy and stress drop in dynamic rupture models: Consequences for near-source ground motions, Earthquakes: Radiated Energy and the Physics of Faulting[END_REF] present an analysis of the scaling of stress drop with seismic moment and find a strong increase of maximum stress drop on the fault plane as a function of increasing moment. In contrast, average stress drop over the entire fault plane at most only slightly increases with increasing moment; the substantial scatter of average stress drop values in Figure 1 of [START_REF] Mai | On scaling of fracture energy and stress drop in dynamic rupture models: Consequences for near-source ground motions, Earthquakes: Radiated Energy and the Physics of Faulting[END_REF] is consistent with average stress drop that is constant with moment. The [START_REF] Mai | On scaling of fracture energy and stress drop in dynamic rupture models: Consequences for near-source ground motions, Earthquakes: Radiated Energy and the Physics of Faulting[END_REF] results for maximum stress drop are consistent with first-order constraints on stochastic aspects of seismic source properties [START_REF] Andrews | A stochastic fault model, 2, Time-dependent case[END_REF][START_REF] Boore | Stochastic simulation of high-frequency ground motions based on seismological models of the radiated spectra[END_REF][START_REF] Frankel | High-frequency spectral falloff of earthquakes, fractal dimension of complex rupture, b value, and the scaling strength on faults[END_REF]. As fault area increases, the probability of observing a larger stress drop somewhere on the fault plane increases, since stress drop must exhibit correlated-random variability over the fault to explain the first-order observations of seismic source properties inferred from ground motion recordings, such as the ω-squared spectral shape [START_REF] Andrews | A stochastic fault model, 2, Time-dependent case[END_REF][START_REF] Frankel | High-frequency spectral falloff of earthquakes, fractal dimension of complex rupture, b value, and the scaling strength on faults[END_REF]. However, for the moment range (6.5 < M < 7.5) that dominates the hazard at many sites, the stress drop averaged over the entire fault plane is generally found to remain relatively constant with magnitude [START_REF] Aki | Strong-motion seismology[END_REF][START_REF] Shaw | Constant stress drop from small to great earthquakes in magnitude-area scaling[END_REF], thus requiring average rise time to increase with increasing magnitude. These fundamental seismological constraints, derived from analyses of many earthquakes, require that the period that experiences peak response spectral amplitudes should increase with magnitude above some threshold magnitude. The results of the Boore et al. (1997) GMPEs suggest the threshold magnitude is about M 6.6 (Fig. 5.3b). That all five NGA GMPEs predict invariance of the period of peak spectral response amplitude for M > 6.6 to M 8.0 (examples from M 6.6 to M 7.4 are shown in Fig. 5.3a) implies that stress drop increases strongly with increasing magnitude, which is inconsistent with current knowledge of seismic source properties. In contrast, the magnitude- and period-dependent response spectral results of Boore et al. (1997) are more consistent with available seismological constraints. It is important to understand why. Boore et al. (1997) implement Vs30 site factors in a quite different manner than the four NGA GMPEs that use Vs30. Boore et al.
(1997) applied non-linear multi-stage regression using the observed data directly, with no deterministic remapping of data by Vs30 prior to regression. Except for their deterministic treatment of Vs30, [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF] use a similar regression approach for Boore et al. (1997). Since [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF] regress period-by-period, the linear site-response remapping (Fig. 5.1a) effectively swamps any signal associated with a period shift with increasing magnitude observed by Boore et al. (1997); a non-linear regression will operate on the largest signals. The deterministic linear amplification function in [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF] becomes a very large signal (Figure 5.1a) when operating on data from Vs30=300 m/s sites. The other NGA GMPE regressions normalize response spectra by peak ground acceleration prior to regression, which Boore et al. (1997) suggest tends to reduce resolution of the period-amplitude response-spectra variations in multi-stage regression. Figs 5.1 and 5.2 illustrate why the NGA GMPEs predict PSA shapes that barely change with magnitude (Fig. 5.3a) and why the NGA GMPEs do not match the first-order characteristics of M > 6.6 near-fault PSA (Fig. 5.3c). It simply might be true that once nonlinear amplification occurs it is impossible to resolve differences between period shifts associated with source processes and site responses. Yet, implicitly the NGA GMPE non-linear regressions assume resolution of all possible response-spectral shape changes as a function of magnitude using deterministic site response amplification functions, an assumption [START_REF] Idriss | An NGA Empirical Model for Estimating the Horizontal Spectral Values Generated by Shallow Crustal Earthquakes[END_REF] does not find credible. In contrast, Boore et al. (1997) used the actual unmodified response spectral data in their multi-stage regression and obtained results compatible with existing seismological constraints. Unfortunately, this leaves us in a bit of a conundrum based on GMPE grading criteria suggested by [START_REF] Bommer | On the selection of ground-motion prediction equations for seismic hazard analysis[END_REF] and [START_REF] Kaklamanos | Model validations and comparisons of the next generation attenuation of ground motions (NGA-West) project[END_REF], which clearly establish that NGA is a significant improvement for a wide range of applications than previous generation GMPEs, including Boore et al. (1997). A primary contributor to this conundrum about appropriate spectral behaviour for near-fault Vs30 > 900 m/s sites is the lack of near-fault ground motion data for Vs30 > 900 m/s (Table 6 andFigure 5.3c), providing a vivid real-world example of epistemic uncertainty.
The site amplification approach used in NGA is discussed by [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF], "The rationale for pre-specifying the site amplifications is that the NGA database may be insufficient to determine simultaneously all coefficients for the nonlinear soil equations and the magnitude-distance scaling, due to trade-offs occur between parameters, particularly when soil nonlinearity is introduced. It was therefore deemed preferable to "hard-wire" the soil response based on the best-available empirical analysis in the literature, and allow the regression to determine the remaining magnitude and distance scaling factors. It is recognized that there are implicit trade-offs involved, and that a change in the prescribed soil response equations would lead to a change in the derived magnitude and distance scaling. Note, however, that our prescribed soil response terms are similar to those adopted by other NGA developers who used different approaches; thus there appears to be consensus as to the appropriate level for the soil response factors." This consensus is both a strength and weakness of the NGA results. The weakness is that if there is a flaw in the deterministic site response approach, then all the NGA GMPEs that use Vs30 are adversely impacted. Ultimately, three data points (Table 6 and Fig. 5.3c) are insufficient for the data to significantly speak for themselves in this particular case. Consequently, one can argue for one interpretation (invariant spectral shape) or the other (spectral peaks shift to longer periods at M > 6.6), and while a Bayesian evidence analysis shows that limited available data support a spectral shift with increasing magnitude, without data from more earthquakes, an honest result is that large epistemic uncertainty remains a real issue for Vs30 > 900 m/s near-fault sites. Epistemic uncertainties can be rigorously accounted for in probabilistic ground motion analyses. However, it is necessary to develop a quantitative description of the epistemic uncertainties to accomplish this. Uncertainty in spectral shape as a function of magnitude, particular the period band of maximum acceleration response are important issues because many structures have fundamental modes of vibration at periods significantly longer than 0.2 s, the period the NGA GMPEs suggest that maximum acceleration responses will occur for M > 6.6 earthquakes at Vs30 > 900 m/s near-fault sites. We can reduce these site uncertainties and improve ground motion prediction, with the ground motion data that currently exist by collecting more quantitative information about site characteristics that more directly and robustly determine site amplification, like Vsdepth profiles. [START_REF] Kaklamanos | Model validations and comparisons of the next generation attenuation of ground motions (NGA-West) project[END_REF] showed through empirical statistical analyses that actual Vs30 measurements produced better performance than occurred at sites where Vs30 is postulated based on geology or other proxy data. Boore and Joyner (1997) suggested that quarter-wavelength approximation of [START_REF] Joyner | The effect of Quaternary alluvium on strong ground motion in the Coyote Lake, California earthquake of 1979[END_REF] would likely be a better predictor of site responses than Vs30. 
For a particular frequency, the quarter-wavelength approximation for amplification is given by the square root of the ratio between the seismic impedance (velocity times density) averaged over a depth corresponding to a quarter wavelength and the seismic impedance at the depth of the source. The analyses of this section suggest that the combination of Vs30 and its deterministic implementation in NGA is not the best approach. [START_REF] Thompson | Multiscale site-response mapping: a case study of Parkfield, California[END_REF] show that the quarter-wavelength approximation more accurately estimates amplification than amplification estimated using Vs30. Given the rapid growth of low-cost, verified passive measurement methods that quickly estimate robust Vs-depth profiles to 50-100 m or more [START_REF] Stephenson | Blind shear-wave velocity comparison of ReMi and MASW results with boreholes to 200 m in Santa Clara Valley: Implications for earthquake ground-motion assessment[END_REF][START_REF] Boore | Comparisons of shear-wave slowness in the Santa Clara Valley, California, using blind interpretations of data from invasive and noninvasive methods[END_REF][START_REF] Miller | Advances in near-surface seismology and ground-penetrating radar: Introduction[END_REF][START_REF] O'connell | Interferometric multichannel analysis of surface waves (IMASW)[END_REF], acquiring Vs-depth data for as much of the empirical ground motion database as possible would greatly improve the prospects for substantially better resolution of site amplification in future GMPEs. These results illustrate how difficult it is to formulate a GMPE functional form and regression strategy a priori, even for a "single" parameter like Vs30. This analysis does not show that the NGA GMPEs are incorrect. Instead, it demonstrates some of the trade-offs, dependencies, and uncertainties that occur in the NGA GMPEs between Vs30 and spectral shape. This near-fault, high-Vs30 example illustrates that it is important to conduct independent analyses to determine which GMPEs are best suited for a particular application and to use multiple GMPEs, preferably with some measure of independence in their development, to account for realistic epistemic GMPE uncertainties.
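A minimal sketch of the quarter-wavelength calculation is given below, assuming a hypothetical layered profile: at each frequency the depth traversed by an S wave in a quarter period is found, the impedance is averaged over that depth, and the amplification is taken as the square root of the deep (source-side) impedance divided by the averaged near-surface impedance, so that lower shallow impedance yields amplification greater than one. The deep velocity and density values and the example profile are assumptions for illustration.

```python
import numpy as np

def quarter_wavelength_amp(freqs, h, vs, rho, vs_deep=3500.0, rho_deep=2700.0):
    """Quarter-wavelength amplification for a layered profile (h in m, vs in m/s, rho in kg/m^3)."""
    amps = []
    for f in freqs:
        t_target = 1.0 / (4.0 * f)                 # quarter-period travel time
        depth, tt = 0.0, 0.0
        for hi, vi in zip(h, vs):
            dt = hi / vi
            if tt + dt >= t_target:                # quarter wavelength ends within this layer
                depth += (t_target - tt) * vi
                tt = t_target
                break
            tt += dt
            depth += hi
        if tt < t_target:                          # profile too shallow: extend deepest layer
            depth += (t_target - tt) * vs[-1]
        vs_avg = depth / t_target                  # time-averaged velocity over that depth
        # thickness-weighted average density over the quarter-wavelength depth
        rho_sum, z_left = 0.0, depth
        for hi, ri in zip(h, rho):
            use = min(hi, z_left)
            rho_sum += ri * use
            z_left -= use
            if z_left <= 0.0:
                break
        rho_avg = (rho_sum + rho[-1] * max(z_left, 0.0)) / depth
        amps.append(np.sqrt((rho_deep * vs_deep) / (rho_avg * vs_avg)))
    return np.array(amps)

# Hypothetical profile: thicknesses (m), shear velocities (m/s), densities (kg/m^3)
h = [2.0, 8.0, 40.0, 200.0]
vs = [300.0, 500.0, 900.0, 1800.0]
rho = [1800.0, 1900.0, 2100.0, 2400.0]
print(quarter_wavelength_amp([0.5, 1.0, 5.0, 10.0], h, vs, rho))
```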
Near-fault application of NGA GMPEs and site-specific 3D ground motion simulations: Source and site within the basin

In tectonically active regions near plate boundaries, active faults are often located within or along the margins of sedimentary basins. Basins are defined by spatially persistent, strong lateral and vertical velocity contrasts that trap seismic waves within the basin. Trapped seismic waves interact to amplify ground shaking and sometimes substantially increase the duration of strong shaking. The basin amplification effect is the result of the combination of lateral and vertical variations in velocity that make the basin problem truly three-dimensional in nature and difficult to quantify empirically with currently available strong motion data. The basin problem is particularly challenging for estimating amplifications for periods longer than 1 s and sedimentary basin thicknesses exceeding about 3 km [START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF].
Unfortunately, some of the largest urban populations in the world are located within basins containing active faults, including many parts of Japan [START_REF] Kawase | The cause of the damage belt in Kobe: "The basin-edge effect," constructive interference of the direct S-wave with the basin-induced diffracted/Rayleigh waves[END_REF][START_REF] Pitarka | Three-dimensional simulation of the near-fault ground motions for the 1995 Hyogo-Nanbu (Kobe), Japan, earthquake[END_REF][START_REF] Nied | Off the Pacific Coast of Tohoku Earthquake, Strong Ground Motion[END_REF], the Los Angeles and other basins in southern California [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF], and Seattle, Washington [START_REF] Frankel | Sedimentary basin effects in Seattle, Washington: Ground-motion observations and 3D simulations[END_REF]. Consequently, estimation of long-period ground motions in sedimentary basins associated with near-fault ruptures is an important practical need. Choi et al. (2005) used empirical and synthetic analyses to consider the effects of two types of basin situations. They denoted sites located in a basin overlying the source as having coincident source and site basin locations (CBL) and differentiated them from distinct source and site basin locations (DBL). They used pre-NGA GMPEs for "stiff-soil/rock", modified to account for Vs30 using Choi and Stewart (2005), to regress for additional basin amplification factors as a function of a scalar measure of basin depth, Z1.5, the depth to a shear-wave velocity of 1.5 km/s. Using ground motion data from southern and northern California basins, Choi et al. (2005) found strong empirical evidence that ground-motion amplification in coincident source and site basin locations (CBL) is significantly depth-dependent at medium to long periods (T > 0.3 s). In contrast, they found that when the seismic source lies outside the basin margin (DBL), there is much lower to negligible empirical evidence for depth-dependent basin amplification.
In support of NGA GMPE development, [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF] proposed a model for the effect of sedimentary basin depth on long-period response spectra. The model was based on the analysis of 3D numerical simulations (finite element and finite difference) of long-period (2-10 s) ground motions for a suite of sixty scenario earthquakes (M 6.3 to M 7.1) within the Los Angeles basin region. [START_REF] Day | 3-D ground motion simulation in basins: Final report for Lifelines Project 1A03[END_REF] used a deterministic 3D velocity model for southern California (Magistrale et al., 2000) to calculate the wave responses on a grid and determine the amplification of basin sites as a function of Z1.5 in the 3D model. Although the model is purely synthetic, because it is primarily concerned with ratios (amplification) it is relatively unimportant to consider correlated-random effects on wave amplitude (Table 3) and phase (Table 4) when calculating first-order amplification effects for shallow (< 2 km) and/or relatively fast basins.
In shallow and/or fast basins the additional stochastic basin path length difference between the shallow basin and bedrock paths is less than a couple wavelengths at periods > 1 s, so the effects of differential correlated-random path lengths on S-wave amplification are negligible (O'Connell, 1999a). For typical southern California lower-velocity basins deeper than 3 km both the 3D viscoelastic finite-difference simulations of O'Connell (1999a) and phase-screen calculations of Hartzell et al. (2005) show correlated-random velocity variations will significantly reduce estimated basin amplification relative to deterministic 3D models. The primary purpose of O'Connell's (1999a) investigations was to determine the likely amplification of higher-velocity rock sites where few empirical data exist (see Section 5.2), relative to the abundant ground motion recordings obtained from stiff soil sites. O'Connell (1999a) showed that basin amplification in > 3 km deep basins is reduced relative to rock as the standard deviation of correlated-random velocity variations increases because the mean-freepath scattering in the basins significantly increases relative to rock at periods of 1-4 s. Consequently, because [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF] use a deterministic 3D velocity model, we expect that their estimated basin amplifications will generally correspond to upper bounds of possible mean amplifications for southern California basins deeper than 3 km, but provide accurate first-order estimates of basin amplification for shallower (< 2 km) basins.
Several NGA GMPEs worked to empirically evaluate and incorporate "basin effects" in some way, but it is important to note that none of the empirical NGA GMPEs explicitly consider 3D basin effects by separately considering data in coincident source and site basin locations (CBL) from other data as Choi et al. (2005) showed is necessary to empirically estimate 3D basin effects for coincident source and site basin locations. NGA GMPEs lack sufficient parameterization to make this distinction, thus lumping all sites, CBL, DBL, and sites not located in basins into common Z1.0 or Z2.5 velocity-depth bins. All these sites, no matter what their actual location relative to basins and sources are apportioned some "basin-effect" through their Vs30 site velocity, Z1.0, and Z2.5 "basin-depth" terms [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF]. It is important to understand that Z1.0 and Z2.5 are not empirically "basin-depth" terms, but "velocity-depth" terms. We use "velocity-depth" to refer to Z1.0 and Z2.5 instead of "basin-depth" because empirically, the NGA empirical GMPEs do not make the necessary distinctions in their GMPE formulations for these terms to actually apply to the problem of estimating 3D CBL amplification effects, the only basin case where a statistically significant empirical basin signal has been detected (Choi et al, 2005). [START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF] found empirical support for significant "velocity-depth" Z2.5 term after application of their Vs30 term, but only for sites where Z2.5 < 3 km, which roughly correspond to Z1.5 < 1.5. For Z2.5 > 3 km, [START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF] used the parametric 3D synthetic basin-depth model from [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF]. [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF] note that correlation between Vs30 and "basin" depth is sufficiently strong to complicate the identification of a basin effect in the residuals after having fit a regression model to Vs30. [START_REF] Chiou | An NGA Model for the Average Horizontal Component of Peak Ground Motion and Response Spectra[END_REF] found that to implement a velocity-depth term using Z1.5 would require removing the Vs30 site term from their GMPE because of the Z1.5-Vs30 correlation. Instead, [START_REF] Chiou | An NGA Model for the Average Horizontal Component of Peak Ground Motion and Response Spectra[END_REF] retained Vs30 at all periods and included a "velocitydepth" Z1.0 term to empirically capture the portion of velocity-depth amplification not fully accounted for by the correlation between Vs30 and Z1.0. Abrahamson and Silva ( 2008) used a similar Vs30 and Z1.0 parameterization approach for their GMPE. 
Since none of the NGA implementations of Z1.0 distinguish whether a site is actually located in a CBL or is not even in a basin, it is useful to evaluate the predictions of the four NGA GMPEs that implement Vs30, including the three NGA GMPEs that incorporate Z1.0 and Z2.5 velocity-depth terms, for four CBL sites along a portion of the North Anatolian Fault, where the fault is embedded below a series of connected 3D basins. Three hypocenter positions are used to evaluate forward, bilateral, and reverse rupture directivity (Fig. 5.4). These simulated ground motions are compared to NGA GMPE response spectra predictions from the four NGA GMPEs with Vs30, including the three with velocity-depth terms (Abrahamson and Silva, 2008; [START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF]; [START_REF] Chiou | An NGA Model for the Average Horizontal Component of Peak Ground Motion and Response Spectra[END_REF]), and [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF], for periods of 1 s and longer. The NGA results are modified to account for rupture directivity using [START_REF] Rodriguez-Marek | An empirical geotechnical seismic site response procedure[END_REF], Spudich and Chiou (2008), and [START_REF] Rowshandel | Directivity correction for the next generation attenuation (NGA) relations[END_REF] to isolate residual 3D basin and directivity effects relative to the NGA-based empirical predictions. A 3D velocity model encompassing the eastern Marmara Sea and Izmit Bay regions was constructed to span a region including the fault segments of interest, the ground motion estimation sites, and local earthquakes and recording stations (Fig. 5.5). Synthetic waveform modeling of local earthquake ground motions was used to iteratively improve and update the 3D model. The initial 3D velocity model was constructed using published 1-D velocity model data [START_REF] Bécel | Moho, crustal architecture and deep deformation under the North Marmara Trough, from the SEISMARMARA Leg 1 offshore-onshore reflection-refraction survey[END_REF][START_REF] Bayrakci | Approach to the complex 3D upper-crustal seismic structure of the Sea of Marmara by artificial source tomography on a 2D grid of OBS[END_REF], tomographically assessed top-of-basement contours [START_REF] Bayrakci | Approach to the complex 3D upper-crustal seismic structure of the Sea of Marmara by artificial source tomography on a 2D grid of OBS[END_REF], seismic reflection profiles [START_REF] Carton | Seismic imaging of the threedimensional architecture of the Çınarcık Basin along the North Anatolian Fault[END_REF] (Kurt and Yucesoy, 2009), Bouguer gravity profiles [START_REF] Ates | Structural interpretation of the Marmara region, NW Turkey, from aeromagnetic, seismic and gravity data[END_REF], geologic mapping [START_REF] Okyar | Late quaternary seismic stratigraphy and active faults of the Gulf of İzmit (NE Marmara Sea)[END_REF], and fault mapping [START_REF] Armijo | Asymmetric slip partitioning in the Sea of Marmara pull-apart: a clue to propagation processes of the North Anatolian Fault?[END_REF].
Additional understanding of the basin-basement contact was gained by assessment of seismic reflection data collected by the SEISMARMARA cruise and made available at http://www.ipgp.fr/~singh/DATA-SEISMARMARA/.
The empirical wavespeed and density relations from Brocher (2005) were used to construct 3D shear-wave and density models based on the initial 3D acoustic-wave model. Shear-wave velocities were clipped so that they were not less than 600 m/s, to ensure that simulated ground motions would be accurate for periods > 0.7 s for the 3D variable grid spacing used in the finite-difference calculations. This initial 3D velocity model was used to generate synthetic seismograms to compare with recordings of local M 3.2-4.3 earthquakes recorded on the margins of the Sea of Marmara, Izmit Bay, and inland locations north of Izmit Bay, to assess the ground motion predictive performance of the initial 3D model. Several iterations of forward modeling were used to modify the 3D velocity model to obtain models that produce synthetic ground motions more consistent with locally recorded earthquake ground motions. The resulting shear-wave surface velocities mimic the pattern of acoustic-wave velocities and are consistent to first order with the 3D acoustic-wave tomography results for the eastern Marmara Sea from [START_REF] Bayrakci | Approach to the complex 3D upper-crustal seismic structure of the Sea of Marmara by artificial source tomography on a 2D grid of OBS[END_REF]. Following O'Connell (1999a) and Hartzell et al. (2005), the final 3D model incorporates 5% standard deviation correlated-random velocity variations to produce more realistic peak ground motion amplitudes than a purely deterministic model. Since there are three distinct geologic volumes in the 3D model, three independent correlated randomizations were used: one for the basin materials with a correlation length of 2.5 km, and one each for the basement north and south of the NAF, both with a correlation length of 5 km. Similar to [START_REF] Hartzell | Effects of 3D random correlated velocity perturbations on predicted ground motions[END_REF], we use a von Karman randomization with a Hurst coefficient close to zero and 5% standard deviation. Velocity variations are clipped so that shear-wave velocities are never smaller than 600 m/s, to ensure a consistent dispersion limit for all calculations, and randomized acoustic velocities are never larger than the maximum deterministic acoustic velocity, to keep the same time step for all simulations. Realistic ground motion simulations require accounting for first-order anelastic attenuation, even at long periods [START_REF] Olsen | Estimation of Q for long-period(>2 sec) waves in the Los Angeles basin[END_REF]. The fourth-order finite-difference code employs the efficient and accurate viscoelastic formulation of [START_REF] Liu | Efficient modeling of Q for 3D numerical simulation of wave propagation[END_REF]. A kinematic representation of finite fault rupture is used in which fault slip (displacement), rupture time, and rise time are specified at each finite-difference grid node intersected by the fault. The 3D viscoelastic fourth-order finite-difference method of [START_REF] Liu | The effect of a low-velocity surface layer on simulated ground motion[END_REF] was used to calculate ground motion responses from the kinematic finite fault rupture simulations. The kinematic rupture model mimics the spontaneous dynamic rupture behavior of a self-similar stress distribution model of [START_REF] Andrews | Dynamic simulation of spontaneous rupture with heterogeneous stress drop[END_REF]. The kinematic rupture model is also similar to the rupture model of Herrero and Bernard (1994).
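A simplified 2D illustration of generating von Karman correlated-random velocity perturbations (Hurst coefficient near zero, 5% standard deviation, clipped at a 600 m/s minimum) is sketched below. It is a stand-alone example, not the randomization code used for the 3D Marmara model, and the grid dimensions and background velocity are assumed.

```python
import numpy as np

def von_karman_field(nx, nz, dx, corr_len, hurst, std, seed=0):
    """Correlated-random perturbation field with a 2D von Karman power spectrum."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, dx) * 2.0 * np.pi
    kz = np.fft.fftfreq(nz, dx) * 2.0 * np.pi
    k2 = kx[:, None] ** 2 + kz[None, :] ** 2
    psd = corr_len**2 / (1.0 + k2 * corr_len**2) ** (hurst + 1.0)   # unnormalized PSD
    noise = rng.standard_normal((nx, nz))
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)).real
    return field * std / field.std()                                # scale to target std

vs_background = 2500.0                      # m/s, assumed basement velocity
pert = von_karman_field(nx=256, nz=128, dx=100.0, corr_len=5000.0, hurst=0.05, std=0.05)
vs = np.maximum(vs_background * (1.0 + pert), 600.0)                # clip minimum Vs at 600 m/s
print(vs.min(), vs.mean(), vs.std())
```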
Self-similar displacements are generated over the fault with rise times that are inversely proportional to effective stress. Peak rupture slip velocities evolve from ratios of 1:1 relative to the sliding (or healing peak) slip velocity at the hypocenter to a maximum ratio of 4:1. This form of slip velocity evolution is consistent with the dynamic rupture results of [START_REF] Andrews | Dynamic simulation of spontaneous rupture with heterogeneous stress drop[END_REF], which show a subdued Kostrov-like growth of peak slip velocities as rupture grows over a fault. The kinematic model used here produces slip models with 1/k² (k is wavenumber) spectral decay, consistent with estimates of earthquake slip distributions [START_REF] Somerville | Characterizing crustal earthquake slip models for the prediction of strong ground motion[END_REF], and ω-squared (ω is angular frequency) displacement spectra in the far field. [START_REF] Oglesby | Stochastic fault stress: Implications for fault dynamics and ground motion[END_REF] and [START_REF] Schmedes | Correlation of earthquake source parameters inferred from dynamic rupture simulations[END_REF] used numerical simulations of dynamic fault rupture to show that rupture velocity, rise time, and slip are correlated with fault strength and stress drop, as well as with each other. The kinematic rupture model used here enforces correlations between these parameters by using a common fractal seed to specify relationships among all these fault rupture parameters. [START_REF] Oglesby | Stochastic fault stress: Implications for fault dynamics and ground motion[END_REF], [START_REF] Guatteri | Strong ground motion prediction from stochastic-dynamic source models[END_REF], and [START_REF] Schmedes | Correlation of earthquake source parameters inferred from dynamic rupture simulations[END_REF] used dynamic rupture simulations to demonstrate that rupture parameter correlation, as implemented in the stochastic kinematic rupture model outlined here, is necessary to produce realistic source parameters for ground motion estimation. The fault slip variability incorporates the natural log standard deviation of strike-slip displacement observed by [START_REF] Petersen | Fault displacement hazard for strike-slip faults[END_REF] in their analyses of global measurements of strike-slip fault displacements. Consequently, although mean displacements are on the order of 1.5 m for the M 7.1 three-segment scenario earthquake, asperities within the overall rupture have displacements of up to 3-4 m. The [START_REF] Liu | Efficient modeling of Q for 3D numerical simulation of wave propagation[END_REF] slip velocity function is used with the specified fault slips and rise times to calculate slip-velocity time functions at each grid point. Three hypocenters were used to simulate forward, reverse, and bilateral ruptures relative to Izmit Bay sites (Fig. 5.4). To find an appropriate "median" randomization of the 3D velocity model, ten correlated-random 3D velocity models were created and a single, three-segment randomized kinematic rupture model was used to simulate ten sets of ground motions. The randomized 3D model that most consistently produced nearly median motions across the five sites over the 1-10 s period band was used to calculate all the ground motion simulations for all two-segment and three-segment rupture scenarios. Ten kinematic randomizations were used for each case, resulting in 60 rupture-scenario ground motion simulations.
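The sketch below generates a stochastic slip distribution with a k⁻² wavenumber falloff beyond a corner wavenumber tied to the fault dimension, in the spirit of the kinematic generator described above. The fault dimensions, corner-wavenumber choice, and mean slip are illustrative, and this toy omits the rise-time, rupture-velocity, and peak-slip-velocity correlations of the actual model.

```python
import numpy as np

def k2_slip(nx, nz, dx, mean_slip, seed=1):
    """Random slip field with a k^-2 amplitude falloff beyond a corner wavenumber."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, dx)
    kz = np.fft.fftfreq(nz, dx)
    k = np.sqrt(kx[:, None] ** 2 + kz[None, :] ** 2)
    kc = 1.0 / min(nx * dx, nz * dx)              # corner wavenumber ~ 1 / fault dimension
    amp = 1.0 / (1.0 + (k / kc) ** 2)             # k^-2 falloff beyond the corner
    phase = np.exp(2j * np.pi * rng.random((nx, nz)))
    slip = np.fft.ifft2(amp * phase).real
    slip -= slip.min()                            # keep slip non-negative
    return slip * mean_slip / slip.mean()         # scale to the target mean slip

slip = k2_slip(nx=128, nz=64, dx=500.0, mean_slip=1.5)   # ~64 km x 32 km fault, grid of 500 m
print(f"mean slip {slip.mean():.2f} m, max slip {slip.max():.2f} m")
```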
The simulated ground motions were post-processed to calculate acceleration response spectra for 5% damping. The geometric mean of [START_REF] Boore | GMRotD and GMRotI: Orientation-Independent Measures of Ground Motion[END_REF] (GMRotI50) was calculated from the two horizontal components to obtain GMRotI50 response spectra (SA).
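The post-processing can be sketched as follows: the two horizontal components are rotated through non-redundant azimuths, 5%-damped pseudo-spectral accelerations are computed for each rotated pair, and the period-by-period median of their geometric means gives GMRotD50; GMRotI50 instead selects a single period-independent rotation angle. The oscillator integrator and the 5-degree angle increment below are illustrative choices.

```python
import numpy as np

def sdof_psa(ag, dt, period, zeta=0.05):
    """5%-damped pseudo-spectral acceleration via Newmark average acceleration (gamma=1/2, beta=1/4)."""
    w = 2.0 * np.pi / period
    c, k = 2.0 * zeta * w, w * w
    beta, gamma = 0.25, 0.5
    u, v, a = 0.0, 0.0, -ag[0]
    kk = k + gamma * c / (beta * dt) + 1.0 / (beta * dt * dt)
    ca = 1.0 / (beta * dt) + gamma * c / beta
    cb = 1.0 / (2.0 * beta) + dt * c * (gamma / (2.0 * beta) - 1.0)
    umax = 0.0
    for i in range(len(ag) - 1):
        dp = -(ag[i + 1] - ag[i]) + ca * v + cb * a
        du = dp / kk
        dv = gamma * du / (beta * dt) - gamma * v / beta + dt * a * (1.0 - gamma / (2.0 * beta))
        da = du / (beta * dt * dt) - v / (beta * dt) - a / (2.0 * beta)
        u, v, a = u + du, v + dv, a + da
        umax = max(umax, abs(u))
    return k * umax                        # PSA = omega^2 * max|u|

def gmrotd50(acc1, acc2, dt, periods, dtheta=5.0):
    """Median over rotation angles of the geometric-mean response spectrum of two horizontal components."""
    angles = np.radians(np.arange(0.0, 90.0, dtheta))
    sa = np.empty((len(angles), len(periods)))
    for i, th in enumerate(angles):
        r1 = acc1 * np.cos(th) + acc2 * np.sin(th)
        r2 = -acc1 * np.sin(th) + acc2 * np.cos(th)
        sa[i] = [np.sqrt(sdof_psa(r1, dt, T) * sdof_psa(r2, dt, T)) for T in periods]
    return np.median(sa, axis=0)
```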
Response spectral results are interpreted for periods longer than 1 s, consistent with the fourth-order finite-difference accuracy for the variable grid spacing, minimum shear-wave velocity of 600 m/s, and broad period influence of oscillator response [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF]. The four NGA ground motion prediction equations (GMPE) that implement Vs30 were used to calculate ground motion estimates at all four sites using the Z1.0 and Z2.5 below each site in the 3D synthetic velocity model, Vs30=600 m/s, the directivity corrections of [START_REF] Rodriguez-Marek | An empirical geotechnical seismic site response procedure[END_REF], [START_REF] Spudich | Directivity in NGA earthquake ground motions: Analysis using isochrone theory[END_REF], and Rowshandel (2010) equally weighted, and the three rupture hypocenters (forward, bilateral, and reverse directivity in Fig. 5.3) equally weighted. Site 4 was located away from basin-edge effects and in the shallow portion of the basin with the same Vs30=600 m/s as sites 1-3, consistent with a relatively linear site response, making direct comparison of linear 3D simulated motions with empirical GMPE feasible. Site 4 horizontal spectra were estimated as the log-mean average of the set of six earthquake rupture scenarios (two-segment and three-segment rupture, and forward, bilateral, and reverse directivity) used in the 3D ground motion simulations. To obtain robust estimates of mean synthetic spectra, we omitted the two largest and smallest amplitudes at each period to estimate log-mean spectra for comparison. The Site 4 horizontal responses are comparable in amplitude to NGA predicted response spectra for periods > 1 s (Fig. 5.7a). The reduced synthetic responses between 1-2 s in Fig. 5.7 are an artifact of finite-difference grid dispersion similar to that noted by [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF]. The site 4 3D simulated responses are generally slightly less than the empirical GMPE median estimates over the 1-8 second period range, except for a small amplification at 3 seconds of < 10% (Fig. 5.7b). This confirmed that the 3D ground motion simulations and empirical NGA GMPE predict comparable spectral hazard at site 4 and established site 4 as an appropriate reference point to compare to responses at sites 1-3 closer to the fault and within deeper portions of the basin. We use the empirical NGA GMPEs to estimate the amplitude effects of differential source-site distances on amplitudes relative to site 4 and changes in Z1.0 and Z2.5 between sites 1-3 and reference site 4. The three empirical directivity relations are used with equal weight to remove the differential directivity effects for two separate sets of rupture cases designed to determine if 3D basin amplification is dependent on rupture directivity. In the first case, we consider the two rupture scenarios rupturing away from the sites, to determine 3D basin amplification in the absence of forward rupture directivity. In the second case, we average all six rupture scenarios, four of which have strong forward rupture directivity, to see if any of the sites shows significantly different 3D basin amplification compared to the case of solely reverse rupture direction ground motions. Empirical NGA distance, directivity, and Z1.0-Z2.5 sites 1-3 amplifications relative to reference site 4 are the lowest curves at the bottom of Fig.
5.8 and represent the sum total of the effects of all NGA GMPE terms related to differential distance, directivity, and Z1.0 and Z2.5 velocity-depth. Although sites 1-3 are much closer to the fault than site 4, the relative changes in amplitudes are much smaller than the proportional differences in site-source distances as a result of saturation, the condition enforced in NGA that ground motion amplitudes cease to increase as distance to the fault approaches zero. The directivity amplitude reduction from [START_REF] Rowshandel | Directivity correction for the next generation attenuation (NGA) relations[END_REF] for reverse rupture accounts for the dip in longer-period NGA differential responses at periods of about 5 s in Fig. 5.8. The most striking aspect of the NGA transfer functions is that although three of the four GMPE include Z1.0 or Z2.5 "basin-depth" terms, there is no hint of an empirical resonant 3D basin response, just slight steady increases of "amplification" with increasing period. The non-3D-basin-like NGA differential amplification results are not surprising because the NGA basin-depth formulation pools ground motion observations from all scales of basins and non-basins in each Z1.0 and Z2.5 bin. Consequently, the NGA Vs30 and velocity-depth Z1.0 and Z2.5 basin terms do not capture any of the strongly period-dependent amplification associated with the site-specific basin of < 2 km total depth near the sites.
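The residual basin amplification described in this section reduces to a ratio of ratios, sketched below: trimmed log-mean spectra over the rupture scenarios at a basin site and at reference site 4, divided by the differential amplification predicted by the NGA GMPE terms (distance, directivity, Z1.0 and Z2.5). The arrays are assumed to be precomputed spectra on a common set of periods.

```python
import numpy as np

def trimmed_log_mean(sa_scenarios, n_trim=2):
    """Log-mean spectrum after dropping the n_trim largest and smallest amplitudes at each period.
    sa_scenarios: array of shape (n_scenarios, n_periods)."""
    logs = np.sort(np.log(sa_scenarios), axis=0)
    return np.exp(logs[n_trim:-n_trim].mean(axis=0))

def residual_basin_amplification(sa_site, sa_ref, gmpe_site, gmpe_ref):
    """3D-simulation amplification at a basin site relative to the reference site, after removing
    the differential effects predicted by the empirical GMPE (distance, directivity, Z1.0/Z2.5)."""
    return (trimmed_log_mean(sa_site) / trimmed_log_mean(sa_ref)) / (gmpe_site / gmpe_ref)
```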
The residual site-specific synthetic 3D amplifications at sites 1-3 relative to reference site 4 are essentially independent of rupture direction (Fig. 5.8). Site 1 closest to the fault shows the largest amplification for case 2 with 2/3 forward rupture directivity, but the difference at site 1 between 2/3 forward rupture directivity basin amplification and reverse rupture basin amplification is < 10%. For sites 2 and 3 located slightly further from the fault, differences in case one and case two directivity 3D basin amplifications deviate < 4% from their mean peak amplifications. The remarkable result is that even in this case of a strike-slip fault embedded below the center of a basin and rupturing within basins continually along the entire rupture length, to first order 3D basin amplification is independent of rupture directivity/rupture direction. These 3D synthetic calculations show that the three empirical directivity corrections applied with the NGA GMPE effectively accounted for first-order directivity in this rather severe case of strike-slip fault rupture within a basin. The Izmit Bay basins are quite similar in width, depth, and velocity characteristics to the San Fernando Basin, one of the basins included in the [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF] 3D synthetic calculations to represent basin amplification in younger shallower basins. Thus, it is interesting to compare the [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF] synthetic amplification predictions calculated across a spectrum of shallow and deeper basins using a deterministic 3D velocity model with these simulations using a site-specific weakly-randomized 3D velocity model. We calculate the 3D simulation and [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF] response ratios of sites 1-3 to site 4 using the Z1.5 values from the 3D simulation model in the [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF] Z1.5-amplification relationships (Fig. 5.9). Both 3D synthetic approaches predict comparable peak amplifications at comparable periods (Fig. 5.9), with the site-specific 3D model predicting a more rapid decrease with increasing period that reflects the details of the site-specific 3D model; [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF] have a wider period range of larger amplification because they pooled basin amplifications from a wider range of basin configurations than is representative of the site-specific 3D velocity structure. When soils are significantly less linear than clays with a plasticity index of 20, the fully nonlinear shear P-SV 2D investigations of [START_REF] O'connell | Influence of 2D Soil Nonlinearity on Basin and Site Responses[END_REF] suggest that combining the outputs of linear 3D simulations that omit the very-low-velocity basin with 1D nonlinear analyses to account for the very-low-velocity basins will produce amplifications within the basin comparable to full nonlinear 2D or 3D analyses. Linear 1D P-SV vertical analyses in the central portions of basins will typically provide appropriate vertical amplifications throughout most of the basin.
Thus, it appears that it may be feasible in most of these cases to omit the shallow soft low-velocity regions at the top of basins from 3D linear or nonlinear analyses and use the outputs from linear 3D analyses with simplified 1D nonlinear SH and P-SV nonlinear amplification calculations to estimate realistic horizontal and vertical peak velocities and accelerations in the upper low-velocity soft soils. These results illustrate that at present, the NGA GMPEs do not effectively estimate site-specific 3D basin amplification for the most extreme case of a strike-slip source and sites located within a closed basin. In such situations it is necessary to use site-specific 3D basin amplification calculations or compiled synthetic 3D generic basin amplification relations like [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF] to estimate realistic site-specific 3D basin amplification effects. However, the NGA GMPEs and associated empirical directivity relations are shown to effectively account for geometric spreading and directivity in the demanding application of source and site located within a closed basin and provide a robust means to extract residual 3D basin amplification relative to NGA GMPE predictions. This approach requires a suitable reference site in shallow portions of the basin that are not strongly influenced by basin effects or a site outside the basin.
In future GMPE development, the basin analyses of Choi et al. (2005), [START_REF] Day | Model for basin effects on long-period response spectra in southern California[END_REF], and this analysis suggest that separate consideration and analysis of data from within closed basins with faults beneath or adjacent to the basin is warranted to evaluate empirical evidence for systematic basin responses. Such analyses need to be done separately for ground motion observations outside of this specific basin configuration to discern the relative effects of velocity-depth versus basin-depth on parameters like Vs30, Z1.0, and Z1.5. We suggest that it is more appropriate and prudent to refer to Z1.0 and Z1.5 as velocity-depth terms, not basin terms, since they will fail to account for significant systematic period-dependent 3D basin amplification in the cases of sources and sites located within low-velocity basins.
Conclusion and recommendations
Geologic seismic source characterization is the fundamental first step in strong ground motion estimation. Many of the largest peak ground motion amplitudes observed over the past 30 years have occurred in regions where the source faults were either unknown or major source characteristics were not recognized prior to the occurrence of earthquakes on them. The continued development of geologic tools to discern and quantify fundamental characteristics of active faulting remains a key strong ground motion estimation research need.
As [START_REF] Jennings | Engineering seismology, In: "Earthquakes: Observation, Theory, and Interpretation[END_REF] noted, by the early 1980s efforts to develop empirical ground motion prediction equations were hampered not only by the insufficient recordings of ground motions to constrain the relationships between magnitude, distance, and site conditions, but insufficient physical understanding of how to effectively formulate the problem. Strong ground motion estimation requires both strong motion observations and understanding of the physics of earthquake occurrence, earthquake rupture, seismic radiation, and linear and nonlinear wave propagation. In sections 2-4 we provided an overview of the physics of strong ground motions and forensic tools to understand their genesis. The physics are complex, requiring understanding of processes operating on scales of mm to thousands of km, most of the physical system is inaccessible, and the strong motion observations are sparse. As O'Connell (1999a) and Hartzell et al. (2005) showed, surface ground motion observations alone are insufficient to constrain linear and nonlinear amplification and seismic source properties. The observational requirements to understand the earthquake system and how ground motions are generated are immense, and require concurrent recording of ground motions at the surface and at depth. These observations have only recently been undertaken at a comprehensive large scale. In Japan, the National Research Institute for Earth Science and Disaster Prevention (NIED) operates K-NET (Kyoshin Network) with 660 strong motion stations. Each station records triaxial accelerations both at the surface and at sufficient depth in rock to understand the physics of earthquake fault rupture and to directly observe linear and nonlinear seismic wave propagation in the shallow crust. These borehole-surface data have provided fundamental new constraints on peak ground motions (Aoi et al., 2008), direct observation of nonlinear wave propagation, and new constraints on ground motion variability (Rodriguez-Marek et al., 2011). It will be necessary to expand the deployment of K-NET scale networks to other tectonically active regions like the western United States, to make real long-term progress understanding and significantly improving our ability to predict strong ground shaking. The synergy between earthquake physics research and strong ground motion estimation is based on ground motion observations and geologic knowledge.
The need for new recordings of strong ground motions in new locations is clear, but there is immensely valuable information yet to be extracted from existing strong ground motion data. One of the single biggest impediments to understanding strong ground motions is the lack of site velocity measurements for most of the current strong ground motion database (Chiou et al., 2008;[START_REF] Kaklamanos | Model validations and comparisons of the next generation attenuation of ground motions (NGA-West) project[END_REF]. The last 10 years have seen an explosion in the development and successful application of rapid, inexpensive, and non-invasive methods to measure site shear-wave velocities over depths of 50-1000 m that can provide site amplification estimates accurate to on the order of 10-20% [START_REF] Stephenson | Blind shear-wave velocity comparison of ReMi and MASW results with boreholes to 200 m in Santa Clara Valley: Implications for earthquake ground-motion assessment[END_REF][START_REF] Boore | Comparisons of shear-wave slowness in the Santa Clara Valley, California, using blind interpretations of data from invasive and noninvasive methods[END_REF]. Using the large borehole-surface station network in Japan, Rodriguez-Marek et al. (2011) showed that the difference in the single-station standard deviation of surface and borehole data is consistently lower than the difference in ergodic standard deviations of surface and borehole data. This implies that the large difference in ergodic standard deviations can be attributed to a poor parameterization of site response. Vs30 does not constrain frequency-dependent site amplification because, literally, an infinite number of different site velocity-depth profiles can have the same Vs30. Even given geologic constraints on near-surface material variability, the scope of distinct velocity profiles and amplification characteristics that share a common Vs30 is vast. Ironically, the implementation of Vs30 in four of the NGA GMPE produced significant uncertainties in spectral shape as a function of magnitude as illustrated in Section 5.1. Vs30 also trades off with other velocity-depth factors (Section 5.2). We propose that one of the most valuable new strong ground motion datasets that can be obtained now is measurement of site shear-wave velocity profiles at the sites of existing strong ground motion recordings. These measurements would provide a sound quantitative basis to constrain frequency-dependent linear site amplification prior to regression and reduce uncertainties in ground motion estimations, particularly spectral shape as a function of site conditions. As Rodriguez-Marek et al. (2011) note, reduction of exaggerated ground motion variability results in more realistic ground motion estimates across widely differing sites in probabilistic analyses.
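Vs30 is simply 30 m divided by the shear-wave travel time through the top 30 m of a profile, which is why it cannot pin down frequency-dependent amplification: the two hypothetical profiles below share the same Vs30 but have very different structure.

```python
def vs30(thickness_m, vs_m_s):
    """Time-averaged shear-wave velocity over the top 30 m of a layered profile."""
    depth, travel_time = 0.0, 0.0
    for h, vs in zip(thickness_m, vs_m_s):
        use = min(h, 30.0 - depth)      # only the part of each layer above 30 m depth counts
        travel_time += use / vs
        depth += use
        if depth >= 30.0:
            break
    return 30.0 / travel_time

# Two hypothetical profiles with identical Vs30 (300 m/s) but very different structure:
print(vs30([30.0], [300.0]))                 # uniform 300 m/s
print(vs30([10.0, 20.0], [150.0, 600.0]))    # soft layer over much stiffer material
```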
The analyses of Choi et al. (2005) and section 5.2 suggest that accounting for the positions of ground motion recordings and earthquakes inside or outside of closed basins may provide a path forward to improve the ability of future empirical GMPE to accurately estimate responses within basins.
Fig. 3.1. Schematic diagram of finite-fault rupture ground motion calculations. Three discrete subfault elements in the summation are shown. Rings and arrows emanating from the hypocenter represent the time evolution of the rupture. The Green functions actually consist of eight components of ground motion and three components of site ground velocities. Large arrows denote fault slip orientation, which is shown as predominantly reverse slip with a small component of right-lateral strike slip. Hatched circles schematically represent regions of high stress drop.
Fig. 3.2. Schematics of line source orientations for strike-slip (a) and thrust faults (c) and (e) relative to ground motion sites (triangles). Black arrows show the orientation of the faults, red arrows show fault rupture directions, and blue arrows show shear-wave propagation directions (dashed lines) to the sites. Discrete velocity contributions for seven evenly-spaced positions along the fault are shown to the right of each rupture model (b, d, f) as triangles with amplitudes (heights) scaled by the radiation pattern. The output ground motions for each fault rupture are shown in (g). Isochrone velocity, c, is infinity in (d), is large, but finite, in (f), and decreases as the fault nears the ground motion site in (b).
Fig. 4.1. Hyperbolic model of the stress-strain space for a soil under cyclic loading. Initial loading curve has a hyperbolic form, and the loading and unloading phases of the hysteresis path are formed following Masing's criterion.
Figure 4.1 shows a typical stress-strain curve with a loading phase and consequent hysteretic behavior for the later loading process. There have been several attempts to describe mathematically the shape of this curve, and among those models the hyperbolic is one of the easiest to use because of its mathematical formulation as well as the number of parameters necessary to describe it [START_REF] Ishihara | Soil Behavior in Earthquake Geotechnics[END_REF][START_REF] Kramer | Geothechnical Earthquake Engineering[END_REF][START_REF] Beresnev | Nonlinear site response -a reality?[END_REF].
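A minimal sketch of the hyperbolic backbone and the first Masing unload-reload branch is given below; the maximum shear modulus and reference strain are illustrative values, and general cyclic loading requires the extended Masing rules with a stack of reversal points, which are omitted here.

```python
def backbone(gamma, g_max, gamma_ref):
    """Hyperbolic backbone: tau = G_max * gamma / (1 + |gamma| / gamma_ref)."""
    return g_max * gamma / (1.0 + abs(gamma) / gamma_ref)

def masing_branch(gamma, gamma_rev, tau_rev, g_max, gamma_ref):
    """Unload/reload branch from a reversal point (gamma_rev, tau_rev): the backbone is scaled by 2."""
    return tau_rev + 2.0 * backbone((gamma - gamma_rev) / 2.0, g_max, gamma_ref)

# Example: initial loading to 0.1% strain, then unloading back to zero strain (assumed parameters).
g_max, gamma_ref = 60.0e6, 1.0e-3              # Pa, dimensionless reference strain
tau_rev = backbone(1.0e-3, g_max, gamma_ref)   # stress at the reversal point
tau_unload = masing_branch(0.0, 1.0e-3, tau_rev, g_max, gamma_ref)
```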
Fig. 4.2. Borehole transfer functions computed at KiK-net station TTRH02 in Japan. The orange shaded area represents the 95% confidence limits of the transfer function using weak-motion events (PGA < 10 cm/s2). The solid line is the transfer function computed using the October 2000 Tottori mainshock data.
Fig. 4.3. Surface and borehole records of the 1995 Kobe earthquake at Port Island (left), and the 1993 Kushiro-Oki earthquake at Kushiro Port (right). The middle panel shows the shear wave velocity distribution at both sites.
Fig. 4.4. Schematic figure for the multishear mechanism. The plane strain is the combination of pure shear (vertical axis) and shear by compression (horizontal axis) (after Towhata and Ishihara, 1985).
Figure 4.7 shows the accelerograms (left) and the corresponding response spectra (right). The observed data are shown with no filtering, whereas the computed data are low-pass filtered at 10 Hz. The computed accelerogram shows the transition from high-frequency content between 0 and 15 sec to the intermittent spiky behavior after 15 sec. The response spectra show that the computed accelerogram accurately represents the long periods; yet, the short periods are still difficult to model accurately. This is the challenge of nonlinear simulations; the fit should be as broadband as possible.
Fig. 4.6. The top panel shows the computed strain time history at the middle of the borehole. Middle panels show the computed stress by trial-and-error using the multispring model in order to find the best dilatancy parameters. Bottom panels indicate the computed stress time history from acceleration records (after Bonilla et al., 2005).
The nonlinear properties were simplified to a depth-independent plasticity index (PI) of 20% for the NOAH2D calculations. Overall the 2D synthetic nonlinear horizontal motions provide a good fit to the acceleration response spectra (Figs. 4.8a and 4.8d) and acceleration seismograms (Figs. 4.8b and 4.8e). The 2D synthetic horizontal velocities match the observed velocity seismograms well, except in the early portion of the record where the translation ("fling") associated with permanent displacement dominates early portions of the observed seismograms (Figs. 4.8c and 4.8f). Synthetic vertical responses were calculated for each horizontal-vertical component pair, which is a crude approximation to the total 3D wavefield. The east component is nearly fault-normal and has the largest peak accelerations and velocities of the two horizontal components, so the east-vertical combination probably best corresponds to the dominant P-SV responses. Except for the obvious asymmetry in both the acceleration and velocity vertical seismograms, both the north-vertical and east-vertical 2D nonlinear synthetic vertical surface motions provide a good fit to the observed acceleration response spectra (Figures 4.9a and 4.9d), acceleration seismograms (Figures 4.9b and 4.9e), and velocity seismograms (Figures 4.9c and 4.9f). Since station IWTH25 is located in the deformed hanging wall of a reverse fault in rugged topography, it is clear that even these 2D nonlinear calculations are a crude approximation to the field conditions and complex incident wavefield associated with the finite fault rupture. However, the 2D nonlinear calculations summarized in Figs. 4.8 and 4.9 for station IWTH25 clearly show that the 2D P-SV nonlinear approach of [START_REF] Bonilla | 1D and 2D linear and non linear site response in the Grenoble area[END_REF] provides a sound basis to evaluate first-order nonlinear horizontal and vertical nonlinear responses, even for cases of extremely large incident accelerations and velocities.
Fig. 4.8. Observed and simulated surface IWTH25 horizontal response spectra (a,d), and acceleration (b,e) and velocity (c,f) time histories for the north (a-c) and east (d-f) components.
Fig. 4.9. Observed and simulated surface IWTH25 vertical response spectra (a,d), and acceleration (b,e) and velocity (c,f) time histories using the north-vertical (a-c) and east-vertical (d-f) components.
Fig. 5.1. Boore and Atkinson (2008) amplification functions (a) and original Vs30=300 m/s (black) and "Vs30=915 m/s remapped observed" response spectra (red) (b) for M=7.0, distance of 2 km, and PGA=0.45 g.
Fig. 5.2. Campbell and Bozorgnia (2008) amplification functions (a) and original Vs30=300 m/s (black) and "Vs30=915 m/s remapped observed" response spectra (red) (b) for M=7.0, distance of 2 km, and PGA=0.45 g.
Fig. 5.3. Boore and Atkinson (2008) (a) and Boore et al. (1997) (b) response spectra normalized by peak ground acceleration for Vs30=900 m/s. The geometric mean spectral accelerations from the three observed Vs30 > 900 m/s ground motions in Table 6 are compared to the mean [START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF] and Boore et al. (1997) estimates in (c).
Fig. 5.4. North Anatolia Fault segments and sites for 3D ground motion modeling.
Fig. 5.5. Shear-wave (Vs) cross sections through the 3D velocity model along profiles shown in map view in Fig. 5.4.
Fig. 5.6. Near-median bilateral three-segment rupture synthetic velocity seismograms for the five sites shown in Figs. 5.4 and 5.5a.
Fault-normal peak velocities decrease from sites 1-3 close to the fault and near the deeper portion of the basin (Fig. 5.5a) toward the shallow basin (site 4 in Figs. 5.4a and Fig. 5.6), and bedrock outside the basin (site 5 in Figs. 5.4a and Fig. 5.6).
Fig. 5.7. Site 4 3D synthetic and NGA GMPE mean response spectra (a) and the 3D/NGA GMPE ratio (b).
Fig. 5.8. Mean and reverse-rupture-only residual 3D basin amplifications for sites 1-3 relative to reference site 4 with NGA differential site amplitude correction functions.
Fig. 5.9. Mean 3D site-specific simulation 3D amplification and Day et al. (2008) 3D amplifications for sites 1-3 relative to reference site 4.
Table 1 lists factors influencing source amplitudes, and Table 2 (Seismic Source Phase Factors) lists factors influencing source phase, S_ij. Table 3 lists factors influencing propagation amplitudes, G_kij, and Table 4 (Seismic Wave Propagation Phase Factors) lists factors influencing propagation phase. Entries recoverable from Table 3 include: large-scale basin structure can substantially amplify and extend durations of strong ground motions; low-velocity materials near the surface amplify ground motions for frequencies > Vs/(4h), where h is the thickness of near-surface low-velocity materials; coupled interface modes can amplify and extend durations of ground motions; nonlinear soil responses (equivalent-linear or fully nonlinear forms of G_kij) can, depending on the dynamic soil properties and pore-pressure responses, decrease intermediate- and high-frequency amplitudes, amplify low- and high-frequency amplitudes, and extend or reduce the duration of large amplitudes; the fully nonlinear form can incorporate any time-dependent behavior such as pore-pressure responses; and frequency-independent attenuation.
and reproducible using 1D nonlinear site response modeling [START_REF] O'connell | Assessing Ground Shaking[END_REF]. However, the surface vertical peak acceleration exceeded 3.8g, exceeding the maximum expected amplification, based on the site velocity profile between the borehole and the surface accelerometers, and current 1D linear or nonlinear theories of soil behavior [START_REF] O'connell | Assessing Ground Shaking[END_REF]. In particular, application of the nonlinear approach of shear-modulus reduction advocated and tested by Bersenev et al. (2002) to predict nonlinear vertical responses failed to predict peak vertical accelerations in excess of 2g [START_REF] O'connell | Assessing Ground Shaking[END_REF]. Further, Aoi et al. (2008) observed largest upward accelerations at the surface that were 2.27 times larger than the largest downward accelerations, a result not reproduced using 1D approaches to approximate soil nonlinearity. The 2D nonlinear wave propagation implementation of Bonilla et al. is used here. Aoi et al. (2008) propose a conceptual model for this asymmetry. Their model uses a loose soil with nearly zero confining pressure near the surface. The soil particles separate under large downward acceleration, and in this quasi free-fall state, the downward accelerations at the surface only modestly exceed gravity. Conversely, large upward accelerations compact the soil and produce much larger upward accelerations. Aoi et al. (2008) report three cases of these anomalous large vertical acceleration amplifications in a search of 200,000 strong motion recordings. Hada et al. (2009) successfully reproduce the strong vertical asymmetric accelerations at IWTH25 with a simple 1D discrete element model, a model that is not a rigorous model of wave propagation. Yamada et al. (2009a) interpret the large upward spikes in acceleration as slapdown phases, which are also typically observed in near-field recordings of nuclear explosion tests. Our focus here is not the asymmetry of the IWTH25 vertical accelerations recorded at the surface, but showing that the simple total stress plane-strain model of soil nonlinearity in [START_REF] Bonilla | 1D and 2D linear and non linear site response in the Grenoble area[END_REF] reproduces both the first-order peak horizontal and vertical velocities and accelerations and acceleration response spectra at station IWTH25 using the borehole motions at 260 m depth as inputs.
Yamada et al. (2009b) conducted geophysical investigations at the site and found lower velocities in the top several meters than reported in Aoi et al. (2008). Trial-and-error modeling was used to obtain the final refined velocity model consistent with the results of Yamada et al. (2009b); a lowest-velocity first layer of about 2 m thickness and shear-wave velocity on the order of 200 m/s was required to produce the maximum horizontal spectral responses observed near 10 Hz.
Table 5. NGA Near-Fault Strike-Slip Ground Motions

Earthquake (name) | Date (day, mon, yr) | M | Station | Vs30 (m/s) | JB Fault Distance (km)
Parkfield | 28 Jun. 1966 | 6.1 | Cholame 2WA | 185 | 3.5
Imperial Valley | 15 Oct. 1979 | 6.5 | El Centro Array #7 | 212 | 3.1
Superstition Hills | 24 Nov. 1987 | 6.6 | Parachute | 349 | 1.0
Erzincan | 13 Mar. 1992 | 6.9 | 95 Erzincan | 275 | 2.0
Landers | 28 Jun. 1993 | 7.3 | Lucerne | 665 | 1.1
Kobe, Japan | 16 Jan. 1995 | 6.9 | KJMA | 312 | 0.6
Kocaeli, Turkey | 17 Aug. 1999 | 7.4 | Yarimca | 297 | 2.6
Kocaeli, Turkey | 17 Aug. 1999 | 7.4 | Sakarya | 297 | 3.1
Duzce, Turkey | 12 Nov. 1999 | 7.1 | Duzce | 276 | 8.1
Geometric Mean | | 6.9 | | 299 | 2.1
Acknowledgments
This paper is dedicated to the memory of William Joyner, who generously participated in discussions of directivity, wave propagation, site response, and nonlinear soil response, and who encouraged us to pursue many of the investigations presented here. David Boore kindly read the initial draft and provided suggestions that improved it. The authors benefited from helpful discussions with David Boore, Joe Andrews, Paul Spudich, Art Frankel, Dave Perkins, Chris Wood, Ralph Archuleta, David Oglesby, Steve Day, Bruce Bolt, Rob Graves, Roger Denlinger, Bill Foxall, Larry Hutchings, Ned Field, Hiro Kanamori, Dave Wald, and Walt Silva. Shawn Larson provided the e3d software, and Paul Spudich provided isochrone software. Supported by U.S. Bureau of Reclamation Dam Safety Research projects SPVGM and SEIGM and USGS award no. 08HQGR0068. The National Information Center
"1337288"
] | [
"221994"
] |
Elyès Jouini
email: [email protected]
Marie Chazal
email: [email protected]
Equilibrium Pricing Bounds on Option Prices
Keywords: Option bounds, equilibrium prices, conic duality, semi-infinite programming OR Subjects: Finance: Asset pricing. Programming: Infinite dimensional. Utility/preference: Applications Area of Review: Financial engineering
Introduction
A central question in finance consists of finding the price of an option, given information on the underlying asset. We investigate this problem in the case where the information is imperfect. More precisely, we are interested in determining the price of an option without making any distributional assumption on the price process of the underlying asset. It is well known that, in a complete financial market, by the no-arbitrage condition, the price of an option is given by the expectation of its discounted payoff under the risk-neutral probability, i.e. the unique probability measure that is equivalent to the historical one, and under which the discounted price processes of the primitive assets are martingales. The identification of this pricing probability requires the perfect knowledge of the primitive assets dynamics. Hence, in our restricted information context, one cannot use the exact pricing rule. But, one can always search for a bounding principle for the price of an option.
One question is how to compensate for part of the lack of information on the underlying asset dynamics. Assuming only weak knowledge of investors' preferences, namely risk aversion, and using equilibrium arguments, one obtains qualitative information on the risk-neutral probability density, on which our bounding rule is based. This has a great advantage from an empirical point of view since it requires no market data. Our rule also uses quantitative information on the underlying asset, but only on its price at maturity, as is done in the pioneering work of [START_REF] Lo Lo | Semi-parametric upper bounds for option prices and expected payoffs[END_REF].
Lo initiated a literature on semi-parametric bounds on option prices. He derived upper bounds on the prices of call and put options depending only on the mean and variance of the stock terminal value under the risk-neutral probability: he obtained a closed-form formula for the bound as a function of this mean and variance. This work has been extended to the case of conditions on the first and the nth moments, for a given n, by [START_REF] Grundy | Option prices and the underlying asset's return distribution[END_REF]. [START_REF] Bertsimas | On the relation between option and stock prices: an optimization approach[END_REF] generalized these results to the case of n ≥ 2 moment restrictions. When the payoff is a piecewise polynomial, the bounding problem can be rewritten, by considering a dual problem, as a semi-definite programming problem and thus can be solved from both theoretical and numerical points of view. [START_REF] Gotoh | Bounding option prices by semidefinite programming: a cutting plane algorithm[END_REF] proposed an efficient cutting plane algorithm which solves the semi-definite programming problem associated to the bound depending on the first n moments. According to their numerical results, the upper bound of Lo is significantly tightened by imposing more than 4 moment conditions. Since the mean of the terminal discounted stock price under the martingale measure is given by the current stock price, the first moment condition is fully justified. However, knowledge of the moments of order n ≥ 2 under the risk-neutral probability is somewhat illusory. We restrict ourselves to constraints on the first two risk-neutral moments and use some qualitative information on the risk-neutral measure in order to improve the bound of Lo. In Black-Scholes-like models the variance of the stock price is the same under the true and the risk-neutral probabilities. This then provides a justification for the knowledge of the second moment under the risk-neutral probability.
The restriction that we put on the martingale measure comes from equilibrium, and hence preferences, considerations: in an arbitrage-free and complete market with finite horizon T, the equilibrium can be supported by a representative agent, endowed with one unit of the market portfolio, who maximises the expected utility U of his terminal wealth X_T under his budget constraint. The first order condition implies that the Radon-Nikodym density of the martingale measure with respect to the true probability measure, dQ/dP, is positively proportional to U'(X_T). Under the usual assumption that agents are risk-averse, the utility function U is concave. It is therefore necessary that the density dQ/dP is a nonincreasing function of the terminal total wealth X_T. When the derivative asset under consideration is written on the total wealth or on some index seen as a proxy of the total wealth, one can restrict his attention to a pricing probability measure that has a nonincreasing Radon-Nikodym density with respect to the actual probability measure (remark that in the Black-Scholes model, the risk-neutral density satisfies this monotonicity condition if and only if the underlying drift is greater than the risk-free rate, which is a necessary and sufficient condition for the stock to be positively held). This ordering principle on the martingale probability measure with respect to the underlying asset price has been introduced by [START_REF] Perrakis | Option pricing bounds in discrete time[END_REF]. Together with [START_REF] Ritchken | On option pricing bounds[END_REF], they launched an important part of the literature on bounding option prices by taking into account preference properties such as risk-aversion. [START_REF] Bizid | Pricing of nonredundant asset in a complete market[END_REF] and [START_REF] Jouini | Continuous time equilibrium pricing of nonredundant assets[END_REF] obtained, in different settings, that this ordering principle is a necessary condition for option prices to be compatible with an equilibrium.
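The first-order condition argument can be written out in two lines. With a representative agent maximizing $E_P[U(X_T)]$ subject to the budget constraint $E_Q[X_T] \le w_0$, there is a Lagrange multiplier $\lambda > 0$ such that
$$U'(X_T) \;=\; \lambda\,\frac{dQ}{dP} \qquad\Longrightarrow\qquad \frac{dQ}{dP} \;=\; \frac{U'(X_T)}{\lambda}\,,$$
so concavity of U (risk aversion) makes dQ/dP a nonincreasing function of the terminal wealth X_T, which is the ordering restriction used throughout the paper.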
Following their terminology, we call "equilibrium pricing upper bound" on the price of an option maturing at the terminal date, a bound that is obtained under the restriction that the Radon-Nikodym density of the pricing probability measure is in reverse order with the underlying terminal value (see also [START_REF] Jouini | Convergence of the equilibrium prices in a family of financial models[END_REF] for the definitions of equilibrium prices, equilibrium pricing intervals in incomplete markets and their convergence properties).
As an example,
$$B_{P\&R} := \sup\big\{\, E_Q[\psi(S_T)] \;:\; Q \text{ such that } E_Q[S_T] = S_0 \text{ and } dQ/dP \text{ is nonincreasing in } S_T \,\big\}$$
is an equilibrium pricing upper bound on the price of an option with payoff ψ(S_T), when we only know the distribution of the terminal stock price S_T under the true probability measure P. We obtain that, for the call option,
$$B_{P\&R} = \frac{S_0}{E_P[S_T]}\, E_P[\psi(S_T)]\,.$$
This expression has already been obtained as a bound on the price of a call option, starting from different considerations, by [START_REF] Levy | Upper and lower bounds of put and call option value: stochastic dominance approach[END_REF], [START_REF] Perrakis | Option pricing bounds in discrete time[END_REF] and [START_REF] Ritchken | On option pricing bounds[END_REF]. [START_REF] Levy | Upper and lower bounds of put and call option value: stochastic dominance approach[END_REF] obtained it as the minimum price for the call above which there exists a portfolio, made up of the stock and the riskless asset, whose terminal value dominates, in the sense of second order stochastic dominance, the terminal value of some portfolio with the same initial wealth but made of call units. [START_REF] Perrakis | Option pricing bounds in discrete time[END_REF] derived it as the upper bound on a call option arbitrage price, for stock price distributions such that the normalized conditional expected utility for consumption is nonincreasing in the stock price. [START_REF] Ritchken | On option pricing bounds[END_REF] derived the same upper bound, with a finite number of states of the world, by restricting the state-contingent discount factors to be in reverse order with the aggregate wealth, which is itself assumed to be nondecreasing with the underlying security price. When interpreting the state j discount factor as the discounted marginal utility of wealth of the representative agent in state j, this restriction corresponds to the concavity of the representative utility function. The concavity assumption accounts for risk-aversion and means that agents have preferences that respect the second order stochastic dominance principle. By extension, in an expected-utility model, preferences are said to respect the nth order stochastic dominance rule if the utility function is such that its derivatives are successively nonnegative and nonpositive up to the nth order. [START_REF] Ritchken | Stochastic Dominance and Decreasing Absolute Risk Averse Option Pricing Bounds[END_REF], [START_REF] Basso | Option pricing bounds with standard risk aversion preferences[END_REF] proposed the application of such rules to put additional restrictions on the state discount factors and thus improve Ritchken's bounds.
These works are also to be related to more recent results, in a continuous state of the world framework, by e.g. [START_REF] Constantinides | Stochastic dominance bounds on derivatives prices in a multiperiod economy with proportional transaction costs[END_REF] who derived stochastic dominance upper (lower) bounds on the reservation write (purchase) price of call and put options in a multi-period economy and in the presence of transaction costs.
Our main contribution is to provide an equilibrium pricing upper bound for the price of a European call option, given a consensus on the actual distribution of the underlying terminal value and given its second risk-neutral moment. The novelty is in combining moment constraints and the monotonicity condition on the Radon-Nikodym density of the risk-neutral probability with respect to the true probability.
We adopt a conic duality approach to solve the constrained optimization problem corresponding to our bounding problem. Using a classical result in moment theory, given in [START_REF] Shapiro | On duality theory of conic linear problems[END_REF], we obtain a sufficient condition for strong duality and existence in the dual problem to hold, for derivative assets defined by general payoff functions. Explicit bounds are derived for the call option, by solving the dual problem, which is a linear programming problem with an infinite number of constraints. This also allows us to solve the primal problem. We observe on a numerical example that Lo's bound is at least as tightened by the qualitative restriction on the risk-neutral probability measure as by the quantitative information on the third and fourth risk-neutral moments of the underlying asset.
The paper is organized as follows. Section 1 is devoted to the equilibrium pricing upper bound formulation. The duality results are provided in Section 2 and the equilibrium pricing upper bound for the call option is derived in Section 3. We provide a numerical example in Section 4 and finally make concluding remarks. All proofs are given in a mathematical appendix.
The model formulation
We consider a financial market with a finite horizon T, with assets with prices defined on a given probability space (Ω, F, P). One of these assets is riskfree. We assume, without loss of generality and for the sake of simplicity, that the riskfree rate is 0. The market is assumed to be arbitrage-free, complete and at equilibrium. Hence there exists a probability measure $\bar Q$, equivalent to P, under which the asset price processes are martingales. Since the market is at equilibrium, the Radon-Nikodym density $d\bar Q/dP$ is a nonincreasing function of the terminal total wealth or, equivalently, of the terminal value of the market portfolio. We want to put an upper bound on the price of an option written on the market portfolio or on some index, which can be seen as a proxy of the market portfolio.
We denote by m the price of the underlying asset at time 0 and by S T its price at the terminal time. We assume that m ∈ R + . The price S T is assumed to be a nonnegative random value on (Ω, F, P) which is square integrable under P and Q. We suppose that its distribution under P has a density with respect to the Lebesgue measure, which is known.
This density is denoted by f and it is assumed to be positive on [0, ∞). We denote by
$$p_1 := \int_0^\infty x f(x)\,dx \qquad\text{and}\qquad p_2 := \int_0^\infty x^2 f(x)\,dx \qquad (1)$$
the first and second moments of S T under P.
We have $m = E_{\bar Q}[S_T]$ and we set $\delta := E_{\bar Q}[S_T^2]$.
We further assume that S T is an increasing function of the terminal value of the market portfolio. Hence, there exists a function ḡ which is positive and nonincreasing on (0, ∞)
such that $d\bar Q/dP = \bar g(S_T)$ and such that the functions $f\bar g$, $x f\bar g$ and $x^2 f\bar g$ are in $L^1(0,\infty)$ and satisfy
$$\int_0^\infty f(x)\bar g(x)\,dx = 1\,,\qquad \int_0^\infty x f(x)\bar g(x)\,dx = m \qquad\text{and}\qquad \int_0^\infty x^2 f(x)\bar g(x)\,dx = \delta\,. \qquad (2)$$
Given a payoff function ψ such that the functions ψf and ψf ḡ are in L 1 (0, ∞), we denote by X the vector space generated by the nonnegative measures µ on ([0, ∞), B([0, ∞))), such that the functions ψf , f , xf and x 2 f are µ-integrable. We assume that 0 is a Lebesgue point of both ψf and f , i.e.
$$\lim_{r\to 0}\frac{1}{r}\int_{(0,r)} |\psi(x)f(x) - \psi(0)f(0)|\,dx \;=\; \lim_{r\to 0}\frac{1}{r}\int_{(0,r)} |f(x) - f(0)|\,dx \;=\; 0\,.$$
The space X therefore contains the Dirac measure at 0, δ 0 . Let C be the convex cone of X generated by δ 0 and by the elements µ of X that have nonnegative and nonincreasing densities on (0, ∞).
We put the following upper bound on the equilibrium price of an option with payoff $\psi(S_T)$:
$$(P)\qquad \sup_{\mu\in C_{m,\delta}} \int_0^\infty \psi(x) f(x)\,d\mu(x)$$
where $C_{m,\delta}$ is the set of $\mu\in C$ which satisfy
$$\int_0^\infty f(x)\,d\mu(x) = 1\,,\qquad \int_0^\infty x f(x)\,d\mu(x) = m \qquad\text{and}\qquad \int_0^\infty x^2 f(x)\,d\mu(x) = \delta\,.$$
We denote by val(P ) the value of problem (P ).
Remark 1.1 Let G be the set of nonnegative, nonincreasing functions g on (0, ∞) such that ψf g, f g, xf g and x 2 f g are in L 1 (0, ∞). Any element µ of C can be decomposed as follows: dµ = αdδ 0 + gdx where α ∈ R + and g ∈ G.
Remark 1.2 One can always assume that ψ(0) = 0. Indeed, if $(\tilde P)$ is the problem associated to $\psi - \psi(0)$, then it is clear that $\mathrm{val}(P) = \mathrm{val}(\tilde P) + \psi(0)$. Therefore, in the sequel, we work under the assumption that ψ(0) = 0.
The dual problem formulation
In this section, we formulate the dual problem of (P ). Let X ′ be the vector space generated by ψf , f , xf and x 2 f . The spaces X and X ′ are paired by the following bilinear form
$$(h, \mu) \in X' \times X \;\longmapsto\; \int_0^\infty h(x)\,d\mu(x)\,.$$
Let us introduce the polar cone of C:
$$C^* = \Big\{\, h \in X' \;\Big|\; \int_0^\infty h(x)\,d\mu(x) \ge 0\,,\ \forall\, \mu \in C \,\Big\}\,.$$
In all the sequel, when considering $v \in \mathbb{R}^3$, we will denote $v := (v_0, v_1, v_2)$.
It is clear that for all $\lambda \in \mathbb{R}^3$ such that $\lambda_0 f + \lambda_1 x f + \lambda_2 x^2 f - \psi f \in C^*$, and for all measures $\mu \in C_{m,\delta}$, we have
$$\int_0^\infty \psi(x) f(x)\,d\mu(x) \;\le\; \lambda_0 + \lambda_1 m + \lambda_2 \delta\,.$$
It is therefore natural to consider the following problem:
$$(D)\qquad \inf_{\lambda\in\mathbb{R}^3}\; \lambda_0 + \lambda_1 m + \lambda_2 \delta \qquad\text{subject to}\qquad \lambda_0 f + \lambda_1 x f + \lambda_2 x^2 f - \psi f \in C^*\,.$$
We denote by val(D) the value of problem (D) and by Sol(D) the set of solutions to (D),
i.e.
$$\mathrm{Sol}(D) := \big\{\, \lambda \in \mathbb{R}^3 \;\big|\; \lambda_0 f + \lambda_1 x f + \lambda_2 x^2 f - \psi f \in C^* \ \text{and}\ \lambda_0 + \lambda_1 m + \lambda_2 \delta = \mathrm{val}(D) \,\big\}\,.$$
From Proposition 3.4 in [START_REF] Shapiro | On duality theory of conic linear problems[END_REF], we have some strong duality between the two problems under the condition given in the following proposition.
Let
$$F := \Big\{\, v \in \mathbb{R}^3 \;\Big|\; \exists\, \mu \in C :\ v = \Big(\int_0^\infty f(x)\,d\mu(x)\,,\ \int_0^\infty x f(x)\,d\mu(x)\,,\ \int_0^\infty x^2 f(x)\,d\mu(x)\Big) \Big\}\,.$$
Proposition 2.1 Assume that val(P) is finite. Then val(P) = val(D) and Sol(D) is non-empty and bounded if and only if $(1, m, \delta) \in \mathrm{Int}(F)$.
In Proposition 2.2 below, we determine F, we check that (1, m, δ) is in F and we provide some sufficient condition for (1, m, δ) to be in Int(F). For this purpose, we first introduce a function ξ, by means of which we express F.
We will prove (see Lemma A.3) that, for all $r \in (0, p_2/p_1]$, there exists a unique $\xi(r) \in (0, \infty]$ such that
$$\int_0^{\xi(r)} x^2 f(x)\,dx \;=\; r \int_0^{\xi(r)} x f(x)\,dx\,. \qquad (3)$$
Moreover, we have $\xi(r) < \infty \iff r < p_2/p_1$ and
$$\int_0^{x} u^2 f(u)\,du \;>\; r \int_0^{x} u f(u)\,du \;\iff\; x \in (\xi(r), \infty]\,.$$
We define
$$W := \Big\{\, v \in (0,\infty)^3 \;\Big|\; v_1/v_2 \ge p_1/p_2\,,\quad v_1/v_0 \le \frac{\int_0^{\xi(v_2/v_1)} x f(x)\,dx}{\int_0^{\xi(v_2/v_1)} f(x)\,dx} \,\Big\}\,. \qquad (4)$$
Proposition 2.2 (i) $F = (\mathbb{R}_+ \times \{0\} \times \{0\}) \cup W$. (ii) $(1, m, \delta) \in W$. (iii) If $m/\delta > p_1/p_2$ then $(1, m, \delta) \in \mathrm{Int}(W)$.
The proof is given in the mathematical appendix, Section A.
Proposition 2.3 Problem (D) can be rewritten as
$$\inf_{\lambda\in\mathbb{R}^3}\; \lambda_0 + \lambda_1 m + \lambda_2 \delta \qquad (5)$$
$$\text{subject to}\qquad \int_0^x \big[\lambda_0 + \lambda_1 u + \lambda_2 u^2 - \psi(u)\big] f(u)\,du \;\ge\; 0\,,\quad \text{for all } x \ge 0\,.$$
The proof is given in the mathematical appendix, Section A.
3 The upper bound determination for the call option
In this section, we calculate val(P ) in the case of a European call option with strike K > 0:
we put
$$\psi(x) = (x - K)^+\,,\quad \text{for all } x \ge 0\,,$$
where we use the notation $(x - K)^+ := \max\{x - K, 0\}$.
Remark 3.1 Since $0 \le \psi(x) \le x$ for all $x \ge 0$, and since $\int_0^\infty x f(x)\,d\mu(x) = m$ for every measure $\mu \in C_{m,\delta}$, we have $\mathrm{val}(P) \le m$.
The value of problem (P) is therefore finite. In this framework, Proposition 2.1 means that the statement "val(P) = val(D) and Sol(D) is non-empty and bounded" is equivalent to the condition $(1, m, \delta) \in \mathrm{Int}(F)$.
We start by considering the case where $m/\delta = p_1/p_2$.
Theorem 3.1 If $m/\delta = p_1/p_2$ then the set Sol(D) is non-empty, we have
$$\mathrm{val}(P) = \mathrm{val}(D) = \frac{m}{p_1}\int_0^\infty \psi(u) f(u)\,du$$
and the measure µ defined by $d\mu := \big(1 - m/p_1\big)\,d\delta_0 + (m/p_1)\,\mathbf{1}_{(0,\infty)}\,dx$ is in Sol(P).
The proof is given in the mathematical appendix, Section B.
From Remark 2.1, we see that it remains to consider the case where m/δ > p 1 /p 2 . In that case, the value of (D) depends on several parameters that we now present. When
$m/\delta > p_1/p_2$, we can consider
$$\bar x := \xi(\delta/m) \qquad (6)$$
where ξ is defined by (3): $\bar x$ is the unique positive real number satisfying
$$\int_0^{\bar x} x^2 f(x)\,dx \;=\; (\delta/m)\int_0^{\bar x} x f(x)\,dx\,.$$
We introduce another parameter x m which also depends on the risk-neutral moments m and δ and on the true density f . We will prove (see Lemma B.1) that when m/δ > p 1 /p 2 there exists a unique x m ∈ (0, ∞) such that
$$\int_0^{x_m} x f(x)\,dx \;=\; m \int_0^{x_m} f(x)\,dx\,.$$
Moreover, we have
$$\int_0^{x} u f(u)\,du \;>\; m \int_0^{x} f(u)\,du \;\iff\; x \in (x_m, \infty]\,,$$
and $\bar x > x_m$. We are now in a position to provide the result for the case where $m/\delta > p_1/p_2$. Since, from Remark 3.1, the value of (P) is finite, we know by Remark 2.1 that (P) and (D) are in strong duality and that existence holds for the dual problem. For the sake of simplicity, we use the following notation:
$$I(x) := \int_0^x f(u)\,du\,,\qquad M(x) := \int_0^x u f(u)\,du\,,\qquad \Delta(x) := \int_0^x u^2 f(u)\,du\,,\qquad x \ge 0\,. \qquad (7)$$
Let us also write $d(x) := x^2 \int_0^x \psi(u) f(u)\,du - \psi(x)\int_0^x u^2 f(u)\,du$.

Theorem 3.2 Let us assume that $m/\delta > p_1/p_2$.
(i) If $d(\bar x) > 0$, or if $d(\bar x) = 0$ and $\bar x > K$, then
$$\mathrm{val}(P) = \mathrm{val}(D) = \frac{m}{\int_0^{\bar x} u f(u)\,du}\int_0^{\bar x} \psi(u) f(u)\,du$$
and the measure µ defined by
$$d\mu := \Big(1 - \frac{m}{\int_0^{\bar x} u f(u)\,du}\Big)\,d\delta_0 + \frac{m}{\int_0^{\bar x} u f(u)\,du}\,\mathbf{1}_{(0,\bar x)}\,dx$$
is in Sol(P).
(ii) If $d(\bar x) < 0$, or if $d(\bar x) = 0$ and $\bar x \le K$, then there exists $(x_0, x_1) \in \mathbb{R}_+ \times \mathbb{R}_+$ such that
$$x_0 \in (0, \min\{x_m, K\}) \quad\text{and}\quad x_1 \in (\max\{\bar x, K\}, \infty)\,, \qquad (8)$$
$$M(x_0)\Delta(x_1) - M(x_1)\Delta(x_0) = \delta\,[I(x_1)M(x_0) - I(x_0)M(x_1)] + m\,[I(x_0)\Delta(x_1) - I(x_1)\Delta(x_0)]\,, \qquad (9)$$
$$(x_0^2 - \delta)\,[I(x_1)M(x_0) - I(x_0)M(x_1)] + (x_0 - m)\,[I(x_0)\Delta(x_1) - I(x_1)\Delta(x_0)] = \int_0^{x_1}\psi(u)f(u)\,du\;\frac{x_1 - x_0}{\psi(x_1)}\,\big[\Delta(x_0) - (x_0 + x_1)M(x_0) + x_0 x_1 I(x_0)\big]\,. \qquad (10)$$
We have
$$\mathrm{val}(P) = \mathrm{val}(D) = \frac{M(x_0) - m I(x_0)}{M(x_0)I(x_1) - I(x_0)M(x_1)}\int_0^{x_1}\psi(u)f(u)\,du$$
and the measure µ defined by
$$d\mu := \Big[\frac{M(x_1) - m I(x_1)}{I(x_0)M(x_1) - I(x_1)M(x_0)}\,\mathbf{1}_{(0,x_0)} + \frac{M(x_0) - m I(x_0)}{M(x_0)I(x_1) - I(x_0)M(x_1)}\,\mathbf{1}_{(0,x_1)}\Big]\,dx$$
is in Sol(P), for any couple $(x_0, x_1) \in \mathbb{R}_+ \times \mathbb{R}_+$ which satisfies conditions (8), (9) and (10).
The proof is given in the mathematical appendix, Section B.
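The characterization in Theorem 3.2 lends itself to a numerical sketch, not taken from the paper: given a density f with $m/\delta > p_1/p_2$, one computes $\bar x$ and $x_m$ by root finding, checks the sign of $d(\bar x)$, and in case (ii) solves the two equations (9)-(10) for $(x_0, x_1)$. The lognormal density, bracketing interval, initial guesses, and the absence of safeguards enforcing (8) are simplifying assumptions.

```python
import numpy as np
from scipy import integrate, optimize

def call_bound(f, m, delta, K, x_max=5000.0):
    """Evaluate the bound of Theorem 3.2 numerically (illustrative sketch, assumes m/delta > p1/p2)."""
    I = lambda x: integrate.quad(f, 0, x)[0]
    M = lambda x: integrate.quad(lambda u: u * f(u), 0, x)[0]
    D = lambda x: integrate.quad(lambda u: u ** 2 * f(u), 0, x)[0]
    psi = lambda x: max(x - K, 0.0)
    Psi = lambda x: integrate.quad(lambda u: psi(u) * f(u), 0, x)[0]
    # x_bar = xi(delta/m): root of Delta(x) - (delta/m) M(x) = 0, cf. (3) and (6)
    x_bar = optimize.brentq(lambda x: D(x) - (delta / m) * M(x), 1e-6, x_max)
    # x_m: root of M(x) - m I(x) = 0 (requires m < p1)
    x_m = optimize.brentq(lambda x: M(x) - m * I(x), 1e-6, x_max)
    d = x_bar ** 2 * Psi(x_bar) - psi(x_bar) * D(x_bar)
    if d > 0 or (d == 0 and x_bar > K):                       # case (i)
        return m / M(x_bar) * Psi(x_bar)
    def eqs(z):                                               # case (ii): equations (9) and (10)
        x0, x1 = z
        e9 = (M(x0) * D(x1) - M(x1) * D(x0)
              - delta * (I(x1) * M(x0) - I(x0) * M(x1))
              - m * (I(x0) * D(x1) - I(x1) * D(x0)))
        e10 = ((x0 ** 2 - delta) * (I(x1) * M(x0) - I(x0) * M(x1))
               + (x0 - m) * (I(x0) * D(x1) - I(x1) * D(x0))
               - Psi(x1) * (x1 - x0) / psi(x1)
                 * (D(x0) - (x0 + x1) * M(x0) + x0 * x1 * I(x0)))
        return [e9, e10]
    x0, x1 = optimize.fsolve(eqs, [0.5 * min(x_m, K), 2.0 * max(x_bar, K)])
    return (M(x0) - m * I(x0)) / (M(x0) * I(x1) - I(x0) * M(x1)) * Psi(x1)

# Illustrative call with an assumed lognormal "true" density (log-mean 6, log-sd 0.2):
f = lambda x: np.exp(-(np.log(x) - 6.0) ** 2 / 0.08) / (x * np.sqrt(0.08 * np.pi))
bound = call_bound(f, m=400.0, delta=168000.0, K=410.0)
```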
Notice that, in light of the proof of Theorem 3.2, it can be seen that the alternative between "$d(\bar x) > 0$, or $d(\bar x) = 0$ and $\bar x > K$" and "$d(\bar x) < 0$, or $d(\bar x) = 0$ and $\bar x \le K$" corresponds to an alternative concerning the properties of the solutions to problem (D), i.e., according to Proposition 2.3, concerning the solutions to problem (5). Under the first condition, all solutions to problem (5) are such that exactly one constraint is binding. Under the second condition, all solutions are such that exactly two constraints are binding. It can be seen that the first condition amounts to saying that $\bar x$ is smaller than the smallest positive point for which there exists λ satisfying the constraints of problem (5) and such that exactly one of these constraints is binding at this point.
To conclude this section, we recall the bound on the call option price derived by [START_REF] Levy | Upper and lower bounds of put and call option value: stochastic dominance approach[END_REF], [START_REF] Perrakis | Option pricing bounds in discrete time[END_REF] and Ritchken (1984). In our framework it is given by
$$B_{P\&R} := \sup_{\mu\in C_m} \int_0^\infty \psi(x) f(x)\,d\mu(x)\,,$$
where $C_m$ is the set of measures µ in C satisfying
$$\int_0^\infty f(x)\,d\mu(x) = 1 \qquad\text{and}\qquad \int_0^\infty x f(x)\,d\mu(x) = m\,.$$
Proposition 3.1 We have
$$B_{P\&R} = \frac{m}{p_1}\int_0^\infty \psi(x) f(x)\,dx\,.$$
The proof is given in the mathematical appendix, Section B.
Numerical Example
In this section we observe on a numerical example how the bound of Lo on the call option, i.e.
$$B_{Lo} := \sup_{\{Q \,:\, E_Q[S_T] = m,\ E_Q[S_T^2] = \delta\}} E_Q[(S_T - K)^+]\,,$$
can be improved by imposing the equilibrium pricing rule, i.e. by considering probability measures that have Radon-Nikodym densities with respect to the true one which decrease with the stock terminal value.
Following an example of [START_REF] Gotoh | Bounding option prices by semidefinite programming: a cutting plane algorithm[END_REF], we can report the bound that they obtained by imposing up to fourth-moment conditions:
$$B_4 := \sup_{\{Q \,:\, E_Q[S_T] = m,\ E_Q[S_T^2] = \delta,\ E_Q[S_T^3] = m_3,\ E_Q[S_T^4] = m_4\}} E_Q[(S_T - K)^+]$$
and thus compare the improvement of Lo's bound entailed by the additional moments conditions to the one entailed by the qualitative restriction on the pricing probability measure.
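Both moment bounds can be approximated by discretizing the terminal value on a grid and solving a finite linear program, as in the sketch below; the grid range, grid size, and the moment values used in the example call are illustrative assumptions (the exact B_Lo and B_4 of the cited papers come from closed forms or semi-definite programming, not from this discretization).

```python
import numpy as np
from scipy.optimize import linprog

def moment_bound(K, moments, x_max, n=2000):
    """Upper bound on E_Q[(S_T-K)^+] over distributions supported on a grid in [0, x_max]
    matching E_Q[S_T^j] = moments[j-1] for j = 1..len(moments)."""
    x = np.linspace(0.0, x_max, n)
    payoff = np.maximum(x - K, 0.0)
    # maximize payoff @ w  <=>  minimize -payoff @ w, subject to moment and mass constraints
    A_eq = np.vstack([np.ones(n)] + [x ** j for j in range(1, len(moments) + 1)])
    b_eq = np.concatenate([[1.0], moments])
    res = linprog(-payoff, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return -res.fun if res.success else np.nan

# Example with assumed values: m = 400 and a standard deviation of 40 under Q.
B_Lo_approx = moment_bound(K=420.0, moments=[400.0, 400.0 ** 2 + 40.0 ** 2], x_max=4000.0)
# B_4 would additionally pass the (assumed known) third and fourth risk-neutral moments.
```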
The example uses the framework of the Black-Scholes model. The market contains one riskfree asset with rate of return r ≥ 0 and one stock following a log-normal diffusion with drift µ ∈ R and volatility σ ∈ R*. The discounted stock price process $(S_t)_{t\in[0,T]}$ satisfies, for all $t \in [0,T]$, $S_t = S_0 \exp\{(\mu - r - \sigma^2/2)t + \sigma W_t\}$, and there exists a probability measure Q equivalent to the true one under which $(S_t)_{t\in[0,T]}$ is a martingale. Its Radon-Nikodym density with respect to the historical probability measure is given by
$$L_T = \exp\Big\{-\frac{(\mu-r)^2}{2\sigma^2}\,T - \frac{\mu-r}{\sigma}\,W_T\Big\}\,.$$
It is easy to see that
$$L_T = \Big(\frac{S_T}{S_0}\Big)^{-\frac{\mu-r}{\sigma^2}} \exp\Big\{\Big(-\frac{\mu-r}{2} + \frac{(\mu-r)^2}{2\sigma^2}\Big)T\Big\}\,.$$
The density $L_T$ is therefore a nonincreasing function of the stock terminal value if and only if the drift µ is greater than the riskfree rate r.
To follow the example presented in Gotoh and Konno, we set the horizon time T to 24/52, the riskfree rate to 6% and the drift µ to 16%. The stock price at time 0 is fixed at 400, i.e. $m = S_0 = 400$. We provide the bounds $B_{Lo}$, $B_4$, $B_{P\&R}$ and val(P), as well as the Black-Scholes price BS, for a call option with strike K, for several values of the strike K. We also let the volatility σ vary, and hence δ, i.e. the corresponding second moment of $S_T$ under the risk-neutral measure. We also provide the relative deviation of each bound B from the Black-Scholes price: e = (B - BS)/BS.
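For one volatility level, the quantities entering the comparison can be evaluated by numerical integration against the lognormal density of the discounted terminal price, as sketched below; the strike and volatility chosen are illustrative, and BS here denotes $E_Q[(S_T-K)^+]$ computed in the same discounted-price convention as the rest of the section.

```python
import numpy as np
from scipy import integrate

S0, r, mu, T, K, sigma = 400.0, 0.06, 0.16, 24.0 / 52.0, 420.0, 0.2   # K and sigma: assumed values

def lognormal_density(x, drift):
    """Density of the discounted price S_T = S0*exp((drift - sigma^2/2)T + sigma*W_T)."""
    s = sigma * np.sqrt(T)
    m_log = np.log(S0) + (drift - 0.5 * sigma ** 2) * T
    return np.exp(-(np.log(x) - m_log) ** 2 / (2 * s * s)) / (x * s * np.sqrt(2 * np.pi))

f_P = lambda x: lognormal_density(x, mu - r)   # true density of the discounted price
f_Q = lambda x: lognormal_density(x, 0.0)      # risk-neutral density (martingale: zero drift)

p1 = integrate.quad(lambda x: x * f_P(x), 0, np.inf)[0]                 # = S0*exp((mu-r)T)
payoff_P = integrate.quad(lambda x: max(x - K, 0) * f_P(x), 0, np.inf)[0]
B_PR = S0 / p1 * payoff_P                                               # bound of Levy/Perrakis/Ritchken
BS = integrate.quad(lambda x: max(x - K, 0) * f_Q(x), 0, np.inf)[0]     # E_Q[(S_T - K)^+]
```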
Here again, the equilibrium pricing rule tightens the bound on the call option price (which is given by the current stock price) more significantly than the restriction on the second risk-neutral moment does.
Here should be inserted Table 1.
Concluding remarks
We observe on the numerical example that adding the equilibrium pricing constraints provides, in general, a better bound than the one obtained by adding information on the risk-neutral moments. This encourages us to carry on this work for options with more general payoffs. As it is done by [START_REF] Basso | Option pricing bounds with standard risk aversion preferences[END_REF] in the case of a finite probability space and without restriction on moments, it would also be of interest to take into account stronger restrictions on preferences such as decreasing absolute risk-aversion, decreasing absolute prudence and so on, with or without putting restrictions on moments and in the context of a general probability space.
Also notice that the equilibrium pricing rule can be valid for a European option expiring at a date t earlier than the terminal time T. Typically, consider an arbitrage-free and complete financial market, with one risky asset S, which distributes some dividend D. The price at time 0 of a European option with maturity t and payoff $\psi(S_t)$ is given by
$$E_Q[\psi(S_t)] = E_P[\psi(S_t)\,M_t]\,,$$
where
$$M_t := E_P\Big[\frac{dQ}{dP}\,\Big|\,\mathcal{F}_t\Big]$$
is the density of the martingale probability measure with respect to P, conditionally on the information at time t. Since the economy is supported by a representative agent, endowed with one unit of the market portfolio, which maximizes some utility of its consumption c and terminal wealth, a necessary condition for equilibrium is that the agent's optimal consumption rate $c_t$ is a nonincreasing function of the state price density $M_t$ (see e.g. [START_REF] Karatzas | Optimization problems in theory of continuous trading[END_REF]). Since at equilibrium the consumption process $c_t$ must equal the cumulative dividend process $D_t$, if we assume that the stock price is an increasing function of this dividend, we obtain that the stock price is a nonincreasing function of the state price density.
It is possible to derive option price bounds given other option prices. For example, D.
Bertsimas and I. [START_REF] Popescu | A semidefinite programming approach to optimal moment bounds for distributions with convex properties[END_REF] derived closed-form bounds on the price of a European call option, given prices of other options with the same exercise date but different strikes on the same stock. It seems reasonable to assume that, for liquidity reasons, the prices of 1 to 3 near-the-money call options, e.g. with strikes between 70% and 130% of the current stock price, are known. Given this information, one can seek bounds on the equilibrium prices of the call options for other strike values. This makes it possible to put bounds on the smile, which constitutes a way to separate unrealistic from realistic stochastic volatility models that are used in practice.
Finally, we have set our bounding option prices principle in the case of complete markets in order to use properly the equilibrium condition that provides the decreasing feature of the Radon-Nikodym density of the risk-neutral probability measure with respect to the terminal value of the market portfolio. But, under some circumstances, one can argue that in an incomplete market, this latter necessary condition for the pricing probabilities to be compatible with an equilibrium still holds. Of course, in the incomplete market case, the equivalent martingale measure is not unique and there is no reason for the second moment of the underlying asset to be the same under all martingale probability measures. However, one can assume that an upper bound on this second moment under any martingale measure is known. Our bounding principle could then be extended to the incomplete market case, by establishing, for example, that our bound increases with the second moment constraint. This should be the case for the call option and more generally, for derivatives with convex payoffs.
Mathematical Appendix
A Proofs of the results stated in Section 2
In order to shorten and make clear the proofs of Propositions 2.2 and 2.3, we state the five following lemmas. But the reader can directly read the proofs of Propositions 2.2 and 2.3 in Sections A.2 and A.3.
A.1 Technical Lemmas
The following lemma makes it possible, in particular, to obtain the simple formulation of problem (D) given in Proposition 2.3.
Lemma A.1 Let h ∈ L 1 (0, ∞).
The following statements are equivalent.
(i) For any function g which is nonnegative and nonincreasing on (0, ∞) and such that hg ∈ L¹(0, ∞), we have ∫_0^x h(u)g(u)du ≥ 0, for all x ≥ 0.
(ii) ∫_0^x h(u)du ≥ 0, for all x ≥ 0.
Proof Let h ∈ L 1 (0, ∞). It is clear that (i) implies (ii). Conversely, let us assume that x 0 h(u)du ≥ 0 , for all x ≥ 0 .
(A.1)
Let g be a function satisfying the requirements of (i) and let x ∈ (0, ∞). For any n ∈ N * , consider {x 0 , • • • , x n } the regular subdivision of [0, x], with x 0 = 0 and
x_n = x. Let us set, for all u ∈ [0, x], g_n(u) ≜ Σ_{i=1}^n g(x_i) 1_{(x_{i-1}, x_i]}(u).
It is easy to see that, if g is continuous at some u ∈ (0, x) then the sequence (g n (u)) n converges towards g(u). Since g is nonincreasing, it has a countable number of discontinuities and hence the sequence (g n ) n∈N * converges to g a.e. on [0, x]. One can further check that 0 ≤ g n ≤ g on [0, x], for all n. Consequently, the sequence (hg n ) n∈N * converges to hg a.e. on [0, x] and satisfies:
|hg n | ≤ |hg| on [0, x], for all n. Since hg ∈ L 1 (0, ∞), it
follows from the dominated convergence theorem that
x 0 h(u)g(u)du = lim n→∞ x 0 h(u)g n (u)du . (A.2)
By rewriting g_n in the following form g_n = g(x_n)1_{(0,x_n]} + Σ_{i=1}^n (g(x_{i-1}) - g(x_i))1_{(0,x_{i-1}]} we obtain:
∫_0^x h(u)g_n(u)du = g(x_n) ∫_0^x h(u)du + Σ_{i=1}^n (g(x_{i-1}) - g(x_i)) ∫_0^{x_{i-1}} h(u)du. Since
g is nonnegative and nonincreasing on (0, ∞), it then follows from (A.1) that, for all n,
∫_0^x h(u)g_n(u)du ≥ 0. Finally, by (A.2), we have
∫_0^x h(u)g(u)du ≥ 0, for all x ≥ 0. This completes the proof of Lemma A.1.
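As a quick numerical illustration of Lemma A.1 (not in the original text), the two statements can be checked on a discretised example; the particular choices h(u) = (1 - u)e^{-u}, whose partial integrals equal xe^{-x} ≥ 0, and g(u) = e^{-u} are assumptions made only for this test.

```python
# Numerical sanity check of Lemma A.1 on one assumed example.
import numpy as np

x = np.linspace(0.0, 20.0, 100001)
dx = x[1] - x[0]
h = (1.0 - x) * np.exp(-x)      # its primitive is x e^{-x}, so statement (ii) holds
g = np.exp(-x)                  # nonnegative and nonincreasing

def cum(y):
    """Trapezoidal cumulative integral of y on the grid, i.e. x -> int_0^x y(u) du."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dx)))

print(cum(h).min())       # statement (ii): stays >= 0 (up to discretisation error)
print(cum(h * g).min())   # statement (i): also stays >= 0, as the lemma predicts
```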
The following properties of the functions M/I, ∆/I and ∆/M, where I, M and ∆ are defined in (7), will be used in the sequel. They are easy to obtain by differentiation.
Lemma A.2 The functions x ↦ M(x)/I(x), x ↦ ∆(x)/I(x) and x ↦ ∆(x)/M(x)
are differentiable and increasing on (0, ∞). Now, we prove the existence of the function ξ presented in (3).
Lemma A.3 For all r ∈ (0, p₂/p₁], there exists a unique ξ(r) ∈ (0, ∞] such that
∫_0^{ξ(r)} u²f(u)du = r ∫_0^{ξ(r)} uf(u)du. Moreover ∫_0^x u²f(u)du > r ∫_0^x uf(u)du ⇐⇒ x ∈ (ξ(r), ∞],
and the function r ↦ ξ(r) is continuous on (0, p₂/p₁).
Proof Let r ∈ (0, p₂/p₁] and let φ be the function defined on R₊ by φ(x) = ∫_0^x (u² - ru)f(u)du. Since f is positive, φ is decreasing on (0, r) and increasing on (r, ∞). As φ is continuous and satisfies φ(0) = 0, lim_{x→∞} φ(x) = p₂ - rp₁ > 0 when r < p₂/p₁ or lim_{x→∞} φ(x) = 0 when r = p₂/p₁, it follows that there exists a unique ξ ∈ (0, ∞] such that φ < 0 on (0, ξ), φ(ξ) = 0 and φ > 0 on (ξ, ∞]. We clearly have ξ(r) < ∞ ⇐⇒ r < p₂/p₁.
Noticing that r = ∆(ξ(r))/M (ξ(r)) for all r ∈ (0, p 2 /p 1 ) and that, by Lemma A.2, the function ∆/M is continuous and increasing on (0, ∞), we obtain, from the inverse function theorem, that ξ is continuous on (0, p 2 /p 1 ). This ends the proof of Lemma A.3.
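The threshold ξ(r) has no closed form in general, but it is easy to compute numerically. The following sketch (an illustration, not part of the paper) takes f to be the standard exponential density, an assumption made only for the example, and obtains ξ(r) as the root of φ(x) = ∫_0^x (u² - ru)f(u)du bracketed between r and a large upper bound.

```python
# Illustrative computation of xi(r) from Lemma A.3 for an assumed density f.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def f(u):
    return np.exp(-u)                       # assumed density on (0, infinity)

p1 = quad(lambda u: u * f(u), 0, np.inf)[0]      # first moment (= 1 here)
p2 = quad(lambda u: u**2 * f(u), 0, np.inf)[0]   # second moment (= 2 here)

def phi(x, r):
    return quad(lambda u: (u**2 - r * u) * f(u), 0, x)[0]

def xi(r, x_max=200.0):
    # phi(., r) is negative at x = r and tends to p2 - r*p1 > 0 when r < p2/p1,
    # so the root is bracketed on (r, x_max).
    return brentq(phi, r, x_max, args=(r,))

for r in (1.2, 1.5, 1.8):                        # assumed values in (0, p2/p1)
    print(r, xi(r))
```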
The following technical result is used in the proof of Proposition 2.2.
Lemma A.4 For every y > 0, there exist a > 0, b ∈ R and c > 0 such that ∫_0^x (a + bu + cu²)f(u)du ≥ 0 for all x ≥ 0 and ∫_0^y (a + bu + cu²)f(u)du = 0.
Proof Let y > 0. Let us fix a > 0 and choose (b, c) as the solution of the linear system formed by the constraint ∫_0^y (a + bu + cu²)f(u)du = 0 and the condition a + by + cy² = 0; this system has a unique solution because y² ∫_0^y uf(u)du - y ∫_0^y u²f(u)du > 0, and this solution satisfies c > 0 because a > 0 and y ∫_0^y u²f(u)du - y² ∫_0^y uf(u)du < 0. Let us denote by P the function defined on R₊ by P(x) ≜ ∫_0^x (a + bu + cu²)f(u)du. By construction, P(y) = 0. Let us check that P(x) ≥ 0 for all x ≥ 0. Since P(0) = P(y) = 0 and f > 0, there exists z ∈ (0, y) such that a + bz + cz² = 0. Since a > 0 and c > 0, we have a + bx + cx² > 0 on [0, z) ∪ (y, ∞) and a + bx + cx² < 0 on (z, y). It follows that P is increasing on [0, z] and on [y, ∞) and decreasing on (z, y). Since it satisfies P(0) = P(y) = 0, this proves that P(x) ≥ 0, for all x ≥ 0. This ends the proof of Lemma A.4.
A.2 Proof of Proposition 2.2
Proof of Proposition 2.2 (i) We prove that F = (R + × {0} × {0}) ∪ W .
Step I. Let us prove that (R₊ × {0} × {0}) ∪ W ⊂ F. Let v ∈ (R₊ × {0} × {0}) ∪ W and consider the measure µ defined by:
dµ ≜ (v₀/f(0)) dδ₀ if v ∈ R₊ × {0} × {0},
dµ ≜ [ ( v₀ - v₁ ∫_0^{ξ(v₂/v₁)} f(x)dx / ∫_0^{ξ(v₂/v₁)} xf(x)dx ) / f(0) ] dδ₀ + [ v₁ / ∫_0^{ξ(v₂/v₁)} xf(x)dx ] 1_{(0,ξ(v₂/v₁))} dx, if v ∈ W.
One can check that µ ∈ C and (v₀, v₁, v₂) = ( ∫_0^∞ f dµ, ∫_0^∞ xf dµ, ∫_0^∞ x²f dµ ) and hence v ∈ F.
Step II. Let us prove that F ⊂ (R₊ × {0} × {0}) ∪ W. Let v ∈ F and µ ∈ C be such that v = ( ∫_0^∞ f dµ, ∫_0^∞ xf dµ, ∫_0^∞ x²f dµ ). By Remark 1.1 there exist α ∈ R₊ and g ∈ G such that dµ = αdδ₀ + g dx. We have:
(v₀, v₁, v₂) = ( αf(0) + ∫_0^∞ f(x)g(x)dx, ∫_0^∞ xf(x)g(x)dx, ∫_0^∞ x²f(x)g(x)dx ). (A.4)
Let us denote by |{g > 0}| the Lebesgue measure of {g > 0}. If |{g > 0}| = 0 then g = 0 a.e. and hence, v = (αf (0), 0, 0) ∈ R + × {0} × {0}.
Let us now consider the case where |{g > 0}| > 0. In that case, it is clear that
v ∈ (0, ∞) 3 . Let us prove that v 1 /v 2 ≥ p 1 /p 2 . (A.5)
Consider the function h defined on (0, ∞) by h(x) ≜ x(p₂/p₁ - x)f(x). By construction, ∫_0^∞ h(x)dx = 0 and since f is positive, the function x ↦ ∫_0^x h(u)du is increasing on (0, p₂/p₁) and decreasing on (p₂/p₁, ∞). It follows that ∫_0^x h(u)du ≥ 0, for all x ≥ 0. Then, by Lemma A.1, we have ∫_0^x h(u)g(u)du ≥ 0, for all x ≥ 0 and hence, by letting x tend to ∞, (p₂/p₁)v₁ - v₂ ≥ 0. We have proved (A.5).
Let us prove that
v₁/v₀ ≤ ∫_0^{ξ(v₂/v₁)} xf(x)dx / ∫_0^{ξ(v₂/v₁)} f(x)dx. When v₂/v₁ = p₂/p₁, since ξ(p₂/p₁) = ∞, this amounts to proving that v₁/v₀ ≤ p₁. (A.6)
As above, we can apply Lemma A.1 to the function h₁ defined on (0, ∞) by h₁(x) = (p₁ - x)f(x) and to the function g in order to obtain that ∫_0^x h₁(u)g(u)du ≥ 0 for all x ≥ 0 and hence, by passing to the limit as x tends to ∞, p₁(v₀ - αf(0)) - v₁ ≥ 0.
Since αf(0) ≥ 0, this proves (A.6).
From (A.5) we know that, when |{g > 0}| > 0 we always have v 1 /v 2 ≥ p 1 /p 2 . That proves that, when v 1 /v 2 = p 1 /p 2 , we have v ∈ W . It remains to prove that it is also true when v 1 /v 2 > p 1 /p 2 . So, we assume that v 1 /v 2 > p 1 /p 2 and prove that
v₁/v₀ ≤ ∫_0^{ξ(v₂/v₁)} xf(x)dx / ∫_0^{ξ(v₂/v₁)} f(x)dx. (A.7)
For the sake of readability, we write ξ = ξ(v₂/v₁). Since ξ ∈ (0, ∞), we can consider the real numbers a > 0, b ∈ R and c > 0, given by Lemma A.4, which are such that ∫_0^x (a + bu + cu²)f(u)du ≥ 0, for all x ≥ 0, and ∫_0^ξ (a + bu + cu²)f(u)du = 0. Recall that, by Lemma A.3, we have
∫_0^ξ u²f(u)du = (v₂/v₁) ∫_0^ξ uf(u)du. Therefore
∫_0^ξ (a + bu + cu²)f(u)du = ( ∫_0^ξ uf(u)du / v₁ ) ( a v₁ ∫_0^ξ f(u)du / ∫_0^ξ uf(u)du + bv₁ + cv₂ ),
and hence
a v₁ ∫_0^ξ f(u)du / ∫_0^ξ uf(u)du + bv₁ + cv₂ = 0. (A.8)
We now show that av 0 + bv 1 + cv 2 ≥ 0. With (A.8), this will prove (A.7).
We have
x 0 (a + bu + cu 2 )f (u)du ≥ 0, for all x ≥ 0. Therefore, by Lemma A.1, we have x 0 (a + bu + cu 2 )f (u)g(u)du ≥ 0 for all x ≥ 0 and hence, by letting x tend to ∞, a(v 0 -αf (0))+bv 1 +cv 2 ≥ 0. Since a > 0 and αf (0) ≥ 0, it follows that av 0 +bv 1 +cv 2 ≥ 0.
We have obtained that if v 1 /v 2 > p 1 /p 2 then v ∈ W .
Finally we proved that F ⊂ (R + × {0} × {0}) ∪ W . This completes Step II and hence proves Proposition 2.2 (i).
Proof of Proposition 2.2 (ii) By definition of ḡ (see 2), we have
(1, m, δ) = ∞ 0 f (x)ḡ(x)dx, ∞ 0 xf (x)ḡ(x)dx, ∞ 0 x 2 f (x)ḡ(x)dx
and ḡ is positive and nonincreasing on (0, ∞).
Hence (1, m, δ) ∈ F \ (R₊ × {0} × {0}), i.e. (1, m, δ) ∈ W. Proof of Proposition 2.2 (iii) Let us prove that when m/δ > p₁/p₂, we have (1, m, δ) ∈ Int(W). We show that m < ∫_0^{ξ(δ/m)} xf(x)dx / ∫_0^{ξ(δ/m)} f(x)dx. Since ξ(δ/m) ∈ (0, ∞), from Lemma A.4, there exist a′ > 0, b′ ∈ R, c′ > 0 such that
∫_0^x (a′ + b′u + c′u²)f(u)du ≥ 0, for all x ≥ 0, and ∫_0^{ξ(δ/m)} (a′ + b′u + c′u²)f(u)du = 0. (A.9)
Then by Lemma A.1, we have ∫_0^x (a′ + b′u + c′u²)f(u)ḡ(u)du ≥ 0, for all x ≥ 0. Since f > 0 and ḡ > 0 on (0, ∞) and c′ > 0, the function x ↦ ∫_0^x (a′ + b′u + c′u²)f(u)ḡ(u)du is increasing on [M, ∞) for some large M. Hence, from the above inequalities, we deduce that:
∫_0^∞ (a′ + b′u + c′u²)f(u)ḡ(u)du > 0 and hence a′ + b′m + c′δ > 0. Now, using the fact that ∫_0^{ξ(δ/m)} u²f(u)du = (δ/m) ∫_0^{ξ(δ/m)} uf(u)du, we deduce from (A.9) that a′ m ∫_0^{ξ(δ/m)} f(u)du / ∫_0^{ξ(δ/m)} uf(u)du + b′m + c′δ = 0. Then, since a′ > 0, it follows that m < ∫_0^{ξ(δ/m)} xf(x)dx / ∫_0^{ξ(δ/m)} f(x)dx. Thus (1, m, δ) is in the following subset of W:
O ≜ { v ∈ (0, ∞)³ | v₁/v₂ > p₁/p₂, v₁/v₀ < ∫_0^{ξ(v₂/v₁)} xf(x)dx / ∫_0^{ξ(v₂/v₁)} f(x)dx }.
From Lemma A.3, the function ξ is continuous on (0, p 2 /p 1 ) and takes values in (0, ∞).
Therefore O is an open set and (1, m, δ) ∈ Int(W ). The proof of Proposition 2.2 is completed.
A.3 Proof of Proposition 2.3
Let us prove that the value and the set of solutions to problem (D) coincide respectively with the value and the set of solutions to the following problem:
min_{λ∈R³} λ₀ + λ₁m + λ₂δ subject to ∫_0^x [λ₀ + λ₁u + λ₂u² - ψ(u)]f(u)du ≥ 0, for all x ≥ 0.
It suffices to check that, for all λ ∈ R 3 , the following statements are equivalent.
λ 0 f + λ 1 xf + λ 2 x 2 f -ψf ∈ C * . (A.10) x 0 [λ 0 + λ 1 u + λ 2 u 2 -ψ(u)]f (u)du ≥ 0 , for all x ≥ 0 . (A.11) Let λ ∈ R 3 . (A.10) holds if and only if ∞ 0 [λ 0 + λ 1 x + λ 2 x 2 -ψ(x)]f (x)dµ(x)
≥ 0 for all µ ∈ C. By Remark 1.1, this amounts to the condition
α(λ 0 -ψ(0))f (0) + ∞ 0 [λ 0 + λ 1 x + λ 2 x 2 -ψ(x)]f (x)g(x)dx ≥ 0 , for all α ∈ R + , g ∈ G .
But f(0) > 0 and ψ(0) = 0. It follows that (A.10) holds if and only if a)
λ₀ ≥ 0, b) ∫_0^∞ [λ₀ + λ₁x + λ₂x² - ψ(x)]f(x)g(x)dx ≥ 0. (A.12)
Since by assumption the functions ψf , f , xf and x 2 f are in L 1 (0, ∞), it is clear that G contains the set {1 (0,x) , x > 0}. Hence, in (A.12), b) implies a). It follows that (A.12) implies (A.11). Conversely, let us assume that (A.11) holds. Let g ∈ G. Then, from Lemma A.1, we have (A.12). We have therefore obtained that the conditions (A.10) and (A.11) are equivalent. This ends the proof of Proposition 2.3.
B Proofs of the results stated in Section 3
In this section, we solve problem (P) in the case of the call option. For this purpose, we use problem (D). For the sake of simplicity we introduce the following notation. For λ ∈ R³, we denote by G_λ the function defined on R₊ by
G_λ(x) ≜ ∫_0^x [λ₀ + λ₁u + λ₂u² - ψ(u)]f(u)du, for all x ≥ 0 (B.1)
and we set
A ≜ { λ ∈ R³ | G_λ(x) ≥ 0, ∀ x ≥ 0 }. (B.2)
With this notation and Proposition 2.3, we know that problem (D) can be formulated as follows min λ∈A λ 0 + λ 1 m + λ 2 δ .
In the sequel, we will work only with this formulation of problem (D).
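Before analysing the binding constraints, note that this reformulation also lends itself to a direct numerical approximation: sampling the constraint G_λ(x) ≥ 0 on a finite grid turns (D) into a small linear program in (λ₀, λ₁, λ₂). The Python sketch below is illustrative only; the density f, the strike K and the moments m and δ are assumptions chosen for the example, and because only finitely many constraints are imposed the computed value approximates val(D) from below.

```python
# Illustrative discretisation of problem (D) as a linear program.
import numpy as np
from scipy.optimize import linprog

# Assumed data (not from the paper): exponential density, call with strike K.
K, m, delta = 1.0, 0.9, 1.7
x = np.linspace(0.0, 40.0, 8001)
dx = x[1] - x[0]
f = np.exp(-x)
psi = np.maximum(x - K, 0.0)

def cum(y):
    """Trapezoidal cumulative integral int_0^x y(u) du on the grid."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dx)))

I, M, D, Psi = cum(f), cum(x * f), cum(x**2 * f), cum(psi * f)

# Constraints G_lambda(x_j) >= 0, i.e. lambda0*I + lambda1*M + lambda2*D >= Psi,
# rewritten as A_ub @ lambda <= b_ub for linprog.
A_ub = -np.column_stack([I, M, D])
b_ub = -Psi
c = np.array([1.0, m, delta])          # objective: lambda0 + lambda1*m + lambda2*delta

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3, method="highs")
print(res.x, res.fun)                  # approximate minimiser and value of (D)
```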
The proof of Theorem 3.2 relies on the study of the binding constraints of problem (D). So, we introduce a notation for the set of positive real numbers where some of the
constraints {G_λ(x) ≥ 0, x > 0} are binding. For λ ∈ R³, we set bind(λ) ≜ { x ∈ (0, ∞) | G_λ(x) = 0 }.
As in the previous section, we begin with stating some lemmas that allow us to shorten the proofs of the main results (Theorems 3.1 and 3.2). But the reader can go directly to the proofs of the theorems in Sections B.2 and B.3.
B.1 Technical Lemmas
We first show that the parameter x m introduced before the statement of Theorem 3.1 is well defined.
Lemma B.1 Let us assume that m/δ > p 1 /p 2 . Then, there exists a unique x m ∈ (0, ∞)
such that ∫_0^{x_m} uf(u)du = m ∫_0^{x_m} f(u)du
and we have
∫_0^x uf(u)du > m ∫_0^x f(u)du ⇐⇒ x ∈ (x_m, ∞]. Moreover x > x_m
where we recall that x ≜ ξ(δ/m).
Proof
We begin by proving that m < p₁. Since m/δ > p₁/p₂, from Proposition 2.2 (iii) we know that (1, m, δ) ∈ Int(W) and hence that m < ∫_0^x uf(u)du / ∫_0^x f(u)du, i.e. m < M(x)/I(x). From Lemma A.2, the function M/I is increasing on (0, ∞). Hence we have m < ∫_0^∞ uf(u)du / ∫_0^∞ f(u)du, i.e. m < p₁.
Let us consider the function φ defined on R₊ by φ(x) ≜
∫_0^x (u - m)f(u)du. The function φ is continuous on R₊. It is decreasing on (0, m), increasing on (m, ∞) and satisfies: φ(0) = 0 and lim_{x→∞} φ(x) = p₁ - m > 0. It follows that there exists a unique
x_m ∈ (0, ∞) such that φ < 0 on (0, x_m), φ(x_m) = 0 and φ > 0 on (x_m, ∞]. Finally, since m < ∫_0^x uf(u)du / ∫_0^x f(u)du we have x > x_m.
This completes the proof of Lemma B.1. We now state some basic properties of the sets A and bind(λ) for λ ∈ A.
Lemma B.2 (o) A ⊂ R + × R 2 . (i) Let λ ∈ A.
The set bind(λ) has at most two elements.
(ii) Let λ ∈ A. If λ 2 ≤ 0 then bind(λ) = ∅. (iii) Let λ ∈ A. If λ 2 > 0 then lim x→∞ G λ (x) > 0. (iv) Let λ ∈ A. If bind(λ) = {x 0 , x 1 } with x 0 < x 1 then λ 0 > 0, λ 1 < 0, λ 2 > 0 and x 0 < K < x 1 . Conversely, let λ ∈ R 3 . If λ 0 > 0, λ 1 < 0, λ 2 > 0 and bind(λ) = {x 0 , x 1 } with 0 < x 0 < x 1 and G λ ′ (x 0 ) = G λ ′ (x 1 ) = 0 then λ ∈ A.
The proof of the lemma is essentially based on the fact that, for λ ∈ A, the set bind(λ) is included in the set of G λ 's minima and hence, since f is positive, in the set of the points where the parabola x -→ λ 0 + λ 1 x + λ 2 x 2 intersects the graph of x -→ ψ(x) = (x -K) + .
Since it is quite long but basic, the proof is omitted. One can have a good intuition on these results and their proofs with a graphical study of the possible intersections of the parabola and the call payoff.
Lemma B.3 Let us assume that
m/δ > p 1 /p 2 . If λ is a solution to problem (D) then the set bind(λ) is non-empty.
Proof Let λ be a solution to problem (D). We assume that bind(λ) = ∅ and derive a contradiction with the optimality of λ. By assumption, we have G_λ(x) > 0, for all x > 0. Since m/δ ≠ p₁/p₂, there exist a, b ∈ R such that 1 + am + bδ < 0 and 1 + ap₁ + bp₂ > 0.
(B.3) For all ε > 0, by setting λ ε 0 λ 0 + ε, λ ε 1 λ 1 + εa and λ ε 2 λ 2 + εb, we have:
λ ε 0 + λ ε 1 m + λ ε 2 δ < λ 0 + λ 1 m + λ 2 δ.
Let us prove that there exists ε > 0 such that λ ε (λ ε 0 , λ ε 1 , λ ε 2 ) ∈ A. We write
G ε G λ ε . By construction we have G ε (x) = G λ (x) + εH(x)
where
H(x) x 0 (1 + au + bu 2 )f (u)du .
Since f is positive and since, from the second row of system (B.3), lim x→∞ H(x) =
(1 + ap 1 + bp 2 ) > 0, there exists η > 0 and
X ≥ η such that H ≥ 0 on [0, η] ∪ [X, ∞).
Since G λ is nonnegative, this implies that for all ε > 0,
G ε ≥ 0 on [0, η] ∪ [X, ∞) . (B.4)
Since G λ is continuous and positive on (0, ∞), it is bounded from below by some constant
M > 0 on [η, X].
Since the function H is continuous, and thus bounded on [η, X], it follows that there exists ε > 0 such that, for all
x ∈ [η, X], G ε (x) = G λ (x) + εH(x) ≥ M + εH(x).
This last inequality together with (B.4) proves that λ^ε is in A, which contradicts the optimality of λ and completes the proof of Lemma B.3.
From Lemmas B.2 (i) and B.3, we know that, at the optimum for problem (D), there exists at least one and at most two positive real numbers where some constraints are binding. In the following lemma, we provide a necessary condition on the value of problem (D) under which a solution λ is such that exactly one constraint is binding at some positive real number.
Lemma B.4
Let us assume that m/δ > p 1 /p 2 . Let λ be a solution to problem (D) such that bind(λ) = {y}. Then
λ₀ = 0, y = x and val(D) = λ₁m + λ₂δ = ( m / ∫_0^x uf(u)du ) ∫_0^x ψ(u)f(u)du.
Besides we have, for all x ≥ 0, G ε (x) = G λ (x) + εH(x) where H is defined by
H(x) a x 0 f (u)du + b x 0 uf (u)du + c x 0 u 2 f (u)du .
From (B.6), there exists a neighborhood (α, β) of y where H > 0. It follows that, for all ε > 0 and for all x ∈ (α, β),
G ε (x) ≥ G λ (x) ≥ 0 . (B.8)
Since bind(λ) = ∅, from Lemma B.2 (ii), we have λ 2 > 0 and then by Lemma B.2 (iii),
lim x→∞ G λ (x) > 0. Hence G λ > 0 on (0, ∞] \ {y}. As it is continuous, it is therefore bounded from below by some positive constant on [η, α] ∪ [β, ∞]. Since the functions f , xf and x 2 f are in L 1 (0, ∞), the function H is bounded. Thus there exists ε ∈ (0, ε 0 ) such that for all x ∈ [η, α] ∪ [β, ∞), G ε (x) = G λ (x) + εH(x) ≥ 0 . (B.9)
It follows from (B.7), (B.8) and (B.9) that G ε ≥ 0 on R + , i.e. λ + ε(a, b, c) ∈ A. This ends the proof of (B.5).
Let us now prove that y = x and that
val(D) = λ₁m + λ₂δ = ( m / ∫_0^x uf(u)du ) ∫_0^x ψ(u)f(u)du. (B.10)
Using the same kind of arguments as above, one can deduce from the optimality of λ that, for all (a, b, c) ∈ (0, ∞) × R², we have
a ∫_0^y f(u)du + b ∫_0^y uf(u)du + c ∫_0^y u²f(u)du > 0 =⇒ a + bm + cδ ≥ 0.
This implies that y satisfies m ∫_0^y u²f(u)du = δ ∫_0^y uf(u)du and hence, by definition of x, that y = x. Since λ₀ = 0, we have
G_λ(x) = 0 ⇔ λ₁ ∫_0^x uf(u)du + λ₂ ∫_0^x u²f(u)du = ∫_0^x ψ(u)f(u)du
and then it is easy to see that (B.10) holds. This concludes the proof of Lemma B.4.
We now provide a lower bound for the value of problem (D) in the case where m/δ > p 1 /p 2 . Lemma B.5 If m/δ > p 1 /p 2 then for all λ ∈ A we have
λ₀ + λ₁m + λ₂δ ≥ ( m / ∫_0^x uf(u)du ) ∫_0^x ψ(u)f(u)du,
with strict inequality when λ 0 > 0.
Proof Let λ ∈ A. Recall that x satisfies ∫_0^x u²f(u)du = (δ/m) ∫_0^x uf(u)du. We therefore have
λ₀ + λ₁m + λ₂δ = ( m / ∫_0^x uf(u)du ) ( λ₀ ∫_0^x uf(u)du / m + λ₁ ∫_0^x uf(u)du + λ₂ ∫_0^x u²f(u)du ). But, from Lemma B.1, ∫_0^x uf(u)du / m > ∫_0^x f(u)du. Since, from Lemma B.2 (o), λ₀ ≥ 0, it follows that λ₀ + λ₁m + λ₂δ ≥ ( m / ∫_0^x uf(u)du ) ( λ₀ ∫_0^x f(u)du + λ₁ ∫_0^x uf(u)du + λ₂ ∫_0^x u²f(u)du ) ≥ ( m / ∫_0^x uf(u)du ) ∫_0^x ψ(u)f(u)du
where the first inequality is strict when λ 0 > 0 and the second one holds because λ ∈ A.
This ends the proof of Lemma B.5.
In the following lemma we give a necessary and sufficient condition for the lower bound, given in Lemma B.5, to be attained in problem (D).
Recall that d(x) = x² ∫_0^x ψ(u)f(u)du - ψ(x) ∫_0^x u²f(u)du. Lemma B.6 Assume that m/δ > p₁/p₂. Then, there exists (λ₁, λ₂) ∈ R² which satisfies (0, λ₁, λ₂) ∈ A and λ₁m + λ₂δ = ( m / ∫_0^x uf(u)du ) ∫_0^x ψ(u)f(u)du
if and only if d(x) > 0 or d(x) = 0 and x > K.
Proof Let (λ₁, λ₂) ∈ R² and set λ ≜ (0, λ₁, λ₂). Using the fact that ∫_0^x u²f(u)du = (δ/m) ∫_0^x uf(u)du we obtain the following equivalences
λ₁m + λ₂δ = ( m / ∫_0^x uf(u)du ) ∫_0^x ψ(u)f(u)du ⇔ ∫_0^x (λ₁u + λ₂u² - ψ(u))f(u)du = 0 ⇔ G_λ(x) = 0. Since λ ∈ A ⇔ G_λ ≥ 0, it follows that: λ ∈ A and λ₁m + λ₂δ = ( m / ∫_0^x uf(u)du ) ∫_0^x ψ(u)f(u)du, if and only if, λ ∈ A and x is a minimum of G_λ with G_λ(x) = 0, which is equivalent to: λ ∈ A, G_λ(x) = 0 and G_λ′(x) = 0.
Consequently, since f is positive, we have the equivalence between the existence of
(λ₁, λ₂) ∈ R² such that we have (0, λ₁, λ₂) ∈ A and λ₁m + λ₂δ = ( m / ∫_0^x uf(u)du ) ∫_0^x ψ(u)f(u)du
and the existence of a solution (λ 1 , λ 2 ) ∈ R 2 to the system
λ₁ ∫_0^x uf(u)du + λ₂ ∫_0^x u²f(u)du = ∫_0^x ψ(u)f(u)du and λ₁x + λ₂x² = ψ(x) (B.11)
which satisfies (0, λ 1 , λ 2 ) ∈ A.
Since x > 0, the determinant of the system (B.11) is positive and hence the system has a unique solution. Let (λ 1 , λ 2 ) be this solution. In order to conclude it remains to prove that (0, λ 1 , λ 2 ) ∈ A ⇐⇒ d(x) > 0 or d(x) = 0 and x > K .
From (B.11), (λ 1 , λ 2 ) satisfies
λ₁ [ x² ∫_0^x uf(u)du - x ∫_0^x u²f(u)du ] = x² ∫_0^x ψ(u)f(u)du - ψ(x) ∫_0^x u²f(u)du (B.12)
λ₂ [ x ∫_0^x u²f(u)du - x² ∫_0^x uf(u)du ] = x ∫_0^x ψ(u)f(u)du - ψ(x) ∫_0^x uf(u)du. (B.13)
Let us check that when d(x) < 0, or d(x) = 0 and x ≤ K, we have (0, λ₁, λ₂) ∉ A. We have
x² ∫_0^x uf(u)du - x ∫_0^x u²f(u)du > 0, for all x > 0. (B.14)
Therefore when d(x) < 0, by (B.12) we have λ 1 < 0 and hence (0, λ 1 , λ 2 ) / ∈ A. Indeed, for small enough x we would have
G (0,λ 1 ,λ 2 ) (x) = x 0 (λ 1 u + λ 2 u 2 )f (u)du < 0.
In the case where d(x) = 0 and x ≤ K, we have λ 1 = 0 from (B.12) and λ 2 = 0 from (B.13) and (B.14), hence (0, λ 1 , λ 2 ) = (0, 0, 0) / ∈ A.
Now we assume that d(x) > 0 or d(x) = 0 and x > K and prove that (0, λ 1 , λ 2 ) ∈ A.
We first prove that λ₁ ≥ 0 and λ₂ > 0. Since, in that case, d(x) ≥ 0, from (B.12) we have λ₁ ≥ 0. Let us prove that λ₂ > 0. From (B.14), it suffices to prove that the right-hand term in (B.13) is negative. By construction, if x ≤ K then d(x) = 0. Since here d(x) > 0, or d(x) = 0 and x > K, we have in any case x > K and thus,
r(x) ≜ x ∫_0^x ψ(u)f(u)du - ψ(x) ∫_0^x uf(u)du = ∫_K^x ( x(u - K) - u(x - K) )f(u)du - (x - K) ∫_0^K uf(u)du = -K ∫_K^x (x - u)f(u)du - (x - K) ∫_0^K uf(u)du < 0. (B.15)
This proves that λ 2 > 0.
We are now in position to prove that (0, λ 1 , λ 2 ) ∈ A. Let us write λ = (0, λ 1 , λ 2 ).
Since
λ 1 ≥ 0, λ 2 > 0 and ψ = 0 on [0, K], it is clear that G λ ≥ 0 on [0, K]. On (K, ∞),
the function G λ is piecewise monotone, it is nondecreasing (resp. nonincreasing) on the intervals where the polynomial p(x) = λ 1 x + λ 2 x 2 -(x -K) is nonnegative (resp nonpositive). Since λ 1 ≥ 0 and λ 2 > 0, we have p(K) = λ 1 K + λ 2 K 2 > 0 and lim x→∞ p(x) = ∞.
Besides, from the second row of system (B.11), we have p(x) = 0. Let us prove that there exists y ∈ (K, x) such that p(y) = 0. Assume to the contrary that p ≠ 0 on (K, x). Since p(K) > 0, we then have p > 0 on (K, x) and hence G_λ is increasing on (K, x). Since G_λ is continuous, this contradicts the fact that G_λ(K) > 0 and G_λ(x) = 0. So, there exists y ∈ (K, x) such that p(y) = 0, p > 0 on [K, y) ∪ (x, ∞) and p < 0 on (y, x). The function G_λ is therefore increasing on [K, y), decreasing on (y, x) and increasing on (x, ∞). Since G_λ(K) > 0 and G_λ(x) = 0, it follows that G_λ(u) ≥ 0, for all u ≥ K. It follows that G_λ ≥ 0 on R₊ and hence λ ∈ A. This completes the proof of Lemma B.6.
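When the condition of Lemma B.6 holds, the optimal pair (λ₁, λ₂) and the resulting bound are directly computable: one evaluates d at x = ξ(δ/m) and solves the 2×2 linear system (B.11). The sketch below is an illustration only; the density f, the strike K and the moments m and δ are assumptions, and the quadrature and root-finding tolerances are left at their defaults.

```python
# Illustrative evaluation of the Lemma B.6 test and of the bound it produces.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def f(u):
    return np.exp(-u)                     # assumed density

K, m, delta = 1.0, 0.9, 1.7               # assumed strike and moments

I = lambda x: quad(f, 0, x)[0]
M = lambda x: quad(lambda u: u * f(u), 0, x)[0]
D = lambda x: quad(lambda u: u**2 * f(u), 0, x)[0]
Psi = lambda x: quad(lambda u: max(u - K, 0.0) * f(u), 0, x)[0]
psi = lambda u: max(u - K, 0.0)

r = delta / m
x_bar = brentq(lambda x: D(x) - r * M(x), r, 200.0)   # x = xi(delta/m), cf. Lemma A.3

d = x_bar**2 * Psi(x_bar) - psi(x_bar) * D(x_bar)     # the quantity d(x) of Lemma B.6
print("x =", x_bar, " d(x) =", d)

if d > 0 or (d == 0 and x_bar > K):
    # Solve system (B.11) for (lambda1, lambda2) and evaluate the bound.
    A = np.array([[M(x_bar), D(x_bar)], [x_bar, x_bar**2]])
    b = np.array([Psi(x_bar), psi(x_bar)])
    lam1, lam2 = np.linalg.solve(A, b)
    print("bound =", lam1 * m + lam2 * delta, "=", m / M(x_bar) * Psi(x_bar))
```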
We now provide a necessary condition for a solution λ to problem (D) to be such that exactly two constraints are binding at some positive real numbers.
Lemma B.7 Let us assume that m/δ > p 1 /p 2 . Let λ be a solution to problem (D) such that bind(λ) = {x 0 , x 1 } with x 0 < x 1 . Then there exists (α, β) ∈ (0, ∞) 2 such that
α ∫_0^{x₀} f(u)du + β ∫_0^{x₁} f(u)du = 1, α ∫_0^{x₀} uf(u)du + β ∫_0^{x₁} uf(u)du = m, α ∫_0^{x₀} u²f(u)du + β ∫_0^{x₁} u²f(u)du = δ, and we have val(D) = λ₀ + λ₁m + λ₂δ = β ∫_0^{x₁} ψ(u)f(u)du.
Proof Let λ be a solution to problem (D) such that bind(λ) = {x 0 , x 1 } with x 0 < x 1 .
From Lemma B.2 (iv), we have x 0 < K < x 1 , λ 0 > 0, λ 1 < 0 and λ 2 > 0. Since λ 0 > 0 and λ 2 > 0, we can use the same kind of arguments as in the proof of Lemma B.4 in order to deduce from the optimal feature of λ that, for all (a, b, c) ∈ R 3 , if
a ∫_0^{x₀} f(u)du + b ∫_0^{x₀} uf(u)du + c ∫_0^{x₀} u²f(u)du > 0 and a ∫_0^{x₁} f(u)du + b ∫_0^{x₁} uf(u)du + c ∫_0^{x₁} u²f(u)du > 0, then a + bm + cδ ≥ 0.
From Farkas' Lemma, this implies that there exists (α, β) ∈ R₊² such that
α ∫_0^{x₀} f(u)du + β ∫_0^{x₁} f(u)du = 1, α ∫_0^{x₀} uf(u)du + β ∫_0^{x₁} uf(u)du = m, α ∫_0^{x₀} u²f(u)du + β ∫_0^{x₁} u²f(u)du = δ. (B.16)
We have already remarked, in the proof of Lemma B.4, that for fixed i, the vectors
( ∫_0^{x_i} f(u)du, ∫_0^{x_i} uf(u)du, ∫_0^{x_i} u²f(u)du )
and (1, m, δ) cannot be linearly dependent. We therefore have α > 0 and β > 0.
Let us check that val(D) = λ₀ + λ₁m + λ₂δ = β ∫_0^{x₁} ψ(u)f(u)du. From (B.16), the fact that G_λ(x₀) = G_λ(x₁) = 0 and x₀ < K, we obtain val(D) = λ₀ + λ₁m + λ₂δ = α ∫_0^{x₀} ψ(u)f(u)du + β ∫_0^{x₁} ψ(u)f(u)du = β ∫_0^{x₁} ψ(u)f(u)du.
This ends the proof of Lemma B.7.
Lemma B.8
Let us assume that m/δ > p 1 /p 2 . Let (x 0 , x 1 ) ∈ R 2 be such that 0 < x 0 < x 1 . The system
α ∫_0^{x₀} f(u)du + β ∫_0^{x₁} f(u)du = 1, α ∫_0^{x₀} uf(u)du + β ∫_0^{x₁} uf(u)du = m, α ∫_0^{x₀} u²f(u)du + β ∫_0^{x₁} u²f(u)du = δ (B.17)
has a solution (α, β) ∈ (0, ∞) × (0, ∞) if and only if x 0 and x 1 satisfy the following conditions
x₀ ∈ (0, x_m) and x₁ ∈ (x, ∞), (B.18)
M(x₀)∆(x₁) - M(x₁)∆(x₀) = δ[I(x₁)M(x₀) - I(x₀)M(x₁)] + m[I(x₀)∆(x₁) - I(x₁)∆(x₀)]. (B.19)
Under these conditions, we have
β = ( M(x₀) - mI(x₀) ) / ( I(x₁)M(x₀) - I(x₀)M(x₁) ).
Proof Let (x 0 , x 1 ) ∈ R 2 be such that 0 < x 0 < x 1 . We first prove that the system (B.17) has a solution (α, β) ∈ R 2 if and only if x 0 and x 1 satisfy (B.19). For sake of simplicity, we set I i = I(x i ), M i = M (x i ) and ∆ i = ∆(x i ), for i = 0, 1. Since 0 < x 0 < x 1 and the functions M/I and ∆/M are increasing on (0, ∞) (see Lemma A.2), we have
I 0 M 1 -I 1 M 0 > 0 and M 0 ∆ 1 -M 1 ∆ 0 > 0 . (B.20)
It follows that the system made of the first (resp. last) two rows of (B.17) has a unique solution (ᾱ, β̄) ∈ R² (resp. (α̃, β̃) ∈ R²). Thus, the system (B.17) has a solution (α, β) ∈ R² if and only if (ᾱ, β̄) = (α̃, β̃). We have
(ᾱ, β̄) = ( (M₁ - mI₁)/(I₀M₁ - I₁M₀), (M₀ - mI₀)/(I₁M₀ - I₀M₁) ) and (α̃, β̃) = ( (m∆₁ - δM₁)/(M₀∆₁ - M₁∆₀), (m∆₀ - δM₀)/(M₁∆₀ - M₀∆₁) ).
One can check that these couples coincide if and only if x₀ and x₁ satisfy (B.19). Under this condition, we have
(α, β) = ( (m∆₁ - δM₁)/(M₀∆₁ - M₁∆₀), (M₀ - mI₀)/(I₁M₀ - I₀M₁) ).
From (B.20), it then follows that (α, β) is in (0, ∞)² if and only if m∆₁ - δM₁ > 0 and M₀ - mI₀ < 0. But, from Lemmas A.3 and B.1, we have, for every z > 0, m∆(z) - δM(z) > 0 ⇔ z > x and M(z) - mI(z) < 0 ⇔ z < x_m.
Finally, we have obtained that, for (x 0 , x 1 ) ∈ R 2 such that 0 < x 0 < x 1 , the system (B.17) has a solution (α, β) ∈ (0, ∞) 2 if and only if x 0 and x 1 satisfy (B.19) and
x 0 ∈ (0, x m ) and x 1 ∈ (x, ∞). This ends the proof of Lemma B.8.
Lemma B.9 Let (x 0 , x 1 ) ∈ R 2 be such that 0 < x 0 < K < x 1 . There exists λ ∈ A such that bind(λ) = {x 0 , x 1 } if and only if
∫_0^{x₁} ψ(u)f(u)du · ( (x₁ - x₀)/ψ(x₁) ) [ ∆(x₀) - (x₀ + x₁)M(x₀) + x₀x₁I(x₀) ] = x₀[ I(x₀)∆(x₁) - ∆(x₀)I(x₁) ] + x₀²[ I(x₁)M(x₀) - I(x₀)M(x₁) ] + M(x₁)∆(x₀) - M(x₀)∆(x₁). (B.21)
Proof Let (x₀, x₁) ∈ R² be such that 0 < x₀ < K < x₁. We first prove that the system below has a solution λ ∈ R³ if and only if (x₀, x₁) satisfy condition (B.21).
λ₀ + λ₁x₀ + λ₂x₀² = 0, λ₀ + λ₁x₁ + λ₂x₁² = ψ(x₁), λ₀I(x₀) + λ₁M(x₀) + λ₂∆(x₀) = 0, λ₀I(x₁) + λ₁M(x₁) + λ₂∆(x₁) = ∫_0^{x₁} ψ(u)f(u)du. (B.22)
Here again, for the sake of simplicity, we set I_i = I(x_i), M_i = M(x_i) and ∆_i = ∆(x_i), for i = 0, 1. Let us prove that the system made of the first three rows of (B.22) has a unique solution. Let d be its determinant. We prove that d > 0. After a few calculations we obtain
d := det( 1, x₀, x₀² ; 1, x₁, x₁² ; I₀, M₀, ∆₀ ) = (x₁ - x₀)I₀ [ ∆₀/I₀ - (x₀ + x₁)M₀/I₀ + x₁x₀ ].
By Jensen's inequality, we have
∆₀/I₀ = ∫_0^{x₀} u²f(u)du / ∫_0^{x₀} f(u)du ≥ ( ∫_0^{x₀} uf(u)du / ∫_0^{x₀} f(u)du )² = (M₀/I₀)². Hence ∆₀/I₀ - (x₀ + x₁)M₀/I₀ + x₁x₀ ≥ (M₀/I₀)² - (x₀ + x₁)M₀/I₀ + x₁x₀ = (x₀ - M₀/I₀)(x₁ - M₀/I₀). Since x₀ < x₁ and M₀/I₀ < x₀, it follows that d > 0. Therefore the system (B.22) has a solution if and only if the solution to the system made of the first 3 equations, which we denote by λ, is a solution to the fourth. One can obtain λ as a function of x₀ and x₁ as follows
λ₀ = x₀(x₀M₀ - ∆₀)ψ(x₁) / ( (x₁ - x₀)[∆₀ - (x₀ + x₁)M₀ + x₁x₀I₀] ), (B.23)
λ₁ = (∆₀ - x₀²I₀)ψ(x₁) / ( (x₁ - x₀)[∆₀ - (x₀ + x₁)M₀ + x₁x₀I₀] ), (B.24)
λ₂ = (x₀I₀ - M₀)ψ(x₁) / ( (x₁ - x₀)[∆₀ - (x₀ + x₁)M₀ + x₁x₀I₀] ). (B.25)
One can check that λ satisfies
λ₀I₁ + λ₁M₁ + λ₂∆₁ = ∫_0^{x₁} ψ(u)f(u)du
if and only if x 0 and x 1 satisfy (B.21). We therefore have obtained that the system (B.22) has a solution λ ∈ R 3 if and only if x 0 and x 1 satisfy condition (B.21).
We have remarked that (x₁ - x₀)[∆₀ - (x₀ + x₁)M₀ + x₁x₀I₀] > 0. Since x₁ > K,
we have ψ(x₁) > 0 and since x₀ > 0, we have x₀M₀ - ∆₀ > 0, ∆₀ - x₀²I₀ < 0 and
x₀I₀ - M₀ > 0. Thus, from (B.23), (B.24) and (B.25), when the system (B.22) has a solution λ, this solution satisfies λ₀ > 0, λ₁ < 0 and λ₂ > 0.
We are now in position to prove the equivalence stated in the lemma. First notice that, using the fact that x 0 < K, it is easy to see that λ ∈ R 3 satisfies (B.22) if and only
if G λ (x 0 ) = G λ (x 1 ) = G λ ′ (x 0 ) = G λ ′ (x 1 ) = 0.
Let us assume that there exists λ ∈ A such that bind(λ) = {x 0 , x 1 }. Then G λ ′ (x 0 ) = G λ ′ (x 1 ) = 0 and thus, from what precedes, x 0 and x 1 satisfy (B.21). Conversely, if x 0 and x 1 satisfy (B.21) then there exists some λ ∈ R 3 which is solution to system (B.22) and such that λ 0 > 0, λ 1 < 0 and λ 2 > 0. From Lemma B.2 (iv) , it follows that λ ∈ A.
This ends the proof of Lemma B.9.
B.2 Proof of Theorem 3.1
Let us assume that m/δ = p 1 /p 2 . We first prove that val
(D) ≥ (m/p₁) ∫_0^∞ ψ(u)f(u)du. (B.26)
Let λ ∈ A. Since (1, m, δ) is in F and m/δ = p₁/p₂, by Proposition 2.2 we have 0 < m ≤ p₁. By Lemma B.2 (o) we know that λ₀ ≥ 0. Hence we have λ₀ + λ₁m + λ₂δ ≥ (m/p₁)(λ₀ + λ₁p₁ + λ₂p₂).
We then obtain (B.26) by using the equality λ₀ + λ₁p₁ + λ₂p₂ = ∫_0^∞ (λ₀ + λ₁x + λ₂x²)f(x)dx and the fact that λ ∈ A and hence ∫_0^∞ (λ₀ + λ₁u + λ₂u²)f(u)du - ∫_0^∞ ψ(u)f(u)du ≥ 0.
It remains to prove that the lower bound in (B.26) is attained. Admit for the moment that the function Ψ defined on R₊ by Ψ(0) = 0 and Ψ(x) = ∫_0^x ψ(u)f(u)du / ∫_0^x uf(u)du for x > 0 is nondecreasing. Then, for λ ≜ ( 0, ∫_0^∞ ψ(u)f(u)du / p₁, 0 ), we have λ₀ + λ₁m + λ₂δ = (m/p₁) ∫_0^∞ ψ(u)f(u)du and, for all x ≥ 0, G_λ(x) = ( ∫_0^∞ ψ(u)f(u)du / p₁ ) ∫_0^x uf(u)du - ∫_0^x ψ(u)f(u)du ≥ 0, so that λ ∈ A.
That proves that the lower bound in (B.26) is attained, i.e. that problem (D) has a solution and that val(D) = (m/p₁) ∫_0^∞ ψ(u)f(u)du.
In order to check that val(P) = val(D), one first notices that by construction val(P) ≤ val(D) and hence val(P) ≤ (m/p₁) ∫_0^∞ ψ(x)f(x)dx. One then shows that the measure µ defined by dµ ≜ ( (1 - m/p₁)/f(0) ) dδ₀ + (m/p₁) 1_{(0,∞)} dx is in C_{m,δ} and satisfies
∫_0^∞ ψf dµ = (m/p₁) ∫_0^∞ ψ(u)f(u)du.
It remains to prove what we have admitted above, i.e. that the function Ψ is nondecreasing on R₊. Using the fact that f is positive, it is easy to check that sign[Ψ′(x)] = sign[-r(x)] with r(x) = x ∫_0^x ψ(u)f(u)du - ψ(x) ∫_0^x uf(u)du, for all x ∈ R₊. We have r ≡ 0 on [0, K] and we already saw that r < 0 on (K, ∞), see (B.15). This proves that Ψ is nondecreasing on R₊. The proof of Theorem 3.1 is completed.
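As a quick numerical check of Theorem 3.1 (again an illustration with an assumed density, not part of the paper), one can verify that the candidate measure above reproduces the moment constraints in the degenerate case δ = mp₂/p₁ and attains the value (m/p₁) ∫_0^∞ ψ(u)f(u)du.

```python
# Numerical check of the Theorem 3.1 bound for an assumed density f.
import numpy as np
from scipy.integrate import quad

def f(u):
    return np.exp(-u)                     # assumed density, with f(0) = 1

K, m = 1.0, 0.9                           # assumed strike and first moment
psi = lambda u: max(u - K, 0.0)

p1 = quad(lambda u: u * f(u), 0, np.inf)[0]
p2 = quad(lambda u: u**2 * f(u), 0, np.inf)[0]
delta = m * p2 / p1                       # degenerate case m/delta = p1/p2

atom = (1.0 - m / p1) / f(0.0)            # mass of the Dirac part at 0
dens = m / p1                             # density of the Lebesgue part

moments = [atom * f(0.0) * 0.0**k
           + dens * quad(lambda u: u**k * f(u), 0, np.inf)[0] for k in range(3)]
value = dens * quad(lambda u: psi(u) * f(u), 0, np.inf)[0]

print(moments)          # approximately [1, m, delta]
print(value, (m / p1) * quad(lambda u: psi(u) * f(u), 0, np.inf)[0])
```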
B.3 Proof of Theorem 3.2
We assume that m/δ > p₁/p₂. We know from Remark 3.1 that the value of problem (P) is finite. We then deduce from Remark 2.1 that strong duality holds between the primal and dual problems and that problem (D) has at least one solution; we can therefore use optimality conditions on some solution to problem (D) in order to prove the theorem. Proof of Theorem 3.2 (ii) We now assume that d(x) < 0 or d(x) = 0 and x ≤ K.
Let λ be a solution to problem (D). From Lemmas B.2 (i) and B.3, we know that the set bind(λ) is not empty and has at most two elements. We prove that it contains exactly two elements. Assume to the contrary that bind(λ) = {y} for some y ∈ (0, ∞). Then by Lemma B.4, we have λ₀ = 0, y = x and val(D) = ( m / ∫_0^x uf(u)du ) ∫_0^x ψ(u)f(u)du, so that λ = (0, λ₁, λ₂) ∈ A and λ₁m + λ₂δ = ( m / ∫_0^x uf(u)du ) ∫_0^x ψ(u)f(u)du. From Lemma B.6, this can happen only in the case where d(x) > 0, or d(x) = 0 and x > K. We conclude that bind(λ) contains exactly two elements. Let us write bind(λ) = {x₀, x₁} with 0 < x₀ < x₁. By Lemma B.2 (iv) we have 0 < x₀ < K < x₁. Then, from Lemmas B.7 and B.8 we deduce that x₀ and x₁ satisfy x₀ ∈ (0, min{x_m, K}), x₁ ∈ (max{x, K}, ∞) and M(x₀)∆(x₁) - M(x₁)∆(x₀) = δ[I(x₁)M(x₀) - I(x₀)M(x₁)] + m[I(x₀)∆(x₁) - I(x₁)∆(x₀)] (B.28) and that val(D) = λ₀ + λ₁m + λ₂δ = ( (M(x₀) - mI(x₀)) / (I(x₁)M(x₀) - I(x₀)M(x₁)) ) ∫_0^{x₁} ψ(u)f(u)du.
Finally, by Lemma B.9, x₀ and x₁ satisfy
∫_0^{x₁} ψ(u)f(u)du · ( (x₁ - x₀)/ψ(x₁) ) [ ∆(x₀) - (x₀ + x₁)M(x₀) + x₀x₁I(x₀) ] = x₀[ I(x₀)∆(x₁) - ∆(x₀)I(x₁) ] + x₀²[ I(x₁)M(x₀) - I(x₀)M(x₁) ] + M(x₁)∆(x₀) - M(x₀)∆(x₁), i.e., by (B.28),
∫_0^{x₁} ψ(u)f(u)du · ( (x₁ - x₀)/ψ(x₁) ) [ ∆(x₀) - (x₀ + x₁)M(x₀) + x₀x₁I(x₀) ] = (x₀ - m)[ I(x₀)∆(x₁) - I(x₁)∆(x₀) ] + (x₀² - δ)[ I(x₁)M(x₀) - I(x₀)M(x₁) ].
We just proved that, when d(x) < 0, or d(x) = 0 and x ≤ K, there exists (x₀, x₁) ∈ R² which satisfies conditions (8), (9) and (10). It remains to prove that we have val(D) = ( (M(x₀) - mI(x₀)) / (I(x₁)M(x₀) - I(x₀)M(x₁)) ) ∫_0^{x₁} ψ(u)f(u)du, and that the measure µ defined by
dµ ≜ [ ( (M(x₁) - mI(x₁)) / (I(x₀)M(x₁) - I(x₁)M(x₀)) ) 1_{(0,x₀)} + ( (M(x₀) - mI(x₀)) / (M(x₀)I(x₁) - I(x₀)M(x₁)) ) 1_{(0,x₁)} ] dx
is in Sol(P), for any couple (x₀, x₁) ∈ R² which satisfies the conditions (8), (9) and (10).
Let (x₀, x₁) be such a couple. Then, on the one hand, by (8) and (9) and from Lemma B.8, there exists (α, β) ∈ (0, ∞)² such that α ∫_0^{x₀} f(u)du + β ∫_0^{x₁} f(u)du = 1, α ∫_0^{x₀} uf(u)du + β ∫_0^{x₁} uf(u)du = m and α ∫_0^{x₀} u²f(u)du + β ∫_0^{x₁} u²f(u)du = δ. It follows that, for all v ∈ A,
v₀ + v₁m + v₂δ = α ∫_0^{x₀} (v₀ + v₁u + v₂u²)f(u)du + β ∫_0^{x₁} (v₀ + v₁u + v₂u²)f(u)du ≥ α ∫_0^{x₀} ψ(u)f(u)du + β ∫_0^{x₁} ψ(u)f(u)du. (B.30)
On the other hand, by (9) and (10), and from Lemma B.9, there exists λ ∈ A such that bind(λ) = {x₀, x₁}. The equality therefore holds for λ, i.e. λ₀ + λ₁m + λ₂δ = α ∫_0^{x₀} (λ₀ + λ₁u + λ₂u²)f(u)du + β ∫_0^{x₁} (λ₀ + λ₁u + λ₂u²)f(u)du. It then follows from (B.30), from the fact that x₀ < K and from (B.29) that val(D) = ( (M(x₀) - mI(x₀)) / (I(x₁)M(x₀) - I(x₀)M(x₁)) ) ∫_0^{x₁} ψ(u)f(u)du. Finally, it is easy to check that the measure µ defined by dµ ≜ [ ( (M(x₁) - mI(x₁)) / (I(x₀)M(x₁) - I(x₁)M(x₀)) ) 1_{(0,x₀)} + ( (M(x₀) - mI(x₀)) / (M(x₀)I(x₁) - I(x₀)M(x₁)) ) 1_{(0,x₁)} ] dx is in C_{m,δ} and that we have ∫_0^∞ ψ(x)f(x)dµ(x) = ( (M(x₀) - mI(x₀)) / (I(x₁)M(x₀) - I(x₀)M(x₁)) ) ∫_0^{x₁} ψ(u)f(u)du, so that µ ∈ Sol(P). This ends the proof of Theorem 3.2 (ii) and completes the proof of Theorem 3.2.
Proposition 2.1 If (1, m, δ) ∈ Int(F) then val(P) = val(D). If this common value is further finite, then the set of solutions to (D) is non-empty and bounded. Conversely, if val(D) is finite and the set of solutions to (D) is non-empty and bounded then (1, m, δ) ∈ Int(F).
Lemma A.4 For every y > 0, there exist a > 0, b ∈ R and c > 0 such that ∫_0^x (a + bu + cu²)f(u)du ≥ 0 for all x ≥ 0 and ∫_0^y (a + bu + cu²)f(u)du = 0. Proof Let y > 0. Let us fix a > 0. The system has a unique solution (b, c) because y² ∫_0^y uf(u)du - y ∫_0^y u²f(u)du > 0. From (A.3) we have c > 0 because a > 0 and y ∫_0^y u²f(u)du - y² ∫_0^y uf(u)du < 0. Let us denote by P the function defined on R₊ by P(x)
problem (D) has at least one solution. We can therefore use optimality conditions on some solution to problem (D) in order to prove the theorem.Proof of Theorem 3.2 (i) Let us assume that d(x) > 0 or d(x) = 0 and x > K. Then by Lemma B.6, there exists (λ1 , λ 2 ) ∈ R 2 such that (0, λ 1 , λ 2 ) ∈ A and λ 1 m + λ 2 δ = )f (u)du .Then, it is easy to see that the measure µ defined by u)f (u)du. Hence, µ ∈ Sol(P ). This ends the proof of Theorem 3.2 (i).
Lemma B.4, we have λ 0 = 0, y = x and val(D) = m R x 0 uf (u)du x 0 ψ(u)f (u)du. So, we have λ = (0, λ 1 , λ 2 ) ∈ A and λ 1 m + λ 2 δ = m R x 0 uf (u)du x 0 ψ(u)f (u)du.From Lemma B.6, this can happen only in the case where d(x) > 0 or d(x) = 0 and x > K. We conclude that bind(λ) contains exactly two elements.
)f (u)du ((x 1x 0 )/ψ(x 1 )) [∆(x 0 ) -(x 0 + x 1 )M (x 0 ) + x 0 x 1 I(x 0 )] = x 0 [I(x 0 )∆(x 1 ) -∆(x 0 )I(x 1 )] + x 2 0 [I(x 1 )M (x 0 ) -I(x 0 )M (x 1 )] +M (x 1 )∆(x 0 ) -M (x 0 )∆(x 1 )i.e. by (B.28), f (u)du ((x 1x 0 )/ψ(x 1 )) [∆(x 0 ) -(x 0 + x 1 )M (x 0 ) + x 0 x 1 I(x 0 )] = (x 0m)[I(x 0 )∆(x 1 ) -I(x 1 )∆(x 0 )] + (x 2 0δ)[I(x 1 )M (x 0 ) -I(x 0 )M (x 1 )] .
0 ) -mI(x 0 ) I(x 0 )M (x 1 ) -I(x 1 )M (x 0 ) . (B.29)
λ 0 +
0 λ 1 m + λ 2 δ = α x 0 0 [λ 0 + λ 1 u + λ 2 u 2 ]f (u)du + β f (u)du .It ensues then from (B.30), from the fact that x 0 < K and from (B.29)0 ) -mI(x 0 ) I(x 0 )M (x 1 ) -I(x 1 )M (x 0 )
∫_0^∞ ψ(x)f(x)dµ(x) = ( (M(x₀) - mI(x₀)) / (I(x₁)M(x₀) - I(x₀)M(x₁)) ) ∫_0^{x₁} ψ(u)f(u)du, so that µ ∈ Sol(P). This ends the proof of Theorem 3.2 (ii) and completes the proof of Theorem 3.2.
B.4 Proof of Proposition 3.1
As the proof is very similar to the one of Theorem 3.1, we only give a sketch of it. First it is shown that
sup_{µ∈C_m} ∫_0^∞ ψ(x)f(x)dµ(x) ≤ inf_{λ∈A₂} λ₀ + λ₁m,
where A₂ is the set of λ ∈ R² satisfying ∫_0^x [λ₀ + λ₁u - ψ(u)]f(u)du ≥ 0 for all x ∈ R₊. It is easy to see that, if λ ∈ A₂ then λ₀ ≥ 0. Then recalling that m ≤ p₁, one shows that, for all λ ∈ A₂, we have
λ₀ + λ₁m ≥ (m/p₁)(λ₀ + λ₁p₁) ≥ (m/p₁) ∫_0^∞ ψ(x)f(x)dx.
The proof is completed in the same way as the proof of Theorem 3.1 by considering
We observe in Table 1 that, in general, val(P) is much smaller than B₄. The only exceptions are 2 cases where the strikes and the volatility are low (K = 300 or 350 and σ = 20%), but there the values of val(P) and B₄ are very close to each other. Hence, this example shows that when we consider equilibrium pricing probability measures, there is no need to put
(unrealistic) additional risk-neutral moment restrictions to improve Lo's bound. The bound that we obtain is very satisfactory since the relative deviation from the Black-Scholes price is less than 5%, except in 4 cases among 15 where it is between 11% and 22%. The average relative deviation is about 6% whereas it is about 24% for B₄ and 48% for B_Lo. Also notice that B_{P&R} is much smaller than B_Lo.
That proves that the lower bound in (B.26) is attained, i.e. that problem (D) has a solution and its value is given by val
Table 1. Black-Scholes price, equilibrium bound with 2 moment constraints, equilibrium bound with 1 moment constraint (Perrakis and Ryan), bound with 4 moment constraints, bound with 2 moment constraints (Lo), for different strike prices and volatilities.
σ  K  BS  val(P) (e)  B_{P&R} (e)  B₄ (e)  B_Lo (e)
Proof We start with proving that λ 0 = 0. Assume for the moment that the following result holds: if λ 0 > 0, then, for all (a, b, c
Thus, if λ 0 > 0 then the vectors (1, m, δ) and We now prove the result that we have assumed above i.e. if λ 0 > 0 then (B.5) holds for all (a, b, c) ∈ R 3 . Let (a, b, c) ∈ R 3 be such that
Let us prove that there exists ε > 0 such that λ + (εa, εb, εc) ∈ A. Since λ is a solution to problem (D), it will follow that
i.e. a + bm + cδ ≥ 0 and hence, (B.5) will be proved.
Let ε > 0. For simplicity, we write G ε G λ+ε (a,b,c) . We have
Since λ 0 > 0, there exists ε 0 > 0 such that for all ε ∈ [0, ε 0 ], λ 0 + εa ≥ λ 0 /2 > 0. Since f is positive, it follows that there exists η > 0 such that, for all ε ∈ [0, ε 0 ],
G ε ≥ 0 on [0, η) . (B.7) | 65,083 | [
"6654"
] | [
"60",
"2579"
] |
01766425 | en | [
"sde"
] | 2024/03/05 22:32:13 | 2016 | https://hal.science/hal-01766425/file/Hoy%20et%20al%20JAE%20REVISIONS%20FINAL.pdf | Sarah R Hoy
Alexandre Millon
Steve J Petty
D Philip Whitfield
Xavier Lambin
email: [email protected]
Food availability and predation
Keywords: Accipiter gentilis, breeding decisions, breeding propensity, clutch size, juvenile survival, life-history trade-offs, northern goshawk, reproductive strategies, Strix aluco, tawny owl
à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
conditions (e.g. food availability or predation) varies according to its intrinsic attributes (e.g. age, previous allocation of resources towards reproduction).
2. We used 29 years of reproductive data from marked female tawny owls and natural variation in food availability (field vole) and predator abundance (northern goshawk) to quantify the extent to which extrinsic and intrinsic factors interact to influence owl reproductive traits (breeding propensity, clutch size and nest abandonment).
3.
Extrinsic and intrinsic factors appeared to interact to affect breeding propensity (which accounted for 83% of the variation in owl reproductive success). Breeding propensity increased with vole density, although increasing goshawk abundance reduced the strength of this relationship. Owls became slightly more likely to breed as they aged, although this was only apparent for individuals who had fledged chicks the year before.
4.
Owls laid larger clutches when food was more abundant. When owls were breeding in territories less exposed to goshawk predation, 99.5% of all breeding attempts reached the fledging stage. In contrast, the probability of breeding attempts reaching the fledging stage in territories more exposed to goshawk predation depended on the amount of resources an owl had already allocated towards reproduction (averaging 87.7% for owls with clutches of 1-2 eggs compared to 97.5% for owls with clutches of 4-6 eggs).
Introduction
Understanding how different factors influence reproductive decisions is a central issue in ecology and conservation biology, as the number of offspring produced is a key driver of population dynamics [START_REF] Nichols | Estimation of sexspecific survival from capture-recapture data when sex is not always known[END_REF][START_REF] Sedinger | Fidelity and breeding probability related to population density and individual quality in black brent geese Branta bernicla nigricans[END_REF]. The impact of some extrinsic factors on reproductive decisions, such as food availability, are well understood (reviewed in [START_REF] White | The role of food, weather and climate in limiting the abundance of animals[END_REF]. In contrast the impact of others, such as predation risk is more equivocal, even when the same predator and prey species are examined [START_REF] Sergio | Intraguild predation in raptor assemblages: a review[END_REF]. Quantifying the indirect effect of predation risk on prey reproductive decisions under natural conditions is difficult, but merits further investigation as it can theoretically destabilize predator-prey dynamics, under certain circumstances [START_REF] Kenward | Breeding suppression and predator-prey dynamics[END_REF].
Furthermore, despite the influence of food availability and predation risk on reproductive success being extensively studied, the extent to which these two extrinsic factors interact to affect reproductive decisions remains poorly understood (but see [START_REF] Sergio | Spatial refugia and the coexistence of a diurnal raptor with its intraguild owl predator[END_REF].
Food availability is frequently reported to have a positive influence on the proportion of individuals in the population breeding and the number of offspring produced [START_REF] Arcese | Effects of population density and supplemental food on reproduction in song sparrows[END_REF][START_REF] Pietiäinen | Seasonal and individual variation in the production of offspring in the Ural owl, Strix uralensis[END_REF][START_REF] Petty | Ecology of the Tawny Owl Strix Aluco in the Spruce Forests of Northumberland and Argyll[END_REF][START_REF] Millon | Dampening prey cycle overrides the impact of climate change on predator population dynamics: A long-term demographic study on tawny owls[END_REF]. However, breeding individuals and individuals producing more offspring per breeding attempt are often more vulnerable to predation compared to non-breeding individuals [START_REF] Magnhagen | Predation risk as a cost of reproduction[END_REF][START_REF] Hoogland | Selective predation on Utah prairie dogs[END_REF] or those producing fewer offspring [START_REF] Ercit | Egg load decreases mobility and increases predation risk in female black-horned tree crickets (Oecanthus nigricornis)[END_REF]. Consequently, in years when predation risk is high, individuals of long-lived iteroparous species may attempt to minimize their vulnerability to predation by: i) refraining from breeding [START_REF] Spaans | Dark-bellied Brent geese Branta bernicla bernicla forego breeding when arctic foxes Alopex lagopus are present during nest initiation[END_REF]; ii) reducing the number or quality of offspring [START_REF] Doligez | Clutch size reduction as a response to increased nest predation rate in the collared flycatcher[END_REF][START_REF] Zanette | Perceived Predation Risk Reduces the Number of Offspring Songbirds Produce per Year[END_REF]; or iii) abandoning the breeding attempt at an early stage [START_REF] Sergio | Spatial refugia and the coexistence of a diurnal raptor with its intraguild owl predator[END_REF][START_REF] Chakarov | Mesopredator release by an emergent superpredator: a natural experiment of predation in a three level guild[END_REF]. Indeed, experimental studies have shown that individuals respond to variation in predation risk by making facultative decisions to alter their allocation of resources towards reproduction, so as to reduce their own, or their offspring's vulnerability to predators [START_REF] Ghalambor | Fecundity-survival trade-offs and parental risktaking in birds[END_REF][START_REF] Doligez | Clutch size reduction as a response to increased nest predation rate in the collared flycatcher[END_REF][START_REF] Fontaine | Parent birds assess nest predation risk and adjust their reproductive strategies[END_REF][START_REF] Zanette | Perceived Predation Risk Reduces the Number of Offspring Songbirds Produce per Year[END_REF]. However, according to life history theory, such changes in reproductive strategies should arise only when the losses incurred from not breeding, or not completing a breeding attempt, are compensated for by future reproductive success [START_REF] Stearns | The Evolution of Life Histories[END_REF]).
This intrinsic trade-off between current reproductive success and future reproductive potential is thought to be an important factor shaping reproductive decisions [START_REF] Stearns | The Evolution of Life Histories[END_REF].
For many long-lived species, the strength of this trade-off is thought to vary over an individual's lifetime [START_REF] Proaktor | Age-related shapes of the cost of reproduction in vertebrates[END_REF], as both survival-and reproduction-related traits are age-dependant, often declining in later life [START_REF] Nussey | Senescence in natural populations of animals: widespread evidence and its implications for biogerontology[END_REF]. Furthermore, changes in extrinsic conditions can also cause the strength of this intrinsic trade-off to vary, via their influence on survival probabilities and ultimately the individual's future reproductive potential [START_REF] Barbraud | Environmental conditions and breeding experience affect costs of reproduction in Blue Petrels[END_REF][START_REF] Hamel | Maternal characteristics and environment affect the costs of reproduction in female mountain goats[END_REF]. Consequently, an individual's reproductive response to changes in extrinsic conditions is predicted to vary according to their intrinsic attributes, with individuals becoming increasingly committed to their current reproductive attempt as they age, to compensate for the decline in future breeding prospects [START_REF] Clutton-Brock | Reproductive effort and terminal investment in iteroparous animals[END_REF]. However, few studies have examined whether intrinsic and extrinsic factors interact to explain variation in reproductive success (but see [START_REF] Wiklund | The adaptive significance of nest defence by merlin, Falco columbarius, males[END_REF][START_REF] Kontiainen | Aggressive ural owl mothers recruit more offspring[END_REF][START_REF] Rauset | Reproductive patterns result from age-related sensitivity to resources and reproductive costs in a mammalian carnivore[END_REF], despite theory predicting such a link [START_REF] Williams | Natural selection, the cost of reproduction, and a refinement of lack's principle[END_REF][START_REF] Ricklefs | On the evolution of reproductive strategies in birds: Reproductive effort[END_REF].
In this study, we used 29-years of breeding data collected on an intensively monitored population of individually identifiable female tawny owls (Strix aluco) to examine the extent to which owl reproductive decisions varied in relation to two extrinsic factors, natural variation in the abundance of their main prey (field vole, Microtus agrestis; Petty 1999), and their main predator (a diurnal raptor, northern goshawk, Accipiter gentilis; [START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF].
In another study site, predation by diurnal raptors was found to account for 73% of natural tawny owl mortality after the fledging stage, when parents are still provisioning food for their young [START_REF] Sunde | Diurnal exposure as a risk sensitive behaviour in tawny owls Strix aluco ?[END_REF] and in our study site predation on adult owls was biased towards breeding females [START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF]. It is expected that breeders and parents of larger broods spend more time hunting to provision food for their offspring, which may make these parents more exposed to predation by goshawks. Consequently, in years when predation risk is high, individuals may attempt to minimise their vulnerability to predation by reducing the amount of resources they allocate towards reproduction (breeding less frequently or laying smaller clutches). However, as the seasonal peak in goshawk predation on tawny owls occurs after owls have already initiated breeding attempts [START_REF] Petty | The decline of common kestrels Falco tinnunculus in a forested area of northern England: the role of predation by Northern Goshawks Accipiter gentilis[END_REF], the main response of individuals to variation in predation risk may manifest itself as an increased tendency to abandon breeding attempts at an early stage. Therefore in this study we examined how three different reproductive decisions: i) breeding propensity; ii) clutch size; and iii) whether breeding attempts were completed to the fledging stage varied in relation to fluctuations in food availability and predation risk.
We also investigated whether owl reproductive decisions were related to the following intrinsic attributes, current and previous allocation of resources towards breeding (clutch size and reproductive success the year before, respectively) and the age of the individual, as lifehistory theory predicts an intrinsic trade-off between current and future allocation of resources towards reproduction [START_REF] Williams | Natural selection, the cost of reproduction, and a refinement of lack's principle[END_REF], as survival and reproductive rates are agedependent in tawny owls [START_REF] Millon | Natal conditions alter age-specific reproduction but not survival or senescence in a long-lived bird of prey[END_REF].
Changes in extrinsic conditions are also likely to affect the probability of offspring being recruited into the breeding population, via their effect on juvenile owl survival [START_REF] Sunde | Diurnal exposure as a risk sensitive behaviour in tawny owls Strix aluco ?[END_REF][START_REF] Sunde | Predators control post-fledging mortality in tawny owls, Strix aluco[END_REF][START_REF] Koning | Long-term study on interactions between tawny owls Strix aluco , jackdaws Corvus monedula and northern goshawks Accipiter gentilis[END_REF][START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF]. Thus, the influence of extrinsic conditions on juvenile survival should influence the adaptive basis for reproductive decisions, for instance, how beneficial it is to allocate resources towards a reproductive attempt. Consequently, we also examined how juvenile survival varied in relation to temporal fluctuations in food availability and predation risk.
Methods
Study site and owl monitoring
Tawny owl reproduction has been continuously monitored in a 176 km² central section of Kielder Forest (55°13′N, 2°33′W) since 1979, using nest boxes [START_REF] Petty | Value of nest boxes for population studies and conservation of owls in coniferous forests in Britain[END_REF]. Kielder
Forest, mainly planted with Sitka Spruce (Picea sitchensis), lacks natural tree cavities, therefore owls breed almost exclusively in nestboxes [START_REF] Petty | Value of nest boxes for population studies and conservation of owls in coniferous forests in Britain[END_REF]. Each year, all nest boxes were checked for occupancy, to record clutch size, the number of chicks fledging and to ring chicks. Tawny owls do not breed every year after becoming reproductively active and only breed once per year, but can re-lay if the first breeding attempt fails early (during laying or the early incubation period; [START_REF] Petty | Ecology of the Tawny Owl Strix Aluco in the Spruce Forests of Northumberland and Argyll[END_REF]). In such cases, we only included the second breeding attempt, such that each individual contributed only one breeding attempt per year to our analysis. In some cases, the monitoring of a nestbox resulted in owls abandoning their breeding attempts. We therefore excluded all such breeding attempts (N = 51/965) from all our analyses. Breeding females were captured every year using a modified angler's landing net which was placed over the entrance of the nestbox, when their chicks were 1-2 weeks old. The identity of breeding females was established from their metal ring numbers, and any unmarked breeding females (entering the population as immigrants) were ringed upon capture so that they would subsequently be individually identifiable. Tawny owls are highly site faithful, and in our study site >98% remained in the same territory where they first started breeding [START_REF] Petty | Ecology of the Tawny Owl Strix Aluco in the Spruce Forests of Northumberland and Argyll[END_REF]). Therefore we determined the identity of a female occupying a territory when no breeding took place or when the breeding attempt failed prior to trapping in the following way. When the same female was recorded breeding in a territory both before and after the year(s) where no female was caught, we assumed the same individual was involved.
However, when different females were recorded either side of a year(s) when females were not caught, we deemed the identity of the breeder unknown and excluded such breeding attempts from our analyses. A total of 914 breeding attempts took place between 1985 and 2013 where the identity of the female was known, or could be assumed in 89% of cases (N = 813).
Analysis
To determine the extent to which owl breeding decisions were affected by fluctuating extrinsic and intrinsic factors, we examined: i) breeding propensity, ii) clutch size and iii) whether breeding attempts were completed using generalised linear mixed effect models (GLMM) with the appropriate error structure in R version 3.0.3 (R Core Development Team 2014). The identity of the breeding female and the year of a breeding attempt were fitted as random effects to account for individuals breeding in more than one year, and any residual temporal variation in response variables not attributable to the fitted temporal covariates of interest (food availability and predation risk). In all analyses both the additive and 2-way interactive effects of fixed effect covariates were tested. We visually checked for any residual spatial-autocorrelation in all response variables not explained by the covariates included in the selected best models using correlograms [START_REF] Zuur | Mixed Effects Models and Extensions in Ecology with R[END_REF].
We examined causes of variation in breeding propensity by analysing whether an individual bred or did not breed each year after becoming reproductively active, up until its last recorded breeding attempt (fitted as a binary covariate). We examined breeding propensity in this way for the following reasons. We excluded first-time breeding attempts as the breeding propensity of such attempts would necessarily be one and this may bias the results. We did not include the years prior to the first breeding attempt because there is no way to identify a new recruit in a territory before it first bred and it was unknown whether individuals had made a facultative decision not to breed the year(s) before they first bred, or whether they were incapable of breeding regardless of extrinsic conditions. Furthermore, some individuals were only recorded breeding once, thus we had no way of determining whether such individuals were alive and had decided to not to breed in the subsequent year(s) after their only recorded breeding attempt or whether these individuals were dead. When at least one egg was laid in a territory known to be occupied by a particular female, we recorded that as a breeding attempt. Less than 2% (N= 5) of the 268 different females recorded breeding in Kielder Forest were known to have skipped breeding for three or more consecutive years.
Therefore, we assumed an individual was dead if it had not been re-captured in the last 3 years of the study (i.e. after 2010). In this analysis, we excluded all individuals that could not be assumed dead or were known to be alive (i.e. were recorded breeding) in 2013 (N = 40), to remove any bias that unknown non-breeding events occurring in the last few years of the study period could induce.
To determine the extent to which owls adjust the amount of resources they allocate towards reproduction in response to variation in food availability and predation risk, we modelled variation in clutch size. In addition, we examined the decision or capability to continue with a breeding attempt by classifying each breeding attempt as "complete", if at least one chick fledged, or "incomplete" if not (fitted as a binary response variable). These two analyses were based on a different dataset to that used for the breeding propensity analysis, as it contained all breeding attempts by all known individuals (N = 241), including first-time breeders, between 1985 and 2013.
Measures of food availability and predation risk
Field voles are the main year-round prey of tawny owls in Kielder Forest, representing on average 62% of prey brought to the nestbox (N = 1423; Petty 1999). As tawny owls are vole specialists in our study site, variation in the abundance of alternative food sources probably had only a limited impact on owl breeding decisions. Field vole densities were monitored in spring and autumn at 17-21 sites within the owl monitoring area, every year since 1985 (for methods see Lambin, Petty, & MacKinnon 2000). Vole densities in the spring and autumn were positively correlated (r = 0.65, N = 27, P < 0.001). The amount of vole prey available in early spring (prior to egg laying) has previously been shown to affect owl reproduction; in years of high food availability more pairs attempted to breed and clutch sizes were larger [START_REF] Petty | Ecology of the Tawny Owl Strix Aluco in the Spruce Forests of Northumberland and Argyll[END_REF][START_REF] Millon | Dampening prey cycle overrides the impact of climate change on predator population dynamics: A long-term demographic study on tawny owls[END_REF]. Therefore, spring vole densities were used as a proxy for owl food availability in all analyses. Field vole densities were asynchronous but spatially structured across Kielder Forest (i.e. travelling waves; [START_REF] Lambin | Spatial asynchrony and periodic travelling waves in cyclic populations of field voles[END_REF]). However, this pattern has changed over time with a gradual loss of spatial structure [START_REF] Bierman | Changes over time in the spatiotemporal dynamics of cyclic populations of field voles (Microtus agrestis L.)[END_REF].
Such changes in prey spatial synchrony may affect how easy it is for owls to predict the amount of food available in their territory, and hence influence their reproductive decisions. Therefore, we also examined the extent to which tawny owl breeding decisions were affected by changes in the spatial synchrony of field vole densities. To do so, we first calculated spatial variation in field vole densities as the coefficient of variation (standard deviation divided by the mean) in spring vole densities between survey sites, each year. However, spatial variation in vole densities may be less important in years when food is abundant, compared to when it is scarce. Therefore, we classified years as either being of low overall food abundance if the averaged spring vole density was below the median value for all years, or high if not. We then included an interaction between spatial variation in vole densities and the categorical covariate of overall vole densities to test this hypothesis.
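The spatial-variation covariate described above is a per-year coefficient of variation across the vole survey sites. A base-R sketch of this calculation is given below; the data frame 'voles' and its column names are assumed for illustration and do not come from the study's data files.

```r
# Assumed data frame 'voles': one row per survey site and year, with columns
# 'year' and 'spring_density'.
cv <- function(x) sd(x) / mean(x)

# Coefficient of variation in spring vole density among survey sites, per year
spatial_var <- tapply(voles$spring_density, voles$year, cv)

# Mean spring density per year, classified as low or high relative to the median year
mean_density <- tapply(voles$spring_density, voles$year, mean)
food_level   <- ifelse(mean_density < median(mean_density), "low", "high")

# These two covariates would then enter the GLMMs as an interaction,
# e.g. ... + spatial_var * food_level + ...
```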
Northern goshawks (hereafter goshawks) have been continuously monitored since the first breeding attempt in 1973 [START_REF] Petty | Goshawks Accipiter gentilis. The Atlas of Breeding Birds in Northumbria[END_REF]. Each year occupied goshawk homeranges were identified and over the last 40 years the number of occupied home-ranges has increased from one to 25-33. Goshawks are known predators of tawny owls, with breeding female owls being three times more likely to be killed than adult males; predation is also heavily biased towards juveniles [START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF]. Goshawk dietary data collected in Kielder Forest suggests that as the breeding population of goshawks increased, the mean number of owls killed each year by goshawks has also increased. An average of 5 [3-8; 95% CI] owls were killed each year when less than 15 goshawk home-ranges were occupied, compared to an average of 159 [141-176; 95% CI] owls killed each year when more than 24 goshawk home-ranges were occupied (see Appendix S1). Consequently, as predation on owls has increased with the abundance of goshawks in the forest, we used the total number of occupied goshawk home-ranges in a 964 km² area of Kielder Forest as a proxy of temporal variation in predation risk. However, as goshawks were monitored over a larger area than tawny owls, we also used an additional proxy of temporal variation in predation risk. Local goshawk abundance was measured as the number of goshawk home-ranges whose nest sites were within 5.8 km (the estimated goshawk foraging distance) of the owl monitoring area, calculated in the same way described in [START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF]. Spatial variation in predation risk has also been found to influence reproductive decisions [START_REF] Sergio | Spatial refugia and the coexistence of a diurnal raptor with its intraguild owl predator[END_REF]. Therefore, we investigated the extent to which owl reproductive decisions varied in relation to two spatial proxies of predation risk: (i) distance from an owl's nest to the nearest goshawk nest site; and (ii) the location of an owl's territory in relation to all goshawks nest sites, (i.e. connectivity of an owl territory to all goshawk nest sites). The connectivity measure of predation risk takes into account all goshawk nest sites, but weights the influence each goshawk nest site has on this index of predation risk, according to its distance from the focal owl nest site (for further details and method see Appendix S2). These spatial covariates of predation risk were calculated for each owl territory, every year. Although common buzzards Buteo buteo are abundant in our study site and are known to kill tawny owls [START_REF] Mikkola | Owls killing and killed by other owls and raptors in Europe[END_REF]), we did not include buzzards in any of our analyses of owl predation risk. This was because dietary data showed us that buzzard predation on owls in our study site was negligible (unpublished data). None of the temporal proxies of food availability were significantly correlated with the temporal covariates of predation risk. However, no two proxies of predation risk or two proxies of food availability were included in the same model as they were collinear (see Appendix S3 for all cross correlation coefficients). 
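The connectivity index referred to above is detailed in Appendix S2 and is not reproduced here. Purely as an illustration of the general idea of a distance-weighted index, one common form sums contributions from all goshawk nest sites with weights that decline exponentially with distance, for example scaled by the 5.8 km foraging distance; the sketch below is an assumption, not the formula actually used in the study.

```r
# Illustrative distance-weighted connectivity index; NOT the exact formula of
# Appendix S2. owl_xy and goshawk_xy are assumed two-column matrices of
# coordinates (in km) for a given year.
connectivity <- function(owl_xy, goshawk_xy, alpha = 1 / 5.8) {
  sapply(seq_len(nrow(owl_xy)), function(i) {
    d <- sqrt((goshawk_xy[, 1] - owl_xy[i, 1])^2 +
              (goshawk_xy[, 2] - owl_xy[i, 2])^2)
    sum(exp(-alpha * d))  # nearer goshawk nest sites contribute more to the index
  })
}
```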
All temporal and spatial covariates were standardised (to a mean of 0 and a standard deviation of 1) to enable their effect sizes to be compared.
Intrinsic attributes
When testing the hypothesis that the response of an individual to changes in extrinsic conditions varied according to age, we used the number of years elapsed since the individual's first recorded breeding attempt, because the exact age of 94 breeding females entering the population as adult immigrants was unknown. However, most (89%) female owls had commenced breeding by the time they were 3 years old [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF] and there had been no change in the mean age at first reproduction over the study period, either for immigrants or for local recruits entering the owl population (unpublished data).
Consequently, the number of years elapsed since an individual's first recorded breeding attempt is closely related to its age, and the length of an individual's breeding lifespan is also highly correlated with actual lifespan (r = 0.91; N = 163). We tested the hypothesis that previous investment in reproduction influenced an individual's current reproductive decisions in relation to changes in predation risk and food availability by fitting a binary covariate reflecting whether a female owl had successfully raised offspring to the fledgling stage the previous year. Lastly, we investigated whether the likelihood of an individual completing a breeding attempt to the fledging stage was related to clutch size, taking clutch size as a proxy for the extent to which an individual had already allocated resources towards the current reproductive attempt. All descriptive statistics are shown with the standard deviation (SD).
Juvenile survival
As recapture data were not available for male owls in all years, our analysis of juvenile owl survival was based on female owls only, ringed as chicks between 1985 and 2012 (N = 1,082), with the last recapture of individuals in 2013. The sex of individuals never recaptured as adults or sexed as chicks using DNA was unknown, as juvenile owls cannot be accurately sexed without molecular analyses. However, the sex ratio of chicks born in our study site was even, i.e. 1:1 (N = 312, over 4 years; Appleby et al. 1997). Consequently, we randomly assigned as females a number of unsexed chicks equal to half the number of chicks born each year minus the number known to be female, as done in previous analyses [START_REF] Nichols | Estimation of sexspecific survival from capture-recapture data when sex is not always known[END_REF][START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF]. The rest of these chicks were assumed to be males and excluded from the analysis. Owls were only recaptured when breeding, and owls usually start breeding between the ages of 1 and 4 (89% before age 3; [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF]). Recapture probabilities were therefore modelled as time-dependent and age-specific (age classes 1, 2-3, 4+), as done in [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF]. This analysis was carried out in E-SURGE version 1.9.0 [START_REF] Choquet | Program E-SURGE: A Software Application for Fitting Multievent Models[END_REF]. Goodness-of-fit tests were carried out in U-CARE 2.3.2 [START_REF] Choquet | U-CARE 2.2 User's Manual[END_REF]. In this analysis only, rather than using spring vole densities (measured in March) as the measure of food availability, we used autumn densities of field voles (measured in September-October), as they have previously been shown to be more closely related to changes in juvenile tawny owl survival [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF][START_REF] Millon | Natal conditions alter age-specific reproduction but not survival or senescence in a long-lived bird of prey[END_REF][START_REF] Millon | Dampening prey cycle overrides the impact of climate change on predator population dynamics: A long-term demographic study on tawny owls[END_REF]. Temporal proxies of predation risk were the same as those used in the previous analyses. Spatial proxies of predation risk were calculated as before, but using the natal nestbox, and were modelled as an individual covariate. Model selection in all of the above analyses was based on Akaike's information criterion corrected for small sample size (AICc; [START_REF] Burnham | Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd Editio[END_REF]).
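The random assignment of unsexed chicks to the female sample described above can be sketched as follows; the function and object names are purely illustrative, since the authors' actual implementation is not given.

```r
# For one cohort year: 'unsexed_ids' are ring numbers of chicks of unknown sex,
# 'n_chicks' is the number of chicks ringed that year and 'n_known_female' the
# number already known to be female.
assign_females <- function(unsexed_ids, n_chicks, n_known_female) {
  n_to_assign <- round(n_chicks / 2) - n_known_female  # assumes an even 1:1 sex ratio
  sample(unsexed_ids, size = max(n_to_assign, 0))      # these chicks are treated as females
}
```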
Results
Breeding propensity
When averaged across years, the probability of a female breeding after becoming reproductively active was 0.78 ± 0.17 (range: 0.21-0.99). Variation in breeding propensity appeared most strongly related to changes in extrinsic conditions (Table 1). In years when local goshawk abundance was relatively low (fewer than 10 home-ranges occupied) breeding propensity increased from an average of 0.33 ± 0.18, when food availability was also low, to an average of 0.95 ± 0.06 in years of high food availability (Fig. 1a). However, in years when goshawk abundance was high the relationship between breeding propensity and food availability was less apparent (Fig. 1a). Breeding propensity also appeared to vary according to intrinsic attributes (proxies of age and previous allocation of resources to reproduction); however the association between breeding propensity and intrinsic attributes was much weaker in comparison to the relationship with extrinsic factors (Fig. 1b; Table 1; Appendix S4). Breeding propensity was estimated to increase slightly as owls aged. However this trend was only observed for individuals who had successfully fledged chicks the year before.
Clutch size
Owl clutch size averaged 2.85 ± 0.82 (range: 1-6; N = 850), with 92.8% of clutches containing 2-4 eggs. The largest clutches were laid in years of high spring vole densities, with clutch size increasing from an average of 2.38 [2.28-2.48; 95% CI] in years when vole densities were below 50 voles ha-1 to 2.98 [2.82-3.14; 95% CI] in years when vole densities were above 150 voles ha-1 (Fig. 2). There was no evidence to suggest that variation in clutch size was related to predation risk or female age (Table 2; Appendix S5).
Completing a breeding attempt to the fledging stage
On average, 96% of breeding attempts (N = 813) were completed. Clutch size and connectivity to goshawk nest sites explained the most variation in whether a breeding attempt was completed (Table 3; Appendix S6). Irrespective of clutch size, the percentage of breeding attempts observed to reach the fledging stage was close to 100% (N = 193/194) for owls breeding in territories not well connected to goshawk nest sites, hence less exposed to predation (i.e. in territories not in close proximity to many goshawk nest sites; Fig. 3). However, for owls breeding in territories relatively well connected to goshawk nest sites, hence more exposed to predation (i.e. in close proximity to several goshawk nest sites in that year), the probability of breeding attempts being completed decreased from 97.5% (N = 39/40 breeding attempts) when owls had clutches containing four or more eggs to 87.7% (N = 57/65 breeding attempts) when clutches contained 1-2 eggs (Fig. 3).
Juvenile survival
Juvenile survival averaged 0.18 ± 0.02 (SE). Autumn vole densities explained the most variation in juvenile survival (slope on logit scale: β = 0.42 ± 0.1; %Deviation = 34.5). Juvenile survival was estimated to increase with autumn vole densities (Appendix S7). There was no evidence of a relationship between juvenile owl survival and any proxy of predation risk (Table 4).
Discussion
In this study we examined how reproduction in female tawny owls (breeding propensity, clutch size and nest abandonment) was influenced by both extrinsic (food availability and predation risk) and intrinsic factors (age, previous and current allocation of resources towards reproduction) and any interactions between these factors. Our main findings were as follows: i) breeding propensity was highest in years when food (field vole densities in spring) was abundant and predation risk (goshawk abundance) was low. However, in years when goshawk abundance was relatively high the association between breeding propensity and food availability was less apparent. Breeding propensity also appeared to be related to intrinsic attributes (but to a lesser extent than extrinsic factors), as owls which had successfully fledged chicks the year before were slightly more likely to breed as they aged compared to owls which had not fledged chicks. ii) Clutch size was positively associated with spring vole densities but was unrelated to predation risk or any intrinsic attributes examined.
iii) On average 96% of breeding attempts were completed, however owls with small clutches (1-2 eggs), and breeding in territories more exposed to goshawk predation, were less likely to complete their breeding attempt compared to owls with larger clutches breeding in less exposed territories. iv) Juvenile owl survival was positively correlated with food availability in the autumn but was unrelated to predation risk. Overall, these findings represent rare evidence about how extrinsic and intrinsic factors interact to shape reproductive decisions in a long-lived iteroparous predator.
Breeding propensity
Breeding propensity was closely correlated with food availability (measured as field vole densities in spring) in the early years of the study, when predation risk (goshawk abundance) was relatively low (Fig. 1a). However, as predator abundance increased over the study period, the positive effect of food availability on breeding propensity diminished. These results indicate that breeding propensity is not purely constrained by the amount of food available prior to the breeding season. They also suggest that owls may be capable of assessing changes in predation risk and make facultative decisions about whether to allocate resources to reproduction, as shown for other species [START_REF] Sih | The effects of predators on habitat use, activity and mating behaviour of a semi-aquatic bug[END_REF][START_REF] Candolin | Reproduction under predation risk and the trade-off between current and future reproduction in the threespine stickleback[END_REF][START_REF] Ghalambor | Fecundity-survival trade-offs and parental risktaking in birds[END_REF][START_REF] Zanette | Perceived Predation Risk Reduces the Number of Offspring Songbirds Produce per Year[END_REF]. Unfortunately, we were unable to determine the exact nature of the link between food availability, predation risk and the observed changes in owl reproduction, as our approach was necessarily correlative, given the spatial scale of the processes considered. Therefore, we cannot rule out the possibility that changes other than average vole density in spring or goshawk abundance may have co-occurred to cause the observed variation in breeding propensity. However, we also examined whether changes in the spatial dynamics of food availability and predation risk were related to breeding propensity. Life history theory predicts that individuals should only forgo breeding when the cost of not breeding is compensated for by future reproductive gains [START_REF] Stearns | The Evolution of Life Histories[END_REF]). An analysis of breeding female owl survival in our study site suggests that it was lowest in years when goshawk abundance was relatively high and owl food availability was low (unpublished data). Consequently, we suggest that the higher breeding propensity observed in years when goshawks were abundant and food was scarce could plausibly reflect that these environmental conditions (being adverse for owls for a number of consecutive years towards the end of the study period) have made intermittent breeding a less beneficial strategy, as the cost of not breeding now is less likely to be compensated for in the future.
We also found evidence suggesting that a detectable but relatively small amount of variance in breeding propensity was associated with the age of the female owl and her previous allocation of resources towards reproduction, as breeding propensity increased slightly with age for females which had fledged chicks the previous year. This could indicate that some individuals are inherently of "high quality" and do not face a strong trade-off between current and future investment in reproduction. While the effect sizes were relatively small in comparison with the strength of the correlations between breeding propensity and extrinsic conditions (food availability and predation risk; Fig. 1), our results demonstrate the dual intrinsic and extrinsic influence on the decision to reproduce.
Clutch size
The strong positive effect of food availability on clutch size is concordant with results from several other studies (Fig. 2; [START_REF] Ballinger | Reproductive strategies: food availability as a source of proximal variation in a lizard[END_REF][START_REF] Crawford | The influence of food availability on breeding success of African penguins Spheniscus demersus at Robben Island, South Africa[END_REF][START_REF] Lehikoinen | The impact of climate and cyclic food abundance on the timing of breeding and brood size in four boreal owl species[END_REF]).
However, we found no evidence of an association between clutch size and any proxy of predation risk. Due to the latitude of our study site, nights are relatively long prior to the breeding season. Hence, there is little overlap in the activity-periods of nocturnal tawny owls and diurnal goshawks, compared to late spring and summer when nights are relatively short.
Furthermore, female goshawks are thought to leave Kielder Forest in winter, returning in February, just prior to owls laying (unpublished data). Therefore, predation risk for owls might potentially be relatively low prior to the breeding season, when female owls are building up the body reserves needed for breeding, which could, in part, explain why we found no evidence of a relationship between clutch size and predation risk.
Completing a breeding attempt to the fledging stage
As predicted by life-history theory, individuals who had allocated more towards reproduction (e.g. by laying larger clutches), were more likely to continue their breeding attempt to the fledging stage, a finding consistent with previous studies (e.g. [START_REF] Delehanty | Effect of clutch size on incubation persistence in male Wilson's Phalaropes (phalaropus tricolor[END_REF].
Predation risk was the only extrinsic predictor of whether breeding attempts reached the fledging stage, with individuals breeding in territories more exposed to predation risk being less likely to complete a breeding attempt (Fig. 3); a result congruent with another study examining the effect of spatial variation in predation risk on reproductive success [START_REF] Sergio | Spatial refugia and the coexistence of a diurnal raptor with its intraguild owl predator[END_REF]. Goshawks start displaying over territories and building nests in late March and April in the UK [START_REF] Kenward | Breeding suppression and predator-prey dynamics[END_REF], hence are likely to become even more conspicuous to owls, after owls have already committed to breeding. Furthermore, predation risk for both adult and fledgling owls increased throughout the breeding season [START_REF] Petty | The decline of common kestrels Falco tinnunculus in a forested area of northern England: the role of predation by Northern Goshawks Accipiter gentilis[END_REF][START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF]. Therefore, the tendency of owls not to complete breeding attempts in territories where predation risk is presumably high, is consistent with females (having already commenced breeding), attempting to reduce their own vulnerability to predation as the breeding season progresses. Alternatively, as 23% of breeders which did not complete a breeding attempt were never recaptured in the study site again, the higher failure rates in territories well connected to areas of high goshawk activity could also reflect that some parents in those territories were predated by goshawks and hence were unable to complete the breeding attempt.
Juvenile survival
Our analysis confirmed that juvenile owl survival was positively related to food availability [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF][START_REF] Millon | Natal conditions alter age-specific reproduction but not survival or senescence in a long-lived bird of prey[END_REF][START_REF] Millon | Dampening prey cycle overrides the impact of climate change on predator population dynamics: A long-term demographic study on tawny owls[END_REF]. Estimates of juvenile owl survival were lowest in low vole years (Appendix S7). If mothers were able to predict the food conditions that their offspring would experience they should be less inclined to allocate resources towards reproduction in low vole years, due to the reduced probability of these offspring being recruited into the population. This may in part explain why individuals allocated relatively few resources towards reproduction (i.e. smaller clutch sizes) in years when food was scarce.
Reproductive strategies in relation to changing environmental conditions
A reproductive strategy can be defined as the set of decisions which influence the number of offspring an individual produces. Owl breeding strategies appeared to change in response to extrinsic conditions. Individuals allocated more resources towards reproduction (in terms of breeding propensity and clutch size) in years when food was abundant (Fig. 1 & Fig. 2).
Although we found no evidence to support our prediction that owls would attempt to minimise their vulnerability to predation by breeding less frequently or laying smaller clutches in years when predation risk was high, we did find evidence to suggest that owls responded to changes in predation risk by making facultative decisions about whether to continue with their breeding attempt. However, the observed increase in incomplete nesting attempts with increasing predation risk may also be partly due to parent(s) being killed, hence being unable to complete the breeding attempt, rather than a facultative decision not to continue the attempt. There was no year-to-year collinearity between our temporal covariates of predation risk and food availability. However, when averaged over a larger time scale (5 years) these covariates were correlated, and hence both environmental conditions changed simultaneously in opposite ways, with spring vole densities decreasing and predation risk increasing over the course of the study period. Therefore, we were unable to fully disentangle the effects of food availability and predation risk on owl breeding decisions. As the overall percentage of failed breeding attempts was very low (4% on average), the main reproductive decisions influencing reproductive output were breeding propensity and then clutch size. Indeed, the proportion of the population breeding and average clutch size explained 83% and 16%, respectively, of the total variation in annual reproductive success of the tawny owl population (measured as the average number of chicks fledged per occupied owl territory), whereas whether breeding attempts were completed explained only 0.1% of the total variation in reproductive success (see Appendix S8). Consequently, food availability seemed to have a greater impact on breeding propensity than changing predation risk (Fig. 1, Table 1) and to be the main extrinsic factor driving variation in reproductive output, thus shaping reproductive strategies in tawny owls. However, the strength of the relationship between reproductive output and food availability weakened as predation risk increased.
As food availability declined (specifically as vole populations switched from high to low amplitude cycles; [START_REF] Cornulier | Europe-wide dampening of population cycles in keystone herbivores[END_REF]) and predation risk increased, tawny owls seemed to breed more frequently, but invested less per breeding attempt. By spreading reproductive effort more evenly across years, a 'bet-hedging' reproductive strategy minimises variation in reproductive success and can actually increase an individual's fitness in certain situations [START_REF] Slatkin | Hedging one's evolutionary bets[END_REF][START_REF] Starrfelt | Bet-hedging-a triple trade-off between means, variances and correlations[END_REF]. Consequently, given that owl survival was lowest in years when food was scarce and goshawk abundance was high, our results could reflect that owls have switched from an intermittent reproductive strategy, in which resources are saved to invest more in one or a few future reproductive attempts, to a 'bet-hedging' reproductive strategy.
Together our results suggest that extrinsic conditions and intrinsic attributes have a combined and interactive effect on reproductive decisions. Changes in extrinsic conditions, particularly food availability, were the main factors shaping owl reproductive decisions, as the association between intrinsic attributes and owl breeding decisions was relatively weak in comparison.
This could in part be due to environmental variation in this system being relatively high because of the cyclical dynamics of vole populations, and the relatively recent recovery of an apex predator, thus swamping the contribution of intrinsic attributes to reproductive strategies. Although many of our results were in line with previous studies and theoretical predictions, our comprehensive approach highlights the complex nature of how intrinsic and extrinsic trade-offs act in combination to shape tawny owl reproduction. Furthermore, the length of this study has enabled us to provide some empirical evidence, albeit correlative, of long-lived predators altering their life-history strategies in response to changes in multiple interacting environmental factors.
Fig. 1. Variation in the probability of adult female tawny owls breeding in relation to changes …
Fig. 3. The mean proportion of tawny owl breeding attempts which were observed to reach …
Table 1. Parameter estimates and model selection examining how tawny owl breeding propensity varies in relation to fluctuations in predation risk (total goshawk abundance; local goshawk abundance; connectivity of the owl's territory to all predator nest sites; distance the owl was nesting from the nearest predator) and food availability (spring vole densities; spatial variation in vole densities across the study site). Breeding propensity was also analysed in relation to whether the individual had successfully bred the previous year and the number of years elapsed since the owl first started breeding (a measure of age). The most parsimonious model is emboldened.
Model np Estimate SE ΔAICc
1. Null 3 27.99
2. Total goshawk 4 0.40 0.24 27.37
3. Local goshawk 4 0.45 0.25 27.08
4. Connectivity to goshawks 4 -0.03 0.12 29.97
5. Nearest goshawk 4 0.05 0.10 29.75
6. Spring voles density 4 1.09 0.26 16.20
7. Categorical spring vole density (CSV) 6 -0.83 0.56 23.87
Spatial variation in vole densities (SVVD) -0.62 0.44
CSV x SVVD 0.03 0.60
8. Breeding success previous year (BS) 4 0.34 0.22 27.81
9. Years since 1st reproduction (Y1st) 4 0.07 0.03 24.45
10. Spring voles 5 1.14 0.23 10.84
+ Local goshawk 0.51 0.18
11. Spring voles (SV) 6 1.15 0.23 6.29
+ Local goshawk (LG) 0.14 0.21
SV x LG -0.68 0.26
12. Breeding success previous year 5 0.34 0.23 24.33
+ Years since 1st reproduction 0.07 0.03
13. Breeding success previous year 6 -0.30 0.35 21.03
Years since 1st reproduction -0.01 0.05
BS x Y1st 0.14 0.06
14. Breeding success previous year 9 -0.34 0.35 0
Years since 1st reproduction -0.01 0.05
BS x Y1st 0.13 0.06
Spring voles 1.17 0.23
Local goshawk 0.13 0.22
SV x LG -0.69 0.26
Table 2. Parameter estimates and model selection to determine whether variation in tawny owl investment in reproduction (clutch size) was related to proxies of predation risk (total goshawk abundance; local goshawk abundance; connectivity of the owl's territory to all predator nest sites; distance the owl was nesting from the nearest predator), food availability (spring vole densities; spatial variation in vole densities across the study site) and intrinsic attributes (whether the individual had successfully bred the previous year and the number of years since the individual's first breeding attempt). The most parsimonious model is highlighted in bold.
Model np Estimate SE ΔAICc
1. Null 3 17.11
2. Total goshawk 4 -0.035 0.032 17.99
3. Local goshawk 4 -0.017 0.033 18.88
4. Connectivity to goshawk 4 0.007 0.024 19.04
5. Nearest goshawk 4 -0.007 0.022 19.02
6. Spring vole density 4 0.125 0.023 0.00
7. Categorical spring vole density (CSV) 6 -0.130 0.059 6.52
Spatial variation in vole densities (SVVD) -0.068 0.036
CSV x SVVD -0.020 0.060
8. Breeding success previous year 4 0.028 0.046 18.75
9. Years since 1st reproduction 4 0.002 0.006 18.97
Table 4. Model selection for annual survival of female tawny owls in their first year of life between 1985 and 2013 in relation to predation risk (total goshawk abundance; local goshawk abundance; connectivity of the owl's territory to all predator nest sites; distance the owl was nesting from the nearest predator) and food availability (autumn vole density). Recapture probability was modelled as [a(1, 2-3, 4+) + t]. The most parsimonious model is emboldened.
Acknowledgments
We thank B. Sheldon and two anonymous reviewers for all their helpful comments on a previous version of the manuscript. Our thanks also go to M. Davison, B. Little, P. Hotchin, D. Anderson and all other field assistants for their help with data collection, and to Forest Enterprise, particularly Tom Dearnley and Neville Geddes, for facilitating work in Kielder Forest. This work was partly funded by Natural Research Limited and a Natural Environment Research Council studentship NE/J500148/1 to SH and grant NE/F021402/1 to XL. Forest Research funded all the fieldwork on goshawks, tawny owls and field voles during 1973-1996. In addition, we are grateful to English Nature and the BTO for issuing licences to visit goshawk nest sites.
Data accessibility
All data associated with the study that are not given in the text are available in the Dryad Digital Repository: http://dx.doi.org/10.5061/dryad.6n579.
Table 3. Model estimates and selection for analyses investigating the relationship between the probability of tawny owl breeding attempts being completed to the fledging stage and proxies of predation risk (total goshawk abundance; local goshawk abundance; connectivity of the owl's territory to all predator nest sites; distance the owl was nesting from the nearest predator), food availability (spring vole densities; spatial variation in vole densities across the study site) and attributes intrinsic to the breeder (whether they had successfully bred the previous year and the number of years since their first breeding attempt) and the breeding attempt (clutch size). The most parsimonious model is emboldened.
Supporting Information
The following supporting information is available for this article:
Appendix S1: Estimating the number of tawny owls killed each year by the goshawk population.
Appendix S2: Method used to calculate the connectivity measure of predation risk for each owl territory.
"18913",
"843982"
] | [
"188653",
"57186"
] |
01766430 | en | ["sde"] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01766430/file/25_Lieury%20Millon%20et%20al%202017%20Mam%20Biol_Ageing%20in%20vixens.pdf
Nicolas Lieury
Nolwenn Drouet-Hoguet
Sandrine Ruette
email: [email protected]
Sébastien Devillard
Michel Albaret
Alexandre Millon
Reproductive senescence in the red fox
Rural populations of the red fox Vulpes vulpes show little evidence of reproductive senescence
Keywords: Litter size, Vulpes vulpes, Placental scar count, Embryo count, Reproductive senescence
The ageing theory predicts fast and early senescence for fast-living species. We investigated whether the pattern of senescence of a medium-sized, fast-living and heavily-culled mammal, the red fox (Vulpes vulpes), fits this theoretical prediction. We used cross-sectional data from a large-scale culling experiment of red fox conducted over six years in five study sites located in two regions of France to explore the age-related variation in reproductive output. We used both placental scars and embryos counts from 755 vixens' carcasses aged by the tooth cementum method (age range: 1-10), as proxies for litter size. Mean litter size per vixen was 4.7 ± 1.4. Results from Generalized Additive Mixed Models revealed a significant variation of litter size with age. Litter size peaked at age 4 with 5.0 ± 0.2 placental scars and decreased thereafter by 0.5 cubs per year. Interestingly, we found a different age-specific variation when counting embryos which reached a plateau at age 5-6 (5.5 ± 0.2) and decreased slower than placental scars across older ages, pointing out embryo resorption as a potential physiological mechanism of reproductive senescence in the red fox. Contrary to our expectation, reproductive senescence is weak, occurs late in life and takes place at an age reached by less than 11.7% of the population such that very few females exhibit senescence in these heavily culled populations.
Introduction
Senescence, or ageing is the gradual deterioration of physical condition and cellular functioning, which results in a decline in fitness with age [START_REF] Kirkwood | Why do we age?[END_REF][START_REF] Sharp | Reproductive senescence in a cooperatively breeding mammal[END_REF]. Ageing can be expressed as a reduction in survival probability and/or a deterioration of reproductive efficiency, including decrease in the probability to give birth and reduced litter size. It is now recognized that both reproductive and actuarial senescence are widespread in the wild. Senescence rate greatly vary across individuals [START_REF] Bouwhuis | Individual variation in rates of senescence: natal origin effects and disposable soma in a wild bird population[END_REF], populations [START_REF] Lemaître | Early-late life trade-offs and the evolution of ageing in the wild[END_REF] and species [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF][START_REF] Nussey | Senescence in natural populations of animals: widespread evidence and its implications for biogerontology[END_REF]. Life-history theory provides a framework for predicting the variability of ageing across species. Major life-history traits, such as the age at first reproductive event, reproductive lifespan and number and size of offspring, vary across species, even when bodysize is controlled for [START_REF] Bielby | The fast-slow continuum in mammalian life history: an empirical reevaluation[END_REF][START_REF] Gittleman | Carnivore life history patterns: Allometric, phylogenetic, and ecological associations[END_REF][START_REF] Harvey | Life history variation in Primates[END_REF][START_REF] Read | Life history differences among the eutherian radiations[END_REF][START_REF] Stearns | The influence of size and phylogeny on patterns of covariation among life-history traits in the mammals[END_REF]. Such response led to the concept of "fast-slow continuum" of life-history variations, which categorises species from short-lived and highly reproductive species to long-lived species showing reduced reproductive output [START_REF] Cody | A general theory of clutch size[END_REF][START_REF] Cole | The population consequences of life history phenomena[END_REF][START_REF] Dobzhansky | Evolution in the tropics[END_REF][START_REF] Gaillard | An analysis of demographic tactics in bird and mammals[END_REF][START_REF] Lack | The significance of clutch size[END_REF][START_REF] Promislow | Living fast and dying young: A comparative analysis of life-history variation among mammals[END_REF][START_REF] Read | Life history differences among the eutherian radiations[END_REF][START_REF] Stearns | The influence of size and phylogeny on patterns of covariation among life-history traits in the mammals[END_REF]. As synthesised by [START_REF] Gaillard | Life Histories, Axes of Variation[END_REF], the fast-slow continuum can be interpreted as the range of possible solutions to the trade-off between reproduction and survival. The variation in ageing pattern along the continuum of senescence has been assessed by [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF]. These authors showed that both agespecific mortality and fertility patterns were strongly heterogeneous among vertebrates. 
Using data from 20 populations of intensively monitored vertebrates, they concluded that ageing is influenced by the species' position on the fast-slow continuum, which sets the principles of a continuum of senescence that predicts fast and early senescence for fast-living species [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF].
The red fox Vulpes vulpes is a medium-sized carnivore, known to have a fast reproductive rate with high productivity and early sexual maturity [START_REF] Englund | Some aspects of reproduction and mortality rates in Swedish foxes (Vulpes vulpes), 1961 -63 and 1966 -69[END_REF][START_REF] Harris | Age-related fertility and productivity in red foxes, Vulpes vulpes, in suburban London[END_REF][START_REF] Harris | Demography of two urban fox (Vulpes vulpes) populations[END_REF][START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF]. According to the life history theory of ageing, red fox is therefore expected to display an early and fast senescence. To date, the demography of red fox has been mainly studied in anthropogenic contexts, and evidence of senescence in this species is mixed [START_REF] Artois | Reproduction du renard roux (Vulpes vulpes) en France: rythme saisonnier et fécondité des femelles[END_REF][START_REF] Cavallini | Reproduction of the red fox Vulpes vulpes in Central Italy[END_REF][START_REF] Harris | Age-related fertility and productivity in red foxes, Vulpes vulpes, in suburban London[END_REF][START_REF] Harris | Demography of two urban fox (Vulpes vulpes) populations[END_REF][START_REF] Marlow | Demographic characteristics and social organisation of a population of red foxes in a rangeland area in Western Australia[END_REF].
In France, red foxes are hunted or even culled when locally classified as a pest species preying upon farmed and game species. Between 2002 and 2011, we conducted a fox culling experiment to measure the impact of removals on fox population dynamics in two rural regions [START_REF] Lieury | Compensatory Immigration Challenges Predator Control: An Experimental Evidence-Based Approach Improves Management[END_REF]. This landscape-scale experiment thus provided a unique opportunity to study the age-specific variation in reproduction. We addressed the variation in reproductive output with age, expecting an early onset of senescence. Recent papers have called for a better understanding of heterogeneity among life-history traits in the wild, so as to improve the detection of cryptic senescence and its underlying mechanisms [START_REF] Hewison | Phenotypic quality and senescence affect different components of reproductive output in roe deer[END_REF][START_REF] Massot | An integrative study of ageing in a wild population of common lizards[END_REF][START_REF] Nussey | Senescence in natural populations of animals: widespread evidence and its implications for biogerontology[END_REF]. Thus, looking at a single reproductive trait might be misleading regarding senescence. Therefore, we analysed two proxies of litter size (counts of placental scars and embryos), which may shed light on the underlying physiology of reproductive senescence.
Material and Methods
Study area and data collection
Data were obtained from culling campaigns performed as part of a large-scale culling experiment on the red fox in two French regions over six years [START_REF] Lieury | Compensatory Immigration Challenges Predator Control: An Experimental Evidence-Based Approach Improves Management[END_REF]. The carcasses of 899 vixens were collected in five distinct rural study areas (average size: 246 ± 53 km²; Fig. 1). All sites were located within the same latitudinal range: in Brittany (sites A, B and C; ≥10 km apart; 48°10'N, 03°00'W) and Champagne (sites D and E, separated by the Seine River; 48°40'N, 04°20'E). The Brittany landscape was dominated by bocage, mixing livestock farming and arable land, with little forested area. In contrast, the Champagne sites presented open field systems (mostly cereals and vineyards) and a larger forest cover compared to Brittany. The study took place from 2002 to 2011 but was not synchronous across all five sites. Hunting occurred between October and February, and trapping occurred between December and April.
Culling at the den occurred in April. Night shooting occurred only in sites D-E between December and May (see Lieury et al., 2015 for details).
Reproductive parameters
An estimation of litter size could be made for 755 reproductive females with an undamaged uterus among the 899 vixens collected (84%; Table 1). We used the number of embryos and the number of placental scars as two proxies for litter size (on 394 and 361 individuals, respectively). When counting embryos, only prenatal losses during early-pregnancy stages are considered, whereas with placental scar counts all losses between implantation and birth are taken into account. For pregnant females (i.e. females which were culled from February to April), embryos were counted. For the others, uteri were collected 12-48 h after the death of the animal, soaked in water before freezing and stored at -20°C until examination. Uterine horns were opened longitudinally and examined for placental scars [START_REF] Elmeros | Placental scar counts and litter size estimations in ranched red foxes (Vulpes vulpes)[END_REF][START_REF] Lindström | Placental scar in the red fox (Vulpes vulpes L.) revisited[END_REF]. When the evaluation of litter size was questionable, we used a staining method to facilitate the identification of active placental scars [START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF].
The staining method allows for the identification of atypical scars, i.e. scars with an unusual appearance when compared to others from the same uterus or from other uteri examined at the same period. However, it does not permit distinguishing scars that could have persisted from earlier pregnancies from those due to resorption or abortion [START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF]. Therefore, we did not estimate resorption rates from counts of atypical placental scars.
Age determination and age classes
The age of foxes at death was determined from the carcasses based on the number of annual growth lines visible in the tooth cementum, the date of death and the expected date of birth on April 1st [START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF]. Canine teeth, or premolar teeth when canines were unavailable or damaged, were extracted from the lower jaw following Matson's laboratories (Milltown, MT, USA) procedures [START_REF] Harris | Age determination in the red fox (Vulpes vulpes) -an evaluation of technique efficiency as applied to as sample of suburban fixes[END_REF]. Foxes were assigned to age-classes based on their recruitment into the adult population on February 1st of the year following birth (i.e. at the age of 10 months old). Animals between 10 and 22 months of age were classified as age-class 1 (yearlings) whereas older ones were classified as age-class 2, 3, and up to 10.
Modelling and data analysis
Although the Poisson distribution has often been applied to counts of offspring such as litter size, the Gaussian distribution actually fits such reproductive data better, as they are typically associated with a narrower variance than expected under a Poisson distribution (Devenish-Nelson et al., 2013a; [START_REF] Mcdonald | A Comparison of regression models for small counts[END_REF]). We thus developed a model for age-dependent variation in litter size accounting for both among-site and among-year variability [START_REF] Artois | Reproduction du renard roux (Vulpes vulpes) en France: rythme saisonnier et fécondité des femelles[END_REF]; Devenish-Nelson et al., 2013b; [START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF], with a Gaussian error distribution.
We used generalized additive mixed models (GAMM; [START_REF] Wood | Generalized additive models: an introduction with R[END_REF]) to explore the relationship between vixen age and litter size without an a priori hypothesis about its shape [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF]. Year and geographic area (study sites, 'Site', or region, 'Region') were tested as random factors to account for their potential confounding effects on litter size. Litter size may indeed depend on i) variations in habitat quality among sites or regions, ii) inter-annual variations in climate conditions or resource availability and iii) spatio-temporal variations of population densities between sites or regions.
Finally, we also tested the effect of the type of measure for litter size (i.e. placental scars vs. embryos) by adding a fixed effect 'Type' in the model.
We thus developed a full GAMM for the variations of litter size (LS) as follows: LS = s(Age)×Type + Age|Site + 1|Year,
The bars indicate the addition of a random effect of the 'Year' on the intercept (1) or of the site on the slope (Age). The parameterization s(Age)×Type denotes that the non-linear effect of vixen age was modelled independently for each type of the litter size proxy 'Type'.
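As an illustration, the full model above could be coded with random-effect smooths in the mgcv package (the package cited later in this section); the call below is a sketch under assumed data-frame and factor names, not the authors' actual code, and a gamm()-type call would be an equally valid way to express the same structure.

```r
# Illustrative translation of LS = s(Age) x Type + Age|Site + 1|Year;
# 'fox' is an assumed data frame with numeric Age and LS, and factors
# Type (placental scars vs. embryos), Site and Year.
library(mgcv)

m_full <- gam(LS ~ Type + s(Age, by = Type)   # age effect modelled separately for each proxy
                 + s(Year, bs = "re")         # random intercept of year
                 + s(Site, bs = "re")         # random intercept of site
                 + s(Site, Age, bs = "re"),   # random slope of age among sites
              data = fox, method = "REML")    # REML when comparing random structures
```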
Following [START_REF] Zuur | Mixed effects models and extensions in ecology with R[END_REF], we started from the full random model and evaluated whether the age-specific variation in LS was similar among sites (random parameterisations: Age|Site vs. 1|Site), whether the spatial variation among regions was negligible when compared to the spatial variation among sites (1|Region vs. 1|Site) and whether the random effect of the year (1|Year) was important. According to [START_REF] Zuur | Mixed effects models and extensions in ecology with R[END_REF], parameters were estimated using Restricted Maximum Likelihood (REML) for random effects and Maximum Likelihood (ML) for fixed effects. Model selection was based on the AICc (Akaike Information Criterion corrected for small sample size; [START_REF] Burnham | Model selection and multimodel inference: a practical information-theoric approach[END_REF]). Once the random effects were selected, we performed an AICc-based model selection of fixed effects [START_REF] Zuur | Mixed effects models and extensions in ecology with R[END_REF] to test whether the type of measure affected age-specific variation in LS.
Finally, we estimated the rate of senescence by fitting least-squares linear regression models through the mean values of each litter-size proxy, from the onset of senescence onwards, as predicted by the most parsimonious GAMM. Each point was weighted by the inverse of its variance so as to account for the small number of individuals in the oldest age classes.
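A minimal sketch of this weighted regression step is given below; the object and column names are assumed for illustration.

```r
# 'pred' is an assumed data frame of GAMM-predicted mean litter size per age
# class (columns: age, mean_LS, var_LS), restricted to ages at or beyond the
# onset of senescence (e.g. age 4 for placental scars).
senescence_fit <- lm(mean_LS ~ age, data = pred, weights = 1 / pred$var_LS)
coef(senescence_fit)["age"]  # estimated rate of senescence (cubs per year)
```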
All analyses were carried out in R 2.15.1 using the packages mgcv and AICcmodavg (R Development Core Team, 2012; [START_REF] Wood | Generalized additive models: an introduction with R[END_REF]). Descriptive statistics of the data are presented as mean ± 1 SD and model estimates as mean ± 1 SE.
Results
Pooled over sites, years and age, litter size averaged 4.9 ± 1.4 when based on embryo counts and 4.5 ± 1.4 from counts of placental scars (see Table 2 for detailed results by age class).
From the GAMMs, all models including the random effects of the Year, the Site or the Region and the fixed effects of Age and Type had substantial support (ΔAICc < 2, Table 3). We retained the simplest of those models (Table 3). The placental scar count increased up to 5.0 ± 0.2 at the age of 4 (black line and dots in Fig. 2). From the age of 4 onwards, it declined significantly at a rate of senescence of 0.5 ± 0.02 cubs per year (Fig. 2). This pattern was consistent across study areas (random effect '1|Site' retained; Table 3.A), thereby suggesting that this senescence pattern is likely to be a generalized process in red fox populations. We found divergence in senescence patterns between the two proxies of litter size (fixed effect s(Age)×Type retained; Table 3.B). Embryo counts peaked at age five, but the rate of senescence in embryo counts thereafter was much reduced compared to placental scars (0.1 ± 0.01 cubs per year; Fig. 2). Finally, only a small proportion of females were killed after the ages of 4 and 5 (11.7 and 5.6% respectively; median age at death: 2 years, Fig. 2), such that very few females exhibited senescence in these heavily culled populations.
Discussion
We took advantage of a large dataset collected over 10 years from a landscape-scale culling experiment in rural France to investigate the deterioration in reproductive output with age in the red fox. Contrary to our expectation, our results revealed a weak and late reproductive senescence in this species. The onset of senescence occurred late (four years old) relative to the age structure of the population (median age at death: two years old). The decline in litter size after four or five years old, depending on the proxy used, was significant but clearly more pronounced for placental scar counts than for embryo counts, suggesting increased embryo resorption as a likely physiological mechanism of senescence. This weak and late senescence concerned very few females in the populations (i.e. less than 11.7% of the females in the population reached the age of the onset of senescence), so that the impact of senescence on the dynamics of these heavily culled populations is likely to be negligible.
Limits inherent to post-mortem and cross-sectional data for investigating senescence
Monitoring reproductive performance in red fox is challenging on a large scale, due to its nocturnal, cryptic and elusive behaviour. We used post-mortem examination of carcasses to measure litter size and age. Although these methods may overcome some of the challenges of studying reproduction in free-ranging carnivore populations, we are aware of the inherent weaknesses in their applications. First, we estimated red fox age from cementum annuli lines in teeth. Although the method is widely used in carnivores studies such as red fox [START_REF] Harris | Age determination of badgers (Meles meles) from tooth wear: the need for a pragmatic approach[END_REF] or hyaena [START_REF] Van Horn | Age estimation and dispersal in the spotted hyena (Crocuta crocuta)[END_REF], misclassification has been noted due to some animals that did not develop a cementum line in one year (Grau et al., 1970 on raccoons;King, 1991 on stoats;[START_REF] Matson | Progress in cementum aging of martens and fishers[END_REF]. Deposition of cementum annuli and tooth wear may also vary with diet, season and region [START_REF] Costello | Reliability of the cementum annuli technique for estimating age of black bears in New Mexico[END_REF] on black bears). The method has not been applied on red foxes of known age. Thus we could not rule out some misclassifications although not quantifiable. Working with dead animals, we used placental scars and embryos counts as proxies for litter size. Placental scars counts provide a possible overestimate of litter size, due to embryos resorption, prenatal mortality and stillborn litters [START_REF] Vos | Reproductive performance of the red fox, Vulpes vulpes, in Garmish-Partenkirchen, Germany, 1987-1992[END_REF][START_REF] Elmeros | Placental scar counts and litter size estimations in ranched red foxes (Vulpes vulpes)[END_REF]. Inversely in a certain time postpartum, litter size might be underestimated by placental scars count due to the regeneration of uterine tissues [START_REF] Harris | Age-related fertility and productivity in red foxes, Vulpes vulpes, in suburban London[END_REF][START_REF] Harris | Demography of two urban fox (Vulpes vulpes) populations[END_REF][START_REF] Heydon | Demography of rural foxes (Vulpes vulpes) in relation to cull intensity in three contrasting regions of Britain[END_REF][START_REF] Lindström | Placental scar in the red fox (Vulpes vulpes L.) revisited[END_REF][START_REF] Marlow | Demographic characteristics and social organisation of a population of red foxes in a rangeland area in Western Australia[END_REF][START_REF] Mcilroy | The reproductive performance of female red foxes, Vulpes vulpes, in central-western New South Wales during and after a drought[END_REF][START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF].
Our approach relies on the use of data from large-scale culling experiments to investigate senescence in five population replicates. Yet the inference of senescence from life-table studies using cross-sectional data has long been questioned. Indeed, the need to account for sources of heterogeneity, such as unequal sampling probability, individual heterogeneity, climate, density or early-life conditions, argues for following individuals throughout their life [START_REF] Gaillard | Senescence in natural populations of mammals: a reanalysis[END_REF][START_REF] Gaillard | An analysis of demographic tactics in bird and mammals[END_REF][START_REF] Nussey | Senescence in natural populations of animals: widespread evidence and its implications for biogerontology[END_REF][START_REF] Reid | Age-specific reproductive performance in red-billed choughs Pyrrhocorax pyrrhocorax: patterns and processes in a natural population[END_REF].
However, as non-selective methods of culling (trapping and hunting) were used, there is no reason to expect a bias toward individuals of low or high reproductive output, since the age of adult foxes could not be visually assessed. Moreover, we took into account the variability between populations by using samples from two contrasting regions and over several years. Nevertheless, it is important to consider both within-individual (improvement, senescence) and between-individual (selective appearance and disappearance) processes when estimating patterns of age-dependent reproduction [START_REF] Reid | Age-specific reproductive performance in red-billed choughs Pyrrhocorax pyrrhocorax: patterns and processes in a natural population[END_REF][START_REF] Van De Pol | Age-dependent traits: a new statistical model to separate within and between individual effects[END_REF]. For instance, if individuals with high reproduction have poorer survival, mean reproduction may decline at older ages because only individuals that invest little in reproduction survive. Selective disappearance has thus been found to partly mask age-related changes in reproductive traits in ungulates [START_REF] Nussey | Measuring senescence in wild animal populations: towards a longitudinal approach[END_REF][START_REF] Nussey | The rate of senescence in maternal performance increases with early-life fecundity in red deer[END_REF].
We had no means of checking for that kind of individual heterogeneity, determined by genetic and/or natal environmental conditions. However, we found senescence in both traits (i.e. numbers of placental scars and of embryos) and have no reason to expect a different sampling bias in vixens collected before or after parturition. Moreover, we did not observe the reduction in litter-size variance with age that would be expected in the case of selective appearance or disappearance (result not shown).
Besides, cross-sectional data are not systematically biased by individual heterogeneity, and earlier studies revealing reproductive senescence from such data have been validated a posteriori by longitudinal data [START_REF] Hanks | Reproduction of elephant, Loxodonta africana, in the Luangwa Valley, Zambia[END_REF]. Hence, we are confident that our approach provides a relatively accurate picture of the age-related pattern in red fox reproduction. Nonetheless, we call for long-term, individual-based longitudinal datasets to confirm senescence in free-ranging red fox populations.
Reproductive senescence in the red fox
Age-related reproductive output in the red fox has long been discussed, but without unanimous findings regarding senescence. Our results confirmed the increase in litter size with age among the young age-classes, with a maximum reached at 4-5 years old (see also [START_REF] Englund | Some aspects of reproduction and mortality rates in Swedish foxes (Vulpes vulpes), 1961 -63 and 1966 -69[END_REF][START_REF] Harris | Age-related fertility and productivity in red foxes, Vulpes vulpes, in suburban London[END_REF][START_REF] Lindström | Food limitation and social regulation in a red fox population[END_REF]). However, a decrease in litter size for older vixens has rarely been evidenced [START_REF] Artois | Reproduction du renard roux (Vulpes vulpes) en France: rythme saisonnier et fécondité des femelles[END_REF][START_REF] Cavallini | Reproduction of the red fox Vulpes vulpes in Central Italy[END_REF][START_REF] Marlow | Demographic characteristics and social organisation of a population of red foxes in a rangeland area in Western Australia[END_REF]. Moreover, litter size estimated from placental scars was even reported to be independent of age in several red fox populations (France: [START_REF] Artois | Reproduction du renard roux (Vulpes vulpes) en France: rythme saisonnier et fécondité des femelles[END_REF]; Central Italy: Cavallini and Santini, 1996; Denmark: Elmeros et al., 2003; and Western Australia: Marlow et al., 2000). Here we were able to reveal a weak pattern of reproductive senescence in vixens from five to ten years old, amounting to about one cub less every two years when considering placental scars. [START_REF] Harris | Age-related fertility and productivity in red foxes, Vulpes vulpes, in suburban London[END_REF] and [START_REF] Harris | Demography of two urban fox (Vulpes vulpes) populations[END_REF] described reproductive senescence for the first time in a London urban fox population: in a sample of 192 vixens, litter size significantly decreased in their fifth and sixth breeding seasons. Our results, obtained in rural areas where fox densities are lower, are consistent with those findings. Interestingly, in South-East Australia, i.e. in a context of invasion, reproductive parameters peaked in fifth- and sixth-year vixens, but vixens over eight years of age produced as many cubs as first-year breeders did [START_REF] Mcilroy | The reproductive performance of female red foxes, Vulpes vulpes, in central-western New South Wales during and after a drought[END_REF].
Reproductive senescence has been identified in several natural populations of mammals, including ungulates, primates and domestic livestock [START_REF] Beehner | The ecology of conception and pregnancy failure in wild baboons[END_REF][START_REF] Ericsson | Age-related reproductive effort and senescence in free-ranging moose, Alces alces[END_REF][START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF][START_REF] Nussey | The rate of senescence in maternal performance increases with early-life fecundity in red deer[END_REF][START_REF] Promislow | Senescence in Natural Populations of Mammals: A Comparative study[END_REF]. To date there is only limited evidence of reproductive senescence in carnivores, with most studies focusing on long-lived species such as lions [START_REF] Packer | Reproductive success of lions[END_REF] and bears [START_REF] Schwartz | Reproductive maturation and senescence in the female brown bear[END_REF] (but see Dugdale et al., 2011 on badgers). Only recently has senescence been detected in the free-ranging American mink, Neovison vison, a short-lived species with an early age at first parturition [START_REF] Melero | Density-and age-dependent reproduction partially compensates culling efforts of invasive non-native American mink[END_REF].
The proposal formulated by [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF] that the magnitude of senescence is tightly associated with life history, mainly the slow-fast continuum, has previously been verified in populations of species with similar traits such as marmots [START_REF] Berger | Agespecific survival in the socially monogamous alpines marmot (Marmota marmota): evidence of senescence[END_REF], meerkats [START_REF] Sharp | Reproductive senescence in a cooperatively breeding mammal[END_REF], ground squirrels [START_REF] Broussard | Senescence and age-related reproduction of female Columbian ground squirrels[END_REF], opossums [START_REF] Austad | Retarded senescence in an insular population of Virginia opossums (Didelphis virginiana)[END_REF], and badgers [START_REF] Dugdale | Age-specific breeding success in a wild mammalian population: selection, constraint, restraint and senescence[END_REF]. Our findings provide evidence of weak reproductive senescence in the fast-living red fox, occurring late (4-5 years old) relative to the age structure of our populations, and therefore do not fully support the proposal of [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF]. Furthermore, it concerned only very few females, since only a small proportion of vixens were killed beyond the ages of 4 and 5.
Increasing embryo resorption with age: a physiological mechanism underpinning reproductive senescence?
Interestingly, senescence was more pronounced for placental scars than for the number of embryos. This suggests that gestation failure, rather than a decrease in ovulation rate, is the most likely cause of the decline in red fox litter size. Spontaneous embryo resorption is an important issue in obstetrics, but also in livestock breeding and wildlife breeding programs. In wild species, increasing implantation failure with age has been identified in several taxa such as roe deer (Borg, 1970; Hewison and Gaillard, 2011). Similarly, reproductive senescence resulted from a combination of uterine defects and a reduction in oocyte numbers in elephants [START_REF] Hanks | Reproduction of elephant, Loxodonta africana, in the Luangwa Valley, Zambia[END_REF]. Successful embryo development depends on a complex series of cellular and molecular mechanisms associated with hormonal balance [START_REF] Cross | Implantation and the placenta: key pieces of the developmental puzzle[END_REF][START_REF] Finn | The implantation reaction[END_REF].
According to the disposable soma theory of ageing, individuals that invested heavily in reproduction early in life should invest less effort in the maintenance of somatic tissues, reflecting the optimal allocation of resources among the various metabolic tasks [START_REF] Kirkwood | Evolution of ageing[END_REF][START_REF] Kirkwood | Evolution of senescence: late survival sacrificed for reproduction[END_REF]. In the case of the red fox, ageing of the reproductive tract (mainly the uterus) probably plays an important role in the decrease of litter size with age.
Finally, our study highlights that reproductive senescence occurs in red fox populations, although it is weak and occurs late in life. The consequences of reproductive senescence for red fox population dynamics are likely to be negligible, given the low proportion of females in the population that reach the age at the onset of senescence. In the context of intensive removal through hunting and trapping acting on population densities [START_REF] Lieury | Compensatory Immigration Challenges Predator Control: An Experimental Evidence-Based Approach Improves Management[END_REF], a proper assessment of the effect of variation in population density and removal pressure, over time and among populations, on reproductive performance is needed to investigate processes such as compensatory reproduction.
Figure 1. Location of the sites where the landscape-scale culling experiments of red fox were conducted.
Figure 2. Variation in litter size of the red fox in relation to the age of vixens (in years). Lines represent GAMM predictions (plain) and their associated standard error (dashed).
Acknowledgements
We are grateful to the regional and local Hunters' Associations, especially Y. Desmidt, J.-L. Pilard, P. Hecht, C. Mercuzot, J. Desbrosse, and C. Urbaniak, for sustaining the program, and we warmly thank all the people involved in the fieldwork. We thank F. Drouyer, B. Baudoux, N. Haigron, C. Mangeard, and T. Mendoza for efficient support in the field, our colleagues working on hares, especially Y. Bray and J. Letty, all local people in charge of hunting, and the hunters and trappers who helped in counting and collecting foxes.
This work was supported by the Regional Hunters' Association of Champagne-Ardenne, and the Hunters' Associations of Aube and Ille-et-Vilaine.
Reproductive senescence in the red fox | 34,718 | ["1174741", "21698", "18913"] | ["260238", "173615", "194495", "173616", "188653"] |
01766433 | en | ["sde"] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01766433/file/Lieury%20et%20al%20Designing%20cost-effective%20CR%20survey_main%20text_RESUB.pdf Nicolas Lieury
Sébastien Devillard
Aurélien Besnard
Olivier Gimenez
Olivier Hameau
Cécile Ponchon
Alexandre Millon
email: [email protected]
Designing cost-effective capture-recapture surveys for improving the monitoring of survival in bird populations
Keywords: survey design, optimisation, statistical power, cost efficiency, stage-structured population
Running head: Cost-effective Capture-Recapture surveys
Population monitoring traditionally relies on population counts, accounting or not for the issue of detectability. However, this approach does not allow detailed insight into demographic processes. Therefore, Capture-Recapture (CR) surveys have become popular tools for scientists and practitioners aiming to measure the survival response to environmental change or conservation actions. However, CR surveys are expensive and their design is often driven by the available resources, without any estimate of the level of precision they provide for detecting changes in survival, even though optimising resource allocation in wildlife monitoring is increasingly important. Investigating how CR surveys could be optimised by manipulating resource allocation among different design components is therefore critically needed. We conducted a simulation experiment exploring the statistical power of a wide range of CR survey designs to detect changes in the survival rate of birds. CR surveys differed in terms of the number of breeding pairs monitored, the number of offspring and adults marked, resighting effort and survey duration. We compared open-nest (ON) and nest-box (NB) monitoring types, using medium- and long-lived model species. Increasing survey duration and the number of pairs monitored increased statistical power. Long survey durations can provide accurate estimates for long-lived birds even for small population sizes (15 pairs). A cost-benefit analysis revealed that, for long-lived ON species, ringing as many chicks as possible appears to be the most effective survey component, unless a technique for capturing breeding birds at low cost is available to compensate for reduced local recruitment. For medium-lived NB species, focusing NB rounds on the period that maximises the chance of capturing breeding females inside nest-boxes is more rewarding than ringing all chicks. We show that integrating economic costs is crucial when designing CR surveys and discuss ways to improve efficiency by reducing duration to a time scale compatible with management and conservation issues.
Introduction
Studies aiming at detecting the response of wild populations to environmental stochasticity, anthropogenic threats or management actions (e.g. harvest, control or conservation), traditionally rely on the monitoring of population counts. Such data, however, suffers from a variable detectability of individuals that can alter the reliability of inferred temporal trends [START_REF] Williams | Analysis and Management of Animal Populations: Modeling, Estimation, and Decision Making[END_REF]. Methods have been developed to account for the issue of detectability, based on the measure of the observer-animal distance (Distance Sampling; [START_REF] Buckland | Introduction to Distance Sampling. Estimating abundance of biological populations[END_REF] or on multiple surveys (hierarchical modeling, [START_REF] Royle | Hierarchical Modeling and Inference in Ecology: the Analysis of Data from Populations, Metapopulations and Communities[END_REF]. Still, population size being the result of a balance between survival, recruitment, emigration and immigration, inferring population status from counts, whatever detectability is accounted for or not, may impair the assignment of the demographic status of a population (source vs. sink; Furrer and Pasinelli 2016, [START_REF] Weegman | Integrated population modelling reveals a perceived source to be a cryptic sink[END_REF].
Surveys that consist of capturing, marking with permanent tags, releasing and then recapturing wild animals (i.e. capture-recapture surveys, hereafter CR surveys), to gather longitudinal data and hence derive survival rates while accounting for imperfect detection [START_REF] Lebreton | Modeling survival and testing biological hypotheses using marked animals: a unified approach with case studies[END_REF], have become highly popular tools in both applied and evolutionary ecology [START_REF] Clutton-Brock | Individuals and populations: the role of longterm, individual-based studies of animals in ecology and evolutionary biology[END_REF]. Opting for a mechanistic instead of a phenomenological approach has indeed proved to be particularly informative for identifying the response of a population to any perturbation, and ultimately allows to pinpoint the appropriate management strategy. Over the last decade, an increasing number of practitioners have set up CR surveys with the aim of quantifying survival variation in response to i) changing environment such as climate or habitat loss [START_REF] Grosbois | Assessing the impact of climate variation on survival in vertebrate populations[END_REF], ii) hunting [START_REF] Sandercock | Is hunting mortality additive or compensatory to natural mortality ? Effects of experimental harvest on the survival and cause-specific mortality of willow ptarmigan[END_REF], iii) other anthropogenic mortality causes (e.g. collision with infrastructures; [START_REF] Chevallier | Retrofitting of power lines effectively reduces mortality by electrocution in large birds: an example with the endangered Bonelli's eagle[END_REF], and iv) the implementation of management/conservation actions [START_REF] Lindberg | A review of designs for capturemarkrecapture studies in discrete time[END_REF][START_REF] Koons | Effects of exploitation on an overabundant species: the lesser snow goose predicament[END_REF], review in Frederiksen et al. 2014). In all these contexts, the estimation of survival, and its temporal variation, is particularly informative for building effective evidence-based conservation [START_REF] Sutherland | The need for evidencebased conservation[END_REF]. As an example, the high adult mortality due to electrocution in an Eagle owl Bubo bubo population of the Swiss Alps, as revealed by a CR survey, would have not been detected if the survey was solely based on population counts, that remained stable over 20 years [START_REF] Schaub | Massive immigration balances high anthropogenic mortality in a stable eagle owl population: Lessons for conservation[END_REF].
The effectiveness of a CR survey to detect and explain changes in survival rates over time depends on the levels of field effort dedicated to several survey components: i) the size of the sample population, ii) the proportion of offspring and adults marked, iii) the recapture/resighting rate of previously marked individuals and iv) the number of surveying years (or survey duration; [START_REF] Yoccoz | Monitoring of biological diversity in space and time[END_REF][START_REF] Williams | Analysis and Management of Animal Populations: Modeling, Estimation, and Decision Making[END_REF]. In a conservation context, considering only the usual trade-off between the number of marked individuals and the number of surveyed years is of little help when designing a CR survey. Indeed, practitioners need to know as soon as possible whether survival is affected by a potential threat or has alternatively benefited from a management action. Implementing CR surveys is however particularly costly in terms of financial and human resources, as it requires skilled fieldworkers over an extensive time period. Therefore, most surveys are actually designed according to the level of available resources only, and without any projection about the precision they provide for estimating survival and the statistical power they obtain for detecting survival variability.
The life-history characteristics (e.g. survival and recruitment rates) of the study species largely determine which of the different components of a CR survey will provide the most valuable data. For instance, low recruitment of locally-born individuals (due to a high juvenile mortality rate and/or high emigration rates) limits the proportion of individuals marked as juveniles that recruit into the local population. In such a case, we expect that reducing the effort dedicated to marking offspring in favour of marking and resighting breeding individuals would improve survey efficiency. Therefore, manipulating both sampling effort and sampling design offers opportunities to optimise CR surveys. A few attempts have been made to improve the effectiveness of CR surveys according to species' life-histories, though most of them remain species-specific [START_REF] Devineau | Planning Capture-Recapture Studies: Straightforward Precision, Bias, and Power Calculations[END_REF][START_REF] Williams | Cost-effective abundance estimation of rare animals: Testing performance of small-boat surveys for killer whales in British Columbia[END_REF][START_REF] Chambert | Heterogeneity in detection probability along the breeding season in Black-legged Kittiwakes: Implications for sampling design[END_REF][START_REF] Lindberg | A review of designs for capturemarkrecapture studies in discrete time[END_REF][START_REF] Lahoz-Monfort | Exploring the consequences of reducing survey effort for detecting individual and temporal variability in survival[END_REF]. Moreover, improving CR surveys with regard to the precision of survival estimates constitutes only one side of the coin, and the quantification of economic costs in the optimisation process is currently lacking. Assessing costs and benefits is therefore critical if we are to provide cost-effective guidelines for designing CR surveys. This optimisation approach is increasingly considered an important step forward for improving the robustness of inferences in different contexts, such as population surveys [START_REF] Moore | Optimizing ecological survey effort over space and time[END_REF] or environmental DNA sampling [START_REF] Smart | Assessing the cost-efficiency of environmental DNA sampling[END_REF].
Here we offer a simulation experiment investigating the relative efficiency of a wide array of CR survey designs in terms of statistical power to detect a change in survival rates. Alongside the usual how many and how long considerations, we focused our simulations on the how to and what to monitor. We further balanced the statistical benefit of each survey component with human/financial costs, derived from actual monitoring schemes. Our aim was to provide cost-effective guidelines for the onset of new CR surveys and the improvement of existing ones. Although our work was primarily based on the monitoring of bird populations, we discussed how this approach can be applied to improve the monitoring of other taxa.
Material and methods
2.1. Bird monitoring types and model species
Our simulation experiment encompassed the two most common types of bird monitoring, applied to two different life-history strategies: long-lived, open-nesting species with high but delayed local recruitment vs. medium-lived, cavity-nesting species with rapid but low recruitment of locally-born individuals. These two types of monitoring are representative of what practitioners come across in the field and largely determine the nature of the survey and the level of resources needed. Another prerequisite of our simulations was the availability of both detailed demographic data on the model species and a precise estimate of the human and financial costs entailed by the monitoring.
In open-nesting (ON) surveys, chicks are typically ringed at the nest before fledging with a combination of coloured rings or a large engraved plastic ring bearing a simple alphanumeric code, in addition to conventional metal rings. Resightings can then be obtained with binoculars or telescopes, without recapturing the birds. The identity of breeding birds is typically established while monitoring breeding success. As model species for ON monitoring, we combined the life-history and survey characteristics of two long-lived diurnal raptors, the Bonelli's eagle Aquila fasciata and the Egyptian vulture Neophron percnopterus [START_REF] Lieury | Relative contribution of local demography and immigration in the recovery of a geographically-isolated population of the endangered Egyptian vulture[END_REF][START_REF] Lieury | Geographically isolated but demographically connected: Immigration supports efficient conservation actions in the recovery of a range-margin population of the Bonelli's eagle in France[END_REF]. Monitoring typically consists of repeated visits to known territories during the breeding season to check whether breeding occurs and to identify the breeding birds, and, where possible, to ring chicks. Breeding birds are difficult to capture, which limits the number of newly marked breeders each year, although additional trapping effort can be deployed (adults are occasionally trapped for fitting birds with GPS). Such captures are however highly time-consuming as they require monitoring several pre-baited feeding stations.
The second, highly common, monitoring type concerns cavity-nesting birds, whose surveys typically involve artificial nest-boxes (hereafter NB). All NBs are checked at least once a year, and additional visits concentrate on the restricted set of occupied NBs for ringing/recapturing both chicks and breeding birds. To build simulations for the NB type of monitoring, we combined information on life-history and survey characteristics from two medium-lived nocturnal raptors, the barn owl Tyto alba [START_REF] Altwegg | Age-specific fitness components and their temporal variation in the barn owl[END_REF] and the little owl Athene noctua (OH & AM, unpub. data). These two species are known to prefer NBs over natural or semi-natural cavities. NB monitoring typically consists of repeated visits to NBs during the breeding season to check whether breeding occurs, to catch breeding females in NBs and, where possible, to ring chicks. Breeding females are usually relatively easy to catch, allowing many newly marked adults to enter the CR dataset each year, in contrast to ON monitoring. Breeding males are typically more difficult to capture than females and require alternative, time-consuming types of trapping [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF].
For the two types of monitoring, the resighting probability of non-breeding individuals (hereafter floaters) is low as such individuals are not attached to a spatially restricted nesting area. Life-cycle graphs and values of demographic parameters are given in the appendix (Table S1; Fig. S1).
Definition of the main components of CR surveys
We designed a set of surveys for both types of monitoring by varying the level of effort dedicated to four main components (Fig. S2):
1. Survey duration: For each type of monitoring, we set two different durations corresponding to 1-2 and 3-4 generations of the model species (i.e. 10/20 years and 5/10 years for long- and medium-lived species respectively).
2. Number of breeding pairs surveyed: The number of pairs available for monitoring is usually lower in ON monitoring of long-lived species (with larger home-ranges) than in NB monitoring of medium-lived species. The number of breeding pairs varied between 15-75 and 25-100 for ON and NB monitoring respectively.
3. Proportion of monitored nests in which chicks are ringed: This proportion was made to vary from 25 to 100% for both types of monitoring.
4. Proportion of breeders (re)captured/resighted: This proportion was set at three different levels (0.50, 0.65, 0.80). For ON monitoring, breeding birds are not physically caught but resighted at a distance. However, we evaluated the added value of a monitoring option consisting of capturing and ringing unmarked breeding adults so as to compensate for the absence of ringed adults during the early years of the survey, due to delayed recruitment in long-lived species (five adults caught every year during the first five years of the survey).
In order to reduce the number of computer-intensive simulations, we removed survey designs unlikely to be encountered in the field (e.g. only 25% of nests in which chicks are ringed when 25 breeding pairs are monitored for NB). Overall, a total of 132 and 66 sampling designs were built for ON and NB monitoring respectively (Fig. S2).
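As a rough illustration of how such a factorial set of candidate designs can be enumerated in R (the language used for all analyses in this study), the sketch below builds a design grid and drops unrealistic combinations. The variable names, factor levels and filtering rule are illustrative assumptions only and do not reproduce the exact grids of 132 and 66 designs used here.

## Illustrative enumeration of candidate NB survey designs (sketch, not the published grid)
nb_designs <- expand.grid(
  duration  = c(5, 10),                  # years (1-2 vs 3-4 generations, medium-lived species)
  n_pairs   = c(25, 50, 75, 100),        # breeding pairs monitored
  p_chicks  = c(0.25, 0.50, 0.75, 1.00), # proportion of nests with chicks ringed
  p_capture = c(0.50, 0.65, 0.80)        # proportion of breeders caught
)
## drop designs judged unrealistic in the field (example rule quoted in the text)
nb_designs <- subset(nb_designs, !(n_pairs == 25 & p_chicks == 0.25))
nrow(nb_designs)                         # number of candidate designs retained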
Simulating time-series of demographic rates and CR histories
The relevance of each sampling design was assessed from 3500 simulated CR datasets. As we were interested in exploring the ability of different sampling designs to detect changes in survival, each CR dataset was generated from a survival time-series that incorporated a progressive increase in survival, mimicking the effect of conservation actions. Note that simulating a decrease in survival would have led to similar results. The slope of the conservation effect was scaled additively among ages and/or territorial statuses according to empirical estimates from populations having benefited from conservation plans (adult survival rate increased from 0.77 to 0.88 for Bonelli's eagle, Chevallier et al. 2015; from 0.84 to 0.93 for Egyptian vulture, [START_REF] Lieury | Relative contribution of local demography and immigration in the recovery of a geographically-isolated population of the endangered Egyptian vulture[END_REF]). This increase in survival rate corresponds to an increase of approximately 1.0 on the logit scale. We simulated a gradual implementation of the conservation action over the years (3 and 7 years for medium- and long-lived species respectively), resulting in an increase of, e.g., adult survival from 0.37 to 0.61 and from 0.81 to 0.92 for medium- and long-lived species respectively (Fig. S3). We checked that the range of survival rates obtained for medium-lived species fell within the temporal variation observed in the barn owl [START_REF] Altwegg | Age-specific fitness components and their temporal variation in the barn owl[END_REF]. For each simulated CR dataset, we added random environmental variation around average survival to match the variation observed in specific studies (standard deviation constant across ages on the logit scale: 0.072 for ON long-lived species, [START_REF] Lieury | Relative contribution of local demography and immigration in the recovery of a geographically-isolated population of the endangered Egyptian vulture[END_REF]; 0.36 for NB medium-lived species, [START_REF] Altwegg | Age-specific fitness components and their temporal variation in the barn owl[END_REF]). Individual CR histories were thus simulated based on survival trends (plus environmental noise) and according to the defined life-history stages (see online supplementary material for the detailed simulation procedure).
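The R sketch below illustrates this simulation step for a single adult age class; the parameter values mirror those quoted above, but the full simulations are stage-structured (Fig. S1) and are detailed in the supplementary scripts, so this is only a simplified assumption-laden example.

set.seed(1)
n_years   <- 10
phi0      <- qlogis(0.81)                    # pre-action adult survival (logit scale)
effect    <- 1.0                             # conservation effect (~1.0 on the logit scale)
ramp      <- pmin(1, (0:(n_years - 1)) / 7)  # gradual implementation over 7 years
sigma_env <- 0.072                           # residual environmental SD (logit scale)
phi_t     <- plogis(phi0 + effect * ramp + rnorm(n_years, 0, sigma_env))

## CJS-type capture histories for birds marked as breeders in year 1
simulate_ch <- function(n_marked, phi, p) {
  t(sapply(seq_len(n_marked), function(i) {
    h <- integer(length(phi) + 1); h[1] <- 1; alive <- 1
    for (t in seq_along(phi)) {
      alive    <- alive * rbinom(1, 1, phi[t])   # survive the interval or not
      h[t + 1] <- alive * rbinom(1, 1, p)        # resighted if alive
    }
    h
  }))
}
ch <- simulate_ch(50, phi_t[-n_years], p = 0.65)  # 50 breeders, resighting prob. 0.65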
CR analyses and contributions to statistical power
We analysed each simulated CR dataset using a multi-state (breeder, floater) CR model for ON monitoring and a single-state model for NB monitoring (detailed structures shown in Fig. S1, Table S1). We then ran three models with survival i) constant (M_cst), ii) varying over years (M_t) and iii) linearly related to the conservation action (M_co). We used the ANODEV as a measure of the conservation effect on survival variation, as recommended by [START_REF] Grosbois | Assessing the impact of climate variation on survival in vertebrate populations[END_REF]. This statistic ensures a proper estimation of the effect of a temporal covariate whatever the level of residual process variance. The ANODEV follows a Fisher-Snedecor distribution and was calculated as
F_ANODEV = [(Dev(M_cst) - Dev(M_co)) / (np(M_co) - np(M_cst))] / [(Dev(M_co) - Dev(M_t)) / (np(M_t) - np(M_co))],
where Dev and np are, respectively, the deviance and the number of parameters of the models [START_REF] Skalski | Testing the significance of individual-and cohort-level covariates in animal survival studies[END_REF]. As a measure of the statistical power to detect a change in survival rate, we counted the number of simulations in which the ANODEV was significant. Given the limited number of years typically available in a conservation context, we chose an alpha-level of 0.2 to favour statistical power, at the expense of an inflated probability of type I error [START_REF] Yoccoz | Use, overuse, and misuse of significance tests in evolutionary biology and ecology[END_REF][START_REF] Grosbois | Assessing the impact of climate variation on survival in vertebrate populations[END_REF]. A specific CR survey was considered efficient when the proportion of significant ANODEV tests exceeded a threshold of 0.7 [START_REF] Cohen | Statistical Power Analysis for the Behavioral Sciences[END_REF].
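The ANODEV can be computed directly from the deviances and numbers of parameters of the three fitted models. The small R helper below is a sketch assuming these quantities have been extracted (e.g. from RMark model objects); the 0.2 alpha-level and 0.7 power threshold follow the choices described above.

anodev <- function(dev_cst, np_cst, dev_co, np_co, dev_t, np_t) {
  df1   <- np_co - np_cst                 # df gained by adding the covariate
  df2   <- np_t  - np_co                  # residual df of the time-dependent model
  Fstat <- ((dev_cst - dev_co) / df1) / ((dev_co - dev_t) / df2)
  c(F = Fstat, df1 = df1, df2 = df2, p = 1 - pf(Fstat, df1, df2))
}
## over the 3500 simulated datasets of one design:
## power <- mean(p_values < 0.2); a design is deemed efficient if power > 0.7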
For each design, we calculated the relative increase in power by dividing the difference between the power of a given sampling design and the minimum power across all scenarios by the difference between the maximum and minimum power across all scenarios. This ratio, Δpower, was used as the response variable in a linear model quantifying the effect of three explanatory variables: i) the proportion of monitored nests in which chicks are ringed, ii) the proportion of breeders (re)captured/resighted and iii) whether adult breeders were caught (in ON surveys only). The survey duration and the number of surveyed nests were fixed. As the explanatory variables explained 100% of the variance of Δpower, the coefficients of the linear model sum to 1 and can therefore be interpreted as the relative contribution of each design component to the increase in statistical power.
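In R, this standardisation and decomposition might look like the sketch below, where designs is a hypothetical data frame holding, for a fixed duration and number of nests, the estimated power of each design and its component levels.

designs$d_power <- (designs$power - min(designs$power)) /
                   (max(designs$power) - min(designs$power))
fit <- lm(d_power ~ p_chicks + p_recapture + init_capture, data = designs)
contrib_power <- coef(fit)[-1]            # drop the intercept
contrib_power / sum(contrib_power)        # relative contribution of each component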
Calculating the cost of CR surveys
Human and financial costs of each design were derived from our own field experience. Costs included the number of working-days required to monitor a territorial pair (resighting for ON, capture/recapture for NB), to ring chicks and to capture territorial breeders (for ON only). For both types of monitoring, these costs were multiplied by the number of breeding pairs surveyed, the number of monitored nests in which chicks are ringed and the total number of breeders caught. The specific case of the resighting of breeders in ON monitoring required knowing the distribution of working-days needed to check whether a given breeder was ringed and to identify it (Fig. S4). Indeed, since not all territorial birds were ringed, some observations did not provide information for the CR dataset. To account for this issue, we recorded from the simulated demography the annual proportion of ringed breeders in the population and the number of observations. We then calculated the costs of all bird observations, ringed or not, by sampling the number of working-days from the observed distribution of working-days (mean = 3.7 ± 3.3 per bird, Fig. S4). Finally, we converted the total number of working-days required for each simulation into a financial cost in euros, according to the average wage of conservation practitioners in France, assuming no volunteer-based work and accounting for travel fees and supplementary materials (e.g. binoculars, traps). Note that we are interested in the relative, not absolute, cost of survey designs. Finally, as for statistical power, we calculated the relative contribution of the different components of a survey design to the increase in total cost by fitting a linear model with Δcost (calculated in the same way as Δpower) as the response variable.
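A simplified cost computation for one year of an ON design is sketched below. The per-task costs of 2 and 15 working-days are those reported in the Results (Table S2); the other figures (number of pairs, daily cost in euros, stand-in distribution of resighting effort) are illustrative assumptions only.

n_pairs         <- 50                      # breeding pairs monitored (illustrative)
n_ringed_nests  <- 50                      # nests in which chicks are ringed
n_adults_caught <- 5                       # territorial adults trapped this year
## stand-in for the observed distribution of working-days spent per breeder checked
## (mean = 3.7, sd = 3.3 in the field data)
obs_days <- rgamma(1000, shape = (3.7 / 3.3)^2, scale = 3.3^2 / 3.7)
resight_days   <- sum(sample(obs_days, n_pairs, replace = TRUE))
total_days     <- n_ringed_nests * 2 + n_adults_caught * 15 + resight_days
total_cost_eur <- total_days * 150         # illustrative daily cost in euros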
Finally, we calculated cost-effective contributions of each design component, by dividing the relative contribution in statistical power increase by the relative contribution in cost increase. This allowed us to specifically assess in which component one should preferentially invest to increase CR survey efficiency.
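With the relative contributions in hand (contrib_power from the sketch above, and contrib_cost obtained in the same way from Δcost), the cost-effectiveness of each component is simply their element-wise ratio:

cost_effectiveness <- contrib_power / contrib_cost
## values > 1 flag components yielding more relative power gain than relative cost increase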
All simulations and analyses were run with R 3.1.2 (R Core Team 2014). We used RMark (Laake 2013) package calling program MARK [START_REF] Cooch | Program MARK: a gentle introduction[END_REF] from R for CR analyses. We provided all R scripts as supplementary information (Appendices S2-S5).
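For the single-state NB case, fitting the three competing survival structures with RMark could look like the sketch below; the multi-state ON models, the grouping structure and the exact covariate coding used in the supplementary scripts are more involved, so this is only a minimal illustration in which ch_df is a hypothetical data frame with a character column 'ch' of capture histories and ramp/n_years come from the simulation sketch above.

library(RMark)
dp  <- process.data(ch_df, model = "CJS", begin.time = 1)
ddl <- make.design.data(dp)
## add the conservation covariate (0-1 ramp) to the survival design data, by year
ddl$Phi <- merge_design.covariates(ddl$Phi,
             data.frame(time = 1:(n_years - 1), action = ramp[1:(n_years - 1)]))
m_cst <- mark(dp, ddl, model.parameters = list(Phi = list(formula = ~1),
                                               p   = list(formula = ~1)), output = FALSE)
m_t   <- mark(dp, ddl, model.parameters = list(Phi = list(formula = ~time),
                                               p   = list(formula = ~1)), output = FALSE)
m_co  <- mark(dp, ddl, model.parameters = list(Phi = list(formula = ~action),
                                               p   = list(formula = ~1)), output = FALSE)
## ANODEV for the conservation effect, using the helper defined earlier
anodev(m_cst$results$deviance, m_cst$results$npar,
       m_co$results$deviance,  m_co$results$npar,
       m_t$results$deviance,   m_t$results$npar)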
Results
Survey components affecting the power to detect a change in survival for Open-Nesting monitoring
The survey duration and the number of nests surveyed were identified as the two major components for improving the ability of CR surveys in detecting a change in survival rates (Fig. 1). All long-duration surveys reached the power threshold, whereas the majority of short-duration surveys did not (44/66).
The capture of five territorial birds each year during the first five years greatly increased the effectiveness of CR surveys (Fig. 1). This component actually compensated for the absence of ringed territorial birds in the early years, a consequence of delayed recruitment in long-lived species. Most survey designs lacking the initial capture of territorial birds (27/33) failed to reach the power threshold in short-duration surveys. However, the benefit of this component in terms of statistical power diminished as i) the survey duration increased from 10 to 20 years and ii) the number of breeding pairs monitored increased. For example, when 25 breeding pairs were monitored, a survey involving the initial capture of territorial birds and 50% of nests with chicks ringed was more efficient than a survey involving 100% of nests with chicks ringed but no territorial bird caught. Similarly, initial captures of territorial birds were more valuable than increasing the proportion of breeders resighted, although this effect tended to vanish as the survey duration and/or the number of surveyed nests increased. These interactions arose from the fact that we considered an absolute number of captures, and not a fixed proportion of the birds monitored. The smaller the number of breeding pairs surveyed and the shorter the survey duration, the more valuable became the initial capture of territorial breeders. Interestingly, monitoring as few as 15 pairs may provide satisfactory statistical power, provided that the study is conducted over 20 years (Fig. 1).
Survey components affecting the power to detect a change in survival for Nest-Box monitoring
The large random environmental variation implemented in the simulations (greater than or equal to the conservation effect) produced a noisy relationship between statistical power and the level of effort dedicated to the different survey components (Fig. 2a,b). Indeed, survival of medium-lived species suffers from a high level of residual temporal variation compared to long-lived species, which reduces statistical power. A solution to this issue might be found in the addition of relevant environmental covariates (e.g. prey abundance, climate indices) into CR models, to increase the ability of the analyses to detect the genuine effect of conservation actions [START_REF] Grosbois | Assessing the impact of climate variation on survival in vertebrate populations[END_REF].
Trends can nevertheless be extracted, and we provide an additional figure without environmental variation to support these inferences (Fig. 2c,d). First, while the majority of long-duration surveys reached the statistical power threshold (24/33), no sampling design did so in short-duration surveys. Second, monitoring 25 pairs provided little statistical power whatever the survey duration and the level of effort dedicated to other components. Overall, the proportion of nests in which chicks were ringed had virtually no effect, partly because this component increases the proportion in the CR dataset of young birds, which are subject to higher environmental stochasticity than adults. The number of nest-boxes monitored increased statistical power, and the threshold was reached for long-term survey designs including 50 monitored nest-boxes and an intermediate effort dedicated to the capture of breeding birds. The proportion of breeding birds caught appeared to be the most effective component of NB surveys for medium-lived birds. This is essentially because capturing breeding birds allowed a large number of new birds to be ringed, thereby enriching the CR dataset and compensating for the low recruitment rate of individuals ringed as chicks. It appeared more effective to increase the proportion of breeding birds caught (from 0.5 to 0.8) than to increase the number of pairs surveyed by 25, especially for short-duration surveys.
Cost of CR surveys
The number of working-days represented 97 and 88% of the total financial cost of CR surveys for ON and NB monitoring respectively. Owing to the multiple visits needed to monitor breeding success, the number of nests surveyed contributed the most to the cost of CR surveys in both types of monitoring (Fig. 3). Survey duration also contributed largely to the overall costs, by multiplying this expense over the number of years (Fig. S5). In contrast, improving the recapture/resighting probability of breeders only marginally increased the survey cost. All other things being equal, the capture of territorial birds in ON monitoring was more costly than improving the proportion of territorial birds resighted or increasing the proportion of chicks ringed. For NB monitoring, increasing the proportion of chicks ringed was more costly than improving the recapture probability of breeders. This discrepancy between monitoring types can be explained by the cost difference for the same component (Table S2): capturing a breeder in ON monitoring was much more expensive than ringing chicks (15 vs. 2 working-days), whereas in NB monitoring the corresponding costs were 25 vs. 40 min.
The identification of cost-effective surveys
The most efficient CR surveys were those that surveyed small numbers of nests but over long durations. However, these durations generally exceed the timescale of management planning and do not represent an effective way to quickly adapt conservation actions in response to a threat affecting survival. Therefore, we chose here to focus on short-duration surveys to identify the key design components providing the highest added value.
For ON monitoring conducted on 50 breeding pairs of a long-lived species, the most important contribution to the increase in statistical power came from the initial capture of breeding birds (29%), but increasing the proportion of nests in which chicks are ringed also proved efficient (57% cumulated gain when passing from 25 to 100% of chicks ringed; Fig. 4a). Surprisingly, increasing the proportion of resighted territorial birds provided only a limited gain in power (14%). The contribution of these different components to the overall survey cost was highly heterogeneous, with the capture of territorial breeders being particularly expensive (58%), whereas ringing chicks was cheap (14%; Fig. 4b). When balancing costs and benefits, it turned out that investing in the ringing of chicks was the most rewarding option (Fig. 4c).
For NB monitoring conducted on 75 breeding pairs of a medium-lived species, the major contribution to the increase in statistical power was achieved through the proportion of breeding adults caught (97% cumulative gain), with the proportion of chicks ringed providing only little added value (3%). This trend was reinforced when considering cost contributions, such that the proportion of breeding adults caught was unambiguously pointed out as the most rewarding component of a NB sampling design (Fig. 5d,e,f).
Discussion
We have offered a methodological framework for exploring the relative efficiency of alternative survey designs in detecting a change in survival, a key demographic parameter widely used by scientists and practitioners for monitoring animal populations. The set of sampling designs (N = 198) encompasses the most common types of monitoring dedicated to the demographic study of birds by capture-recapture (nest-box and open-nest), applied to medium- or long-lived species. More importantly, we conducted a cost-benefit analysis balancing the increase in statistical power against the costs in working-days entailed by the four main components of CR surveys (survey duration, number of breeding pairs surveyed, proportion of monitored nests in which chicks are ringed, and proportion of breeders (re)captured/resighted). For long-lived open-nesting species, increasing the proportion of chicks ringed is the most valuable option once the survey duration is fixed to a conservation-relevant timescale. In contrast, for medium-lived species monitored in nest-boxes, dedicating resources to increasing the proportion of breeding adults caught reduces the number of monitored pairs necessary to reach adequate statistical power in short-duration surveys.
Our simulation experiment showed that extended survey durations (over 3-4 generations) and/or high numbers of monitored breeding pairs (50-75) were often necessary to allow the detection of a change in survival. This is however problematic, as long-duration surveys exceed the timescale of management planning and are unsatisfactory with regard to the implementation of conservation actions [START_REF] Yoccoz | Monitoring of biological diversity in space and time[END_REF]. Moreover, practitioners dealing with species of conservation concern have to make the best of limited resources. Thus, the answers to the classical questions how long and how many are highly constrained in a management context. On the one hand, practitioners need an answer as soon as possible, so as to ensure the success of the management action while limiting costs. On the other hand, the number of breeding pairs monitored is dictated either by the total number of pairs available when studying restricted populations or by the level of human/financial resources available. Overall, we believe that the questions how to and what to monitor can provide significant added value to the design of monitoring schemes in a conservation/management context. Below we discuss several ways to overcome issues regarding monitoring design, in relation to monitoring type and species life-history.
On the relevance of ringing 100% of the offspring monitored
Based on our own experience, the ability of practitioners/scientists to ring all the monitored chicks is a common quality criterion of CR surveys. Here we challenge this view, as our simulation results showed that the validity of this 'gold standard' depends on the species' life-history. For long-lived species with high recruitment of locally-born individuals, it surely constitutes a pertinent option given the low cost of this component. For species with lower local recruitment rates, such as medium-lived species, however, our results showed that investing in the capture of breeding adults, instead of seeking an exhaustive ringing of chicks, is more efficient. Specifically, this strategy would consist of increasing the number of nest-box rounds at the time when breeding adults are most likely to be caught, at the expense of rounds dedicated to the ringing of the last broods.
It can be argued, however, that this strategy may reduce our ability to estimate juvenile survival. The population growth rate of short- and medium-lived species is theoretically more sensitive to juvenile than to adult survival (e.g. [START_REF] Altwegg | Age-specific fitness components and their temporal variation in the barn owl[END_REF]), although the actual contribution of different demographic traits to population dynamics may differ from theoretical expectations (e.g. Hunter et al. 2010). Therefore, it could be of prime importance to avoid CR surveys that fail to provide reliable estimates of juvenile survival for such species. Estimating juvenile survival nevertheless remains problematic (Gilroy et al. 2012). Indeed, standard CR surveys allow the estimation of apparent survival, i.e. the product of true survival and the probability of recruiting into the study area, the latter often being low for juveniles. For NB-breeding species, apparent survival is further affected by the probability of breeding in a nest-box rather than in a natural cavity, where birds are typically out of reach. Therefore, juvenile survival cannot be compared among study areas that differ in the proportion of pairs occupying nest-boxes, which is usually unknown. Overall, we suggest that the monitoring of new recruitment in NB surveys, achieved by the capture of breeding birds, may significantly contribute to the understanding of population dynamics in the absence of reliable data on juvenile survival [START_REF] Karell | Population dynamics in a cyclic environment: consequences of cyclic food abundance on tawny owl reproduction and survival[END_REF].
Capturing breeding adults: the panacea?
For both ON long-lived species and NB medium-lived species, the capture of breeding adults greatly improved the probability of detecting a change in survival rates. Delayed recruitment in long-lived species is a major constraint on CR surveys, especially for species in which the probability of observing non-breeding birds is low. Our simulations showed that capturing some adults in the initial years greatly improved the ability of short-duration surveys to reach a satisfactory statistical power. However, the costs associated with this component vary across species and can severely reduce its effectiveness. In large ON raptors, for instance, it entails prohibitive costs as it requires the mobilisation of numerous, highly skilled people over a long time period. Alternative indirect techniques may however be implemented to reduce the costs of capturing adults (see below).
In contrast, capturing breeding birds in nest-boxes is relatively easy and cheap and only requires the knowledge of the breeding phenology. Females can be caught during late incubation or when brooding chicks and therefore provide highly valuable CR data. This is especially true when considering medium-lived species in which local recruitment rate is low.
Implementation and future directions
If we are to inform management reliably on a reasonably short time-scale, CR surveys maximising statistical power should be favoured. Unfortunately, such surveys often include costly components, such as capturing breeding individuals in ON long-lived species. Our simulations included standard CR techniques, and alternative methods may be available to decrease the cost of the effective but costly design components. For instance, collecting biological material to identify individuals through DNA analyses might provide valuable data for ON long-lived species [START_REF] Marucco | Wolf survival and population trend using non-invasive capture-recapture techniques in the Western Alps[END_REF][START_REF] Bulut | Use of Noninvasive Genetics to Assess Nest and Space Use by White-tailed Eagles[END_REF][START_REF] Woodruff | Estimating Sonoran pronghorn abundance and survival with fecal DNA and capturerecapture methods[END_REF]. Feathers of breeding birds can be searched for when nests are visited for ringing chicks. Provided that specific microsatellite markers are already available, genetic CR data can be gathered at low cost (30-50 € per sample). Alternatively, RFID microchips embedded in plastic rings may also reduce the cost of recapture by recording the identity of the parents when nests are visited, for both ON and NB (e.g. [START_REF] Ezard | The contributions of age and sex to variation in common tern population growth rate[END_REF]). Reducing the costs entailed by the number of nests surveyed, or the proportion of nests in which chicks are ringed, may further be achieved by optimising travelling costs as proposed by [START_REF] Moore | Optimizing ecological survey effort over space and time[END_REF].
Here we took advantage of data-rich study models to set up our simulations. Many species of conservation concern may lack such data, but values for demographic traits can be gathered from the literature on species with similar life-history characteristics. Furthermore, the size of the conservation effect can be set according to the extent of temporal variation in survival, as we did for the NB example. Because we did not have other systems available combining field-derived knowledge of both demographic parameters and survey costs, we did not perform a full factorial treatment between life-history strategies and monitoring types. We believe, however, that our simulation framework enables one to derive generic statements on the way CR surveys should be designed, partly because the relative, not absolute, costs of the different components are likely to be similar whatever the species considered. Our conclusions regarding NB monitoring are largely insensitive to the type of life-history, as the capture of breeding adults remains feasible at low cost for species with either shorter (e.g. blue tit Cyanistes caeruleus, [START_REF] Garcia-Navas | The role of immigration and local adaptation on fine-scale genotypic and phenotypic population divergence in a less mobile passerine[END_REF]) or longer life expectancy (e.g. tawny owl Strix aluco, [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF]; Cory's shearwater Calonectris diomedea, Oppel et al. 2011). NB monitoring of passerines can entail colour ringing and resightings in addition to recapture. Regarding ON monitoring, our conclusions drawn for long-lived raptors may be altered when considering species with a lower local recruitment rate and for which the capture of breeding adults, e.g. with mist-nets, might be easier/cheaper (e.g. ring ouzel Turdus torquatus; [START_REF] Sim | Characterizing demographic variation and contributions to population growth rate in a declining population[END_REF]). In such a case, it is likely that the cost-benefit analysis regarding the capture of adults will promote this component. Finally, many cliff-nesting seabirds show monitoring types and life-history characteristics similar to our examples, and our guidelines are likely to apply equally. For instance, a recent post-study evaluation of a CR survey conducted on the common guillemot Uria aalge found that resighting effort could be halved without altering the capacity to monitor survival [START_REF] Lahoz-Monfort | Exploring the consequences of reducing survey effort for detecting individual and temporal variability in survival[END_REF], in agreement with our results. The complete R scripts provided as electronic supplements can be modified to help design specific guidelines for other species.
Finally, the different components of CR design considered in our simulations are somewhat specific to bird ecology and may not directly apply when considering other vertebrates such as mammals, reptiles or amphibians. For instance, in carnivorous mammals, CR surveys are limited by the difficulty of capturing/recapturing individuals with elusive behaviour. Survival estimations often rely on the use of GPS/VHF tracking that is not well suited for long-term monitoring. Camera-trapping and DNA-based identification are increasingly used to improve CR surveys in such species [START_REF] Marucco | Wolf survival and population trend using non-invasive capture-recapture techniques in the Western Alps[END_REF][START_REF] Cubaynes | Importance of accounting for detection heterogeneity when estimating abundance: the case of French wolves[END_REF][START_REF] O'connell | Camera traps in animal ecology: methods and analyses[END_REF] and we believe that a cost-efficiency approach may be helpful for carefully designing optimal surveys in such monitoring. For example, one could simulate different sampling designs varying by trap number, inter-trap distance and the area covered for carnivores having small or large home-ranges to assess the effect of these components on the detection of survival variation. The path is, therefore, open for developing cost-effective CR surveys and improving the output of wildlife monitoring in all management situations.
Acknowledgments
We would like to thank all the practitioners we have worked with for sharing their experiences on the monitoring of wild populations. NL received a PhD Grant from École Normale Supérieure/EDSE Aix-Marseille Université. Comments from two anonymous reviewers helped us to improve the quality of the manuscript. Sonia Suvelor kindly edited the English. | 44,397 | ["21698", "760279", "739289", "18913"] | ["188653", "543505", "171392", "171392", "515732", "188653"] |
01744592 | en | ["sdv"] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01744592/file/COMPREHENSIVE-PHYSIOL.pdf
Bénédicte Gaborit, PhD; Coralie Sengenes, PhD; Patricia Ancel; Alexis Jacquier, MD, PhD; Anne Dutour
Role of epicardial adipose tissue in health and disease: a matter of fat?
Epicardial adipose tissue (EAT) is a small but very biologically active ectopic fat depot that surrounds the heart. Given its rapid metabolism, thermogenic capacity, unique transcriptome, secretory profile, and simple measurability, epicardial fat has drawn increasing attention among researchers attempting to elucidate its putative role in health and cardiovascular diseases. The cellular crosstalk between epicardial adipocytes and cells of the vascular wall or myocytes is intense and suggests a local role for this tissue. The balance between protective and proinflammatory/profibrotic cytokines, chemokines, and adipokines released by EAT seems to be a key element in atherogenesis and could represent a future therapeutic target. EAT amount has been found to predict clinical coronary outcomes. EAT can also modulate cardiac structure and function, and its amount has been associated with atrial fibrillation, coronary artery disease, and sleep apnea syndrome. Conversely, a beiging fat profile of EAT has been identified. In this review, we describe the current state of knowledge regarding the anatomy, physiology and pathophysiological role of EAT, and, more broadly, the factors leading to ectopic fat development. We will also highlight the most recent findings on the origin of this ectopic tissue, and its association with cardiac diseases.
Didactic synopsis
Major teaching points:
- EAT is an ectopic fat depot located between the myocardium and the visceral pericardium, with no fascia separating the tissues, allowing local interaction and cellular cross-talk between myocytes and adipocytes
- Given the lack of standard terminology, it is necessary to distinguish between epicardial and pericardial fat to avoid confusion in the use of terms. Pericardial fat refers to the combination of epicardial fat and paracardial fat (located on the external surface of the parietal pericardium)
- Imaging techniques such as echocardiography, computed tomography or magnetic resonance imaging are necessary to study EAT distribution in humans
- Very little EAT is found in rodents compared to humans
- EAT displays a high rate of fatty acid metabolism (lipogenesis and lipolysis), thermogenic (beiging) features, and mechanical properties (protective framework for cardiac autonomic nerves and vessels)
- Compared to visceral fat, EAT is likely to have predominantly local effects
- EAT secretes numerous bioactive factors including adipokines, fibrokines, growth factors and cytokines that can be either protective or harmful depending on the local microenvironment
- Human EAT has a unique transcriptome enriched in genes implicated in extracellular matrix remodeling, inflammation, immune signaling, beiging, thrombosis and apoptosis pathways
- Epicardial adipocytes have a mesothelial origin and derive mainly from the epicardium. Cells originating from the Wt1+ mesothelial lineage can differentiate into EAT, and this "epicardium-to-fat transition" fate could be reactivated after myocardial infarction
- Factors leading to cardiac ectopic fat deposition may include dysfunctional subcutaneous adipose tissue, fibrosis, inflammation, hypoxia, and aging
- Periatrial EAT has a specific transcriptomic signature, and its amount is associated with atrial fibrillation
- EAT is likely to play a role in the pathogenesis of cardiovascular disease and coronary artery disease
- EAT amount is a strong independent predictor of future coronary events
- EAT is increased in obesity, type 2 diabetes, hypertension, metabolic syndrome, nonalcoholic fatty liver disease, and obstructive sleep apnea (OSA)
Introduction
Obesity and type 2 diabetes have become increasingly prevalent in recent years and are strongly associated with cardiovascular diseases, which remain a major contributor to total global mortality despite advances in research and clinical care (195). Organ-specific adiposity has attracted renewed scientific interest because it probably contributes to the pathophysiology of cardiometabolic diseases [START_REF] Despres | Body Fat Distribution and Risk of Cardiovascular Disease: An Update[END_REF] (321). Better phenotyping of obese individuals, improving knowledge of individual risk, and identifying new therapeutic targets are therefore decisive. Epicardial adipose tissue (EAT) is the visceral fat depot of the heart, in direct contact with the myocardium and coronary arteries. Its endocrine and metabolic activity is remarkable, and its key localization allows a singular cross-talk with cardiomyocytes and cells of the vascular wall.
Although only a small amount of EAT is found in rodents, human EAT is readily measured using imaging methods, which has generated more than 1,000 publications in the past decade. In this review, we discuss recent basic and clinical research on EAT: (i) anatomy, (ii) physiology, (iii) origin, (iv) development, (v) clinical applications of EAT measurements, and (vi) its role in pathophysiology, in particular in relation to atrial fibrillation, heart function, coronary artery disease (CAD) and obstructive sleep apnea syndrome.
Systematic review criteria
We searched MEDLINE and PubMed for original articles published over the past ten years, focusing on epicardial adipose tissue. The search terms, used alone or in combination, were "cardiac ectopic fat", "cardiac adiposity", "fatty heart", "ectopic cardiovascular fat", "ectopic fat depots", "ectopic fat deposits", "epicardial fat", "epicardial adipose tissue", "pericardial fat", "pericardial adipose tissue". All articles identified were English-language, full-text papers. We also searched the reference lists of identified articles for further relevant studies.
EAT IN HEALTH
Anatomy of EAT
Definitions and distinction between pericardial and epicardial fat
Epicardial fat is the true visceral fat deposit of the heart (111,253,265). It is most commonly defined as adipose tissue surrounding the heart, located between the myocardium and the visceral pericardium (Figure 1). It should be distinguished from paracardial fat (adipose tissue located external to the parietal pericardium) and pericardial fat (often defined as paracardial plus epicardial fat) [START_REF] Gaborit | Epicardial fat: more than just an "epi" phenomenon?[END_REF] (126). However, the terms pericardial and epicardial are often used interchangeably in the literature, so it is prudent to review carefully how the adipose tissues measured by imaging are defined by the authors of any individual study.
Distribution of EAT in humans and other species
Even though the adipose tissue of the heart was long neglected, anatomists made early observations in humans that it varies in extent and distribution pattern. EAT constitutes on average 20% of heart weight in autopsy series [START_REF] Corradi | The ventricular epicardial fat is related to the myocardial mass in normal, ischemic and hypertrophic hearts[END_REF] (253,259). However, it has been shown to vary widely among individuals, from 4% to 52%, and to be preferentially distributed over the base of the heart, the left ventricular apex, the atrioventricular and interventricular grooves, along the coronary arteries and veins, and over the right ventricle (RV), in particular its free wall (253). In our postmortem study, age, waist circumference and heart weight were the main determinants of EAT amount, which in some cases covered the entire epicardial surface of the heart (284). Importantly, a close functional and anatomical relationship exists between EAT and the myocardium: both share the same microcirculation, with no fascia separating the adipose tissue from the myocardial layers, allowing cellular cross-talk between adipose tissue and cardiac muscle (127). In species other than humans, such as pigs, rabbits or sheep, EAT is relatively abundant, which contrasts with the very small amount of EAT found in rodents (Figure 2) (127). Initially, these findings did not support a critical role of EAT in normal heart physiology and partly explain why EAT has been so poorly studied. However, there is a growing body of evidence that, beyond the amount of EAT, its metabolic and endocrine activity is also crucial.
Physiology of EAT
The current understanding of EAT physiology is still in its infancy. The main anatomical and putative physiological properties of epicardial fat are summarized in Table 1. One of the major limitations in studying EAT physiology is that tissue can only be sampled from patients with cardiac diseases undergoing cardiac surgery, as sampling healthy volunteers would be unethical.
Histology
In humans, EAT has a smaller adipocyte size than subcutaneous or peritoneal adipose tissue [START_REF] Bambace | Adiponectin gene expression and adipocyte diameter: a comparison between epicardial and subcutaneous adipose tissue in men[END_REF]. EAT is, however, composed of far more than adipocytes: it also contains inflammatory, stromal and immune cells, as well as nervous and nodal tissue (206). It has been suggested that EAT may serve as a protective framework for cardiac autonomic nerves and ganglionated plexi (GP). Accordingly, nerve growth factor (NGF), which is essential for the development and survival of sensory neurons, is highly expressed in EAT (266). Atrial EAT is thus often the target of radiofrequency ablation for arrhythmias (see paragraph EAT and atrial fibrillation).
Metabolism
Up to now, our understanding of EAT physiology in humans remains quite limited, and data regarding lipid storage (lipogenesis) and release (lipolysis) come mainly from animal studies.
In guinea pigs, Marchington et al. reported that EAT exhibits an approximately two-fold higher metabolic capacity for fatty acid incorporation, breakdown, and release relative to other intra-abdominal fat depots (198). Considering that free fatty acids (FFA) are the major fuel source for contracting heart muscle, EAT may act as a local energy supply and an immediate ATP source for the adjacent myocardium during times of energy restriction (199).
Conversely, owing to its high lipogenic activity and high expression of fatty acid transporters specialized in intracellular lipid trafficking, such as fatty-acid-binding protein 4 (FABP4) (325), EAT could serve as a buffer against toxic levels of FFA during times of excess energy intake. How FFAs are transported from EAT into the myocardium remains to be elucidated. One hypothesis is that FFAs could diffuse bidirectionally in the interstitial fluid across concentration gradients (265).
Secretome
EAT is more than a fat storage depot. Indeed, it is now widely recognized as an extremely active endocrine organ and a major source of adipokines, chemokines and cytokines that can be either protective or harmful depending on the local microenvironment (127,206). The human EAT secretome is broad and is described in Table 2. This richness probably reflects the complex cellularity of EAT and its cross-talk with neighboring structures. Interleukin (IL)-1β, IL-6, IL-8, IL-10, tumor necrosis factor α (TNF-α), monocyte chemoattractant protein 1 (MCP-1), adiponectin, leptin, visfatin, resistin, secretory phospholipase A2 (sPLA2), and plasminogen activator inhibitor 1 (PAI-1) are examples of bioactive molecules secreted by EAT [START_REF] Cherian | Cellular cross-talk between epicardial adipose tissue and myocardium in relation to the pathogenesis of cardiovascular disease[END_REF][START_REF] Dutour | Secretory Type II Phospholipase A2 Is Produced and Secreted by Epicardial Adipose Tissue and Overexpressed in Patients with Coronary Artery Disease[END_REF] (206,268). Given the lack of anatomical barriers, adipokines produced by EAT are thought to interact with vascular cells or myocytes in two ways: paracrine and/or vasocrine.
The interaction with cardiomyocytes is likely to be paracrine, as close contact between epicardial adipocytes and myocytes exists and fatty infiltration of the myocardium is not rare [START_REF] Corradi | The ventricular epicardial fat is related to the myocardial mass in normal, ischemic and hypertrophic hearts[END_REF] (193,308). Interactions with cells of the vascular wall seem to be paracrine or vasocrine. In paracrine signalling, it is hypothesized that EAT-derived adipokines diffuse directly through the layers of the vessel wall via the interstitial fluid to interact with smooth muscle cells and endothelium, probably influencing the initiation of inflammation and atherogenesis (see EAT and Coronary artery disease (CAD)). An alternative vasocrine signalling mechanism has been proposed, in which EAT-derived adipokines directly enter the lumen of closely apposed adventitial vasa vasorum and are thus transported downstream into the arterial wall (126,265). In addition to the classical "inside-out" cross-talk from the endothelial and intimal layers, this would suggest the existence of an opposite "outside-in" cellular cross-talk (111,124,266).
Putative protective functions
Mechanical protective effects have been attributed to epicardial fat. EAT is thought to act as a shock absorber, protecting the coronary arteries against the torsion induced by the arterial pulse wave and cardiac contraction (253). A permissive role of EAT in vessel expansion and positive remodeling of coronary vessels, helping to maintain the arterial lumen, has been reported (251). Given its high metabolic activity, EAT is likely to be involved in the regulation of fatty acid homeostasis in the coronary microcirculation (199). Some adipokines, such as adiponectin, adrenomedullin and omentin, may have protective effects on the vasculature by regulating arterial vascular tone (vasodilation), reducing oxidative stress, improving endothelial function, and increasing insulin sensitivity [START_REF] Cheng | Adipocytokines and proinflammatory mediators from abdominal and epicardial adipose tissue in patients with coronary artery disease[END_REF][START_REF] Fain | Identification of omentin mRNA in human epicardial adipose tissue: comparison to omentin in subcutaneous, internal mammary artery periadventitial and visceral abdominal depots[END_REF][START_REF] Gaborit | Human epicardial adipose tissue has a specific transcriptomic signature depending on its anatomical peri-atrial, periventricular, or peri-coronary location[END_REF] (283). EAT is also considered an immunological tissue that serves to protect the myocardium and vessels against pathogens [START_REF] Fain | Identification of omentin mRNA in human epicardial adipose tissue: comparison to omentin in subcutaneous, internal mammary artery periadventitial and visceral abdominal depots[END_REF] (266). Hence, under physiological conditions EAT can exert cardioprotective actions through the production of anti-atherogenic cytokines. However, the shift of EAT towards a more pro-inflammatory or pro-fibrosing phenotype is likely to favor many pathophysiological states (see EAT in diseases). Determining the factors that regulate this fragile balance is a major challenge for the coming years.
Transcriptome
EAT has a unique transcriptomic signature when compared to subcutaneous fat [START_REF] Gaborit | Human epicardial adipose tissue has a specific transcriptomic signature depending on its anatomical peri-atrial, periventricular, or peri-coronary location[END_REF]188).
Using a pangenomic approach, we found that EAT was particularly enriched in extracellular matrix remodeling, inflammation, immune signaling, beiging, coagulation, thrombosis and apoptosis pathways [START_REF] Gaborit | Human epicardial adipose tissue has a specific transcriptomic signature depending on its anatomical peri-atrial, periventricular, or peri-coronary location[END_REF]. Omentin (ITLN1) was the most upregulated gene in EAT, as confirmed by others [START_REF] Fain | Identification of omentin mRNA in human epicardial adipose tissue: comparison to omentin in subcutaneous, internal mammary artery periadventitial and visceral abdominal depots[END_REF] (102), and network analysis revealed that its expression level was related to that of many other genes, supporting an important role for this cardioprotective adipokine (273). Remarkably, we observed a specific transcriptomic signature for EAT taken at different anatomical sites. EAT from the periventricular area overexpressed genes implicated in Notch/p53, inflammation, ABC transporters and glutathione metabolism. EAT from around the coronary arteries overexpressed genes implicated in proliferation, O-N glycan biosynthesis, and sphingolipid metabolism. Finally, EAT from the atria overexpressed genes implicated in oxidative phosphorylation, cell adhesion, cardiac muscle contraction and the intracellular calcium signalling pathway, suggesting a specific contribution of periatrial EAT to cardiac muscle activity. These findings further support the influence of the microenvironment on the EAT gene profile. Just as abdominal adipose tissue comprises many different depots, there is not one but rather many epicardial adipose tissues.
Thermogenesis
The thermogenic and browning potential of epicardial fat has received increasing attention and has recently been reviewed elsewhere [START_REF] Chechi | Thermogenic potential and physiological relevance of human epicardial adipose tissue[END_REF]. Brown adipose tissue (BAT) generates heat in response to cold temperatures and activation of the autonomic nervous system. Heat generation is due to the expression of uncoupling protein 1 (UCP-1) in the mitochondria of brown adipocytes (183). Until quite recently, BAT was thought to be of metabolic importance only in hibernating mammals and human newborns. However, recent studies using positron emission tomography (PET) have reported the presence of metabolically active BAT in human adults [START_REF] Cypess | Identification and importance of brown adipose tissue in adult humans[END_REF] (224). Interestingly, Sacks et al. reported that UCP-1 expression was fivefold higher in EAT than in substernal fat, and undetectable in subcutaneous fat, suggesting that EAT could have "brown" fat properties to defend the myocardium and coronary arteries against hypothermia [START_REF] Chechi | Brown fat like gene expression in the epicardial fat depot correlates with circulating HDL-cholesterol and triglycerides in patients with coronary artery disease[END_REF]. The authors further demonstrated that the structure and architecture of EAT differ among neonates, infants, and children, with more genes implicated in the control of thermogenesis in neonatal EAT and a shift towards lipogenesis through infancy (230).
Further studies identified that EAT has a beige or brite profile, with the expression of beige markers such as CD137 (267). In addition, we reported that periventricular EAT could be more prone to browning, as it expressed more UCP-1 than other epicardial fat stores [START_REF] Bellows | Influence of BMI on level of circulating progenitor cells[END_REF]. Furthermore, several genes upregulated in periventricular EAT encoded enzymes of the glutathione metabolism pathway; these enzymes have a specific signature in brown adipose tissue, owing to the uncoupling of the respiratory chain and the increase in oxidative metabolism (246). 'Brite' (i.e. brown-in-white) or 'beige' adipocytes are multilocular adipocytes located within white adipose tissue islets, which have the capacity to be recruited and to express UCP-1, mainly upon cold exposure [START_REF] Cousin | Occurrence of brown adipocytes in rat white adipose tissue: molecular and morphological characterization[END_REF] (282,339). It has been suggested that beige adipose tissue in EAT originates from the recruitment of white adipocytes that produce UCP-1 in response to browning factors such as myokines like irisin, cardiac natriuretic peptides, or fibroblast growth factor 21 (FGF21) [START_REF] Bordicchia | Cardiac natriuretic peptides act via p38 MAPK to induce the brown fat thermogenic program in mouse and human adipocytes[END_REF]. Whether these factors have a direct beiging effect on EAT and can stimulate its thermogenic potential remains to be addressed. A recent study demonstrated that increased reactive oxygen species (ROS) production in the epicardial fat of CAD patients was possibly associated with brown-to-white transdifferentiation of adipocytes within EAT [START_REF] Dozio | Increased reactive oxygen species production in epicardial adipose tissues from coronary artery disease patients is associated with brown-to-white adipocyte trans-differentiation[END_REF]. Accordingly, another study revealed that an increase in brown EAT was associated with a lack of progression of coronary atherosclerosis in humans [START_REF] Ahmadi | Aged garlic extract with supplement is associated with increase in brown adipose, decrease in white adipose tissue and predict lack of progression in coronary atherosclerosis[END_REF]. These results point to a protective role of EAT browning against CAD development. Whether these beige adipocytes interspersed among white epicardial adipocytes could serve as a therapeutic target to improve cardiac health and metabolism remains to be explored.
The origin of epicardial adipose tissue
In recent years, there has been growing interest in the distribution and function of adipocytes and the developmental origins of white adipose tissue (WAT) [START_REF] Billon | Developmental origins of the adipocyte lineage: new insights from genetics and genomics studies[END_REF] (109,168,244).
Since adipocytes are located close to the microvasculature, it has been suggested that white adipocytes could have an endothelial origin (307,315). However, this hypothesis has been challenged by recent lineage-tracing experiments that revealed the epicardium as the origin of epicardial fat cells [START_REF] Chau | Visceral and subcutaneous fat have different origins and evidence supports a mesothelial source[END_REF] (180,343). Chau et al. used genetic lineage tracing to identify descendants of cells expressing the Wilms' tumor gene Wt1 (Wt1-Cre mice) and found a major ontogenetic difference between visceral adipose tissue (VAT) and subcutaneous WAT [START_REF] Chau | Visceral and subcutaneous fat have different origins and evidence supports a mesothelial source[END_REF]. The authors observed that epicardial fat and five other visceral fat depots (gonadal, mesenteric, perirenal, retroperitoneal, and omental) appearing postnatally received a significant contribution from cells that once expressed Wt1 late in gestation. By contrast, Wt1-expressing cells did not contribute to the development of inguinal WAT or brown adipose tissue (BAT). Wt1 is a major regulator of mesenchymal progenitors in the developing heart. During development, Wt1 expression is restricted mainly to the intermediate mesoderm, parts of the lateral plate mesoderm, tissues that derive from these, and the mesothelial layer that lines the visceral organs and the peritoneum (201). Postnatally, in their experiments, a subset of visceral WAT continued to arise from Wt1-expressing cells, consistent with the finding that Wt1 marks a proportion of cell populations enriched in WAT progenitors [START_REF] Chau | Visceral and subcutaneous fat have different origins and evidence supports a mesothelial source[END_REF]. Depending on the depot, Wt1+ cells comprised 4-40% of the adult progenitor population, being most abundant in omental and epicardial fat, which suggests heterogeneity in the visceral WAT lineage. Finally, using FACS analysis, the authors showed that Wt1-expressing mesothelial cells expressed accepted markers of adipose precursors (CD29, CD34, Sca1). In addition, cultures of epididymal appendage explants gave rise to adipocytes from Wt1+ cells, confirming that Wt1-expressing mesothelium can produce adipocytes [START_REF] Chau | Visceral and subcutaneous fat have different origins and evidence supports a mesothelial source[END_REF]. The concept of a mesothelial origin of epicardial fat cells has been supported by contemporaneous lineage-tracing studies from Liu et al., who used the double transgenic mouse line Wt1-CreER;Rosa26RFP/+ to trace epicardium-derived cells (EPDCs), together with an adenovirus expressing Cre under the epicardium-specific promoter Msln (180). They demonstrated that epicardial fat descends from embryonic epicardial progenitors expressing Wt1 and Msln, a process they referred to as epicardium-to-fat transition (ETFT).
Furthermore, cells of the epicardium in adult animals gave rise to epicardial adipocytes following myocardial infarction, but not during normal heart homeostasis (180). Another group confirmed these results and further established IGF1R signaling as a key pathway governing EAT formation after myocardial injury by redirecting the fate of Wt1+ lineage cells (349). Taken together, these data suggest that while embryonic epicardial cells contribute to EAT, there is minimal ETFT in the normal adult heart, but this process can be reactivated after myocardial infarction or severe injury (Figure 3). This important discovery provides new insights for the treatment of cardiovascular diseases and for regenerative medicine and stem cell therapy, as isolated human epicardial adipose-derived stem cells (ADSCs) showed the highest cardiomyogenic potential compared with the pericardial and omental subtypes (340).
Further investigations are awaited in humans to decipher the mechanisms of ETFT reactivation in the setting of metabolic and cardiovascular diseases.
Another study clarified the discrepancy in EAT abundance among species (343). The authors confirmed in mice that EAT originates from the epicardium and that the adoption of the adipocyte fate in vivo requires the transcription factor peroxisome proliferator-activated receptor gamma (PPARγ). By stimulating PPARγ at the time of epicardium-mesenchymal transformation, they were indeed able to induce this adipocyte fate ectopically in the ventricular epicardium of embryonic and adult mice (343). Human embryonic ventricular epicardial cells natively express PPARγ, which explains the abundant fat seen in human hearts at birth and throughout life, whereas in mice EAT remains small and restricted to the atrioventricular groove.
Whereas EAT seems to have an epicardial origin, adipocytes present within the myocardium could have a different one (Figure 3). Indeed, adipocytes interspersed among the right ventricular muscle fibres are commonly seen at necropsy (308). This is thought to reflect the normal physiological process of involution that occurs with ageing, and is different from the accumulation of triglycerides within cardiomyocytes (namely steatosis). A recent study identified an endocardial origin of intramyocardial adipocytes during development (351). Nevertheless, the endocardium of the postnatal heart did not contribute to intramyocardial adipocytes during homeostasis or after myocardial infarction, suggesting that the endocardium-to-fat transition is not recapitulated after myocardial infarction. It remains unknown, however, whether endocardial cells could give rise to excessive adipocytes in other types of cardiovascular disease such as arrhythmogenic right ventricular cardiomyopathy. In this genetic disease, excessive adipose tissue replaces the myocardium of the right ventricle, leading to ventricular arrhythmias and sudden death (182).
Further lineage studies are therefore needed to better understand whether mesothelial progenitors contribute to epicardial adipocyte hyperplasia in obesity, type 2 diabetes or cardiovascular diseases.
What drives the development of ectopic fat in the heart?
It is likely that genetic, epigenetic and environmental factors are involved in this process.
EAT has been found to vary among populations of different ethnicities [START_REF] Baba | CT Hounsfield units of brown adipose tissue increase with activation: preclinical and clinical studies[END_REF][START_REF] Bachar | Epicardial adipose tissue as a predictor of coronary artery disease in asymptomatic subjects[END_REF][START_REF] Baker | Epicardial adipose tissue as a source of nuclear factor-kappaB and c-Jun N-terminal kinase mediated inflammation in patients with coronary artery disease[END_REF][START_REF] Bakkum | The impact of obesity on the relationship between epicardial adipose tissue, left ventricular mass and coronary microvascular function[END_REF][START_REF] Bambace | Adiponectin gene expression and adipocyte diameter: a comparison between epicardial and subcutaneous adipose tissue in men[END_REF][START_REF] Bapat | Depletion of fat-resident Treg cells prevents age-associated insulin resistance[END_REF][START_REF] Barandier | Mature adipocytes and perivascular adipose tissue stimulate vascular smooth muscle cell proliferation: effects of aging and obesity[END_REF]: EAT volume or thickness was reported to be lower in South Asians and Southeast and East Asians than in Caucasians [START_REF] Barandier | Mature adipocytes and perivascular adipose tissue stimulate vascular smooth muscle cell proliferation: effects of aging and obesity[END_REF], and higher in White or Japanese individuals than in Black or African American individuals [START_REF] Bachar | Epicardial adipose tissue as a predictor of coronary artery disease in asymptomatic subjects[END_REF][START_REF] Baker | Epicardial adipose tissue as a source of nuclear factor-kappaB and c-Jun N-terminal kinase mediated inflammation in patients with coronary artery disease[END_REF][START_REF] Bambace | Adiponectin gene expression and adipocyte diameter: a comparison between epicardial and subcutaneous adipose tissue in men[END_REF].
In a genome-wide association analysis including 5,487 individuals of European ancestry from the Framingham Heart Study (FHS) and the Multi-Ethnic Study of Atherosclerosis (MESA), a unique locus (rs10198628) near TRIB2 (Tribbles homolog 2 gene) was found to be associated with cardiac ectopic fat deposition, reinforcing the concept that there are unique genetic underpinnings to ectopic fat distribution [START_REF] Baroja-Mazo | The NLRP3 inflammasome is released as a particulate danger signal that amplifies the inflammatory response[END_REF]. Animal studies have also revealed possible effects of fetal programming, such as late-gestation undernutrition, on predisposition to visceral adiposity [START_REF] Barone-Rochette | Left ventricular remodeling and epicardial fat volume in obese patients with severe obstructive sleep apnea treated by continuous positive airway pressure[END_REF]. Other environmental factors such as aging, excess caloric intake, a sedentary lifestyle, pollutants, and the microbiota may also modulate ectopic fat deposition [START_REF] Bastarrika | Relationship between coronary artery disease and epicardial adipose tissue quantification at cardiac CT: comparison between automatic volumetric measurement and manual bidimensional estimation[END_REF][START_REF] Batal | Left atrial epicardial adiposity and atrial fibrillation[END_REF]. In obesity and type 2 diabetes, increased amounts of ectopic fat have been consistently reported, but the mobilization of these ectopic fat depots seems to be site specific [START_REF] Bellows | Influence of BMI on level of circulating progenitor cells[END_REF][START_REF] Bidault | LMNA-linked lipodystrophies: from altered fat distribution to cellular alterations[END_REF][START_REF] Billon | Developmental origins of the adipocyte lineage: new insights from genetics and genomics studies[END_REF].
Studying the cellular mechanisms that favor ectopic fat accumulation has therefore become an important focus of research.
Factors leading to ectopic fat development
Expandability hypothesis: dysfunctional subcutaneous fat
There are several potential mechanisms that might explain the tendency to deposit ectopic fat, but one convincing hypothesis is that an individual's capacity to store lipids in subcutaneous adipose tissue has a set maximal limit. When this limit is exceeded, increased import and storage of lipids in visceral adipose tissue and in non-adipose tissues occurs. This is the adipose tissue expandability hypothesis (323). The limited capacity of the subcutaneous adipose tissue to expand induces a "lipid spillover" to other cell types, leading to ectopic lipid deposition, which, in turn, drives insulin resistance and the collective pathologies that encompass the metabolic syndrome (319).
There is some intriguing evidence from human studies that supports the adipose tissue expandability hypothesis. In LMNA-linked lipodystrophies, the lack of subcutaneous adipose tissue results in severe insulin resistance, hypertriglyceridemia and increased ectopic fat deposition in the liver and the heart [START_REF] Bidault | LMNA-linked lipodystrophies: from altered fat distribution to cellular alterations[END_REF][START_REF] Galant | A Heterozygous ZMPSTE24 Mutation Associated with Severe Metabolic Syndrome, Ectopic Fat Accumulation, and Dilated Cardiomyopathy[END_REF]. Animal studies have revealed that transplantation of subcutaneous adipose tissue (SAT) or removal of VAT in obese mice reversed the adverse metabolic effects of obesity and improved glucose homeostasis and hepatic steatosis [START_REF] Foster | Removal of intra-abdominal visceral adipose tissue improves glucose tolerance in rats: role of hepatic triglyceride storage[END_REF] (117). These data place adipose tissue function at the center of ectopic lipid deposition.
Fibrosis
Adipocytes are surrounded by a network of extracellular matrix (ECM) proteins, which provide mechanical support and respond to various signaling events (151,223). During adipogenesis, both the formation and the expansion of the lipid droplet require dramatic morphological changes, involving both cellular and ECM remodeling (208). Throughout the progression from the lean to the obese state, adipose tissue has been reported to actively change its ECM to accommodate growth [START_REF] Alligier | Subcutaneous adipose tissue remodeling during the initial phase of weight gain induced by overfeeding in humans[END_REF][START_REF] Divoux | Architecture and the extracellular matrix: the still unappreciated components of the adipose tissue[END_REF] (244). Moreover, it has been shown that metabolically dysfunctional adipose tissue exhibits a higher degree of fibrosis, characterized by abundant ECM proteins and particularly abnormal collagen deposition (151). Therefore, as obesity progresses, ECM rigidity, composition and remodeling impact adipose tissue expandability by physically limiting adipocyte hypertrophy, thus promoting lipotoxicity and ectopic fat deposition. Indeed, genetic ablation of collagen VI (a highly enriched ECM constituent of adipose tissue (137)) in mouse models of genetic or dietary obesity impaired ECM stability, reduced adipose tissue fibrosis and dramatically improved glucose and lipid metabolism (151). In this mouse model, the lack of collagen VI allowed adipocytes to increase their size without ECM constraints, which favored lipid storage and minimized ectopic lipid accumulation in non-adipose tissues. Such results suggest that adipose tissue fibrosis is likely to induce systemic metabolic alterations, much as fibrosis does in the liver, heart or kidney. Moreover, it appears that maintaining a high degree of ECM elasticity allows adipose tissue to expand in a healthy manner, without adverse metabolic consequences (299).
Although hypertrophic adipocytes exhibit a profibrotic transcriptome (114), the contribution and identity of the different cell types responsible for fibrotic deposits in adipose tissue are difficult to determine. However, we and others have demonstrated that macrophages are master regulators of fibrosis in adipose tissue [START_REF] Bourlier | TGFbeta family members are key mediators in the induction of myofibroblast phenotype of human adipose tissue progenitor cells by macrophages[END_REF] (150,299). They produce high levels of transforming growth factor β1 (TGF-β1), and we and others have demonstrated that they can directly activate preadipocytes (the so-called adipose progenitor cells) to differentiate towards a myofibroblast-like phenotype, thus promoting fibrosis in adipose tissue during its unhealthy, excessive development [START_REF] Bourlier | TGFbeta family members are key mediators in the induction of myofibroblast phenotype of human adipose tissue progenitor cells by macrophages[END_REF] (150). Notably, it has recently been demonstrated that the transcription factor interferon regulatory factor 5 (IRF5), known to polarize macrophages toward an inflammatory phenotype (162), directly represses TGF-β1 expression in macrophages, thereby controlling ECM deposition [START_REF] Dalmas | Irf5 deficiency in macrophages promotes beneficial adipose tissue expansion and insulin sensitivity during obesity[END_REF]. Importantly, IRF5 expression in obese individuals is negatively associated with insulin sensitivity and collagen deposition in visceral adipose tissue (162).
It has been proposed that fibrosis development in adipose tissue promotes adipocyte necrosis
which in turn induces the infiltration of immune cells to remove cell debris, thus leading to a low-grade inflammatory state. Whether fibrosis is a cause or a consequence of adipose tissue inflammation in obesity is still a matter of intense debate (258). That being said, it is undisputed that fibrosis and inflammation in adipose tissue are closely related.
Inflammation
The link between obesity and adipose tissue inflammation was first suspected with the finding that levels of the proinflammatory cytokine TNF-α were increased in obese adipose tissue and that its blockade improved insulin sensitivity (120, 121). Subsequently, macrophages were found to infiltrate obese adipose tissue (329, 341), which led to the general concept that obesity is a state of chronic, unmitigated inflammation with insidious consequences, in which adipose tissue releases proinflammatory cytokines and adipokines that impair insulin sensitivity in metabolic tissues [START_REF] Cildir | Chronic adipose tissue inflammation: all immune cells on the stage[END_REF]. Very importantly, of the various fat depots, visceral adipose tissue has been shown to be the predominant source of chronic systemic inflammation (140). Under lean conditions, adipose tissue houses a number of immune cells, mostly M2-like macrophages (with a 4:1 M2:M1 ratio (186)), as well as eosinophils and regulatory T cells, which secrete IL-4/IL-13 and IL-10 respectively, polarizing macrophages toward an anti-inflammatory phenotype (185, 331). Of note, the M2-like phenotype of macrophages has been reported to be maintained by both immune cells and adipocytes (203). Importantly, the polarization of macrophages from an M2 to a pro-inflammatory M1-like phenotype has been considered a key event in the induction of visceral adipose tissue inflammation in obesity [START_REF] Bourlier | Remodeling phenotype of human subcutaneous adipose tissue macrophages[END_REF][START_REF] Castoldi | The Macrophage Switch in Obesity Development[END_REF] (185,240). However, the crucial trigger for such polarization, as well as for the increase of immune cells in adipose tissue, is still unclear, but is likely to derive from adipocytes. As already mentioned above, as adipose tissue mass increases, several morphological changes occur, leading to the activation of stress pathways such as endoplasmic reticulum stress, oxidative stress and the inflammasome within adipose tissue [START_REF] Clement | Weight of Pericardial Fat on Coronaropathy[END_REF][START_REF] Cypess | Identification and importance of brown adipose tissue in adult humans[END_REF]. Meanwhile, adiponectin production drops, leptin production increases and adipose tissue produces inflammatory mediators including IL-1β, IL-6, IL-8, IL-10, TGF-β, TNF-α, MCP-1, plasminogen activator inhibitor-1 (PAI-1), macrophage migration inhibitory factor, metallothionein, osteopontin, chemerin, and prostaglandin E2 (140,196). The drop in adiponectin results in decreased glucose uptake, while the change in leptin affects satiety signals but also the immune system. Indeed, the leptin receptor (LEP-R) is expressed on most immune cells (331), and increased leptin production by adipose tissue could dramatically promote immune cell expansion (236). Mice that are leptin (ob/ob) or leptin receptor (db/db) deficient are obese and exhibit a strong reduction in functional immune cells (regulatory T cells, NK cells and dendritic cells (166, 214)). Paradoxically, very provocative recent data argue that a reduced ability of adipocytes to sense and respond to proinflammatory stimuli decreases the capacity for healthy adipose tissue expansion and remodeling. As for fibrosis, such an inability would result in increased high-fat-diet-induced ectopic fat accumulation and metabolic dysfunction.
Moreover, the authors demonstrated that proinflammatory responses in adipose tissue are essential for both proper ECM remodeling and angiogenesis, two processes known to facilitate adipogenesis, thus favoring healthy adipose tissue expansion (332). Finally, new regulatory players in adipose tissue homeostasis have been identified: innate lymphoid type 2 cells (ILC2s) and IL-33. ILC2s are a regulatory subtype of ILCs, immune cells that lack a specific antigen receptor and can produce a spectrum of effector cytokines matching T helper cell subsets (294). ILC2s are activated by IL-33 and produce large amounts of the type 2 cytokines IL-5 and IL-13 (217).
Upon binding to its receptor (ST2), IL-33 induces the production of large amounts of anti-inflammatory cytokines by adipose tissue ILC2s, as well as the polarization of macrophages toward an M2 phenotype, which results in both a reduction in adipose tissue mass and an improvement in insulin resistance (110).
Considerable changes in the composition and phenotype of immune cells occur in adipose tissue during the onset of obesity, suggesting that these cells, together with adipocytes, are actively involved in releasing secretory products. In contrast to chronic systemic inflammation, which interferes with optimal metabolic fitness, potent acute adipose tissue inflammation is an adaptive response to stress-inducing conditions and has beneficial effects, since it enables healthy adipose tissue remodeling and expansion.
Hypoxia
In the attempt to identify the trigger of adipose tissue dysfunction in obesity, the theory of insufficient angiogenesis to maintain normoxia in the developing fat pad has also been proposed (316,345). Interestingly, parallels exist between the excessive development of adipose tissue and tumors, in that both must vascularize a growing tissue to provide sufficient O2 and nutrients (298). Various arguments strongly support the idea of "hypoxia in adipose tissue". First, mature hypertrophic white adipocytes can reach a diameter of up to 200 µm in obese patients (205,286), whereas the normal diffusion distance of O2 across tissues is 100 to 200 µm [START_REF] Brahimi-Horn | Oxygen, a source of life and stress[END_REF]. Second, although lean subjects exhibit a postprandial rise in adipose tissue blood flow, obese individuals do not [START_REF] Goossens | Increased Adipose Tissue Oxygen Tension in Obese Compared With Lean Men Is Accompanied by Insulin Resistance, Impaired Adipose Tissue Capillarization, and Inflammation[END_REF] (148), indicating that O2 delivery to adipose tissue is indeed impaired in obesity. Third, studies performed in different murine models of obesity have robustly shown that, in obese mice, the expression of hypoxia-responsive genes is increased, the number of hypoxic foci (detected with hydroxyprobe systems such as pimonidazole) is increased, and adipose tissue oxygen partial pressure is lower (256,344,347). As a result of this hypoxic state, hypoxia-inducible factor 1α (HIF-1α), which has been described as the "master regulator of oxygen homeostasis" (261, 274, 317), is induced in adipose tissue. The molecular and cellular responses of mature adipocytes to reduced O2 tension have been intensively investigated (336). Hypoxia has been shown to dramatically modify the expression and/or release of leptin (increase), adiponectin (decrease) and inflammation-related proteins (IL-6, IL-1β, MCP-1), indicating the establishment of an inflammatory state (336). For that reason, hypoxia is postulated to explain the development of inflammation and is considered a major initiating factor for ECM production, thus triggering the subsequent metabolic dysfunction of adipose tissue in obesity (299, 317). Other functional changes described concern the rates of lipolysis and lipogenesis, with lipolysis seemingly increased [START_REF] Geiger | Identification of hypoxia-induced genes in human SGBS adipocytes by microarray analysis[END_REF] while both lipogenesis and the uptake of fatty acids are decreased (232), and the fact that hypoxia may directly impair adipocyte insulin sensitivity (257). Other cell types present in adipose tissue have been shown to respond to hypoxia. Indeed, it has been clearly demonstrated that hypoxia induces a proinflammatory phenotype in macrophages (218). Moreover, macrophages have been localized to hypoxic areas of adipose tissue in obese mice, which augments their inflammatory response (256). In addition to macrophages, preadipocytes have been shown to markedly increase their production of VEGF and leptin under hypoxic culture conditions, whereas PPARγ expression was dramatically diminished, thus reducing preadipocyte adipogenic abilities in a hypoxic environment (153).
Aging
With aging, adipose tissue changes in abundance, distribution, cell composition and endocrine signaling. Indeed, through middle and early old age, body fat percentage increases in both men and women (107,165,211) and shifts from subcutaneous depots to intra-abdominal visceral depots [START_REF] Enzi | Subcutaneous and visceral fat distribution according to sex, age, and overweight, evaluated by computed tomography[END_REF] (235). Moreover, the aging process is accompanied by changes in adipose tissue metabolic functions, such as decreased insulin responsiveness and altered lipolysis, which could cause excessive free fatty acid release with subsequent ectopic lipid deposition and lipotoxicity [START_REF] Das | Caloric restriction, body fat and ageing in experimental models[END_REF][START_REF] Fukagawa | Loss of skeletal muscle mass with aging: effect on glucose tolerance[END_REF] (287). From a metabolic point of view, the balance between fat storage and oxidation is disrupted with aging, and the capacity of tissues to oxidize fat progressively decreases. Therefore, the increase in adiposity with aging is also likely due to a positive energy balance, with decreased physical activity and basal metabolic rate but maintained caloric intake [START_REF] Enzi | Subcutaneous and visceral fat distribution according to sex, age, and overweight, evaluated by computed tomography[END_REF] (245). Thus, fat aging is associated with age-related diseases, lipotoxicity and reduced longevity (216, 309). Aged adipose tissue is also characterized by reduced adipocyte size, fibrosis, endothelial dysfunction and diminished angiogenic capacity [START_REF] Donato | The impact of ageing on adipose structure, function and vasculature in the B6D2F1 mouse: evidence of significant multisystem dysfunction[END_REF]. Importantly, extensive changes in preadipocyte functions occur with aging [START_REF] Djian | Influence of anatomic site and age on the replication and differentiation of rat adipocyte precursors in culture[END_REF] (154,155). These include decreased preadipocyte replication [START_REF] Djian | Influence of anatomic site and age on the replication and differentiation of rat adipocyte precursors in culture[END_REF], diminished adipogenic abilities (155), increased susceptibility to lipotoxicity (108), and increased production of pro-inflammatory cytokines, chemokines and ECM-modifying proteases [START_REF] Cartwright | Aging, depot origin, and preadipocyte gene expression[END_REF] (310).
As in obesity, inflammation is a common feature of aging (215,295). Associated with this low-grade inflammatory state, macrophages have been reported to accumulate with age in subcutaneous adipose tissue. Conversely, no significant change was observed in the visceral depot; however, the ratio of pro-inflammatory M1 macrophages to anti-inflammatory M2 macrophages has been shown to increase with aging [START_REF] Garg | Changes in adipose tissue macrophages and T cells during aging[END_REF] (185,187). Interestingly, T cell populations have also been reported to change with aging. Specifically, Treg cells accumulate to unusually high levels as a function of age and exacerbate both the decline of adipose metabolic function and the rise in insulin resistance [START_REF] Bapat | Depletion of fat-resident Treg cells prevents age-associated insulin resistance[END_REF] (187). Aging is also linked with immunosenescence, a process leading to dysregulation of innate and adaptive immune responses (106, 241). Notably, T cell dysfunction has been described and might also lead to systemic increases in TNF-α, IL-6 and acute-phase proteins such as C-reactive protein and serum amyloid A [START_REF] Bruunsgaard | Age-related inflammatory cytokines and disease[END_REF] (270). The "redox stress hypothesis" also proposes that age-related redox imbalance activates various pro-inflammatory signaling pathways, leading to tissue "inflammaging" and immune deregulation (288). Of note, considerable accumulation of senescent cells has been reported in aging adipose tissue (309). Among the various changes that occur in senescent cells, multiple cytokines, chemokines, growth factors, matrix metalloproteinases and other senescence-associated secretory phenotype (SASP) proteins are secreted and have been shown to induce or sustain the age-related inflammatory state [START_REF] Coppe | Senescence-associated secretory phenotypes reveal cellnonautonomous functions of oncogenic RAS and the p53 tumor suppressor[END_REF] (187,235,342). It was recently shown that removing senescent cells from older mice improves adipogenesis and metabolic function (342). The authors propose that senescent cell removal may facilitate healthy adipose tissue expansion, less ectopic fat formation and improved insulin sensitivity (235).
Circulating adipose stem/stromal cells
Ectopic fat deposition can also take the form of mature adipocytes, which "infiltrate" non-adipose organs such as muscle, pancreas and heart. In contrast to ectopic lipid accumulation, the causes and mechanisms responsible for ectopic adipocyte formation are largely unknown [START_REF] Bluher | Adipose tissue dysfunction in obesity[END_REF], as are their cellular origin and the mechanisms controlling their metabolic activity [START_REF] Addison | Intermuscular Fat: A Review of the Consequences and Causes[END_REF] (248,313). As already discussed in the present review, adipose tissue depots undergo active remodeling throughout adulthood. To enable such remodeling, the presence of precursor cells exhibiting adipogenic potential is necessary (272). A population of multipotent progenitors, the adipose-derived stem/stromal cells (ASCs, long identified as preadipocytes), was identified by various studies, including ours, as exhibiting such abilities [START_REF] Gimble | Adipose-derived adult stem cells: isolation, characterization, and differentiation potential[END_REF] (204,205,262,275,352). ASCs, like their bone marrow counterparts the mesenchymal stem/stromal cells (MSCs), are endowed with multilineage mesodermal differentiation potential as well as regenerative abilities, leading to their extensive investigation from a therapeutic and tissue-engineering perspective [START_REF] Ferraro | Adipose Stem Cells: From Bench to Bedside[END_REF][START_REF] Gimble | Human adipose-derived cells: an update on the transition to clinical translation[END_REF] (158). Adipose tissue remodeling is frequently reported to be associated with the infiltration of various cell populations (226, 329). However, adipose tissue is rarely seen as a reservoir of exportable cells. Indeed, cell export, the so-called mobilization process, has been studied essentially in bone marrow (169). For instance, in response to stress or injury, hematopoietic stem/progenitor cells lose their anchorage in the bone marrow microenvironment and are increasingly mobilized into the circulation. Cell mobilization involves chemoattractants and adhesion molecules, and among these factors the chemokine CXCL12 and its receptor CXCR4 are dominant in controlling stem/progenitor cell trafficking [START_REF] Döring | The CXCL12/CXCR4 chemokine ligand/receptor axis in cardiovascular disease[END_REF] (170,171). Interference with CXCL12/CXCR4-mediated retention is a fundamental mechanism of stem/progenitor cell mobilization. Such interference can be obtained by inducing (i) a decrease in CXCL12 in the microenvironment through proteolysis by the protease dipeptidyl-peptidase 4 (DPP4, also known as CD26) [START_REF] Christopherson Kw 2nd | Cell surface peptidase CD26/DPPIV mediates G-CSF mobilization of mouse progenitor cells[END_REF], (ii) destabilization of CXCL12 by MMP9, neutrophil elastase or cathepsin G (175), (iii) an increase in CXCL12 plasma levels, which favors CXCL12-induced migration of stem/progenitor cells into the circulation over their retention in the bone marrow (213), and (iv) CXCR4 antagonism, for instance with AMD3100, which induces the rapid release of stem/progenitor cells from the bone marrow into the circulation [START_REF] Dar | Rapid mobilization of hematopoietic progenitors by AMD3100 and catecholamines is mediated by CXCR4-dependent SDF-1 release from bone marrow stromal cells[END_REF].
We and others have reported that both human and murine native (freshly harvested) ASCs express functional CXCR4 [START_REF] Gil-Ortega | Native adipose stromal cells egress from adipose tissue in vivo: evidence during lymph node activation[END_REF] (276). Moreover, we demonstrated for the first time that in vivo administration of AMD3100 (a CXCR4 antagonist) induces the rapid mobilization of ASCs from subcutaneous adipose tissue into the circulation [START_REF] Gil-Ortega | Ex vivo microperfusion system of the adipose organ: a new approach to studying the mobilization of adipose cell populations[END_REF][START_REF] Gil-Ortega | Native adipose stromal cells egress from adipose tissue in vivo: evidence during lymph node activation[END_REF].
Interestingly, obesity has been associated with an increased number of circulating MSCs, the tissue origin of which has not been identified [START_REF] Bellows | Influence of BMI on level of circulating progenitor cells[END_REF]. Moreover, while a reduction in CXCL12 levels has been demonstrated in adipose tissue in obesity (227), CXCL12 plasma levels have been shown to increase dramatically in the context of type 2 diabetes (147, 181).
Therefore, since we showed that subcutaneous adipose tissue releases adipose progenitors via a CXCL12/CXCR4-dependent mechanism, one can speculate that the unhealthy development of subcutaneous adipose tissue might trigger the aberrant release of adipose progenitors into the circulation and their subsequent infiltration into non-adipose tissues, leading to ectopic adipocyte formation (Figure 4).
The mechanisms driving the development of ectopic fat deposition and its consequences are summarized in Figure 4. What drives the development of one ectopic fat depot rather than another remains unknown and needs to be explored further in clinical and experimental settings.
EAT IMAGING
Noninvasive Imaging Quantification of EAT
EAT can be assessed relatively easily by a variety of imaging techniques, whose characteristics are summarized in Table 3. Epicardial fat quantification is usually performed on an examination acquired during a clinical work-up for a condition other than fat distribution assessment. In research settings, quantification of EAT is of major interest in several cardiac and metabolic diseases. The pericardium is the anatomical boundary between epicardial and paracardial fat. As outlined earlier in this review, these two tissues have different embryonic origins (see paragraph EAT origin) and different vascularization, and their hypertrophy has different causes and consequences (265). The main problem for the quantification of epicardial fat is the precise definition of the anatomical limit of the pericardium. The normal pericardium is a very thin layer and requires cardiac ultrasound, gated MRI sequences or synchronized CT acquisitions to be depicted. Besides the need for an acquisition that correctly depicts the pericardial layer, manual quantification of epicardial fat volume is time-consuming. Several teams have recently developed analysis software allowing semi-automatic quantification of epicardial fat (192,222,229). These tools are now available to the research community, and further progress will reduce the time needed for the analysis phase.
Echocardiography
Quantification of epicardial fat using transthoracic echocardiography (TTE) is limited to measurements of fat thickness surrounding the right ventricle through one echoic window. Indeed, EAT is visible as an echo-free space between the outer wall of the myocardium and the visceral layer of the pericardium (Figure 5). The thickness of this space is measured on the right ventricular free wall in the parasternal long- and short-axis views, where EAT is thought to be thickest. This technique, which is the most accessible and affordable imaging modality, was described by the group of Iacobellis (125). Since the pericardium can be distinguished on TTE in a normal patient, the distinction between epicardial and paracardial fat is feasible with this technique.
Computed Tomography (CT)
CT is widely used in the assessment of thoracic and cardiac diseases. The majority of clinical studies to date examining the associations of epicardial fat depots with cardiovascular disease have used CT.
With its high spatial resolution, CT allows pericardial fat to be readily and reproducibly identified (Figure 6). Pericardial fat quantification is possible on non-synchronized images, but motion artefacts may preclude a clear distinction between epicardial and paracardial fat [START_REF] Britton | Body fat distribution, incident cardiovascular disease, cancer, and all-cause mortality[END_REF].
Synchronized acquisitions such as calcium scoring and coronary CT angiography are now well-established examinations in clinical practice with a large number of indications. Distinction of the pericardial layer is facilitated by the excellent spatial definition and by the high contrast between the chest, pericardium, EAT and heart. Synchronized images show fewer artifacts and provide a more precise quantification of fat volume, and should be considered the standard of reference for fat volume quantification using CT (174). Iodine injection is not required for fat quantification, and acquisitions such as calcium scoring can be used for this purpose [START_REF] Cheng | Pericardial fat burden on ECG-gated noncontrast CT in asymptomatic patients who subsequently experience adverse cardiovascular events[END_REF]. Technical progress over the past 10 years has dramatically decreased the radiation exposure of a standard acquisition, with doses of less than 1 mSv for calcium scoring and coronary CT. Nevertheless, radiation exposure still limits the broad use of CT for fat quantification. Recent studies suggested that epicardial fat quantification can be performed semi-automatically with good accuracy, reducing the time required for quantification to less than 2 minutes [START_REF] Cheng | Pericardial fat burden on ECG-gated noncontrast CT in asymptomatic patients who subsequently experience adverse cardiovascular events[END_REF]292).
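As a purely illustrative sketch of how such semi-automatic CT quantification can proceed, the Python fragment below counts fat-attenuation voxels inside a pericardial contour and converts the count to a volume. The array names, shapes and the Hounsfield-unit window used here (-190 to -30 HU, a commonly used fat attenuation range) are assumptions for illustration, not parameters taken from the studies cited above.

import numpy as np

# Assumed inputs (illustrative only):
#   hu_volume        - 3D CT volume in Hounsfield units, shape (slices, rows, cols)
#   pericardial_mask - boolean mask of the same shape, True inside the traced pericardium
#   voxel_volume_ml  - volume of a single voxel in millilitres
FAT_HU_MIN, FAT_HU_MAX = -190, -30  # assumed fat attenuation window

def epicardial_fat_volume_ct(hu_volume, pericardial_mask, voxel_volume_ml):
    """Count fat-attenuation voxels inside the pericardial contour and return a volume in mL."""
    fat_voxels = (hu_volume >= FAT_HU_MIN) & (hu_volume <= FAT_HU_MAX) & pericardial_mask
    return int(fat_voxels.sum()) * voxel_volume_ml

# Example with a synthetic volume and a box-shaped "pericardial" mask
hu = np.random.randint(-200, 200, size=(40, 256, 256))
mask = np.zeros_like(hu, dtype=bool)
mask[10:30, 80:180, 80:180] = True
print(epicardial_fat_volume_ct(hu, mask, voxel_volume_ml=0.002))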
Magnetic Resonance Imaging (MRI)
MRI offers excellent spatial resolution and is today considered the standard of reference for epicardial fat quantification (192). Furthermore, MRI is a valuable tool to assess other cardiac parameters such as function, myocardial fibrosis or intramyocardial fat content using proton spectroscopy [START_REF] Gaborit | Effects of bariatric surgery on cardiac ectopic fat: lesser decrease in epicardial fat compared to visceral fat loss and no change in myocardial triglyceride content[END_REF][START_REF] Gaborit | Assessment of epicardial fat volume and myocardial triglyceride content in severely obese subjects: relationship to metabolic profile, cardiac function and visceral fat[END_REF]. Fat tissues have a low T1 value and appear with a high signal on most sequences. Usually, cine steady-state free precession (SSFP) sequences are used to quantify fat volume. The contrast on SSFP images allows a precise distinction between paracardial and epicardial fat, and coverage of the whole ventricles is always performed in a standard cardiac MR acquisition (161). Recently, novel 3D Dixon acquisitions using cardiac synchronization and respiratory triggering have provided high accuracy and reproducibility for pericardial and epicardial fat quantification (118). MRI acquisition does not involve irradiation, making it the ideal imaging method for follow-up. Usually, the pericardium is well delineated on either the end-diastolic or the end-systolic phase (Figure 7). The areas obtained for each slice are summed and multiplied by the slice thickness to yield the epicardial fat volume. Consistency between measurements at two different time points requires the definition of anatomical landmarks and the use of the same imaging parameters [START_REF] Gaborit | Effects of bariatric surgery on cardiac ectopic fat: lesser decrease in epicardial fat compared to visceral fat loss and no change in myocardial triglyceride content[END_REF]. Recently, software providing automatic quantification of epicardial fat was described, with no difference compared to manual drawing and a significant time saving, but to date these tools are not broadly available [START_REF] Torrado-Carvajal | Automated quantification of epicardial adipose tissue in cardiac magnetic resonance imaging[END_REF].
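Following the slice-summation rule described above (per-slice contoured areas multiplied by the slice thickness), a minimal sketch of the volume computation is given below; the function name, example values and units are illustrative assumptions, not data from the cited studies.

def epicardial_fat_volume_mri(slice_areas_cm2, slice_thickness_cm):
    """Sum the contoured EAT area of each slice (cm^2) and multiply by the slice
    thickness (cm) to obtain the epicardial fat volume in cm^3 (i.e. mL)."""
    return sum(slice_areas_cm2) * slice_thickness_cm

# Example: EAT areas contoured on five contiguous short-axis slices, 8 mm thick
print(epicardial_fat_volume_mri([10.2, 12.5, 11.8, 9.4, 7.1], 0.8))  # ~40.8 mL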
What Should be Measured and How?
MRI is the only technique that has been validated in vivo in animal models (192,225). Mahajan et al. imaged 10 merino sheep at 1.5 T using cine steady-state free precession sequences in the short axis covering the whole heart. End-diastolic images were used to quantify ventricular, atrial and total pericardial fat. Correlations between MRI and autopsy were strong (ICC > 0.8), and inter-observer 95% limits of agreement were 7.2% for total pericardial adipose tissue (192). No study has validated CT against histologic quantification of adipose tissue, but based on current knowledge one can assume that results would be similar to those of MRI. MRI and CT are the two techniques that can quantify the total amount of epicardial, paracardial and pericardial fat. Nevertheless, MRI should be preferred, if possible, because it does not involve irradiation. Ultrasound is limited to fat thickness assessment in one region. A recent study including 311 patients validated TTE against CT with the use of a high-frequency linear probe (r=0.714, p<0.001) (116). By contrast, one recent paper found no correlation between epicardial fat thickness measured using TTE and epicardial fat volume measured using MRI (281). This could be explained by the wide anatomical variability of cardiac fat distribution [START_REF] Bastarrika | Relationship between coronary artery disease and epicardial adipose tissue quantification at cardiac CT: comparison between automatic volumetric measurement and manual bidimensional estimation[END_REF]. Nevertheless, localized epicardial fat thickness might be a measure of interest for assessing clinical risk. A recent paper showed that EAT thickness at the left atrioventricular groove, assessed on CT performed for calcium scoring, was the only parameter correlated with the number of vessels exhibiting ≥50% stenosis (338). Furthermore, some investigators found that epicardial fat thickness measured at the left atrioventricular groove was the best predictor of obstructive coronary artery disease (116,338). This finding was confirmed in a meta-analysis, but confirmation is needed in populations other than Asian ones (337).
EAT IN DISEASES
EAT and atrial fibrillation
Atrial fibrillation (AF) is caused by an interaction between an initiating trigger and the underlying atrial substrate, the latter being structural or electrical. AF is the most prevalent cardiac arrhythmia seen in clinical practice and is associated with increased morbidity and mortality, including stroke and heart failure (144,160,334). Previous studies have highlighted that obesity is an independent risk factor for new-onset AF (311, 327). In the general population, obesity increases the risk of developing AF by 49%, and the risk escalates in parallel with increasing BMI (326). Recently, there has been evolving evidence that EAT could be implicated in the pathogenesis of AF. Numerous studies have confirmed the association between EAT abundance and AF risk, severity, and recurrence after ablation or electrical cardioversion [START_REF] Chekakie | Pericardial fat is independently associated with human atrial fibrillation[END_REF][START_REF] Chao | Epicardial adipose tissue thickness and ablation outcome of atrial fibrillation[END_REF][START_REF] Cho | Impact of duration and dosage of statin treatment and epicardial fat thickness on the recurrence of atrial fibrillation after electrical cardioversion[END_REF]219,221,312,335). This has been particularly observed in patients with persistent compared to paroxysmal AF [START_REF] Chekakie | Pericardial fat is independently associated with human atrial fibrillation[END_REF][START_REF] Batal | Left atrial epicardial adiposity and atrial fibrillation[END_REF]280). This association was found to be independent of total adiposity and left atrial enlargement (3).
In the Framingham Heart cohort including 3217 participants, CT-measured pericardial fat (but not VAT) was an independent predictor of prevalent AF, even after adjusting for established AF risk factors (age, sex, systolic blood pressure, PR interval, clinically significant valvular disease) and other measures of adiposity such as BMI or intrathoracic fat volume (312).
Interestingly, several studies have shown that EAT surrounding the atria in particular was linked to AF recurrence after catheter ablation (219,221,318). But what are the mechanisms involved in this association between EAT and AF? Does EAT modulate the trigger (initiation) or the substrate (maintenance) of AF?
Direct mechanisms
Histologically, there are no fascial boundaries separating EAT from the myocardium. Hence, direct infiltration of adipocytes within the atrial myocardium is not rare, as we have observed in human atria (Figure 8). This could contribute to a remodeled atrial substrate and lead to conduction defects (conduction slowing or inhomogeneity) (112,335). In a diet-induced obese sheep model, Mahajan et al. showed major fatty infiltration of the atrial musculature (posterior left atrial wall) in obese sheep compared to controls (193). This sub-epicardial adipocyte infiltration interspersed between cardiac myocytes was associated with a reduction in posterior left atrial voltage and increased voltage heterogeneity in this region, suggesting that EAT could be a unique feature of the AF substrate (193). This EAT infiltration could promote loss of side-to-side cell connections and conduction abnormalities in a way similar to microfibrosis (291). In 30 patients in sinus rhythm, prior to an AF ablation procedure, left atrial EAT was associated with lower bipolar voltage and electrogram fractionation (350). In the Framingham Heart Study cohort, Friedman et al. showed that pericardial fat was significantly associated with several P-wave indices, such as P-wave duration, even after adjustment for visceral and intrathoracic fat [START_REF] Friedman | Pericardial fat is associated with atrial conduction: the Framingham Heart Study[END_REF]. P-wave indices (PWI) indeed represent a summation of the electrical vectors of atrial depolarization, reflecting the atrial activation sequence, and are also known as markers of atrial remodeling (249). Another small study, using a unique 3D merge process of dominant-frequency left atrial maps, identified EAT locations as corresponding to high-dominant-frequency sites during AF. High-dominant-frequency sites are key electrophysiological features reflecting microreentrant circuits or sites of focal firing that drive AF [START_REF] Atienza | Mechanisms of fractionated electrograms formation in the posterior left atrium during paroxysmal atrial fibrillation in humans[END_REF]302).
Therefore, the overlap between EAT locations and high-dominant-frequency sites implies that EAT is likely to harbor high-frequency sites, producing a favorable condition for the perpetuation of AF. In vitro incubation of isolated rabbit left atrial myocytes with EAT modulated the electrophysiological properties of the cells, leading to higher arrhythmogenesis in left atrial myocytes (178). Altogether, these data suggest a possible role of EAT in the electrophysiological substrate of AF.
Another important point is that EAT is the anatomical site of the intrinsic cardiac autonomic nervous system, namely the ganglionated plexi (GP) and interconnecting nerves, especially in the posterior wall around the pulmonary vein ostia (124). These ganglia are a critical element in the initiation and maintenance of AF [START_REF] Coumel | Paroxysmal atrial fibrillation: a disorder of autonomic tone?[END_REF]250). GP activation includes both parasympathetic and sympathetic stimulation of the atria/pulmonary veins adjacent to the GP.
Parasympathetic stimulation shortens the action potential duration, and sympathetic stimulation increases calcium loading and calcium release from the sarcoplasmic reticulum.
The combination of the shortened action potential duration and prolonged calcium release induces triggered firing resulting from delayed after-depolarizations of the atria/pulmonary veins, as manifested by the high-dominant-frequency sites. Pulmonary vein isolation and radiofrequency ablation target sites for substrate modification overlap most of the EAT sites (179,250,301). It has been suggested that EAT has a physiological role in protecting these ganglia against the mechanical forces generated by cardiac contraction (266). By contrast, recent clinical data showed that periatrial EAT is an independent predictor of AF recurrence after ablation (157,202,219,296), supporting a pro-arrhythmic influence of EAT.
Furthermore, since the electrical conductivity of fat is lower than that of atrial tissue, EAT volume may directly decrease the chances of procedural success (297).
Finally, a mechanical effect of EAT on left atrial pressure, stretch and wall stress, which are known to favor arrhythmias, cannot be excluded.
Indirect mechanisms
EAT is an endocrine organ and a source of pro-inflammatory cytokines (such as TNF-α, IL-1β, IL-6 and monocyte chemoattractant protein-1 (MCP-1)) and profibrotic factors (such as TGF-β and MMPs) acting in a paracrine way on the myocardium (111,115,206). These molecules are thought to diffuse into the pericardial sac and contribute to the structural remodeling of the atria. Indeed, using a unique organo-culture model, we showed that the human EAT secretome induced marked fibrosis of rat atrial myocardium and favored the differentiation of fibroblasts into myofibroblasts (322). This effect was mediated in part by activin A, a member of the TGF-β family, and was blocked by an anti-activin A antibody (322). Constitutive TGF-β1 overexpression in a transgenic mouse model produces increased atrial fibrosis and episodes of inducible AF while the ventricle remains normal (220,231). These data suggest that EAT could interfere with cardiac electrical activity and with the electrophysiological remodeling of the atria. Accordingly, we previously demonstrated, using a transcriptomic approach, that periatrial EAT has a unique signature, expressing genes implicated in cardiac muscle contraction and the intracellular calcium signaling pathway. Fibrosis is a central process in the alteration of the functional and structural properties of the atrial myocardium [START_REF] Burstein | Atrial fibrosis: mechanisms and clinical relevance in atrial fibrillation[END_REF]172). It causes interstitial expansion between bundles of myocytes. Dense and disorganized collagen weave fibrils physically separate cardiomyocytes and can create a barrier to impulse propagation (285,300). Other pro-fibrotic factors known to be secreted by EAT may also contribute to remodeling of the atrial myocardium. Matrix metalloproteinases (MMPs), key regulators of extracellular matrix turnover, are known to contribute to atrial fibrosis, are upregulated during AF, and are secreted at higher levels by EAT than by SAT [START_REF] Boixel | Fibrosis of the left atria during progression of heart failure is associated with increased matrix metalloproteinases in the rat[END_REF]322).
Local inflammatory pathways may also influence structural changes in the left atrium and the occurrence of AF. EAT secretes a myriad of pro-inflammatory cytokines such as IL-6, IL-8, IL-1β, TNF-α and MCP-1 that may have local effects on the adjacent atrial myocardium and may induce the migration of monocytes and immune cells (146,206). The pro-inflammatory activity of EAT adjacent to the left atrium, atrioventricular groove and left main artery, assessed with positron emission tomography (PET), was confirmed to be higher in AF than in non-AF patients (207).
EAT is also an important source of reactive oxygen species (ROS), with a high oxidative stress activity that could be involved in the genesis of AF (271). Ascorbate, an antioxidant and peroxynitrite decomposition catalyst, has been shown to decrease atrial pacing-induced peroxynitrite formation in dogs and the incidence of postoperative AF in humans [START_REF] Carnes | Ascorbate attenuates atrial pacing-induced peroxynitrite formation and electrical remodeling and decreases the incidence of postoperative atrial fibrillation[END_REF]. This points to a role of oxidative stress and of cytokines produced by EAT in atrial remodeling and arrhythmogenesis.
Taken together, these studies indicate that EAT, through mechanical, fibrotic, inflammatory and oxidative stress mechanisms, may exert an impact on the atrial substrate and on AF triggering (summarized in Figure 9). An improved understanding of how EAT modifies atrial electrophysiology and structure may yield novel approaches towards preventing AF in obesity.
EAT and cardiac geometry and function
EAT has local effects on the structure and function of the heart. Numerous clinical studies have unveiled the association between EAT volume and early defects in cardiac structure, volume and function [START_REF] Corradi | The ventricular epicardial fat is related to the myocardial mass in normal, ischemic and hypertrophic hearts[END_REF][START_REF] Dabbah | Epicardial fat, rather than pericardial fat, is independently associated with diastolic filling in subjects without apparent heart disease[END_REF][START_REF] Fontes-Carvalho | Influence of epicardial and visceral fat on left ventricular diastolic and systolic functions in patients after myocardial infarction[END_REF][START_REF] Gaborit | Assessment of epicardial fat volume and myocardial triglyceride content in severely obese subjects: relationship to metabolic profile, cardiac function and visceral fat[END_REF]123,128,131,143,177,328,333). An increased amount of EAT has been associated with increased left ventricular (LV) mass and with abnormal right ventricular geometry or subclinical dysfunction [START_REF] Gökdeniz | Relation of epicardial fat thickness to subclinical right ventricular dysfunction assessed by strain and strain rate imaging in subjects with metabolic syndrome: a twodimensional speckle tracking echocardiography study[END_REF]330). This is in accordance with initial necropsy and echocardiographic studies showing that the increase in LV mass is strongly related to EAT, irrespective of CAD or hypertrophy [START_REF] Corradi | The ventricular epicardial fat is related to the myocardial mass in normal, ischemic and hypertrophic hearts[END_REF]128,131). In a study of 208 non-CAD patients evaluated by [15O]H2O hybrid positron emission tomography (PET)/CT imaging, EAT volume was associated with LV mass independently of BMI [START_REF] Bakkum | The impact of obesity on the relationship between epicardial adipose tissue, left ventricular mass and coronary microvascular function[END_REF]. EAT thickness and EAT volume have also been associated with right ventricular and LV diastolic dysfunction, initially in severely obese patients and subsequently in various cohorts of subjects with impaired glucose tolerance and no apparent heart disease [START_REF] Dabbah | Epicardial fat, rather than pericardial fat, is independently associated with diastolic filling in subjects without apparent heart disease[END_REF][START_REF] Gaborit | Assessment of epicardial fat volume and myocardial triglyceride content in severely obese subjects: relationship to metabolic profile, cardiac function and visceral fat[END_REF]128,143,152,177,194,228,238,328). In 75 men with or without metabolic syndrome, the amount of EAT correlated negatively with all parameters of LV diastolic function (LV mass-to-volume ratio, end-diastolic, end-systolic, and indexed stroke volumes) and was an independent determinant of the LV early peak filling rate (228).
After myocardial infarction, EAT volume was also associated with LV diastolic function after adjustment for classical risk factors and other adiposity parameters [START_REF] Baker | Epicardial adipose tissue as a source of nuclear factor-kappaB and c-Jun N-terminal kinase mediated inflammation in patients with coronary artery disease[END_REF]. By contrast, other studies have reported that myocardial fat, but not EAT, was independently associated with cardiac output and work [START_REF] Gaborit | Assessment of epicardial fat volume and myocardial triglyceride content in severely obese subjects: relationship to metabolic profile, cardiac function and visceral fat[END_REF]134). Myocardial fat, which can be assessed by proton magnetic resonance spectroscopy (1H-MRS), refers to the storage of triglyceride droplets within cardiomyocytes, which can generate toxic lipid intermediates (i.e. ceramides), endoplasmic reticulum stress, mitochondrial dysfunction and lipotoxicity (209). In the physiologically aging male heart, myocardial triglyceride content increases in association with the decline in diastolic function and could thus be a potential confounding factor (133). Although these clinical studies do not infer causality, they point to a possible early impact of cardiac adiposity on LV remodeling and function.

More recently, using innovative methods such as speckle tracking echocardiography (STE) or cardiovascular magnetic resonance (CMR) displacement-encoded imaging, subtle changes in cardiac structure, contractile dysfunction and myocardial dyssynchrony were associated with EAT volume. Indeed, cardiac mechanics (strain, torsion, and synchrony of contraction) are more sensitive measures of heart function that may detect subtle abnormalities preceding clinical manifestations. Using CMR in 41 obese children, Jing et al. showed that, early in life, obese children develop contractile dysfunction with higher LV mass indexed to height compared to healthy-weight children (139). In this study, EAT was linked to LV mass and to peak longitudinal and circumferential strains, and was a better indicator of cardiac remodeling and dysfunction than BMI z-score or VAT (139). Another study found a persistent association between regional EAT and LV function beyond serum levels of adipokines, which is in favor of a local EAT effect rather than a systemic VAT effect (122).
Healthy men aged 19-94 years were evaluated using speckle tracking echocardiography to study the profile of the healthy aging heart. EAT was associated with longitudinal STE LV dyssynchrony, longitudinal strain, circumferential LV dyssynchrony, and LV twist [START_REF] Crendal | Increased myocardial dysfunction, dyssynchrony, and epicardial fat across the lifespan in healthy males[END_REF]. Furthermore, EAT and hepatic triglyceride content correlated negatively with peak circumferential systolic strain and diastolic strain rate in type 2 diabetes (174). However, this is not consistent with other studies reporting no link between EAT and geometric alterations or LV diastolic dysfunction [START_REF] Bonapace | Nonalcoholic fatty liver disease is associated with left ventricular diastolic dysfunction in patients with type 2 diabetes[END_REF]100,247,252). EAT has been associated with myocardial and hepatic steatosis, which are confounding factors (133,197). Whether EAT, VAT, hepatic fat or myocardial fat is the best predictor of LV function merits further evaluation, and large population studies assessing each ectopic fat depot are needed.
The impact of EAT on cardiac function is less evident at more advanced stages of disease.
Interestingly, reduced amounts of EAT were found in patients with congestive heart failure (HF) compared to patients with preserved systolic function [START_REF] Doesch | Epicardial adipose tissue in patients with heart failure[END_REF][START_REF] Doesch | Bioimpedance analysis parameters and epicardial adipose tissue assessed by cardiac magnetic resonance imaging in patients with heart failure[END_REF]132). Furthermore, EAT reduction was predictive of cardiac death in these patients [START_REF] Doesch | Bioimpedance analysis parameters and epicardial adipose tissue assessed by cardiac magnetic resonance imaging in patients with heart failure[END_REF]. A reduction of EAT volume with the severity of right ventricular systolic dysfunction was also demonstrated in patients with chronic obstructive pulmonary disease (145). EAT reduction might reflect a global fat mass reduction due to disease (124). Burgeiro et al. found reduced glucose uptake, lipid storage and inflammation-related gene expression in the EAT of patients with heart failure compared to SAT [START_REF] Burgeiro | Glucose uptake and lipid metabolism are impaired in epicardial adipose tissue from heart failure patients with or without diabetes[END_REF]. However, the triggering factors causing EAT diminution and phenotype modification in heart failure are still under investigation.
How can EAT participate in and initiate LV dysfunction? First, EAT could mechanically enhance LV afterload, which could lead to increased LV output and stroke volume to enable adequate myocardial perfusion. EAT may act as a local energy supplier and/or as a buffer against toxic levels of free fatty acids in the myocardium (198). EAT was found to have enhanced adrenergic activity, with increased catecholamine levels and expression of catecholamine biosynthetic enzymes, so that EAT could directly contribute to the sympathetic nervous system hyperactivity in the heart that accompanies and fosters myocardial sympathetic denervation. Indeed, Parisi et al. studied the relationship between EAT and sympathetic nerve activity assessed by 123I-metaiodobenzylguanidine (123I-MIBG) in patients with HF (237). They found that EAT thickness correlated with cardiac sympathetic denervation and represented an important source of norepinephrine, whose levels were 2-fold higher than those found in plasma. Because of the proximity of EAT to the myocardium, the increase in catecholamine content in this tissue could result in a negative feedback on cardiac sympathetic nerves, thus inducing a functional and anatomical denervation of the heart (237).
Alternatively, secretory products of EAT and an imbalance between anti-inflammatory and proinflammatory adipocytokines could participate in myocardial remodeling [START_REF] Gaborit | Epicardial fat: more than just an "epi" phenomenon?[END_REF]. The contribution of EAT to cardiac fibrosis, a substratum widely recognized to impair cardiac function, has recently been demonstrated (see also EAT and AF above) (322). EAT, through its capacity to produce and secrete adipo-fibrokines and miRNAs, could be a main mechanism contributing to the excess deposition of extracellular matrix proteins, which distorts organ architecture, induces pathological signaling and impairs the mechano-electric coupling of cardiomyocytes (163, 291). However, a concomitant study of heart fibrosis and EAT molecular characteristics has never been performed in humans. In vitro studies from the group of Eckel have demonstrated in both guinea pigs and humans that secreted factors from EAT can affect contractile function and insulin signaling in cardiomyocytes (103, 104). High-fat feeding of guinea pigs induces qualitative alterations in the secretory profile of EAT, which contributes to the induction of impaired rat cardiomyocyte function, as illustrated by impairments in insulin signaling, sarcomere shortening, cytosolic Ca2+ metabolism and SERCA2a expression (104). Rat cardiomyocytes treated with the secretome of EAT from diabetic patients showed reductions in sarcomere shortening, cytosolic Ca2+ fluxes, and expression of sarcoplasmic/endoplasmic reticulum ATPase 2a. This result suggests that EAT could contribute to the pathogenesis of cardiac dysfunction in type 2 diabetes, even though the development of cardiac dysfunction is likely to be multifactorial, with insulin resistance, myocardial fibrosis, endothelial dysfunction, autonomic dysfunction and myocyte damage probably implicated. The reciprocal crosstalk between EAT, myocardium and epicardium is even more complex than first suggested. Indeed, as described above in the paragraph on EAT origin, signals from necrotic cardiomyocytes could induce epicardium-to-fat transition, which may increase EAT volume, which may in turn modulate the evolution of heart disease.
Altogether, the available studies in humans do not imply causality but suggest that accumulation of EAT is at least an indirect marker of early cardiac dysfunction at selected stages of disease progression. Large cohorts extensively evaluating all ectopic fat depots and comprehensively characterizing cardiac geometry and function across the lifespan are needed.
EAT and coronary artery disease
Histological and radiological evidence
Although our understanding of the physiological role of EAT remains limited, a large number of studies published in recent years have underscored the strong association of EAT with the onset and development of coronary artery disease (CAD) in humans [START_REF] Chechi | Thermogenic potential and physiological relevance of human epicardial adipose tissue[END_REF][START_REF] Clement | Weight of Pericardial Fat on Coronaropathy[END_REF]234). Initially, a plausible role of EAT in CAD was supported by the histological observation that segments of coronary arteries running in a myocardial bridge (i.e. free of any immediately adjacent epicardial fat) tended to be free from atherosclerosis (135,260). Necropsy studies then demonstrated that EAT was increased in patients who died from CAD and correlated with CAD staging (284). Since then, and although correlation does not necessarily prove causation, a growing body of imaging studies using echocardiography (thickness), computed tomography (CT, reviewed elsewhere (293)) or magnetic resonance imaging (MRI) have confirmed the association of EAT with CAD [START_REF] Gorter | Relation of epicardial and pericoronary fat to coronary atherosclerosis and coronary artery calcium in patients undergoing coronary angiography[END_REF]101,105,156,190,212,264,305,324). Initial large population studies, including the Framingham Heart Study and the Multi-Ethnic Study of Atherosclerosis, identified pericardial fat as an independent predictor of cardiovascular risk [START_REF] Ding | The association of pericardial fat with incident coronary heart disease: the Multi-Ethnic Study of Atherosclerosis (MESA)[END_REF]191). Compared to the Framingham Risk Score, a pericardial fat volume >300 cm3 was by far the strongest predictor of coronary atherosclerosis (OR 4.1, 95% CI 3.63-4.33) (101).
Other studies highlighted the incremental predictive value of EAT compared to CAD scores such as the coronary artery calcium score (CAC) (113,138,173). EAT significantly correlated with the extent and severity of CAD, chest pain, unstable angina and coronary flow reserve (233, 269).
In addition, case-control studies identified pericardial fat volume as a strong predictor of myocardial ischemia (113,305). By contrast, some studies did not find such an association between EAT and the extent of CAD in intermediate- to high-risk patients, suggesting that the relationship is not constant at more advanced stages (263,306). Interestingly, in the positive studies linking EAT with CAD and with the development of high-risk obstructive plaques, the association was independent of adiposity measures, BMI and the presence of coronary calcifications (128,136). Recent studies indicated that EAT could also serve as a marker of the presence and severity of atherosclerosis burden in asymptomatic patients [START_REF] Bachar | Epicardial adipose tissue as a predictor of coronary artery disease in asymptomatic subjects[END_REF]346), with a threshold EAT thickness identified at 2.4 mm [START_REF] Bachar | Epicardial adipose tissue as a predictor of coronary artery disease in asymptomatic subjects[END_REF]. All these findings are highly suggestive of a role for EAT in promoting the early stages of atherosclerotic plaque formation. In highly selected healthy volunteers, we reported that a higher EAT volume was associated with a decreased coronary microvascular response, suggesting that EAT could participate in endothelial dysfunction [START_REF] Gaborit | Epicardial fat volume is associated with coronary microvascular response in healthy subjects: a pilot study[END_REF]. Using intravascular ultrasound, it could be demonstrated that plaques develop most frequently with a pericardial spatial orientation, suggesting a permissive role of EAT (251).
EAT and Clinical outcomes
More recently, the Heinz Nixdorf Recall study, including more than 4000 patients from the general population, confirmed the predictive role of EAT for clinical outcomes over 8 years (189). In this prospective trial, EAT volume significantly predicted fatal and nonfatal coronary events independently of cardiovascular risk factors and CAC score. Subjects in the highest EAT quartile had a 4-fold higher risk of coronary events compared with subjects in the lowest quartile (0.9 versus 4.7%, p<0.001, respectively). In addition, a doubling of EAT volume was associated with a 1.5-fold adjusted risk of coronary events [hazard ratio (HR), 1.54; 95% CI, 1.09-2.19] (189). A recent meta-analysis evaluating 411 CT studies confirmed EAT as a prognostic metric for future adverse clinical events (binary cut-off of 125 mL) (293). This cut-off needs to be evaluated further in prospective cohorts in order to discuss the relevance of its introduction into clinical care. To date, there is a lack of agreement on the EAT threshold value associated with increased CAD risk, as various methods are used for its assessment (see the Imaging paragraph). In conclusion, across these clinical studies EAT volume is a strong independent predictor of CAD. Nevertheless, whether a reduction in the amount of EAT could reduce CAD in humans remains to be established.
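To illustrate how a per-doubling hazard ratio of this kind can be rescaled to other fold-changes, the short Python sketch below applies the standard transformation for a Cox model in which log2(EAT volume) enters as a linear covariate; that parameterization is an assumption made here for illustration, not a detail reported by the study cited above.

import math

HR_PER_DOUBLING = 1.54  # adjusted hazard ratio reported for a doubling of EAT volume

def hazard_ratio_for_fold_change(fold_change, hr_per_doubling=HR_PER_DOUBLING):
    """Rescale a per-doubling hazard ratio to an arbitrary fold-change, assuming
    log2(EAT volume) was modeled as a linear covariate (illustrative assumption)."""
    return hr_per_doubling ** math.log2(fold_change)

print(round(hazard_ratio_for_fold_change(4), 2))   # ~2.37 for a four-fold increase
print(round(hazard_ratio_for_fold_change(1.5), 2)) # ~1.29 for a 50% increase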
Pathophysiology of EAT in CAD
The mechanisms by which EAT can cause atherosclerosis are complex and not completely understood. Epicardial fat might alter the coronary arteries through multiple pathways, including oxidative stress, endothelial dysfunction, vascular remodeling, macrophage activation, innate inflammatory response, and plaque destabilization (124, 243).
1/ EAT has a specific profile in coronary artery disease:
EAT in CAD displays a pro-inflammatory phenotype, high levels of ROS and a specific microRNA pattern. Epicardial adipocytes have intrinsic proinflammatory and atherogenic secretion profiles [START_REF] Baker | Epicardial adipose tissue as a source of nuclear factor-kappaB and c-Jun N-terminal kinase mediated inflammation in patients with coronary artery disease[END_REF][START_REF] Cheng | Adipocytokines and proinflammatory mediators from abdominal and epicardial adipose tissue in patients with coronary artery disease[END_REF]. In 2003, Mazurek et al. first reported that, in CAD patients, EAT exhibited significantly higher levels (gene expression and protein secretion) of chemokines such as monocyte chemotactic protein-1 (MCP-1) and of several inflammatory cytokines (IL-6, IL-1β, and TNF-α) than SAT (206). They also observed the presence of an inflammatory cell infiltrate, including macrophages, lymphocytes and mast cells, in EAT compared to SAT. The presence of these inflammatory mediators was hypothesized to accentuate vascular inflammation, plaque instability via apoptosis (TNF-α), and neovascularization (MCP-1).
Peri-adventitial application of endotoxin, MCP-1, IL-1β, or oxidized LDL induces inflammatory cell influx into the arterial wall, coronary vasospasm, or intimal lesions, which suggests that bioactive molecules from the pericoronary tissues may alter arterial homeostasis (279). These observations support the concept of "outside-to-inside" cellular cross-talk or "vasocrine/paracrine signaling", in which inflammatory mediators or free fatty acids produced by EAT adjacent to the coronary artery may have a locally toxic effect on the vasculature, diffusing passively or via the vasa vasorum through the arterial wall, as depicted in Figure 10 (38,266,348). Migration of immune cells between EAT and the adjacent adventitia may also occur (133). Nevertheless, direct proof that these mechanisms operate in vivo is lacking. Since then, other groups have confirmed that EAT is a veritable endocrine organ and a source of a myriad of bioactive, locally acting molecules (266). EAT content and release of adiponectin were consistently found to be decreased in CAD patients, suggesting that an imbalance between antiatherogenic, insulin-sensitizing and harmful adipocytokines secreted by EAT could initiate inflammation in the vascular wall [START_REF] Cheng | Adipocytokines and proinflammatory mediators from abdominal and epicardial adipose tissue in patients with coronary artery disease[END_REF]129,278). Innate immunity represents one of the potential pathways for proinflammatory cytokine release. Innate immunity can be activated via the toll-like receptors (TLRs), which recognize antigens such as lipopolysaccharide (LPS) (141). Activation of TLRs leads to the translocation of NFκB into the nucleus to initiate the transcription and release of IL-6, TNF-α, and resistin [START_REF] Creely | Lipopolysaccharide activates an innate immune system response in human adipose tissue in obesity and type 2 diabetes[END_REF]164). Remarkably, Baker et al. showed that NFκB was activated in the EAT of CAD patients [START_REF] Baker | Epicardial adipose tissue as a source of nuclear factor-kappaB and c-Jun N-terminal kinase mediated inflammation in patients with coronary artery disease[END_REF].
TLR-2, TLR-4 and TNF-α gene expression was higher in the EAT of CAD patients and was closely linked to the presence of activated macrophages in the EAT. In another study, EAT amount positively correlated with CD68+ and CD11c+ cell numbers and with NLRP3 inflammasome, IL-1β, and IL-1R expression. The NLRP3 inflammasome is a sensor in the nod-like receptor family of the innate immune cell system that activates caspase-1 and mediates the processing and release of IL-1β, and thereby has a central role in the inflammatory response [START_REF] Baroja-Mazo | The NLRP3 inflammasome is released as a particulate danger signal that amplifies the inflammatory response[END_REF]. Interestingly, the ratio of proinflammatory M1 macrophages to anti-inflammatory M2 macrophages in EAT was reported to be shifted toward the M1 phenotype in patients with CAD (115). More recently, Patel et al. nicely demonstrated the implication of the renin-angiotensin system in the inflammation of EAT (239). In a model of mice lacking angiotensin converting enzyme 2 (ACE2) submitted to a HFD, loss of ACE2 resulted in decreased weight gain but increased glucose intolerance and EAT inflammation. Ang 1-7 treatment ameliorated EAT inflammation and reduced cardiac steatosis, dysfunction and lipotoxicity (239). MicroRNAs could also be an important actor in this crosstalk between EAT and the coronary artery wall. Indeed, miRNAs are small, non-coding RNAs acting as post-transcriptional regulators of gene expression, either interfering with protein translation or reducing transcript levels (176). An integrative miRNA and whole-genome analysis of EAT identified the miRNA signature of EAT in CAD patients (320). The authors described that EAT in CAD displays affected metabolic pathways, with suppression of lipid- and retinoid-sensing nuclear receptor transcriptional activities, increased inflammatory infiltrates, activation of innate and adaptive immune responses, enhanced chemokine signalling (CCL5, CCL13, and CCL5R) and a decrease in miR-103-3p as prominent features (320).
Furthermore, higher levels of reactive oxygen species (ROS) and lower expression of antioxidant enzymes (such as catalase) have been observed in the EAT of individuals with CAD compared with SAT (Figure 10) (271). On the other hand, EAT might also contribute to the accumulation of oxidized lipids within atherosclerotic plaques, as we evidenced increased expression and secretion of secretory type II phospholipase A2 (sPLA2-IIa) in the EAT of CAD patients [START_REF] Dutour | Secretory Type II Phospholipase A2 Is Produced and Secreted by Epicardial Adipose Tissue and Overexpressed in Patients with Coronary Artery Disease[END_REF].
2/ EAT plays a pivotal role in the initiation of atherosclerosis
The negative impact of the EAT secretome on adjacent coronary arteries in CAD has been clearly demonstrated. In vitro studies revealed that EAT-secreted fatty acids, inflammatory and stress mediators, and migrated immune cells may induce endothelial dysfunction and vascular remodeling. EAT can affect the endothelium by inducing cell-surface expression of adhesion molecules such as VCAM-1, and it enhances the migration of monocytes to coronary artery endothelial cells (146). Moreover, it has been demonstrated that the permeability of endothelial cells in vitro was significantly increased after exposure to EAT supernatant from patients with acute coronary syndrome, and this effect was normalized by anti-resistin antiserum (167).
Payne et al. showed that perivascular EAT-derived leptin selectively impaired coronary endothelium-dependent dilation in Ossabaw swine with metabolic syndrome (242). Other in vitro studies support a role of perivascular adipose tissue in vascular remodeling (243).
Conditioned medium of cultured perivascular adipocytes from HFD rats was found to significantly stimulate vascular smooth muscle cell proliferation [START_REF] Barandier | Mature adipocytes and perivascular adipose tissue stimulate vascular smooth muscle cell proliferation: effects of aging and obesity[END_REF]. Other in vitro studies highlighted the role of peri-adventitial fat in neointimal formation after angioplasty (303,304). Finally, in a recent study involving Ossabaw miniature swine, selective surgical excision of the EAT surrounding the left anterior descending artery was shown to be associated with slower progression of coronary atherosclerosis over a period of 3 months on an atherogenic diet (210). Although this study was preliminary and lacked controls, these results support the hypothesis that EAT could locally contribute to the initiation of coronary atherosclerosis, and further suggest that targeting its reduction could reduce CAD progression.
To conclude, EAT is not simply a marker of CAD but seems to play a key role in the initiation of atherosclerosis, by secreting locally many bioactive molecules such as fatty acids, inflammatory, immune and stress factors, cytokines and chemokines. Current investigations aim to understand comprehensively how factors produced by EAT are able to cross the vessel wall, and what initiates or precedes the change in EAT phenotype. An imbalance between the protective and the deleterious factors secreted by EAT, and between pro- and anti-inflammatory immune cells, is likely to trigger CAD development. Despite all the described findings, the pathophysiological link between EAT and CAD needs to be elucidated further, and interventional studies are needed to investigate whether EAT reduction could reduce clinical outcomes.

EAT and obstructive sleep apnea

Obstructive sleep apnea (OSA) is a sleep disorder characterized by repetitive episodes of upper airway obstruction during sleep, resulting in decreased oxygen saturation, disruption of sleep, and daytime somnolence (71). Repetitive apneic events disrupt the normal physiologic interactions between sleep and the cardiovascular system (289, 314). Such sleep fragmentation and cyclic upper airway obstruction may result in hypercapnia and chronic intermittent hypoxemia, which have been linked to increased sympathetic activation, vascular endothelial dysfunction, increased oxidative stress, inflammation, decreased fibrinolytic activity, and metabolic dysregulation (62, 142, 149, 255). Hence, OSA could contribute to the initiation and progression of cardiac and vascular disease. Conclusive data implicate OSA in the development of hypertension, CAD, congestive heart failure, and cardiac arrhythmias (277, 290). We previously reported that EAT is sensitive to OSA status and that bariatric surgery had little effect on epicardial fat volume (EFV) loss in OSA patients (86). It is tempting to hypothesize that OSA-induced chronic intermittent hypoxia could modify the phenotypic features of EAT and may be an initiator of adipose tissue remodeling (fibrosis or inflammation). However, this has not yet been investigated in EAT.

Two recent studies have reported a relationship between epicardial fat thickness and OSA severity (184, 200). Mariani et al. reported a significant positive correlation between EFT and the apnea/hypopnea index (AHI), and EFT values were significantly higher in the moderate and severe OSA groups compared with the mild OSA group (200). A similar study was conducted by Lubrano et al. in 171 obese patients with and without metabolic syndrome, in which EFT, rather than BMI, was the best predictor of OSA (184). Treatment of OSA with continuous positive airway pressure (CPAP) during 24 weeks significantly reduced EFT in 28 symptomatic OSA patients with AHI > 15, without significant change in BMI or waist circumference (36). Shorter-term CPAP treatment (3 months) in 25 compliant OSA patients also reduced EFT (159), but in another study EAT remained higher in CPAP-treated obese OSA patients (n=19, mean BMI 38 ± 4 kg/m2) compared to age-matched healthy subjects (n=12), and CPAP was not sufficient to alleviate left ventricular concentric hypertrophy, as assessed by the mass-cavity ratio, the latter being independently correlated with EAT [START_REF] Barone-Rochette | Left ventricular remodeling and epicardial fat volume in obese patients with severe obstructive sleep apnea treated by continuous positive airway pressure[END_REF]. These data are consistent with previous studies supporting a negative role of EAT on cardiac function [START_REF] Cavalcante | Association of epicardial fat, hypertension, subclinical coronary artery disease, and metabolic syndrome with left ventricular diastolic dysfunction[END_REF][START_REF] Dabbah | Epicardial fat, rather than pericardial fat, is independently associated with diastolic filling in subjects without apparent heart disease[END_REF][START_REF] Fontes-Carvalho | Influence of epicardial and visceral fat on left ventricular diastolic and systolic functions in patients after myocardial infarction[END_REF](128,130,143,174,238).
The prognostic impact of EAT reduction by CPAP therapy on cardiovascular outcomes needs to be further explored in large prospective studies. In all, EAT is increased in OSA patients and correlates with OSA severity, and CPAP therapy can significantly reduce the amount of EAT. Further large prospective studies are needed to evaluate the effect of CPAP therapy on EAT quantity, phenotype, and secretome.
Conclusion and perspectives
To conclude, the unique anatomical location of epicardial adipose tissue likely translates into a unique physiological relevance and pathophysiological role for this cardiac ectopic depot. Far from being an inert and uniform tissue, EAT has been shown to be a dynamic organ with highly developed functions and a unique transcriptome, which are determined by its developmental epicardial origin, its regenerative potential, and its molecular structure. It was poorly studied for a long time because of the small amount of EAT found in rodents and because of the difficulties researchers face in obtaining biological samples, which requires open cardiac surgery. Since then, imaging studies have provided new non-invasive tools for EAT quantification, and recent studies have paved the way for identifying new cellular characteristics of EAT by measuring its radiodensity [START_REF] Baba | CT Hounsfield units of brown adipose tissue increase with activation: preclinical and clinical studies[END_REF][START_REF] Franssens | Relation between cardiovascular disease risk factors and epicardial adipose tissue density on cardiac computed tomography in patients at high risk of cardiovascular events[END_REF][START_REF] Gaborit | Looking beyond ectopic fat amount: A SMART method to quantify epicardial adipose tissue density[END_REF].
In addition, an increase in epicardial fat results in an increased propensity not only for the onset but also for the progression and severity of CAD and atrial fibrillation in humans. Many intervention studies have shown that EAT is flexible and modifiable, with weight loss induced by diet, GLP-1 receptor agonists or bariatric surgery [START_REF] Dutour | Exenatide decreases Liver fat content and Epicardial Adipose Tissue in Patients with obesity and Type 2 Diabetes: A prospective randomised clinical trial using Magnetic Resonance Imaging and Spectroscopy[END_REF]254). The type of intervention, in addition to the amount of weight loss achieved, is predictive of the amount of EAT reduction. Hence, this depot represents a therapeutic target for the management of CAD and should be further assessed to identify CAD risk. Whether its reduction will lead to a reduction in cardiac events or cardiac rhythm disorders needs to be addressed in randomized controlled studies. The effect of EAT on cardiac autonomic nerves and the cardiac conduction system also needs to be further explored.
Furthermore, EAT has a beige profile that decreases with age and CAD. In support of this hypothesis is evidence of brown-to-white adipocyte trans-differentiation in CAD patients, with a decrease in thermogenic genes and an up-regulation of white adipogenesis [START_REF] Aldiss | Browning" the cardiac and peri-vascular adipose tissues to modulate cardiovascular risk[END_REF][START_REF] Dozio | Increased reactive oxygen species production in epicardial adipose tissues from coronary artery disease patients is associated with brown-to-white adipocyte trans-differentiation[END_REF]. The thermogenic potential of EAT may represent a useful beneficial property and another unique target for therapeutic interventions. This is an attractive avenue of research, as the understanding of EAT browning and of the factors able to induce the browning of fat is mounting daily. Further experimental research is hence warranted to enhance our understanding of the thermogenic and whole-body energy expenditure potential of EAT, as well as its potential flexibility with lifestyle, medical or surgical treatments.

Finally, additional research on, and understanding of, adipose tissue biology in general and of the mechanisms responsible for ectopic fat formation are needed in the future. Whether epicardium-to-fat-transition reactivation exists in humans, whether unhealthy subcutaneous adipose tissue could trigger the release of adipose progenitors such as adipose-derived stem/stromal cells into the circulation, and whether these adipogenic cells could reach the heart and give rise to new adipocyte development in EAT are fascinating areas of interest for the coming years.

Figure 9. This figure summarizes the possible mechanisms that could link EAT with atrial fibrillation. EAT expansion-induced mechanical stress, direct adipocyte infiltration within the atrial myocardium, inflammation, oxidative stress, and EAT-produced adipofibrokines are thought to participate in the structural and electrical remodeling of the atria and in cardiac autonomic nervous system activation, hence promoting arrhythmogenesis.
Figure 10. This figure illustrates a transversal and longitudinal view of EAT surrounding a coronary artery. As there is no fascia separating EAT from the vessel wall, free fatty acids or proinflammatory cytokines produced by EAT could diffuse passively or via the vasa vasorum through the arterial wall and participate in the early stages of atherosclerotic plaque formation (endothelial dysfunction, ROS production, oxidized LDL uptake, monocyte transmigration, smooth muscle cell proliferation, transformation of macrophages into foam cells). An imbalance between antiatherogenic and harmful adipocytokines secreted by EAT could initiate inflammation in the intima. Innate immunity can be activated via the toll-like receptors (TLRs), which recognize antigens such as lipopolysaccharide (LPS). Activation of TLRs leads to the translocation of NFκB into the adipocyte nucleus to initiate the transcription and release of proinflammatory molecules such as IL-6, TNF-α, and resistin. The NLRP3 inflammasome is a sensor in the nod-like receptor family of the innate immune cell system that activates caspase-1 and mediates the processing and release of IL-1β by the adipocyte, and thereby has a central role in the EAT-induced inflammatory response.
Figure 1. Layers of the heart and pericardium. Teaching points: a variety of terms including "epicardial", "pericardial", "paracardial" and "intra-thoracic" have been used in the literature to describe ectopic fat depots in proximity to the heart or within the mediastinum. The use of these terms appears to be a point of confusion, as there is varied use of definitions. Of particular confusion is the term used to define the adipose tissue located within the pericardial sac, between the myocardium and the visceral pericardium. This has previously been described in the literature as "pericardial fat", while other groups have referred to it as "epicardial fat". As illustrated in Figure 1, the most accurate term for the adipose tissue fully enclosed in the pericardial sac that directly surrounds the myocardium and coronary arteries is EAT. Pericardial fat (PeriF) refers to paracardial fat (ParaF) plus all adipose tissue located internal to the parietal pericardium: PeriF = ParaF + EAT.
Figure 2. Epicardial adipose tissue among species: anterior and posterior heart photographs. This figure illustrates the relative amount of epicardial adipose tissue among species. Humans and swine have much more EAT than rodents.
Figure 3. The origin of epicardial adipose tissue. Epicardial adipocytes have a mesothelial origin and derive mainly from the epicardium. Cells originating from the Wt1+ (Wilms' tumor gene Wt1) mesothelial lineage can differentiate into EAT, and this epicardium-to-fat transition (ETFT) fate can be reactivated after myocardial infarction.
Figure 4. Main factors leading to ectopic fat deposition in humans (FFA: free fatty acids). In an obesogenic environment and chronic positive energy balance, the ability of subcutaneous adipose tissue (SAT) to expand and to store the excess free fatty acids is crucial in preventing the accumulation of fat in ectopic sites and the development of obesity complications. Healthy SAT and gynoid obesity are associated with a protective phenotype with less ectopic fat and metabolically healthy obesity, while dysfunctional SAT and android obesity are associated with more visceral fat and ectopic fat accumulation, with an increased risk of type 2 diabetes, metabolic syndrome and coronary artery disease (CAD). Inflammation or profibrotic processes, hypoxia, and aging could also contribute to ectopic fat development. Mobilization and release of adipose progenitors (adipose-derived stem/stromal cells, ASCs) into the circulation and their further infiltration into non-adipose tissues, leading to ectopic adipocyte formation, also cannot be excluded.
Figures 5 to 7. These figures illustrate imaging techniques for EAT quantification (Figure 7: MR short-axis cine sequences at the diastolic phase, panel A, with contouring of the heart). MRI remains the standard reference for adipose tissue quantification. The major advantage of this technique is its excellent spatial resolution and the possible distinction between paracardial and epicardial fat. The major limitation of echocardiography is its 2D approach (thickness measurement). The major limitation of computed tomography remains its radiation exposure.
Figure 8. Microscopic images of human atrial epicardial adipose tissue and myocardium. One can observe fatty infiltration of the myocardium with EAT, i.e. direct adipocyte infiltration into the underlying atrial myocardium, associated with fibrosis. Such direct adipocyte infiltration separating myocytes is thought to induce a remodeled atrial substrate and lead to conduction defects (conduction slowing or inhomogeneity).
of atherosclerosis, by secreting locally many bioactive molecules such as fatty acids, inflammatory, immune, and stress factors, cytokines or chemokines. Current investigations are done to comprehensively understand how factors produced by EAT are able to cross the vessel wall, and to what initiate or precede the change in EAT phenotype. An imbalance between the protective and the deleterious factors secreted by EAT, and between the pro and anti-inflammatory immune cells is likely to trigger CAD development. Despite all the described findings, the pathophysiological link between EAT and CAD needs to be elucidated further, and we really need interventional studies to investigate whether EAT reduction could reduce clinical outcomes.
pressure (CPAP) during 24 weeks significantly reduced EFT in 28 symptomatic OSA patients
with AHI > 15, without significant change in BMI or waist circumference (36). Shorter-term
of CPAP treatment (3 months) in 25 compliant OSA patients also reduced EFT (159), but in
EAT and obstructive sleep apnea another study EAT remained higher in CPAP treated OSA obese patients (n=19, mean BMI
Obstructive sleep apnea (OSA) is a sleep disorder characterized by repetitive episodes of 38 ± 4 kg/m 2 ) compared to age-matched healthy subjects (n=12), and CPAP was not
upper airway obstruction during sleep, resulting in decreased oxygen saturation, disruption of sufficient to alleviate left ventricular concentric hypertrophy, as assessed by mass-cavity ratio,
sleep, and daytime somnolence (71). Repetitive apneic events disrupt the normal physiologic the latter being independently correlated with EAT
interactions between sleep and the cardiovascular system (289, 314). Such sleep
fragmentation and cyclic upper airway obstruction may result in hypercapnia, chronic
intermittent hypoxemia that have been linked to increased sympathetic activation, vascular
endothelial dysfunction, increased oxidative stress, inflammation, decreased fibrinoloytic
activity, and metabolic dysregulation (62, 142, 149, 255). Hence OSA could contribute to the
initiation and progression of cardiac and vascular disease. Conclusive data implicate OSA in
the development of hypertension, CAD, congestive heart failure, and cardiac arrhythmias
(277, 290). We previously reported that EAT is sensitive to OSA status and that bariatric
surgery had little effect on epicardial fat volume (EFV) loss in OSA patients (86). It is
tempting to hypothesize that OSA-induced chronic intermittent hypoxia could modify the
phenotypic features of EAT and may be an initiator of adipose tissue remodeling (fibrosis or
inflammation). However, this has never been investigated in EAT yet.
Two recent studies have reported a relationship between epicardial fat thickness and OSA
severity (184, 200). Mariani et al. reported a significant positive correlation between EFT and
apnea/hypopnea index (AHI), and EFT values were significantly higher in the moderate and severe
OSA groups compared to the mild OSA group (200). A similar study was conducted by Lubrano
et al. in 171 obese patients with and without metabolic syndrome, in which EFT rather than
BMI was the best predictor of OSA (184). Treatment of OSA with continuous positive airway pressure (CPAP) during 24 weeks significantly reduced EFT in 28 symptomatic OSA patients with AHI > 15, without significant change in BMI or waist circumference (36). Shorter-term CPAP treatment (3 months) in 25 compliant OSA patients also reduced EFT (159), but in another study EAT remained higher in CPAP-treated OSA obese patients (n=19, mean BMI 38 ± 4 kg/m2) compared to age-matched healthy subjects (n=12), and CPAP was not sufficient to alleviate left ventricular concentric hypertrophy, as assessed by mass-cavity ratio, the latter being independently correlated with EAT.
The thermogenic potential of EAT may represent a useful beneficial property, and another unique target for therapeutic interventions. This is an attractive avenue of research, as the understanding of EAT browning and of the factors able to induce the browning of fat is growing daily. Further experimental research is hence warranted to enhance our understanding of the thermogenic and whole-body energy expenditure potential of EAT, as well as its potential flexibility with lifestyle, medical or surgical treatments.
Finally, additional research and understanding of adipose tissue biology in general, and of the mechanisms responsible for ectopic fat formation, are needed in the future. Whether epicardium-to-fat-transition reactivation exists in humans, whether unhealthy subcutaneous adipose tissue could trigger the release of adipose progenitors such as adipose-derived stem/stromal cells into the circulation, and whether these adipogenic cells could reach the heart and give rise to new adipocyte development in EAT is a fascinating area of interest for the coming years.
Tables
Table 1. Main anatomical and physiological properties of EAT
Localization: Between the myocardium and the visceral layer of the pericardium
Anatomical and functional proximity: Myocardium, coronary arteries, nerves and ganglionated plexi
Origin: Epicardium
Blood supply: Branches of the coronary arteries
Color: White and beige
Cells: Small adipocytes; mixed cellularity with stromal preadipocytes, fibroblasts, macrophages, mast cells, lymphocytes (immune cells)
Metabolism: High lipogenesis and lipolysis; thermogenesis
Secretome: Source of a myriad of adipocytokines, chemokines, growth factors, FFA
Way of action: Mainly local: paracrine and vasocrine
Transcriptome: Extracellular matrix remodeling, inflammation, immune signaling, coagulation, thrombosis, beiging and apoptosis enriched pathways
Protective actions: Arterial pulse wave, vasomotion; thermogenic potential; autonomic nervous system; immune defence; regeneration potential (epicardial-to-fat-transition)
Table 2. Human EAT bioactive molecules
Biomarker | Expression | Pathological state | References
α1-glycoprotein | mRNA | CAD | Fain et al., 2010
Chemerin | protein, mRNA | CAD | Spiroglou et al., 2010
CRP | secretion | CAD | Baker et al., 2006
Haptoglobin | mRNA | CAD | Fain et al., 2010
Proinflammatory cytokines:
sICAM-1 | mRNA | CAD | Karastergiou et al., 2010
IL-1β | protein, mRNA, secretion | CAD | Mazurek et al., 2003
IL-1Rα | secretion | CAD, obesity | Karastergiou et al., 2010
IL-6 | protein, mRNA, secretion | CAD | Mazurek et al., 2003; Kremen et al., 2006
Acknowledgements
We are grateful to Michel Grino, Marc Barthet, Marie Dominique Piercecchi-Marti, and Franck Thuny for their help in collecting rat, swine, and human pictures.
Cross references
Ectopic lipid and inflammatory mechanisms of insulin resistance | 121,427 | [
"862653",
"764752"
] | [
"180118",
"518989",
"180118",
"480402",
"180118"
] |
01677497 | en | [
"chim"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01677497/file/Michau_19836.pdf | A Michau
F Maury
F Schuster
R Boichot
M Pons
E Monsifrot
email: [email protected]
Chromium carbide growth at low temperature by a highly efficient DLI-MOCVD process in effluent recycling mode
Keywords:
MOCVD process Bis(arene)chromium
The effect of direct recycling of effluents on the quality of Cr x C y coatings grown by MOCVD using direct liquid injection (DLI) of bis(ethylbenzene)chromium(0) in toluene was investigated. The results are compared with those obtained using non-recycled solutions of precursor. Both types of coatings exhibit the same features. They are amorphous in the temperature range 673-823 K. They exhibit a dense and glassy-like microstructure and a high hardness (> 23 GPa). Analyses at the nanoscale revealed a nanocomposite microstructure consisting of free-C domains embedded in an amorphous Cr 7 C 3 matrix characterized by strong interfaces and leading to an overall composition slightly higher than Cr 7 C 3 . The stiffness and strength of these interfaces are mainly due to at least two types of chemical bonds between Cr atoms and free-C: (i) Cr intercalation between graphene sheets and (ii) hexahapto η 6 -Cr bonding on the external graphene sheets of the free-C domains. The density of these interactions was found increasing by decreasing the concentration of the injected solution, as this occurred using a recycled solution. As a result, "recycled" coatings exhibit a higher nanohardness (29 GPa) than "new" coatings (23 GPa). This work demonstrates that using bis(arene)M(0) precursors, direct recycling of effluents is an efficient route to improve the conversion yield of DLI-MOCVD process making it cost-effective and competitive to produce protective carbide coatings of transition metals which share the same metal zero chemistry.
Introduction
For a better control of production cost of manufactured objects that comprise CVD coatings, the economic performance of deposition processes is an important need. The increasing use of metalorganic precursors is a way to reduce the cost of large-scale CVD process because this greatly lowers the deposition temperatures leading to substantial energy savings. This is evidenced for instance by the growth of metallic Cr at 673 K by DLI-MOCVD [START_REF] Michau | Evidence for a Cr metastable phase as a tracer in DLI-MOCVD chromium hard coatings usable in high temperature environment[END_REF] in comparison with the industrial chromizing method of pack cementation which operates at about 1273 K.
A way to reduce the cost of CVD products is to repair the coating or to recycle the substrate. Indeed, though the coating and the substrate generally form strong and inseparable pairs there are examples where the substrate can be separated and recycled in CVD process to reduce the production cost. For instance in diamond coated cutting tools the worn coating was removed to apply a new one by the same CVD process [START_REF] Liu | Recycling technique for CVD diamond coated cutting tools[END_REF] and, in the graphene CVD synthesis the Cu substrate used as catalyst was recycled after delamination because it is an expensive substrate [START_REF] Wang | Electrochemical delamination of CVD-grown graphene film: toward the recyclable use of copper catalyst[END_REF].
Another way to improve economic performance of CVD is to implement the recycling of effluents. Recycling in CVD processes is only mentioned in a basic book on the technique although it is important for applications [START_REF] Rees | Introduction[END_REF]. When expensive molecular precursors are used, as for deposition of precious metals, the by-products are collected at the exit of the CVD reactor then leading recyclers and traders develop in parallel complex effluent treatments either to refine and reuse the collected precursor or to transform by-products and reuse pure metal [START_REF]Recycling International, Japanese recycling process for ruthenium precursors[END_REF]. This approach is also applied in high volume CVD production facilities. For instance a hydrogen recycle system was proposed recently for CVD of poly-Si [START_REF] Revankar | CVD-Siemens reactor process hydrogen recycle system[END_REF]; in this case it is the carrier gas which is recycled. Also in the growth of Si for solar cells the exhaust gases (H 2 , HCl, chlorosilanes) were collected, separated and recycled [START_REF]Poly plant project, off-gas recovery & recycling[END_REF]. Generally these strategies reduce the production cost but they did not act directly on the CVD process itself since the precursor is not directly recycled in a loop.
One of the advantages of CVD processes is the deposition of uniform coatings on 3D components with a high conformal coverage. This is achieved when the process operates in the chemical kinetic regime, i.e. at low pressure and low temperature. However, under these particular conditions the conversion efficiency of reactants is low (typically < 30%). Consequently, to develop large-scale CVD processes using expensive reactants, recycling of the precursor becomes necessary to achieve a high conversion yield. For instance, in the CVD production of boron fibers the selective condensation of unconverted BCl 3 is reused directly in the growth process [START_REF] Rees | Introduction[END_REF]. Also the gas mixture CH 4 /H 2 recycling for diamond growth was reported [START_REF] Lu | Economical deposition of a large area of high quality diamond film by a high power DC arc plasma jet operating in a gas recycling mode[END_REF] and a closed gas recycling CVD process has been proposed for solar grade Si [START_REF] Noda | Closed recycle CVD process for mass production of SOG-Si from MG-Si[END_REF]. Furthermore it was shown that a recycle loop is very useful for the management of the axial coating thickness uniformity of poly-Si in a horizontal low pressure CVD reactor [START_REF] Collingham | Effect of recycling on the axial distribution of coating thickness in a low pressure CVD reactor[END_REF], in agreement with the fact that the regime of the reactor is close to a Continuous Stirred Tank Reactor, as previously demonstrated [START_REF] Jensen | Modeling and analysis of low pressure CVD reactors[END_REF]. In these few examples the precursor is a hydride or a halide. Metalorganic precursors have become very important CVD sources thanks to the diversity of their molecular structures, which allows controlling their chemical, physical and thermal properties. This allows satisfying the stringent requirements for the CVD process, e.g. low deposition temperature, high quality of the coatings, etc. Direct recycling of effluent using metalorganic precursors was not reported because the growth occurs at lower temperature than in hydride and halide chemistry and, in this condition, the quality of the layer strongly depends on the metal source, which motivates many studies on molecular precursors [START_REF] Rees | Introduction[END_REF][START_REF] Maury | Selection of metalorganic precursors for MOCVD of metallurgical coatings: application to Cr-based coatings[END_REF][START_REF] Jones | CVD of Compound Semiconductors[END_REF][START_REF] Kodas | The Chemistry of Metal CVD[END_REF]. Furthermore, these compounds generally undergo complex decomposition mechanisms producing many unstable metal-containing by-products. Kinetics plays a major role and the growth occurs far from thermodynamic equilibrium. Examples of the complexity of the decomposition pathways of Cr precursors are reported in [START_REF] Rees | Introduction[END_REF][START_REF] Kodas | The Chemistry of Metal CVD[END_REF][START_REF] Maury | Evaluation of tetra-alkylchromium precursors for OMCVD: Ifilms grown using Cr[CH 2 C(CH 3 ) 3 ] 4[END_REF]. The bis(arene)M(0) precursors, where M is a transition metal of columns 5 and 6 in the oxidation state zero, are an important family of CVD precursors for low temperature deposition of carbides, nitrides and even metal coatings. This is supported by several works using these precursors for carbides of V [START_REF] Abisset | Low temperature MOCVD of V-C-N coatings using bis(arene) vanadium as precursors[END_REF], Nb [17], Ta [17], Cr [START_REF] Anantha | Chromium deposition from dicumene-chromium to form metal-semiconductor devices[END_REF][START_REF] Maury | Structural characterization of chromium carbide coatings deposited at low temperature by LPCVD process using dicumene chromium[END_REF][START_REF] Schuster | Influence of organochromium precursor chemistry on the microstructure of MOCVD chromium carbide coatings[END_REF][START_REF] Polikarpov | Chromium films obtained by pyrolysis of chromium bisarene complexes in the presence of chlorinated hydrocarbons[END_REF], Mo [START_REF] Whaley | Carbonaceous solid bodies and processes for their manufacture[END_REF] and W [START_REF] Whaley | Carbonaceous solid bodies and processes for their manufacture[END_REF], nitrides of V [START_REF] Abisset | Low temperature MOCVD of V-C-N coatings using bis(arene) vanadium as precursors[END_REF] and Cr [START_REF] Schuster | Characterization of chromium nitride and carbonitride coatings deposited at low temperature by OMCVD[END_REF] and metal V [START_REF] Abisset | Low temperature MOCVD of V-C-N coatings using bis(arene) vanadium as precursors[END_REF] and Cr [START_REF] Maury | Low temperature MOCVD routes to chromium metal thin films using bis(benzene)chromium[END_REF][START_REF] Luzin | Chromium films produced by pyrolysis of its bis-arene complexes in the presence of sulfur-containing additives[END_REF], as well as nanostructured multilayer Cr-based coatings [START_REF] Maury | Multilayer chromium based coatings grown by atmospheric pressure direct liquid injection CVD[END_REF].
Chromium carbides are of great interest as tribological coatings for the protection of steel and metallic alloy components owing to their good resistance to corrosion and wear and their high hardness and melting point. They are used in many fields such as transports (automobile, shipping, aeronautic), mechanical and chemical industries and tools [START_REF] Drozda | Tool and manufacturing engineers handbook[END_REF][START_REF] Bryskin | Innovative processing technology of chromium carbide coating to apprise performance of piston rings[END_REF].
Our greater knowledge of the growth mechanisms of Cr-based coatings [START_REF] Maury | Low temperature MOCVD routes to chromium metal thin films using bis(benzene)chromium[END_REF][START_REF] Vahlas | A thermodynamic approach to the chemical vapor deposition of chromium and of chromium carbides starting from Cr(C 6 H 6 ) 2[END_REF], thermodynamic modeling without [START_REF] Vahlas | A thermodynamic approach to the chemical vapor deposition of chromium and of chromium carbides starting from Cr(C 6 H 6 ) 2[END_REF] and with direct liquid injection (DLI) to feed the reactor [START_REF] Douard | Thermodynamic simulation of Atmospheric DLI-CVD processes for the growth of chromium based hard coatings using bis(benzene) chromium as molecular source[END_REF], the determination of a kinetic model and the simulation of the CVD process [START_REF] Michau | Chromium Carbide Growth by Direct Liquid Injection Chemical Vapor Deposition in Long and Narrow Tubes, Experiments, Modeling and Simulation[END_REF] led us to study the effect of direct recycling of effluents on the quality of chromium carbide (Cr x C y ) coatings grown by DLICVD using bis(ethylbenzene)chromium(0) as representative of this family. The results are compared with those obtained using a non-recycled solution of precursor. Both types of coatings exhibit the same features (composition, structure, hardness), demonstrating that using this specific chemical system, direct recycling of effluents is an efficient route to improve the conversion yield of DLI-MOCVD process making it very competitive to develop industrial applications. The barely significant difference of hardness is commented and selection criteria for molecular precursors are also discussed so that they can be implemented in CVD processes with recycling of effluent.
Experimental
Deposition process
The growth was carried out at low temperature by direct liquid injection of metalorganic precursors in a CVD reactor (namely DLI-MOCVD process). It is a horizontal, hot-wall, Pyrex tubular reactor (300 mm long and 24 mm in internal diameter) with an isothermal zone around 150 mm. Stainless steel (304 L) plates and Si(100) wafers passivated by an amorphous SiN x thin layer acting as a barrier were used as substrates. They were placed on a planar horizontal sample-holder in the isothermal zone. More details are reported elsewhere [START_REF] Michau | Chromium Carbide Growth by Direct Liquid Injection Chemical Vapor Deposition in Long and Narrow Tubes, Experiments, Modeling and Simulation[END_REF]. The total pressure was automatically monitored and kept constant at 6.7 kPa and deposition temperature was set at 723 K.
Commercial bis(ethylbenzene)chromium (BEBC) from Strem (CAS 12212-68-9) was used as chromium precursor. It is in fact a viscous liquid mixture of several bis(arene)chromium compounds with the general formula [(C 2 H 5 ) x C 6 H 6-x ] 2 Cr where x = 0-4 and BEBC is the major constituent. A solution in anhydrous toluene (99.8%) from Sigma-Aldrich (CAS 108-88-3) was prepared under inert atmosphere with a concentration of 3 × 10 -1 mol•L -1 (4 g of BEBC in 50 mL of toluene). This precursor solution was injected in a flash vaporization chamber heated at 473 K using a Kemstream pulsed injector device. A liquid flow rate of 1 mL•min -1 was set by adjusting the injection parameters in the ranges: frequency 1-10 Hz and opening time 0.5-5 ms. Nitrogen was used as carrier gas with a 500 sccm mass flow rate and was heated at approximately 453 K before entering the flash vaporization chamber to prevent condensation.
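For orientation, the injection settings can be translated into a per-pulse dose. The sketch below is illustrative only: the pairing of frequency and opening time, and the assumption that the 1 mL/min liquid flow is shared evenly among pulses, are simplifications, since the exact calibration of the injector is not given here.

```python
# Rough per-pulse injected volume implied by the DLI settings (illustrative assumptions):
# the 1 mL/min liquid flow rate is assumed evenly distributed over the injector pulses.
LIQUID_FLOW_ML_PER_MIN = 1.0
BEBC_MOL_PER_L = 0.3  # concentration of the injected solution

for freq_hz in (1.0, 5.0, 10.0):
    volume_per_pulse_ul = LIQUID_FLOW_ML_PER_MIN * 1000.0 / (freq_hz * 60.0)
    moles_per_pulse = BEBC_MOL_PER_L * volume_per_pulse_ul * 1e-6
    print(f"{freq_hz:4.1f} Hz -> {volume_per_pulse_ul:5.1f} µL/pulse, "
          f"~{moles_per_pulse:.1e} mol BEBC/pulse")
# i.e. roughly 1.7-17 µL and 5e-7 to 5e-6 mol of BEBC per pulse over the quoted range.
```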
In this paper, "new" coatings refer to coatings elaborated using a freshly prepared liquid solution of as-received precursor and solvent, while "recycled" coatings concern coatings deposited using directly a recycled liquid solution of precursor, by-products and solvent. The same experimental parameters were used for new and recycled coatings: temperatures, pressure, injection parameters, carrier gas flow rate (the deposition time was about 1 h to inject about 160 mL of solution). The only difference was that the precursor concentration of the recycled solution was significantly lower due to consumption during previous runs. As a result, the growth rate in recycling mode was significantly lower. No attempt was made to change the deposition parameters in order to compare the characteristics of the coatings under identical growth conditions. The main CVD conditions are reported in Table 1.
The recycling mode investigated at this stage was based on an openloop. This means gaseous by-products going out of the CVD reactor were forced to pass through a liquid nitrogen trap. Thus undecomposed molecules of BEBC and solvent were condensed in a tank with the reactions by-products (except light hydrocarbons as C 2 species whose trapping is not effective because of their high volatility). After returning to room temperature, a homogenous "daughter" liquid solution was obtained and stored in a pressurized tank under argon before further use. Several CVD runs were required to recover a sufficient amount of "daughter" solution for a recycled run. For example, we succeeded in obtaining a 1.5 μm thick coating with a recycled solution originating from two "new" deposition experiments which had each produced a 5 μm thick coating.
Each trapped solution could also be analyzed, for instance by UV spectrophotometry, to determine its precursor concentration. Indeed, BEBC exhibits a characteristic absorption band around 315 nm that can be used to measure the concentration according to the Beer-Lambert law (Supplementary material, Fig. S1). Also, as an improvement of the process, a closed-loop recycling system could be installed and automated (currently in progress).
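To make this concentration measurement concrete, here is a minimal sketch of the Beer-Lambert evaluation. Only the ~315 nm band is taken from the text; the molar absorptivity, path length and absorbance below are hypothetical placeholders that would in practice come from a calibration curve such as the one in Fig. S1.

```python
# Minimal sketch: BEBC concentration of a trapped solution from its UV absorbance
# near 315 nm, via the Beer-Lambert law A = epsilon * l * c.
# epsilon is a placeholder value; it must be calibrated against known solutions.

def bebc_concentration(absorbance: float,
                       epsilon_l_per_mol_cm: float,
                       path_length_cm: float = 1.0) -> float:
    """Return the molar concentration (mol/L): c = A / (epsilon * l)."""
    return absorbance / (epsilon_l_per_mol_cm * path_length_cm)

# Hypothetical numbers: calibrated epsilon of 1.2e3 L/(mol.cm), absorbance 0.36, 1 cm cell.
c = bebc_concentration(absorbance=0.36, epsilon_l_per_mol_cm=1.2e3)
print(f"Estimated BEBC concentration: {c:.1e} mol/L")  # ~3e-4 mol/L for a diluted aliquot
```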
Coating characterization
The surface morphology and cross sections of coatings were characterized by scanning electron microscopy (SEM; Leo-435VP), and by electron probe micro-analysis (EPMA; Cameca SXFive) for the chemical composition. The crystalline structure was investigated at room temperature and ambient atmosphere by X-ray diffraction (XRD) in 2θ range [8-105°] using a Bruker D8-2 diffractometer equipped with a graphite monochromator (Bragg-Brentano configuration; Cu K α radiation). The microstructure of the coatings was also studied by transmission electron microscopy (TEM; Jeol JEM 2100 equipped with a 200 kV FEG, and a Bruker AXS Quantax EDS analyzer). For TEM observations, the samples were cut and thinned perpendicular to the surface by mechanical polishing, then prepared by a dimpling technique and thinned using a precision ion polishing system (PIPS, Gatan). By this method, the electron transparent area that can be observed is a cross section including the coating and the interface with the substrate.
The chemical environment of each element of the coating was investigated by X-ray photoelectron spectroscopy (XPS; Thermo Scientific K-Alpha) equipped with a monochromatic Al X-ray source and a low energy Ar + gun (1 keV) for surface cleaning and depth profile analysis. Raman spectroscopy (Horiba Jobin Yvon LabRAM HR 800 with a 532 nm laser) was also used to analyze chemical bonding, in particular CeC bonds.
Hardness and Young's modulus were determined by nanoindentation using a Nano Scratch Tester (CSM Instruments). The loading/unloading cycle was respectively from 0 to 30 mN at 60 mN•min -1 , a pause of 30 s, then from 30 to 0 mN at 60 mN•min -1 . With this cycle the indenter penetration depth was lower than 1/10 of the coating thickness. For the thickest coatings, Vickers hardness was also measured using a BUEHLER OmniMet 2100 tester.
Results
General appearance and morphology
New and recycled coatings grown in the conditions reported in Table 1 exhibit the same glassy and dense microstructure, typical of an amorphous material. Interestingly, no grain boundary is observed by SEM, even at higher magnification than in Fig. 1, and this will be confirmed by TEM analysis in Section 3.3. They have a metallic glossy appearance and a mirror-like surface morphology. Surface roughness of Cr x C y coatings on Si substrates measured by AFM gave similar values for new and recycled coatings, typically rms = 18 ± 2 nm. Both types of coatings exhibit a very good conformal coverage on substrates with high surface roughness and on non-planar surfaces (edges, trenches…; not shown here). This is a great advantage of low pressure DLI-MOCVD, which combines a high diffusivity of gaseous reactants with decomposition at low temperature, leading to growth in the reaction-controlled regime. The only difference at this stage is the growth rate: recycled coatings were deposited with a lower growth rate than new coatings because of the lower precursor concentration of the recycled solution. Of course, this can be adjusted later.
Structure
A typical XRD pattern of a new coating 3.4 μm thick grown at 723 K on Si substrate is presented and compared to a recycled coating in Fig. 2. In both cases, there is no evidence of diffraction peaks of polycrystalline phases. The pattern of the new coating is characteristic of an amorphous material with 4 weak and broad bumps, around 2θ = 13.8°, 28.6°, 42.5° and 79.0° (better seen in the inset zoom). No difference was found for the recycled coating, except that, due to its lower thickness, a broad peak originating from the substrate (amorphous SiN x barrier layer) is observed at about 69° and the small bump at 79° is not as well marked.
The last two bumps from the coating, at 42.5° and 79.0°, correspond to amorphous chromium carbides such as Cr 7 C 3 (JCPDS 00-036-1482) and Cr 3 C 2 (JCPDS 35-804), which both exhibit their main diffraction peaks in these two angular ranges. The FWHM of the most intense bump at 42.5° gives an average size of coherent domains close to 1 nm using Scherrer's formula, confirming the amorphous character of this carbide phase.
In carbon-containing materials the presence of graphite crystallites is evidenced by the diffraction of the (002) plane expected at 2θ = 26.6° in well crystallized graphite. However, disorder in the hexagonal graphite structure (e.g. inside and between basal planes, stacking order between the graphene sheets, slipping out of alignment, folding…) leads to broadening and shifting of this peak. For instance, pyrolytic carbon can exhibit a more or less disordered turbostratic or graphitic structure. Consequently, the bump at 28.6° is assigned to pyrolytic carbon nanoparticles, namely free-C.
The first bump at 2θ = 13.8° is not related to amorphous chromium carbides or to free-C. At this small angle (large interplanar spacing), this could be a compound with a lamellar structure derived from graphite, such as graphite oxide (GO). Indeed it has been reported that the GO (001) plane diffracts from 2θ = 2 to 12° depending on the presence of oxygen-containing groups [START_REF] Blanton | Characterization of X-ray irradiated graphene oxide coatings using X-ray diffraction, X-ray photoelectron spectroscopy, and atomic force microscopy[END_REF]. However, the oxygen content of our coatings is lower than 5 at.% (Table 2), which is too low to support this hypothesis. Therefore the first bump at 13.8° was assigned to another derivative of graphite: the intercalation of Cr atoms between two graphene sheets, as in a graphite intercalation compound (GIC). Recently, several reports have dealt with the interactions between Cr and different forms of carbon, including graphene, nanotubes and fullerenes. For instance, a structural feature of the functionalization of the graphene surface is the grafting of Cr, which recreates locally the same type of bonding as in bis(arene)chromium, i.e. with an η 6 -bonding to the aromatic cycles [START_REF] Bui | Graphene-Cr-Graphene intercalation nanostructures: stability and magnetic properties from density functional theory investigations[END_REF][START_REF] Sarkar | Organometallic chemistry of[END_REF]. On the other hand, in sandwich graphene-Cr-graphene nanostructures, the representative distance of an ordered stacking is twice the spacing between two consecutive graphene sheets, i.e. 6.556 Å, because Cr cannot be intercalated between two consecutive interlayer spaces as described in [START_REF] Bui | Graphene-Cr-Graphene intercalation nanostructures: stability and magnetic properties from density functional theory investigations[END_REF][START_REF] Sarkar | Organometallic chemistry of[END_REF]. This corresponds to a diffraction angle 2θ = 13.5° considering the (001) plane, which is very close to our 13.8° experimental value (Fig. 2).
Microstructure
From approximately the same magnification as in SEM images (Fig. 1) and to larger values up to high resolution, TEM images showed the same glassy microstructure for Cr x C y coatings, as it can be seen on Fig. 3a. Again no significant difference was found for recycled coatings. A high resolution TEM analysis (Supplementary material, Fig. S2) has revealed a dense and very finely granular structure with homogeneous and monodisperse distribution of contrasted domains. The average size of these domains is of the order of magnitude of 1 nm, in good agreement with the value found by XRD.
The selected area electron diffraction pattern of the micrograph shown in Fig. 3a revealed two diffuse rings (Fig. 3b) as for high resolution TEM analysis in Fig. S2. They are centered on interplanar distances 2.097 Å and 1.223 Å. In accordance with the Bragg relation, they correspond to theoretical XRD diffraction angles at 2θ = 43.1°and 78.1°. Therefore, the inner and outer rings on TEM diffraction patterns correspond to the first and the second Cr x C y bumps on XRD pattern found at 2θ = 43.5°and 79.0°, respectively (Fig. 2). This is supported by the fact that both crystalline Cr 7 C 3 and Cr 3 C 2 phases show their strongest XRD contributions in the 2θ range 39-44°and they also exhibit a second bunch of peaks with a lower intensity around 2θ = 80°. The two bumps on XRD pattern at 2θ = 13.8°and 28.6°assigned to GIC and free-C respectively, were not seen on TEM diffraction pattern likely because they were too weak and diffuse for this technique.
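The d-spacing/angle conversions and the Scherrer estimate quoted in the Structure and Microstructure sections can be checked with a few lines of code. The Cu Kα wavelength is assumed from the diffractometer description, and the FWHM fed to the Scherrer estimate is illustrative, since its value is not quoted in the text.

```python
import math

CU_KALPHA = 1.5406  # Å, Cu Kalpha wavelength of the diffractometer

def two_theta_from_d(d_angstrom: float, wavelength: float = CU_KALPHA) -> float:
    """Bragg's law (n = 1): 2*theta in degrees for a given interplanar spacing d."""
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d_angstrom)))

def scherrer_size(fwhm_deg: float, two_theta_deg: float,
                  k: float = 0.9, wavelength: float = CU_KALPHA) -> float:
    """Scherrer estimate of the coherent-domain size (Å) from a peak FWHM."""
    beta = math.radians(fwhm_deg)
    return k * wavelength / (beta * math.cos(math.radians(two_theta_deg / 2.0)))

# Graphene-Cr-graphene (001) spacing and the two TEM ring spacings:
for d in (6.556, 2.097, 1.223):
    print(f"d = {d:.3f} Å -> 2θ ≈ {two_theta_from_d(d):.1f}°")  # ≈13.5°, 43.1°, 78.1°
# A hypothetical FWHM of ~9° for the broad 42.5° bump gives a domain size near 1 nm:
print(f"Scherrer size ≈ {scherrer_size(9.0, 42.5):.1f} Å")
```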
Chemical composition
Atomic composition determined by EPMA of new and recycled coatings is reported in Table 2. No significant difference is observed between both coatings; it is typically Cr 0.64 C 0.33 O 0.03 . The level of oxygen contamination is slightly higher in recycled coatings but it does not exceed about 5 at.%. This was attributed to the handling of the recycled solution that was stored in a pressurized tank. Although handled under Ar atmosphere, this container had to be opened and closed several times in order to recover enough solution, after several deposition experiments, for further use in a recycling CVD run. By neglecting traces of oxygen, the total carbon content of these carbide coatings (C:Cr = 0.50 ± 0.02) is intermediate between Cr 7 C 3 (C:Cr = 0.43) and Cr 3 C 2 (C:Cr = 0.67) but it is closer to Cr 7 C 3 . The overall atomic composition is consistent with a nanocomposite structure consisting of an amorphous carbide matrix a-Cr x C y and free-C, as inferred from XRD and TEM results, and the ratio y:x in the matrix is lower than 0.50, i.e. even closer to the Cr 7 C 3 stoichiometry.
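A quick mass-balance check of these numbers is sketched below, under the assumption stated in the text that the matrix has the Cr 7 C 3 stoichiometry and that all carbon in excess of it ends up as free-C.

```python
# Rough mass balance, assuming an a-Cr7C3 matrix plus free carbon: the carbon in excess
# of the matrix stoichiometry is attributed to the free-C domains.
c_over_cr_total = 0.50          # overall EPMA ratio (±0.02)
c_over_cr_matrix = 3.0 / 7.0    # Cr7C3 stoichiometry, ≈0.43

free_c_fraction_of_total_c = 1.0 - c_over_cr_matrix / c_over_cr_total
print(f"Free carbon ≈ {100 * free_c_fraction_of_total_c:.0f}% of the total carbon")
# ≈14%, consistent with the later statement that free-C does not exceed ~20% of total C.
```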
XPS analyses did not reveal significant difference between both types of coatings, except a higher contribution of oxygen bonded to Cr for recycled coatings, in agreement with EPMA data. As-deposited samples exhibit Cr 2p 3/2 peaks characteristic of Cr(III) oxide (575.8 eV) and CreOH (576.8 eV), O 1s peaks were assigned to Cr 2 O 3 (530.8 eV) and CeO/OH (532.0 eV), and C 1s core level was characteristic to adventitious carbon with the components CeC/CeH at 284.8 eV and OeC]O at 288.0 eV. A depth profile analysis of C 1s showed that Ar + ion etching of the sample for about a minute at 1 keV removes readily the surface contamination without significant secondary effect of sputtering (Fig. 4a). The C 1s region of as-deposited sample shows the main features of adventitious carbon with the CeC/CeH and OeC]O components (Fig. 4b). After removal of the contamination layer by ion etching for 220 s the C 1s peak reveals two forms of carbon present in the coating: the carbide (282.8 eV) and the free-C (~284 eV) as The coatings being of metallic nature, Raman spectra should not be a priori readily informative. Raman signal originates only from surface oxides and carbon components; no response was expected from a metallic matrix. Fig. 5 compares the Raman spectra of a new and a recycled coating for the 200 to 1800 cm -1 spectral range. Overall, the large width of the bands reveals the absence of long-range order, either because of the amorphous character of the phases or because of defects. At first glance the spectra appear very different but they are both constituted of two zones with different intensities. The bands in the first zone (200-1000 cm -1 ) are essentially due to chromium oxides on the surface of the sample [START_REF] Iliev | Raman spectroscopy of ferromagnetic CrO 2[END_REF][START_REF] Yu | Phase control of chromium oxide in selective microregions by laser annealing[END_REF][START_REF] Barshilia | Structure and optical properties of pulsed sputter deposited Cr x O y /Cr/Cr 2 O 3 solar selective coatings[END_REF] while the bands in the second zone (1000-1800 cm -1 ) are characteristic of carbon in different environments [START_REF] Ferrari | Raman spectroscopy of amorphous, nanostructured, diamond-like carbon, and nanodiamond[END_REF]. In comparison with the new coating (Fig. 5a), the higher intensity of the bands in the first zone for the recycled coating is consistent with a substantially greater oxidation (Fig. 5b), in good agreement with EPMA and XPS analyses. Spectra deconvolution on Fig. 5c and d reveals the D band at 1340 cm -1 and the G band at 1570 cm -1 which are representative of CeC bonds [START_REF] Ferrari | Raman spectroscopy of amorphous, nanostructured, diamond-like carbon, and nanodiamond[END_REF]. G stands for graphite and D for disorder in graphitic structures. The bands at 1225 and 1455 cm -1 were assigned to transpolyacetylene, a strong polymeric chain e(C 2 H 2 ) n e where carbon adopts sp 2 configuration [START_REF] Ferrari | Raman spectroscopy of amorphous, nanostructured, diamond-like carbon, and nanodiamond[END_REF]. Because of overlaps with the bands of trans-polyacetylene, the presence of C sp 3 cannot be ruled out since it is expected at 1180 and 1332 cm -1 in nanocrystalline and cubic diamond respectively, and at 1500 cm -1 in DLC (disordered sp 3 hybridization) [START_REF] Chu | Characterization of amorphous and nanocrystalline carbon films[END_REF].
In carbon materials, correlations were established between the development of the disorder from sp 2 structural model (graphite) to sp 3 (diamond) and the variation of the intensity ratio I(D)/I(G) as well as the FWHM and the position of the G band [START_REF] Ferrari | Raman spectroscopy of amorphous, nanostructured, diamond-like carbon, and nanodiamond[END_REF][START_REF] Chu | Characterization of amorphous and nanocrystalline carbon films[END_REF][START_REF] Cançado | Quantifying defects in graphene via Raman spectroscopy at different excitation energies[END_REF]. Fig. 5 shows that the intensity ratio I(D)/I(G) significantly increases from new coatings (~0.6) to recycled coatings (~1.2), suggesting that the disorder within graphitic nanostructures is higher for the recycled sample. The fact that the relative intensity of the bands at 1220 and 1460 cm -1 assigned to trans-polyacetylene and possibly to C sp 3 does not change from a new coating to a recycled one suggests that the evolution of the disorder cannot be interpreted in terms of C sp 3 proportion. As a result, it is more appropriate to consider interactions between Cr and free-C as structural defects (both in-plane and between graphene sheets) that cause increasing disorder when their number increases.
The average size of graphitic nanoparticles in the basal plane determined from the FWHM of the G band, namely L a [START_REF] Ferrari | Raman spectroscopy of amorphous, nanostructured, diamond-like carbon, and nanodiamond[END_REF], remains constant at around 35 nm for both new and recycled coatings. On the other hand, a disorder measurement L D can be determined from the intensity ratio I(D)/I(G); it represents the average distance between two point defects in graphene planes [START_REF] Cançado | Quantifying defects in graphene via Raman spectroscopy at different excitation energies[END_REF]. As suggested above, Cr grafting on graphene sheet can be considered as a defect which induces disorder. Thus, L D can be considered as the average distance between two grafted Cr and it was found decreasing from 15.5 to 6.2 nm for new and recycled coatings respectively. This means that despite an identical overall atomic composition, recycled coatings exhibit a higher defect density, as interactions at the free-C/a-Cr 7 C 3 interfaces, e.g. both as hexahapto-η 6 -Cr grafting on external graphene sheets and Cr intercalation between graphene layers.
Hardness
For the thickest Cr x C y new coatings (~5 μm), Vickers hardness around 2300 HV was measured which is quite high for CVD chromium carbide coatings. Values in the range 700-1600 HV were previously reported for polycrystalline MOCVD chromium carbides coatings [START_REF] Aleksandrov | Vapor-phase deposition of coatings from bis-arene chromium compounds on aluminum alloys[END_REF][START_REF] Yurshev | Surface hardening of tools by depositing a pyrolytic chromium carbide coating[END_REF] while electrodeposited Cr x C y coatings did not exceed 1300 HV 100 [START_REF] Zeng | Tribological and electrochemical behavior of thick Cr-C alloy coatings electrodeposited in trivalent chromium bath as an alternative to conventional Cr coatings[END_REF][START_REF] Protsenko | Improving hardness and tribological characteristics of nanocrystalline Cr-C films obtained from Cr(III) plating bath using pulsed electrodeposition[END_REF] and electroplating hard Cr is lower than 1200 HV [START_REF] Lausmann | Electrolytically deposited hardchrome[END_REF][START_REF] Liang | Structure characterization and tribological properties of thick chromium coating electrodeposited from a Cr(III) electrolyte[END_REF]. Only PVD processes manage to reach even higher hardness, from 2000 to 3000 HV [START_REF] Aubert | Hard chrome and molybdenum coatings produced by physical vapour deposition[END_REF][START_REF] Cholvy | Characterization and wear resistance of coatings in the Cr-C-N ternary system deposited by physical vapour deposition[END_REF][START_REF] Wang | Synthesis of Cr 3 C 2 coatings for tribological applications[END_REF].
For the thinnest Cr x C y coatings (≤3.5 μm), hardness was determined by nanoindentation on five samples corresponding to three new and two recycled coatings. The values of hardness (H) and Young's modulus (E) are reported in Table 3. While there is no difference in the Young's modulus between new and recycled coatings (285 and 295 GPa, respectively), the nanoindentation hardness of recycled coatings can be considered slightly higher (29 GPa) than that of new coatings (23 GPa), despite the standard deviations. This will be discussed in the next section. The ratio H 3 /E 2 is often referred to as a durability criterion; it is proportional to the contact loads needed to induce plasticity and consequently it characterizes the resistance to plastic deformation [START_REF] Tsui | Nanoindentation and nanoscratching of hard carbon coatings for magnetic disks[END_REF][START_REF] Musil | Hard and superhard Zr-Ni-N nanocomposite films[END_REF]. Comparison of both types of coatings revealed a better behavior of the recycled coatings as a result of their higher hardness, the ratio H 3 /E 2 being increased almost 3-fold.
Discussion
Chromium carbide coatings were successfully deposited using directly recycled solutions in the same DLI-MOCVD conditions as using new bis(ethylbenzene)chromium solutions. The growth rate in effluent recycling mode was found lower (around 0.5-1 μm•h -1 instead of 5-10 μm•h -1 ) essentially because the recycled BEBC solution in toluene was less concentrated due to consumption in previous runs. Chemical and structural characterizations of both types of coatings did not reveal significant differences. The coatings exhibit a smooth surface morphology, a dense and glassy-like microstructure and an amorphous structure (XRD and TEM analyses). The overall atomic composition was found to be Cr 0.64 C 0.33 O 0.03 (Table 2). Interestingly, both coatings exhibit a high conformal coverage on non-planar surfaces at relatively low deposition temperature (723 K).
A high hardness
This section discusses on the one hand the high values of hardness of the coatings whatever the precursor solution injected (new or recycled) and, on the other hand, on the difference of nanohardness between recycled and new coatings, assuming therefore that the difference is significant enough.
Both Vickers hardness (2300 HV) and nanoindentation hardness (23-29 GPa) have revealed high values, at the level of those previously reported for PVD Cr x C y coatings [START_REF] Su | Effect of chromium content on the dry machining performance of magnetron sputtered Cr x C coatings[END_REF][START_REF] Esteve | Cathodic chromium carbide coatings for molding die applications[END_REF][START_REF] Romero | Nanometric chromium nitride/chromium carbide multilayers by R.F. magnetron sputtering[END_REF]. A great advantage of DLI-MOCVD Cr x C y coatings is that they are amorphous, without grain boundaries, while those deposited by other processes are polycrystalline. It is generally reported that crystalline chromium carbides coatings grown by PVD [START_REF] Esteve | Cathodic chromium carbide coatings for molding die applications[END_REF], cathodic arc evaporation [START_REF] Esteve | Cathodic chromium carbide coatings for molding die applications[END_REF] and electrodeposition [START_REF] Zeng | Tribological and electrochemical behavior of thick Cr-C alloy coatings electrodeposited in trivalent chromium bath as an alternative to conventional Cr coatings[END_REF] are harder than amorphous ones. Interestingly the hardness of our amorphous coatings is already at the level of these polycrystalline Cr x C y coatings. Consequently, their amorphous structure cannot explain their high hardness. We are aware that Cr 3 C 2 is the hardest phase of CreC system and therefore its presence, even in the amorphous state, should significantly increase the hardness. However, we have shown that the matrix of our coatings has the stoichiometry a-Cr 7 C 3 . Also it is known that for nanocrystalline Cr x C y coatings, the nanohardness increased by decreasing the average grain size [START_REF] Protsenko | Improving hardness and tribological characteristics of nanocrystalline Cr-C films obtained from Cr(III) plating bath using pulsed electrodeposition[END_REF]. Without evidence for nanocrystalline structure this claim does not hold for our coatings. It was also reported that a high hardness was achieved for high Cr contents [START_REF] Su | Effect of chromium content on the dry machining performance of magnetron sputtered Cr x C coatings[END_REF], or that stoichiometric CreC phases must be privileged, meaning a C excess must be avoided [START_REF] Romero | Nanometric chromium nitride/chromium carbide multilayers by R.F. magnetron sputtering[END_REF]. We will discuss below that in our case a C excess, compared to Cr 7 C 3 stoichiometry, on the contrary plays a key role.
Among the factors that influence the hardness of coatings, residual stresses are probably the most important [START_REF] Nowak | The effect of residual stresses on nanoindentation behavior of thin W-C based coatings[END_REF]. The influence of other factors, such as coating thickness, growth conditions and micro- and nanostructure, has also been reported, but these factors act both on the stresses and the hardness, and so their effects are difficult to decouple.
Compressive residual stresses of -1.20 and -1.25 GPa were found for the 4.0 and 6.0 μm thick coatings, respectively, from the change of substrate curvature (see below). Reliable data could not be obtained by this method for recycled coatings because they were too thin. Assuming a rigid substrate, the maximum thermal stress can be calculated according to the equation:
σ_t = E_f′ (α_f - α_s) ΔT
where ∆ T is the variation of temperature and α i the thermal expansion coefficient (α s = 18.3 × 10 -6 K -1 and α f = 10.1 × 10 -6 K -1 ). For ∆ T = 430 K, calculated thermal stresses are -1.32 GPa. These values are generally found for ceramic coatings as TiN on stainless steel substrates [START_REF] Wu | Modified curvature method for residual thermal stress estimation in coatings[END_REF]. These results are confirming the dominant contribution of thermal stresses to residual stresses. The comparison of hardness of new (23 ± 2 GPa) and recycled (29 ± 4 GPa) coatings reveals a small but significant difference, taking into account the standard deviations, which raises the question: why should recycled coatings be harder? This could be discussed in terms of residual stresses but no data are available for recycled coatings. However, the residual stresses are largely dominated by thermal stress which has the same value for both coatings since they were deposited at the same temperature and their thickness was relatively close (3.5 and 1.0 μm, respectively). Consequently this is likely not a major factor to explain the difference of hardness. One of the best ways to comment on this difference in hardness is to focus on the specific nanocomposite structure of these hard coatings since it is known that this influences the hardness.
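A short sketch of the two stress estimates discussed here: the thermal stress from the expansion mismatch, and, for completeness, a generic Stoney-type evaluation from the substrate curvature. The curvature radii below are illustrative placeholders (back-calculated to the order of magnitude of the quoted stresses), not measured values from this work.

```python
# Thermal stress from the expansion mismatch (rigid-substrate approximation) and a
# generic Stoney estimate from substrate curvature; negative stress = compressive.
def thermal_stress_gpa(e_f_biaxial_gpa: float, alpha_f: float, alpha_s: float,
                       delta_t_k: float) -> float:
    """sigma_t = E_f' * (alpha_f - alpha_s) * dT."""
    return e_f_biaxial_gpa * (alpha_f - alpha_s) * delta_t_k

def stoney_stress_gpa(e_s_biaxial_gpa: float, t_s_m: float, t_f_m: float,
                      r_before_m: float, r_after_m: float) -> float:
    """Stoney's equation: sigma = E_s' * t_s^2 / (6 * t_f) * (1/R_after - 1/R_before)."""
    return e_s_biaxial_gpa * t_s_m**2 / (6.0 * t_f_m) * (1.0 / r_after_m - 1.0 / r_before_m)

# E_f' = 363 GPa, alpha_f = 10.1e-6 K^-1, alpha_s = 18.3e-6 K^-1, dT = 430 K:
print(f"sigma_t ≈ {thermal_stress_gpa(363, 10.1e-6, 18.3e-6, 430):.2f} GPa")  # ≈ -1.3 GPa
# Hypothetical curvature change for a 6 µm film on a 0.5 mm 304L strip (E_s' = 278 GPa):
print(f"sigma_r ≈ {stoney_stress_gpa(278, 0.5e-3, 6.0e-6, 1e9, -1.55):.2f} GPa")  # ≈ -1.25 GPa
```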
A specific amorphous nanocomposite structure
The morphology, microstructure and composition of new and recycled coatings are the same. Furthermore, XRD, XPS and Raman analyses gave evidence for an amorphous nanocomposite structure. The only significant differences between both types of coatings were found by Raman spectroscopy (Fig. 5).
Basically, the microstructure is composed of 2 phases with interfaces acting as strong interphases. The dominant phase is an amorphous carbide matrix with the Cr 7 C 3 stoichiometry (namely a-Cr 7 C 3 ). Nanometric free-C domains (L a = 35 nm in-plane correlation length) are embedded in this amorphous carbide matrix. They are related to pyrolytic C, which means they exhibit a disordered graphitic structure (turbostratic stacking) with likely some covalent bonding between graphene sheets via open cycles at the edges (generating C sp 3 sites). Furthermore, some graphene sheets are also connected by trans-polyacetylene chains at the edges of these C domains. The relative amount of free-C does not exceed 20% of the total carbon (XPS and EPMA data). An important finding was to identify signatures by Raman and XRD revealing interactions between Cr and free-C. This particular nanostructure is shown schematically in Fig. 6. Due to their layered structure, the free-C domains exhibit two types of interfaces with the a-Cr 7 C 3 matrix: the one which is parallel to the graphene sheets (parallel interface) and that which is roughly perpendicular to the stacking of graphene planes (perpendicular interface). Cr atoms from the amorphous carbide matrix can be grafted on external graphene sheets as hexahapto η 6 -Cr complexes of graphene [START_REF] Sarkar | Organometallic chemistry of[END_REF]. These specific bonds are very similar to those in the BEBC precursor. They contribute to the strengthening of the parallel interfaces. Also, individual Cr atoms can be intercalated between consecutive graphene sheets as in graphite intercalation compounds as supported by XRD data (Fig. 2) [START_REF] Bui | Graphene-Cr-Graphene intercalation nanostructures: stability and magnetic properties from density functional theory investigations[END_REF][START_REF] Sarkar | Organometallic chemistry of[END_REF]. All these Cr interactions can be considered as point defects in ideal graphene sheets. Despite a graphitic base structure, these interactions and interconnections between free-C and a-Cr 7 C 3 , through C sp 3 , transpolyacetylene and η 6 -Cr bonding rigidify the free-C domains, strengthen the interfaces and consolidate a 3D structural network between the carbide matrix and free-C through strong interphases.
The defect density on the external graphene sheets of free-C nanostructures has been estimated from Raman data as the average distance L D between two point defects in graphene sheet (Fig. 6). Interestingly L D was found to decrease from 15.5 to 6.2 nm for new and recycled coatings, respectively, while the average size of graphene sheet given by in-plane correlation length L a is constant (35 nm). This means the defect density, i.e. the density of interactions between the carbide matrix and free-C is significantly higher for recycled coatings than for the new ones. This trend suggests a correlation with the higher hardness of recycled coatings (29 ± 4 GPa) compared to the new coatings (23 ± 2 GPa). The nanohardness would increase with the density of chemical bonds both within graphene sheets of the free-C nanostructures and between these free-C domains and the amorphous carbide matrix.
Basically the growth mechanism is the same for "new" and "recycled" coatings. A simple chemical mechanism reduced to 4 limiting reactions (1 homogeneous, 3 heterogeneous) was proposed for kinetic modeling and simulation of the process. It is based on site competition reactions [START_REF] Michau | Chromium Carbide Growth by Direct Liquid Injection Chemical Vapor Deposition in Long and Narrow Tubes, Experiments, Modeling and Simulation[END_REF]. The lower concentration of recycled BEBC solutions would induce a lower supersaturation of BEBC near the growing surface. As a result, this would favor a higher mobility of adsorbed chemical species or would influence adsorption competition, and would finally facilitate locally the formation of chemical bonds both within the C domains (C sp 3 , trans-polyacetylene bridges) and at the parallel and perpendicular interfaces with the amorphous carbide matrix (Cr grafting and intercalation, respectively). Subsequently the nanostructure of the coating is overall strengthened and its nanohardness is increased.
For two new coatings 3.5 and 35.0 μm thick the nanohardness and Young modulus were found constant at 23.6 ± 2.0 GPa and approximately 293 GPa, respectively. This means the nanohardness is independent of the thickness for values higher than 3.5 μm. At this stage no data is available for thinner coatings to comment on the possible influence of thicknesses lower than 3.5 μm. It is noteworthy that our experimental value of the Young's modulus is in good agreement with the Cr 7 C 3 theoretical value of 302 GPa [START_REF] Xiao | Mechanical properties and chemical bonding characteristics of Cr 7 C 3 type multicomponent carbides[END_REF].
In coating-substrate systems prepared by CVD, residual stresses (σ_r) originate from the sum of thermal stresses (σ_t), induced by the mismatch of thermal expansion between the coating and the substrate, and intrinsic stresses (σ_i), induced by the growth mechanism. The residual stresses were determined for two "new" coatings (thickness t_f = 4.0 and 6.0 μm) deposited on 304 L steel strip 0.5 mm thick (t_s). As the ratio t_f/t_s (or E_f′ t_f/E_s′ t_s) is ≤ 1% [START_REF] Klein | How accurate are Stoney's equation and recent modifications[END_REF], the deformation of the substrate can be neglected. The E_i′ are the biaxial moduli (E_i/(1 - ν_i)), t_i the thicknesses and ν_i the Poisson's ratios, where the subscripts s and f denote the substrate and the film, respectively; this leads to E_f′ = 363 GPa and E_s′ = 278 GPa using ν_CrC = 0.2 and E_CrC = 290 GPa (average data of Cr 7 C 3 and Cr 3 C 2 ). Stoney's equation is then applicable with an error which does not exceed 5%. The compressive residual stresses quoted above were obtained in this way, from the measurement of the change of curvature before and after deposition.
The solution recovered in the cold trap at the exit of the CVD reactor contains undecomposed BEBC, toluene (solvent) and a mixture of organic by-products originating from the released ligands and the heterogeneous decomposition of a small part of them producing ethylbenzene, diethylbenzene, benzene, ethyltoluene, toluene [START_REF] Travkin | Thermal decomposition of bisarene compounds of chromium[END_REF] as well as lighter and non-aromatic hydrocarbons and hydrogen [START_REF] Maury | Low temperature MOCVD routes to chromium metal thin films using bis(benzene)chromium[END_REF]. The lighter hydrocarbons and hydrogen are not efficiently trapped because of their high volatility. The organic by-products originating from the ligands are of the same family as the solvent. Consequently, the trapped solution that can be directly recycled contains unreacted BEBC and solvents constituted of a mixture of several aromatic hydrocarbons. The major difference with the new solution is that the BEBC concentration in the recycled solution is lower due to its consumption. Finally, direct recycling of all effluents can be implemented using this chemical system in a close-loop to reach a conversion rate of the precursor near 100% (currently in progress).
Conclusions
The impact of the high cost of metalorganic precursors on the economic viability of MOCVD can be overcome by maximizing the conversion yield. It was demonstrated that direct recycling of effluent is possible using appropriate bis(arene)Cr(0) precursors.
Chromium carbide coatings were deposited by DLI-MOCVD using either a new bis(ethylbenzene)chromium solution in toluene or a recycled solution recovered at the exit of the reactor. Chemical and structural characteristics of both types of coatings are very similar. They are amorphous with a composition slightly higher than Cr 7 C 3 . The nanohardness is particularly high with values in the range 23-29 GPa. This high hardness is essentially due to the nanocomposite microstructure, without grain boundary, and strong interphases between free-C domains embedded in an amorphous Cr 7 C 3 matrix. The slightly higher hardness of recycled coatings was assigned to a higher density of chemical bonds both within the C domains (C sp 3 and trans-polyacetylene bridges) and at the interfaces with the amorphous carbide matrix (Cr grafting and intercalation). A gradual filling of prismatic and octahedral C sites of the matrix also likely plays a role in strengthening the interphase.
It is a breakthrough for MOCVD because the process can be extended to metals of columns 5 and 6 for which the same M(0) chemistry can be implemented and the carbides also have many practical applications as protective metallurgical coatings. Recycling in a closed-loop is currently in progress to reach a conversion rate near 100% in a onestep CVD run. modeling and simulation of the process. It is based on site competition reactions [START_REF] Michau | Chromium Carbide Growth by Direct Liquid Injection Chemical Vapor Deposition in Long and Narrow Tubes, Experiments, Modeling and Simulation[END_REF]. The lower concentration of recycled BEBC solutions would induce a lower supersaturation of BEBC near the growing surface. As a result, this would favor a higher mobility of adsorbed chemical species or would influence adsorption competition and finally would facilitate locally the formation of chemical bonds both within the C domains (C sp 3 , trans-polyacetylene bridges) and at the parallel and perpendicular interfaces with the amorphous carbide matrix (Cr grafting and intercalation, respectively). Subsequently the nanostructure of the coating is overall strengthened and its nanohardness is increased.
Another hypothesis about strong interphases instead of sharp and weak interfaces, not supported here by experimental data, is to be aware that in the crystallographic structure of Cr 7 C 3 carbon atoms are in trigonal prisms connected in chains while it was reported that in amorphous Cr 1-x C x for x > 33% carbon progressively filled octahedral interstitial sites as the C content increased, suggesting that C coexisted in both prismatic and octahedral sites [START_REF] Bauer-Grosse | Thermal stability and crystallization studies of amorphous TM-C films[END_REF]. It is reasonable to assume that at the interface a-Cr 7 C 3 /free-C a carbon enrichment of the carbide matrix is possible by the gradual occupation of both prismatic and octahedral sites. For instance in C-rich amorphous Cr x C y grown by PVD, C atoms were located in a mixture of prismatic and octahedral sites with a distribution depending on the total C content [START_REF] Magnuson | Electronic structure and chemical bonding of amorphous chromium carbide thin films[END_REF]. These polyhedral units are characterized by strong covalent Cr 3d-C 2p bonding. Locally, at the a-Cr 7 C 3 /free-C interface, the proportion of C atom filling octahedral sites probably depends on growth conditions. If the growth rate of the a-Cr 7 C 3 matrix is slow enough, for instance because the mole fraction of precursor is low, in a competitive pathway C can diffuse to fill octahedral sites and thus strengthen the interphase.
At this stage it would not be reasonable to speculate further on the hardness difference between "new" and "recycled" coatings, because the difference is not large and must be confirmed by further experiments. What can be retained is that both the density of interactions between the carbide matrix and free-C (grafting and intercalation of Cr), supported by experimental data, and the assumed gradual occupation of prismatic and octahedral sites of the carbide by carbon generate strong interphases which influence the mechanical properties.
Key points making recycling possible: selection of precursor
A barrier in the implementation of MOCVD recycling is that the decomposition of the metalorganic precursor is complex and often produces many by-products which, if recycled, significantly affect the composition and microstructure of the coatings. Consequently a tedious and expensive separation of the by-products is necessary to recover the precursor which has not reacted. The key is therefore to use metalorganic precursors whose decomposition mechanism is very simple, and which do not produce metalorganic by-products that could modify the growth mechanism. This is the case of bis(arene)M(0) compounds where the metal M is in the zero valence state, as in the deposited metal or carbide coatings. This important family of precursors was used for low temperature MOCVD of carbides of V [START_REF] Abisset | Low temperature MOCVD of V-C-N coatings using bis(arene) vanadium as precursors[END_REF], Nb [17], Ta [17], Cr [START_REF] Anantha | Chromium deposition from dicumene-chromium to form metal-semiconductor devices[END_REF][START_REF] Maury | Structural characterization of chromium carbide coatings deposited at low temperature by LPCVD process using dicumene chromium[END_REF][START_REF] Schuster | Influence of organochromium precursor chemistry on the microstructure of MOCVD chromium carbide coatings[END_REF][START_REF] Polikarpov | Chromium films obtained by pyrolysis of chromium bisarene complexes in the presence of chlorinated hydrocarbons[END_REF], Mo [START_REF] Whaley | Carbonaceous solid bodies and processes for their manufacture[END_REF] and W [START_REF] Whaley | Carbonaceous solid bodies and processes for their manufacture[END_REF]. In the deposition process the metal, and in particular Cr, stays in the zero valence state. For instance no hexavalent Cr(VI) compound is formed which entirely satisfies European regulation REACH or related rules. The ligands are stable aromatic molecules; they are readily released by selective bond breaking during the deposition process without undergoing significant pyrolysis [START_REF] Maury | Low temperature MOCVD routes to chromium metal thin films using bis(benzene)chromium[END_REF][START_REF] Travkin | Thermal decomposition of bisarene compounds of chromium[END_REF]. It is then recommended in DLI-MOCVD to use solvent of the same family as the ligands (e.g. toluene for BEBC) to avoid uncontrolled side-reactions.
The main characteristics of the coatings are independent of the nature of the ligands as deduced from the use of Cr(C 6 H 6 ) 2 , Cr (C 6 H 5 i Pr) 2 and Cr(C 6 H 5 Et) 2 [START_REF] Maury | Structural characterization of chromium carbide coatings deposited at low temperature by LPCVD process using dicumene chromium[END_REF][START_REF] Schuster | Influence of organochromium precursor chemistry on the microstructure of MOCVD chromium carbide coatings[END_REF]. As a result, a mixture of different bis(arene)Cr precursors can be used as in [START_REF] Devyatykh | Composition of impurities in bis-ethylbenzene chromium produced according to the Friedel-Crafts method[END_REF][START_REF] Gribov | Super-pure materials from metal-organic compounds[END_REF] and in this work. Also the nature of the solvent is not very important provided that it is non-
Fig. 1. Cross section of Cr x C y coatings grown at 723 K and 6.7 kPa on Si substrates using (a) a new BEBC solution in toluene and (b) a recycled solution. The lower thickness in (b) originates from the lower concentration of the recycled solution.
Fig. 2. Typical XRD pattern of a "new" Cr x C y coating grown by DLI-MOCVD with a new solution of BEBC in toluene (black) compared to that of a "recycled" coating (grey) grown in the same conditions.
Fig. 3. (a) TEM micrograph of a new Cr x C y coating observed in cross section; (b) corresponding selected area electron diffraction showing two diffuse rings of the amorphous carbide phase.
minority component (Fig.4c). After the surface cleaning by ion etching, the O 1s intensity is significantly decreased and only one component is found at 530.8 eV (CreO). Regarding Cr 2p 3/2 region, the oxygenated components have almost disappeared and the peak is shifted to 574.0 eV as for Cr metal or carbide (CreC bonds). This XPS analysis confirms the presence of free-C and a carbidic form in the coatings as observed by XRD. After in situ surface cleaning the atomic composition of the surface analyzed by XPS is Cr 0.57 C 0.33 O 0.10 that is in good agreement with EPMA data (Table2). From the relative intensity of the two components of C 1s peak of Fig.4cthe proportion of free-C to the total-C is approximately 20%. On the other hand, considering the EPMA composition Cr 0.64 C 0.33 O 0.03 (Table2) as a representative formula and neglecting oxygen content, comparison with the stoichiometric Cr 7 C 3 phase reveals a carbon excess as free-C of 18 at.%. These two values of relative content of free-C determined by XPS and EPMA are in good agreement and confirm the presence of free-C nanostructures identified in XRD. Due to the presence of free-C, it is confirmed that the matrix a-Cr x C y has the composition a-Cr 7 C 3 .
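As a cross-check of the quoted figure (our own estimate, assuming all chromium is bound in a stoichiometric Cr 7 C 3 matrix and neglecting oxygen), the EPMA composition Cr 0.64 C 0.33 gives a carbidic carbon content of about $0.64 \times 3/7 \approx 0.274$, hence a free-carbon fraction of roughly $(0.33 - 0.274)/0.33 \approx 17\text{--}18\%$ of the total carbon, consistent with the ≈20% deduced from the XPS C 1s components.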
Fig. 4. XPS analysis of a new Cr x C y coating: (a) depth profile of C 1s components, (b) C 1s spectra of as-received sample (0 s ion etching time) and (c) C 1s spectra after 220 s ion etching time.
the average of at least ten successful indentations per sample. Considering standard deviations of measurements, new and recycled coatings have substantially comparable H and E values. Nanoindentation measurements confirmed Vickers hardness tests. With values in the range 23-29 GPa for coatings on 304 L thicker than 1 μm, both types of coatings exhibit nanoindentation hardness as high as those of Cr x C y coatings deposited by PVD, e.g. 24.2 GPa [50], 22 GPa [51] and 21 GPa [52], as well as MOCVD, 25 GPa [53]. It is noteworthy that our coatings are amorphous, whereas those in the cited references were polycrystalline.
Fig. 5. Raman spectra of Cr x C y coatings grown on a Si substrate with a new BEBC solution (a) (c), and a recycled one (b) (d): spectral range 200-1800 cm -1 (a) (b), and the C region (c) (d). The proposed deconvolution of the C bands is commented in the text.
Fig. 6. Schematic representation of the amorphous and nanocomposite microstructure of Cr x C y coatings deposited by DLI-MOCVD showing the main structural features at the interface between free-C nanostructures embedded in an amorphous Cr 7 C 3 matrix (Cr atoms are the red circles). The L a and L D distances shown are discussed in the text. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Table 1. Experimental DLICVD conditions of new Cr x C y coatings. The growth conditions in recycling mode are the same except that the injected BEBC solution, resulting from cryogenic trapping during previous CVD runs, had a lower concentration.
T = 723 K; P = 6.7 kPa; BEBC in toluene = 0.3 mol/L; BEBC gas flow rate = 9 sccm; toluene gas flow rate = 216 sccm; N 2 gas flow = 500 sccm; injection frequency = 1-10 Hz; opening time = 0.5-5 ms.
Table 2. Atomic composition of coatings grown with a new and recycled solution (EPMA data).
Table 3. Nanoindentation results for new and recycled coatings deposited on 304 L stainless steel substrate.
New Cr x C y: thickness 3.5 μm; H = 23 ± 2 GPa; E = 285 ± 20 GPa; H 3 /E 2 = 1.4 × 10 -1 GPa.
Recycled Cr x C y: thickness 1.0 μm; H = 29 ± 4 GPa; E = 295 ± 40 GPa; H 3 /E 2 = 3.9 × 10 -1 GPa.
Acknowledgements
This work was supported by the Centre of Excellence of Multifunctional Architectured Materials "CEMAM" [grant number AN-10-LABX-44-01]. We thank Sofiane Achache and Raphaël Laloo for their help in hardness measurements, Jerome Esvan and Olivier Marsan for their assistance in XPS and Raman spectroscopies.
Appendix A. Supplementary data
Supplementary data to this article can be found online at https:// doi.org/10.1016/j.surfcoat.2017.06.077. | 59,985 | [
"174847"
] | [
"580",
"580",
"255534",
"1041828",
"1041828"
] |
01766530 | en | [
"info"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01766530/file/main.pdf | Thomas Chatain
Maurice Comlan
David Delfieu
Loïg Jezequel
Olivier H Roux
Pomsets and Unfolding of Reset Petri Nets
Reset Petri nets are a particular class of Petri nets where transition firings can remove all tokens from a place without checking if this place actually holds tokens or not. In this paper we look at partial order semantics of such nets. In particular, we propose a pomset bisimulation for comparing their concurrent behaviours. Building on this pomset bisimulation we then propose a generalization of the standard finite complete prefixes of unfolding to the class of safe reset Petri nets.
Introduction
Petri nets are a well suited formalism for specifying, modeling, and analyzing systems with conflicts, synchronization and concurrency. Many interesting properties of such systems (reachability, boundedness, liveness, deadlock,. . . ) are decidable for Petri nets. Over time, many extensions of Petri nets have been proposed in order to capture specific, possibly quite complex, behaviors in a more direct manner. These extensions offer more compact representations and/or increase expressive power. One can notice, in particular, a range of extensions adding new kinds of arcs to Petri nets: read arcs and inhibitor arcs [START_REF] Baldan | Contextual Petri nets, asymmetric event structures and processes[END_REF][START_REF] Montanari | Contextual nets[END_REF] (allowing to read variables values without modifying them), and reset arcs [START_REF] Araki | Some decision problems related to the reachability problem for Petri nets[END_REF] (allowing to modify variables values independently of their previous value). Reset arcs increase the expressiveness of Petri nets, but they compromise analysis techniques. For example, boundedness [START_REF] Dufourd | Boundedness of reset P/T nets[END_REF] and reachability [START_REF] Araki | Some decision problems related to the reachability problem for Petri nets[END_REF] are undecidable. For bounded reset Petri nets, more properties are decidable, as full state spaces can be computed.
Full state-space computations (i.e. using state graphs) do not preserve partial order semantics. To face this problem, Petri nets unfolding has been proposed and has gained the interest of researchers in verification [START_REF] Esparza | Unfoldings -A Partial-Order Approach to Model Checking[END_REF], diagnosis [START_REF] Benveniste | Diagnosis of asynchronous discreteevent systems: a net unfolding approach[END_REF], and planning [START_REF] Hickmott | Planning via Petri net unfolding[END_REF]. This technique keeps the intrinsic parallelism and prevents the combinatorial interleaving of independent events. While the unfolding of a Petri net can be infinite, there exist algorithms for constructing finite prefixes of it [START_REF] Esparza | An improvement of McMillan's unfolding algorithm[END_REF][START_REF] Mcmillan | Using unfoldings to avoid the state explosion problem in the verification of asynchronous circuits[END_REF]. Unfolding have the strong interest of preserving more behavioral properties of Petri nets than state graphs. In particular they preserve concurrency and its counterpart: causality. Unfolding techniques have also been developed for extensions of Petri nets, and in particular Petri nets with read arcs [START_REF] Baldan | Efficient unfolding of contextual Petri nets[END_REF].
Our contribution: Reachability analysis is known to be feasible for bounded reset Petri nets; however, as far as we know, no technique for computing finite prefixes of unfoldings exists yet, and hence no technique preserving concurrency and causality. Proposing one is the aim of this paper. For that, we characterise the concurrent behaviour of reset Petri nets by defining a notion of pomset bisimulation. This has been inspired by several works on pomset behaviour of concurrent systems [START_REF] Best | Concurrent bisimulations in Petri nets[END_REF][START_REF] Van Glabbeek | Equivalence notions for concurrent systems and refinement of actions[END_REF][START_REF] Vogler | Bisimulation and action refinement[END_REF]. From this characterization we can then express what an unfolding preserving the concurrent behaviour of a reset Petri net should be. We show that it is not possible to remove reset arcs from safe reset Petri nets while preserving their behaviours with respect to this pomset bisimulation. Then we propose a notion of finite complete prefixes of unfolding of safe reset Petri nets that allows for reachability analysis while preserving pomset behaviour. As a consequence of the two other contributions, these finite complete prefixes do have reset arcs.
This paper is organized as follows: We first give basic definitions and notations for (safe) reset Petri nets. Then, in Section 3, we propose the definition of a pomset bisimulation for reset Petri nets. In Section 4 we show that, in general, there is no Petri net without resets which is pomset bisimilar to a given reset Petri net. Finally, in Section 5 -building on the results of Section 4 -we propose a finite complete prefix construction for reset Petri nets.
Reset Petri nets
Definition 1 (structure). A reset Petri net structure is a tuple (P , T , F , R) where P and T are disjoint sets of places and transitions, F ⊆ (P × T ) ∪ (T × P ) is a set of arcs, and R ⊆ P × T is a set of reset arcs.
An element x ∈ P ∪ T is called a node and has a preset • x = {y ∈ P ∪ T : (y, x) ∈ F } and a postset x • = {y ∈ P ∪ T : (x, y) ∈ F }. If, moreover, x is a transition, it has a set of resets, denoted below by reset(x) = {y ∈ P : (y, x) ∈ R}.
For two nodes x, y ∈ P ∪ T , we say that: x is a causal predecessor of y, noted x ≺ y, if there exists a sequence of nodes x 1 . . . x n with n ≥ 2 so that ∀i ∈ [1..n-1], (x i , x i+1 ) ∈ F , x 1 = x, and x n = y. If x ≺ y or y ≺ x we say that x and y are in causal relation. The nodes x and y are in conflict, noted x#y, if there exists two sequences of nodes x 1 . . . x n with n ≥ 2 and ∀i ∈ [1..n -1], (x i , x i+1 ) ∈ F , and y 1 . . . y m with m ≥ 2 and ∀i ∈ [1..m -1], (y i , y i+1 ) ∈ F , so that x 1 = y 1 is a place, x 2 = y 2 , x n = x, and y m = y.
A marking is a set M ⊆ P of places. It enables a transition t ∈ T if ∀p ∈ • t, p ∈ M . In this case, t can be fired from M , leading to the new marking M ′ = (M \ ( • t ∪ reset(t))) ∪ t • . The fact that M enables t and that firing t leads to M ′ is denoted by M [t⟩M ′ .
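As an illustration (ours, not part of the paper), the structure and the firing rule above translate directly into a small Python sketch; the class name and encoding choices are arbitrary.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResetNet:
    places: frozenset       # P
    transitions: frozenset  # T
    flow: frozenset         # F: set of pairs (x, y)
    resets: frozenset       # R: set of pairs (place, transition)
    m0: frozenset           # initial marking M0

    def preset(self, x):
        return {y for (y, z) in self.flow if z == x}

    def postset(self, x):
        return {y for (z, y) in self.flow if z == x}

    def reset_set(self, t):
        return {p for (p, u) in self.resets if u == t}

    def enabled(self, marking, t):
        return self.preset(t) <= set(marking)

    def fire(self, marking, t):
        # M' = (M \ (preset(t) ∪ reset_set(t))) ∪ postset(t)
        assert self.enabled(marking, t)
        return frozenset((set(marking) - (self.preset(t) | self.reset_set(t)))
                         | self.postset(t))
```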
Definition 2 (reset Petri net).
A reset Petri net is a tuple (P , T , F , R, M 0 ) where (P , T , F , R) is a reset Petri net structure and M 0 is a marking called the initial marking. A marking M is said to be reachable in a reset Petri net if there exists a sequence M 1 . . . M n of markings so that: ∀i ∈ [1..n -1], ∃t ∈ T , M i [t⟩M i+1 (each marking enables a transition that leads to the next marking in the sequence), M 1 = M 0 (the sequence starts from the initial marking), and M n = M (the sequence leads to M ). The set of all markings reachable in a reset Petri net N R is denoted by [N R ⟩.
A reset Petri net with an empty set of reset arcs is simply called a Petri net.
Definition 3 (underlying Petri net). Given N R = (P , T , F , R, M 0 ) a reset Petri net, we call its underlying Petri net the Petri net N = (P , T , F , ∅, M 0 ).
The above formalism is in fact a simplified version of the general formalism of reset Petri nets: arcs have no multiplicity and markings are sets of places rather than multisets of places. We use it because it suffices for representing safe nets.
Definition 4 (safe reset Petri net). A reset Petri net (P , T , F , R, M 0 ) is said to be safe if for any reachable marking M and any transition t ∈ T , if M enables t then (t • \ ( • t ∪ reset(t))) ∩ M = ∅.
The reader familiar with Petri nets will notice that our results generalize to larger classes of nets: unbounded reset Petri nets for our pomset bisimulation (Section 3), and bounded reset Petri nets for our prefix construction (Section 5).
In the rest of the paper, unless the converse is specified, we consider reset Petri nets so that the preset of each transition t is non-empty: • t = ∅. Notice that this is not a restriction to our model: one can equip any transition t of a reset Petri net with a place p t so that p t is in the initial marking and • p t = p • t = {t}. One may need to express that two (reset) Petri nets have the same behaviour. This is useful in particular for building minimal (or at least small, that is with few places and transitions) representatives of a net; or for building simple (such as loop-free) representatives of a net. A standard way to do so is to define a bisimulation between (reset) Petri nets, and state that two nets have the same behaviour if they are bisimilar.
The behaviour of a net will be an observation of its transition firing, this observation being defined thanks to a labelling of nets associating to each transition an observable label or the special unobservable label ε.
Definition 5 (labelled reset Petri net). A labelled reset Petri net is a tuple (N R , Σ, λ) so that: N R = (P , T , F , R, M 0 ) is a reset Petri net, Σ is a set of transition labels, and λ : T → Σ ∪ {ε} is a labelling function.
In such a labelled net we extend the labelling function λ to sequences of transitions in the following way: given a sequence t 1 . . . t n (with n ≥ 2) of transitions, λ(t 1 . . . t n ) = λ(t 1 )λ(t 2 . . . t n ) if λ(t 1 ) ∈ Σ and λ(t 1 . . . t n ) = λ(t 2 . . . t n ) otherwise (that is, if λ(t 1 ) = ε).
From that, one can define bisimulation as follows.
Definition 6 (bisimulation). Let (N R,1 , Σ 1 , λ 1 ) and (N R,2 , Σ 2 , λ 2 ) be two labelled reset Petri nets with N R,i = (P i , T i , F i , R i , M 0,i ). They are bisimilar if and only if there exists a relation ρ ⊆ [N R,1 ⟩ × [N R,2 ⟩ (a bisimulation) so that:
1. (M 0,1 , M 0,2 ) ∈ ρ,
2. if (M 1 , M 2 ) ∈ ρ, then
(a) for every transition t ∈ T 1 so that M 1 [t⟩M 1,n there exists a sequence t 1 . . . t n of transitions from T 2 and a sequence M 2,1 . . . M 2,n of markings of N R,2 so that: M 2 [t 1 ⟩M 2,1 [t 2 ⟩ . . . [t n ⟩M 2,n , λ 2 (t 1 . . . t n ) = λ 1 (t), and (M 1,n , M 2,n ) ∈ ρ,
(b) the other way around (for every transition t ∈ T 2 . . . ).
Fig. 2. Two bisimilar nets
This bisimulation however hides an important part of the behaviours of (reset) Petri nets: transition firings may be concurrent when transitions are not in causal relation nor in conflict. For example, consider Figure 2 where N R,1 and N R,2 are bisimilar (we identify transition names and labels). In N R,1 , t 1 and t 2 are not in causal relation while in N R,2 they are in causal relation.
To avoid this loss of information, a standard approach is to define bisimulations based on partially ordered sets of transitions rather than totally ordered sets of transitions (the transition sequences used in the above definition). Such bisimulations are usually called pomset bisimulations.
Pomset bisimulation for reset Petri nets
In this section, we propose a definition of pomset bisimulation for reset Petri nets. It is based on an ad hoc notion of processes (representations of the executions of a Petri net, concurrent counterpart of paths in automata).
Processes of reset Petri nets
We recall a standard notion of processes of Petri nets and show how it can be extended to reset Petri nets. As a first step, we define occurrence nets which are basically Petri nets without loops.
Definition 7 (occurrence net). An occurrence net is a (reset) Petri net (B, E, F O , R O , M O 0 ) so that, ∀b ∈ B, ∀x ∈ B ∪ E: (1) | • b| ≤ 1, (2) x is not in causal relation with itself, (3) x is not in conflict with itself, (4) {y ∈ B ∪E : y ≺ x} is finite, (5) b ∈ M O 0 if and only if • b = ∅.
Places of an occurrence net are usually referred to as conditions and transitions as events. In an occurrence net, if two nodes x, y ∈ B ∪ E are so that x = y, are not in causal relation, and are not in conflict, they are said to be concurrent. Moreover, in occurrence net, the causal relation is a partial order.
There is a price to pay for having reset arcs in occurrence nets. With no reset arcs, checking if a set E of events together form a feasible execution (i.e. checking that the events from E can all be ordered so that they can be fired in this order starting from the initial marking) is linear in the size of the occurrence net (it suffices to check that E is causally closed and conflict free). With reset arcs the same task is NP-complete as stated in the below proposition.
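To illustrate the linear-time check mentioned above (our sketch, reusing the ResetNet class from the earlier snippet, with conditions encoded as places and events as transitions), causal closure and conflict-freeness of a set of events can be tested as follows:

```python
def is_feasible_without_resets(occ_net, events):
    """For an occurrence net WITHOUT reset arcs: a set of events is a feasible
    execution iff it is causally closed and conflict-free."""
    events = set(events)
    # causal closure: the producer of every input condition is also selected
    for e in events:
        for b in occ_net.preset(e):         # input conditions of e
            for f in occ_net.preset(b):     # unique producing event (if any)
                if f not in events:
                    return False
    # conflict-freeness: no condition is consumed by two selected events
    for b in occ_net.places:
        if sum(1 for e in events if b in occ_net.preset(e)) > 1:
            return False
    return True
```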
Proposition 1. The problem of deciding if a set E of events of an occurrence net with resets forms a feasible execution is NP-complete.
Proof. (Sketch) Graph 3-coloring reduces to executability of an occurrence net.
The branching processes of a Petri net are then defined as particular occurrence nets linked to the original net by homomorphisms.
Definition 8 (homomorphism of nets). Let N 1 and N 2 be two Petri nets such that N i = (P i , T i , F i , ∅, M 0,i ). A mapping h : P 1 ∪ T 1 → P 2 ∪ T 2 is an homomorphism of nets from N 1 to N 2 if ∀p 1 ∈ P 1 , ∀p 2 ∈ P 2 , ∀t ∈ T 1 : (1) h(p 1 ) ∈ P 2 , (2) h(t) ∈ T 2 , (3) p 2 ∈ • h(t) ⇔ ∃p 1 ∈ • t, h(p 1 ) = p 2 , (4) p 2 ∈ h(t) • ⇔ ∃p 1 ∈ t • , h(p 1 ) = p 2 , (5) p 2 ∈ M 0,2 ⇔ ∃p 1 ∈ M 0,1 , h(p 1 ) = p 2 .
Definition 9 (processes of a Petri net). Let N = (P , T , F , ∅, M 0 ) be a Petri net, O = (B, E, F O , ∅, M O 0 ) be an occurrence net, and h be an homomorphism of nets from O to N . Then (O, h) is a branching process of N if ∀e 1 , e 2 ∈ E, ( • e 1 = • e 2 ∧ h(e 1 ) = h(e 2 )) ⇒ e 1 = e 2 . If, moreover, ∀b ∈ B, |b • | ≤ 1, then (O, h) is a process of N .
Finally, a process of a reset Petri net is obtained by adding reset arcs to a process of the underlying Petri net (leading to what we call below a potential process) and checking that all its events can still be enabled and fired in some order.
Definition 10 (potential processes of a reset Petri net). Let N R = (P , T , F , R, M 0 ) be a reset Petri net and N be its underlying Petri net, let O = (B, E, F O , R O , M O 0 ) be an occurrence net, and h be an homomorphism of nets from O to N R . Then (O, h) is a potential process of N R if (1) (O ′ , h) is a process of N with O ′ = (B, E, F O , ∅, M O 0 ), (2) ∀b ∈ B, ∀e ∈ E, (b, e) ∈ R O if and only if (h(b), h(e)) ∈ R.
Definition 11 (processes of a reset Petri net). Let N R = (P , T , F , R, M 0 ) be a reset Petri net, O = (B, E, F O , R O , M O 0 ) be an occurrence net, and h be an homomorphism of nets from O to N R . Then (O, h) is a process of N R if (1) (O, h) is a potential process of N R , and (2) if E = {e 1 , . . . , e n } then ∃M 1 , . . . , M n ⊆ B so that M O 0 [e k 1 ⟩M 1 [e k 2 ⟩ . . . [e k n ⟩M n with {k 1 , . . . , k n } = {1, . . . , n}.
Notice that processes of reset Petri nets and processes of Petri nets do not exactly have the same properties. In particular, two properties are central in defining pomset bisimulation for Petri nets and do not hold for reset Petri nets.
Property 1. In any process of a Petri net with set of events E, consider any sequence of events e 1 e 2 . . . e n (1) that contains all the events in E and (2) such that ∀i, j ∈ [1..n], if e i ≺ e j then i < j. Necessarily, there exist markings M 1 , . . . , M n so that M O 0 [e 1 ⟩M 1 [e 2 ⟩ . . . [e n ⟩M n .
This property (which, intuitively, expresses that processes are partially ordered paths) is no longer true for reset Petri nets. Consider for example the reset Petri net of Figure 1 (left). Figure 1 (right) is one of its processes (the occurrence net with the homomorphism h below). Since e 2 is not a causal predecessor of e 1 , there should exist markings M 1 , M 2 so that M 0 [e 1 ⟩M 1 [e 2 ⟩M 2 . However, M 0 = {c 1 , c 3 } indeed enables e 1 , but the marking M 1 such that M 0 [e 1 ⟩M 1 is {c 2 }, which does not enable e 2 .
Property 2. In a process of a Petri net all the sequences of events e 1 e 2 . . . e n verifying (1) and ( 2) of Property 1 lead to the same marking (i.e. M n is always the same), thus uniquely defining a notion of maximal marking of a process. This property defines the marking reached by a process. As a corollary of Property 1 not holding for reset Petri nets, there is no uniquely defined notion of maximal marking in their processes. Back to the example {c 2 } is somehow maximal (no event can be fired from it) as well as {c 2 , c 4 }.
To transpose the spirit of Properties 1 and 2 to processes of reset Petri nets, we define below a notion of maximal markings in such processes. In other words, the maximal markings of a process are all the marking that are reachable in it using all its events. This, in particular, excludes {c 2 } in the above example.
Abstracting processes
We show how processes of labelled reset Petri nets can be abstracted as partially ordered multisets (pomsets) of labels.
Definition 13 (pomset abstraction of processes). Let (N R , Σ, λ) be a labelled reset Petri net and (O, h) be a process of N R with O = (B, E, F O , R O , M O 0 ). Define E ′ = {e ∈ E : λ(h(e)) ≠ ε}. Define λ ′ : E ′ → Σ as the function so that ∀e ∈ E ′ , λ ′ (e) = λ(h(e)). Define moreover < ′ ⊆ E ′ × E ′ as the relation so that e 1 < ′ e 2 if and only if e 1 ≺ e 2 (e 1 is a causal predecessor of e 2 in O). Then, (E ′ , < ′ , λ ′ ) is the pomset abstraction of (O, h). This abstraction (E ′ , < ′ , λ ′ ) of a process is called its pomset abstraction because it can be seen as a multiset of labels (several events may have the same associated label by λ ′ ) that are partially ordered by the < ′ relation. In order to compare processes with respect to their pomset abstractions, we also define the following equivalence relation.
Definition 14 (pomset equivalence). Let (E, <, λ) and (E ′ , < ′ , λ ′ ) be the pomset abstractions of two processes P and P ′ . These processes are pomset equivalent, noted P ≡ P ′ , if and only if there exists a bijection f : E → E ′ so that ∀e 1 , e 2 ∈ E: (1) λ(e 1 ) = λ ′ (f (e 1 )), and (2) e 1 < e 2 if and only if f (e 1 ) < ′ f (e 2 ).
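Definition 14 amounts to the existence of a label- and order-preserving bijection. A brute-force sketch (ours, only meant for small event sets) could look as follows, with the orders given as sets of pairs and the labellings as dictionaries:

```python
from itertools import permutations

def pomset_equivalent(E1, lt1, lab1, E2, lt2, lab2):
    """Check Definition 14 by trying every bijection f : E1 -> E2."""
    E1, E2 = list(E1), list(E2)
    if len(E1) != len(E2):
        return False
    for image in permutations(E2):
        f = dict(zip(E1, image))
        labels_ok = all(lab1[e] == lab2[f[e]] for e in E1)
        order_ok = all(((e1, e2) in lt1) == ((f[e1], f[e2]) in lt2)
                       for e1 in E1 for e2 in E1)
        if labels_ok and order_ok:
            return True
    return False
```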
Intuitively, two processes are pomset equivalent if their pomset abstractions define the same pomset: same multisets of labels with same partial orderings. Finally, we also need to be able to abstract processes as sequences of labels.
Definition 15 (linear abstraction). Let (N R , Σ, λ) be a labelled reset Petri net, let P = (O, h) be a process of N R with O = (B, E, F O , R O , M O 0 ), and let M be a reachable marking in O. Define λ ′ : E → Σ as the function so that ∀e ∈ E, λ ′ (e) = λ(h(e)). The linear abstraction of P with respect to M is the set lin(M , P) so that a sequence ω is in lin(M , P) if and only if in O there exist markings M 1 , . . . , M n-1 and events e 1 , . . . , e n so that M O 0 [e 1 ⟩M 1 [e 2 ⟩ . . . M n-1 [e n ⟩M and λ ′ (e 1 . . . e n ) = ω.
Pomset bisimulation
We now define a notion of pomset bisimulation between reset Petri nets, inspired by [START_REF] Best | Concurrent bisimulations in Petri nets[END_REF][START_REF] Van Glabbeek | Equivalence notions for concurrent systems and refinement of actions[END_REF][START_REF] Vogler | Bisimulation and action refinement[END_REF]. Intuitively, two reset Petri nets are pomset bisimilar if there exists a relation between their reachable markings so that the markings that can be reached by pomset equivalent processes from two markings in relation are themselves in relation. This is formalized by the below definition.
Definition 16 (pomset bisimulation for reset nets). Let (N R,1 , Σ 1 , λ 1 ) and (N R,2 , Σ 2 , λ 2 ) be two labelled reset Petri nets with N R,i = (P i , T i , F i , R i , M 0,i ). They are pomset bisimilar if and only if there exists a relation ρ ⊆ [N R,1 ⟩ × [N R,2 ⟩ (called a pomset bisimulation) so that:
1. (M 0,1 , M 0,2 ) ∈ ρ,
2. if (M 1 , M 2 ) ∈ ρ, then
(a) for every process P 1 of (P 1 , T 1 , F 1 , R 1 , M 1 ) there exists a process P 2 of (P 2 , T 2 , F 2 , R 2 , M 2 ) so that P 1 ≡ P 2 and
- ∀M ′ 1 ∈ M max (P 1 ), ∃M ′ 2 ∈ M max (P 2 ) so that (M ′ 1 , M ′ 2 ) ∈ ρ,
- ∀M ′ 1 ∈ M max (P 1 ), ∀M ′ 2 ∈ M max (P 2 ), (M ′ 1 , M ′ 2 ) ∈ ρ ⇒ lin(M ′ 1 , P 1 ) = lin(M ′ 2 , P 2 ),
(b) the other way around (for every process P 2 . . . ).
Notice that, in the above definition, taking the processes P 1 and P 2 bisimilar (using the standard bisimulation relation for Petri nets) rather than comparing lin(M ′ 1 , P 1 ) and lin(M ′ 2 , P 2 ) would lead to an equivalent definition.
Remark that pomset bisimulation implies bisimulation, as expressed by the following proposition. The converse is obviously not true. Proposition 2. Let (N R,1 , Σ 1 , λ 1 ) and (N R,2 , Σ 2 , λ 2 ) be two pomset bisimilar labelled reset Petri nets, then (N R,1 , Σ 1 , λ 1 ) and (N R,2 , Σ 2 , λ 2 ) are bisimilar.
Proof. It suffices to notice that Definition 6 can be obtained from Definition 16 by restricting the processes considered, taking only those with exactly one transition whose label is different from ε.
From now on, we consider that (reset) Petri nets are finite, i.e. their sets of places and transitions are finite.
Fig. 3. A remarkable pattern N pat R and its structural transformation N pat str , a labelled reset Petri net N 0 R including the pattern N pat R , and a finite complete prefix F 0 R of N 0 R . Transition labels are given on transitions.
In this section, we prove that it is, in general, not possible to remove reset arcs from safe reset Petri nets while preserving their behaviours with respect to this pomset bisimulation. More precisely, we prove that it is not possible to build a safe labelled Petri net (while this is out of the scope of this paper, the reader familiar with Petri nets may notice that this is the case for bounded labelled Petri net) without reset arcs which is pomset bisimilar to a given safe labelled reset Petri net. For that, we exhibit a particular pattern -Figure 3 (left) -and show that a reset Petri net including this pattern cannot be pomset bisimilar to a Petri net without reset arcs.
As a first intuition of this fact, let us consider the following structural transformation that removes reset arcs from a reset Petri net.
Definition 17 (Structural transformation). Let (N R , Σ, λ) be a labelled reset Petri net such that N R = (P , T , F , R, M 0 ); its structural transformation is the labelled Petri net (N R,str , Σ str , λ str ) where N R,str = (P str , T str , F str , ∅, M 0,str ) so that:
P str = P ∪ P̄ with P̄ = {p̄ : p ∈ P ∧ ∃t ∈ T , (p, t) ∈ R},
T str = T ∪ T̄ with T̄ = {t̄ : t ∈ T ∧ ∃p ∈ P , (p, t) ∈ R},
F str = F ∪ {(p, t̄) : t̄ ∈ T̄ , (p, t) ∈ F } ∪ {(t̄, p) : t̄ ∈ T̄ , (t, p) ∈ F } (1)
∪ {(p̄, t) : p̄ ∈ P̄ , (t, p) ∈ F } ∪ {(t, p̄) : p̄ ∈ P̄ , (p, t) ∈ F } (2)
∪ {(p̄, t̄) ∈ P̄ × T̄ : (t, p) ∈ F } ∪ {(t̄, p̄) ∈ T̄ × P̄ : (p, t) ∈ F } (3)
∪ {(p, t), (p̄, t̄), (t, p̄), (t̄, p̄) : (p, t) ∈ R}, (4)
M 0,str = M 0 ∪ {p̄ ∈ P̄ : p ∉ M 0 },
and moreover, Σ str = Σ, ∀t ∈ T , λ str (t) = λ(t), and ∀t̄ ∈ T̄ , λ str (t̄) = λ(t).
Intuitively, in this transformation, for each reset arc (p, t), a copy p̄ of p and a copy t̄ of t are created. The two places are such that p̄ is marked if and only if p is not marked; the transition t will perform the reset when p is marked and t̄ will perform it when p is not marked (i.e. when p̄ is marked). For that, new arcs are added to F so that: t̄ mimics t (1), the link between p and p̄ is enforced (2, 3), and the resets are performed either by t or by t̄ depending on the markings of p and p̄ (4). This is exemplified in Figure 3 (left and middle left).
Lemma 1. A labelled reset Petri net (N R , Σ, λ) and its structural transformation (N R,str , Σ str , λ str ) as defined in Definition 17 are bisimilar.
Proof. (Sketch) The bisimulation relation is ρ ⊆ [N R ⟩ × [N R,str ⟩ defined by (M, M str ) ∈ ρ iff ∀p ∈ P , M (p) = M str (p) and, for every p ∈ P such that p̄ ∈ P̄ , M str (p) + M str (p̄) = 1.
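The construction of Definition 17 is easy to mechanize. Here is a sketch (ours, following the reading of the definition given above and reusing the ResetNet class from the earlier snippet, with copies encoded as pairs ("bar", x) and labellings as dictionaries):

```python
def structural_transformation(net, label):
    bar = lambda x: ("bar", x)
    barred_p = {p for (p, _) in net.resets}
    barred_t = {t for t in net.transitions if net.reset_set(t)}
    P2 = net.places | {bar(p) for p in barred_p}
    T2 = net.transitions | {bar(t) for t in barred_t}
    F2 = set(net.flow)
    for t in barred_t:                                  # (1) t_bar mimics t
        F2 |= {(p, bar(t)) for p in net.preset(t)}
        F2 |= {(bar(t), p) for p in net.postset(t)}
    for p in barred_p:                                  # (2)-(3) p_bar complements p
        for t in net.transitions:
            if (t, p) in net.flow:
                F2.add((bar(p), t))
                if t in barred_t:
                    F2.add((bar(p), bar(t)))
            if (p, t) in net.flow:
                F2.add((t, bar(p)))
                if t in barred_t:
                    F2.add((bar(t), bar(p)))
    for (p, t) in net.resets:                           # (4) resets become ordinary arcs
        F2 |= {(p, t), (t, bar(p)), (bar(p), bar(t)), (bar(t), bar(p))}
    M2 = net.m0 | {bar(p) for p in barred_p if p not in net.m0}
    lab2 = dict(label)
    lab2.update({bar(t): label[t] for t in barred_t})
    return ResetNet(frozenset(P2), frozenset(T2), frozenset(F2),
                    frozenset(), frozenset(M2)), lab2
```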
For the transformation of Definition 17, a reset Petri net and its transformation are bisimilar but not always pomset bisimilar. This can be remarked on any safe reset Petri net including the pattern N pat R of Figure 3. Indeed, this transformation adds in N pat str a causality relation between the transition labelled by t 1 and each of the two transitions labelled by t 3 . From the initial marking of N pat str , for any process whose pomset abstraction includes both t 1 and t 3 , these two labels are causally ordered. While, from the initial marking of N pat R there is a process which pomset abstraction includes both t 1 and t 3 but does not order them. We now generalize this result.
Let us consider the labelled reset Petri Net N 0 R of Figure 3 (middle right). It uses the pattern N pat R of Figure 3 in which t 1 and t 3 can be fired in different order infinitely often. In this net, the transitions with labels t 1 and t 3 are not in causal relation. Proposition 3. There is no finite safe labelled Petri net (i.e. without reset arc) which is pomset bisimilar to the labelled reset Petri net N 0 R . Proof. We simply remark that any finite safe labelled Petri net with no reset arcs which is bisimilar to N 0 R has a causal relation between two transitions labelled by t 1 and t 3 respectively (Lemma 2). From that, by Proposition 2, we get that any such labelled Petri net N which would be pomset bisimilar to N 0 R would have a process from its initial marking whose pomset abstraction is such that some occurrence of t 1 and some occurrence of t 3 are ordered, while this is never the case in the processes of N 0 R . This prevents N from being pomset bisimilar to N 0 R , and thus leads to a contradiction, proving the proposition. Lemma 2. Any safe labelled Petri net with no reset arcs which is bisimilar (see definition 6) to N 0 R has a causal relation between two transitions labelled by t 1 and t 3 respectively.
Proof. (Sketch) The firing of t 3 prevents the firing of t 2 ; then t 3 and t 2 are in conflict and share an input place which has to be marked again after the firing of t 1 . This place generates a causality between t 1 and t 3 .
In this section, we propose a notion of finite complete prefixes of unfolding of safe reset Petri nets preserving reachability of markings and pomset behaviour. As a consequence of the previous section, these finite complete prefixes do have reset arcs.
The unfolding of a Petri net is a particular branching process (generally infinite) representing all its reachable markings and ways to reach them. It also preserves concurrency.
Definition 18 (Unfolding of a Petri net). The unfolding of a net can be defined as the union of all its branching processes [START_REF] Esparza | Unfoldings -A Partial-Order Approach to Model Checking[END_REF] or equivalently its largest branching process (with respect to inclusion).
In the context of reset Petri nets, no notion of unfolding has been defined yet. Accordingly to our notion of processes for reset Petri nets and because of Proposition 4 below we propose Definition 19. In it and the rest of the paper, nets and labelled nets are identified (each transition is labelled by itself) and labellings of branching processes are induced by homomorphisms (as for pomset abstraction).
Definition 19 (Unfolding of a reset Petri net). Let N R be a safe reset Petri net and N be its underlying Petri net. Let U be the unfolding of N . The unfolding of N R is U R , obtained by adding reset arcs to U according to (2) in Definition 10. Proof. (Sketch) This extends a result of [START_REF] Van Glabbeek | Petri net models for algebraic theories of concurrency[END_REF], stating that two Petri nets having the same unfolding (up to isomorphism) are pomset bisimilar (for a notion of bisimulation coping with our in absence of resets).
Petri nets unfolding is however unpractical for studying Petri nets behaviour as it is generally an infinite object. In practice, finite complete prefixes of it are preferred [START_REF] Mcmillan | Using unfoldings to avoid the state explosion problem in the verification of asynchronous circuits[END_REF][START_REF] Esparza | An improvement of McMillan's unfolding algorithm[END_REF].
Definition 20 (finite complete prefix, reachable marking preservation). A finite complete prefix of the unfolding of a safe Petri net N is a finite branching processe (O, h) of N verifying the following property of reachable marking preservation: a marking M is reachable in N if and only if there exists a reachable marking M in O so that M = {h(b) : b ∈ M }.
In this section, we propose an algorithm for construction of finite complete prefixes for safe reset Petri nets. For that, we assume the existence of a black-box algorithm for building finite complete prefixes of safe Petri nets (without reset arcs). Notice that such algorithms indeed do exist [START_REF] Mcmillan | Using unfoldings to avoid the state explosion problem in the verification of asynchronous circuits[END_REF][START_REF] Esparza | An improvement of McMillan's unfolding algorithm[END_REF].
Because of Proposition 3, we know that such finite prefixes should have reset arcs to preserve pomset behaviour. We first remark that directly adding reset arcs to finite complete prefixes of underlying nets would not work. Proposition 5. Let U be the unfolding of the underlying Petri Net N of a safe reset Petri net N R , let F be one of its finite and complete prefixes. Let F be the object obtained by adding reset arcs to F according to (2) in Definition 10. The reachable marking preservation is in general not verified by F (with respect to N R ).
The proof of this proposition relies on the fact that some reachable markings of N R are not represented in F . This suggests that this prefix is not big enough. We however know an object that contains, for sure, every reachable marking of N R along with a way to reach each of them: its structural transformation N R,str (Definition 17). We thus propose to compute finite prefixes of reset Petri nets from their structural transformations: in the below algorithm, F str is used to determine the deepness of the prefix (i.e. the length of the longest chain of causally ordered transitions).
Algorithm 1 (Finite complete prefix construction for reset Petri nets) Let N R be a safe reset Petri net, (step 1) compute the structural transformation N R,str of N R , (step 2) compute a finite complete prefix F str of N R,str , (step 3) compute a finite prefix F of U (the unfolding of the underlying net N ) that simulates F str (a labelled net N 2 simulates a labelled net N 1 if they verify Definition 6 except for condition 2.b.), (step 4) compute F R by adding reset arcs from N R to F according to (2) in Definition 10. The output of the algorithm is F R .
Applying this algorithm to the net N 0 R of Figure 3 (middle right) -using the algorithm from [START_REF] Esparza | An improvement of McMillan's unfolding algorithm[END_REF] at step 2 -leads to the reset Petri net F 0 R of Figure 3 (right).
Notice that the computation of F str -step 1 and 2 -can be done in exponential time and space with respect to the size of N R . The computation of F from F str (step 3) is linear in the size of F. And, the addition of reset arcs (step 4) is at most quadratic in the size of F.
We conclude this section by showing that Algorithm 1 actually builds finite complete prefixes of reset Petri nets. Proposition 6. The object F R obtained by Algorithm 1 from a safe reset Petri net N R is a finite and complete prefix of the unfolding of N R .
Proof. Notice that if N R is safe, then N R,str is safe as well. Thus F str is finite by definition of finite complete prefixes of Petri nets (without reset arcs). F str is finite and has no node in causal relation with itself (i.e. no cycle), hence any net bisimilar with it is also finite, this is in particular the case of F. Adding reset arcs to a finite object does not break its finiteness, so F R is finite.
Moreover, F str is complete by definition of finite complete prefixes of Petri nets (without reset arcs). As F simulates F str it must also be complete (it can only do more). The reset arcs addition removes semantically to F only the unexpected sequences (i.e. the sequence which are possible in F but not in F str ). Therefore, F R is complete.
Our contribution in this paper is three-fold. First, we proposed a notion of pomset bisimulation for reset Petri nets. This notion is, in particular, inspired from a similar notion that has been defined for Petri nets (without reset arcs) in [START_REF] Best | Concurrent bisimulations in Petri nets[END_REF]. Second, we have shown that it is not possible to remove reset arcs from safe reset Petri nets while preserving their behaviours with respect to this pomset bisimulation. And, third, we proposed a notion of finite complete prefixes of unfolding of safe reset Petri nets that allows for reachability analysis while preserving pomset behaviour. As a consequence of the two other contributions, these finite complete prefixes do have reset arcs.
Figure 1 (left) is a graphical representation of a reset Petri net. It has five places (circles) and three transitions (squares). Its set of arcs contains seven elements (arrows) and there is one reset arc (line with a diamond).
Fig. 1. A reset Petri net (left) and one of its processes (right)
Definition 12 (maximal markings). Let P = (O, h) be a process with set of events E = {e 1 , . . . , e n } and initial marking M O 0 of a reset Petri net. The set M max (P) of maximal markings of P contains exactly the markings M so that ∃M 1 , . . . , M n-1 verifying M O 0 [e k 1 ⟩M 1 [e k 2 ⟩ . . . M n-1 [e k n ⟩M for some {k 1 , . . . , k n } = {1, . . . , n}.
Proposition 4. Any safe (labelled) reset Petri net N R and its unfolding U R are pomset bisimilar.
"17349",
"745648",
"1010548",
"4510",
"17486"
] | [
"473973",
"2571",
"157663",
"523723",
"1040135",
"473973",
"481380",
"473973",
"481380"
] |
01766650 | en | [
"math"
] | 2024/03/05 22:32:13 | 2020 | https://hal.science/hal-01766650/file/GH.pdf | Nathaël Gozlan
Ronan Herry
MULTIPLE SETS EXPONENTIAL CONCENTRATION AND HIGHER ORDER EIGENVALUES
Introduction
Let (M, g) be a smooth compact connected Riemannian manifold with its normalized volume measure µ and its geodesic distance d. The Laplace-Beltrami operator ∆ is then a non-positive operator whose spectrum is discrete. Let us denote by λ (k) , k = 0, 1, 2 . . ., the eigenvalues of -∆ written in increasing order. With these notations λ (0) = 0 (achieved for constant functions) and (by connectedness) λ (1) > 0 is the socalled spectral gap of M .
The study of the spectral gap of Riemannian manifolds is, by now, a very classical topic which has found important connections with numerous geometrical and analytical questions and properties. The spectral gap constant λ (1) is for instance related to Poincaré type inequalities and governs the speed of convergence of the heat flow to equilibrium. It is also related to Ricci curvature via the classical Lichnerowicz theorem [START_REF] Lichnerowicz | Géométrie des groupes de transformations[END_REF] and to Cheeger isoperimetric constant via Buser's theorem [START_REF] Buser | A note on the isoperimetric constant[END_REF]. We refer to [START_REF] Bakry | Analysis and geometry of Markov diffusion operators[END_REF][START_REF] Chavel | Eigenvalues in Riemannian geometry[END_REF] and the references therein for a complete picture.
Another important property of the spectral gap constant, first observed by Gromov and Milman [START_REF] Gromov | A topological application of the isoperimetric inequality[END_REF], is that it controls exponential concentration of measure phenomenon for the reference measure µ. The result states as follows. Define for all Borel sets A ⊂ M , its r-enlargement A r as the (open) set of all x ∈ E such that there exists y ∈ A with d(x, y) < r. Then, for any A ⊂ M such that µ(A) ≥ 1/2 it holds
$$\mu(A_r) \geq 1 - b\, e^{-a \sqrt{\lambda^{(1)}}\, r}, \qquad \forall r > 0,$$
where a, b > 0 are some universal constants (according to [START_REF] Ledoux | The concentration of measure phenomenon[END_REF]Theorem 3.1], one can take b = 1 and a = 1/3). Note that this implication is very general and holds on any metric space supporting a Poincaré inequality (see [START_REF] Ledoux | The concentration of measure phenomenon[END_REF]Corollary 3.2]). See also [6,[START_REF] Schmuckenschläger | Martingales, Poincaré type inequalities, and deviation inequalities[END_REF][START_REF] Aida | Moment estimates derived from Poincaré and logarithmic Sobolev inequalities[END_REF][START_REF] Nathael Gozlan | From dimension free concentration to the Poincaré inequality[END_REF] for alternative derivations, generalizations or refinements of this result. This note is devoted to a multiple sets extension of the above result. Roughly speaking, we will see that if A 1 , . . . , A k are sets which are pairwise separated in the sense that d(A i , A j ) := inf{d(x, y) : x ∈ A i , y ∈ A j } > 0 for any i = j and A is their union then the probability of A r goes exponentially fast to 1 at a rate given by √ λ (k) as soon as r is such that the sets A i,r , i = 1, . . . , k remain separated. More precisely, it follows from Theorem 1.1 (whose setting is actually more general) that, if A 1 , . . . , A k are such that µ(A i ) ≥ 1 k+1 and d(A i,r , A j,r ) > 0 for all i = j, then, denoting
A = A 1 ∪ · · · ∪ A k , it holds
(0.1) $$\mu(A_r) \geq 1 - \frac{1}{k+1} \exp\left(-c \min\left(r^2 \lambda^{(k)};\, r\sqrt{\lambda^{(k)}}\right)\right),$$
for some universal constant c. This kind of probability estimate first appeared, in a slightly different but essentially equivalent formulation, in the work of Chung, Grigor'yan and Yau [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF][START_REF] Chung | Eigenvalues and diameters for manifolds and graphs[END_REF] (see also the related paper [START_REF] Friedman | Laplacian eigenvalues and distances between subsets of a manifold[END_REF] by Friedman and Tillich). Nevertheless, the method of proof we use to arrive at (0.1) (based on the Courant-Fischer min-max formula for the λ (k) 's) is quite different from the one of [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF][START_REF] Chung | Eigenvalues and diameters for manifolds and graphs[END_REF] and seems more elementary and general. This is discussed in detail in Section 1.5. The paper is organized as follows. In Section 1, we prove (0.1) in an abstract metric space framework. This framework contains, in particular, the compact Riemannian case equipped with the Laplace operator presented above. Section 1.5 contains a detailed comparison of our result with the one of Chung, Grigor'yan & Yau. In Section 2, we recall various bounds on eigenvalues on several non-negatively curved manifolds. Section 3 gives an extension of (0.1) to discrete Markov chains on graphs. In Section 4, we give a functional formulation of the results of Sections 1 and 3. As a corollary of this functional formulation, we obtain a deviation inequality as well as an estimate for the difference of two Lipschitz extensions of a Lipschitz function given on k subsets. Finally, Section 5 discusses open questions related to this type of concentration of measure phenomenon.
Multiple sets exponential concentration in abstract spaces
1.1. Courant-Fischer formula and generalized eigenvalues in metric spaces. Let us recall the classical Courant-Fischer min-max formula for the k-th eigenvalue (k ∈ N) of -∆, noted λ (k) , on a compact Riemannian manifold (M, g) equipped with its (normalized) volume measure µ:
(1.1) $$\lambda^{(k)} = \inf_{\substack{V \subset C^\infty(M) \\ \dim V = k+1}}\ \sup_{f \in V \setminus \{0\}} \frac{\int |\nabla f|^2\, d\mu}{\int f^2\, d\mu},$$
where ∇f is the Riemannian gradient, defined through the Riemannian metric g (see e.g [START_REF] Chavel | Eigenvalues in Riemannian geometry[END_REF]) and |∇f | 2 = g(∇f, ∇f ). The formula (1.1) above does not make explicitly reference to the differential operator ∆. It can be therefore easily generalized to a more abstract setting, as we shall see below.
In all that follows, (E, d) is a complete, separable metric space and µ a reference Borel probability measure on E. Following [START_REF] Cheeger | Differentiability of Lipschitz functions on metric measure spaces[END_REF], for any function f : E → R and x ∈ E, we denote by |∇f |(x) the local Lipschitz constant of f at x, defined by
$$|\nabla f|(x) = \begin{cases} 0 & \text{if } x \text{ is isolated,} \\ \limsup_{y \to x} \dfrac{|f(x) - f(y)|}{d(x, y)} & \text{otherwise.} \end{cases}$$
Note that when E is a smooth Riemannian manifold, equipped with its geodesic distance d, then, the local Lipschitz constant of a differentiable function f at x coincides with the norm of ∇f (x) in the tangent space T x E. With this notion in hand, a natural generalization of (1.1) is as follows (we follow [23, Definition 3.1]):
(1.2) $$\lambda^{(k)}_{d,\mu} := \inf_{\substack{V \subset H^1(\mu) \\ \dim V = k+1}}\ \sup_{f \in V \setminus \{0\}} \frac{\int |\nabla f|^2\, d\mu}{\int f^2\, d\mu}, \qquad k \geq 0,$$
where H 1 (µ) denotes the space of functions f ∈ L 2 (µ) such that ∫ |∇f | 2 dµ < +∞. In order to avoid heavy notations, we drop the subscript and simply write λ^{(k)} instead of λ^{(k)}_{d,µ} within this section.
For k ≥ 1, we denote by ∆ k the set of k-tuples a = (a 1 , . . . , a k ) of nonnegative numbers such that
$$a_i + \sum_{j=1}^k a_j \geq 1, \qquad \forall i \in \{1, \dots, k\}.$$
Recall the classical notation
d(A, B) = inf{d(x, y) : x ∈ A, y ∈ B} of the distance between two sets A, B ⊂ E.
The following theorem is the main result of the paper and is proved in Section 1.3.
Theorem 1.1. There exists a universal constant c > 0 such that, for any k ≥ 1 and for all sets A 1 , . . . , A k ⊂ E such that min_{i≠j} d(A i , A j ) > 0 and (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k , the set A = A 1 ∪ A 2 ∪ · · · ∪ A k satisfies
$$\mu(A_r) \geq 1 - (1 - \mu(A)) \exp\left(-c \min\left(r^2 \lambda^{(k)};\, r\sqrt{\lambda^{(k)}}\right)\right),$$
for all 0 < r ≤ (1/2) min_{i≠j} d(A i , A j ), where λ (k) ≥ 0 is defined by (1.2). Note that, since (1/(k + 1), . . . , 1/(k + 1)) ∈ ∆ k , Theorem 1.1 immediately implies Inequality (0.1).
Inverting our concentration estimate, we obtain the following statement that provides a bound on the λ (k) 's.
Proposition 1.2. Let (E, d, µ) be a metric measured space and λ (k) be defined as in (1.2). Let A 1 , . . . , A k be measurable sets such that (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k . Then, with r = (1/2) min_{i≠j} d(A i , A j ) and A 0 = E \ (∪ i A i ) r ,
$$\lambda^{(k)} \leq \frac{1}{r^2}\, \psi\!\left(\frac{1}{c}\, \min_i \ln \frac{\mu(A_i)}{\mu(A_0)}\right),$$
where ψ(x) = max(x, x 2 ).
Proof. Let A = ∪ i A i . Inverting the formula in Theorem 1.1, we obtain
$$\lambda^{(k)} \leq \frac{1}{r^2}\, \psi\!\left(\frac{1}{c} \ln \frac{1 - \mu(A)}{1 - \mu(A_r)}\right), \qquad \text{where } \psi(x) = \max(x, x^2).$$
By definition of ∆ k , 1 − µ(A) = 1 − Σ i µ(A i ) ≤ min i µ(A i ).
Therefore, letting A 0 = E \ A r , we obtain the announced inequality by non-decreasing monotonicity of ψ and ln.
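For the reader's convenience, the inversion can be spelled out as follows (our rewriting; φ and ψ are as in the statements above). Setting y = (1/c) ln((1 − µ(A))/(1 − µ(A_r))), Theorem 1.1 gives
$$\varphi\big(r\sqrt{\lambda^{(k)}}\big) = \min\big(r^2\lambda^{(k)},\, r\sqrt{\lambda^{(k)}}\big) \leq y.$$
Since, for x, y ≥ 0, min(x, x 2 ) ≤ y is equivalent to x ≤ max(y, √y), we get r√(λ^{(k)}) ≤ max(y, √y), and squaring yields r 2 λ (k) ≤ max(y 2 , y) = ψ(y), which is the inequality used in the proof.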
The collection of sets ∆ k , k ≥ 1 has the following useful stability property:
Lemma 1.3. Let I 1 , I 2 , . . . , I n be a partition of {1, . . . , k}, k ≥ 1. Let a = (a 1 , . . . , a k ) ∈ R k and define b = (b 1 , . . . , b n ) ∈ R n by setting b i = Σ_{j∈I i } a j , i ∈ {1, . . . , n}. If a ∈ ∆ k then b ∈ ∆ n .
Proof. The proof is obvious and left to the reader.
Thanks to this lemma it is possible to iterate Theorem 1.1 and to obtain a general bound for µ(A r ) for all values of r > 0. This bound will depend on the way the sets A 1,r , . . . , A k,r coalesce as r increases. This is made precise in the following definition.
Definition 1.1 (Coalescence graph of a family of sets). Let A 1 , . . . , A k be subsets of E. The coalescence graph of this family of sets is the family of graphs G r = (V, E r ), r > 0, where V = {1, 2, . . . , k} and the set of edges E r is defined as follows: {i, j} ∈ E r if d(A i,r , A j,r ) = 0.
Corollary 1.4. Let A 1 , . . . , A k be subsets of E such that min_{i≠j} d(A i , A j ) > 0 and (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k . For any r > 0, let N (r) be the number of connected components in the coalescence graph G r associated to A 1 , . . . , A k . The function (0, ∞) → {1, . . . , k} : r → N (r) is non-increasing and right-continuous. Define r i = sup{r > 0 : N (r) ≥ k − i + 1}, i = 1, . . . , k, and r 0 = 0. Then it holds
(1.3) $$\mu(A_r) \geq 1 - (1 - \mu(A)) \exp\Big(-c \sum_{i=1}^k \varphi\big([r \wedge r_i - r_{i-1}]_+\, \sqrt{\lambda^{(k-i+1)}}\big)\Big), \qquad \forall r > 0,$$
where φ(x) = min(x; x 2 ), x ≥ 0, and c is the universal constant appearing in Theorem 1.1.
Observe that, contrary to usual concentration results, the bound given above depends on the geometry of the set A.
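The bookkeeping behind N(r) is elementary; as an illustration (ours, not from the paper), given a predicate telling whether two enlarged sets touch, the number of connected components of G_r can be computed by a union-find pass:

```python
def components_count(k, touching):
    """N(r): number of connected components of the coalescence graph G_r.
    `touching(i, j)` should return True when d(A_{i,r}, A_{j,r}) = 0."""
    parent = list(range(k))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(k):
        for j in range(i + 1, k):
            if touching(i, j):
                parent[find(i)] = find(j)
    return len({find(i) for i in range(k)})
```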
Proof. The proof goes by induction on the number of coalescence times. By Theorem 1.1,
$$\mu(A_r) \geq 1 - (1 - \mu(A)) \exp\big(-c\,\varphi\big(r\sqrt{\lambda^{(k)}}\big)\big), \qquad \text{for all } 0 < r \leq \tfrac{1}{2}\min_{i\neq j} d(A_i, A_j).$$
Let k 1 = N((1/2) min_{i≠j} d(A i , A j )) and let i 1 = k − k 1 . Then, for all i ∈ {1, . . . , i 1 }, r i = (1/2) min_{i≠j} d(A i , A j ). So, for all 0 < r ≤ r_{i 1 }, the preceding bound can be rewritten as follows (note that only the term of index i = 1 gives a non-zero contribution):
(1.4) $$\mu(A_r) \geq 1 - (1-\mu(A))\exp\Big(-c\sum_{i=1}^{i_1}\varphi\big([r\wedge r_i - r_{i-1}]_+\sqrt{\lambda^{(k-i+1)}}\big)\Big) = 1 - (1-\mu(A))\exp\Big(-c\sum_{i=1}^{k}\varphi\big([r\wedge r_i - r_{i-1}]_+\sqrt{\lambda^{(k-i+1)}}\big)\Big),$$
which shows that (1.3) is true for 0 < r ≤ r_{i 1 }. Now let I 1 , . . . , I_{k 1 } be the connected components of G_{r 1 } and define, for all i ∈ {1, . . . , k 1 }, B i = ∪_{j∈I i } A_{j, r 1 }. It follows easily from Lemma 1.3 that (µ(B 1 ), . . . , µ(B_{k 1 })) ∈ ∆_{k 1 }. Since min_{i≠j} d(B i , B j ) > 0, the induction hypothesis implies that
$$\mu(B_s) \geq 1 - (1 - \mu(B)) \exp\Big(-c \sum_{i=1}^{k_1} \varphi\big([s\wedge s_i - s_{i-1}]_+\sqrt{\lambda^{(k_1-i+1)}}\big)\Big), \qquad \forall s > 0,$$
where B = B 1 ∪ · · · ∪ B_{k 1 } = A_{r 1 } and s i = sup{s > 0 : N ′ (s) ≥ k 1 − i + 1}, i ∈ {1, . . . , k 1 } (s 0 = 0), with N ′ (s) the number of connected components in the graph G ′ s associated to B 1 , . . . , B_{k 1 }. It is easily seen that r_{i 1 +i} = r_{i 1 } + s i for all i ∈ {0, 1, . . . , k 1 }. Therefore, for r > r_{i 1 },
$$\mu(A_r) \geq \mu\big(B_{r-r_{i_1}}\big) \geq 1 - \big(1 - \mu(A_{r_{i_1}})\big) \exp\Big(-c\sum_{i=i_1+1}^{k}\varphi\big([r\wedge r_i - r_{i-1}]_+\sqrt{\lambda^{(k-i+1)}}\big)\Big) \geq 1 - (1-\mu(A))\exp\Big(-c\sum_{i=1}^{k}\varphi\big([r\wedge r_i - r_{i-1}]_+\sqrt{\lambda^{(k-i+1)}}\big)\Big),$$
where the last inequality holds by (1.4).
To prove Theorem 1.1, we need some preparatory lemmas. Given a subset A ⊂ E, and x ∈ E, the minimal distance from x to A is denoted by
d(x, A) = inf_{y∈A} d(x, y).
Lemma 1.5. Let A ⊂ E and ε > 0. Then (E \ A_ε)_ε ⊂ E \ A.
Proof. Let x ∈ (E \ A_ε)_ε. Then, there exists y ∈ E \ A_ε (in particular d(y, A) ≥ ε) such that d(x, y) < ε. Since the function z → d(z, A) is 1-Lipschitz, one has d(x, A) ≥ d(y, A) − d(x, y) > 0 and so x ∈ E \ A.
Remark 1. In fact, we proved that (E \ A_ε)_ε ⊂ E \ Ā. The converse is, in general, not true.
Lemma 1.6. Let A 1 , . . . , A k be a family of sets such that (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k and r := (1/2) min_{i≠j} d(A i , A j ) > 0. Let 0 < ε ≤ r and set A = ∪_{1≤i≤k} A i and A 0 = E \ (A_ε). Then,
(1.5) $$\max_{i=0,\dots,k} \frac{\mu(A_{i,\varepsilon})}{\mu(A_i)} \leq \frac{1 - \mu(A)}{1 - \mu(A_\varepsilon)}.$$
Proof. First, this is true for i = 0. Indeed, by definition A 0 = E \ (A ǫ ) and, according to Lemma 1.5, (A 0 ) ǫ ⊂ A c (the equality is not always true), which proves (1.5) in this case. Now, let us show (1.5) for the other values of i. Since ǫ ≤ r, the A j,ǫ 's are disjoint sets. Thence, (1.5) is equivalent to
$$\Big(1 - \sum_{j=1}^k \mu(A_{j,\varepsilon})\Big)\, \mu(A_{i,\varepsilon}) \leq \Big(1 - \sum_{j=1}^k \mu(A_j)\Big)\, \mu(A_i).$$
This inequality is true as soon as
$$\big(1 - \mu(A_{i,\varepsilon}) - m_i\big)\, \mu(A_{i,\varepsilon}) \leq \big(1 - \mu(A_i) - m_i\big)\, \mu(A_i), \qquad \text{denoting } m_i = \sum_{j\neq i} \mu(A_j).$$
The function f i (u) = (1 − u − m i )u, u ∈ [0, 1], is decreasing on the interval [(1 − m i )/2, 1]. We conclude from this that (1.5) is true for all i ∈ {1, . . . , k} as soon as µ(A i ) ≥ (1 − m i )/2 for all i ∈ {1, . . . , k}, which amounts to (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k .
For p > 1, we define the function χ p : [0, ∞[→ [0, 1] by χ p (x) = (1 − x p ) p for x ∈ [0, 1] and χ p (x) = 0 for x > 1. It is easily seen that χ p (0) = 1, χ ′ p (0) = χ p (1) = χ ′ p (1) = 0, that χ p takes values in [0, 1]
and that χ p is continuously differentiable on [0, ∞[. We use the function χ p to construct smooth approximations of indicator functions on E, as explained in the next statement.
Lemma 1.7. Let A ⊂ E and consider the function
f (x) = χ p (d(x, A)/ε), x ∈ E, where ε > 0 and p > 1. For all x ∈ E, it holds
$$|\nabla f|(x) \leq p^2\, \varepsilon^{-1}\, \mathbf{1}_{A_\varepsilon \setminus A}(x).$$
Proof. Thanks to the chain rule for the local Lipschitz constant (see e.g. [2, Proposition 2.1]),
$$\Big|\nabla\, \chi_p\Big(\frac{d(\cdot, A)}{\varepsilon}\Big)\Big|(x) \leq \varepsilon^{-1}\, \Big|\chi_p'\Big(\frac{d(x, A)}{\varepsilon}\Big)\Big|\; |\nabla d(\cdot, A)|(x).$$
The function d(•, A) being Lipschitz, its local Lipschitz constant is ≤ 1 and, thereby,
$$|\nabla f|(x) \leq \varepsilon^{-1}\, \Big|\chi_p'\Big(\frac{d(x, A)}{\varepsilon}\Big)\Big|.$$
In particular, thanks to the aforementioned properties of χ, |∇f | vanishes on A (and even on A) and on {x ∈ E : d(x, A) ≥ ǫ} = E \ A ǫ . On the other hand, a simple calculation shows that |χ ′ p | ≤ p 2 which proves the claim.
Proof of Theorem 1.1. Take Borel sets $A_1, \dots, A_k$ with $\frac{1}{2}\min_{i \neq j} d(A_i, A_j) \ge r > 0$ and $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$, and consider $A = A_1 \cup \dots \cup A_k$. Let us show that, for any $0 < \epsilon \le r$, it holds
$$(1.6) \qquad \left(1 + \lambda^{(k)} \epsilon^2\right)(1 - \mu(A_\epsilon)) \le 1 - \mu(A).$$
Let $A_0 = E \setminus A_\epsilon$ and set $f_i(x) = \chi_p(d(x, A_i)/\epsilon)$, $x \in E$, $i \in \{0, \dots, k\}$, where $p > 1$.
According to Lemma 1.7 and the fact that $f_i = 1$ on $A_i$, we obtain
$$(1.7) \qquad \int |\nabla f_i|^2 \, d\mu \le \frac{p^4}{\epsilon^2}\, \mu(A_{i,\epsilon} \setminus A_i) \qquad \text{and} \qquad \int f_i^2 \, d\mu \ge \mu(A_i).$$
Since the $f_i$'s have disjoint supports they are orthogonal in $L^2(\mu)$ and, in particular, they span a $(k+1)$-dimensional subspace of $H^1(\mu)$. Thus, by definition of $\lambda^{(k)}$,
$$\lambda^{(k)} \le \sup_{a \in \mathbb{R}^{k+1}} \frac{\int \left|\nabla \sum_{i=0}^k a_i f_i\right|^2 d\mu}{\int \left(\sum_{i=0}^k a_i f_i\right)^2 d\mu} \le \sup_{a \in \mathbb{R}^{k+1}} \frac{\int \left(\sum_{i=0}^k |a_i|\, |\nabla f_i|\right)^2 d\mu}{\int \left(\sum_{i=0}^k a_i f_i\right)^2 d\mu},$$
where the second inequality comes from the following easy-to-check sub-linearity property of the local Lipschitz constant: $|\nabla(af + bg)| \le |a|\, |\nabla f| + |b|\, |\nabla g|$. Since the $f_i$'s and the $|\nabla f_i|$'s are two orthogonal families, we conclude, using (1.7), that
$$\frac{\lambda^{(k)} \epsilon^2}{p^4} \le \sup_{a \in \mathbb{R}^{k+1}} \frac{\sum_{i=0}^k a_i^2 \left(\mu(A_{i,\epsilon}) - \mu(A_i)\right)}{\sum_{i=0}^k a_i^2\, \mu(A_i)},$$
which amounts to
$$(1.8) \qquad 1 + \frac{\lambda^{(k)} \epsilon^2}{p^4} \le \max_{i=0,\dots,k} \frac{\mu(A_{i,\epsilon})}{\mu(A_i)}.$$
Applying Lemma 1.6 and sending $p$ to 1 gives (1.6). Now, if $n \in \mathbb{N}$ and $\epsilon > 0$ are such that $n\epsilon \le r$, then iterating (1.6) immediately gives
$$\left(1 + \lambda^{(k)} \epsilon^2\right)^n (1 - \mu(A_{n\epsilon})) \le 1 - \mu(A).$$
Optimizing this bound over $n$ for a fixed $\epsilon$ gives
$$1 - \mu(A_r) \le (1 - \mu(A)) \exp\left(-\sup\left\{\lfloor r/\epsilon \rfloor \log\left(1 + \lambda^{(k)} \epsilon^2\right) : \epsilon \le r\right\}\right).$$
Thus, letting
$$(1.9) \qquad \Psi(x) = \sup\left\{\lfloor t \rfloor \log\left(1 + \frac{x}{t^2}\right) : t \ge 1\right\}, \qquad x \ge 0,$$
it holds $1 - \mu(A_r) \le (1 - \mu(A)) \exp\left(-\Psi\left(\lambda^{(k)} r^2\right)\right)$. Using Lemma 1.8 below, we deduce that $\Psi\left(\lambda^{(k)} r^2\right) \ge c \min\left(r^2 \lambda^{(k)}, r\sqrt{\lambda^{(k)}}\right)$, with $c = \log(5)/4$, which completes the proof.
Lemma 1.8. The function $\Psi$ defined by (1.9) satisfies
$$\Psi(x) \ge \frac{\log 5}{4} \min(x, \sqrt{x}), \qquad \forall x \ge 0.$$
Proof. Taking $t = 1$, one concludes that $\Psi(x) \ge \log(1 + x)$ for all $x \ge 0$. The function $x \mapsto \log(1 + x)$ being concave, the function $x \mapsto \frac{\log(1+x)}{x}$ is non-increasing. Therefore, $\log(1 + x) \ge \frac{\log 5}{4}\, x$ for all $x \in [0, 4]$. Now, let us consider the case where $x \ge 4$. Observe that $\lfloor t \rfloor \ge t/2$ for all $t \ge 1$ and so, for $x \ge 4$,
$$\Psi(x) \ge \frac{1}{2} \sup_{t \ge 1}\, t \log\left(1 + \frac{x}{t^2}\right) \ge \frac{\log 5}{4} \sqrt{x},$$
by choosing $t = \sqrt{x}/2 \ge 1$. Thereby,
$$\Psi(x) \ge \frac{\log 5}{4}\left(x\, \mathbf{1}_{[0,4]}(x) + \sqrt{x}\, \mathbf{1}_{[4,\infty)}(x)\right) \ge \frac{\log 5}{4} \min(x, \sqrt{x}),$$
which completes the proof.
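As an illustration (not part of the proof), the bound of Lemma 1.8 can be checked numerically. For a fixed integer part $\lfloor t \rfloor = m$, the quantity $\lfloor t \rfloor \log(1 + x/t^2)$ is largest at $t = m$, so the supremum in (1.9) equals the maximum over integers $m \ge 1$ of $m \log(1 + x/m^2)$, which the sketch below exploits.

```python
import numpy as np

def Psi(x, m_max=10_000):
    """Psi(x) = sup_{t>=1} floor(t)*log(1 + x/t^2); for fixed floor(t)=m the sup
    over t in [m, m+1) is attained at t=m, so scanning integers m is exact
    (provided m_max exceeds the optimal m, roughly sqrt(x)/2)."""
    ms = np.arange(1, m_max + 1, dtype=float)
    return float(np.max(ms * np.log1p(x / ms**2)))

if __name__ == "__main__":
    c = np.log(5) / 4
    for x in [0.1, 1.0, 4.0, 10.0, 100.0, 1e4]:
        lower = c * min(x, np.sqrt(x))
        assert Psi(x) >= lower - 1e-12
        print(f"x={x:>9.1f}  Psi(x)={Psi(x):9.4f}  (log5/4)*min(x,sqrt(x))={lower:9.4f}")
```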
Remark 2. The conclusion of Lemma 1.8 can be improved. Namely, it can be shown that
$$\Psi(x) = \max\left(\left(1 + \left\lfloor \tfrac{\sqrt{x}}{a} \right\rfloor\right) \log\left(1 + \frac{x}{\left(1 + \lfloor \sqrt{x}/a \rfloor\right)^2}\right) ;\; \left\lfloor \tfrac{\sqrt{x}}{a} \right\rfloor \log\left(1 + \frac{x}{\lfloor \sqrt{x}/a \rfloor^2}\right)\right)$$
(the second term in the maximum being treated as 0 when $\sqrt{x} < a$), where $0 < a < 2$ is the unique point where the function $(0, \infty) \to \mathbb{R} : u \mapsto \log(1 + u^2)/u$ achieves its supremum. Therefore,
$$\Psi(x) \sim \frac{\log(1 + a^2)}{a}\, \sqrt{x}$$
when $x \to \infty$. The reader can easily check that $\frac{\log(1+a^2)}{a} \simeq 0.8$. In particular, it does not seem possible to reach the constant $c = 1$ in Theorem 1.1 using this method of proof.

1.4. Two more multi-set concentration bounds. The condition $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$ can be seen as the multi-set generalization of the condition, standard in concentration of measure, that the size of the enlarged set has to be bigger than $1/2$. Indeed, the reader can easily verify that $(\frac{1}{k+1}, \dots, \frac{1}{k+1}) \in \Delta_k$. However, in practice, this condition can be difficult to check. We provide two more multi-set concentration inequalities that hold in full generality. The method of proof is the same as for Theorem 1.1 and is based on (1.8).

Proposition 1.9. Let $(E, d, \mu)$ be a metric measured space and $\lambda^{(k)}$ be defined as in (1.2). Let $A_1, \dots, A_k$ be $k$ Borel sets, $A = \cup_i A_i$ and $A_0 = E \setminus A_r$. Then, with $a_{(1)} = \min_{1 \le i \le k} \mu(A_i)$, the following two bounds hold:
$$1 - \mu(A_r) \le (1 - \mu(A))\, \frac{1}{\prod_{i=1}^k \mu(A_i)} \exp\left(-c \min\left(r^2 \lambda^{(k)}, r\sqrt{\lambda^{(k)}}\right)\right);$$
$$1 - \mu(A_r) \le (1 - \mu(A)) \left(\frac{1}{\mu(A)}\right)^{\mu(A)/a_{(1)}} \exp\left(-c \min\left(r^2 \lambda^{(k)}, r\sqrt{\lambda^{(k)}}\right)\right).$$
Proof. Fix $N \in \mathbb{N}$ and $\epsilon > 0$ such that $N\epsilon \le r$. For $i = 1, \dots, k$ and $n \le N$, we define
$$\alpha_i(n) = \frac{\mu(A_{i,n\epsilon})}{\mu(A_{i,(n-1)\epsilon})}; \qquad M_n = \max_{1 \le i \le k} \alpha_i(n) \vee \frac{1 - \mu(A_{(n-1)\epsilon})}{1 - \mu(A_{n\epsilon})};$$
$$L_n = \{i \in \{1, \dots, k\} \mid M_n = \alpha_i(n)\}; \qquad N_i = \sharp\{n \in \{1, \dots, N\} \mid i = \inf L_n\}; \qquad N_0 = N - \sum_{i=1}^k N_i.$$
Roughly speaking, the number $N_i$ ($0 \le i \le k$) counts the number of times the set $A_i$ grows when iterating (1.8). Lemma 1.6 asserts that in the case where $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$, then $N_0 = N$. However, we still obtain from (1.8), for $1 \le i \le k$,
$$(1.10) \qquad \frac{1}{\mu(A_i)} \ge \prod_{n=1}^N \alpha_i(n) \ge \left(1 + \lambda^{(k)} \epsilon^2\right)^{N_i}.$$
The first inequality is true because $\mu(A_{i,N\epsilon}) \le 1$ and a telescoping argument. The second inequality is true because, as $n$ ranges from 1 to $N$, by definition of the number $N_i$ and (1.8), there are at least $N_i$ terms appearing in the product that can be bounded below by $1 + \lambda^{(k)} \epsilon^2$; the other terms are bounded below by 1. The case $i = 0$ is handled in a similar fashion and we obtain:
$$(1.11) \qquad 1 - \mu(A_{N\epsilon}) \le (1 - \mu(A)) \left(1 + \lambda^{(k)} \epsilon^2\right)^{-N_0} = (1 - \mu(A)) \left(1 + \lambda^{(k)} \epsilon^2\right)^{-N} \prod_{i=1}^k \left(1 + \lambda^{(k)} \epsilon^2\right)^{N_i}.$$
The announced bounds will be obtained by bounding the product appearing in the right-hand side and using an argument similar to the end of the proof of Theorem 1.1. From (1.10), we have that
$$(1.12) \qquad \prod_{i=1}^k \left(1 + \lambda^{(k)} \epsilon^2\right)^{N_i} \le \frac{1}{\prod_{i=1}^k \mu(A_i)}.$$
Also, from (1.10),
$$\mu(A_{i,N\epsilon}) \ge \left(1 + \lambda^{(k)} \epsilon^2\right)^{N_i} \mu(A_i).$$
Because $N\epsilon \le r$, the sets $A_{1,N\epsilon}, \dots, A_{k,N\epsilon}$ are pairwise disjoint and, thereby,
$$1 \ge \sum_{i=1}^k \mu(A_{i,N\epsilon}) \ge \sum_{i=1}^k \left(1 + \lambda^{(k)} \epsilon^2\right)^{N_i} \mu(A_i).$$
Fix $\theta > 0$, to be chosen later. By convexity of $\exp$,
$$1 + (1 - \mu(A)) \left(1 + \lambda^{(k)} \epsilon^2\right)^{\theta} \ge \exp\left(\left(\sum_{i=1}^k \mu(A_i) N_i + (1 - \mu(A))\theta\right) \log\left(1 + \lambda^{(k)} \epsilon^2\right)\right) \ge \exp\left(\left(a_{(1)} \sum_{i=1}^k N_i + (1 - \mu(A))\theta\right) \log\left(1 + \lambda^{(k)} \epsilon^2\right)\right).$$
Finally, with $p = 1 - \mu(A)$ and $t = \theta \log\left(1 + \lambda^{(k)} \epsilon^2\right)$, we obtain
$$\prod_{i=1}^k \left(1 + \lambda^{(k)} \epsilon^2\right)^{N_i} \le \left(e^{-pt} + p\, e^{(1-p)t}\right)^{1/a_{(1)}}.$$
We easily check that the quantity in the right-hand side is minimal for $t = \log\frac{1}{1-p}$, at which it takes the value $\left((1-p)^{p-1}\right)^{1/a_{(1)}} = \left(\frac{1}{\mu(A)}\right)^{\mu(A)/a_{(1)}}$. Thus,
$$(1.13) \qquad \prod_{i=1}^k \left(1 + \lambda^{(k)} \epsilon^2\right)^{N_i} \le \left(\frac{1}{\mu(A)}\right)^{\mu(A)/a_{(1)}}.$$
Combining (1.12) and (1.13) with (1.11) and the same argument as for (1.9), we obtain the two announced bounds.
From Proposition 1.9, we can derive bounds on the $\lambda^{(k)}$'s. The proof is the same as the one of Proposition 1.2 and is omitted.

Proposition 1.10. Let $(E, d, \mu)$ be a metric measured space and $\lambda^{(k)}$ be defined as in (1.2). Let $A_1, \dots, A_k$ be measurable sets. Then, with $r = \frac{1}{2} \min_{i \neq j} d(A_i, A_j)$ and $A_0 = E \setminus (\cup_i A_i)_r$,
$$\lambda^{(k)} \le \frac{1}{r^2}\, \psi\left(\frac{1}{c} \ln\frac{a_{(1)}}{\mu(A_0)} + \frac{k}{c} \ln\frac{1}{a_{(1)}}\right); \qquad \lambda^{(k)} \le \frac{1}{r^2}\, \psi\left(\frac{1}{c} \ln\frac{a_{(1)}}{\mu(A_0)} + \frac{1}{c}\, \frac{\mu(A)}{a_{(1)}} \ln\frac{1}{\mu(A)}\right),$$
where $\psi(x) = \max(x, x^2)$ and $a_{(1)} = \min_{1 \le i \le k} \mu(A_i)$.
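As a purely illustrative sketch (not from the paper), the two eigenvalue bounds of Proposition 1.10, read as reconstructed above, can be evaluated for given set sizes; the constant $c = \log(5)/4$ is the one of Theorem 1.1, and the toy numbers below are assumptions chosen only for the example.

```python
import math

C = math.log(5) / 4  # constant c from Theorem 1.1

def psi(x):
    return max(x, x * x)

def eigenvalue_upper_bounds(mu_sets, mu_A0, r):
    """Evaluate the two upper bounds on lambda^(k) of Proposition 1.10.
    mu_sets : measures mu(A_1), ..., mu(A_k) of the pairwise separated sets
    mu_A0   : measure of A_0 = complement of the r-enlargement of their union
    r       : half the minimal distance between the A_i's."""
    k = len(mu_sets)
    a1 = min(mu_sets)
    muA = sum(mu_sets)  # the A_i are pairwise disjoint
    b1 = psi(math.log(a1 / mu_A0) / C + k * math.log(1 / a1) / C) / r**2
    b2 = psi(math.log(a1 / mu_A0) / C + (muA / a1) * math.log(1 / muA) / C) / r**2
    return b1, b2

if __name__ == "__main__":
    # toy configuration: three sets of mass 0.2 each, mu(A_0) = 0.05, r = 0.3
    print(eigenvalue_upper_bounds([0.2, 0.2, 0.2], 0.05, r=0.3))
```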
1.5. Comparison with the result of Chung-Grigor'yan-Yau. In [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF], the authors obtained the following result:
Theorem 1.11 (Chung-Grigoryan-Yau [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF]). Let M be a compact connected smooth Riemannian manifold equipped with its geodesic distance d and normalized Riemannian volume µ. For any k ≥ 1 and any family of sets A 0 , . . . , A k , it holds
$$(1.14) \qquad \lambda^{(k)} \le \frac{1}{\min_{i \neq j} d^2(A_i, A_j)}\, \max_{i \neq j} \left(\log\frac{4}{\mu(A_i)\mu(A_j)}\right)^2,$$
where $0 = \lambda^{(0)} \le \lambda^{(1)} \le \dots \le \lambda^{(k)} \le \dots$ denotes the discrete spectrum of $-\Delta$.
Let us translate this result in terms of concentration of measure. Let $A_1, \dots, A_k$ be sets such that $r = \frac{1}{2}\min_{1 \le i < j \le k} d(A_i, A_j) > 0$, and define $A = A_1 \cup \dots \cup A_k$ and $A_0 = M \setminus A_s$ for some $0 < s \le r$. Applying (1.14) to this family, one obtains (1.15), which is equivalent to the following statement:
$$(1.16) \qquad \mu(A_s) \ge 1 - \frac{4}{a_{(1)}} \exp\left(-\sqrt{\lambda^{(k)}}\, s\right), \qquad \forall s \in [\min(s_o, r), r].$$
We note that (1.16) holds for any family of sets, whereas the inequality given in Theorem 1.1 is only true when $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$. Also, due to the fact that the constant $c$ appearing in Theorem 1.1 is less than 1, (1.16) is asymptotically better than ours (see also Remark 2 above). On the other hand, one sees that (1.16) is only valid for $s$ large enough (and its domain of validity can thus be empty when $s_o > r$), whereas our inequality is true on the whole interval $(0, r]$. It also does not seem possible to iterate (1.16) as we did in Corollary 1.4. Finally, observe that the method of proof used in [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF] and [START_REF] Chung | Eigenvalues and diameters for manifolds and graphs[END_REF] is based on heat kernel bounds and is very different from ours. Let us translate Theorem 1.11 in a form closer to our Proposition 1.2. Fix $k$ sets $A_1, \dots, A_k$ such that $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$. Let $2r = \min d(A_i, A_j)$, where the infimum runs over $i, j = 1, \dots, k$ with $i \neq j$. We have to choose a $(k+1)$-th set. In view of Theorem 1.11, the optimal choice is $A_0 = E \setminus (\cup A_i)_r$. Indeed, it is the biggest set (in the sense of inclusion) such that $\min d(A_i, A_j) = r$, where this time the infimum runs over $i, j = 0, \dots, k$ and $i \neq j$. We let $a_{(0)} = \mu(A_0)$ and we remark that if $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$ then $a_{(0)} \le a_{(1)}$. The bound (1.14) can be read: for all $r > 0$,
$$\lambda^{(k)} \le \frac{1}{r^2} \left(\log\frac{4}{a_{(1)} a_{(0)}}\right)^2.$$
Therefore, to compare it to our bound, we need to solve
$$\varphi^{-1}\!\left(\frac{1}{c} \log\frac{a_{(1)}}{a_{(0)}}\right)^2 \le \left(\log\frac{4}{a_{(1)} a_{(0)}}\right)^2.$$
Because the right-hand side is always $\ge 1$, taking the square root and composing with the non-decreasing function $\varphi$ yields
$$\frac{1}{c} \log\frac{a_{(1)}}{a_{(0)}} \le \log\frac{4}{a_{(1)} a_{(0)}},$$
that is, $a_{(1)}^{1+c} \le 4^c\, a_{(0)}^{1-c}$. In other words, on some range our bound is better and on some other range their bound is better. However, if the constant $c = 1$ could be attained in Theorem 1.1, this would show that our bound is always better. Note that comparing the bounds obtained in Proposition 1.10 with the one of [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF] is not so clear since, without the assumption that $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$, it is not necessarily true that $a_{(0)} \le a_{(1)}$, and in that case we would have to compare different sets.
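For a rough numerical feel of this comparison (an illustration only, using the reconstructed reading of the two bounds and the inverse $\varphi^{-1}(v) = \max(v, \sqrt{v})$, so that $[\varphi^{-1}(v)]^2 = \max(v, v^2)$), one can tabulate both $r^2 \lambda^{(k)}$ upper bounds for a few values of $a_{(0)}$:

```python
import numpy as np

C = np.log(5) / 4  # constant c of Theorem 1.1

def our_bound(a1, a0):
    """r^2 * lambda^(k) upper bound derived from Theorem 1.1 (Proposition 1.2 form)."""
    v = np.log(a1 / a0) / C
    return max(v, v**2)          # = (phi^{-1}(v))^2 with phi(u) = min(u, u^2)

def cgy_bound(a1, a0):
    """r^2 * lambda^(k) upper bound of Chung-Grigor'yan-Yau, Theorem 1.11."""
    return np.log(4.0 / (a1 * a0)) ** 2

if __name__ == "__main__":
    a1 = 0.3
    for a0 in [0.25, 0.1, 1e-2, 1e-4, 1e-8]:
        print(f"a0={a0:8.1e}   ours={our_bound(a1, a0):10.2f}   CGY={cgy_bound(a1, a0):10.2f}")
```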
Eigenvalue estimates for non-negatively curved spaces
We recall the values of the λ (k) 's that appear in Theorem 1.1 in the case of two important models of positively curved spaces in geometry. Namely:
(i) The $n$-dimensional sphere of radius $\frac{n-1}{\rho}$, $S_{n,\rho}$, endowed with the natural geodesic distance $d_{n,\rho}$ arising from its canonical Riemannian metric and its normalized volume measure $\mu_{n,\rho}$, which has constant Ricci curvature equal to $\rho$ and dimension $n$.
(ii) The $n$-dimensional Euclidean space $\mathbb{R}^n$ endowed with the $n$-dimensional Gaussian measure of covariance $\rho^{-1}\mathrm{Id}$,
$$\gamma_{n,\rho}(dx) = \frac{\rho^{n/2}\, e^{-\rho|x|^2/2}}{(2\pi)^{n/2}}\, dx.$$
This space has dimension $\infty$ and curvature bounded below by $\rho$ in the sense of [START_REF] Bakry | Diffusions hypercontractives[END_REF]. These models arise as weighted Riemannian manifolds without boundary having a purely discrete spectrum. In that case, it was proved in [START_REF] Milman | Spectral Estimates, Contractions and Hypercontractivity[END_REF], Proposition 3.2] that the $\lambda^{(k)}$'s of (1.2) are exactly the eigenvalues (counted with multiplicity) of a self-adjoint operator that we give explicitly in the following. Using a comparison between eigenvalues from [START_REF] Milman | Spectral Estimates, Contractions and Hypercontractivity[END_REF], we also obtain an estimate for the eigenvalues in the case of log-concave probability measures on the Euclidean space $\mathbb{R}^n$.
Example 1 (Spheres). On $S_{n,\rho}$, the eigenvalues of minus the Laplace-Beltrami operator (see for instance [START_REF] Atkinson | Spherical harmonics and approximations on the unit sphere: an introduction[END_REF], Chapter 3]) are of the form $\rho^{-2}(n-1)^2\, l(l+n-1)$ for $l \in \mathbb{N}$, and the dimension of the corresponding eigenspace $H_{l,n}$ is
$$\dim H_{l,n} = \frac{2l+n-1}{l}\binom{l+n-2}{l-1} \ \text{ if } l > 0, \qquad \dim H_{l,n} = 1 \ \text{ if } l = 0.$$
Consequently,
$$D_{l,n} := \dim \bigoplus_{l'=0}^{l} H_{l',n} = \binom{n+l}{l} + \binom{n+l-1}{l-1},$$
and $\lambda^{(k)} = \rho^{-2}(n-1)^2\, l(l+n-1)$ if and only if $D_{l-1,n} < k \le D_{l,n}$, where $\lambda^{(k)}$ is the $k$-th eigenvalue of $-\Delta_{S_{n,\rho}}$ and coincides with the variational definition given in (1.2).
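The combinatorics of these multiplicities is easy to check numerically; the sketch below (an illustration added here, not part of the original text) computes $\dim H_{l,n}$, $D_{l,n}$ and, for a given index $k$, the degree $l$ selected by the rule $D_{l-1,n} < k \le D_{l,n}$, so that $\lambda^{(k)}$ corresponds to $l(l+n-1)$ up to the curvature normalization stated above.

```python
from math import comb

def dim_H(l, n):
    """Dimension of the space of degree-l spherical harmonics on the n-sphere."""
    if l == 0:
        return 1
    return (2 * l + n - 1) * comb(l + n - 2, l - 1) // l

def D(l, n):
    """D_{l,n} = dimension of the direct sum H_{0,n} + ... + H_{l,n}."""
    return comb(n + l, l) + comb(n + l - 1, l - 1)

def degree_for_k(k, n):
    """Smallest l with D_{l-1,n} < k <= D_{l,n}."""
    l = 0
    while D(l, n) < k:
        l += 1
    return l

if __name__ == "__main__":
    n = 2
    assert all(D(l, n) == sum(dim_H(j, n) for j in range(l + 1)) for l in range(10))
    for k in (2, 4, 5, 9, 10, 16):
        l = degree_for_k(k, n)
        print(f"k={k:2d} -> degree l={l}, l(l+n-1)={l*(l+n-1)}")
```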
Example 2 (Gaussian spaces). On the Euclidean space $\mathbb{R}^n$, equipped with the Gaussian measure $\gamma_{n,\rho}$, the corresponding weighted Laplacian is $\Delta_{\gamma_{n,\rho}} = \Delta_{\mathbb{R}^n} - \rho\, x \cdot \nabla$. The eigenvalues of $-\Delta_{\gamma_{n,\rho}}$ are exactly of the form $\rho\, q$, $q \in \mathbb{N}$, and the dimension of the associated eigenspace $H_{q,n}$ is
$$\dim H_{q,n} = \binom{n+q-1}{q}.$$
Consequently,
$$D_{q,n} := \dim \bigoplus_{q'=0}^{q} H_{q',n} = \binom{n+q}{q},$$
and $\lambda^{(k)} = \rho\, q$ if and only if $D_{q-1,n} < k \le D_{q,n}$, where $\lambda^{(k)}$ is the $k$-th eigenvalue of $-\Delta_{\gamma_{n,\rho}}$ and coincides with the variational definition given in (1.2).
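The same kind of bookkeeping applies in the Gaussian case; the following short sketch (illustrative only) tabulates the Hermite multiplicities and the degree $q$ associated with a given index $k$ via $D_{q-1,n} < k \le D_{q,n}$.

```python
from math import comb

def dim_H(q, n):
    """Number of Hermite eigenfunctions of total degree q in dimension n."""
    return comb(n + q - 1, q)

def D(q, n):
    """D_{q,n} = number of eigenvalues (with multiplicity) of degree at most q."""
    return comb(n + q, q)

def degree_for_k(k, n):
    """Degree q such that D_{q-1,n} < k <= D_{q,n}."""
    q = 0
    while D(q, n) < k:
        q += 1
    return q

if __name__ == "__main__":
    n = 3
    assert all(D(q, n) == sum(dim_H(j, n) for j in range(q + 1)) for q in range(12))
    print([degree_for_k(k, n) for k in range(1, 25)])
```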
Example 3 (Log-concave Euclidean spaces). We study the case where $E = \mathbb{R}^n$, $d$ is the Euclidean distance and $\mu$ is a strictly log-concave probability measure. By this we mean that $\mu(dx) = e^{-V(x)}\, dx$, where $V : \mathbb{R}^n \to \mathbb{R}$ is $C^2$ and satisfies $\nabla^2 V \ge K$ for some $K > 0$. It is a consequence of [4, Proposition 4] that such a condition on $V$ implies that the semigroup generated by the solution of the stochastic differential equation $dX_t = \sqrt{2}\, dB_t - \nabla V(X_t)\, dt$, where $B$ is a Brownian motion on $\mathbb{R}^n$, satisfies the curvature-dimension condition $CD(\infty, K)$ of Bakry-Emery and, therefore, the log-Sobolev inequality holds: for all $f \in C_c^\infty(\mathbb{R}^n)$,
$$\mathrm{Ent}_\mu\left(f^2\right) \le \frac{2}{K} \int |\nabla f(x)|^2\, \mu(dx).$$
Such an inequality implies the super-Poincaré inequality of [27, Theorem 2.1], which in turn implies that the self-adjoint operator $L = -\Delta + \nabla V \cdot \nabla$ has a purely discrete spectrum. In that case, the $\lambda^{(k)}$ of (1.2) correspond to these eigenvalues and [START_REF] Milman | Spectral Estimates, Contractions and Hypercontractivity[END_REF] showed that $\lambda^{(k)} \ge \lambda^{(k)}_{\gamma_{n,\rho}}$, where $\lambda^{(k)}_{\gamma_{n,\rho}}$ is the $k$-th eigenvalue of $-\Delta_{\gamma_{n,\rho}}$ of the previous example.
Extension to Markov chains
As in the classical case (see [START_REF] Ledoux | The concentration of measure phenomenon[END_REF], Theorem 3.3]), our continuous result admits a generalization to finite graphs or, more broadly, to the setting of Markov chains on a finite state space. We consider a finite set $E$ and let $X = (X_n)_{n \in \mathbb{N}}$ be an irreducible time-homogeneous Markov chain with state space $E$. We write $p(x, y) = \mathbb{P}(X_1 = y \mid X_0 = x)$ and we regard $p$ as a matrix. We assume that $p$ admits a reversible probability measure $\mu$ on $E$: $p(x, y)\mu(x) = p(y, x)\mu(y)$ for all $x, y \in E$ (which implies in particular that $\mu$ is invariant). The Markov kernel $p$ induces a graph structure on $E$ by the following procedure. Take the elements of $E$ as the vertices of the graph and, for $x, y \in E$, connect them with an edge if $p(x, y) > 0$. As the chain is irreducible, this graph is connected. We equip $E$ with the induced graph distance $d$. We write $L = p - I$, where $I$ stands for the identity matrix. The operator $-L$ is a symmetric positive operator on $L^2(\mu)$. We let $\lambda^{(k)}$ be the eigenvalues of this operator. Then, our Theorem 1.1 extends as follows:
Theorem 3.1. For any $k \ge 1$ and all sets $A_1, \dots, A_k \subset E$ such that $\min_{i \neq j} d(A_i, A_j) \ge 1$ and $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$, the set $B = A_1 \cup A_2 \cup \dots \cup A_k$ satisfies
$$\mu(B_n) \ge 1 - (1 - \mu(B)) \left(1 + \lambda^{(k)}\right)^{-n}, \qquad \text{for all } 1 \le n \le \frac{1}{2} \min_{i \neq j} d(A_i, A_j),$$
where $\lambda^{(k)}$ is the $k$-th eigenvalue of the operator $-L$ acting on $L^2(\mu)$.
Proof. We let $\Pi(x, y) = p(x, y)\mu(x)$ and
$$\mathcal{E}(f, g) = \frac{1}{2} \sum_{x, y \in E} (f(y) - f(x))(g(y) - g(x))\, \Pi(x, y) = \langle f, -Lg \rangle_\mu.$$
For any set $A$, we define the discrete boundary of $A$ as
$$\partial A = (A_1 \setminus A) \cup \left((A^c)_1 \setminus A^c\right).$$
Let $(X_n)$ be the Markov chain with transition kernel $p$ and initial distribution $\mu$. By reversibility of $\mu$, $(X_0, X_1)$ is an exchangeable pair of law $\Pi$ whose marginals are given by $\mu$. Then, for a set $U$, we have
$$\mathcal{E}(\mathbf{1}_U) = \mathbb{E}\left[\mathbf{1}_U(X_0)\left(\mathbf{1}_U(X_0) - \mathbf{1}_U(X_1)\right)\right] = \mathbb{P}(X_0 \in U, X_1 \notin U) \le \mathbb{P}(X_1 \in \partial U) = \mu(\partial U).$$
Observe that if $d(U, V) > 1$, then $U$ and $V$ are disjoint, $(U \times V) \cap \mathrm{supp}\, \Pi = \emptyset$, and so $\mathcal{E}(\mathbf{1}_U, \mathbf{1}_V) = 0$. By the Courant-Fischer min-max theorem,
$$\lambda^{(k)} = \min_{\dim V = k+1}\, \max_{f \in V} \frac{\mathcal{E}(f, f)}{\mu(f^2)}.$$
Choose sets $A_1, \dots, A_k$ with $d(A_i, A_j) \ge 2n$ ($i \neq j$) and $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$, and set $f_i = \mathbf{1}_{A_i}$. The $f_i$'s have disjoint supports and so they are orthogonal in $L^2(\mu)$. By the previous variational representation of $\lambda^{(k)}$, we have
$$\lambda^{(k)} \le \sup_{a} \frac{\mathcal{E}\left(\sum_{i=0}^k a_i f_i\right)}{\int \left(\sum_{i=0}^k a_i f_i\right)^2 d\mu} = \sup_{a} \frac{\sum_{i,i'} a_i a_{i'}\, \mathcal{E}(f_i, f_{i'})}{\sum_{i,i'} a_i a_{i'} \int f_i f_{i'}\, d\mu} = \sup_{a} \frac{\sum_{i=0}^k a_i^2\, \mathcal{E}(f_i)}{\sum_{i=0}^k a_i^2 \int f_i^2\, d\mu}.$$
In other words,
$$\lambda^{(k)} \le \max_{i=0,\dots,k} \frac{\mu((A_i)_1) + \mu((A_i^c)_1) - 1}{\mu(A_i)} \le \max_{i=0,\dots,k} \frac{\mu((A_i)_1) - \mu(A_i)}{\mu(A_i)},$$
where the last inequality comes from the fact that, by Lemma 1.5, $\mu(E \setminus (E \setminus A)_1) \ge \mu(A)$. Consider the set $B = \cup_{i=1}^k A_i$ and choose $A_0 = E \setminus B_1$. In that case, by Lemma 1.6 with $\epsilon = 1$, we have
$$\max_{i=0,\dots,k} \frac{\mu((A_i)_1)}{\mu(A_i)} \le \frac{1 - \mu(B)}{1 - \mu(B_1)}.$$
Thus, we proved that
$$\left(1 + \lambda^{(k)}\right)(1 - \mu(B_1)) \le 1 - \mu(B).$$
We derive the announced result by an immediate recursion.
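A toy numerical illustration of Theorem 3.1 (added here as a sketch, not part of the original text) can be run on the lazy random walk on a cycle: the spectrum of $-L = I - P$ is computed directly, and the enlargement of a union of two arcs is compared with the exponential bound. Note that the bound is only guaranteed under the $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$ condition, which the toy sets below are not required to satisfy; the script merely illustrates the quantities involved.

```python
import numpy as np

def lazy_cycle_kernel(m):
    """Transition matrix of the lazy simple random walk on the cycle Z/mZ."""
    P = np.zeros((m, m))
    for i in range(m):
        P[i, i] = 0.5
        P[i, (i - 1) % m] += 0.25
        P[i, (i + 1) % m] += 0.25
    return P

def cycle_dist(i, j, m):
    d = abs(i - j) % m
    return min(d, m - d)

if __name__ == "__main__":
    m = 60
    P = lazy_cycle_kernel(m)                                # symmetric, so mu is uniform
    lam = np.sort(np.linalg.eigvalsh(np.eye(m) - P))        # spectrum of -L = I - P on L^2(mu)
    A1, A2 = set(range(0, 10)), set(range(30, 40))          # two well separated arcs
    B = A1 | A2
    lam_k = lam[2]                                           # k = 2 sets -> lambda^(2)
    for n in range(1, 8):
        Bn = {x for x in range(m) if min(cycle_dist(x, b, m) for b in B) <= n}
        lhs = len(Bn) / m
        rhs = 1 - (1 - len(B) / m) * (1 + lam_k) ** (-n)
        print(n, round(lhs, 3), ">=", round(rhs, 3))
```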
Functional forms of the multiple sets concentration property
We investigate the functional form of the multi-set concentration of measure phenomenon results obtained in Sections 1 and 3.

Proposition 4.1. Let $(E, d)$ be a metric space equipped with a Borel probability measure $\mu$ and let $\alpha_k : [0, \infty) \to [0, \infty)$. The following properties are equivalent:
(1) For all Borel sets $A_1, \dots, A_k \subset E$ such that $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$, the set $A = A_1 \cup \dots \cup A_k$ satisfies
$$(4.1) \qquad \mu(A_r) \ge 1 - (1 - \mu(A))\, \alpha_k(r), \qquad \forall\, 0 < r \le \tfrac{1}{2} \min_{i \neq j} d(A_i, A_j).$$
(2) For all 1-Lipschitz functions $f_1, \dots, f_k : E \to \mathbb{R}$ whose sublevel sets $A_i = \{f_i \le 0\}$ are such that $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$, the function $f^* = \min(f_1, \dots, f_k)$ satisfies
$$\mu(f^* < r) \ge 1 - \left(1 - \mu(f^* \le 0)\right) \alpha_k(r), \qquad \forall\, 0 < r \le \tfrac{1}{2} \min_{i \neq j} d(A_i, A_j).$$
Together with Theorem 1.1 or Theorem 3.1, one thus sees that the presence of multiple wells can improve the concentration properties of a Lipschitz function.
Proof. It is clear that (2) implies (1) when applied to $f_i(x) = d(x, A_i)$, in which case $A_i = \{f_i \le 0\}$ and $f^*(x) = d(x, A)$. The converse is also very classical. First, observe that $\{f^* < r\} = \cup_{i=1}^k \{f_i < r\}$. Then, since $f_i$ is 1-Lipschitz, it holds $A_{i,r} \subset \{f_i < r\}$ with $A_i = \{f_i \le 0\}$, and so, letting $A = A_1 \cup \dots \cup A_k$, it holds $A_r \subset \{f^* < r\}$. Therefore, applying (1) to this set $A$ gives (2).

When (4.1) holds, we will say that the probability metric space $(E, d, \mu)$ satisfies the multi-set concentration of measure property of order $k$ with the concentration profile $\alpha_k$.
In the usual setting (k = 1), the concentration of measure phenomenon implies deviation inequalities for Lipschitz functions around their median. The next result generalizes this well known fact to k > 1.
Proposition 4.2. Let $(E, d, \mu)$ be a probability metric space satisfying the multi-set concentration of measure property of order $k$ with the concentration profile $\alpha_k$ and let $f : E \to \mathbb{R}$ be a 1-Lipschitz function. If $I_1, \dots, I_k \subset \mathbb{R}$ are $k$ disjoint Borel sets such that $(\mu(f \in I_1), \dots, \mu(f \in I_k)) \in \Delta_k$, then it holds
$$\mu\left(f \in \cup_{i=1}^k I_{i,r}\right) \ge 1 - \left(1 - \mu\left(f \in \cup_{i=1}^k I_i\right)\right) \alpha_k(r), \qquad \forall\, 0 < r \le \tfrac{1}{2} \min_{i \neq j} d(I_i, I_j).$$
Proof. Let ν be the image of µ under the map f . Since f is 1-Lipschitz, the metric space (R, | • |, ν) satisfies the multi-set concentration of measure property of order k with the same concentration profile α k as µ. Details are left to the reader.
Let us conclude this section by detailing an application of potential interest in approximation theory. Suppose that f : E → R is some 1-Lipschitz function and A 1 , . . . , A k are (pairwise disjoint) subsets of E such that (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k . Let us assume that the restrictions f |A i , i ∈ {1, . . . , k} are known and that one wishes to estimate or reconstruct f outside A = ∪ k i=1 A i . To that aim, one can consider an explicit 1-Lipschitz extension of f |A , that is to say a 1-Lipschitz function g : E → R (constructed based on our knowledge of f on A exclusively) such that f = g on A. There are several canonical ways to perform the extension of a Lipschitz function defined on a sub domain (known as Kirszbraun-McShane-Whitney extensions [START_REF] Kirszbraun | Uber die zusammenziehende und lipschitzsche transformationen[END_REF][START_REF] Mcshane | Extension of range of functions[END_REF][START_REF] Whitney | Analytic extensions of differentiable functions defined in closed sets[END_REF]). One can consider for instance the functions
g + (x) = inf y∈A {f (y) + d(x, y)} or g -(x) = sup y∈A {f (y) -d(x, y)}, x ∈ E.
It is a very classical fact that functions g -and g + are 1-Lipschitz extensions of f |A and moreover that any extension g of f |A satisfies g -≤ g ≤ g + (see e.g [START_REF] Heinonen | Lectures on Lipschitz analysis[END_REF]).
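For concreteness, here is a minimal Python sketch (purely illustrative, on a toy one-dimensional example of our choosing) of the two extremal extensions $g_+$ and $g_-$ recalled above; any 1-Lipschitz extension of $f|_A$ is squeezed between them.

```python
def lipschitz_extensions(f_on_A, d):
    """Kirszbraun-McShane-Whitney extensions of a 1-Lipschitz function known on A.
    f_on_A : dict {point of A: value of f at that point}
    d      : metric d(x, y)
    Returns g_plus, g_minus with g_minus <= g <= g_plus for every
    1-Lipschitz extension g of f restricted to A."""
    def g_plus(x):
        return min(fy + d(x, y) for y, fy in f_on_A.items())
    def g_minus(x):
        return max(fy - d(x, y) for y, fy in f_on_A.items())
    return g_plus, g_minus

if __name__ == "__main__":
    # toy example on the real line with A = {0, 3}, f(0) = 0, f(3) = 1
    d = lambda x, y: abs(x - y)
    g_plus, g_minus = lipschitz_extensions({0.0: 0.0, 3.0: 1.0}, d)
    for x in [0.0, 1.0, 1.5, 2.0, 3.0, 4.0]:
        print(x, g_minus(x), "<=", g_plus(x))
```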
The following simple result shows that, for any 1-Lipschitz extension $g$ of $f|_A$, the probability of error $\mu(|f - g| > r)$ is controlled by the multi-set concentration profile $\alpha_k$. In particular, in the framework of our Theorem 1.1, this probability of error is expressed in terms of $\lambda^{(k)}$.

Proposition 4.3. Let $(E, d, \mu)$ be a probability metric space satisfying the multi-set concentration of measure property of order $k$ with the concentration profile $\alpha_k$ and let $f : E \to \mathbb{R}$ be a 1-Lipschitz function. Let $A_1, \dots, A_k$ be subsets of $E$ such that $(\mu(A_1), \dots, \mu(A_k)) \in \Delta_k$; then, for any 1-Lipschitz extension $g$ of $f|_A$, it holds
$$\mu(|f - g| \ge r) \le (1 - \mu(A))\, \alpha_k(r/2), \qquad \forall\, 0 < r \le \min_{i \neq j} d(A_i, A_j).$$
Proof. The function $h : E \to \mathbb{R}$ defined by $h(x) = |f - g|(x)$, $x \in E$, is 2-Lipschitz and vanishes on $A$. Therefore, for any $x \in E$ and $y \in A$, it holds $h(x) \le h(y) + 2d(x, y) = 2d(x, y)$. Optimizing over $y \in A$ gives that $h(x) \le 2d(x, A)$. Therefore $\{h \ge r\} \subset \{x : d(x, A) \ge r/2\} = (A_{r/2})^c$ and so, if $0 < r \le \min_{i \neq j} d(A_i, A_j)$, it holds $\mu(|f - g| \ge r) \le (1 - \mu(A))\, \alpha_k(r/2)$.

Remark 3. Let us remark that Propositions 4.1 to 4.3 can be immediately extended under the following more general (but notationally heavier) multi-set concentration of measure assumption: there exist functions $\alpha_k : [0, \infty) \to [0, \infty)$ and $\beta_k : [0, \infty)^k \to [0, \infty]$ such that for all Borel sets $A_1, \dots, A_k \subset E$, the set $A = A_1 \cup \dots \cup A_k$ satisfies
$$\mu(A_r) \ge 1 - \beta_k(\mu(A_1), \dots, \mu(A_k))\, \alpha_k(r), \qquad \forall\, 0 < r \le \tfrac{1}{2} \min_{i \neq j} d(A_i, A_j).$$
This framework contains the preceding one, by choosing $\beta_k(a) = 1 - \sum_{i=1}^k a_i$ if $a = (a_1, \dots, a_k) \in \Delta_k$ and $+\infty$ otherwise. It also contains the concentration bounds obtained in Proposition 1.9, corresponding respectively to
$$\beta_k(a) = \frac{1 - \sum_{i=1}^k a_i}{\prod_{i=1}^k a_i} \qquad \text{and} \qquad \beta_k(a) = \left(1 - \sum_{i=1}^k a_i\right) \left(\frac{1}{\sum_{i=1}^k a_i}\right)^{\sum_{i=1}^k a_i / \min(a_1, \dots, a_k)}, \qquad a = (a_1, \dots, a_k).$$
Open questions
We list open questions related to the multi-set concentration of measure phenomenon.

5.1. Gaussian multi-set concentration. Using the terminology introduced in Section 4, Theorem 1.1 and the material exposed in Section 2 tell us that, if $\mu$ has a density of the form $e^{-V}$ with respect to the Lebesgue measure on $\mathbb{R}^n$, with a smooth function $V$ such that $\mathrm{Hess}\, V \ge \rho > 0$, then the probability metric space $(\mathbb{R}^n, |\cdot|, \mu)$ satisfies the multi-set concentration of measure property of order $k$ with the concentration profile $\alpha_k(r) = \exp\left(-c \min\left(r^2 \lambda^{(k)}_{\gamma_{n,\rho}},\, r\sqrt{\lambda^{(k)}_{\gamma_{n,\rho}}}\right)\right)$, where $\lambda^{(k)}_{\gamma_{n,\rho}}$ denotes the $k$-th eigenvalue associated with the $n$-dimensional centered Gaussian measure with covariance matrix $\rho^{-1}\mathrm{Id}$. Since the measure $\mu$ satisfies the log-Sobolev inequality, it is well known that it satisfies a (classical) Gaussian concentration of measure inequality. Therefore, it is natural to conjecture that $\mu$ satisfies a multi-set concentration of measure property of order $k \ge 1$ with a profile of the form $\beta_k(r) = \exp\left(-C_{k,\rho,n}\, r^2\right)$, $r \ge 0$, for some constant $C_{k,\rho,n}$ depending solely on its arguments. In addition, it would be interesting to see how usual functional inequalities (log-Sobolev, transport-entropy, ...) can be modified to catch such a concentration of measure phenomenon.
5.2. Equivalence between multi-set concentration and lower bounds on eigenvalues in non-negative curvature.
Let us quickly recall the main finding of E. Milman [START_REF] Milman | On the role of convexity in isoperimetry, spectral gap and concentration[END_REF][START_REF] Milman | Isoperimetric and concentration inequalities: equivalence under curvature lower bound[END_REF], that is, under non-negative curvature assumptions, a concentration of measure estimate implies a bound on the spectral gap. Let µ be a probability measure with a density of the form e -V on a smooth connected Riemannian manifold M with V a smooth function such that (5.1) Ric + Hess V ≥ 0.
Assume that $\mu$ satisfies a concentration inequality of the form: for all $A \subset M$ such that $\mu(A) \ge 1/2$, $\mu(A_r) \ge 1 - \alpha(r)$, $r \ge 0$, where $\alpha$ is a function such that $\alpha(r_o) < 1/2$ for at least one value $r_o > 0$. Then, letting $\lambda^{(1)}$ be the first non-zero eigenvalue of the operator $-\Delta + \nabla V \cdot \nabla$, it holds
$$\lambda^{(1)} \ge \frac{1}{4}\left(\frac{1 - 2\alpha(r_o)}{r_o}\right)^2.$$
It would be very interesting to extend Milman's result to a multi-set concentration setting. More precisely, if $\mu$ satisfies the curvature condition (5.1) and the multi-set concentration of measure property of order $k$ with a profile of the form $\alpha_k(r) = \exp(-\min(ar^2, \sqrt{a}\, r))$, $r \ge 0$, can we find a universal function $\varphi_k$ such that $\lambda^{(k)} \ge \varphi_k(a)$? This question already received some attention in recent works by Funano and Shioya [START_REF] Funano | Estimates of eigenvalues of the Laplacian by a reduced number of subsets[END_REF][START_REF] Funano | Concentration, Ricci curvature, and eigenvalues of Laplacian[END_REF]. In particular, let us mention the following improvement of the Chung-Grigor'yan-Yau inequality obtained in [START_REF] Funano | Estimates of eigenvalues of the Laplacian by a reduced number of subsets[END_REF]. There exists a universal constant $c > 1$ such that if $\mu$ is a probability measure satisfying the non-negative curvature assumption (5.1), then for any family of sets $A_0, A_1, \dots, A_l$ with $1 \le l \le k$,
$$(5.2) \qquad \lambda^{(k)} \le c^{\,k-l+1}\, \frac{1}{\min_{i \neq j} d^2(A_i, A_j)}\, \max_{i \neq j} \left(\log\frac{4}{\mu(A_i)\mu(A_j)}\right)^2.$$
Note that the difference with (1.14) is that λ (k) is estimated by a reduced number of sets. Using (5.2) (with l = 1) together with Milman's result recalled above, Funano showed that there exists some constant C k depending only on k such that under the curvature condition (5.1), it holds λ (k) ≤ C k λ (1) (recovering the main result of [START_REF] Funano | Concentration, Ricci curvature, and eigenvalues of Laplacian[END_REF]). The constant C k is explicit (contrary to the constant of [START_REF] Funano | Concentration, Ricci curvature, and eigenvalues of Laplacian[END_REF]) and grows exponentially when k → ∞. This result has been then improved by Liu [START_REF] Liu | An optimal dimension-free upper bound for eigenvalue ratios[END_REF], where a constant C k = O(k 2 ) has been obtained. As observed by Funano [START_REF] Funano | Estimates of eigenvalues of the Laplacian by a reduced number of subsets[END_REF], a positive answer to the open question stated above would yield that under (5.1) the ratios λ (k+1) /λ (k) are bounded from above by a universal constant.
"1035445",
"18906"
] | [
"1004645",
"104741",
"29"
] |
01766657 | en | [
"phys",
"spi"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01766657/file/Tench_Romano_Delavaux_SPIE_Photonics_West_2018_13%20page%20manuscript_11212017.pdf | Robert E Tench
email: [email protected]
Clément Romano
Jean-Marc Delavaux
Optimized Design and Performance of a Shared Pump Single Clad 2 µm TDFA
We report the design, experimental performance, and simulation of a single stage, co- and counter-pumped Tm-doped fiber amplifier (TDFA) in the 2 μm signal wavelength band with an optimized 1567 nm shared pump source. We investigate the dependence of output power, gain, and efficiency on pump coupling ratio and signal wavelength. Small signal gains of >50 dB, an output power of 2 W, and small signal noise figures of <3.5 dB are demonstrated. Simulations of TDFA performance agree well with the experimental data. We also discuss performance tradeoffs with respect to amplifier topology for this simple and efficient TDFA.
Introduction
Simplicity and optimization of design are critical for the practical realization of wide bandwidth, high power single clad Thulium-doped fiber amplifiers (TDFAs) for 2 µm telecommunications applications. Recent TDFAs [START_REF] Romano | Simulation and design of a multistage 10 W thulium-doped double clad silica fiber amplifier at 2050 nm[END_REF][START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF] have reported 2 µm band amplifiers with output power > 2W, gain > 55 dB, noise figure < 4 dB, and optical bandwidth greater than 120 nm. While these designs achieve high optical performance, they employ two or more optical stages and multiple pump sources. Therefore, it is desirable to investigate designs using one amplifier stage and one pump source. In this paper we report on the design, simulation, and experimental performance of a one-stage single clad TDFA using an L-band (1567 nm) shared fiber laser pump source, as a function of pump coupling ratio, active fiber length, pump power, and signal wavelength. Our one-stage TDFA data compare well with recently reported performance of multi-stage, multi-pump amplifiers [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. In addition, the simplicity of the single clad pump shared design and its potential for cost reduction offer a broad selection of performance for different applications.
The paper is organized as follows: Section 2 presents our experimental setup, a single stage TDFA with variable coupling in the pump ratio between co-pumping and counter-pumping the active fiber. Section 3 covers the dependence of simulated amplifier performance on active fiber length, pump coupling ratio, slope efficiency, and signal wavelength. Section 4 compares measurement and simulation of the TDFA performance. Section 5 contrasts our simple TDFA design with performance of a two-stage, three-pump amplifier as reported previously in [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. Finally, Section 6 discusses design parameter tradeoffs for different TDFA applications.
Experimental Setup for Shared Pump Amplifier
The optical design of our one-stage single pump TDFA is shown in Figure 1. A single frequency 2 µm DML source (Eblana Photonics) is coupled through attenuator A and into the active fiber F1. Pump light from a multiwatt fiber laser P1 at 1567 nm is split by coupler C1 with a variable coupling ratio k (%). The two pump signals co-pump and counter-pump fiber F1, with k = 100% and k = 0% corresponding to all counter-pumping and all co-pumping, respectively. The value of k was changed in the simulations and experiments to optimize amplifier performance. Isolators I1 and I2 ensure unidirectional operation and suppress spurious lasing. Input and output signal powers, and co-pump and counter-pump powers, are respectively referenced to the input and output of Tm-doped fiber F1 (7 meters of OFS TmDF200).
Simulated Amplifier Performance
We begin the design of a high performance optical amplifier by studying the critically important variations of fiber signal gain (G) and output power (Pout) as a function of active fiber length (L) and input signal power (Ps).
To do this we turn to the simulated amplifier performance [START_REF] Romano | Characterization of the 3F4 -3H6 Transition in Thulium-doped Silica Fibres and Simulation of a 2µm Single Clad Amplifier[END_REF][START_REF] Jackson | Theoretical modeling of Tm-doped silica fiber lasers[END_REF] shown in Figure 2, where G is plotted vs. L for four input signal power levels. Here the total 1567 nm pump power (co-+ counter-) (Pp) is 2.5 W, the signal wavelength λs is 1952 nm, and the coupling ratio k = 50%. We note that a similar set of gain curves can be generated for different wavelength bands of the TDFA, and this behavior will be investigated later in the section.
In Figure 2 we have measured the dependence of G vs. L for a 32 dB input dynamic range in signal power. The different Ps values illustrate the amplifier operating from a linear/unsaturated regime (Ps= -30 dBm) to a highly saturated regime (Ps = + 2 dBm). The equally important dependence of noise figure on these parameters will be dealt with later.
The first observation drawn from Figure 2 is that for low input signals (e.g. for Ps = -30 dBm) G is maximized for long fiber lengths of 12 meters or greater, while for saturating input powers (e.g. Ps = +2 dBm) G reaches a maximum value for lengths of about 2 meters. It is also clear that for small signal or unsaturated gain, most of the gain (i.e. more than 80%) is achieved in the first 5 meters of the fiber, while for saturated gain most of the gain occurs within the first 1.5 meters. The second observation is that saturated gain varies only slightly with active fiber length for values greater than 3 meters, indicating that a wide range of fiber lengths can be chosen for design of a power booster amplifier. However, later we will see that the choice of the fiber length affects the useful amplifier bandwidth. The next design study for the shared pump amplifier is to examine the dependence of the saturated output power on active fiber length L and coupling ratio k. To study this issue, we plot the output signal power with pump coupling ratio k for four active fiber lengths (i.e. L = 3, 5, 7 and 9 m) as shown in Figure 3. In this simulation Ps is set to +2 dBm at 1952 nm to saturate the amplifier, with the total pump set at Pp = 2.5 W at 1567 nm.
We first note that for a given fiber length, Pout increases linearly when moving from co-pumping (k = 0%) to nearly all counter-pumping (k = 95%) and then drops down for full counter-pumping (k = 100%). For all fiber lengths, the maximum output power is achieved for k = 95%. This behavior is not surprising because counterpumping maximizes the pump power available at the output of the fiber where the amplified signal power is the largest. We next observe that the maximum output power is achieved for a ratio of 95% counter-pumping to 5% co-pumping. This indicates that a small amount of co-pumping provides signal gain that offsets fiber absorption loss. Therefore full counter-pumping is not the most efficient way to pump this fiber.
For k = 50% in Figure 3, the relatively small variation in output signal power Pout with fiber length is consistent with the small variation in gain seen in Figure 2 as a function of fiber length for Ps = +2 dBm. We further note that as the fiber length is decreased from 9 m to 3 m, the output power Pout consistently increases. For very short fibers the difference between co-and counter-pumping will become negligible. However, as we will illustrate later, this comes at the expense of the amplifier operating bandwidth shifting from higher to shorter wavelengths. The amplifier performance illustrated in Figure 3 shows that we may consider three cases for design: k = 0%, k= 50%, and k = 95%. For k = 0%, Pout variation with fiber length is 18%. For k = 50%, it is as much as 10.9%, and for k = 95% it is about 2%. This indicates that a mostly counter-pumped amplifier will be less sensitive to changes in active fiber length than a co-pumped amplifier. Now let's consider the important design consideration of the dependence of saturated output power as a function of active fiber length L and pump power Pp. This behavior is illustrated in Figure 4 for a signal wavelength of 1952 nm, a coupling ratio k = 50%, and 1567 nm pump powers of 0.83 W, 1.7 W and 2.55 W, respectively.
In this plot we see that the maximum saturated output power is obtained for L = 2 m, relatively independent of fiber length and the pump power. It is apparent that above L = 2 m, Pout decreases slightly with increases in fiber length. This behavior is consistent with the simulation in Figure 2, and it illustrates that the optimum Pout for a saturated amplifier is not greatly dependent on L. The curves in Figure 4 lead to the important observation that the output power scales linearly with increases in the pump power. Therefore saturated output powers much higher than the 2.6 W already demonstrated [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF] can be achieved with this type of Thulium-doped fiber up to the stimulated Brillouin scattering (SBS) threshold which is estimated to be 10-20 W for a fiber length of 7 m [START_REF] Sincore | SBS Threshold Dependence on Pulse Duration in a 2053 nm Single-Mode Fiber Amplifier[END_REF].
So far our simulations have been carried out for the signal wavelength of 1952 nm. To more fully study performance of the amplifier, we now look at the amplifier slope efficiency η vs. signal wavelength λs and active fiber length L. Slope efficiency η = ΔPsat/ΔPp is defined as the ratio of the change in saturated output signal power Psat to a change in pump power, for a given fiber length L and signal wavelength λs. It measures the efficiency of conversion of pump light into signal light and is an important figure of merit for the amplifier. The saturated output power Psat in our experiments and simulations is measured for a high input signal power Ps = +2 dBm.
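In practice, η is obtained from the slope of a linear fit of Psat versus Pp. The short Python sketch below illustrates that computation; the numbers in the example are illustrative placeholders, not measured data from this work.

```python
import numpy as np

def slope_efficiency(pump_W, psat_W):
    """Least-squares slope of saturated output power vs. pump power,
    i.e. eta = dPsat/dPp, returned as a percentage."""
    slope, _intercept = np.polyfit(np.asarray(pump_W), np.asarray(psat_W), 1)
    return 100.0 * slope

if __name__ == "__main__":
    # illustrative numbers only: saturated output rising roughly linearly with pump
    pump = [0.8, 1.7, 2.5, 3.1]
    psat = [0.50, 1.10, 1.63, 2.02]
    print(f"slope efficiency = {slope_efficiency(pump, psat):.1f} %")
```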
Figure 5 shows simulations of η over the wavelength region of 1900 nm to 2050 nm, for fiber lengths ranging from 1.5 to 9 meters. Clearly the bandwidth of the amplifier shifts toward longer signal wavelengths for the longer fiber such as 9 meters. Shorter fibers such as 1.5 and 2 meters shift the operating bandwidth region toward shorter wavelengths.
Figure 5 indicates that for short fibers of 1.5 and 2 meters, η is optimum below 1950 nm, then diminishes rapidly with wavelength around 2000 nm and is negligible above 2020 nm. For longer fibers of 5 to 9 meters, η decreases more gradually with increasing wavelength and allows for a modest efficiency (i.e. 35%) up to 2050 nm. The simulated slope efficiencies in Figure 5 give a value at 1952 nm of 73% which is fully consistent with the value of 73% determined from Figure 4. Based on the simulation results, for this single stage configuration we can draw four conclusions. First, the most significant gain occurs in the first couple of meters of the active fiber. Second, the saturated output power scales with pump power and is not significantly affected by the fiber length. Third, the optimum coupling ratio k for a combination of large dynamic range and saturated output power is achieved for medium fiber lengths of 6-8 meters and a k value around 50%. Fourth, the choice of the fiber length affects the operating bandwidth of the TDFA and the slope efficiency η: shorter lengths yield shorter operating wavelengths, while longer lengths give longer wavelength operating regions. This last point will be discussed further in Section 4.
Comparison of Simulation and Experiment
We now turn to comparisons of simulation and experiment for the single stage amplifier of Figure 1. In all these comparisons, the experimental fiber length is 7 meters.
We start by looking at the signal output power Pout as a function of 1952 nm signal input power Ps over a range of -30 dBm to + 2 dBm. Pump powers Pp at 1567 nm range from 0.89 W to 3.09 W and the coupling ratio k is 50%. As illustrated in Figure 6, the simulations (in solid lines) agree well with the experimental data (points) with an average difference between simulation and experiment of 0.6 dB.
For 0 dBm input power, the measured output powers are 1.11 W and 1.86 W for pump powers of 1.93 W and 3.09 W, respectively. This corresponds to optical power conversion efficiencies of 58% and 60%, respectively.

Figure 8 shows the dependence of G and NF on coupling ratio k for Ps = -30 dBm at 1952 nm and Pp = 1.70 W at 1567 nm. Experimental data are shown in points and the simulations in solid lines. The optimum operating setpoint for small signal gain is different from the optimum for noise figure, with the largest small signal gain occurring for k = 50% and the lowest noise figures for k = 0%. The noise figure increases slowly at first with k, and then rapidly to 6.1 dB as k reaches 100% which corresponds to counter-pump only. The agreement between simulation and experiment is good, validating the performance of our simulator over the full range of k values. From this graph, we observe that a good balance between optimum gain and optimum noise figure is achieved for a coupling ratio of k = 50%.

In Figure 9 we plot the dependence of G and NF on signal wavelength λs over the range of 1900 - 2050 nm for a coupling ratio of k = 50%, Pp = 1.12 W, and input signal power Ps = -30 dBm. The highest measured unsaturated gain is achieved at 1910 nm, with a small decrease with λ at 1952 nm and then a steady decrease up to 2050 nm. For G > 30 dB, the amplifier bandwidth is >120 nm. By extending the investigation to values of λ lower than 1900 nm, we can expect even larger bandwidths.
The smallest measured NF of 3.5 dB is at 1952 nm, and NF variation with wavelength is small (i.e.< 1.4 dB).
We observe that the agreement between simulation and experiment is good, and this shows our simulation predicts well the small signal gain G and noise figure NF as a function of λs.

We now investigate the slope efficiency η as a function of total pump power Pp for a saturating input power of Ps ≈ +2 dBm. Figure 10 shows experimental and simulated values for saturated output power as a function of pump power, for four different values of λs across the transmission band with k = 50%. The agreement between simulation (solid lines) and experiment (points) is good, illustrating the accuracy of our simulator over a wide region of λs and over pump powers from 0.3 to 3.2 W. Notice that at λ = 2050 nm the number of data points is limited by the onset of lasing due to the large ASE produced as the pump power increases. The experimental variation in signal output power with pump power is linear in all cases as expected from theory. The maximum measured output power is 2.00 W for λ = 1910 nm and pump power of 3.09 W, corresponding to an optical power conversion efficiency of 65%.

In Figure 11 we compare the slope efficiencies measured in Figure 10 with the simulation of Figure 5, with an expanded span for λ of 1760 nm - 2060 nm. The experimental slope efficiencies (points) agree reasonably well with the theory and this demonstrates that our simulation is valid over a wide range of values for λs for a saturated amplifier. The maximum measured slope efficiency is 68.2% at 1910 nm. This can be compared with the simulated value at this signal wavelength of 76.0%. Using a slope efficiency of greater than 50% as a criterion, Figure 11 shows that the simulated operating bandwidth BW and center operating wavelength λc of the amplifier vary significantly with fiber length. For a short fiber (3 m) the operating bandwidth BW at 50% slope efficiency is 198 nm as indicated by the horizontal arrows in the figure, and λc is 1896 nm. For the longest fiber simulated (9 m) BW is reduced to 160 nm, and λc is shifted up in wavelength to 1940 nm. Results for all the fiber lengths studied are summarized in Table 2 (Operating Bandwidth BW and Center Wavelength λc as a Function of Fiber Length L). It is evident that shorter fiber lengths give greater operating bandwidths and lower center wavelengths. This behavior is consistent with previously reported results [START_REF] Li | Exploiting the short wavelength gain of silicabased thulium-doped fiber amplifiers[END_REF]. We note that Figure 11 and Table 2 are the first detailed comparisons of TDFA simulation and theory, since previous work on spectral performance has been either wholly experimental [START_REF] Li | Diodepumped wideband thulium-doped fiber amplifiers for optical communications in the 1800-2050 nm window[END_REF][START_REF] Li | Exploiting the short wavelength gain of silicabased thulium-doped fiber amplifiers[END_REF] or theoretical [START_REF] Gorjan | Model of the amplified spontaneous emission generation in thulium-doped silica fibers[END_REF][START_REF] Khamis | Theoretical Model of a Thulium-doped Fiber Amplifier Pumped at 1570 nm and 793 nm in the Presence of Cross Relaxation[END_REF].
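As a quick arithmetic cross-check of the optical conversion efficiencies quoted in this section (58%, 60% and 65%), the efficiency is simply the ratio of output signal power to pump power; the minimal sketch below reproduces the first two values.

```python
def conversion_efficiency(p_out_W, p_pump_W):
    """Optical power conversion efficiency Pout / Ppump, in percent."""
    return 100.0 * p_out_W / p_pump_W

if __name__ == "__main__":
    # the two operating points quoted for 0 dBm input at 1952 nm
    for p_out, p_pump in [(1.11, 1.93), (1.86, 3.09)]:
        print(f"Pout = {p_out} W, Ppump = {p_pump} W -> "
              f"{conversion_efficiency(p_out, p_pump):.0f} %")
```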
Comparison of Multistage Amplifier Performance
In Sections 3 and 4, we have shown that the shared pump topology can deliver high performance that is fully in agreement with simulation results. Here we will compare the shared pump amplifier with a two stage-three pump TDFA [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. A summary comparison of the two amplifiers is given below in Table 3. The table reveals that there is no major difference in performance between the two TDFAs. Comparing the maximum saturated output powers, we see that the shared pump TDFA achieves 1.9 W output for 3.2 W available pump, while the 2 stage amplifier achieves 2.6 W for 3.6 W of available pump. The output power performance of the two amplifiers is seen to be comparable when the maximum pump power available is accounted for. NF values for the two amplifiers are similar, as are the operating dynamic ranges (measured over an input power span of -30 dBm to +2 dBm). The two-stage amplifier has a slightly higher small signal gain with 56 dB compared to 51 dB for the shared pump single stage TDFA.
The difference in slope efficiencies, with 66% for the 1 stage shared pump configuration and 82% for the 2 stage, 3 pump configuration, can be explained by referring to the architecture of the 3 pump configuration [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF].
Here we recall the definition of slope efficiency η from Section 3: η = ΔPsat/ΔPp. Remembering that Psat is the output power for a highly saturated amplifier, we observe that in the 2 stage TDFA the first stage boosts the input signal power of +2 dBm to an intermediate level of about +20 dBm which is then input to the second fiber stage. This boost in power to +20 dBm increases the conversion efficiency for available pump power in the second stage and so increases η. Indeed the two stage amplifier brings the measured efficiency closer to the simulated value as shown in Figure 11.
In comparing the amplifier bandwidth for the two configurations, we see that the 167 nm simulated bandwidth for the one stage, shared pump amplifier is consistent with the estimated value of >120 nm for the two stage, three pump amplifier. The >120 nm value was obtained by measuring the 10 dB width of the ASE noise background in the saturated output spectrum of the two stage TDFA. We believe that a simulation of the slope efficiency for the two stage amplifier (currently in progress) will result in a more precise value for its bandwidth.
The comparisons in Table 3 illustrate that our single stage shared pump TDFA can match the performance targets of a complex two stage three pump amplifier. The simplicity of the architecture of the shared pump TDFA is a considerable advantage in the design simulation of TDFAs for broadband telecommunications systems.
Discussion of Parameter Optimization for TDFA Architecture
The data reported in Figures 2 -11 illustrate several salient points about the operation of the shared pump TDFA.
From our experimental and theoretical studies, it is evident that input power levels, saturated output power targets, noise figure specifications, small signal gain specifications, and operating signal bandwidths all depend in an interrelated way on the amplifier architecture. Design of an optimized amplifier requires a careful balancing of all these performance targets as a function of fiber length L and coupling ratio k.
For gain amplifiers, Figure 2 shows that it is very important to consider the input signal power when choosing an optimum fiber length. For example, at the coupling ratio of 50%, the fiber gain for -30 dBm input is highest for a fiber length of 14 meters. For -15 dBm input, the optimum gain occurs for lengths of 7-8 meters. Clearly the design specifications of the TDFA must be carefully considered when choosing an optimum fiber length for a preamplifier designed to operate at low signal input powers. For these low input powers the NF value remains close to the quantum limit of 3 dB.
For power amplifiers, maximum simulated output power occurs for a coupling ratio of k = 95% and an optimized fiber length of about 3.5 meters for a signal wavelength of 1952 nm. This optimized fiber length agrees well with the values obtained in Figures 2 and 4, where the optimum length for maximum output power is between 3 and 4 meters for a pump coupling ratio of 50%. We conclude that for maximizing output power at 1952 nm, coupling ratios anywhere between 50 and 95% can be employed. Figure 4 demonstrates that the saturated output power Pout scales linearly with pump power up to the maximum simulated Pp of 2.55 W. No Brillouin scattering or other nonlinear effects were observed in our experiments. This means that we can improve the output power of the amplifier simply by increasing the pump power, up to the limit where nonlinear effects start to be observed. The threshold for nonlinear effects in our shared pump amplifier is currently under study. For the parameters in the current experiments, the one stage shared pump design yields an attractive power amplifier that is simple to build and has high signal output power.

For generic or multipurpose amplifiers, Figures 5 and 11 illustrate that the operating bandwidth BW and center wavelength λc of the amplifier are strongly dependent on the active fiber length, with maximum long wavelength response above 2000 nm occurring for fiber lengths L of 9 meters and longer. Short wavelength response is maximized for short fiber lengths of 1.5 and 2 meters. The desired operating bandwidth and center wavelength can therefore be selected by choosing an appropriate active fiber length. The noise figure NF as shown in Figure 9 is slowly varying with signal wavelength λs for a coupling ratio of k = 50%, indicating that the noise performance of the multipurpose amplifier is highly tolerant of variations in signal wavelength λs. This is an attractive feature for the many applications of this type of TDFA.
Figure 1. Optical Design of Single Stage Single Pump TDFA with a Shared Pump Arrangement. WDM = Wavelength Division Multiplexer.
Figure 2. Signal Gain (G) as a Function of Fiber Length (L) for Four Different Levels of Ps.
Figure 3. Simulated Output Signal Power (Pout) as a Function of Fiber Length (L) and Pump Coupling Ratio (k).
Figure 4. Simulated Output Power Pout as a Function of Fiber Length L and Pump Power Pp for k = 50%.
Figure 5. Simulated Slope Efficiencies η vs. Signal Wavelength λs and Active Fiber Length L.
Figure 6. Output Signal Power Pout vs. Input Signal Power Ps for k = 50%, for Three Different Total Pump Powers Pp.
Figure 7. Gain G and Noise Figure NF at 1952 nm as a Function of Input Signal Power Ps.
Figure 8. Gain and Noise Figure as a Function of Coupling Ratio k.
Figure 9. Gain G and Noise Figure NF as a Function of Signal Wavelength λs.
Figure 10. Saturated Output Power Pout vs. Total Pump Power Pp as a Function of λs.
Figure 11. Measured and Simulated Slope Efficiencies η vs. Signal Wavelength λs and Fiber Length L.
Table 1 contrasts the measured and simulated values of slope efficiency η as a function of signal wavelength λs for a fiber length of 7 m.

Table 1. Comparison of Simulated and Measured Slope Efficiency η as a Function of λs.

    λ (nm)    η, Exp. (%)    η, Sim. (%)
    1910      68.2           76.0
    1952      65.9           72.9
    2004      52.1           55.0
    2050      13.5            9.6
Table 3. Comparison of Single Stage, Shared Pump TDFA with Two Stage, Three Pump TDFA (Fiber Length L = 7 m, 1952 nm).

    Parameter                       Symbol   Units   1 Stage, Shared Pump   2 Stage, 3 Pumps
    Pump Power (1567 nm)            Pp       W       3.2                    3.6
    Saturated Output Power          Pout     W       1.9                    2.6
    Small Signal Noise Figure       NF       dB      3.4                    3.2
    Signal Dynamic Range            Pin      dB      32                     32
    Small Signal Gain               G        dB      51                     56
    Slope Efficiency (Saturated)    η        %       65.9                   82
    Operating Bandwidth             BW       nm      167 (simulated)        >120 (est. from ASE)
Summary
We have reported the experimental and simulated performance of a single stage TDFA with a shared in-band pump at 1567 nm. In particular we considered the dependence of amplifier performance on pump coupling ratio and signal wavelength. We determined that the optimum fiber length L and optimum coupling ratio k depend strongly on the design performance specifications for the TDFA such as signal wavelength band, saturated output power, noise figure, small signal gain, and dynamic range. Our simulations show that the operating bandwidth of the amplifier can be as high as 198 nm. Due to the broad Thulium emission bandwidth, this amplifier configuration can be tailored to meet a variety of performance needs. We achieved saturated output powers of 2 W, small signal gains as high as 51 dB, noise figures as low as 3.5 dB, and a dynamic range of 32 dB for a noise figure of less than 4.7 dB. In all cases we found good agreement between our simulation tool and the experiments. No Brillouin scattering or other nonlinear effects were observed in any of our measurements. Our experiments and simulations show that the shared pump TDFA can match the performance of more complex multistage, multi-pump TDFAs, and illustrate the simplicity and usefulness of our design. This opens the possibility for new and efficient TDFAs for lightwave transmission systems as preamplifiers, as in-line amplifiers, and as power booster amplifiers.
Acknowledgments
We gratefully acknowledge Eblana Photonics for the single frequency distributed mode 2 µm laser sources, and OFS for the single clad Tm-doped fiber.
"1025056",
"17946"
] | [
"524170",
"40873",
"524170"
] |
01766661 | en | [
"phys",
"spi"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01766661/file/Tench_Romano_Delavaux_OFT_Manuscript_v5_11272017.pdf | Robert E Tench
email: [email protected]
Clément Romano
Jean-Marc Delavaux
Optimized Design and Performance of a Shared Pump Single Clad 2 µm TDFA
Keywords: Fiber Amplifier, Thulium, 2000 nm, Silica Fiber, Single Clad
We report the design, experimental performance, and simulation of a single stage, co- and counter-pumped Tm-doped fiber amplifier (TDFA) in the 2 μm signal wavelength band with an optimized 1567 nm shared pump source. We investigate the dependence of output power, gain, and efficiency on pump coupling ratio and signal wavelength. Small signal gains of >50 dB, an output power of 2 W, and small signal noise figures of <3.5 dB are demonstrated. Simulations of TDFA performance agree well with the experimental data. We also discuss performance tradeoffs with respect to amplifier topology for this simple and efficient TDFA.
Introduction
Simplicity and optimization of design are critical for the practical realization of wide bandwidth, high power single clad Thulium-doped fiber amplifiers (TDFAs) for 2 µm telecommunications applications. Recent TDFAs [START_REF] Romano | Simulation and design of a multistage 10 W thulium-doped double clad silica fiber amplifier at 2050 nm[END_REF][START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF] have reported 2 µm band amplifiers with output power > 2W, gain > 55 dB, noise figure < 4 dB, and optical bandwidth greater than 120 nm. While these designs achieve high optical performance, they employ two or more optical stages and multiple pump sources. Therefore, it is desirable to investigate designs using one amplifier stage and one pump source. In this paper we report on the design, simulation, and experimental performance of a one-stage single clad TDFA using an L-band (1567 nm) shared fiber laser pump source, as a function of pump coupling ratio, active fiber length, pump power, and signal wavelength. Our one-stage TDFA data compare well with recently reported performance of multi-stage, multi-pump amplifiers [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. In addition, the simplicity of the single clad pump shared design and its potential for cost reduction offer a broad selection of performance for different applications.
The paper is organized as follows: Section 2 presents our experimental setup, a single stage TDFA with variable coupling in the pump ratio between co-pumping and counter-pumping the active fiber. Section 3 covers the dependence of simulated amplifier performance on active fiber length, pump coupling ratio, slope efficiency, and signal wavelength. Section 4 compares measurement and simulation of the TDFA performance. Section 5 contrasts our simple TDFA design with performance of a two-stage, three-pump amplifier as reported previously in [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. Finally, Section 6 discusses design parameter tradeoffs for different TDFA applications.
Experimental Setup for Shared Pump Amplifier
The optical design of our one-stage single pump TDFA is shown in Figure 1. A single frequency 2 µm DML source (Eblana Photonics) is coupled through attenuator A and into the active fiber F1. Pump light from a multiwatt fiber laser P1 at 1567 nm is split by coupler C1 with a variable coupling ratio k (%). The two pump signals co-pump and counter-pump fiber F1, with k = 100% and k = 0% corresponding to all counter-pumping and all co-pumping, respectively. The value of k was changed in the simulations and experiments to optimize amplifier performance. Isolators I1 and I2 ensure unidirectional operation and suppress spurious lasing. Input and output signal powers, and co-pump and counter-pump powers, are respectively referenced to the input and output of Tm-doped fiber F1 (7 meters of OFS TmDF200). The input and output spectra of the TDFA were measured with an optical spectrum analyzer (Yokogawa AQ6375B).
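To make the pump-splitting convention explicit (the helper below is our own illustrative sketch, not part of the experimental description), the coupling ratio k simply partitions the available 1567 nm pump power between the counter- and co-propagating ports of coupler C1:

```python
def split_pump(p_pump_w, k):
    """Partition the total pump power according to the coupling ratio k.

    k = 1.0 (100%) -> all counter-pumping; k = 0.0 -> all co-pumping.
    """
    p_counter_w = k * p_pump_w          # pump launched at the output end of F1
    p_co_w = (1.0 - k) * p_pump_w       # pump launched at the input end of F1
    return p_co_w, p_counter_w

# Example: 2.5 W total pump split 50/50, as in most of the simulations below.
print(split_pump(2.5, 0.5))   # -> (1.25, 1.25)
```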
Simulated Amplifier Performance
We begin the design of a high performance optical amplifier by studying the critically important variations of fiber signal gain (G) and output power (Pout) as a function of active fiber length (L) and input signal power (Ps).
The signal gain G is given by the following simple equation:
G(λs) = Pout(λs) / Ps(λs)    (1)
where λs is the signal wavelength, and Ps and Pout are signal powers measured at the input and output of the active Tm-doped fiber, respectively.
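As a trivial numerical illustration of Equation (1) and its usual decibel form (the linear/dB conversion is standard and not spelled out in the text):

```python
import math

def gain_linear(p_out_w, p_in_w):
    """Signal gain G = Pout / Ps of Equation (1); both powers in watts."""
    return p_out_w / p_in_w

def gain_db(p_out_w, p_in_w):
    """The same gain expressed in dB."""
    return 10.0 * math.log10(gain_linear(p_out_w, p_in_w))

# Example: a -30 dBm (1 uW) input amplified to 1.26 mW corresponds to ~31 dB gain.
print(round(gain_db(1.26e-3, 1.0e-6), 1))
```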
To study amplifier design we turn to the simulated TDFA performance [START_REF] Romano | Characterization of the 3F4 -3H6 Transition in Thulium-doped Silica Fibres and Simulation of a 2µm Single Clad Amplifier[END_REF][START_REF] Jackson | Theoretical modeling of Tm-doped silica fiber lasers[END_REF] shown in Figure 2, where G is plotted vs. L for four input signal power levels. Here the total 1567 nm pump power (co-+ counter-) (Pp) is 2.5 W, the signal wavelength λs is 1952 nm, and the coupling ratio k = 50%. We note that a similar set of gain curves can be generated for different wavelength bands of the TDFA, and this behavior will be investigated later in the section.
In Figure 2 we have examined the dependence of G vs. L for a 32 dB input dynamic range in signal power. The different Ps values illustrate the amplifier operating from a linear/unsaturated regime (Ps = -30 dBm) to a highly saturated regime (Ps = +2 dBm). The equally important dependence of noise figure on these parameters will be dealt with later.
The first observation drawn from Figure 2 is that for low input signals (e.g. for Ps = -30 dBm) G is maximized for long fiber lengths of 12 meters or greater, while for saturating input powers (e.g. Ps = +2 dBm) G reaches a maximum value for lengths of about 2 meters. It is also clear that for small signal or unsaturated gain, most of the gain (i.e. more than 80%) is achieved in the first 5 meters of the fiber, while for saturated gain most of the gain occurs within the first 1.5 meters. The second observation is that saturated gain varies only slightly with active fiber length for values greater than 3 meters, indicating that a wide range of fiber lengths can be chosen for design of a power booster amplifier. However, later we will see that the choice of the fiber length affects the useful amplifier bandwidth. The next design study for the shared pump amplifier is to examine the dependence of the saturated output power on active fiber length L and coupling ratio k. To study this issue, we plot the output signal power with pump coupling ratio k for four active fiber lengths (i.e. L = 3, 5, 7 and 9 m) as shown in Figure 3. In this simulation Ps is set to +2 dBm at 1952 nm to saturate the amplifier, with the total pump set at Pp = 2.5 W at 1567 nm.
We first note that for a given fiber length, Pout increases linearly when moving from co-pumping (k = 0%) to nearly all counter-pumping (k = 95%) and then drops down for full counter-pumping (k = 100%). For all fiber lengths, the maximum output power is achieved for k = 95%. This behavior is not surprising because counterpumping maximizes the pump power available at the output of the fiber where the amplified signal power is the largest. We next observe that the maximum output power is achieved for a ratio of 95% counter-pumping to 5% co-pumping. For full counter-pumping, the pump is attenuated significantly within two meters after being launched, leaving the input end of the active fiber unpumped with no inversion achieved for the input Tm ion population. This indicates that a small amount of co-pumping provides signal gain that offsets fiber absorption loss. Therefore full counter-pumping is not the most efficient way to pump this amplifier.
For k = 50% in Figure 3, the relatively small variation in output signal power Pout with fiber length is consistent with the small variation in gain seen in Figure 2 as a function of fiber length for Ps = +2 dBm. We further note that as the fiber length is decreased from 9 m to 3 m, the output power Pout consistently increases. For very short fibers the difference between co-and counter-pumping will become negligible. However, as we will illustrate later, this comes at the expense of the amplifier operating bandwidth shifting from higher to shorter wavelengths. The amplifier performance illustrated in Figure 3 shows that we may consider three cases for design: k = 0%, k= 50%, and k = 95%. For k = 0%, Pout variation with fiber length is 18%. For k = 50%, it is as much as 10.9%, and for k = 95% it is about 2%. This indicates that a mostly counter-pumped amplifier will be less sensitive to changes in active fiber length than a co-pumped amplifier. Now let's consider the important design consideration of the dependence of saturated output power as a function of active fiber length L and pump power Pp. This behavior is illustrated in Figure 4 for a signal wavelength of 1952 nm, a coupling ratio k = 50%, and 1567 nm pump powers of 0.83 W, 1.7 W and 2.55 W, respectively.
In this plot we see that the maximum saturated output power is obtained for L ≈ 2 m, relatively independently of the pump power. It is apparent that above L = 2 m, Pout decreases slightly with increases in fiber length. This behavior is consistent with the simulation in Figure 2, and it illustrates that the optimum Pout for a saturated amplifier is not greatly dependent on L. The curves in Figure 4 lead to the important observation that the output power scales linearly with increases in the pump power. Therefore saturated output powers much higher than the 2.6 W already demonstrated [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF] can be achieved with this type of Thulium-doped fiber up to the stimulated Brillouin scattering (SBS) threshold, which is estimated to be 10-20 W for a fiber length of 7 m [START_REF] Sincore | SBS Threshold Dependence on Pulse Duration in a 2053 nm Single-Mode Fiber Amplifier[END_REF].
So far our simulations have been carried out for the signal wavelength of 1952 nm. To more fully study performance of the amplifier, we now look at the amplifier slope efficiency η vs. signal wavelength λs and active fiber length L. Slope efficiency η = ΔPsat/ΔPp is defined as the ratio of the change in saturated output signal power Psat to a change in pump power, for a given fiber length L and signal wavelength λs. It measures the efficiency of conversion of pump light into signal light and is an important figure of merit for the amplifier. The saturated output power Psat in our experiments and simulations is measured for a high input signal power Ps = +2 dBm.
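In practice η is obtained from the slope of a straight-line fit of the saturated output power against the launched pump power. A minimal sketch of that step (with made-up sample points, not measured data) is:

```python
import numpy as np

def slope_efficiency(pump_w, psat_w):
    """Slope efficiency eta = dPsat/dPp from a linear fit of Psat vs. Pp."""
    slope, _intercept = np.polyfit(pump_w, psat_w, 1)
    return slope

# Illustrative points shaped like Figure 4 (k = 50%, 1952 nm): eta ~ 73%.
pp = np.array([0.83, 1.70, 2.55])   # pump power, W
ps = np.array([0.55, 1.19, 1.81])   # saturated output power, W
print(f"eta = {100.0 * slope_efficiency(pp, ps):.0f} %")
```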
Figure 5 shows simulations of η over the wavelength region of 1900 nm to 2050 nm, for fiber lengths ranging from 1.5 to 9 meters. Clearly the bandwidth of the amplifier shifts toward longer signal wavelengths for the longer fiber such as 9 meters. Shorter fibers such as 1.5 and 2 meters shift the operating bandwidth region toward shorter wavelengths.
Figure 5 indicates that for short fibers of 1.5 and 2 meters, η is optimum below 1950 nm, then diminishes rapidly with wavelength around 2000 nm and is negligible above 2020 nm. For longer fibers of 5 to 9 meters, η decreases more gradually with increasing wavelength and allows for a modest efficiency (i.e. 35%) up to 2050 nm. The simulated slope efficiencies in Figure 5 give a value at 1952 nm of 73% which is fully consistent with the value of 73% determined from Figure 4. Based on the simulation results, for this single stage configuration we can draw four conclusions. First, the most significant gain occurs in the first couple of meters of the active fiber. Second, the saturated output power scales with pump power and is not significantly affected by the fiber length. Third, the optimum coupling ratio k for a combination of large dynamic range and saturated output power is achieved for medium fiber lengths of 6-8 meters and a k value around 50%. Fourth, the choice of the fiber length affects the operating bandwidth of the TDFA and the slope efficiency η: shorter lengths yield shorter operating wavelengths, while longer lengths give longer wavelength operating regions. This last point will be discussed further in Section 4.
Comparison of Simulation and Experiment
We now turn to comparisons of simulation and experiment for the single stage amplifier of Figure 1. In all these comparisons, the experimental fiber length is 7 meters.
We start by looking at the signal output power Pout as a function of 1952 nm signal input power Ps over a range of -30 dBm to + 2 dBm. Pump powers Pp at 1567 nm range from 0.89 W to 3.09 W and the coupling ratio k is 50%. As illustrated in Figure 6, the simulations (in solid lines) agree well with the experimental data (points) with an average difference between simulation and experiment of 0.6 dB.
For 0 dBm input power, the measured output powers are 1.11 W and 1.86 W for pump powers of 1.93 W and 3.09 W, respectively. This corresponds to optical power conversion efficiencies of 58% and 60%, respectively.
In Equations ( 2) through (4), Δλ is the effective resolution bandwidth of the optical spectrum analyzer in m, and PASE is the measured internal forward spontaneous output power under the signal peak in Watts. h is Planck's constant, and c is the speed of light in vacuum. G(λ) is given by Equation (1). We measured the noise figure with a Δλ of 0.1 nm on the Yokogawa optical spectrum analyzer.
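Equations (2) through (4) themselves are not reproduced in this text. The sketch below therefore implements the standard source-spontaneous-emission expression for the noise figure of an optical amplifier, NF = PASE·λ³/(G·h·c²·Δλ) + 1/G, which uses exactly the quantities listed above; it is meant only to illustrate how NF is obtained from an OSA trace, not as a verbatim copy of the paper's equations.

```python
import math

H = 6.62607e-34      # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def noise_figure_db(p_ase_w, gain_lin, wavelength_m, d_lambda_m):
    """NF from the forward ASE power measured in bandwidth d_lambda at the
    signal wavelength: NF = P_ASE*lambda^3/(G*h*c^2*d_lambda) + 1/G."""
    nf_lin = (p_ase_w * wavelength_m**3 / (gain_lin * H * C**2 * d_lambda_m)
              + 1.0 / gain_lin)
    return 10.0 * math.log10(nf_lin)

# Illustrative values only: 51 dB gain, 0.23 mW of ASE in 0.1 nm at 1952 nm.
g = 10.0 ** (51.0 / 10.0)
print(round(noise_figure_db(2.3e-4, g, 1952e-9, 0.1e-9), 1))   # ~3.6 dB
```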
Using Equations ( 1) through ( 4), we now analyze the performance of the TDFA as shown in Figure 7. A maximum signal gain G (points) of 51 dB is measured at Ps = -30 dBm with an NF < 3.5 dB. Over the full range of input powers studied, the simulated gain values (solid lines) agree with the measured gain values to within 1 dB, validating the performance of our simulator over a wide range of input powers. Experimental values of noise figure are also plotted in points in Figure 7. The minimum measured noise figure is 3.5 dB, and the minimum simulated noise figure is 3.2 dB, close to the 3.0 dB quantum limit. Agreement between experiment and simulation for noise figure is good. The measured dynamic range for the amplifier is 32 dB for a noise figure of 4.7 dB or less.
Figure 8 shows the dependence of G and NF on coupling ratio k for Ps = -30 dBm at 1952 nm and Pp= 1.70 W at 1567 nm. Experimental data are shown in points and the simulations in solid lines. The optimum operating setpoint for small signal gain is different from the optimum for noise figure, with the largest small signal gain occurring for k = 50% and the lowest noise figures for k = 0%. The noise figure increases slowly at first with k, and then rapidly to 6.1 dB as k reaches 100% which corresponds to counter-pump only. The agreement between simulation and experiment is good, validating the performance of our simulator over the full range of k values. From this graph, we observe that a good balance between optimum gain and optimum noise figure is achieved for a coupling ratio of k = 50%. In Figure 9 we plot the dependence of G and NF on signal wavelength λs over the range of 1900 -2050 nm for a coupling ratio of k = 50%, Pp = 1.12 W, and input signal power Ps = -30 dBm. The highest measured unsaturated gain is achieved at 1910 nm , with a small decrease with λ at 1952 nm and then a steady decrease up to 2050 nm. For G > 30 dB, the amplifier bandwidth is >120 nm. By extending the investigation to values of λ lower than 1900 nm, we can expect even larger bandwidths.
The smallest measured NF of 3.5 dB is at 1952 nm, and NF variation with wavelength is small (i.e.< 1.4 dB).
We observe that the agreement between simulation and experiment is good, and this shows our simulation predicts well the small signal gain G and noise figure NF as a function of λs.

We now investigate the slope efficiency η as a function of total pump power Pp for a saturating input power of Ps ≈ +2 dBm. Figure 10 shows experimental and simulated values for saturated output power as a function of pump power, for four different values of λs across the transmission band with k = 50%. The agreement between simulation (solid lines) and experiment (points) is good, illustrating the accuracy of our simulator over a wide region of λs and over pump powers from 0.3 to 3.2 W. Notice that at λ = 2050 nm the number of data points is limited by the onset of lasing due to the large ASE produced as the pump power increases. The experimental variation in signal output power with pump power is linear in all cases as expected from theory. The maximum measured output power is 2.00 W for λ = 1910 nm and pump power of 3.09 W, corresponding to an optical power conversion efficiency of 65%.

In Figure 11 we compare the slope efficiencies measured in Figure 10 with the simulation of Figure 5, with an expanded span for λ of 1760 nm - 2060 nm. The experimental slope efficiencies (points) agree reasonably well with the theory, and this demonstrates that our simulation is valid over a wide range of values for λs for a saturated amplifier. The maximum measured slope efficiency is 68.2% at 1910 nm. This can be compared with the simulated value at this signal wavelength of 76.0%. Using a slope efficiency of greater than 50% as a criterion, Figure 11 shows that the simulated operating bandwidth BW and center operating wavelength λc of the amplifier vary significantly with fiber length. For a short fiber (3 m) the operating bandwidth BW at 50% slope efficiency is 198 nm as indicated by the horizontal arrows in the figure, and λc is 1896 nm. For the longest fiber simulated (9 m) BW is reduced to 160 nm, and λc is shifted up in wavelength to 1940 nm. Results for all the fiber lengths studied are summarized in Table 2. It is evident that shorter fiber lengths give greater operating bandwidths and lower center wavelengths. This behavior is consistent with previously reported results [START_REF] Li | Exploiting the short wavelength gain of silicabased thulium-doped fiber amplifiers[END_REF]. We note that Figure 11 and Table 2 are the first detailed comparisons of TDFA simulation and experiment, since previous work on spectral performance has been either wholly experimental [START_REF] Li | Diodepumped wideband thulium-doped fiber amplifiers for optical communications in the 1800-2050 nm window[END_REF][START_REF] Li | Exploiting the short wavelength gain of silicabased thulium-doped fiber amplifiers[END_REF] or theoretical [START_REF] Gorjan | Model of the amplified spontaneous emission generation in thulium-doped silica fibers[END_REF][START_REF] Khamis | Theoretical Model of a Thulium-doped Fiber Amplifier Pumped at 1570 nm and 793 nm in the Presence of Cross Relaxation[END_REF].
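The operating bandwidth BW and center wavelength λc quoted in Table 2 follow from applying the 50% slope-efficiency criterion to the simulated η(λ) curves. One way to code that post-processing step (the sample points are invented, not the actual simulation output) is:

```python
import numpy as np

def operating_band(wavelength_nm, eta, threshold=0.50):
    """Bandwidth and center wavelength of the region where eta > threshold,
    using linear interpolation at the two crossing points."""
    wl = np.asarray(wavelength_nm, dtype=float)
    above = np.asarray(eta, dtype=float) - threshold
    crossings = []
    for i in range(len(wl) - 1):
        if above[i] * above[i + 1] < 0.0:
            frac = above[i] / (above[i] - above[i + 1])
            crossings.append(wl[i] + frac * (wl[i + 1] - wl[i]))
    lo, hi = min(crossings), max(crossings)
    return hi - lo, 0.5 * (lo + hi)

# Invented eta(lambda) samples shaped roughly like the 3 m curve of Figure 11.
wl = [1780, 1800, 1850, 1900, 1950, 1980, 2000, 2020]
eta = [0.40, 0.55, 0.72, 0.76, 0.70, 0.55, 0.45, 0.20]
print(operating_band(wl, eta))   # roughly (197 nm, 1892 nm)
```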
Comparison of Multistage Amplifier Performance
In Sections 3 and 4, we have shown that the shared pump topology can deliver high performance that is fully in agreement with simulation results. Here we will compare the shared pump amplifier with a two stage-three pump TDFA [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. A summary comparison of the two amplifiers is given below in Table 3. The table reveals that there is no major difference in performance between the two TDFAs. Comparing the maximum saturated output powers, we see that the shared pump TDFA achieves 1.9 W output for 3.2 W available pump, while the 2 stage amplifier achieves 2.6 W for 3.6 W of available pump. The output power performance of the two amplifiers is seen to be comparable when the maximum pump power available is accounted for. NF values for the two amplifiers are similar, as are the operating dynamic ranges (measured over an input power span of -30 dBm to +2 dBm). The two-stage amplifier has a slightly higher small signal gain with 56 dB compared to 51 dB for the shared pump single stage TDFA.
The difference in slope efficiencies, with 66% for the 1 stage shared pump configuration and 82% for the 2 stage, 3 pump configuration, can be explained by referring to the architecture of the 3 pump configuration [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF].
Here we recall the definition of slope efficiency η from Section 3: η = ΔPsat/ΔPp. Remembering that Psat is the output power for a highly saturated amplifier, we observe that in the 2 stage TDFA the first stage boosts the input signal power of +2 dBm to an intermediate level of about +20 dBm which is then input to the second fiber stage. This boost in power to +20 dBm increases the conversion efficiency for available pump power in the second stage and so increases η. Indeed the two stage amplifier brings the measured efficiency closer to the simulated value as shown in Figure 11.
In comparing the amplifier bandwidth for the two configurations, we see that the 167 nm simulated bandwidth for the one stage, shared pump amplifier is consistent with the estimated value of >120 nm for the two stage, three pump amplifier. The >120 nm value was obtained by measuring the 10 dB width of the ASE noise background in the saturated output spectrum of the two stage TDFA. We believe that a simulation of the slope efficiency for the two stage amplifier (currently in progress) will result in a more precise value for its bandwidth.
The comparisons in Table 3 illustrate that our single stage shared pump TDFA can match the performance targets of a complex two stage three pump amplifier. The simplicity of the architecture of the shared pump TDFA is a considerable advantage in the design simulation of TDFAs for broadband telecommunications systems.
Discussion of Parameter Optimization for TDFA Architecture
The data reported in Figures 2 -11 illustrate several salient points about the operation of the shared pump TDFA.
From our experimental and theoretical studies, it is evident that input power levels, saturated output power targets, noise figure specifications, small signal gain specifications, and operating signal bandwidths all depend in an interrelated way on the amplifier architecture. Design of an optimized amplifier requires a careful balancing of all these performance targets as a function of fiber length L and coupling ratio k.
For gain amplifiers, Figure 2 shows that it is very important to consider the input signal power when choosing an optimum fiber length. For example, at the coupling ratio of 50%, the fiber gain for -30 dBm input is highest for a fiber length of 14 meters. For -15 dBm input, the optimum gain occurs for lengths of 7-8 meters. Clearly the design specifications of the TDFA must be carefully considered when choosing an optimum fiber length for a preamplifier designed to operate at low signal input powers. For these low input powers the NF value remains close to the quantum limit of 3 dB.
For power amplifiers, maximum simulated output power occurs for a coupling ratio of k = 95% and an optimized fiber length of about 3.5 meters for a signal wavelength of 1952 nm. This optimized fiber length agrees well with the values obtained in Figures 2 and 4, where the optimum length for maximum output power is between 3 and 4 meters for a pump coupling ratio of 50%. We conclude that for maximizing output power at 1952 nm, coupling ratios anywhere between 50 and 95% can be employed.
Figure 4 demonstrates that the saturated output power Pout scales linearly with pump power up to the maximum simulated Pp of 2.55 W. No Brillouin scattering or other nonlinear effects were observed in our experiments. This means that we can improve the output power of the amplifier simply by increasing the pump power, up to the limit where nonlinear effects start to be observed. The threshold for nonlinear effects in our shared pump amplifier is currently under study. For the parameters in the current experiments, the one stage shared pump design yields an attractive power amplifier that is simple to build and has high signal output power.

For generic or multipurpose amplifiers, Figures 5 and 11 illustrate that the operating bandwidth BW and center wavelength λc of the amplifier are strongly dependent on the active fiber length, with maximum long wavelength response above 2000 nm occurring for fiber lengths L of 9 meters and longer. Short wavelength response is maximized for short fiber lengths of 1.5 and 2 meters. The desired operating bandwidth and center wavelength can therefore be selected by choosing an appropriate active fiber length. The noise figure NF as shown in Figure 9 is slowly varying with signal wavelength λs for a coupling ratio of k = 50%, indicating that the noise performance of the multipurpose amplifier is highly tolerant of variations in signal wavelength λs. This is an attractive feature for the many applications of this type of TDFA.
To conclude, we have shown that an active fiber length L of 7 meters and a coupling ratio k = 50 % provide balanced performance over a wide range of operating parameters for the one stage, shared pump TDFA.
Figure 1. Optical Design of Single Stage Single Pump TDFA with a Shared Pump Arrangement. WDM = Wavelength Division Multiplexer.
Figure 2. Signal Gain (G) as a Function of Fiber Length (L) for Four Different Levels of Ps.
Figure 3. Simulated Output Signal Power (Pout) as a Function of Fiber Length (L) and Pump Coupling Ratio (k).
Figure 4. Simulated Output Power Pout as a Function of Fiber Length L and Pump Power Pp for k = 50%.
Figure 5. Simulated Slope Efficiencies η vs. Signal Wavelength λs and Active Fiber Length L.
Figure 6. Output Signal Power Pout vs. Input Signal Power Ps for k = 50%, for Three Different Total Pump Powers Pp.
Figure 7. Gain G and Noise Figure NF at 1952 nm as a Function of Input Signal Power Ps.
Figure 8. Gain and Noise Figure as a Function of Coupling Ratio k.
Figure 9. Small Signal Gain G and Noise Figure NF as a Function of λs.
Figure 10. Saturated Output Power vs. Pump Power for Four Signal Wavelengths (k = 50%).
Table 1 contrasts the measured and simulated values of slope efficiency η as a function of signal wavelength λs for a fiber length of 7 m.

Table 1. Comparison of Simulated and Measured Slope Efficiency η as a Function of λs.
λ, nm    Exp. η, %    Sim. η, %
1910     68.2         76.0
1952     65.9         72.9
2004     52.1         55.0
2050     13.5          9.6
Table 2. Operating Bandwidth BW and Center Wavelength λc as a Function of Fiber Length L.
L, m    BW, nm    λc, nm
3       198       1896
5       182       1918
7       167       1932
9       160       1940
Table 3. Comparison of Single Stage, Shared Pump TDFA with Two Stage, Three Pump TDFA (Fiber Length L = 7 m, TDFA Configurations at 1952 nm).
Parameter                       Symbol    Units    1 Stage, Shared Pump    2 Stage, 3 Pumps
Pump Power (1567 nm)            Pp        W        3.2                     3.6
Saturated Output Power          Pout      W        1.9                     2.6
Small Signal Noise Figure       NF        dB       3.4                     3.2
Signal Dynamic Range            Pin       dB       32                      32
Small Signal Gain               G         dB       51                      56
Slope Efficiency (Saturated)    η         %        65.9                    82
Operating Bandwidth             BW        nm       167 (simulated)         > 120 (est. from ASE)
Summary
We have reported the experimental and simulated performance of a single stage TDFA with a shared in-band pump at 1567 nm. In particular we considered the dependence of amplifier performance on pump coupling ratio and signal wavelength. We determined that the optimum fiber length L and optimum coupling ratio k depend strongly on the design performance specifications for the TDFA such as signal wavelength band, saturated output power, noise figure, small signal gain, and dynamic range. Our simulations show that the operating bandwidth of the amplifier can be as high as 198 nm. Due to the broad Thulium emission bandwidth, this amplifier configuration can be tailored to meet a variety of performance needs. We achieved saturated output powers of 2 W, small signal gains as high as 51 dB, noise figures as low as 3.5 dB, and a dynamic range of 32 dB for a noise figure of less than 4.7 dB. In all cases we found good agreement between our simulation tool and the experiments. No Brillouin scattering or other nonlinear effects were observed in any of our measurements. Our experiments and simulations show that the shared pump TDFA can match the performance of more complex multistage, multi-pump TDFAs, and illustrate the simplicity and usefulness of our design. This opens the possibility for new and efficient TDFAs for lightwave transmission systems as preamplifiers, as in-line amplifiers, and as power booster amplifiers.
Acknowledgments
We gratefully acknowledge Eblana Photonics for the single frequency distributed mode 2 µm laser sources, and OFS for the single clad Tm-doped fiber. | 28,975 | [
"1025056",
"17946"
] | [
"524170",
"40873",
"524170"
] |
01766662 | en | [
"phys",
"spi"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01766662/file/PTL%20Broadband%202W%20Tandem%20TDFA%20Tench%20Romano%20Delavaux%20REVISION%20v2%201%2001152018.pdf | Keywords: Doped Fiber Amplifiers, Infrared Fiber Optics, Optical Fiber Devices, Thulium, 2 microns
We report experimental and simulated performance of a tandem (dual-stage) Tm-doped silica fiber amplifier with a high signal output power of 2.6 W in the 2 µm band. Combined high dynamic range, high gain, low noise figure, and high OSNR are achieved with our design.
I. INTRODUCTION
The recent progress in transmission experiments at signal wavelengths in the 2 µm band [START_REF] Liu | High-capacity Directly-Modulated Optical Transmitter for 2-µm Spectral Region[END_REF] shows the need for Thulium-doped fiber amplifiers (TDFAs) with a combination of high gain, low noise figure, and large dynamic range. Previous work has demonstrated single stage amplifiers operating from 1900-2050 nm and 1650-1850 nm [START_REF] Li | Thulium-doped fiber amplifier for optical communications at 2 µm[END_REF][START_REF] Jung | Silica-Based Thulium Doped Fiber Amplifiers for Wavelengths beyond the L-band[END_REF]. In this paper we report the experimental and simulated performance of a tandem single clad TDFA employing inband pumping around 1560 nm and designed for the 1900-2050 nm signal band. A combination of high gain (> 50 dB), output power of 2.6 W, > 30 dB dynamic range, and < 4 dB small signal noise figure are demonstrated with our design. Performance as a function of input power and signal wavelength is presented. The experimental data are in good agreement with steady-state simulations of our single clad tandem TDFA performance.
II. EXPERIMENTAL SETUP
Figure 1 shows the setup for measurements of the tandem TDFA which consists of a preamplifier (Stage 1) and a power booster (Stage 2). Signal light from a single frequency discrete mode laser (DML) source (Eblana Photonics) is coupled into the first TDF F1 through attenuator A. Signal input power is set by varying the attenuator. In stage 1, fiber 1 is co-and counter-pumped using wavelength division multiplexers (WDMs) with 1550 nm grating stabilized DFBs (P1 and P2) which deliver more than 200 mW each into F1.
Manuscript submitted on December 13, 2017. Robert E. Tench and Jean-Marc Delavaux are with Cybel LLC, 1195 Pennsylvania Avenue, Bethlehem, PA 18018 USA (e-mail: [email protected]) (e-mail: [email protected]) Clement Romano is with Cybel LLC, 1195 Pennsylvania Avenue, Bethlehem, PA, 18018 USA, and Institut Telecom/Paris Telecom Tech, 46 Rue Barrault, 75634, Paris, France (e-mail: [email protected]). The signal output of F1 is then coupled into the second TDF fiber F2, in Stage 2, which is counter-pumped either with a multi-watt 1560 nm fiber laser or a multi-watt 1567 nm fiber laser. Optical isolators I1 and I2 suppress parasitic lasing and ensure unidirectional operation. In our experiments, F1 is a 7 m length of OFS TDF designated TmDF200. Two types of fiber F2 are investigated, the first 5 m of OFS TmDF200 and the second 4.4 m of IXBlue TDF designated IXF-TDF-4-125.
III. EXPERIMENTAL RESULTS AND SIMULATIONS
Figure 2 shows the measured gain (G) and noise figure (NF) for the two amplifier configurations, first the OFS/OFS combination and then the OFS/IXBlue combination. In all of our data, which are displayed as points in the figures, input powers are referenced to the input of F1, and output powers are referenced to the output of F2. Maximum values of G of 54.6 dB and 55.8 dB, for OFS/OFS and OFS/IXBlue, respectively, were measured at a signal wavelength λs of 1952 nm for fiber laser pump powers Pp at 1560 nm of 1.95 W. The corresponding NF was measured for input powers Pin between -30 dBm and +2 dBm.
The data demonstrate a large dynamic range of over 32 dB for an NF of 5.1 dB or less. For lower fiber laser pump powers (Pp=0.2 W to 0.8 W at 1560 nm), NF values as low as 3.2 dB were measured for the OFS/OFS configuration.
Simulations of these data were performed using fiber parameters measured in our laboratory [START_REF] Romano | Characterization of the 3F4-3H6 Transition in Thulium-doped Silica Fibres and Simulation of a 2µm Single Clad Amplifier[END_REF]. The simulation is based on a three level model of the Thulium ion in silica using the 3 H6, 3 F4, and 3 H4 levels including ion-ion interactions [START_REF] Romano | Simulation and design of a multistage 10 W thulium-doped double clad silica fiber amplifier at 2050 nm[END_REF]. The parameters of gain coefficient, absorption coefficient, and 3 F4 level lifetime were determined for the OFS and IXBlue fibers under test. Figure 3 plots the measured gain and absorption coefficients for the OFS fiber, which has a maximum absorption of 92 dB/m at 1630 nm. Figure 4 shows the gain and absorption coefficients for the IXBlue fiber, which has a maximum absorption of 140 dB/m at 1630 nm. The measured lifetimes are 650 µS for the OFS fiber and 750 µS for the IXBlue fiber. Other relevant parameters were taken from the literature. We note that our measurements of peak gain are lower than the peak absorption. This feature is consistent with some published data but not others [START_REF] Sincore | High Average Power Thulium-Doped Silica Fiber Lasers: Review of Systems and Concepts[END_REF][START_REF] Pisarik | Thulium-doped fibre broadband source for spectral region near 2 micrometers[END_REF][START_REF] Agger | Emission and absorption cross section of thulium doped silica fibers[END_REF][START_REF] Smith | Mode instability thresholds for Tm-doped fiber amplifiers pumped at 790 nm[END_REF].
The set of three level differential population equations [START_REF] Jackson | Theoretical modeling of Tmdoped silica fiber lasers[END_REF] was solved using a stiff solver, while the propagation set of differential equations was solved with a 4th order Runge-Kutta method. The simulation accounts numerically for the amplified spontaneous emission (ASE) generated in the setup. Two stage simulation was carried out by sequentially applying the results of the single stage calculations.
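The full model tracks the 3H6, 3F4 and 3H4 manifolds, ion-ion interactions and the ASE spectrum. The sketch below is a deliberately reduced two-level, two-wavelength version of the same numerical scheme (steady-state inversion from the local photon fluxes, then a classical 4th-order Runge-Kutta step of the power propagation equations along the fiber); all parameter values are placeholders, not the measured fiber data.

```python
import numpy as np

H, C = 6.626e-34, 2.998e8

# Placeholder spectroscopic data (NOT the measured TmDF200 or IXF-TDF values).
N_TM  = 4.0e25                            # Tm ion density, m^-3
A_EFF = 2.5e-11                           # effective doped area, m^2
TAU   = 650e-6                            # 3F4 lifetime, s
GAMMA = 0.7                               # overlap factor
LAM   = np.array([1567e-9, 1952e-9])      # pump, signal wavelengths, m
SIG_A = np.array([2.0e-25, 0.15e-25])     # absorption cross sections, m^2
SIG_E = np.array([0.3e-25, 0.55e-25])     # emission cross sections, m^2

def n2_fraction(p_w):
    """Steady-state upper-level population fraction from the local powers."""
    phi = p_w / (H * C / LAM * A_EFF)     # photon flux in each channel
    return np.sum(SIG_A * phi) / (np.sum((SIG_A + SIG_E) * phi) + 1.0 / TAU)

def dpdz(p_w):
    """Right-hand side of the power propagation equations dP/dz."""
    f2 = n2_fraction(p_w)
    return GAMMA * N_TM * (SIG_E * f2 - SIG_A * (1.0 - f2)) * p_w

def propagate(p_in_w, length_m=7.0, steps=700):
    """Integrate dP/dz along the fiber with a 4th-order Runge-Kutta scheme."""
    p, dz = np.array(p_in_w, dtype=float), length_m / steps
    for _ in range(steps):
        k1 = dpdz(p)
        k2 = dpdz(p + 0.5 * dz * k1)
        k3 = dpdz(p + 0.5 * dz * k2)
        k4 = dpdz(p + dz * k3)
        p = p + dz / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return p

print(propagate([1.25, 1e-6]))   # [residual co-pump, amplified signal] in W
```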
As illustrated by the solid lines in Figure 2, the simulations agree well with the experimental data. Simulations of G are within 1.5 dB of the data for Pin > -25 dBm. Simulations of NF agree with the data to within 2 dB. These results validate the accuracy of our simulations for both high gain and highly saturated operating regimes.
Data illustrating the variation in output power Pout as a function of 1567 nm fiber laser pump power Pp are shown in Figure 5. For these data, Pin was set to between +1.3 and +2.2 dBm to saturate the amplifier and Pp was varied from 0.3 W to 3.2 W. For the OFS/OFS configuration, a maximum slope efficiency of 82% was observed at λs = 2004 and 1952 nm, corresponding to maximum output powers at these wavelengths of 2.60 W. The slope efficiency is defined as ΔPout / ΔPp. A reduced output power of 0.4 W was achieved at 2051 nm because of lower slope efficiency and the onset of lasing. Figure 6 illustrates the long term stability of Pout at 1952 nm and Pp = 2.43 W over a period of 6 hours. The variation in Pout over this time period was less than 4%.
No fiber nonlinear behavior such as Raman or Brillouin scattering was observed in our experiments.
Comparison of the data and simulations shows agreement to better than 0.5 dB for all experimental signal wavelengths as illustrated by the solid lines in Figure 5. These results validate the performance of our simulator as a function of signal wavelength.
Slope efficiency data as a function of λs for the OFS/OFS and OFS/IXBlue setups are shown in Figure 7. Simulated slope efficiencies, given by the solid lines in Figure 7, agree well with the experimental data for all the measured signal wavelengths. The simulations indicate that high slope efficiencies of >70% can be expected from 1900 nm to 2020 nm. The simulations also show that the single clad fiber can deliver significant power at 2051 nm with reduced efficiency. We attribute this behavior to the presence of lower wavelength ASE and to reabsorption at lower wavelengths.

In Figure 8 we contrast experimental output spectra obtained for the two TDFA amplifiers, for saturated input signals of +2.1 dBm at 1952 nm and fiber laser pump power at 1567 nm of 3.2 W. These data are taken under the same conditions and yield optical signal to noise ratios (OSNR) of 57 dB/0.1 nm for both configurations. The spectra observed for both setups exhibit small differences in the wavelength region below 1950 nm. We believe this is caused by the different doping of the two fibers. Nevertheless, the operating wavelength regions and bandwidths for the OFS and IXBlue fibers are largely equivalent. We attribute this similarity to the low concentration of Tm in the two fibers where the scattering and ion-ion interactions can be neglected.

Figure 9 compares the experimental output spectrum for the OFS/IXBlue configuration with the results of our steady-state simulations. We find that the simulations predict the experimental data relatively well. At low wavelengths <1900 nm, we believe the differences between data and simulation are caused by the wavelength dependence in the passive components and the non-monochromatic spectrum of the single frequency laser source.
IV. DISCUSSION
The high measured internal gain of >55 dB represents a significant improvement over results previously reported [START_REF] Li | Thulium-doped fiber amplifier for optical communications at 2 µm[END_REF][START_REF] Jung | Silica-Based Thulium Doped Fiber Amplifiers for Wavelengths beyond the L-band[END_REF] for single stage TDFAs. Such a high small signal gain is promising for preamplifier, repeater, and low noise applications.
The high observed slope efficiency of 82% and output power of 2.6 W also show significant improvement over previously reported performance [START_REF] Li | Thulium-doped fiber amplifier for optical communications at 2 µm[END_REF][START_REF] Jung | Silica-Based Thulium Doped Fiber Amplifiers for Wavelengths beyond the L-band[END_REF]. The experimental SNR of 57 dB/0.1 nm for a saturated amplifier output is important for applications such as booster amplifiers.
The usable operating optical bandwidths of the tandem TDFAs, with the criterion of 10 dB down from the spontaneous emission peak (Figure 8), are estimated to be 122 nm for the OFS/OFS configuration and 130 nm for the OFS/IXBlue configuration. These values agree with previous work [START_REF] Li | Thulium-doped fiber amplifier for optical communications at 2 µm[END_REF][START_REF] Jung | Silica-Based Thulium Doped Fiber Amplifiers for Wavelengths beyond the L-band[END_REF] and are fully consistent with the simulated slope efficiencies in Figure 7.
Our steady state simulations of tandem TDFA performance agree well with the experimental data over a range of Pin from -30 dBm to +2 dBm, for measurements of G, NF, and Pout. This agreement covers the measured wavelength range of 1910 -2051 nm. Future work will extend the studied wavelength range toward lower wavelengths. The good agreement between experiment and theory confirms that our simulator is a useful tool for the design of tandem high gain, high power TDFAs.
Finally, we note that both the OFS and IXBlue configurations of the tandem TDFA exhibit similar performance, both experimentally and in simulation, confirming [START_REF] Liu | High-capacity Directly-Modulated Optical Transmitter for 2-µm Spectral Region[END_REF][START_REF] Li | Thulium-doped fiber amplifier for optical communications at 2 µm[END_REF][START_REF] Jung | Silica-Based Thulium Doped Fiber Amplifiers for Wavelengths beyond the L-band[END_REF] that we can employ multiple commercial sources of Tm-doped fiber in our simulation and design of high performance tandem optical amplifiers.
V. SUMMARY
We have reported the design and experimental performance of a tandem single clad TDFA, in-band pumped around 1560 nm and operating in the 1900 -2050 nm signal band. Small signal gains >55 dB, output powers as high as 2.6 W, and small signal noise figures as low as 3.2 dB were experimentally measured. Slope efficiencies as high as 82% were also observed, and an SNR of 57 dB/0.1 nm was demonstrated with output powers >2 W. Comparison of our data with steady state simulations yielded good agreement, thereby validating our model for high gain and high saturated output powers from the tandem two-stage TDFA over a wavelength range of 1952-2051 nm. Our design is appropriate for high transmit power, preamplifier, and repeater applications in the 2 µm region.
Figure 1. Tandem TDFA configuration.
Figure 2. G and NF as a function of Pin for the two tandem amplifiers at 1952 nm.
Figure 3. Measured gain and absorption coefficients for the OFS Tm-doped fiber.
Figure 4. Measured gain and absorption coefficients for the IXBlue Tm-doped fiber.
Figure 5. Saturated Pout vs. Pp for the OFS/OFS tandem amplifier.
Figure 6. Long term stability of the TDFA output for Pp = 2.43 W.
Figure 7. Slope efficiency as a function of λs for the two tandem configurations.
Figure 8. Saturated output spectra for the two tandem configurations.
Figure 9. Comparison of experimental and simulated output spectra for the OFS/IXBlue configuration.
VI. ACKNOWLEDGEMENTS
We are grateful to OFS and IXBlue for the Thulium-doped silica fibers, and to Eblana Photonics for the single frequency source lasers in the 2000 nm band. | 13,538 | [
"17946"
] | [
"524170",
"40873",
"524170"
] |
01766832 | en | [
"phys",
"sdu"
] | 2024/03/05 22:32:15 | 2015 | https://hal.univ-reunion.fr/hal-01766832/file/ILRC27_portafaix.pdf | Thierry Portafaix
Sophie Godin-Beekmann
Guillaume Payen
Martine De Mazière
Bavo Langerock
Susana Fernandez
Françoise Posny
Jean-Pierre Cammas
Jean-Marc Metzger
Hassan Bencherif
Ozone profiles obtained by DIAL technique at Maïdo Observatory in La Reunion Island: comparisons with ECC ozone-sondes, ground-based FTIR spectrometer and microwave radiometer measurements
Ozone profiles obtained by DIAL technique at Maïdo Observatory in La Reunion
Island: comparisons with ECC ozone-sondes, ground-based FTIR spectrometer and microwave radiometer measurements.
T. Portafaix (1)*, S. Godin-Beekmann (3), G. Payen (2), M. de Mazière (4), B. Langerock (4), S. Fernandez [START_REF] Neefs | BARCOS, an automation and remote control system for atmospheric observations with a Bruker interferometer[END_REF], F. Posny (1), J.P. Cammas (2), J. M. Metzger [START_REF] Fernandez | a novel ground based microwave radiometer for ozone measurement campaigns[END_REF], H. Bencherif [START_REF] Baray | Maïdo observatory: a new high-altitude station facility at Reunion Island (21° S, 55° E) for long-term atmospheric remote sensing and in situ measurements[END_REF] In addition, a microwave radiometer of University of Bern, has operated between late 2013 and early 2015.
STRATOSPHERIC DIAL SYSTEM AT REUNION ISLAND
This LIDAR was installed at Reunion Island in 2000 and moved to Maïdo facility in 2013 after instrumental updates.
Like any DIAL system, it requires the use of a pair of emitted wavelengths.
Laser sources are a tripled Nd:Yag laser (Spectra-Physics Lab 150) and a XeCl excimer laser (Lumonics PM 844). The Nd:Yag provides the non-absorbed beam at 355 nm with a pulse rate of 30 Hz and a power of 5W, and the excimer provides the absorbed beam at 308 nm with a pulse rate of 40 Hz and a power larger than 9W. An afocal optical system is used to reduce the divergence of the beam to 0.5 mrad.
The receiving telescope is composed of 4 parabolic mirrors (diameter: 500 mm). The backscattered signal is collected by 4 optical fibers located at the focal point of each mirror. The spectrometer used for the separation of the wavelengths is a Jobin Yvon holographic grating (3600 lines/mm, resolution 3 Å/mm, efficiency >25%).
The two Rayleigh beams at 308 and 355 nm are separated initially by the holographic grating and separated again at the output of the spectrometer by a lens system in the proportion 8% and 92 %, respectively, in order to adapt the signal to the non-saturation range of the photon-counting system. The optical signals are detected by 6 Hamamatsu non-cooled photomultipliers (PM). A mechanical chopper is used to cadence the laser shots and cut the signal in the lower altitude range where PM are saturated. This chopper consists of a steel blade rotating at 24 000 rpm in primary vacuum.
6 acquisition channels are recorded simultaneously: 2 channels at 355 nm corresponding to the lower and upper parts of the profile, 2 channels at 308 nm (lower and upper parts) and 2 Nitrogen Raman channels at 332 and 387 nm. In addition to the mechanical gating, both upper Rayleigh channels at 355 nm and 308 nm, are equipped with an electronic gating in order to cut the signals for the altitudes below 16 km and prevent signal-induced noise.
The system was moved to Maïdo Observatory by the end of 2012, after the update of the electronic system (now LICEL TR and PR transient recorders) and of the XeCl excimer laser. This new configuration allows us to obtain ozone profiles in the 15-45 km altitude range.
The lidar signals are recorded in a 3 min time file but averaged over the whole night acquisition (2 to 3h time integration per night) to increase the signal-to-noise ratio.
It is necessary to apply different corrections to the signal. The background signal is estimated and removed using an average or a linear regression in the high altitude range where the useful lidar signal is negligible (over 80 km). Another correction of the photomultiplier saturation for low layers is also required and applied.
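As an illustration of these two corrections (the altitude threshold and the non-paralyzable dead-time model below are typical choices for photon-counting lidars, not values quoted here), the background can be estimated from the far range of the profile and the count rates corrected for pile-up:

```python
import numpy as np

def remove_background(signal, altitude_km, z_min=80.0):
    """Subtract the background estimated by a linear fit of the signal above
    z_min, where the useful lidar return is negligible."""
    high = altitude_km > z_min
    slope, offset = np.polyfit(altitude_km[high], signal[high], 1)
    return signal - (slope * altitude_km + offset)

def desaturate(count_rate_hz, dead_time_s=4e-9):
    """Correct photon-counting rates for photomultiplier saturation with a
    non-paralyzable dead-time model: n_true = n_meas / (1 - n_meas * tau)."""
    return count_rate_hz / (1.0 - count_rate_hz * dead_time_s)
```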
OTHER STRATOSPHERIC OZONE INSTRUMENTS AT MAIDO FACILITY.
A ground-based microwave radiometer (GROMOS-C) designed to measure middle atmospheric ozone profiles has been installed at Maïdo Observatory in 2014 and removed in early 2015. It has been specifically designed for campaigns and is remotely controlled and operated continuously under all weather conditions. It measures the pressure broadened line at 110.836 GHz and can also measure the CO line at 115.271 GHz. The vertical profiles are retrieved by optimal estimation method [START_REF] Fernandez | a novel ground based microwave radiometer for ozone measurement campaigns[END_REF]. FTIR solar absorption measurements at high spectral resolution (from 0.0110 to 0.0035 cm -1 for ozone spectra) are performed by a Bruker 125HR spectrometer installed in 2013. This instrument is dedicated to NDACC measurements in the mid-infrared, covering the spectral range 600 to 6500 cm -1 (1.5 to 16 µm),and particularly the ozone retrievals are performed using the 1000-1005 cm -1 window in the 600-1400 cm -1 spectra (MCT detector, KBr beam-splitter). From the measured absorption spectrum, an inverse method (optimal estimation method) is used to trace back the vertical abundance profiles of gases present in the atmosphere. For ozone, information on about four independent layers in the atmosphere can be retrieved, roughly one in the troposphere and three in the stratosphere, up to about 45 km [START_REF] Vigouroux | Evaluation of tropospheric and stratospheric ozone trends over Western Europe from ground-based FTIR network observations[END_REF]. This instrument is operated remotely and automatically with an updated version of the BARCOS system [START_REF] Neefs | BARCOS, an automation and remote control system for atmospheric observations with a Bruker interferometer[END_REF]. In addition to the continuous monitoring of the atmospheric chemical composition and transport processes, the intention is also to participate to dedicated observations campaigns.
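Both the radiometer and the FTIR retrievals rely on the optimal estimation method. A compact sketch of the standard linear OEM update (generic notation; this is not the operational GROMOS-C or NDACC processing code) is given below. In the comparisons discussed later, it is the averaging kernel produced by this kind of retrieval that is applied to the higher-resolution DIAL profiles.

```python
import numpy as np

def oem_retrieval(y, K, x_a, S_a, S_e):
    """Linear optimal estimation: combine measurement y (Jacobian K) with the
    a priori profile x_a and covariances S_a (a priori) and S_e (noise).
    Returns the retrieved profile and the averaging kernel matrix A = G K."""
    S_e_inv = np.linalg.inv(S_e)
    S_a_inv = np.linalg.inv(S_a)
    gain = np.linalg.solve(K.T @ S_e_inv @ K + S_a_inv, K.T @ S_e_inv)
    x_hat = x_a + gain @ (y - K @ x_a)
    return x_hat, gain @ K
```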
In addition, ECC ozone soundings have been performed weekly at Reunion Island since 1998. The ozonesonde currently used is of ECC Z Ensci type with a 0.5% KI buffered solution from Droplet Measurement Technology [DMT]. It is coupled to a meteorological radiosonde M10 from MeteoModem. The effective vertical resolution of the ozone data is between 50 and 100 m [Thompson et al., 2003a,b, 2007]. The ozone measurement accuracy is around ±4% in the stratosphere below the 10 mbar pressure level, and the precision in total ozone column measured by the ECC sonde is around 5%. These ozone measurements are part of the SHADOZ (Thompson et al., 2003a, 2003b) and NDACC networks.
INTER-COMPARISONS
The first comparisons with simultaneous ECC soundings are very encouraging [START_REF] Baray | Maïdo observatory: a new high-altitude station facility at Reunion Island (21° S, 55° E) for long-term atmospheric remote sensing and in situ measurements[END_REF], with differences less than 10% throughout the profile. Figure 1 presents an example for June 23, 2014. Other comparisons between DIAL and GROMOS-C ozone profiles, after applying the averaging kernel, show very good agreement in the layer between 5 and 20 hPa, with differences less than 5%. Differences are larger in the lower and upper layers, reaching more than 15% in the 20-100 hPa layer.
These comparisons with the microwave radiometer were made using the DIAL "Rapid Delivery" profiles for the NDACC network, using average parameters for photomultiplier desaturation and background signal removal. These average parameters can introduce some additional error in the lower or upper part of the resulting profiles. It will be important for the final version of this paper to make these comparisons from consolidated lidar profiles using refined parameters.
The stratospheric ozone LIDAR is already NDACC qualified. It should be noted however that an inter-comparison campaign of all the NDACC lidar systems (water vapor, temperature, ozone) installed at the Maïdo Observatory with the mobile system of NASA-GSFC [START_REF] Mcgee | Improved stratospheric ozone lidar[END_REF] is planned for May 2015.
Comparisons with FTIR will be performed for the three layers between 15 and 45 km. The FTIR measurements in the ozone spectral range will be intensified during this 2015 intercomparison campaign.
The ozone number density is retrieved from the slope of the signals after differentiation [START_REF] Godin-Beekmann | Systematic DIAL ozone measurements at Observatoire de Haute-Provence[END_REF]. The lidar signals are corrected for Rayleigh extinction using a composite pressure-temperature profile computed from nearby meteorological soundings performed daily at Reunion Airport and from the Arletty model (based on meteorological data from the European Centre). It is also necessary in the DIAL technique to use a low-pass filter. The logarithm of each signal is fitted to a 2nd order polynomial, and the ozone number density is computed from the difference of the derivatives of the fitted polynomials. Varying the number of points over which the signals are fitted completes the filtering.
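A schematic version of this retrieval step is given below; the window length and the differential absorption cross section are illustrative placeholders, and the Rayleigh and aerosol corrections described above are omitted for brevity.

```python
import numpy as np

def log_slope(z, p, i, half_window):
    """Derivative at index i of a 2nd-order polynomial fitted to ln(p) over a
    sliding window; the window length sets the low-pass filtering."""
    s = slice(max(0, i - half_window), i + half_window + 1)
    a2, a1, _a0 = np.polyfit(z[s], np.log(p[s]), 2)
    return 2.0 * a2 * z[i] + a1

def dial_ozone(z_m, p_308, p_355, half_window=25, d_sigma_m2=1.3e-23):
    """Ozone number density (m^-3) from the pair of DIAL signals; d_sigma_m2
    is a placeholder O3 differential cross section between 308 and 355 nm."""
    n = np.empty_like(z_m, dtype=float)
    for i in range(len(z_m)):
        n[i] = -(log_slope(z_m, p_308, i, half_window)
                 - log_slope(z_m, p_355, i, half_window)) / (2.0 * d_sigma_m2)
    return n
```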
Fig 1: Ozone profiles on 24 June 2013 by stratospheric DIAL (black line) and ECC-ozonesonde at Maïdo Observatory (blue).
ACKNOWLEDGEMENT
The present work is supported by LACy, OSU-Réunion and the FP7 European NORS project. The authors acknowledge the European Community, the Région Réunion, the CNRS, and the University of La Réunion for their support and contribution in the construction phase of the research infrastructure OPAR (Observatoire de Physique de l'Atmosphère à La Réunion). OPAR and LACy are presently funded by CNRS (INSU) and Université de La Réunion, and managed by OSU-R (Observatoire des Sciences de l'Univers à la Réunion, UMS 3365). We acknowledge Anne M. Thompson (NASA/GSFC, USA) the SHADOZ network principal Investigator, and E. Golubic, P. Hernandez and L. Mottet who are deeply involved in the routine lidar measurements at Maïdo facility. | 10,159 | [
"9097",
"753395",
"14686",
"981360",
"981359",
"984077",
"954126",
"748979",
"955093",
"173939"
] | [
"70806",
"391690",
"172211",
"86537",
"86537",
"494144",
"70806",
"172211",
"172211",
"70806",
"86537",
"172211"
] |
01766861 | en | [
"sdv"
] | 2024/03/05 22:32:15 | 2011 | https://amu.hal.science/hal-01766861/file/Debanne-Physiol-Rev-2011.pdf | Dominique Debanne
Emilie Campanac
Andrzej Bialowas
AND Edmond Carlier
Gisèle Alcaraz
Axon Physiology
I. INTRODUCTION
The axon (from Greek ἄξων, axis) is defined as a long neuronal process that ensures the conduction of information from the cell body to the nerve terminal. Its discovery during the 19th century is generally credited to the German anatomist Otto Friedrich Karl Deiters (147), who first distinguished the axon from the dendrites. But the axon initial segment was originally identified by the Swiss Rüdolf Albert von Kölliker (293) and the German Robert Remak (439) (for a detailed historical review, see Ref. 480). The myelin of axons was discovered by Rudolf Virchow (548), and Louis-Antoine Ranvier (433) first characterized the nodes or gaps that now bear his name. The functional role of the axon as the output structure of the neuron was initially proposed by the Spanish anatomist Santiago Ramón y Cajal (429,430).
Two distinct types of axons occur in the peripheral and central nervous system (PNS and CNS): unmyelinated and myelinated axons, the latter being covered by a myelin sheath originating from Schwann cells in the PNS or oligodendrocytes in the CNS (Table 1). Myelinated axons can be considered as three compartments: an initial segment where somatic inputs summate and initiate an action potential; a myelinated axon of variable length, which must reliably transmit the information as trains of action potentials; and a final segment, the preterminal axon, beyond which the synaptic terminal expands (Fig. 1). The initial segment of the axon is not only the region of action potential initiation (117,124,514) but is also the most reliable neuronal compartment, where full action potentials can be elicited at very high firing frequencies without attenuation (488). Bursts of spikes display minimal attenuation in the AIS compared with the soma (488,561). The main axon is involved in the secure propagation of action potentials, but it is also able to integrate fluctuations in membrane potential originating from the somatodendritic region to modulate neurotransmitter release [START_REF] Alle | Combined analog and action potential coding in hippocampal mossy fibers[END_REF]291,489). Finally, the axon terminal that is principally devoted to excitation-release coupling with a high fidelity ( 159) is also the subject of activity-dependent regulation that may lead to spike broadening (209).
Generally, axons from the CNS are highly ramified and contact several hundreds of target neurons locally or distally. But, the function of the axon is not purely limited to the conduction of the action potential from the site of initiation near the cell body to the terminal. Recent experimental findings shed new light on the functional and computational capabilities of single axons, suggesting that several different complex operations are specifically achieved along the axon. Axons integrate subthreshold synaptic potentials and therefore signal both analog and digital events. Drop of conduction or backward propagation (reflection) may occur at specific axonal branch points under certain conditions. Axonal geometry together with the biophysical properties of voltage-gated channels determines the timing of propagation of the output message in different axonal branches. In addition, axons link central neurons through gap junctions that allow ultra-fast network synchrony. Moreover, local shaping of the axonal action potential may subsequently determine synaptic efficacy during repetitive stimulation. These operations have been largely described in in vitro preparations of brain tissue, but evidence for these processes is still scarce in the mammalian brain in vivo. In this paper we review the different ways in which the properties of axons can control the transmission of electrical signals. In particular, we show how the axon deter-mines efficacy and timing of synaptic transmission. We also discuss recent evidence for long-term, activity-dependent plasticity of axonal function that may involve morphological rearrangements of axonal arborization, myelination, regulation of axonal channel expression, and fine adjustment of AIS location. The cellular and molecular biology of the axon is, however, not discussed in depth in this review. The reader will find elsewhere recent reviews on axon-dendrite polarity [START_REF] Barnes | Establishment of axon-dendrite polarity in developing neurons[END_REF], axon-glia interaction (380,381), myelin formation [START_REF] Baumann | Biology of oligodendrocyte and myelin in the mammalian central nervous system[END_REF]483), axonal transport (138,250,405), and the synthesis of axonal proteins (210).
II. ORGANIZATION OF THE AXON
A. Complexity of Axonal Arborization: Branch Points and Varicosities
Axonal morphology is highly variable. Some axons extend locally (<1 mm long for inhibitory interneurons), whereas others may be as long as 1 m and more. The diameter of axons varies considerably (553). The largest axons in the mammalian PNS reach a diameter of ~20 µm [(264); but the biggest is the squid giant axon with a diameter close to 1 mm (575)], whereas the diameter of unmyelinated cortical axons in the mammalian brain varies between 0.08 and 0.4 µm ([START_REF] Berbel | The development of the corpus callosum in cats: a light-and electron-microscopic study[END_REF], 559). The complexity of axonal arborization is also variable. In one extreme, the cerebellar granule cell axon possesses a single T-shaped branch point that gives rise to the parallel fibers. On the other, many axons in the central nervous system typically form an elaborate and most impressive tree. For instance, the terminal arbor of thalamocortical axons in layer 4 of the cat visual cortex contains 150-275 branch points [START_REF] Antonini | Morphology of single geniculocortical afferents and functional recovery of the visual cortex after reverse monocular deprivation in the kitten[END_REF]. The complexity of axonal arborization is also extensive in cortical pyramidal neurons. Axons of hippocampal CA3 pyramidal cells display at least 100-200 branch points for a total axonal length of 150-300 mm, and a single cell may contact 30,000-60,000 neurons (269, 325, 347). GABAergic interneurons also display complex axons. Hippocampal and cortical inhibitory interneurons emit an axon with a very dense and highly branched arborization (235). One obvious function of axonal divergence is to allow synchronous transmission to a wide population of target neurons within a given brain area. For instance, hippocampal basket cells synchronize the firing of several hundred principal cells through their divergent axon (118).
The second morphological feature of axons is the presence of a large number of varicosities (synaptic boutons) that are commonly distributed in an en passant, "string of beads" manner along thin axon branches. A single axon may contain several thousand boutons (235,325,411). Their size ranges between ~1 µm for thin unmyelinated axons (482, 559) and 3-5 µm for large hippocampal mossy-fiber terminals [START_REF] Blackstad | Special axo-dendritic synapses in the hippocampal cortex: electron and light microscopic studies on the layer of mossy fibers[END_REF]482). Their density varies among axons, and the spacing of varicosities ranges between ~4 and ~6 µm in unmyelinated fibers (481,482).
B. Voltage-Gated Ion Channels in the Axon
Voltage-gated ion channels located in assigned subdomains of the axonal membrane carry out action potential initiation and conduction, and synaptic transmission, by governing the shape and amplitude of the unitary spike, the pattern of repetitive firing, and the release of neurotransmitters (Fig. 2). Recent reviews (310,387,540) have provided a detailed account of the voltage-gated ion channels in neurons, clearly illustrating the view that in the axon, the specific array of these channels in the various neuronal types adds an extra level of plasticity to synaptic outputs.
Channels in the axon initial segment
FIG. 1. Summary of axonal functions. A pyramidal neuron is schematized with its different compartments. Four major functions of the axon are illustrated (i.e., spike initiation, spike propagation, excitation-release coupling, and integration). A spike initiates in the axon initial segment (AIS) and propagates towards the terminal where the neurotransmitter is released. In addition, electrical signals generated in the somatodendritic compartment are integrated along the axon to influence spike duration and neurotransmitter release (green arrow).
A) SODIUM CHANNELS. Variations in potential arising from somato-dendritic integration of multiple inputs culminate
at the axon initial segment (AIS), where a suprathreshold resultant will trigger the action potential. This classical view relies on the presence of a highly excitable region in the initial segment of the axon (Fig. 3). Theoretical studies of action potential initiation have suggested that a 20- to 1,000-fold higher density of sodium (Na+) channels in the axon relative to that found in the soma and dendrites is required to permit the polarity of spike initiation in the axon of the neuron (157,346,373,434). The first evidence for concentration of Na+ channels at the axon hillock and initial segment of retinal ganglion cells was obtained with the use of broad-spectrum Na+ channel antibodies (564). After several fruitless attempts (119, 120), functional confirmation of the high concentration of Na+ channels in the AIS was achieved only recently with the use of Na+ imaging (193,290) and outside-out patch-clamp recordings from the soma and the axon (259). In these last studies, the largest Na+-dependent fluorescent signals or voltage-gated Na+ currents were obtained in the AIS of cortical pyramidal neurons (Fig. 3, A and B). Na+ current density is 34-fold greater in the AIS than in the soma (259). This estimation has been very recently confirmed for the Nav1.6 subunit detected in CA1 pyramidal neurons by a highly sensitive, quantitative electron microscope immunogold method (SDS-digested freeze-fracture replica-labeling; Ref. 333; Fig. 3C). The density of gold particles linked to Nav1.6 subunits measured by this method (~180/µm²) is fully compatible with a previous functional estimate in the AIS of L5 neurons, where the density of Na+ current amounts to 2,500 pS/µm² (i.e., ~150 channels/µm², given a 17 pS unitary Na+ channel conductance).
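As a back-of-the-envelope consistency check, the conductance-based channel density quoted above follows directly from dividing the macroscopic current density by the unitary channel conductance; only the two values given in the text are used here.

\[
N \;\approx\; \frac{G_{\mathrm{Na}}}{\gamma_{\mathrm{Na}}} \;=\; \frac{2{,}500\ \mathrm{pS}/\mu\mathrm{m}^2}{17\ \mathrm{pS}} \;\approx\; 147\ \mathrm{channels}/\mu\mathrm{m}^2,
\]

which rounds to the ~150 channels/µm² cited above and is of the same order of magnitude as the ~180 immunogold particles/µm² reported for Nav1.6.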
Three different isoforms of Na+ channels, which drive the ascending phase of the action potential, are present at the AIS, namely, Nav1.1, Nav1.2, and Nav1.6. Nav1.1 is dominant at the AIS of GABAergic neurons (394), but it is also found in the AIS of retinal ganglion cells (542) and in spinal cord motoneurons (169; see Table 2 for details). With a few exceptions, its expression in interneurons is restricted to the proximal part of the AIS and displays little overlap with Nav1.6, which occupies the distal part (169,332,394,542). Nav1.6 and Nav1.2 are principally associated with the AIS of myelinated and unmyelinated axons, respectively, with Nav1.2 expressed first during development and then gradually replaced by Nav1.6 concomitantly with myelination [START_REF] Boiko | Compact myelin dictates the differential targeting of two sodium channel isoforms in the same axon[END_REF][START_REF] Boiko | Functional specialization of the axon initial segment by isoform-specific sodium channel targeting[END_REF]. Although greatly diminished, the expression of Nav1.2 might persist in the AIS of adult neurons and is maintained in populations of unmyelinated axons. The two isoforms coexist in the AIS of L5 pyramidal neurons, with a proximal distribution of Nav1.2 and a distal distribution of Nav1.6 (259). Sodium channels in the distal part of the AIS display the lowest threshold, suggesting that this polarized distribution could explain the unique properties of the AIS, including action potential initiation (principally mediated by Nav1.6) and backpropagation (largely supported by Nav1.2; Refs. 171,259). A similar conclusion is drawn in CA1 pyramidal neurons, where Nav1.6 sodium channels play a critical role in spike initiation (449).
FIG. 2. Schematic representation of the distribution of sodium (top), potassium (middle), and calcium (bottom) channels in the different compartments of a myelinated axon. The cell body is symbolized by a pyramid shape (left). Channel densities are figured by the density of color. The myelin sheath is symbolized in gray. NoR, node of Ranvier; AIS, axon initial segment. Uncertain localizations are written in gray and accompanied by a question mark.
Nav channels generate three different Na+ currents that can be distinguished by their biophysical properties, namely, 1) the fast-inactivating transient Na+ current (I NaT), 2) the persistent Na+ current (I NaP), and 3) the resurgent Na+ current (I NaR; i.e., a current activated upon repolarization; Ref. 427). The latter two currents are activated at subthreshold or near-threshold potentials, and they play a critical role in the control of neuronal excitability and repetitive firing (345). I NaP is responsible for amplification of subthreshold excitatory postsynaptic potentials (EPSPs) and is primarily generated in the proximal axon [START_REF] Astman | Persistent sodium current in layer 5 neocortical neurons is primarily generated in the proximal axon[END_REF]512). I NaR is thought to facilitate reexcitation during repetitive firing and is generated in the AIS of cortical pyramidal neurons of the perirhinal cortex [START_REF] Castelli | Resurgent Na ϩ current in pyramidal neurones of rat perirhinal cortex: axonal location of channels and contribution to depolarizing drive during repetitive firing[END_REF]. I NaR might be present all along the axon, since a recent study indicates that this current shapes presynaptic action potentials at the calyx of Held (246).
B) POTASSIUM CHANNELS. Potassium channels are crucial regulators of neuronal excitability, setting resting membrane potentials and firing thresholds, repolarizing action potentials, and limiting excitability. Specific voltage-gated potassium (Kv) conductances are also expressed in the AIS (see Fig. 2). Kv1 channels regulate spike duration in the axon (291, 490; Fig. 4A). Kv1.1 and Kv1.2 are most frequently associated at the initial segment of both excitatory and inhibitory cortical and hippocampal neurons (267,332), and tend to be located more distally than Nav1.6. The current carried by these channels is indeed 10-fold larger in the distal part of the AIS than that measured in the soma (291). It belongs to the family of low-voltage-activated currents, because a sizable fraction of the current is already activated at voltages close to the resting membrane potential (291,490). These channels are also directly implicated in the high fidelity of action potential amplitude during burst firing (488).
Kv2.2 is present in the AIS of medial nucleus trapezoid neurons, where it promotes interspike hyperpolarization during repeated stimuli, thus favoring the extremely high frequency firing of these neurons (275). Kv7 channels (7.2 and 7.3), that bear the M-current (also called KCNQ channels), are also found in the AIS of many central neurons (154,398,546). These channels are essential to the regulation of AP firing in hippocampal principal cells, where they control the resting membrane potential and action potential threshold (399,473,474,579).
C) CALCIUM CHANNELS. The last players that have recently joined the AIS game are calcium channels (Fig. 2). Using two-photon Ca2+ imaging, Bender and Trussell (46) showed that T- and R-type voltage-gated Ca2+ channels are localized in the AIS of brain stem cartwheel cells. In this study, Ca2+ entry in the AIS of Purkinje cells and neocortical pyramidal neurons was also reported. These channels regulate firing properties such as spike timing, burst firing, and action potential threshold. The downregulation of T-type Ca2+ channels by dopamine receptor activation represents a powerful means to control action potential output [START_REF] Bender | Dopaminergic modulation of axon initial segment calcium channels regulates action potential initiation[END_REF]. Using calcium imaging, pharmacological tools, and immunochemistry, a recent study reported the presence of P/Q-type (Cav2.1) and N-type (Cav2.2) Ca2+ channels in the AIS of L5 neocortical pyramidal neurons (577). These channels determine pyramidal cell excitability through activation of calcium-activated BK channels.
Channels in unmyelinated axons
In unmyelinated fibers, action potential conduction is supported by Nav1.2 sodium channels that are thought to be homogeneously distributed [START_REF] Boiko | Functional specialization of the axon initial segment by isoform-specific sodium channel targeting[END_REF]223,558).
At least five voltage-gated K+ channel subunits are present in unmyelinated fibers (Table 2). Kv1.3 channels have been identified in parallel fiber axons of cerebellar granule cells (305,543). The excitability of Schaffer collaterals is strongly enhanced by α-dendrotoxin (DTX; a blocker of Kv1.1, Kv1.2, and Kv1.6) or margatoxin (MgTx; a blocker of Kv1.2 and Kv1.3), indicating that Kv1.2 is an important channel subunit for controlling excitability in these fibers (395). Hippocampal mossy fiber axons express Kv3.3 and Kv3.4 channels (105). The Kv7 activator retigabine reduces the excitability of C-type nerve fibers of the human sural nerve (315). Kv7 channels determine the excitability of pyramidal CA1 cell axons (546).
FIG. 4. K+ channels determine AP duration in the AIS of L5 pyramidal neurons and hippocampal mossy fiber terminals. A: DTX-sensitive K+ channels determine spike duration in L5 pyramidal axons. Top left: superimposed AP traces recorded from the soma (black) and at the indicated axonal distances from the axon hillock (red). Top right: representative K+ currents evoked by voltage steps from -110 to +45 mV in cell-attached patches from the soma, proximal AIS (5-30 µm), distal AIS (35-55 µm), and axonal sites (up to 400 µm). Bottom: impact of 50-100 nM DTX-I on somatic (left) and axonal (right) APs before (black) and after DTX-I (red). Note the enlargement of the AP in the AIS but not in the soma. [From Kole et al. (291), with permission from Elsevier.] B: DTX-sensitive K+ channels determine spike duration in the mossy-fiber terminal. Left: mossy-fiber bouton approached with a patch pipette. [From Bischofberger et al. (58), with permission from Nature Publishing Group.] Top right: K+ current activated in a mossy fiber bouton outside-out patch by pulses from -70 to +30 mV in the absence (control) and in the presence of 1 µM α-dendrotoxin (α-DTX). Bottom right: comparison of the spike waveform in the soma and mossy fiber terminal (MF terminal) of a hippocampal granule cell. Note the large spike duration in the soma. [Adapted from Geiger and Jonas (209), with permission from Elsevier.]
Channels in the nodes of Ranvier
In myelinated axons, conduction is saltatory and is made possible by the presence of hot spots of sodium channels in the node of Ranvier (Fig. 2). Two principal Na+ channel isoforms are found in the nodes of PNS and CNS axons: Nav1.6 and Nav1.1 [START_REF] Caldwell | Sodium channel Na(v)1.6 is localized at nodes of ranvier, dendrites, and synapses[END_REF]169,333; see Table 2). In a recent study, Lorincz and Nusser (333) found that the density of the Nav1.6 subunit in the node of Ranvier is nearly twice that observed in the AIS (~350 channels/µm²). Transient and persistent sodium currents have been identified in the node of myelinated axons [START_REF] Benoit | Properties of maintained sodium current induced by a toxin from Androctonus scorpion in frog node of Ranvier[END_REF]166).
Saltatory conduction at nodes is secured by the juxtaparanodal expression of Kv1.1 and Kv1.2 channels, and by the nodal expression of Kv3.1b and Kv7.2/Kv7.3, which all act to limit reexcitation of the axon (152,154,165,378,435,550,551,584,585). Other calcium- or sodium-activated potassium channels are encountered in the nodal region of myelinated axons (see Table 2).
Channels in the axon terminals
Axonal propagation culminates in the activation of chemical synapses with the opening of the presynaptic Cav2.1 and Cav2.2 calcium channels (Fig. 2). With the use of imaging techniques, the presence of calcium channels has been identified in en passant boutons of cerebellar basket cell axons where they presumably trigger transmitter release (330). Hot spots of calcium influx have also been reported at branch points (330). Although their function is not entirely clear, they may control signal transmission in the axonal arborization. In addition, Cav1.2 (L-type) calcium channels that are sparsely expressed all over hippocampal soma and dendrites are prominently labeled by immunogold electron microscopy in hippocampal axons and in mossy fiber terminals (531).
Functional sodium channels have been identified in presynaptic terminals of the pituitary (2), at the terminal of the calyx of Held (260,320), and in hippocampal mossy fiber terminal (179). While Nav1.2 is probably the sole isoform of sodium channel expressed at terminals (in agreement with its exclusive targeting to unmyelinated portions of axons), terminal Kv channels exhibit a greater diversity (159). Kv1.1/Kv1.2 subunits dominate in many axon terminals (see Table 2 for details). Mossy fiber axons and boutons are enriched in Kv1.4 subunits (126,478,543) which determine the spike duration (Fig. 4B) and regulate transmitter release (209). The other main function of Kv1 channels is preventing the presynaptic terminal from aberrant action potential firing (158).
While Kv1 channels start to activate at low threshold, Kv3 conductances are typical high-voltage-activated currents. They have been identified in terminals of many inhibitory and excitatory neurons (see Table 2). Functionally, Kv3 channels keep action potential brief, thus limiting calcium influx and hence release probability (218).
Kv7 channels are also present in preterminal axons and synaptic terminals (see Table 2 for details). The specific M-channel inhibitor XE991 inhibits synaptic transmission at the Schaffer collateral input, whereas the M-channel opener retigabine has the opposite effect, suggesting the presence of presynaptic Kv7 channels in Schaffer collateral terminals (546). It should be noted that these effects are observed in experimental conditions in which the M-current is activated, i.e., in the presence of a high external concentration of K+.
Other dampening channels, such as the hyperpolarization-activated cyclic nucleotide-gated cationic (HCN) channels, are expressed in the unmyelinated axon and in axon terminals (see Table 2). H-channels are also encountered at the calyx of Held giant presynaptic terminal (133) and in nonmyelinated peripheral axons of rats and humans [START_REF] Baginskas | The H-current secures action potential transmission at high frequencies in rat cerebellar parallel fibers[END_REF]225). The typical signature of H-channels is also observed in cerebellar mossy fiber boutons recorded in vitro or in vivo (432). The postsynaptic function of H-channels is now well understood, but their precise role in the preterminal axon and axon terminal is less clear. They may stabilize the membrane potential in the terminal. For instance, the axons of cerebellar basket cells are particularly short, and any hyperpolarization or depolarization arising from the somatodendritic compartment may significantly change the membrane potential in the terminal and thus alter transmitter release. Thus stabilizing the membrane potential in the terminal with a high density of HCN channels may represent a powerful means to prevent voltage shifts.
Besides voltage-gated conductances, axons and axon terminals also contain several ion-activated conductances, including large-conductance calcium-activated BK potassium channels (also called Maxi-K or Slo1 channels; Refs. 258,287,377,423,455), small-conductance calcium-activated SK potassium channels (390, 447), and sodium-activated K+ channels (K Na, also called Slack or Slo2.2 channels; Ref. 52), which are activated upon depolarization of the axon by the propagating action potential (Table 2). All these channels will also limit excitability of the nerve terminal by preventing uncontrolled repetitive activity.
G protein-gated inwardly rectifying potassium (GIRK) channels are also present at presynaptic terminals (Table 2). In the cortex and the cerebellum, these channels are functionally activated by GABA B receptors where they are thought to control action potential duration (188, 308).
C. Ligand-Gated Receptors in the Axon
Axons do not contain only voltage- or metabolite-gated ion channels but also express the presynaptic vesicular release machinery (586) and many types of ligand-gated receptors, including receptors for fast neurotransmitters and slow neuromodulators. We will focus here only on receptors that alter the excitability of the axon under physiological conditions.
Receptors in the axon initial segment
The axon initial segments of neocortical and hippocampal pyramidal neurons are particularly enriched in axo-axonic inhibitory contacts (499-501). A single axon initial segment receives up to 30 symmetrical synapses from a single axo-axonic (chandelier) GABAergic cell (500). Axon initial segments contain a high concentration of the α2 subunit variant of the GABA A receptor [START_REF] Brunig | Intact sorting, targeting, and clustering of gamma-aminobutyric acid A receptor subtypes in hippocampal neurons in vitro[END_REF]. Axo-axonic synapses display a fast and powerful GABAergic current (340). The strategic location of GABAergic synapses on the AIS has generally been thought to endow axo-axonic cells with a powerful inhibitory action on the output of principal cells. However, this view has been recently challenged. Gabor Tamás and colleagues (522) recently discovered that axo-axonic synapses impinging on L2-3 pyramidal neurons may in fact be excitatory in the mature cortex. Importantly, the potassium-chloride cotransporter 2 (KCC2) is very weakly expressed in the AIS, and thus the reversal potential for GABA currents is much more depolarized in the axon than in the cell body (522). Similar conclusions have been drawn in the basolateral amygdala (566) and in hippocampal granule cells with the use of local uncaging of GABA in the different compartments of the neuron (285). However, a recent study using noninvasive techniques concludes that inhibitory postsynaptic potentials (IPSPs) may be hyperpolarizing throughout the entire neuron (211).
Receptors in the axon proper
GABA A receptors are not exclusively located in the AIS, but they have also been demonstrated in myelinated axons of the dorsal column of the spinal cord (456,457) and in axonal branches of brain stem sensory neurons (545). Activation of these receptors modulates the compound action potential conduction and waveform. In some cases, propagation of antidromic spikes can be blocked by electrical stimulation of local interneurons (545). This effect is prevented by bath application of GABA A receptor channel blocker, suggesting that conduction block results from activation of GABA A receptors after the release of endogenous GABA. Similarly, GABA A receptors have been identified in the trunk of peripheral nerves [START_REF] Brown | Axonal GABA-receptors in mammalian peripheral nerve trunks[END_REF]. However, the precise mode of physiological activation of these receptors remains unknown, and there is no clear evidence that GABA is released from oligodendrocytes or Schwann cells (307).
Monoamines regulate axonal properties in neurons from the stomatogastric ganglion of the crab or the lobster [START_REF] Ballo | Complex intrinsic membrane properties and dopamine shape spiking activity in a motor axon[END_REF][START_REF] Bucher | Axonal dopamine receptors activate peripheral spike initiation in a stomatogastric motor neuron[END_REF]213,366). They also determine axonal properties in mammalian axons. For instance, subtype 3 of the serotonin receptor (5-HT 3 ) modulates excitability of unmyelinated peripheral rat nerve fibers (316).
Nicotinic acetylcholine receptors are encountered on unmyelinated nerve fibers of mammals where they modulate axonal excitability and conduction velocity [START_REF] Armett | The action of acetylcholine on conduction in mammalian non-myelinated fibres and its prevention by an anticholinesterase[END_REF]314).
Receptors in the periterminal axon and nerve terminals
While the axon initial segment and the axon proper contain essentially GABA A receptors, the preterminal axon and nerve terminals are considerably richer and express many different modulatory and synaptic receptors (180). Only a subset of these receptors affects axonal excitability.
A) GABA A RECEPTORS. Although GABA B receptors are widely expressed on presynaptic excitatory and inhibitory terminals [START_REF] Bettler | Molecular structure and physiological functions of GABA(B) receptors[END_REF]536), their action on periterminal and axonal excitability is slow and moderate. In contrast, high-conductance GABA A receptors control axonal excitability more accurately. Frank and Fuortes (197) first hypothesized modulation of transmitter release via axo-axonic inhibitory synapses to explain the reduction in monosynaptic transmission in the spinal cord (reviewed in Ref. 450). Based on the temporal correspondence between presynaptic inhibition and the depolarization of the primary afferent terminals, it was suggested that depolarization of the afferent was responsible for the inhibition of synaptic transmission. It was later shown that presynaptic inhibition is caused by a reduction in transmitter release (168,175). Since this pioneering work, primary afferent depolarization (PAD) has been demonstrated with axonal recordings and computational tools in many different sensory afferents, including the cutaneous primary afferents of the cat (224), group Ib afferent fibers of the cat spinal cord (309,312,313), and sensory afferents of the crayfish (100-102). These studies and others (132,515) indicate that activation of GABA A receptors produces a decrease in the amplitude of the presynaptic AP, thus decreasing transmitter release. Two mechanisms based on simulation studies have been proposed to account for presynaptic inhibition associated with PADs: a shunting mechanism (469) and inactivation of sodium channels (226). In the crayfish, the reduction in spike amplitude is mainly mediated by a shunting effect, i.e., an increase in membrane conductance due to the opening of GABA A receptors (102). The inactivation of sodium channels may add to the shunting effect for larger PADs.
Single action potentials evoked in cerebellar stellate and basket cells induce GABAergic currents measured in the soma, indicating that release of GABA regulates axonal excitability through GABA A autoreceptors (419). Application of the GABA A receptor agonist muscimol in the bath or locally to the axon modulates the excitability of hippocampal mossy fibers (452). The sign of the effect may be modulated by changing the intra-axonal Cl- concentration. Direct evidence for GABA A receptors on hippocampal granule cell axons has been provided unambiguously by Alle and Geiger (6) by the use of patch-clamp recordings from single mossy fiber boutons and local application of GABA. In mechanically dissociated CA3 pyramidal neurons from young rats, mossy fiber-derived release is strongly facilitated by stimulation of presynaptic GABA A receptors (273). This facilitation has been extensively studied with direct whole cell recordings from the mossy-fiber bouton: GABA A receptors modulate action potential-dependent Ca2+ transients and facilitate LTP induction (451).
B) GLYCINE RECEPTORS. In a similar way, glycine receptors may also control axonal excitability and transmitter release. At the presynaptic terminal of the calyx of Held, glycine receptors replace GABA A receptors as maturation proceeds (538). Activation of presynaptic glycine receptors produces a weakly depolarizing Cl- current in the nerve terminal and enhances synaptic release (537). The depolarization induces a significant increase in the basal concentration of Ca2+ in the terminal [START_REF] Awatramani | Modulation of transmitter release by presynaptic resting potential and background calcium levels[END_REF]. Similar conclusions are reached in the ventral tegmental area, where presynaptic glycine receptors lead to facilitation of GABAergic transmission through activation of voltage-gated calcium channels and an increase in the intraterminal concentration of Ca2+ (573).
C) GLUTAMATE RECEPTORS. At least three classes of glutamate receptors are encountered at presynaptic release sites where they regulate synaptic transmission (412). Only a small fraction of these receptors regulates axonal excitability. In the CA1 region of the hippocampus, kainate produces a marked increase in spontaneous IPSCs. This effect might result from the direct depolarization of the axons of GABAergic interneurons (472). In fact, kainate receptors lower the threshold for antidromic action potential generation in CA1 interneurons.
NMDA receptors are encountered in many axons. They determine synaptic strength at inhibitory cerebellar synapses (170, 212), at the granule cell-Purkinje cell synapse [START_REF] Bidoret | Presynaptic NR2Acontaining NMDA receptors implement a high-pass filter synaptic plasticity rule[END_REF][START_REF] Casado | Presynaptic N-methyl-Daspartate receptors at the parallel fiber-Purkinje cell synapse[END_REF], at L5-L5 excitatory connections (494), and at L2/3 excitatory synapses (127). However, recent studies indicate that axonal NMDA receptors do not produce sufficient depolarization or calcium entry in cerebellar stellate cells (111) or in L5 pyramidal cell axons (112) to significantly affect axonal excitability. In fact, NMDA receptors might modulate presynaptic release simply by the electrotonic transfer of depolarization from the somatodendritic compartments to the axonal compartment (111,112; see also sect. VC). However, such a tonic change in the somatodendritic compartment of the presynaptic cell has not been observed in paired recordings when presynaptic NMDA receptors are pharmacologically blocked (494).
D) PURINE RECEPTORS. ATP and its degradation products, ADP and adenosine, are considered today as important signaling molecules in the brain [START_REF] Burnstock | Physiology and pathophysiology of purinergic neurotransmission[END_REF]. Classically, ATP is coreleased from vesicles with acetylcholine (437) or GABA (274). However, a recent study indicates that ATP can also be released by the axon in a nonvesicular manner through volume-activated anion channels (191). In fact, propagating action potentials cause microscopic swelling and movement of axons that may in turn stimulate volume-activated anion channels to restore normal cell volume through the release of water together with ATP and other anions.
Purinergic receptors are divided into three main families: P1 receptors (G protein-coupled, activated by adenosine and subdivided into A 1 , A 2A , A 2B and A 3 receptors), P2X receptors (ligand-gated, activated by nucleotides and subdivided into P2X 1-7 ), and P2Y (G protein-coupled, activated by nucleotides and subdivided into P2Y 1-14 ) [START_REF] Burnstock | Purinergic signalling and disorders of the central nervous system[END_REF]. Purine receptors are found on axon terminals where they modulate transmitter release. For instance, activation of presynaptic A 1 receptor powerfully inhibits glutamate, but not GABA release, in the hippocampus (529, 574). In contrast, activation of presynaptic P2X receptor by ATP enhances GABA and glycine release in spinal cord (263,442). P2X 7 receptors are expressed on developing axons of hippocampal neurons, and their stimulation promotes axonal growth and branching in cultured neurons (155).
III. AXON DEVELOPMENT AND TARGETING OF ION CHANNELS IN THE AXON
Neurons acquire their typical form through a stereotyped sequence of developmental steps. The cell initially establishes several short processes. One of these neurites grows very rapidly compared with the others and becomes the axon (161). The spatial orientation of the growing axon is under the control of many extracellular cues that have been reviewed elsewhere [START_REF] Barnes | Establishment of axon-dendrite polarity in developing neurons[END_REF]156). This section is therefore focused on the description of the major events underlying development and targeting of ion channels in the three main compartments of the axon.
A. Axon Initial Segments
In addition to its role in action potential initiation involving a high density of ion channels, the AIS may also be defined by the presence of a specialized and complex cellular matrix, specific scaffolding proteins, and cellular adhesion molecules (393). The cellular matrix, together with the accumulation of anchored proteins, forms a membrane diffusion barrier (375,563). This diffusion barrier plays an important role in preferentially segregating proteins into the axonal compartment. Recently, a cytoplasmic barrier to protein traffic has been described in the AIS of cultured hippocampal neurons (502). This filter allows entry of molecular motors of the kinesin-1 family (KIF5) that specifically carry synaptic vesicle proteins which must be targeted to the axon. The entry of kinesin-1 into the axon is due to the difference in the nature of microtubules in the soma and the AIS (294). Molecular motors (KIF17) that carry dendrite-targeted postsynaptic receptors cannot cross the axonal filter [START_REF] Arnold | Actin and microtubule-based cytoskeletal cues direct polarized targeting of proteins in neurons[END_REF]502,567). This barrier develops between 3 and 5 days in vitro (i.e., ~1 day after the initial elongation of the process that becomes an axon).
The scaffolding protein ankyrin G (AnkG) is critical for the assembly of the AIS and is frequently used to define this structure in molecular terms (233). The restriction of many AIS proteins within this small axonal region is achieved through their anchoring to the actin cytoskeleton via AnkG (296). AnkG is attached to the actin cytoskeleton via βIV spectrin [START_REF] Berghs | betaIV spectrin, a new spectrin localized at axon initial segments and nodes of ranvier in the central and peripheral nervous system[END_REF]. Sodium channels, Kv7 channels, the cell adhesion molecule neurofascin-186 (NF-186), and neuronal cell adhesion molecules (NrCAM) are specifically targeted to the AIS through interaction with AnkG (154,206,245,398). Furthermore, deletion of AnkG causes axons to acquire characteristics of dendrites, with the appearance of spines and postsynaptic densities (244). While Nav and Kv7 channels are clustered through their interaction with AnkG, clustering of Kv1 channels in the AIS is under the control of the postsynaptic density 93 (PSD-93) protein, a member of the membrane-associated guanylate kinase (MAGUK) family (391). Some of the interactions between channels and AnkG are regulated by protein kinases. For instance, the protein kinase CK2 regulates the interaction between Nav channels and AnkG [START_REF] Brechet | Protein kinase CK2 contributes to the organization of sodium channels in axonal membranes by regulating their interactions with ankyrin G[END_REF]. But other factors might also control the development and targeting of Na+ channels at the AIS. For instance, the sodium channel β1 subunit determines the development of Nav1.6 at the AIS [START_REF] Brackenbury | Functional reciprocity between Na ϩ channel Nav1.6 and beta1 subunits in the coordinated regulation of excitability and neurite outgrowth[END_REF]. The absence of phosphorylated IκBα at the AIS, an inhibitor of the nuclear transcription factor NF-κB, impairs sodium channel concentration (458).
The AIS may also contain axon specification signals (222). Cutting the axon of cultured hippocampal neurons is followed by axonal regeneration at the same site if the cut is >35 µm from the soma (i.e., the AIS is still connected to the cell body). In contrast, regeneration occurs from a dendrite if the AIS has been removed (222).
B. Nodes of Ranvier
During development, Nav1.2 channels appear first at immature nodes of Ranvier (NoR) and are eventually replaced by Nav1.6 [START_REF] Boiko | Compact myelin dictates the differential targeting of two sodium channel isoforms in the same axon[END_REF]. Later, Kv3.1b channels appear at the juxtaparanodal region, just before Kv1.2 channels (152). While targeting of ion channels at the AIS largely depends on intrinsic neuronal mechanisms, the molecular organization of the NoR and its juxtaparanodal region is mainly controlled by interactions between proteins from the axon and the myelinating glia (310,393,413). For instance, in mutants that display abnormal myelin formation, Nav1.6 channels are dispersed or only weakly clustered in CNS axons [START_REF] Boiko | Compact myelin dictates the differential targeting of two sodium channel isoforms in the same axon[END_REF]276). In PNS axons, nodes are initiated by interactions between secreted gliomedin, a component of the Schwann cell extracellular matrix, and axonal neurofascin-186 (NF-186). But once the node is initiated, targeting of ion channels at the NoR resembles that at the AIS. Accumulation of Nav channels at the NoR also depends on AnkG (173). However, Kv1 clustering at the juxtaparanodal region of PNS axons depends on the cell adhesion molecules Caspr2 and TAG-1, which partly originate from the glia, but not on MAGUKs (257,413,414).
C. Axon Terminals
In contrast to the AIS and the NoR, much less is known about the precise events underlying development and targeting of ion channels in axon terminals. However, the trafficking of N- and P/Q-type Ca2+ channels to the axon terminal and that of GABA B receptors illustrate the presence of specific targeting motifs on axonal terminal proteins. The COOH-terminal region of the N-type Ca2+ channel (Cav2.2) contains an amino acid sequence that constitutes a specific binding motif to the presynaptic protein scaffold, allowing their anchoring to the presynaptic terminal (356,357). Furthermore, direct interactions have been identified between the t-SNARE protein syntaxin and N-type Ca2+ channels (323,479). Deletion of the synaptic protein interaction (synprint) site in the intracellular loop connecting domains II and III of P/Q-type Ca2+ channels (Cav2.1) not only reduces exocytosis but also inhibits their localization to axon terminals (370).
One of the two subtypes of GABA B receptor (GABA B1a ) is specifically targeted to the axon (547). The GABA B1a subunit carries two NH 2 -terminal interaction motifs, the "sushi domains" that are potent axonal targeting signals. Indeed, mutations in these domains prevent protein interactions and preclude localization of GABA B1a subunits to the axon, while fusion of the wild-type GABA B1a to mGluR1a preferentially redirects this somatodendritic protein to axons and their terminals [START_REF] Biermann | The Sushi domains of GABA B receptors function as axonal targeting signals[END_REF].
In the pinceau terminal of cerebellar basket cells, HCN1 channels develop during the end of the second postnatal week (334). This terminal is particularly enriched in Kv1 channels (319), but the precise role of molecular partners and scaffolding proteins in clustering these channels remains unknown (392).
IV. INITIATION AND CONDUCTION OF ACTION POTENTIALS
A. Action Potential Initiation
Determining the spike initiation zone is particularly important in neuron physiology. The action potential classically represents the final step in the integration of synaptic messages at the scale of the neuron [START_REF] Bean | The action potential in mammalian central neurons[END_REF]514). In addition, most neurons in the mammalian central nervous system encode and transmit information via action potentials. For instance, action potential timing conveys significant information for sensory or motor functions (491). In addition, action potential initiation is also subject to many forms of activity-dependent plasticity in central neurons (493). Thus information processing in the neuronal circuits greatly depends on how, when, and where spikes are generated in the neuron.
A brief historical overview
Pioneering work in spinal motoneurons in the 1950s indicated that action potentials were generated in the AIS or possibly the first NoR (124,187,202). Microelectrode recordings from motoneurons revealed that the action potential consisted of two main components: an "initial segment" (IS) component was found to precede the full action potential originating in the soma [i.e., the somatodendritic (or SD) component]. These two components could be isolated whatever the mode of action potential generation (i.e., antidromic stimulation, direct current injection, or synaptic stimulation), but the best resolution was obtained with the first derivative of the voltage. The IS component is extremely robust and can be isolated from the SD component by antidromic stimulation of the axon in a double-shock paradigm (124). For very short interstimulus intervals, the SD component fails but not the IS component. With simultaneous recordings at multiple axonal and somatic sites of the lobster stretch receptor neuron, Edwards and Ottoson (176) also reported that the origin of the electrical impulse occurred first in the axon, but at a certain distance from the cell body (176).
This classical view was challenged in the 1980s and 1990s with the observation that under very specific conditions, action potentials may be initiated in the dendrites (438). The development in the 1990s of approaches using simultaneous patch-pipette recordings from different locations on the same neuron was particularly precious to address the question of the site of action potential initiation (514, 516). In fact, several independent studies converged on the view that dendrites were capable of generating regenerative spikes mediated by voltage-gated sodium and/or calcium channels (220,331,462,513,565). The initiation of spikes in the dendrites (i.e., preceding somatic action potentials) has been reported in neocortical (513), hippocampal (220), and cerebellar neurons (431) upon strong stimulation of dendritic inputs. However, in many different neuronal types, threshold stimulations preferentially induce sodium spikes in the neuronal compartment that is directly connected to the axon hillock [START_REF] Bischofberger | Action potential propagation into the presynaptic dendrites of rat mitral cells[END_REF]242,318,354,506,511,513,516). Thus the current rule is that the axon is indeed a low-threshold initiation zone for sodium spike generation. But the initiation site was precisely located only recently by direct recording from the axon.
Initiation in the axon
The recent development of techniques allowing loose-patch [START_REF] Atherton | Autonomous initiation and propagation of action potentials in neurons of the subthalamic nucleus[END_REF][START_REF] Boudkkazi | Release-dependent variations in synaptic latency: a putative code for short-and long-term synaptic dynamics[END_REF]116,362,422) or whole cell recording (291,355,463,489,561) from single axons of mammalian neurons, together with the use of voltage-sensitive dyes (196,396,397) or sodium imaging [START_REF] Bender | Axon initial segment Ca 2ϩ channels influence action potential generation and timing[END_REF]193,290), provide useful means to precisely determine the spike initiation zone. These recordings have revealed that sodium spikes usually occur in the axon before those in the soma (Fig. 5, A and B). More specifically, the initiation zone can be estimated as the axonal region where the advance of the axonal spike relative to the somatic spike is maximal (Fig. 5C). In addition, bursts of action potentials are generally better identified in the axon than in the cell body (355,561).
In myelinated axons, action potentials are initiated at the AIS [START_REF] Atherton | Autonomous initiation and propagation of action potentials in neurons of the subthalamic nucleus[END_REF]196,283,284,396,397,488,578). Depending on the cell type, the initiation zone varies, being located between 15 and 40 µm from the soma. In layer 5 pyramidal neurons, the action potential initiation zone is located in the distal part of the AIS, i.e., at 35-40 µm from the axon hillock [START_REF] Boudkkazi | Release-dependent variations in synaptic latency: a putative code for short-and long-term synaptic dynamics[END_REF]397,578). Similar estimates have been obtained in subicular pyramidal neurons, with an AP initiation zone located at ~40 µm from the soma, beyond the AIS (119). The precise reason why the locus of axonal spike generation in myelinated fibers varies between the AIS and the first NoR is not known, but it may result from the heterogeneous distribution of Nav and Kv channels as well as the existence of ectopic zones for spike initiation (410,580). In cerebellar Purkinje cell axons, the question was debated until recently. On the basis of latency differences between simultaneous whole cell somatic and cell-attached axonal recordings, the action potential was found to be generated at the first NoR (at a distance of ~75 µm; Ref. 116). However, in another study, it was concluded that spike initiation was located at the AIS (i.e., 15-20 µm from the soma; Ref. 284). Here, the authors found that the AIS, but not the first NoR, was highly sensitive to focal application of a low concentration of TTX. Initiation in the AIS has recently been confirmed by the use of noninvasive recording techniques (196,396). The origin of the discrepancy between the first two studies has been elucidated: cell-attached recordings from the axon initial segment are not appropriate because the capacitive and ionic currents overlap, preventing identification of the spike onset.
In unmyelinated axons, the initiation zone has been identified at 20-40 µm from the axon hillock. In CA3 pyramidal neurons, the AP initiation zone is located at 35-40 µm from the soma (363). A much shorter distance has been reported in hippocampal granule cell axons, where the site of initiation has been estimated at 20 µm from the axon hillock (463). This proximal location of the spike initiation zone is corroborated by the labeling of sodium channels and ankyrin-G within the first 20 µm of the axon (299). A possible explanation for this very proximal location might be that the very small diameter of granule cell axons (~0.3 µm, Ref. 208) increases the electrotonic distance between the soma and proximal axonal compartments, thus isolating the site of initiation from the soma.
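The diameter argument can be related to standard cable theory. The expression below is the textbook steady-state length constant of an infinite cylindrical cable; it is given here only to make the scaling with diameter explicit, not as a quantitative model of the granule cell axon, and R_m and R_i denote generic specific membrane resistance and axial resistivity rather than measured values.

\[
\lambda \;=\; \sqrt{\frac{r_m}{r_i}} \;=\; \sqrt{\frac{R_m\,d}{4\,R_i}} \;\propto\; \sqrt{d}.
\]

For fixed R_m and R_i, a ~0.3 µm axon therefore has a length constant only about √0.3 ≈ 0.55 times that of a 1 µm axon, so a given physical distance from the soma corresponds to a proportionally larger electrotonic distance.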
Threshold of action potential initiation
An essential feature of the all-or-none property of the action potential is the notion of a threshold for eliciting a spike. Converging evidence points to the fact that neuronal firing threshold may not be defined by a single value. The first studies of Lapicque (317) were designed to describe the role of depolarization time on the threshold current: the threshold current was reduced when its duration increased. Based on Hodgkin-Huxley membrane equations, Noble and Stein (384,385) defined the spike threshold as the voltage where the summed inward membrane current exceeds the outward current.
In contrast with the current threshold, the voltage threshold could not be assessed in neurons until intracellular records were obtained from individual neurons [START_REF] Brock | The recording of potentials from motoneurones with an intracellular electrode[END_REF]. Given the complex geometry of the neuron, a major question was raised in the 1950s: is the action potential threshold uniform over the neuron? Since the spike is initiated in the axon, it was postulated that the voltage threshold was 10-20 mV lower (more hyperpolarized) in the AIS than in the cell body (124). Because direct recording from the axon was not accessible for a long time, there was little evidence for or against this notion. In an elegant study, Maarten Kole and Greg Stuart recently solved this question with direct patch-clamp recordings from the AIS (292). They showed that the current threshold to elicit an action potential is clearly lower in the AIS (Fig. 6A). However, the voltage threshold, defined as the membrane potential at which the rate of change of voltage (i.e., the first derivative) crosses a certain value (generally 10-50 V/s, Refs. [START_REF] Anderson | Thresholds of action potentials evoked by synapses on the dendrites of pyramidal cells in the rat hippocampus in vitro[END_REF]201,471), appeared surprisingly to be highest in the axon (Fig. 6A). This counterintuitive observation is due to the fact that Na+ channels in the AIS drive a local depolarizing ramp just before action potential initiation that attenuates over very short distances as it propagates to the soma or the axon proper, thus giving the impression that the voltage threshold is higher (Fig. 6B). When this local potential is abolished by focal application of TTX to the AIS, the voltage threshold is effectively lower in the AIS (292). In other words, the spike measured outside the AIS is a propagating spike, and the correct threshold measure in those compartments is the threshold of the SD component. This subtlety may also be at the origin of unconventional proposals for Na+ channel gating during action potential initiation [START_REF] Baranauskas | Sodium currents activate without a Hodgkin-and-Huxley-type delay in central mammalian neurons[END_REF]236,359,379,578). Indeed, the onset of the action potential appears faster in the soma than expected from Hodgkin-Huxley modelling.
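The operational, dV/dt-based definition of voltage threshold used in these studies can be made concrete with a short analysis sketch. The code below is a generic illustration of the criterion described above (threshold taken as the voltage at which dV/dt first exceeds a fixed value in the 10-50 V/s range), not the analysis code of any of the cited studies; the 20 V/s criterion, the sampling rate, and the synthetic waveform are assumptions used only for the example.

```python
import numpy as np

def spike_threshold(v_mv, dt_ms, criterion_v_per_s=20.0):
    """Estimate the spike voltage threshold as the membrane potential at which
    dV/dt first crosses a fixed criterion (20 V/s by default, an assumed value)."""
    dvdt = np.gradient(v_mv, dt_ms)            # mV/ms, numerically equal to V/s
    above = np.flatnonzero(dvdt >= criterion_v_per_s)
    if above.size == 0:
        return None                            # no spike detected in the trace
    return v_mv[above[0]]                      # voltage at the first criterion crossing

if __name__ == "__main__":
    dt = 0.01                                  # ms (i.e., 100 kHz sampling, assumed)
    t = np.arange(0.0, 5.0, dt)
    # Idealized spike upstroke from -65 mV to +15 mV (illustrative only)
    v = -65.0 + 80.0 / (1.0 + np.exp(-(t - 2.5) / 0.05))
    print("estimated threshold (mV):", spike_threshold(v, dt))
```

Because the value returned depends on the local voltage trajectory preceding the spike, the same criterion can yield different apparent thresholds in different compartments, which is the measurement subtlety discussed above.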
The spike threshold is not a fixed point but rather corresponds to a range of voltage. For instance, intrasomatic recordings from neocortical neurons in vivo reveal that spike threshold is highly variable [START_REF] Azouz | Cellular mechanisms contributing to response variability of cortical neurons in vivo[END_REF][START_REF] Azouz | Dynamic spike threshold reveals a mechanism for synaptic coincidence detection in cortical neurons in vivo[END_REF]248). The first explanation usually given to account for this behavior involves channel noise. In fact, the generation of an AP near the threshold follows probability laws because the opening of voltage-gated channels that underlie the sodium spike is a stochastic process (465,560). The number of voltage-gated channels is not large enough to allow the contribution of channel noise to be neglected.
However, this view is now challenged by recent findings indicating that the large spike threshold variability measured in the soma results from back-propagation of the AP from the AIS to the soma when the neuron is excited by trains of noisy inputs (578). In fact, at the point of spike initiation (i.e., the AIS), the spike is generated with relatively low variance in membrane potential threshold, but as it back-propagates towards the soma, variability increases. This behavior is independent of channel noise, since it can be reproduced by a deterministic Hodgkin-Huxley model (578). The apparent increase in spike threshold variance results in fact from the rearrangement of the timing relationship between spikes and the frequency components of the subthreshold waveform during propagation.
Timing of action potential initiation
Synchronous population activity is critical for a wide range of functions across many different brain regions including sensory processing (491), spatial navigation (388), and synaptic plasticity [START_REF] Bi | Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type[END_REF]142,143). Whereas temporal organization of network activity clearly relies on both timing at the synapse [START_REF] Boudkkazi | Release-dependent variations in synaptic latency: a putative code for short-and long-term synaptic dynamics[END_REF] and elsewhere within the network (441), the mechanisms governing precise spike timing in individual neurons are determined at the AIS.
Recently, the rules governing the temporal precision of spike timing have started to emerge.
Outward voltage-gated currents with biophysical properties that sharpen physiological depolarizations, such as EPSPs, reduce the time window during which an action potential can be triggered and thus enhance spike precision [START_REF] Axmacher | Intrinsic cellular currents and the temporal precision of EPSP-action potential coupling in CA1 pyramidal cells[END_REF]200,207). In contrast, outward currents that reduce the rate of depolarization leading to the generation of a spike decrease spike-time precision (131,503). Here, high spike jitter may result from the fact that channel noise near threshold becomes a dominant factor during slow voltage trajectories. With the recent development of axonal recordings, it will be important to determine how these currents shape voltage in the AIS.
Plasticity of action potential initiation
The probability of action potential initiation in response to a given stimulus is not absolutely fixed during the life of a neuron but is subject to activity-dependent regulation. In their original description of LTP, Bliss and Lømo (63) noticed that the observed increase in the population spike amplitude, which reflects the number of postsynaptic neurons firing in response to a given synaptic stimulation, was actually greater than expected simply from the LTP-evoked increase in the population EPSP [START_REF] Bliss | Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path[END_REF]. This phenomenon was termed EPSP-spike or E-S potentiation. The intracellular signature of E-S potentiation is an increased probability of firing in response to a given synaptic input. This plasticity appears to be of fundamental importance because it directly affects the input-output function of the neuron. Originally described in the dentate gyrus of the hippocampus, E-S potentiation was also found at the Schaffer collateral-CA1 cell synapse when the afferent fibers were tetanized [START_REF] Abraham | Long-term potentiation involves enhanced synaptic excitation relative to synaptic inhibition in guinea-pig hippocampus[END_REF][START_REF] Andersen | Possible mechanisms for long-lasting potentiation of synaptic transmission in hippocampal slices from guinea-pigs[END_REF]136), and it may be induced associatively with coincident activation of synaptic input and a back-propagated action potential [START_REF] Campanac | Spike timing-dependent plasticity: a learning rule for dendritic integration in rat CA1 pyramidal neurons[END_REF]. Although dendritic conductances such as A-type K+ (199) or h-type currents [START_REF] Campanac | Downregulation of dendritic I(h) in CA1 pyramidal neurons after LTP[END_REF] are implicated in its expression, regulation of axonal channels cannot be totally excluded. Indeed, hyperpolarization of the spike threshold has been encountered in many forms of long-lasting increase in excitability in cerebellar and hippocampal neurons [START_REF] Aizenman | Rapid, synaptically driven increases in the intrinsic excitability of cerebellar deep nuclear neurons[END_REF]568). Furthermore, activation of the fast transient Na+ current is regulated following LTP induction in CA1 pyramidal neurons (568).
B. Conduction of Action Potentials Along the Axon
A brief overview of the principle of conduction in unmyelinated axons
Conduction of the action potential has been primarily studied and characterized in invertebrate axons. According to a regenerative scheme, propagation along unmyelinated axons depends on the passive spread of current ahead of the active region to depolarize the next segment of membrane to threshold. The nature of the current flow involved in spike propagation is generally illustrated by an instantaneous picture of the action potential plotted spatially along the axon. Near the leading edge of the action potential there is a rapid influx of sodium ions that will depolarize a new segment of membrane towards threshold. At the trailing edge of the action potential, current flows out because potassium channels are open, thus restoring the membrane potential towards its resting value. Because of both the inactivation of voltage-gated Na+ channels and the high conductance state of hyperpolarizing K+ channels, the piece of axonal membrane that has just been excited is not immediately reexcitable. Thus the action potential cannot propagate backward, and conduction is therefore generally unidirectional. As the action potential leaves the activated region, Na+ channels recover from inactivation, the K+ conductance declines, and the membrane becomes reexcitable.
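The qualitative picture above corresponds to the standard active cable equation, reproduced here only as a compact reminder of how local-circuit current couples neighboring membrane segments; the notation is generic (diameter d, axial resistivity R_i, specific membrane capacitance C_m) and is not taken from any particular study cited in this section.

\[
\frac{d}{4 R_i}\,\frac{\partial^2 V}{\partial x^2} \;=\; C_m\,\frac{\partial V}{\partial t} \;+\; I_{\mathrm{Na}}(V,t) \;+\; I_{\mathrm{K}}(V,t) \;+\; I_{\mathrm{leak}}(V).
\]

The left-hand side is the axial current that spreads ahead of the active region and depolarizes the next segment towards threshold, while the ionic terms on the right capture the Na+ influx at the leading edge and the K+ efflux (together with Na+ channel inactivation) that leaves the membrane behind the spike transiently refractory.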
Conduction in myelinated axons
In myelinated (or medullated) axons, conduction is saltatory (from the Latin saltare, to jump). Myelin is formed by wrapped sheaths of membrane from Schwann cells in peripheral nerves and from oligodendrocytes in central axons. The number of wrappings varies between 10 and 160 [START_REF] Arbuthnott | Ultrastructural dimensions of myelinated peripheral nerve fibers in the cat and their relation to conduction velocity[END_REF]. The presence of the myelin sheath has a critical impact on the physiology of the axon. The effective membrane resistance of the axon is locally increased by several orders of magnitude (up to 300 times), and the membrane capacitance is reduced by a similar factor. The myelin sheath is interrupted periodically at the NoR, exposing patches of axonal membrane to the external medium. The internodal distance is usually 100 times the external diameter of the axon, ranging between 200 µm and 2 mm (264, 453). The electrical isolation of the axon by myelin restricts current flow to the node, as ions cannot flow into or out of the high-resistance internodal region. Thus only the restricted region of the axon at the node is involved in impulse propagation. The impulse therefore jumps from node to node, thereby greatly increasing conduction velocity. Another physiologically interesting consequence of myelination is that less metabolic energy is required to maintain the gradients of sodium and potassium, since the flow of these ions is restricted to the nodes. However, a recent study from the group of Geiger [START_REF] Alle | Energy-efficient action potentials in hippocampal mossy fibers[END_REF] indicates that, because of the matched properties of Na+ and K+ channels, energy consumption is minimal in unmyelinated axons of the hippocampus.
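The statement that myelin raises the effective membrane resistance and lowers the capacitance "by a similar factor" follows from treating the wraps as resistive and capacitive layers in series. The relation below is a minimal idealization (uniform wraps, each contributing two membranes in series, periaxonal space neglected), not a measured property of any particular fiber.

\[
R_{\mathrm{myelin}} \;\approx\; 2N\,R_m, \qquad C_{\mathrm{myelin}} \;\approx\; \frac{C_m}{2N},
\]

where N is the number of wraps. For the 10-160 wraps quoted above, this gives factors of roughly 20-320, consistent with the up-to-300-fold changes mentioned in the text.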
The principle of saltatory conduction was first suggested by Lillie in 1925 (326) and later confirmed by direct experimental evidence [START_REF] Bostock | The internodal axon membrane: electrical excitability and continuous conduction in segmental demyelination[END_REF]265,407,527). In their seminal paper, Huxley and Stämpfli (265) measured currents in electrically isolated compartments of a single axon, containing either a node or an internode, during the passage of an impulse. They noticed that when the compartment contained a NoR, stimulation of the nerve resulted in a large inward current. In contrast, no inward current was recorded when the chamber contained an internode, thus indicating that there is no regenerative activity. The discontinuous nature of saltatory conduction should not be emphasized too much, however, because 30 consecutive NoR can participate simultaneously in some phases of the action potential.
Conduction velocity
Conduction velocity in unmyelinated axons depends on several biophysical factors such as the number of available Na+ channels, membrane capacitance, internal impedance, and temperature (122, 148, 251, 252, 277). Conduction velocity can be diminished by reducing external Na+ concentration (277) or partially blocking Na+ channels with a low concentration of TTX (122). In fact, the larger the sodium current, the steeper the rate of rise of the action potential. As a consequence, the spatial voltage gradient along the fiber is steeper, excitation of adjacent axonal regions is faster, and conduction velocity is increased.
The second major determinant of conduction velocity is membrane capacitance, which sets the amount of charge stored on the membrane per unit area. Thus the time necessary to reach threshold is shorter if the capacitance is small.
The third major parameter for conduction velocity is the resistance of the axoplasm (i.e., the intra-axonal medium). For instance, in the giant squid axon, the insertion of a low-impedance wire cable into the axon considerably increases the rate of conduction (148). This property explains why conduction velocity in unmyelinated axons is proportional to the square root of the axon diameter (251). In fact, current flow is facilitated in large-diameter axons because of the high intracellular ion mobility.
Temperature has large effects on the rate of increase of Na+ channel conductance and on the action potential waveform (253). Channels open and close more slowly at lower temperature, and consequently conduction velocity is reduced (106, 198).
In myelinated axons, conduction displays linear dependence on fiber diameter ([START_REF] Arbuthnott | Ultrastructural dimensions of myelinated peripheral nerve fibers in the cat and their relation to conduction velocity[END_REF] 264, 444, 453). A simple rule is that every micrometer of outer diameter adds 6 m/s to the conduction velocity at 37°C. One particularly fascinating point addressed by the theoretical work of Rushton (453) is the notion of invariance in the conduction properties and morphological parameters of myelinated axons. In fact, the geometry of myelinated axons seems to be tuned by evolution to give the highest conduction velocity.
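As a rough numerical illustration of the two scaling rules discussed above (velocity roughly proportional to the square root of the diameter in unmyelinated fibers, and roughly 6 m/s per micrometer of outer diameter in myelinated fibers at 37°C), the short Python sketch below contrasts the two regimes. The proportionality constant used for the unmyelinated case and the example diameters are assumptions chosen purely for illustration, not values taken from the studies cited here.

import math

def velocity_unmyelinated(diameter_um, k=1.0):
    # Unmyelinated axon: velocity scales with the square root of diameter.
    # k (m/s per sqrt(micrometer)) is an assumed illustrative constant, not a measured value.
    return k * math.sqrt(diameter_um)

def velocity_myelinated(outer_diameter_um, slope=6.0):
    # Myelinated axon at 37 degrees C: ~6 m/s per micrometer of outer diameter
    # (the simple rule quoted in the text).
    return slope * outer_diameter_um

for d in (0.5, 1.0, 5.0, 10.0):
    print(f"d = {d:4.1f} um: unmyelinated ~ {velocity_unmyelinated(d):4.1f} m/s, "
          f"myelinated ~ {velocity_myelinated(d):5.1f} m/s")

Under these crude assumptions the myelinated estimate exceeds the unmyelinated one at every diameter shown; the biological statement that myelination only pays above a diameter of 1-2 μm (see below) reflects additional costs and geometric constraints not captured by this toy comparison.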
Conduction velocity in mammalian axons has been evaluated traditionally by antidromic latency measurements or by field measurements of axon volleys (300, 521). More direct measurements of conduction velocity have been obtained recently with the development of axonal patch-clamp recordings in brain tissue. In unmyelinated axons, conduction velocity is generally slow. It has been estimated to be close to 0.25 m/s in Schaffer collaterals [START_REF] Andersen | The hippocampal lamella hypothesis revisited[END_REF] or in the mossy-fiber axon (299, 463), and to reach 0.38 m/s in the axon of CA3 pyramidal neurons (363). In contrast, conduction becomes faster in myelinated axons, but it largely depends on the axon diameter. In fact, myelination pays in terms of conduction velocity when the axon diameter exceeds 1-2 μm (453). In the thin Purkinje cell axon (≈0.5-1 μm), conduction velocity indeed remains relatively slow (0.77 m/s; Ref. 116). Similarly, in the myelinated axon of L5 neocortical pyramidal neurons of the rat (diameter ≈1-1.5 μm; Ref. 290), conduction velocity has been estimated to be 2.9 m/s (291). Conduction velocity along the small axons of neurons from the subthalamic nucleus is also relatively modest (4.9 m/s; diameter ≈0.5 μm; Ref. [START_REF] Atherton | Autonomous initiation and propagation of action potentials in neurons of the subthalamic nucleus[END_REF]). In contrast, in large-diameter axons such as cat brain stem motoneuron fibers (≈5 μm), the conduction velocity reaches 70-80 m/s (214). Similarly, in group I afferents of the cat spinal cord, conduction velocity has been estimated to vary between 70 and 90 m/s (309). The fastest impulse conduction in the animal kingdom has been reported in myelinated axons of the shrimp, which are able to conduct impulses at speeds faster than 200 m/s (569). These axons possess two unique structures (a microtubular sheath and a submyelinic space) that contribute to speeding up propagation. In particular, the submyelinic space constitutes a low-impedance axial path that acts in a similar way to the wire in the experiment of del Castillo and Moore (148).
Modulation of conduction velocity
Conduction velocity along myelinated axons has been shown to depend also on neuron-glia interactions (123, 190, 526, 570). Importantly, depolarization of a single oligodendrocyte was found to increase the action potential conduction velocity of the axons it myelinates by ≈10% (570). Although the precise mechanism has not yet been fully elucidated, it may result from ephaptic interaction between the myelin depolarization and the axon (280; see also sect. VIIIC). This finding may have important functional consequences. Mature oligodendrocytes in the rat hippocampus are depolarized by theta-burst stimulation of axons. Thus myelin may also dynamically regulate impulse transmission through axons and promote synchrony among the multiple axons under the domain of an individual oligodendrocyte (570). In a recent study, the conduction velocity in small myelinated axons was found to depend on tight junctions between myelin lamellae (153). The absence of these tight junctions in Claudin-11-null mice does not perturb myelin formation but significantly decreases conduction velocity in small, but not in large, myelinated axons. In fact, tight junctions in myelin potentiate the insulation of small axons, which possess only a relatively limited number of myelin loops, by increasing their internodal resistance.
In auditory thalamocortical axons, nicotine enhances conduction velocity and decreases axonal conduction variability (282). Although the precise mechanism remains to be clarified, this process may lower the threshold for auditory perception by acting on the thalamocortical flow of information.
V. FUNCTIONAL COMPUTATION IN THE AXON
A. Activity-Dependent Shaping of the Presynaptic Action Potential
The shape of the presynaptic action potential is of fundamental importance in determining the strength of synapses by modulating transmitter release. The waveform of the depolarization dictates the calcium signal available to trigger vesicle fusion by controlling the opening of voltage-gated calcium channels and the driving force for calcium influx. Two types of modification of the presynaptic action potential have been reported experimentally: modifications of action potential width and/or modifications of action potential amplitude.
Activity-dependent broadening of presynaptic action potential
The duration of the presynaptic spike is not fixed, and activity-dependent short-term broadening of the spike has been observed in en passant mossy fiber boutons (209). The mossy fiber-CA3 pyramidal cell synapse displays fast synchronized transmitter release from several active zones and also shows dynamic changes in synaptic strength over a more than 10-fold range. This exceptionally large synaptic facilitation is in clear contrast to the weak facilitation (≈150% of the control) generally observed at most central synapses. Granule cell axons exhibit several voltage-gated potassium channels including Kv1.1 (443), Kv1.2 (477), and two A-type potassium channels, Kv1.4 (126, 478, 543) and Kv3.4 (543). Geiger and Jonas (209) have shown that the action potential at the mossy fiber terminal is half as wide as that at the soma. During repetitive stimulation, the action potential becomes broader in the axon terminal but not in the soma (209) (Fig. 7). More interestingly, using simultaneous recordings from the granule cell terminal and the corresponding postsynaptic apical dendrite of a CA3 neuron, Geiger and Jonas (209) showed that action potential broadening enhanced presynaptic calcium influx and doubled the EPSC amplitude (Fig. 7). This broadening results from the inactivation of A-type K+ channels located in the membrane of the terminal. Consequently, the pronounced short-term facilitation probably results from the combined action of spike widening and the classical accumulation of residual calcium in the presynaptic terminal. Because ultrastructural analysis reveals A-type channel immunoreactivity not only in the terminal but also in the axonal membrane (126), activity-dependent spike broadening might also occur in the axon.
Activity-dependent reduction of presynaptic action potential
Reduction of the amplitude of the presynaptic action potential has been reported following repetitive stimulation of invertebrate (230) or mammalian axons (209, 552). This decline results from sodium channel inactivation and can be amplified by low concentrations of TTX ([START_REF] Brody | Release-independent short-term synaptic depression in cultured hippocampal neurons[END_REF] 343). The consequences of sodium channel inactivation on synaptic transmission have been studied at various central synapses. Interestingly, reduction of the sodium current by application of TTX in the nanomolar range decreases glutamatergic transmission and enhances short-term depression ([START_REF] Brody | Release-independent short-term synaptic depression in cultured hippocampal neurons[END_REF] 243, 421). In addition, depolarization of the presynaptic terminal by raising the external potassium concentration increases paired-pulse synaptic depression at autaptic contacts of cultured hippocampal cells (243) and decreases paired-pulse synaptic facilitation at Schaffer collateral-CA1 synapses stimulated extracellularly (364). In this case, the depolarization of the presynaptic axons is likely to enhance presynaptic spike attenuation. Importantly, inactivation of sodium channels by high external potassium increases the proportion of conduction failures during repetitive extracellular stimulation of Schaffer collateral axons (364). However, these results must be interpreted carefully because apparent changes in the paired-pulse ratio may simply result from stimulation failures produced by the reduction in presynaptic axon excitability.
Interestingly, the manipulations of the sodium current mentioned above have little or no effect on GABAergic axons (243,364,421). Riluzole, TTX, or external potassium affect neither GABAergic synaptic transmission nor short-term GABAergic plasticity. This difference between glutamatergic and GABAergic axons might result from several factors. Sodium currents in interneurons are less sensitive to inactivation, and a slow recovery from inactivation has been observed for pyramidal cells but not for inhibitory interneurons (353). Moreover, the density of sodium current is higher in interneurons than in pyramidal neurons (354). Thus axons of GABAergic interneurons could be better cables for propagation than those of pyramidal cells (194,525). This unusual property could be important functionally: safe propagation along inhibitory axons could protect the brain from sporadic hyperactivity and prevent the development of epileptiform activity.
B. Signal Amplification Along the Axon
Signal amplification is classically considered to be achieved by the dendritic membrane, the cell body, or the proximal part of the axon ([START_REF] Astman | Persistent sodium current in layer 5 neocortical neurons is primarily generated in the proximal axon[END_REF] 512). Whereas action potential propagation along the axon is clearly an active process that depends on a high density of sodium channels, the process of action potential invasion into presynaptic terminals was, until recently, less well understood. This question is of primary importance because the geometrical perturbation introduced by the presynaptic terminal decreases the safety factor for action potential propagation and may affect the conduction time (see sect. VIII). The invasion of the spike is active at the amphibian neuromuscular junction (278) but passive at the neuromuscular junction of the mouse ([START_REF] Brigant | Presynaptic currents in mouse motor endings[END_REF] 163) and the lizard (329). This question has been reconsidered at hippocampal mossy fiber boutons (179). In this study, Engel and Jonas (179) showed that sodium channel density is very high at the presynaptic terminal (2,000 channels/mossy fiber bouton). In addition, sodium channels in mossy fiber boutons activate and inactivate with submillisecond kinetics. A realistic computer simulation indicates that the density of sodium channels found in the mossy fiber bouton not only amplifies the action potential but also slightly increases the conduction speed along the axon (179). Similarly, presynaptic sodium channels control the resting membrane potential at the presynaptic terminal of the calyx of Held (260), and hence may determine transmitter release at this synapse.

FIG. 7. Shaping of the action potential in the axon. A: a mossy fiber bouton (mfb, blue) is recorded in the whole cell configuration and activated at a frequency of 50 Hz. B: during repetitive stimulation of the axon, the action potential becomes wider. The 10th and 50th action potentials are compared with the 1st action potential in the train. C: action potential broadening potentiates transmitter release. A mossy fiber terminal (red) and the corresponding CA3 cell (blue) were recorded simultaneously. Action potential waveforms were imposed at the presynaptic terminal. The increased duration of the waveform incremented the amplitude of the synaptic current. [Adapted from Geiger and Jonas (209), with permission from Elsevier.]
Another mechanism of activity-dependent signal amplification has been reported at the hippocampal mossy fiber (376). In the immature hippocampus, repetitive stimulation of the mossy fiber pathway not only facilitates synaptic transmission but also increases the amplitude of the presynaptic volley, the electrophysiological signature of the presynaptic action potential recorded extracellularly in the axon. This axonal facilitation is not observed in the mature hippocampus. It is associated with depolarization of mossy fibers and is fully inhibited by GABA A receptor antagonists, indicating that GABA released from interneurons depolarizes the axon and increases its excitability. Because the presynaptic axon was not directly recorded in this study, further investigations will be necessary to determine whether GABA A receptor-mediated depolarization limits conduction failures or interacts with sodium channel amplification.
C. Axonal Integration (Analog Signaling)
Classically, the somatodendritic compartment is considered the locus of neuronal integration, where subthreshold electrical signals originating from active synapses are temporally summated to control the production of an output message, the action potential. According to this view, the axon initial segment is the final site of synaptic integration, and the axon remains purely devoted to action potential conduction in a digital way. Synaptic strength can then be modulated only by the frequency of presynaptic action potential firing. This view is now challenged by accumulating evidence in invertebrate and vertebrate neurons showing that the axon is also able to integrate electrical signals arising from the somato-dendritic compartment of the neuron (for reviews, see Refs. 4, 115, 351, 410). In fact, the axon now appears to be a hybrid device that transmits neuronal information both through action potentials, in a digital way, and through subthreshold voltage, in an analog mode.
Changes in presynaptic voltage affect synaptic efficacy
The story started with classical observations reported at the neuromuscular junction of the rat (261, 262) and at the giant synapse of the squid (237, 369, 524), where the membrane potential of the presynaptic axon was found to control the efficacy of action potential-triggered synaptic transmission. Synaptic transmission was gradually enhanced when the membrane potential of the presynaptic element was continuously hyperpolarized to different levels. Thus the membrane potential of the presynaptic element determines, in an analog manner, the efficacy of the digital output message (the action potential). This facilitation was associated with a reduction in the paired-pulse ratio (369), indicating that it results from enhanced presynaptic transmitter release. Although the mechanisms underlying this behavior have not been clearly identified, it should be noted that graded presynaptic hyperpolarization increased the presynaptic spike amplitude in a graded manner (237, 369, 524). The importance of the amplitude of the presynaptic action potential is also demonstrated by the reduction of the evoked EPSP upon intracellular injection of increasing concentrations of TTX into the presynaptic axon (279). Thus a possible scheme here would be that hyperpolarization of the presynaptic element induces Na+ channel recovery from inactivation and subsequently enhances presynaptic spike and EPSP amplitudes. A similar phenomenon has recently been observed at autaptic contacts in cultured hippocampal neurons (528).
A totally different scenario has been observed in the Aplysia (475, 486, 487) and the leech (382). In these studies on connected pairs of neurons, the authors reported that constant or transient depolarization of the membrane potential in the soma of the presynaptic neuron facilitates synaptic transmission evoked by single action potentials, in a graded manner (Fig. 8A). The underlying mechanism in Aplysia neurons involves the activation of steady-state Ca2+ currents (475) and inactivation of a 4-aminopyridine-sensitive K+ current (484, 485), which overcome propagation failures in a weakly excitable region of the neuron (184). Thus the possible scenario in Aplysia neurons is that somatic depolarization may inactivate voltage-gated K+ currents located in the axon that control the propagation, and subsequently the amplitude and duration, of the action potential.
It is also important to mention that many types of invertebrate neuron release neurotransmitter as a graded function of presynaptic membrane potential ([START_REF] Angstadt | A hyperpolarization-activated inward current in heart interneurons of the medicinal leech[END_REF][START_REF] Burrows | Graded synaptic transmission between local interneurones and motor neurones in the metathoracic ganglion of the locust[END_REF] 227).
In these examples, synaptic transmission does not strictly depend on spiking but rather on variations of the presynaptic membrane potential, further supporting the idea that membrane potential alone is capable of controlling neuronal communication in an analog manner.
Space constant in axons
In the experiments reported in Aplysia, facilitation was induced by changing membrane potential in the soma, indicating that the presynaptic terminal and the cell body are not electrically isolated. Thus the biophysical characteristics of electrical transfer along the axon appear as the critical parameter determining axonal integration of somatic signals.
For biophysicists, the axon is viewed as a cylinder that can be subdivided into unit lengths. Each unit length is a parallel circuit with its own membrane resistance (r_m) and capacitance (c_m). All the circuits are connected by resistors (r_i), which represent the axial resistance of the intracellular cytoplasm, and a short circuit, which represents the extracellular fluid (Fig. 8B). The voltage response in such a passive cable decays exponentially with distance due to electrotonic conduction (253a). The space (or length) constant, λ, of an axon is defined as the distance over which a voltage change imposed at one site drops to 1/e (37%) of its initial value (Fig. 8C). In fact, the depolarization at a given distance x from the site of injection (x = 0) is given by V_x = V_0/e^(x/λ), where e is Euler's number and λ is the space or length constant. The length constant is expressed as λ = (r_m/r_i)^(1/2). For a cable of diameter d, it is therefore expressed as λ = [(d/4)(R_M/R_A)]^(1/2), where R_A is the axial resistance and R_M is the specific membrane resistance (425). Thus the length constant of the axon depends on three main parameters. In myelinated axons, R_M may reach very high values because of the myelin sheath. Therefore, space constants in myelinated axons are very long. For instance, in cat brain stem neurons, the space constant amounts to 1.7 mm (214). EPSPs generated in the soma are thus detectable at long distances in the axon. In thin unmyelinated axons, the situation was thought to be radically different because R_M is relatively low and the diameter might be very small. Space constants below 200 μm were considered in models of nonmyelinated axons (295, 313). The recent use of whole-cell recordings from unmyelinated axons [START_REF] Bischofberger | Patchclamp recording from mossy fiber terminals in hippocampal slices[END_REF] profoundly changed this view. In hippocampal granule cell axons, the membrane space constant for an EPSP generated in the somato-dendritic compartment is ≈450 μm (5; see also sect. VC3). Similarly, the axonal space constant in L5 pyramidal neurons is also 450 μm (489). However, these values might be underestimated because the EPSP is a transient event and the space constant is inversely proportional to the frequency content of the signal (468, 489). For instance, the axonal space constant for slow signals (duration ≥200 ms) may reach ≈1,000 μm in L5 pyramidal cell axons (112).
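A minimal numerical sketch of these cable relations is given below, in Python; the membrane and axial resistivities and the axon diameter are assumed, illustrative values (they are not the parameters measured in the recordings discussed above), and only the steady-state attenuation of a sustained signal is considered.

import math

def length_constant_um(diameter_um, R_M=20000.0, R_A=150.0):
    # lambda = sqrt((d/4) * (R_M / R_A)), evaluated with d in cm.
    # R_M: specific membrane resistance (ohm*cm^2), R_A: axial resistivity (ohm*cm).
    # Both values are assumptions chosen for illustration. Result returned in micrometers.
    d_cm = diameter_um * 1e-4                  # micrometers to centimeters
    lam_cm = math.sqrt((d_cm / 4.0) * (R_M / R_A))
    return lam_cm * 1e4                        # centimeters back to micrometers

def attenuation(x_um, lam_um):
    # Steady-state ratio V(x)/V_0 = exp(-x/lambda) along a semi-infinite passive cable.
    return math.exp(-x_um / lam_um)

lam = length_constant_um(diameter_um=1.0)
print(f"lambda ~ {lam:.0f} um for a 1-um axon with the assumed resistivities")
for x_um in (100, 450, 1000):
    print(f"V({x_um} um)/V0 ~ {attenuation(x_um, lam):.2f}")

With these assumed numbers the length constant comes out in the range of several hundred micrometers, the same order as the experimental estimates quoted above; transient signals such as EPSPs attenuate more steeply than this steady-state calculation suggests.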
Axonal integration in mammalian neurons
Axonal integration is not peculiar to invertebrate neurons, and synaptic facilitation produced by depolarization of the presynaptic soma has been reported at at least three central synapses of the mammalian brain. First, at synapses established between pairs of CA3-CA3 pyramidal cells, steady-state depolarization of the presynaptic neuron from −60 to −50 mV enhances synaptic transmission (460). More recently, an elegant study published by Alle and Geiger (5) shows, by using direct patch-clamp recording from presynaptic hippocampal mossy fiber boutons, that granule cell axons transmit analog signals (the membrane potential at the cell body) in addition to action potentials. Surprisingly, excitatory synaptic potentials evoked by local stimulation of the molecular layer in the dentate gyrus could be detected in the mossy-fiber bouton located several hundreds of micrometers from the cell body (Fig. 9). Excitatory presynaptic potentials (EPreSP) recorded in the mossy-fiber bouton represent forward-propagated EPSPs from the granule cell dendrite. They were not generated locally in the CA3 region because application of AMPA receptor or sodium channel blockers locally to CA3 has no effect on the amplitude of the EPreSP [START_REF] Alle | Combined analog and action potential coding in hippocampal mossy fibers[END_REF].

FIG. 8. Axonal integration. A: graded control of synaptic efficacy by the membrane potential in a pair of connected Aplysia neurons. The hyperpolarization of the presynaptic neuron gradually reduces the amplitude of the synaptic potential. [Adapted from Shimahara and Tauc (487).] B: electrical model of a passive axon. Top: the axon is viewed as a cylinder that is subdivided into unit lengths. Bottom: each unit length is considered as a parallel circuit with its membrane resistance (r_m) and capacitance (c_m). All circuits are connected intracellularly by resistors (r_i). C: space constant of the axon. Top: schematic representation of a pyramidal cell with its axon. Bottom: plot of the voltage along the axon. A depolarization to V_0 is applied at the cell body (origin in the plot). The potential decays exponentially along the axon according to V = V_0/e^(x/λ). The color represents the membrane potential (red is depolarized and blue is the resting potential). The space constant is defined as the distance for which V is 37% of V_0 (dashed horizontal line on the plot).
As expected from cable theory, this signal is attenuated and the EPSP waveform is much slower in the terminal than in the soma of granule cells. The salient finding here is that the space constant of the axon is much wider (≈450 μm) than initially expected. Consistent with propagation of electrical signals over very long distances, the analog facilitation of synaptic transmission has a slow time constant ([START_REF] Ahern | Induction of persistent sodium current by exogenous and endogenous nitric oxide[END_REF][START_REF] Aizenman | Rapid, synaptically driven increases in the intrinsic excitability of cerebellar deep nuclear neurons[END_REF][START_REF] Alle | Analog signalling in mammalian cortical axons[END_REF] Refs. 291, 382, 489). The functional consequence is that slow depolarizations of the membrane in somatic and dendritic regions are transmitted to the axon terminals and can influence the release of transmitter at the mossy fiber-CA3 cell synapse. Similar observations have been reported in the axon of L5 cortical pyramidal neurons recorded in whole cell configuration at distances ranging between 90 and 400 μm (489; Fig. 10A). In this case, whole cell recording is possible on the axon because sectioning the axon produces a small enlargement of its diameter that allows positioning of a patch pipette. Here again, incoming synaptic activity in the presynaptic neuron propagates down the axon and can modulate the efficacy of synaptic transmission. The modulation of synaptic efficacy by somatic potential is blocked at L5-L5 connections (489) or reduced by Ca2+ chelators at the mossy fiber input (5; but see Ref. 468), and may therefore result from the control of background calcium levels at the presynaptic terminal [START_REF] Awatramani | Modulation of transmitter release by presynaptic resting potential and background calcium levels[END_REF].
At least one mechanism controlling the voltage-dependent broadening of the axonal action potential has been recently identified in L5 pyramidal neurons (291, 490). Kv1 channels are expressed at high densities in the AIS (267), but they are also present in the axon proper. With cell-attached recordings from the axon at different distances from the cell body, Kole et al. (291) elegantly showed that Kv1 channel density increases 10-fold over the first 50 μm of the AIS but remains at very high values in the axon proper (≈5-fold the somatic density). The axonal current mediated by Kv1 channels inactivates with a time constant in the second range (291, 489). Pharmacological blockade or voltage inactivation of Kv1 channels produce a distance-dependent broadening of the axonal spike, as well as an increase in synaptic strength at proximal axonal terminals (291). For instance, when the membrane potential is shifted from −80 to −50 mV, the D-type current is reduced by half (291). Subsequently, the axonal spike is enlarged and transmitter release is enhanced (Fig. 10B). Thus Kv1 channels occupy a strategic position to integrate slow subthreshold signals generated in the dendrosomatic region and control the presynaptic action potential waveform to finely tune synaptic coupling in local cortical circuits.
Axonal speeding
The role of the axonal membrane compartment is also critical in synaptic integration. The group of Alain Marty (365) showed, by cutting the axon of cerebellar interneurons with two-photon illumination, that the axonal membrane speeds up the decay of synaptic potentials recorded in the somatic compartment of cerebellar interneurons. This effect results from the passive membrane properties of the axonal compartment. In fact, the axonal compartment acts as a sink for fast synaptic currents: part of the capacitive charge carried by the synaptic current is distributed onto the axonal membrane, thus accelerating the EPSP decay beyond the speed defined by the membrane time constant of the neuron (usually 20 ms). Functionally, axonal speeding has important consequences. EPSP decay is faster and, consequently, axonal speeding increases the temporal precision of EPSP-spike coupling by reducing the time window in which an action potential can be elicited (200, 418).

FIG. 9. Integration of subthreshold synaptic potential in the axon of hippocampal granule cells. Electrically evoked synaptic inputs in the dendrites of a granule cell can be detected in the mossy fiber terminal (EPreSP). Bottom panel: synaptic transmission at the mossy fiber synapse was facilitated when the simulated EPreSP ("EPreSP") was associated with a presynaptic action potential (AP + "EPreSP"). [Adapted from Alle and Geiger (5), with permission from the American Association for the Advancement of Science.]
Backward axonal integration
Voltage changes in the somatic compartment modify release properties at the nerve terminal, and the effect is reciprocal. Small physiological voltage changes at the nerve terminal affect action potential initiation (400). In their recent study, Paradiso and Wu (400) have shown that small subthreshold depolarizations (<20 mV) of the calyx of Held produced by current injection or by the afterdepolarization (ADP) of a preceding action potential were found to decrease the threshold for action potentials generated by local stimulation 400-800 μm from the nerve terminal. Conversely, a small hyperpolarization of the nerve terminal (<15 mV) produced either by current injection or by the AHP increased the threshold for spike initiation. Thus this elegant study showed for the first time that the axonal membrane, like dendrites, can backpropagate signals generated in the nerve terminal. Presynaptic GABA A currents originating in the axon have been recently identified in the cell body of cerebellar interneurons (535). Thus axonal GABAergic activity can probably influence somatic excitability in these neurons, further supporting the fact that axonal and somatodendritic compartments are not electrically isolated.
The functional importance of axonal integration is clear, but many questions remain open. The three examples where hybrid (analog-digital) signaling in the axon has been observed are glutamatergic neurons [CA3 pyramidal neurons (460), granule cells (5), and L5 pyramidal neurons (291, 489)]. Do axons of GABAergic interneurons also support hybrid axonal signaling? A study indicates that this is not the case at synapses established between parvalbumin-positive fast-spiking cells that display delayed firing and pyramidal neurons in cortical layers 2-3 (117, 217). However, the equilibrium between excitation and inhibition probably needs to be preserved in cortical circuits, and one cannot exclude that hybrid axonal signaling may exist in other subclasses of cortical or hippocampal GABAergic interneurons. In cerebellar interneurons, GABA release is facilitated by subthreshold depolarization of the presynaptic soma (110). Can inhibitory postsynaptic potentials spread down the axon, and if so, how do they influence synaptic release? In dendrites, voltage-gated channels amplify or attenuate subthreshold EPSPs. Do axonal voltage-gated channels also influence propagation of subthreshold potentials? Now that the axons of mammalian neurons are finally becoming accessible to direct electrophysiological recording, we can expect answers to all these questions.
VI. PROPAGATION FAILURES
One of the more unusual operations achieved by axons is selective conduction failure. When the action potential fails to propagate along the axon, no signal can reach the output of the cell. Conduction failure represents a powerful process that filters communication with postsynaptic neurons (549). Propagation failures have been observed experimentally in various axons including vertebrate spinal axons ([START_REF] Barron | Intermittent conduction in the spinal cord[END_REF] 301), spiny lobster or crayfish motoneurons (230, 231, 241, 401, 496), leech mechanosensory neurons ([START_REF] Baccus | Synaptic facilitation by reflected action potentials: enhancement of transmission when nerve impulses reverse direction at axon branch points[END_REF][START_REF] Baccus | Action potential reflection and failure at axon branch points cause stepwise changes in EPSPs in a neuron essential for learning[END_REF] 234, 541, 572), thalamocortical axons (151), rabbit nodose ganglion neurons (167), rat dorsal root ganglion neurons (335, 336), neurohypophysial axons ([START_REF] Bielefeldt | A calcium-activated potassium channel causes frequency-dependent action-potential failures in a mammalian nerve terminal[END_REF] 172), and hippocampal pyramidal cell axons (144, 364, 498). However, some axons in the auditory pathways are capable of sustaining remarkably high firing rates, with perfect entrainment occurring at frequencies of up to 1 kHz (467). Several factors determine whether propagation along axons fails or succeeds.
A. Geometrical Factors: Branch Points and Swellings
Although the possibility that propagation may fail at branch points was already discussed by Krnjevic and Miledi (301), the first clear indication that propagation is perturbed by axonal branch points came from the early studies on spiny lobster, crayfish, and leech axons (230, 231, 401, 496, 497, 541, 572). The large size of invertebrate axons allowed multielectrode recordings upstream and downstream of the branch point. For example, in lobster axons, conduction across the branch point was found to fail at frequencies above 30 Hz (Fig. 11A; Ref. 230). The block of conduction occurred specifically at the branch point because the parent axon and one of the daughter branches continued to conduct action potentials. Failures appeared first in the thicker daughter branch, but they could also be observed in the thin branch at higher stimulus frequencies. In the leech, conduction block occurs at central branch points where fine axons from the periphery meet thicker axons (572). Branch point failures have been observed or suspected to occur in a number of mammalian neurons (144, 151, 167).
Propagation failures also occur when the action potential enters a zone with an abrupt change in diameter. This occurs with en passant boutons ([START_REF] Bourque | Intraterminal recordings from the rat neurohypophysis in vitro[END_REF] 272, 581) but also when impulses propagating along the axon enter the soma ([START_REF] Antic | Functional profile of the giant metacerebral neuron of Helix aspersa: temporal and spatial dynamics of electrical activity in situ[END_REF] 185, 336). For instance, in the metacerebral cell of the snail, propagation failures have been observed when a spike enters the cell body (Fig. 11B; Ref. 12).
These failures result because the electrical load is significantly higher on the arriving action potential, and the current generated by the parent axon is not sufficient to support propagation (reviewed in Ref. 470). Simulations show that at geometrical irregularities the propagating action potential is usually distorted in amplitude and width, and local conduction velocity can be changed. For instance, an abrupt increase in axon diameter causes a decrease in both velocity and peak amplitude of the action potential, whereas a step decrease in diameter has the opposite local effects on these two parameters (221, 226, 272, 337, 338, 348, 349, 403). In fact, the interplay between the total longitudinal current produced by the action potential and the input impedance of the axon segments ahead of the action potential determines the fate of the propagating action potential. The case of the branch point has been studied in detail (219, 221, 583). The so-called 3/2 power law developed by Rall describes an ideal relationship between the geometry of mother and daughter branches (221, 424, 426). A geometrical parameter (the geometrical ratio, GR) has been defined as follows: GR = (d_daughter1^(3/2) + d_daughter2^(3/2))/d_mother^(3/2), where d_daughter1 and d_daughter2 are the diameters of the daughter branches and d_mother is the diameter of the parent axon.
For GR = 1, impedances match perfectly and spikes propagate in both branches. If GR > 1, the combined electrical load of the daughter branches exceeds the load of the main branch. In other words, the active membrane of the mother branch may not be able to provide enough current to activate both branches. If GR > 10, conduction block occurs in all daughter branches (404). For 1 < GR < 10, the most common situation by far, propagation past the branch point occurs with some delay. All these conclusions hold only if the membrane characteristics are identical in all branches, and any change in ion channel density may increase or decrease the safety factor at a given branch point. The amplification of the propagating action potential by sodium channels in the mossy fiber bouton is able to counteract the geometrical effects and speeds up propagation along the axon (179). Details on the experimental evaluation of GR at axon branch points have been reviewed elsewhere (139).
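The following sketch simply evaluates Rall's geometrical ratio for a branch point and applies the qualitative rules summarized above; the diameters are hypothetical examples, and the classification assumes identical membrane properties in all branches.

def geometrical_ratio(d_mother_um, d_daughters_um):
    # Rall's ratio: GR = sum(d_daughter^(3/2)) / d_mother^(3/2).
    return sum(d ** 1.5 for d in d_daughters_um) / d_mother_um ** 1.5

def expected_outcome(gr, tol=0.05):
    # Qualitative reading of GR following the rules quoted in the text.
    if gr > 10.0:
        return "conduction block expected in the daughter branches"
    if abs(gr - 1.0) <= tol:
        return "impedance match: faithful propagation"
    if gr > 1.0:
        return "propagation succeeds, but with an extra delay at the branch point"
    return "electrical load smaller than in the parent branch: propagation favored"

# hypothetical branch points (diameters in micrometers)
for mother, daughters in [(2.0, (1.26, 1.26)), (1.0, (1.0, 1.0)), (0.5, (1.5, 1.5))]:
    gr = geometrical_ratio(mother, daughters)
    print(f"d_mother = {mother} um, daughters = {daughters}: GR = {gr:.2f} -> {expected_outcome(gr)}")

In this hypothetical set, the first branch point satisfies the 3/2 power law almost exactly (GR close to 1), the second gives GR = 2 and hence a modest delay, and the third gives GR above 10 and hence a predicted block.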
B. Frequency-Dependent Propagation Failures
Depending on the axon type, conduction failures are encountered following moderate or high-frequency (200-300 Hz) stimulation of the axon. For instance, a frequency of 20-30 Hz is sufficient to produce conduction failures at the neuromuscular terminal arborization (301) or at the branch point of spiny lobster motoneurons (230). These failures are often seen as partial spikes or spikelets that are electrotonic residues of full action potentials. The functional consequences of conduction failures might be important in vivo. For example, in the leech, propagation failures produce an effect similar to that of sensory adaptation. They represent a nonsynaptic mechanism that temporarily disconnects the neuron from one defined set of postsynaptic neurons and specifically routes sensory information in the ganglion (234, 339, 541, 572).
What are the mechanisms of frequency-dependent conduction failure? As mentioned above, the presence of a low safety conduction point such as a branch point, a bottleneck (i.e., an axon entering the soma) or an axonal swelling determines the success or failure of conduction. However, these geometrical constraints are not sufficient to fully account for all conduction failures, and additional factors should be considered. The mechanisms of propagation failure can be grouped in two main categories.
First, propagation may fail during repetitive axon stimulation as a result of a slight depolarization of the membrane. In spiny lobster axons, propagation failures were associated with a 10-15% reduction of the action potential amplitude in the main axon and a membrane depolarization of 1-3 mV (230). These observations are consistent with potassium efflux into the peri-axonal space induced by repetitive activation. In most cases, the membrane depolarization produced by external accumulation of potassium ions around the axon probably contributes to sodium channel inactivation. In fact, hyperpolarization of the axon membrane or local application of physiological saline with a low concentration of potassium in the vicinity of a block can restore propagation in crayfish axons (496). Elevation of the extracellular potassium concentration produced conduction block in spiny lobster axons (231). However, this manipulation did not reproduce the differential block induced by repetitive stimulation, as failures occurred simultaneously in both branches (230). Interestingly, conduction could also be restored by elevation of the intracellular calcium concentration. Failures were also induced with a lower threshold when the electrogenic Na+/K+ pump was blocked with ouabain. Thus differential conduction block could be explained as follows. During high-frequency activation, potassium initially accumulates at the same rate around the parent axon and the daughter branches. Sodium and calcium accumulate more rapidly in the thin branch than in the thick branch because of its higher surface-to-volume ratio. Thus the Na+/K+ pump is activated and extracellular potassium is lowered more effectively around the thin branch (231). Accumulation of extracellular potassium has also been observed in the olfactory nerve (178) and in hippocampal axons (416), and could similarly be at the origin of unreliable conduction.
Propagation failures have also been reported in the axon of Purkinje neurons under high regimes of stimulation (283, 372; Fig. 12A). In this case, the cell body was recorded in whole cell configuration, whereas the signal in the axon was detected in cell-attached mode at distances of up to 800 μm from the cell body. Propagation was found to be highly reliable for single spikes at frequencies below 200 Hz (failures were observed above 250 Hz). In physiological conditions, Purkinje cells typically fire simple spikes well below 200 Hz, and these failures are unlikely to be physiologically relevant (196). However, Purkinje cells also fire complex spikes (bursts) following stimulation of the climbing fiber. The instantaneous frequency during these bursts may reach 800 Hz (283, 372). Interestingly, complex spikes did not propagate reliably in Purkinje cell axons. Generally, only the first and the last spike of the burst propagate. The failure rate of the complex spike is very sensitive to membrane potential, and systematic failures occur when the cell body is depolarized (372). The limit of conduction has not yet been fully explored in glutamatergic cell axons, but conduction failures have been reported when a CA3 pyramidal neuron fires at 30-40 Hz during a long plateau potential (362). Thus the conduction capacity seems much more robust in inhibitory cells compared with glutamatergic neurons. However, this study was based on extracellular recordings, and the apparent conduction failures may result from detection problems. In fact, very few failures were observed with whole cell recordings in neocortical pyramidal neurons (489). Furthermore, the robustness of spike propagation along axons of inhibitory neurons will require further studies.
Propagation failures induced by repetitive stimulation may also result from hyperpolarization of the axon. Hyperpolarization-induced conduction block has been observed in leech (339, 541, 572), locust (247), and mammalian axons ([START_REF] Bielefeldt | A calcium-activated potassium channel causes frequency-dependent action-potential failures in a mammalian nerve terminal[END_REF] 167). In this case, axonal hyperpolarization opposes spike generation. Activity-dependent hyperpolarization of the axon usually results from the activation of the Na+-K+-ATPase and/or the activation of calcium-dependent potassium channels. Unmyelinated axons in the PNS, for example vagal C-fibers, hyperpolarize in response to repeated action potentials (445, 446) as a result of the intracellular accumulation of Na+ and the subsequent activation of the electrogenic Na+/K+ pump ([START_REF] Beaumont | Temporal synaptic tagging by I(h) activation and actin: involvement in long-term facilitation and cAMP-induced synaptic enhancement[END_REF] 445, 446). In crayfish axons, this hyperpolarization may amount to 5-10 mV [START_REF] Beaumont | Temporal synaptic tagging by I(h) activation and actin: involvement in long-term facilitation and cAMP-induced synaptic enhancement[END_REF]. The blockade of the Na+-K+-ATPase with ouabain results in axon depolarization, probably as a consequence of posttetanic changes in extracellular potassium concentration. In the leech, hyperpolarization-dependent conduction block occurs at central branch points in all three types of mechanosensory neurons in the ganglion: touch (T), pressure (P), and nociceptive (N) neurons. In these neurons, hyperpolarization is induced by the Na+-K+-ATPase and by cumulative activation of a calcium-activated potassium conductance. It is interesting to note that the conduction state can be changed by neuromodulatory processes. 5-HT decreases the probability of conduction block in P and T cells, probably by a reduction of the hyperpolarization (350).
Hyperpolarization-dependent failures have also been reported in axons of hypothalamic neurons (from paraventricular and supraoptic nuclei) that run into the neurohypophysis. The morphology of their boutons is unusual in that their diameter varies between 5 and 15 μm (581). In single axons, propagation failures are observed at stimulation rates greater than 12 Hz and are concomitant with a hyperpolarization of 4 mV [START_REF] Bielefeldt | A calcium-activated potassium channel causes frequency-dependent action-potential failures in a mammalian nerve terminal[END_REF]. Here, the induced hyperpolarization of the neuron results from activation of the calcium-dependent BK potassium channels.
Several recent studies indicate that the hyperpolarization produced by repetitive stimulation could be dampened by hyperpolarization-induced cationic current (I h ) [START_REF] Beaumont | Temporal synaptic tagging by I(h) activation and actin: involvement in long-term facilitation and cAMP-induced synaptic enhancement[END_REF]498). This inward current is activated at resting membrane potential and produces a tonic depolarization of the axonal membrane [START_REF] Beaumont | Temporal synaptic tagging by I(h) activation and actin: involvement in long-term facilitation and cAMP-induced synaptic enhancement[END_REF]. Thus reduction of this current induces a hyperpolarization and perturbs propagation. The pharmacological blockade of I h by ZD-7288 or by external cesium can in fact produce more failures in Schaffer collateral axons (498). The peculiar biophysical properties of I h indicate that it may limit large hyperpolarizations or depolarizations produced by external and internal accumulation of ions. In fact, hyperpolarization of the axon will activate I h , which in turn produces an inward current that compensates the hyperpolarization [START_REF] Beaumont | Temporal synaptic tagging by I(h) activation and actin: involvement in long-term facilitation and cAMP-induced synaptic enhancement[END_REF]. Reciprocally, this compensatory mechanism is also valid for depolarization by removing basal activation of I h . In addition, activity-induced hyperpolarization of the axonal membrane may modulate the biophysical state of other channels that control propagation.
C. Frequency-Independent Propagation Failures
Action potential propagation in some axon collaterals of cultured CA3 pyramidal neurons can be gated by activation of presynaptic A-type K+ current, independently of the frequency of stimulation (144, 295). Synaptic transmission between monosynaptically coupled pairs of CA3-CA3 or CA3-CA1 pyramidal cells in hippocampal slice cultures can be blocked if a brief hyperpolarizing current pulse is applied a few milliseconds before the induction of the action potential in the presynaptic neuron (Fig. 12B). This regulation is observed in synaptic connections that have no transmission failures, therefore indicating that the lack of postsynaptic response is the consequence of a conduction failure along the presynaptic axon. In contrast to axonal integration, where transmitter release can be gradually regulated by the presynaptic membrane potential, transmission here is all or none. Interestingly, failures can also be induced when the presynaptic hyperpolarizing current pulse is replaced by a somatic IPSP (144, 295). When presynaptic cells are recorded with a microelectrode containing 4-aminopyridine (4-AP), a blocker of I A -like conductances, failures are abolished, indicating that I A gates action potential propagation (see also Ref. 389). Because A-channels are partly inactivated at the resting membrane potential, their contribution during an action potential elicited from the resting membrane potential is minimal, and the action potential propagates successfully from the cell body to the nerve terminal. In contrast, A-channels recover from inactivation with a transient hyperpolarization and then impede successful propagation to the terminal.
Propagation failures have been induced in only 30% of cases (144), showing that propagation is generally reliable in hippocampal axons (341, 342, 422). In particular, I A -dependent conduction failures have been found to occur at some axon collaterals but not at others (144). With the use of a theoretical approach, it has been shown that failures occur at branch points when A-type K+ channels are distributed in clusters near the bifurcation (295). Perhaps because these conditions are not fulfilled in layer II/III neocortical neurons (128, 289) and in dissociated hippocampal neurons (341), this form of gating has not been reported in these cell types. It would be interesting to explore the actual distribution of K+ channel clusters near branch points using immunofluorescence methods.
Functionally, this form of gating may determine part of the short-term synaptic facilitation that is observed during repetitive presynaptic stimulation. Apparent paired-pulse facilitation could be observed because the first action potential, but not the second, fails to propagate, as a result of inactivation of the A-type K+ current (145). A recent study suggests that repetitive burst-induced inactivation of A-type K+ channels in the axons of cortical cells projecting onto the accumbens nucleus leads to short-term synaptic potentiation through an increased reliability of spike propagation [START_REF] Casassus | Short-term regulation of information processing at the corticoaccumbens synapse[END_REF].
VII. REFLECTION OF ACTION POTENTIAL PROPAGATION
Branch points are usually considered as frequency filters, allowing separate branches of an axon to activate their synapses at different frequencies. But another way that a neuron's branching pattern can affect impulse propagation is by reflecting the impulse (221, 402, 428). Reflection (or reverse propagation) occurs when an action potential is near failure (221). This form of axonal computation has been well described in leech mechanosensory neurons (Fig. 13A; Refs. [START_REF] Baccus | Synaptic facilitation by reflected action potentials: enhancement of transmission when nerve impulses reverse direction at axon branch points[END_REF][START_REF] Baccus | Action potential reflection and failure at axon branch points cause stepwise changes in EPSPs in a neuron essential for learning[END_REF]), in which an unexpected event occurs when conduction is nearly blocked: the action potential that has nearly failed to invade the thick branch of the principal axon sets up a local potential that propagates backwards. Reflection occurs because impulses are sufficiently delayed as they travel through the branch point. Thus, when the delay exceeds the refractory period of the afferent axon, the impulse will propagate backwards as well as forwards, creating a reflection. This phenomenon can be identified electrophysiologically at the cell body of the P neuron because action potentials that reflected had a longer initial rising phase (or "foot"), indicating a delay in traveling through the branch point. This fast double firing in the thin branch of mechanosensory neurons has important functional consequences. It facilitates transmission at synapses formed by this axon onto postsynaptic neurons, through a mechanism of paired-pulse facilitation involving the orthodromic spike and the antidromic action potential reflected at the branch point (Fig. 13A). Reflection is not limited to P cells but also concerns T cells [START_REF] Baccus | Synaptic facilitation by reflected action potentials: enhancement of transmission when nerve impulses reverse direction at axon branch points[END_REF]. Interestingly, the facilitation of synaptic transmission also affects the chemical synapse between the P cell and the S neuron, a neuron that plays an essential role in sensitization, a nonassociative form of learning [START_REF] Baccus | Action potential reflection and failure at axon branch points cause stepwise changes in EPSPs in a neuron essential for learning[END_REF].
Reflected propagation is not restricted to mechanosensory neurons of the leech but has also been noted in the axon of an identified snail neuron [START_REF] Antic | Functional profile of the giant metacerebral neuron of Helix aspersa: temporal and spatial dynamics of electrical activity in situ[END_REF]. Reflection has not yet been definitively reported in mammalian axons (270), but it has been demonstrated in dendrites. In mitral cells of the mammalian olfactory bulb, both conduction failures (107) and reflection (108) have been observed for impulses that are initiated in dendrites (Fig. 13B). Propagation in dendrites of mitral cells is rather unusual compared with classical dendrites. Like axons, it is highly active, and no decrement in the amplitude of the AP is observed between the soma and the dendrite [START_REF] Bischofberger | Action potential propagation into the presynaptic dendrites of rat mitral cells[END_REF]. In addition, mitral cell dendrites are both pre- and postsynaptic elements. "Ping-pong" propagation has been observed following near failure of dendritic action potentials evoked in distal primary dendrites (108). Forward dendritic propagation of an action potential can be evoked by an EPSP elicited by a strong stimulation of the glomerulus. This particular form of propagation may fail near the cell body when the soma is slightly hyperpolarized. For an intermediate range of membrane potentials, the action potential invades the soma and may trigger a back-propagating AP, which is seen as a dendritic double spike in the primary dendrite.

FIG. 13. Reflection of action potentials. A: reflection and conduction block produce multilevel synaptic transmission in mechanosensory neurons of the leech. Left column: an action potential initiated by anterior minor field stimulation invades the whole axonal arborization (red) and evokes an EPSP in all postsynaptic cells. Middle column: following repetitive stimulation, the cell body is slightly hyperpolarized (orange) and the same stimulation induces a reflected action potential at the branch point between the left branch and the principal axon. The reflected action potential (pink arrow 2) stimulates the presynaptic terminal on postsynaptic cell 1 twice, thus enhancing synaptic transmission (arrow). Right column: when the cell body is further hyperpolarized (blue), the stimulation of the minor field now produces an action potential that fails to propagate at the branch point. The failed spike is seen as a spikelet at the cell body (upward arrow). No postsynaptic response is evoked in postsynaptic cell 2 (downward arrow). [Adapted from Baccus et al. (28, [START_REF] Baccus | Action potential reflection and failure at axon branch points cause stepwise changes in EPSPs in a neuron essential for learning[END_REF]).] B: reflection of action potential propagation in the presynaptic dendrite of the mitral cell. The dendritic and somatic compartments are recorded simultaneously. An action potential (1) initiated in the dendrite (d) fails to propagate towards the soma (s, dotted trace), is then regenerated at the soma (2), and propagates back to the dendrite, thus producing a double dendritic spike (thick trace in the inset). The asterisk marks the failing dendro-somatic spike. [Adapted from Chen et al. (108).]
The function of reflected propagation is not yet definitively established, but when axonal output is shut down by somatic inhibition, the primary dendrite of the mitral cell may function as a local interneuron affecting its immediate environment. Reflection of fast action potentials has also been observed in dendrites of retinal ganglion cells (544).
VIII. SPIKE TIMING IN THE AXON
A. Delay Imposed by Axonal Length
Axonal conduction introduces a delay in the propagation of neuronal output, and the axonal arborization might transform a temporal pattern of activity in the main axon into spatial patterns in the terminals (113). Axonal delay primarily depends on the conduction velocity of the action potential (generally between 0.1 m/s in unmyelinated axons and 100 m/s in large myelinated axons), which directly results from the diameter of the axon and the presence of a myelin sheath. Axonal delays may have crucial functional consequences for the integration of sensory information. In the first relay of the auditory system of the barn owl, differences in the axonal conduction delay from each ear, which in this case depend on the differences in axonal length, produce sharp temporal tuning of the binaural information that is essential for acute sound localization (Fig. 14A; Refs. [START_REF] Carr | Axonal delay lines for time measurement in the owl's brainstem[END_REF][START_REF] Carr | A circuit for detection of interaural time differences in the brain stem of the barn owl[END_REF] 358).
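To make the delay-line idea concrete, the short sketch below converts axonal path length and conduction velocity into a latency difference between two afferents; the lengths and the velocity used are illustrative assumptions, not measurements from the owl circuit.

def conduction_delay_ms(length_mm, velocity_m_per_s):
    # Latency (ms) = path length / conduction velocity.
    return (length_mm * 1e-3) / velocity_m_per_s * 1e3

velocity = 5.0                       # m/s, assumed
ipsi_mm, contra_mm = 3.0, 5.0        # hypothetical path lengths to the same binaural neuron
delta_t = conduction_delay_ms(contra_mm, velocity) - conduction_delay_ms(ipsi_mm, velocity)
print(f"latency difference ~ {delta_t:.2f} ms")   # ~0.40 ms with these assumptions

A difference of a few millimeters in path length thus translates, at these assumed velocities, into latency differences in the submillisecond range relevant for interaural time coding.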
What is the functional role of axonal delay in network behavior? Theoretical work shows that synchronization of cortical columns and network resonance both depend on axonal delay ([START_REF] Bush | Inhibition synchronizes sparsely connected cortical neurons within and between columns in realistic network models[END_REF], 344). A recent theoretical study emphasizes the importance of axonal delay in the emergence of poly-synchronization in neural networks (271). In most computational studies of storage capacity, axonal delay is totally ignored, but in fact, the interplay between axonal delays and synaptic plasticity based on timing (spike-timing-dependent plasticity, STDP) generates the emergence of polychronous groups (i.e., strongly interconnected groups of neurons that fire with millisecond precision). Most importantly, the number of groups of neurons that fire synchronously exceeds the number of neurons in a network, resulting in a system with massive memory capacity (271).
FIG. 14. Axonal propagation and spike timing. A: delay lines in the auditory system of the barn owl. Each neuron from the nucleus laminaris receives an input from each ear. Note the difference in axonal length from each side. [Adapted from Carr and Konishi (96).] B: comparison of the delay of propagation introduced by a branch point with GR > 1 (dashed traces) versus a branch point with perfect impedance matching (GR = 1, continuous traces). Top: schematic drawing of a branched axon with 3 points of recording. At the branch point with GR = 8, the shape of the action potential is distorted and the propagation displays a short latency (Δt). [Adapted from Manor et al. (349).] C: propagation failures in hippocampal cell axons are associated with conduction delays. The presynaptic neuron was slightly hyperpolarized with constant current to remove inactivation of the A-current (I A). A presynaptic action potential induced with a short delay after onset of the depolarizing pulse did not elicit an EPSC in the postsynaptic cell because of the large activation of I A. Increasing the delay permitted action potential propagation because I A was reduced during the action potential. For complete inactivation of I A (bottom pair of traces), latency decreased. [Adapted from Debanne et al. (144), with permission from Nature Publishing Group.]
However, differences in axonal delay may be erased to ensure synchronous activity. A particularly illustrative example is given by the climbing fiber inputs to cerebellar Purkinje cells. Despite significant differences in the length of individual olivocerebellar axons, the conduction time is nearly constant because long axons are generally thicker. Thus this compensatory mechanism allows synchronous activation of Purkinje cells with millisecond precision (518). Similarly, the eccentricity of X-class retinal ganglion cells within the retina is compensated by their conduction velocity to produce a nearly constant conduction time (507). Thus, regardless of the geometrical constraints imposed by retinal topography, a precise spatiotemporal representation of the retinal image can be maintained in the visual relay.
B. Delays Imposed by Axonal Irregularities and Ion Channels
In addition to this axonal delay, local changes in the geometry of the axon produce an extra delay. The presence of axonal irregularities such as varicosities and branch points reduces conduction velocity (Fig. 14B). This reduction in conduction velocity occurs as a result of a high geometrical ratio (GR; see sect. VIA). The degree of temporal dispersion has been simulated in the case of an axon from the somatosensory cortex of the cat (349). The delay introduced by high GR branch points could account for a delay of 0.5-1 ms (349). But this extra delay appears rather small compared with the delay imposed by the conduction in axon branches with variable lengths (in the range of 2-4 ms).
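As a reminder of the convention behind these GR values (the expression below restates the standard cable-theoretic definition used in simulation studies of branch points; it is not quoted from this section, which refers the reader to sect. VIA), the geometrical ratio compares daughter and parent branches through their diameters raised to the power 3/2:

\[ GR = \frac{d_{1}^{3/2} + d_{2}^{3/2}}{d_{\text{parent}}^{3/2}} \]

With this convention, GR = 1 corresponds to perfect impedance matching and undistorted propagation, whereas GR > 1 means that the daughter branches present an excess membrane load, which slows and distorts the action potential at the branch point and produces the extra latency discussed above.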
A third category of delay in conduction can be introduced during repetitive stimulation or during the activation of specific ion channels. Thus the magnitude of this delay is usually variable. It has been measured in a few cases. In lobster axons, the conduction velocity of the axon was lowered by ~30% following repetitive stimulation (231). In dorsal root ganglion neurons, the latency of conducted spikes was found to be enhanced by ~1 ms following antidromic paired-pulse stimulation of the axon (336). Computational studies indicate that this delay may also result from a local distortion of the action potential shape. Activity-dependent delays may have significant consequences on synaptic transmission. For instance, the synaptic delay was found to increase by 1-2 ms during repetitive stimulation of crayfish motor neurons (241). Monosynaptic connections to motoneurons show an increase in synaptic latency concomitant with the synaptic depression induced by repetitive stimulation at 5-10 Hz, which induced near-propagation failures (510). Similarly, a longer synaptic delay has been measured between connected hippocampal cells when conduction nearly fails, due to reactivation of A-type potassium channels (Fig. 14C; Ref. 144). Thus axonal conduction may introduce some noise into the temporal pattern of action potentials produced at the initial segment. At the scale of a nerve, delays in individual axons introduce a temporal dispersion of conduction, suggesting a stuttering model of propagation (374).
Synaptic timing at L5-L5 or CA3-CA3 unitary connections is largely determined by presynaptic release probability [START_REF] Boudkkazi | Release-dependent variations in synaptic latency: a putative code for short-and long-term synaptic dynamics[END_REF]. Synaptic latency is inversely correlated with the amplitude of the postsynaptic current, and changes in synaptic delay in the range of 1-2 ms are observed during paired-pulse and long-term plasticity involving regulation of presynaptic release [START_REF] Boudkkazi | Release-dependent variations in synaptic latency: a putative code for short-and long-term synaptic dynamics[END_REF]. Probability of transmitter release is not the only determinant of synaptic timing, however. The waveform of the axonal spike also plays a critical role. The enlargement of the axonal spike by a Kv channel blocker significantly prolongs synaptic latency at L5-L5 synapses [START_REF] Boudkkazi | Presynaptic action potential waveform determines cortical synaptic latency[END_REF]. The underlying mechanism results from the shift in the presynaptic calcium current. Because the presynaptic action potential overshoots at approximately +50 mV, the calcium current develops essentially during the repolarizing phase of the presynaptic spike ([START_REF] Augustine | Calcium entry into voltage-clamped presynaptic terminals of squid[END_REF][START_REF] Bischofberger | Timing and efficacy of Ca 2+ channel activation in hippocampal mossy fiber boutons[END_REF], 279, 328, 454). Thus the spike broadening produced by 4-AP delays the calcium current and subsequently shifts glutamate release towards longer latencies. Physiologically, spike broadening in the axon may occur when Kv channels are inactivated during repetitive axonal stimulation and may participate in the stabilization of synaptic delay [START_REF] Boudkkazi | Presynaptic action potential waveform determines cortical synaptic latency[END_REF].
The probabilistic nature of voltage-gated ion channels (i.e., channel noise) may also affect conduction time along fibers below 0.5 μm diameter. A simulation study indicates that four distinct effects may corrupt propagating spike trains in thin axons: spikes being added, deleted, jittered, or split into subgroups (186). The local variation in the number of Na+ channels may cause microsaltatory conduction.
C. Ephaptic Interactions and Axonal Spike Synchronization
Interactions between neighboring axons were first studied by Katz and Schmitt (280,281) in crab. The passage of an impulse in one axonal fiber produced a sub-threshold change in excitability in the adjacent fiber. As the action potential approaches in the active axon, the excitability of the resting fiber was first reduced, and then quickly enhanced (280,288). This effect results from the depolarization of the resting axon by the active axon because it generates locally an extracellular potential of a few millivolts. Interactions of this type are called ephaptic (from the Greek for "touching onto," Ref. 20). They are primarily observed when the extracellular conductance is reduced [START_REF] Barr | Electrophysiological interaction through the interstitial space between adjacent unmyelinated parallel fibers[END_REF]280). This condition is fulfilled, for instance, in bundles of unmyelinated axons where the periaxonal space is minimal, as in olfactory nerves [START_REF] Blinder | Intercellular interactions in the mammalian olfactory nerve[END_REF][START_REF] Bokil | Ephaptic interactions in the mammalian olfactory system[END_REF]. Ephaptic interactions between axons have also been observed in frog sciatic nerve (288) and in demyelinated spinal axons of dystrophic mice (436).
One of the most interesting features of ephaptic interaction between adjacent axons is that the conduction velocity in neighboring fibers might be unified, thus synchronizing activity in a bundle of axons. If one action potential precedes the other by a few milliseconds, it accelerates the conduction rate of the lagging action potential in the other axon ([START_REF] Barr | Electrophysiological interaction through the interstitial space between adjacent unmyelinated parallel fibers[END_REF], 280; Fig. 15). This phenomenon occurs because the ephaptic potential created in the adjacent fiber is asymmetrical. When the delay between the two spikes is small (~1-2 ms; Ref. 37), the depolarizing phase of the ephaptic potential facilitates spike generation and increases conduction velocity. However, perfectly synchronized action potentials decrease the conduction velocity in both branches because of the initial hyperpolarizing phase of the ephaptic potentials. Synchronization can only occur if the individual velocities differ only slightly and are significant for a sufficient axonal length (280). Does such synchronization also occur in mammalian axons? There is no evidence for this yet, but modeling studies indicate that the relative location of nodes of Ranvier on two adjacent myelinated axons might also determine the degree of temporal synchrony between fibers ([START_REF] Binczak | Ephaptic coupling of myelinated nerve fibers[END_REF], 440). On small unmyelinated axons, ephaptic interaction between axons is predicted to be very small (254), but future research in this direction might reveal a powerful means to thoroughly synchronize neuronal activity downstream of the site of action potential initiation.
D. Electric Coupling in Axons and Fast Synchronization
Fast communication between neurons is not only ensured by chemical synapses, but electrical coupling has been reported in a large number of cell types including inhibitory cortical interneurons (249). In the hippocampus, one type of high-frequency oscillation (100-200 Hz) called "ripple" arises from the high-frequency firing of inhibitory interneurons and phase-locked firing of many CA1 neurons (533). Some of the properties of ripple oscillation are, however, difficult to explain. First, the oscillations are so fast (near 200 Hz) that synchrony across many cells would be difficult to achieve through chemical synaptic transmission. In addition, ripples persist during pharmacological blockade of chemical transmission in vitro (162). While some inhibitory interneurons may synchronize a large number of pyramidal cells during the ripple (286), a significant part of the synchronous activity could be mediated by axo-axonal electrical synaptic contacts through gap junctions (464). Antidromic stimulation of a neighboring axon elicits a small action potential, a spikelet with a fast rate of rise (near 180 mV/ms). Spikelets can be evoked at the rate of a ripple (200 Hz), and they are blocked by TTX or by the gap junction blocker carbenoxolone. Simultaneous recording from the axon and cell body showed that the spikelet first traversed the axon prior to invading the soma and the dendrites. Finally, the labeling of pyramidal neurons with rhodamine, a small fluorescent molecule, showed dye coupling in adjacent neurons that was initiated through the axon (464). Thus the function of the axon is not limited to the conduction of the impulses to the terminal, and information may pass between adjacent pyramidal neurons through electrical synapses located close to their axon hillock.
FIG. 15. Ephaptic interaction in axons. A: local circuit diagram in a pair of adjacent axons. The red area indicates the "active region." The action currents produced by the action potential penetrate the inactive axon. B: schematic representation of resynchronization of action potentials in a pair of adjacent axons. While the spikes propagate along the axons, the initial delay between them becomes reduced. [Adapted from Barr and Plonsey (37) and Katz and Schmitt (280).]
A similar mechanism of electrical coupling between proximal axons of Purkinje cells is supposed to account for very fast oscillations (>75 Hz) in the cerebellum. Very fast cerebellar oscillations recorded in cerebellar slices are indeed sensitive to gap junction blockers (368). In addition, spikelets and fast prepotentials eliciting full spikes are observed during these episodes. In fact, the simulation of a cerebellar network where Purkinje cells are sparsely linked through axonal gap junctions replicates the experimental observations (534).
Cell-cell communication through axons of CA1 pyramidal neurons has recently been suggested in vivo (181). Using the newly developed technique of in vivo whole cell recording in freely moving rats (321,322,352), the group of Michael Brecht found that most records from CA1 cells (~60%) display all-or-none events, with electrophysiological characteristics similar to spikelets resulting from electrical coupling in the axon. These events have a fast rise time (<1 ms) and a biphasic decay time. They occur during ripples as bursts of three to six events (181).
IX. ACTIVITY-DEPENDENT PLASTICITY OF AXON MORPHOLOGY AND FUNCTION
A. Morphological Plasticity
The recent development of long-term time lapse imaging in vitro and in vivo (255) has revealed that axon morphology is highly dynamic. Whereas the large-scale organization of the axonal arborization remains fairly stable over time in adult central neurons, a subset of axonal branchlets can undergo impressive structural rearrangements in the range of a few tens of micrometers (review in Ref. 256). These rearrangements affect both the number and size of en passant boutons as well as the complexity of axonal arborization. For instance, the hippocampal mossy fiber terminals are subject to dramatic changes in their size and complexity during in vitro development and in the adult in vivo following exposure to enriched environment (203,204,216). The turnover of presynaptic boutons in well identified Schaffer collateral axons is increased following induction of LTD in vitro [START_REF] Becker | LTD induction causes morphological changes of presynaptic boutons and reduces their contacts with spines[END_REF]. Finally, in an in vitro model of traumatic epilepsy, transection between the CA3 and CA1 region induces axonal sprouting associated with an increase in the density of boutons per unit length (360).
Axonal reorganization has also been reported in vivo. In the visual cortex, a subset of geniculo-cortical axonal branches can undergo structural rearrangements during development [START_REF] Antonini | Rapid remodeling of axonal arbors in the visual cortex[END_REF] and in the adult (508) or following activity deprivation (239,240,554). Similar observations have been reported in the barrel cortex during development (417) and in the adult mice (137). However, one should note that the magnitude of axonal rearrangements is much larger during the critical period of development.
In the adult mouse cerebellum, transverse, but not ascending, branches of climbing fibers are dynamic, showing rapid elongation and retraction (383). The motility of axonal branches is clearly demonstrated in all these studies, and it certainly reflects dynamic rewiring and functional changes in cortical circuits. Neuronal activity seems to play a critical role in the motility of the axon, but the precise mechanisms are not clearly understood. For instance, stimulation of the axon freezes dynamic changes in cerebellar climbing fibers in vivo (383). Similarly, the fast motility of axonal growth cones of hippocampal neurons in vitro is reduced by stimulation of GluR6 kainate receptors or electrical stimulation and depends on axonal calcium concentration (266). In contrast, the slow remodeling of local terminal arborization complexes of the mossy fiber axon is reduced when Na+ channel activity is blocked with TTX (204).
Electrical activity not only determines axon morphology but also controls induction of myelination in developing central and peripheral axons. For instance, blockade of Na+ channel activity with TTX reduces the number of myelinated segments and the number of myelinating oligodendrocytes, whereas increasing neuronal excitability has the opposite effects (149). In contrast, electrical stimulation of dorsal root ganglion neurons delays myelin formation (509). In this case, ATP released by active axons is subsequently hydrolyzed to adenosine, which stimulates adenosine receptors in Schwann cells and freezes their differentiation. Neuronal activity is also thought to determine the maintenance of the myelin sheath in adult axons. In the hindlimb unloading model, myelin thickness is tightly controlled by motor activity [START_REF] Canu | Activity-dependent regulation of myelin maintenance in the adult rat[END_REF]. Myelin is thinner in axons controlling inactive muscles but thicker in hyperactive axons.
B. Functional Plasticity
Beyond morphological rearrangements, the axon is also able to express many forms of functional plasticity (520, 557). In fact, several lines of evidence suggest that ion channel activity is highly regulated by synaptic or neuronal activity (reviews in Refs. 135, 493, 582). Therefore, some of the axonal operations described in this review could be modulated by network activity. Axonal plasticity can be categorized into Hebbian and homeostatic forms of functional plasticity according to the effects of the induced changes in neuronal circuits. Hebbian plasticity usually serves to store relevant information and to some extent destabilizes neuron ensembles, whereas homeostatic plasticity is compensatory and stabilizes network activity within physiological bounds (420, 539).
Hebbian plasticity of axonal function
There are now experimental facts suggesting that Hebbian functional plasticity exists in the axon. For instance, the repetitive stimulation of Schaffer collateral axons at 2 Hz leads to a long-lasting lowering of the antidromic activation threshold (361). Although the precise expression mechanisms have not been characterized here, this study suggests that axonal excitability is persistently enhanced if the axon is strongly stimulated. Furthermore, LTP and LTD are respectively associated with increased and decreased changes in intrinsic excitability of the presynaptic neuron (205, 324). These changes imply retrograde messengers that target the presynaptic neuron. Although these changes are detected in the cell body, the possibility that ion channels located in the axon are also regulated cannot be excluded. Two parallel studies have recently reported a novel form of activity-dependent plasticity in a subclass of inhibitory interneurons of the cortex and hippocampus containing neuropeptide Y (476, 523). Stimulation of the interneuron at 20 -40 Hz leads to an increase in action potential firing lasting several minutes. In both studies, the persistent firing is consistent with the development of an ectopic spike initiation zone in the distal region of the axon.
Homeostatic axonal plasticity
The expression of axonal channels might be regulated by chronic manipulation of neuronal activity according to the homeostatic scheme of functional plasticity. For instance, blocking neuronal activity by TTX enhances both the amplitude of the transient Na+ current (150) and the expression of Na+ channels in hippocampal neurons [START_REF] Aptowicz | Homeostatic plasticity in hippocampal slice cultures involves changes in voltage-gated Na+ channel expression[END_REF]. Although the subcellular distribution of Na+ channels was not precisely determined in these studies, they might be upregulated in the axon. Indeed, axon regions that become silent because of acute demyelination express a higher density of Na+ channels which eventually allows recovery of active spike propagation ([START_REF] Bostock | The internodal axon membrane: electrical excitability and continuous conduction in segmental demyelination[END_REF], 195, 555). Activity deprivation not only enhances intrinsic excitation but also reduces the intrinsic neuronal brake provided by voltage-gated K+ channels (131,141,150). Chronic inactivation of neuronal activity with TTX or synaptic blockers inhibits the expression of Kv1.1, Kv1.2, and Kv1.4 potassium channels in the cell body and axon of cultured hippocampal neurons (229). Although the functional consequences were not analyzed here, this study suggests that downregulation of Kv1 channels would enhance neuronal excitability and enlarge axonal spike width.
The position of the AIS relative to the cell body is also subject to profound activity-dependent reorganization (Fig. 16A). In a recent study, Grubb and Burrone (232,233) showed that brief network-wide manipulation of electrical activity determines the position of the AIS in hippocampal cultured neurons. The AIS, identified by its specific proteins ankyrin-G and βIV-spectrin, is moved up to 17 μm distally (i.e., without any change in the AIS length) when activity is increased by high external potassium or illumination of neurons transfected with channelrhodopsin-2 during 48 h (232; Fig. 16B). The relocation of the AIS is reversible and depends on T- and L-type calcium channels, suggesting that intra-axonal calcium may control the dynamics of the AIS protein scaffold. This bidirectional plasticity might be a powerful means to adjust the excitability of neurons according to the homeostatic rule of plasticity (539). In fact, neurons with a proximal AIS are generally more excitable than those with a distal AIS, suggesting that shifting the location of the AIS distally elevates the current threshold for action potential generation (232,298). Thus these data indicate that AIS location is a mechanism for homeostatic regulation of neuronal excitability.
Homeostatic AIS plasticity might be a general rule and may account for the characteristic frequency-dependent distribution of sodium channels along the axon of chick auditory neurons (302,303). In neurons that preferentially analyze high auditory frequencies (~2 kHz), sodium channels are clustered at 20-50 μm from the soma, whereas they are located in the proximal part of the axon in neurons that detect lower auditory frequencies (~600 Hz; Ref. 302). A recent study from Kuba and coworkers (304) directly demonstrates the importance of afferent activity in AIS position in chick auditory neurons. Removing the cochlea in young chicks produces an elongation of the AIS in nucleus magnocellularis neurons without affecting its distance from the cell body (Fig. 16C; Ref. 304). This regulation is associated with a compatible increase in the whole-cell Na+ currents.
Axonal excitability is also homeostatically tuned on short-term scales. Sodium channel activity is downregulated by many neuromodulators and neurotransmitters including glutamate that classically enhances neuronal activity [START_REF] Cantrell | Neuromodulation of Na+ channels: an unexpected form of cellular plasticity[END_REF][START_REF] Carlier | Metabotropic glutamate receptor subtype 1 regulates sodium currents in rat neocortical pyramidal neurons[END_REF]. Although further studies will be required to precisely determine the location of the regulated Na+ channels, it is nevertheless tempting to speculate that AIS excitability might be finely tuned.
X. PATHOLOGIES OF AXONAL FUNCTION
Beyond Wallerian degeneration that may be caused by axon sectioning, deficits in axonal transport (121,138), or demyelination (381), the axon is directly involved in at least two large families of neurological disorders. Neurological channelopathies such as epilepsies, ataxia, pain, myotonia, and periodic paralysis usually result from dysfunction in ion channel properties or targeting (130,306,409,461). The major consequences of these alterations are dysfunctions of neuronal excitability and/or axonal conduction (297). In addition, some forms of Charcot-Marie-Tooth disease affect primarily the axon (297,367,519). They mainly lead to deficits in axonal propagation (297, 519).
A. Axonal Diseases Involving Ion Channels
Epilepsies
Many ion channels expressed in the axons of cortical neurons are mutated in human epilepsies, and dysfunction of the AIS is often at the origin of epileptic phenotypes (562). For instance, mutations of the gene SCN1A encoding Nav1.1 cause several epileptic phenotypes including generalized epilepsy with febrile seizure plus (GEFS+) and severe myoclonic epilepsy of infancy (SMEI) ([START_REF] Baulac | A second locus for familial generalized epilepsy with febrile seizures plus maps to chromosome 2q21-q33[END_REF], 114, 182; Fig. 17). Some of these mutations do not produce a gain of function (i.e., hyperexcitability) as expected in the case of epilepsy, but rather a loss of function (505). Since Nav1.1 channels are highly expressed in the axons of GABAergic neurons (394), a decrease in excitability in inhibitory neurons will enhance excitability of principal neurons that become less inhibited. Mice lacking SCN1A display spontaneous seizures because the sodium current is reduced in inhibitory interneurons but not in pyramidal cells (576). Similarly, deletions or mutations in Kv1.1 channels produce epilepsy (495) and episodic ataxia type 1 (EA1), characterized by cerebellar incoordination and spontaneous motor-unit activity [START_REF] Browne | Episodic ataxia/myokymia syndrome is associated with point mutations in the human potassium channel gene, KCNA1[END_REF]. Mutations in KCNQ2/3 (Kv7.2/Kv7.3) channels produce several forms of epilepsy such as benign familial neonatal convulsions (BFNC; Refs. 406, 466, 492; Fig. 17). Some mutations may also target ion channels located in axon terminals. For instance, a missense mutation in the KCNMA1 gene encoding BK channels is associated with epilepsy and paroxysmal dyskinesia (164; Fig. 15).
FIG. 17. Axonal channelopathies in cortical circuits. The possible roles of axonal ion channels implicated in epilepsy are illustrated schematically. Mutations in Nav1.1 from axons of GABAergic interneurons produce a loss of Na-channel function (i.e., reduced excitability of inhibitory interneurons but increased network activity) that might underlie epilepsy with febrile seizure plus (GEFS+) or severe myoclonic epilepsy of infancy (SMEI). Mutations in Kv7.2/7.3 channels lead to a loss of function (i.e., an increase in excitability of principal neurons) and may result in benign familial neonatal convulsions (BFNC). Deletions or mutations in Kv1.1 increase neuronal excitability and produce episodic ataxia type 1.
Epilepsies may also be acquired following an initial seizure. For instance, many epileptic patients display graduated increases in the frequency and strength of their crises, indicating that epilepsy might be acquired or memorized by neuronal tissue. The cellular substrate for this enhanced excitability is thought to be long-lasting potentiation of excitatory synaptic transmission [START_REF] Bains | Reciprocal interactions between CA3 network activity and strength of recurrent collateral synapses[END_REF]146), but enhanced neuronal excitability might be also critical [START_REF] Beck | Plasticity of intrinsic neuronal properties in CNS disorders[END_REF][START_REF] Bernard | Acquired dendritic channelopathy in temporal lobe epilepsy[END_REF][START_REF] Blumenfeld | Role of hippocampal sodium channel Nav1.6 in kindling epileptogenesis[END_REF]517). These changes in excitability are generally proepileptic, but additional work will be required to determine whether axonal channels specifically contribute to acquired epilepsy phenotypes.
In addition to epilepsy, mutations in the SCNA1A or CACNA1A gene can also lead to cases of familial hemiplegic migraine; these mutations have mixed effects when studied in expression systems that could explain how they concur to cortical spreading depression (103, 408).
Axonal channelopathies in the PNS
Mutations in axonal channels may be involved in several diseases that affect the PNS. For instance, pain disorders are often associated with mutations of the SCN9A gene encoding the alpha subunit of Nav1.7, which cause either allodynia (i.e., burning pain; Refs. 189, 571) or analgesia (129). Pain is usually associated with a gain of function of Nav1.7 (i.e., lower activation threshold or reduced inactivation; Refs. 189, 238).
B. Axonal Diseases Involving Myelin
Multiple sclerosis
Multiple sclerosis (MS) is characterized by multiple attacks on CNS myelin that may lead to sensory (principally visual) and/or motor deficits (532,555). MS is generally diagnosed in young adults (before 40), and the progression of the disease often alternates phases of progression and remission, where the patient recovers because compensatory processes occur, such as Na+ channel proliferation in the demyelinated region (555). Although the etiology of MS is multifactorial, with hereditary, infectious, and environmental factors, the most important determinant of MS is dysregulation of the immune system, including autoimmune diseases directed against myelin proteins. The main consequence is a partial or total loss of myelin that prevents axonal conduction in axons of the optic nerves or corticospinal tracts.
Charcot-Marie-Tooth disease
Charcot-Marie-Tooth (CMT) disease affects myelin of PNS axons and constitutes a highly heterogeneous group of genetic diseases. These diseases generally invalidate molecular interactions between axonal and glial proteins that stabilize myelin produced by Schwann cells. The most frequent forms, CMT1A, CMT1B, and CMT1X, are caused by mutations in genes which encode three components of the myelin sheath, peripheral myelin protein-22 (PMP22), myelin protein zero (MPZ), and connexin 32, respectively (519).
Hereditary neuropathy with liability to pressure palsies
Hereditary neuropathy with liability to pressure palsies (HNPP) is a genetic disease that results from a deficiency in the gene coding for PMP22 (104). HNPP is characterized by focal episodes of weakness and sensory loss and is associated with abnormal myelin formation leading to conduction blocks [START_REF] Bai | Conduction block in PMP22 deficiency[END_REF].
XI. CONCLUDING REMARKS
A. Increased Computational Capabilities
Axons achieve several fundamental operations that go far beyond classical propagation. Like active dendrites, axons amplify and integrate subthreshold and suprathreshold electrical signals [START_REF] Alle | Combined analog and action potential coding in hippocampal mossy fibers[END_REF]144,179,291,489). In addition, the output message can be routed in selective axonal pathways at a defined regime of activity. The consequences of this are not yet well understood in mammalian axons, but branch point failures may participate in the elaboration of sensory processing in invertebrate neurons (234). Axonal propagation may also bounce back at a branch point or at the cell body, but at present, there are only a handful of examples showing reflected propagation [START_REF] Antic | Functional profile of the giant metacerebral neuron of Helix aspersa: temporal and spatial dynamics of electrical activity in situ[END_REF][START_REF] Baccus | Synaptic facilitation by reflected action potentials: enhancement of transmission when nerve impulses reverse direction at axon branch points[END_REF][START_REF] Baccus | Action potential reflection and failure at axon branch points cause stepwise changes in EPSPs in a neuron essential for learning[END_REF]108). Reflected impulses may limit the spread of the neuronal message and enhance synaptic transmission. Theoretical and experimental studies indicate that reflection of action potentials could occur in axons that display large swellings or a branch point with high GR. Moreover, axonal delay is important to set network resonance (344) and increase storage capacity in neuronal networks (271). Finally, axonal coupling through ephaptic interactions or gap junctions may precisely synchronize network activity (448,464). All these operations increase the computational capabilities of axons and affect the dynamics of synaptic coupling. Many pieces of the puzzle are, however, still missing.
The computational capabilities of axons might be further extended by another unexpected and important feature: their capacity to express both morphological and functional plasticity. There is now evidence for Hebbian and homeostatic long-term axonal plasticities that might further enhance the computational capacity of the circuits (232,233,304). Thus activity-dependent plasticity is not restricted to the input side of the neuron (i.e., its dendrites and postsynaptic differentiation), but it may also directly involve axonal function.
B. Future Directions and Missing Pieces
In the recent past, most (if not all) of our knowledge about axonal computation capabilities was derived from experiments on invertebrate neurons or from computer simulations (470). The use of paired-recording techniques (140, 144) and the recent spread of direct patch-clamp recordings from the presynaptic terminal [START_REF] Alle | Combined analog and action potential coding in hippocampal mossy fibers[END_REF][START_REF] Bischofberger | Patchclamp recording from mossy fiber terminals in hippocampal slices[END_REF]179,432) or from the axon (259, 291, 292, 488 -490) suggest that the thin mammalian axon will yield up all its secrets in the near future. There are good reasons to believe that, combined with the development of high-resolution imaging techniques like multiphoton confocal microscopy (128,193,194,289), second-harmonic generation microscopy (160) and voltage-sensitive dyes [START_REF] Antic | Functional profile of the giant metacerebral neuron of Helix aspersa: temporal and spatial dynamics of electrical activity in situ[END_REF][START_REF] Bradley | Submillisecond optical reporting of membrane potential in situ using a neuronal tracer dye[END_REF]196,215,228,327,396,397,580), this technique will be a powerful tool to dissect the function of axons. Development of nanoelectronic recording devices will also probably offer promising solutions to solve the problem of intracellular recording from small-diameter axons (530).
Axonal morphology and the subcellular localization of ion channels play crucial roles in conduction properties, and propagation failures or reflected propagation may result from the presence of axonal irregularities such as varicosities and branch points. However, detailed quantitative analysis of the morphometry of single axons, combined with the quantitative immunostaining of sodium channels as used recently by Lorincz and Nusser (333), will be needed. The use of recently developed molecular tools to target defined channel subunits towards specific axonal compartments could be of great help in determining their role in axonal propagation.
Fine temporal tuning can be achieved by axons. Differences in axonal length in the terminal axonal tuft introduce delays of several milliseconds. Is temporal scaling of action potential propagation in the axonal arborization relevant to the coding of neuronal information? Differential conduction delays in axonal branches participate in precise temporal coding in the barn owl auditory system [START_REF] Carr | Axonal delay lines for time measurement in the owl's brainstem[END_REF][START_REF] Carr | A circuit for detection of interaural time differences in the brain stem of the barn owl[END_REF]358). But the role of axonal delays has only been studied in artificial neural networks [START_REF] Bush | Inhibition synchronizes sparsely connected cortical neurons within and between columns in realistic network models[END_REF]271,344) or in vitro neuronal circuits [START_REF] Bakkum | Long-term activity-dependent plasticity of action potential propagation delay and amplitude in cortical networks[END_REF], and additional work will have to be done to describe its implication in hybrid (i.e., neuron-computer) or in in vivo networks. Furthermore, understanding the conflict faced by cortical axons between space (requirement to connect many different postsynaptic neurons) and time (conduction delay that must be minimized) will require further studies [START_REF] Budd | Neocortical axon arbors trade-off material and conduction delay conservation[END_REF].
Local axonal interactions like ephaptic coupling and gap-junction coupling allow very fast synchronization of activity in neighboring neurons. Surprisingly, little experimental effort has been devoted to ephaptic interactions between axons. This mechanism represents a powerful means to precisely synchronize output messages of neighboring neurons. Perhaps ephaptic interactions between parallel axons could compensate the "stuttering conduction" that is introduced by axonal varicosities and branch points (374). The implications of these mechanisms in synchronized activity will have to be determined in axons that display favorable geometrical arrangement for ephaptic coupling (i.e., fasciculation over a sufficient axonal length). Callosal axons, mossy fibers, and Schaffer collaterals are possible candidates.
In conclusion, we report here evidence that beyond classical propagation many complex operations are achieved by the axon. The axon displays a high level of functional flexibility that was not expected initially. Thus it may allow a fine tuning of synaptic strength and timing in neuronal microcircuits. There are good reasons to believe that after the decade of the dendrites in the 1990s, a new era of axon physiology is now beginning.
FIG. 3. High concentration of functional sodium channels at the AIS of cortical pyramidal neurons. A: changes in intracellular Na+ during action potentials are largest in the AIS. A L5 pyramidal neuron was filled with the Na+-sensitive dye SBFI and the variations in fluorescence measured at different distances from the axon hillock. The signal is larger in the AIS (25 μm) and rapidly declines along the axon (55 μm) or at proximal locations (5 μm or soma). [Adapted from Kole et al. (290), with permission from Nature Publishing Group.] B: Na+ channel density is highest at the AIS. Top: Na+ currents evoked by step depolarizations (30 ms) from a holding potential of -100 to +20 mV in outside-out patches excised from the soma (black), AIS (orange, 39 μm), and axon (red, 265 μm). Bottom: average amplitude of peak Na+ current obtained from different compartments. [From Hu et al. (259), with permission from Nature Publishing Group.] C: high-resolution immunogold localization of the Nav1.6 subunit in AIS of CA1 pyramidal neuron. Gold particles labeling the Nav1.6 subunits are found at high density on the protoplasmic face of an AIS. Note the lack of immunogold particles in the postsynaptic density (PSD) of an axo-axonic synapse. [From Lorincz and Nusser (333), with permission from the American Association for the Advancement of Science.]
FIG. 5. Spike initiation in the AIS. A: confocal images of two L5 pyramidal neurons labeled with biocytin (A. Bialowas, P. Giraud, and D. Debanne, unpublished data). Note the characteristic bulbous end of the severed axon ("bleb"). B: dual soma-axonal bleb recording in whole-cell configuration from a L5 pyramidal neuron. Left: scheme of the recording configuration. Right: action potentials measured in the soma (black) and in the axon (red). C: determination of the spike initiation zone. Scheme of the time difference between axonal and somatic spikes as a function of the axonal distance (origin: soma). The maximal advance of the axonal spike is obtained at the AIS (i.e., the spike initiation zone). The slope of the linear segment of the plot gives an estimate of the conduction velocity along the axon.
FIG. 6. Spike threshold is lowest in the AIS. A: lower current threshold but higher voltage threshold in the AIS of L5 pyramidal neurons. Left: overlaid voltage responses during current injection into the AIS (blue) or soma (black) at the action potential threshold. Note the depolarized voltage threshold in the AIS compared with the soma. Right: average amplitude of injected current versus action potential probability for action potentials evoked by current injection in the AIS (open circles) or soma (solid circles). Note the lower current threshold in the AIS. B: slow depolarizing ramp mediated by Na+ channels in the AIS but not in the soma. Left: action potentials generated by simulated EPSC injection at the soma and recorded simultaneously at the soma (black) and AIS (blue). Middle: same recording in the presence of TTX (1 μM). Right: voltage difference (AIS-soma) in control (gray) and TTX (red) reveals a depolarizing ramp in the AIS before spike initiation. [Adapted from Kole and Stuart (292), with permission from Nature Publishing Group.]
FIG. 10. Depolarization of the presynaptic soma facilitates synaptic transmission through axonal integration. A: facilitation of synaptic transmission in connected L5-L5 pyramidal neurons. Left: experimental design. Synaptic transmission is assessed when presynaptic action potentials are elicited either from rest (-62 mV) or from a depolarized potential (-48 mV). Right: averaged EPSP amplitude at two presynaptic somatic membrane potentials. Note the facilitation when the presynaptic potential is depolarized. [Adapted from Shu et al. (489), with permission from Nature Publishing Group.] B: mechanism of presynaptic voltage-dependent facilitation of synaptic transmission. Top: the cell body and the axon of a cortical pyramidal neuron are schematized. When an action potential is elicited from the resting membrane potential (RMP, -65 mV), the spike in the axon is identical in the proximal and distal part of the axon. Postsynaptic inward currents are shown below. Bottom: an action potential elicited from a steady-state depolarized value of -50 mV is larger in the proximal part of the axon (because I D is inactivated) but unchanged in the distal part (because I D is not inactivated by the somatic depolarization). As a result, synaptic efficacy is enhanced for the proximal synapse (red inward current) but not for the distal synapse (blue inward current).
FIG. 11. Propagation failures in invertebrate neurons. A: propagation failure at a branch point in a lobster axon. The main axon and the medial and lateral branches are recorded simultaneously. The repetitive stimulation of the axon (red arrow) at a frequency of 133 Hz produces a burst of full spike amplitude in the axon and in the lateral branch but not in the medial branch. Note the electrotonic spikelet in response to the third stimulation. [Adapted from Grossman et al. (230), with permission from Wiley-Blackwell.] B: propagation failure at the junction between an axonal branch and the soma of a snail neuron (metacerebral cell). The neuron was labeled with the voltage-sensitive styryl dye JPW1114. The propagation in the axonal arborization was analyzed by the local fluorescence transients due to the action potential. The recording region is indicated by an outline of a subset of individual detectors, superimposed over the fluorescence image of the neuron in situ. When the action potential was evoked by direct stimulation of the soma, it propagated actively in all axonal branches (red traces). In contrast, when the action potential was evoked by the synaptic stimulation (EPSP) of the right axonal branch (Br1), the amplitude of the fluorescent transient declined when approaching the cell body, indicating a propagation failure (black traces). [Adapted from Antic et al. (12), with permission from John Wiley & Sons.]
FIG. 12. Propagation failures in mammalian axons. A: propagation failures in a Purkinje cell axon. Top: fluorescent image of a Purkinje cell filled with the fluorescent dye Alexa 488. The locations of the somatic and axonal recordings are indicated schematically. [Adapted from Monsivais et al. (372).] B: gating of action potential propagation by the potassium current I A. Left: at resting membrane potential, presynaptic I A was inactivated and the action potential evoked in the presynaptic cell propagated and elicited an EPSP in the postsynaptic cell. Right: following a brief hyperpolarizing prepulse, presynaptic I A recovered from inactivation and blocked propagation. Consequently, no EPSP was evoked by the presynaptic action potential. [Adapted from Debanne et al. (144), with permission from Nature Publishing Group.]
FIG. 16. Activity-dependent plasticity of AIS. A: scheme of the homeostatic regulation of AIS location in cultured hippocampal neurons (left) and in brain stem auditory neurons (right). AIS is moved distally following chronic elevation of activity by high external K+ or photostimulation of neurons expressing the light-activated cation channel channelrhodopsin 2 (ChR2) (left). AIS length is augmented in chick auditory neurons following cochlea removal (right). B: ankyrin G label in control neurons and in neurons treated with 15 mM K+ during 48 h (scale bar: 20 μm). [From Grubb and Burrone (232), with permission from Nature Publishing Group.] C: AIS plasticity in chick auditory neurons. Sodium channels have been immunolabeled with pan-Na channel antibody. Neurons from deprived auditory pathway display longer AIS (right) than control neurons (left). [From Kuba et al. (304), with permission from Nature Publishing Group.]
ACKNOWLEDGMENTS
We thank M. Seagar for constant support, P. Giraud for providing confocal images of labeled neurons, and S. Binczak for helpful discussion. We thank J. J. Garrido, A. Marty, M. Seagar, and F. Tell for helpful comments on the manuscript and the members of D. Debanne's lab for positive feedback.
Address for reprint requests and other correpondence: D. Debanne, Université de la Méditerranée, Faculté de médecine secteurnord,IFR11,Marseille,F-13916France(e-mail:dominique. [email protected]).
GRANTS
This work was supported by Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Ministry of Research (doctoral grants to E. Campanac and A. Bialowas), Fondation pour la Recherche Médicale (to E. Campanac), and Agence Nationale de la Recherche (to D. Debanne and G. Alcaraz).
DISCLOSURES
No conflicts of interest, financial or otherwise, are declared by the authors. | 170,142 | [ "843972" ] | [ "528581", "528581", "528581", "528581", "46221" ] |
01766868 | en | [ "math" ] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01766868/file/INDRUM2018_Vandebrouck-Bechir%20reviewed.pdf | Sghaier Salem Béchir
email: [email protected]
Fabrice Vandebrouck²
Teaching and learning continuity with technologies
Keywords: teaching and learning of analysis and calculus, novel approaches to teaching, continuity, digital technologies
We developed a digital tool aiming at introducing the concept of local continuity together with its formal definition for Tunisian students at the end of secondary school. Our approach is a socioconstructivist one, combining conceptualisation in the sense of Vergnaud with Vygotski's concepts of mediation and ZPD. In the paper, we focus on the design of the tool and we give some glimpses of students' productions with the tool and of teachers' discourses intended to foster students' understanding of continuity.
The definition of continuity of functions at a given point, together with the concept of continuity, remains a major difficulty in the teaching and learning of analysis. There is a dialectic between the definition and the concept itself which makes it necessary to introduce the two aspects together.
The definition of continuity brings FUG aspects in the sense of [START_REF] Rabardel | L'acquisition de la notion de convergence des suites dans l'enseignement supérieur[END_REF]. This means first that it makes it possible to formalize (F) the concept of continuity. It also makes it possible to unify (U) several different images (or situations) of continuity encountered by students: in [START_REF] Tall | Concept image and concept definition in mathematics, with special reference to limits and continuity[END_REF], several emblematic situations of continuity are established (see below), and the definition aims at unifying all these different kinds of continuity. Moreover, the definition of continuity allows generalisations (G) to all other numerical functions, not yet encountered and not necessarily given by graphical representations, or to more general functions in other function spaces. As [START_REF] Rabardel | L'acquisition de la notion de convergence des suites dans l'enseignement supérieur[END_REF] stresses for the definition of the limit of sequences, notions which carry FUG aspects must be introduced with specific attention to mediations and especially to the role of the teacher.
Our ambition is then to design a technological tool which, on the one hand, supports students' activities concerning the two aspects of continuity and, on the other hand, allows the teacher to introduce the concept of continuity with its formal definition by referring to the activities developed with the tool. As was noticed at the first INDRUM conference, papers about the introduction of technologies in the teaching of analysis remain scarce.
We first come back to well-known concept images and concept definitions of continuity. Then, we explain our theoretical frame concerning conceptualisation and mathematical activities. This theoretical frame leads us to the design of the technological tool, which carries most of the aspects we consider important for the conceptualisation of continuity. Due to space constraints, the results of the paper are mostly in terms of the design itself and of the way the tool encompasses our theoretical frame and our hypotheses about conceptualisation (with tasks, activities and opportunities for mediations). We then give some glimpses of students' activities with the software, as well as of teachers' discourses introducing the definition of continuity on the basis of students' mathematical activities with the software.
CONCEPT IMAGES AND CONCEPT DEFINITIONS OF CONTINUITY
No one can speak about continuity without referring to Tall and Vinner's paper about concept images and concept definitions in mathematics, which pays special attention to limits and continuity [START_REF] Tall | Concept image and concept definition in mathematics, with special reference to limits and continuity[END_REF]. Tall considers that the concept definition is one part of the total concept image that exists in our mind. Additionally, it is understood that learners enter the acquisition process of a newly introduced concept with preexisting concept images. [START_REF] Sierpinska | On understanding the notion of function[END_REF] used the notion of epistemological obstacles regarding some properties of functions and especially the concept of limit. Epistemological obstacles for continuity are very close to those observed for the concept of limit, and they can be directly related to students' concept images, as a specific origin of these conceptions (El Bouazzaoui, 1988). One of these obstacles can be associated with what we call a primitive concept image: a geometrical and very intuitive conception of continuity, related to the appearance of the curve. With this concept image, continuity and differentiability are often conflated, and continuity mainly means that the curve is smooth and has no angles. Historically, this primitive conception led Euler to introduce a definition of continuity based on algebraic representations of functions. This led to a second epistemological obstacle: a continuous function is given by only one algebraic expression, which can be called the algebraic concept image of continuity. This conception in turn became an obstacle with the beginning of Fourier analysis, after which a clear definition became necessary. This definition came with Cauchy and Weierstrass, and it is close to our current formal definition.
We also refer to [START_REF] Bkouche | Points de vue sur l'enseignement de l'analyse : des limites et de la continuité dans l'enseignement[END_REF], who identifies three points of view on the continuity of functions which are more or less connected to the epistemological obstacles we have highlighted. The first one is a kinematic point of view: with this dynamic concept image, Bkouche says that the variable pulls the function. The second one is an approximation point of view: the desired degree of approximation of the function pulls the variable. This last point of view is more static and leads easily to the formal definition of continuity. These two points of view are also introduced by [START_REF] Rabardel | L'acquisition de la notion de convergence des suites dans l'enseignement supérieur[END_REF] when she studies the introduction of the formal definition of limit (for sequences). A third point of view is also identified by Bkouche, namely the algebraic point of view, which is about algebraic rules without any idea of the meaning of these rules.
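To make this correspondence explicit (the formulation below is the standard one and is not quoted from the digital tool or from a particular curriculum), the approximation point of view translates directly into the usual formal definition of continuity of f at a point a of its domain:

\[ \forall \varepsilon > 0, \ \exists \delta > 0, \ \forall x \in D_f, \quad |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon . \]

Read in this order, the desired degree of approximation ε of the values f(x) is chosen first and dictates the constraint δ on the variable (the approximation pulls the variable), whereas in the kinematic point of view one first lets x move towards a and watches f(x) follow.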
Finally, we refer to more recent papers, and specifically to Hanke and Schafer (2017), presented at the last CERME congress, about continuity. Their review of central papers on students' conceptions of continuity leads to a classification of the eight possible mental images reported in the literature:
I: Look of the graph of the function: "A graph of a continuous function must be connected"
II: Limits and approximation: "The left hand side and right hand side limit at each point must be equal"
III: Controlled wiggling: "If you wiggle a bit in x, the values will only wiggle a bit, too"
IV: Connection to differentiability: "Each continuous function is differentiable"
V: General properties of functions: "A continuous function is given by one term and not defined piecewise"
VI: Everyday language: "The function continues at each point and does not stop"
VII: Reference to a formal definition: "I have to check whether the definition of continuity applies at each point"
VIII: Miscellaneous
We can recognize some of the previous categories, even if some refinements are introduced. Mainly, concept images I, II, IV and VI can be related to the primitive concept image, whereas VII refers to the formal definition and V seems to refer to the algebraic approach to continuity.
CONCEPTUALISATION OF CONTINUITY
We base our research work on these possible concept images and concept definitions of continuity. However, we are more interested in conceptualisation, as the process which describes the development of students' mathematical knowledge. Conceptualisation in our sense was mainly introduced by [START_REF] Vergnaud | La théorie des champs conceptuels[END_REF] and has been extended within an activity-theoretical frame developed in French didactics of mathematics. These developments articulate two epistemological approaches: that of mathematics didactics and that of developmental cognitive psychology, as discussed and developed in [START_REF] Vandebrouck | Activity Theory in French Didactic Research[END_REF].
Broadly, conceptualisation means that the developmental process occurs within students' actions over a class of mathematical situations characteristic of the concept involved. This class of situations brings technical tasks -direct applications of the concept involved -as well as tasks requiring adaptations of this concept. A list of such adaptations can be found in [START_REF] Horoks | Tasks Designed to Highlight Task-Activity Relationships[END_REF]: for instance, mixing the concept with other knowledge, conversions between several registers of representation [START_REF] Duval | Sémiosis et pensée humaine: registres sémiotiques et apprentissages intellectuels[END_REF], the use of different points of view, etc. Tasks that require these adaptations of knowledge or concepts are called complex tasks. They encourage conceptualisation, because students become able to develop high-level activities allowing availability and flexibility around the relevant concept.
A level of conceptualisation refers to such a class of situations, in a more modest sense and with explicit reference to school curricula. In this paper, the level of conceptualisation refers to the end of scientific secondary school in Tunisia or the beginning of scientific university in France. It presupposes enough activities to allow the teacher to introduce the formal definition of continuity together with the sense of the continuity concept. The aim is not to obtain from students a high level of technicity about the definition itself - students are not supposed to establish or manipulate the negation of the definition, for instance. However, this level of conceptualisation supposes that students access the FUG aspects of the definition of continuity.
Of course, we also build on the instrumental approach and on instrumentation as a sub-process of conceptualisation [START_REF] Rabardel | L'acquisition de la notion de convergence des suites dans l'enseignement supérieur[END_REF]. Students' cognitive construction of knowledge (specific schemes) arises during the complex process of instrumental genesis, in which they transform the artifact into an instrument that they integrate within their activities. [START_REF] Artigue | Learning mathematics in a cas environment: The genesis of a reflection about instrumentation and the dialectics between technical and conceptual work[END_REF] says that it is necessary to identify the new potentials offered by instrumented work, but she also highlights the importance of identifying the constraints induced by the instrument and the instrumental distance between instrumented activities and traditional activities (in the paper and pencil environment). Instrumentation theory also deals with the complexity of instrumental genesis.
We also refer to Duval's idea of visualisation as a contribution to the conceptualisation process (even if Duval and Vergnaud have not clearly discussed this point within their frames). The technological tool brings new dynamic representations, which are different from the static classical figures of the paper and pencil environment. These new representations enrich students' activities - mostly in terms of recognition - bringing specific visualisation processes. Duval argues that visualisation is linked to visual perception, and can be produced in any register of representation. He introduces two types of visualisation, namely the iconic and the non-iconic, saying that in mathematical activities visualisation does not work with iconic representations [START_REF] Duval | Representation, vision and visualisation: cognitive functions in mathematical thinking. Basic issues for learning[END_REF].
Finally, we draw on Vygotsky (1986), who stresses the importance of mediations within a student's zone of proximal development (ZPD) for learning (scientific concepts). Here, we also draw on the double approach of teaching practices as a part of French activity theory, coming from [START_REF] Robert | A cross-analysis of the mathematics teacher's activity. An example in a French 10th-grade class[END_REF]. The role of the teacher's mediations is specifically important in the conceptualisation process, especially because of the FUG aspects of the definition of continuity (as we have recalled above).
First of all, we refine the notion of mediation by adding a distinction between procedural and constructive mediations in the context of the dual regulation of activity. Procedural mediations are object-oriented (oriented towards the resolution of the tasks), while constructive mediations are more subject-oriented. We also distinguish individual mediations (to pairs of students) from collective mediations (to the whole class).
Secondly, we use the notion of proximities [START_REF] Bridoux | Les moments d'exposition des connaissances : analyses et exemples[END_REF], which are discourse elements that can foster students' understanding and then conceptualisation - according to their ZPD and their own activities in progress. In this sense, our approach is close to that of Bartolini [START_REF] Bartolini Bussi | Semiotic Mediation in the Mathematics Classroom: Artifacts and Signs after a Vygotskian Perspective[END_REF] with their Theory of Semiotic Mediations. However, we do not refer explicitly at this moment to this theory, which supposes a focus on signs and a more complex methodology than ours. According to us, the proximities characterize the attempts at alignment that the teacher operates between students' activities (what has been done in class) and the concept at stake. We therefore study the way the teacher organizes the movements between the general knowledge and its contextualised uses: we call ascending proximities those comments which make explicit the transition from a particular case to a general theorem/property; descending proximities go the other way round; horizontal proximities consist in repeating the same idea in another way or in illustrating it.
DESIGN OF THE TECHNOLOGICAL TOOL
The technological tool, called "TIC-Analyse", is designed to address most of the aspects highlighted above. First of all, it is designed to foster students' activities about continuity in the first two points of view identified by Bkouche: several functions are manipulated - continuous or not - and for each of them two windows are put in correspondence. In one window, the cinematic-dynamic point of view is highlighted (figure 1), whereas in the second window the approximation-static point of view is highlighted (figure 2). The correspondence between the two points of view is in coherence with Tall's idea of incorporating the formal definition into students' pre-existing concept images. It is also in coherence with the importance for students of dealing with several points of view for the conceptualisation of continuity (adaptations). Second, the functions at stake in the software are chosen from the categories of [START_REF] Tall | Concept image and concept definition in mathematics, with special reference to limits and continuity[END_REF]. For instance, we have chosen a continuous function which is defined by two different algebraic expressions, to avoid the algebraic concept image of continuity and to avoid the amalgam between continuity and differentiability. We also have two kinds of discontinuity, smooth and with an angle.
The emphasis is not only on algebraic representations of functions, in order to avoid algebraic conceptions of functions. Three registers of representation of functions (numerical, graphical and algebraic) are coordinated to promote students' activities involving conversions between registers (adaptations). The design of the software is coherent with the instrumental approach, mostly in the sense that the instrumental distance between the technological environment, the given tasks, and the traditional paper and pencil environment is reduced. However, the software produces new dynamic representations - a moving point on the curve associated with a numerical table of values within the dynamic window; two static intervals, one being included or not in the other, in the static window - giving rise to non-iconic visualisations which intervene in the conceptualisation process. The software promotes students' actions and activities on the given tasks: in the dynamic window, they are supposed to command the dynamic point on the given curve - corresponding to the given algebraic expression. They can observe the numerical values of the coordinates corresponding to several discrete positions of the point, and they must write a commentary in their own words about the continuity of the function at the given point (figures 1, 3). In the static window, they must fill the given array with values of α, the β being given by the software (figures 2, 4). Then they have to complete a commentary which begins differently according to the situation (continuity or not) and the α they have found (figures 4, 5).
As we have mentioned in our theoretical frame, students are not supposed to reach the formal definition by themselves through these tasks and activities. However, they are supposed to have developed enough knowledge in their ZPD so that the teacher can introduce the definition together with the sense and the FUG aspects of continuity.
STUDENTS' ACTIVITIES AND TEACHER'S PROXIMITIES
The students work in pairs on the tool. The session lasts one hour, and four secondary schools with four teachers are involved. Students have some concept images of continuity but nothing has been taught about the formal definition. The teacher is supposed to mediate students' activities on the given tasks. Students are not supposed to be in total autonomy during the session, according to our socio-constructivist approach. We have collected video screen shots, videos of the session (for each school) and recordings of students' exchanges in some pairs. Students' activities on each task are identified according to the task's complexity (mostly kinds of adaptations), their actions and interactions with computers and papers (written notes), the mediations they receive (procedural or constructive mediations, individual or collective, from the tool, the pair or the teacher) and the discourse elements seen as "potential" proximities proposed by the teacher.
It appears that the teacher mostly gives collective procedural mediations to introduce the given tasks, to ensure an average progression of the students and to take care of the instrumental process. Some individual mediations are only technical ones ("you can click on this button"). Some collective mediations are more constructive, such as "now, we are going to see a formal approach. We are going to see again the four activities (i.e. tasks) but with a new approach which we are going to call the formal approach...". The constructive mediations are not task-oriented but aim at helping students to organise their new knowledge, and they contribute to the intended conceptualisation according to our theoretical approach.
As examples of students' written notes (as traces of activities), we can draw on figures 3 and 4. A pair of students explains the dynamic non-continuity in their own words: "when x takes values more and more close to 2 then f(x) takes values close to -2,5 and -2. It depends whether it's lower or higher" (figure 3), which is in coherence with the primitive concept image of continuity. The same pair of students explains the non-continuity in relation to what they can observe on the screen: "there exists β positive, for all α positive - already proposed by the tool in case of non-continuity - such that f(i) not completely in j… f is not continuous". We can note that the students use "completely" to verbalise that the image interval f(i) is not entirely included in j. However, the inclusion of an interval in another one is not expected as formalised knowledge at this level of conceptualisation, so their commentary is acceptable. The students are expressing what they have experimented several times: for several values of β (β = 0,3 in figure 4), even with α very small (α = 0,01 in figure 4), the image of the interval ]2-α, 2+α[ is not included in ]-2,5-β, -2,5+β[. Concerning a case of continuity, the students are also able to write an acceptable commentary (figure 5): "for all β positive, there exists α positive - already proposed by the tool in case of continuity - such that f(i) is included in j."
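To fix ideas, the condition the students are verbalising in the static window can be written out as follows; this formalisation is ours (it keeps the α-β convention of the software, α on the x-side and β on the y-side) and is not taken from the students' sheets. Continuity of f at a point x0 corresponds to
\[
\forall \beta>0,\ \exists \alpha>0 \ \text{such that}\ f\bigl(\,]x_0-\alpha,\ x_0+\alpha[\,\bigr)\ \subset\ ]f(x_0)-\beta,\ f(x_0)+\beta[\,,
\]
and its negation, which the first commentary expresses, corresponds to
\[
\exists \beta>0 \ \text{such that}\ \forall \alpha>0,\ f\bigl(\,]x_0-\alpha,\ x_0+\alpha[\,\bigr)\ \not\subset\ ]f(x_0)-\beta,\ f(x_0)+\beta[\,.
\]
With the target interval centred at -2,5 and β = 0,3 as in figure 4, the values of f close to -2 always escape this interval, whatever the (small) value of α, which is exactly what the students have experimented.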
Students' activities on the given tasks are supposed to help the teacher to develop proximities with the formal definition. It is indeed observed that some students are able to interact spontaneously with the teacher when he wants to write the formal definition on the blackboard. This is interpreted as a sign that the teacher's discourse encounters these students' ZPD. The observed proximities then seem to be horizontal ones: the teacher reformulates the students' propositions several times, in a way which leads gradually to the expected formal definition, for instance "so, we are going to reformulate, for all β positive, there exists α positive, such that if x belongs to a neighbourhood of x0 … we can note it x0 - α, x0 + α…".
Of course, this is insufficient to ensure the proof and effectiveness of our experimentation. The conceptualisation of continuity is an ongoing, long process which is only initiated by our teaching process. However, we want to highlight here the important role of the teacher and, more generally, the importance of mediations in the conceptualisation process of such a complex concept. We have only presented the beginning of our experimentation. It is completed by new tasks on the tool, designed to return to similar activities and to continue the conceptualisation process.
Figure 1: two windows for a function, the dynamic point of view about continuity
Figure 2: two windows for a function, the static points of view about continuity
Figure 3: example of commentary given by a pair of students in the dynamic window
Figure 4: example of commentary given by a pair of students in the static window
Figure 5: example of commentary given by a pair of students in the static window
| 22,824 | ["20514"] | ["1057646", "143355"] |
01766869 | en | ["math"] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01766869/file/IL_Vandebrouck%20new.pdf | Fabrice Vandebrouck
ACTIVITY THEORY IN FRENCH DIDACTIC RESEARCH
Keywords: Mathematics, Tasks, Activity, Mediations, Technologies
The theoretical and methodological tools provided by the first generation of Activity Theory have been expanded in recent decades by the French community of cognitive ergonomists, followed by a sub-community of researchers working in didactics of mathematics. The main features are, first, the distinction between tasks and activity and, second, the dialectic between the subject of the activity and the situation within which this activity takes place. The core of the theory is the twofold regulatory loop, which reflects both the codetermination of the activity by the subject and by the situation, and the developmental dimension of the subject's activity. This individual and cognitive understanding of Activity Theory mixes aspects of Piaget's and Vygotsky's frameworks. It is first explored in this paper, associated with a methodology for analyzing students' mathematical activities. Then we present findings that help to understand the complexity of students' mathematical activities when working with technology.
Introduction
Activity Theory is a cross-disciplinary theory that has been adopted to study various human activities, including teaching and learning in ordinary classrooms, where individual and social levels are interlinked. These activities are seen as developmental processes mediated by various contextual elements - here we consider the teacher, the pair and the artefact (Vandebrouck et al., 2012: 13). Activity is always motivated by an object; a characteristic that distinguishes one activity from another. Transforming the object into an outcome is another key feature of activity. Subject and object form a dialectic unit: the subject transforms the object and at the same time is him/herself transformed. This framework can be adapted to describe the actions and interactions that emerge in the teaching/learning environment, and that relate to the subjects, the objects, the artefacts and the outcomes of the activity [START_REF] Wertsch | The concept of activity in soviet psychology: an introduction[END_REF].
Activity Theory was originally developed by, among others, [START_REF] Leontiev | Activity, consciousness and personality[END_REF]. A well-known extension is the systemic model proposed by Engeström et al. (1999), called the third generation of Activity Theory. It expresses the complex relationships between the elements that mediate activity in an activity system. In this paper, we take a more cognitive and individual perspective. This school of thought has been expanded over the course of the past four decades by French researchers working in the domain of occupational psychology and cognitive ergonomics, and has since been adapted to the didactics of mathematics. The focus is on the individual as a cognitive subject and an actor in the activity, rather than on the overall system - even if individual activity is seen as embedded in a collective system, and cannot be analysed outside the context in which it occurs.
An example of this adaptation is already well-established internationally. Specifically, it refers to the distinction between the artefact and the instrument, which is used to understand the complex integration of technologies into the classroom. The notion of instrumental genesis (or instrumental approach) was first introduced by [START_REF] Rabardel | Les hommes et les technologies, approche cognitive des instruments contemporains[END_REF] in the context of cognitive ergonomics, then extended to didactics of mathematics by [START_REF] Artigue | Learning mathematics in a cas environment: The genesis of a reflection about instrumentation and the dialectics between technical and conceptual work[END_REF] and it is concerned with the subject-artefact dialectic of turning an artefact into an instrument. In this paper, we draw upon and try to encompass this instrumental approach.
First, we describe how Activity Theory has been developed in the French context. These developments are both general and focused on students' mathematical activity. Next, we present a general methodology for analysing students' mathematical activity when working with technology. Then we develop an example of application, and we describe our findings. Finally, we present some conclusions.
Activity theory in the French context
The first notable feature of Activity Theory in the French context is the distinction between tasks and activity [START_REF] Rogalski | Theory of Activity and Developmental Frameworks for an Analysis of Teachers' Practices and Students' Learning[END_REF]. Activity relates to the subject, while tasks relate to the object. Activity refers to what the subject engages in to complete the task: external actions, but also inferences, hypotheses, thoughts and actions he/she decides to take or not. It also concerns elements that are specific to the subject, such as time management, workload, fatigue, stress, enjoyment, and interactions with others. As for the task - as described by [START_REF] Leontiev | Activity, consciousness and personality[END_REF] and extended in cognitive ergonomics - this refers to a goal to be attained under certain conditions [START_REF] Leplat | Regards sur l'activité en situation de travail[END_REF].
Activity Theory draws upon two key concepts: the subject and the situation. The subject refers to an individual person, who has intentions and competencies (potential resources and constraints). The situation provides the task and the context for the task. Together, the situation (notably task demands) and the subject codetermine activity. The dynamic of the activity produces feedback in the form of a twofold regulatory loop (Figure 1) that reflects the developmental dimension of Activity Theory [START_REF] Leplat | Regards sur l'activité en situation de travail[END_REF]. The concept of twofold regulation reflects the fact that the activity modifies both the situation and the subject. On the one hand (upper loop), the situation is modified, giving rise to new conditions for the activity (e.g. a new task). On the other hand (lower loop), the subject's own knowledge is modified (e.g. by the difference between expectations, acceptable outcomes and the results of actions).
More recently, the dialectic between the upper and lower regulatory loops (shown in Figure 1) has been expanded through a distinction between the productive and constructive dimensions of activity [START_REF] Pastré | Apprendre des situations[END_REF][START_REF] Samurcay | Modèles pour l'analyse de l'activité et des compétences: propositions[END_REF]. Productive activity is object-oriented (motivated by task completion), while constructive activity is subject-oriented (the subject aims to develop his or her knowledge). In teaching/learning situations, especially those that involve technologies, the constructive dimension of students' activity is key. The teacher wants the students to develop constructive activity. However, especially with computers, students are mostly engaged in producing results, and the motivation of their activity can be directed only towards the productive dimension. The effects of their activity on students' knowledge - as stipulated by the dual regulatory loop - are then mostly indirect, with few or no constructive aspects.
The last important point to note is that French Activity Theory mixes the Piagetian approach of epistemological genetics with Vygotsky's socio-historical framework to specify the developmental dimension of activity. As Jaworski (in Vandebrouck, 2013) writes, "the focus on the individual subject - as a person-subject rather than a didactic subject - is perhaps somewhat more surprising, especially since it leads the authors to consider a Piagetian approach of epistemological genetics alongside Vygotsky's sociohistorical framework". Rogalski (op. cit.) responds with "the Piagetian theory looks from the student's side at epistemological analyses of mathematical objects in play while the Vygotskian theory takes into account the didactic intervention of the teacher, mediating between knowledge and student in support of the students' activity".
The dual regulation of activity is consistent with the constructivist theories of Piaget and Vygotsky.
The first author [START_REF] Piaget | The equilibration of cognitive structures: The central problem of intellectual development[END_REF] provides tools to identify the links between activities and development, through epistemological analyses. [START_REF] Vergnaud | Cognitive and developmental psychology and research in mathematics education: some theoretical and methodological issues[END_REF][START_REF] Vergnaud | La théorie des champs conceptuels[END_REF] expands the Piagetian theoretical framework regarding conceptualisation and conceptual fields by highlighting classes of situations relative to a knowledge domain. We therefore define students' learning - and development - with reference to Vergnaud's conceptualisation.
On the other hand, Vygotsky (1986) stresses the importance of mediation within the student's zone of proximal development (ZPD) for learning (scientific concepts). Here, we refine the notion of mediation by adding a distinction between procedural and constructive mediations in the context of the dual regulation of activity. Procedural mediations are object-oriented (oriented towards the resolution of the task), while constructive mediations are more subject-oriented. This distinction can be seen as an extension of what Robert [START_REF] Robert | Why and how to understand what is at stake in a mathematics class?[END_REF] calls procedural and constructive teacher's aids. A more detailed exploration of the Piaget/Vygotsky complementarity can be found in [START_REF] Cole | Beyond the individual-social antinomy in discussions of Piaget and Vygotski[END_REF].
General methodology for analysing students' mathematical activities
Following Activity Theory, we postulate that students' learning depends directly on their activity, even though other elements can play a part - and even if activity is partially inaccessible to us and differs from one student to another. Students' activity is developed through the actions that are carried out to complete tasks. Through their actions, subjects aim to achieve goals, and their actions are driven by the motivation for the activity. Here, we draw upon the three levels originally introduced by [START_REF] Leontiev | Activity, consciousness and personality[END_REF]: activity associated with a motive; actions associated with goals; and operations associated with conditions. Activity takes place in a specific situation, such as the classroom, at home, or during a practical session. Actions, prompted by the precise tasks proposed, can be external (i.e. spoken, written or performed) or internal (e.g. hypotheses or decisions), and are partially converted into operations. As [START_REF] Galperine | Essai sur la formation par étapes des actions et des concepts[END_REF] and [START_REF] Wells | Reevaluating the IRF sequence: A proposal for the articulation of theories of activity and discourse for the analysis of teaching and learning in the classroom[END_REF] note, the three levels are relative and, for instance, operations can be considered as actions that have been routinized.
Here, we use the generic term mathematical activities (rather than activity) to refer to students' activity on a specific mathematical task in a given context. Mathematical activities refer to everything that surrounds actions and operations (including non-actions, for instance). They are a function of a number of factors (including task complexity, but extending to the characteristics of the context and all mediations that occur as tasks are performed) that contribute to regulation and to the intended development of mathematical knowledge.
Two methodological levels can be adopted from the dynamic of activity within the twofold regulatory loop. First, regulations can be considered at a local level, as short-term adjustments of activities to previous actions and as procedural learning (also called functional regulations, upper loop in Figure 1). Secondly, at a global level, regulations are mostly constructive ones (also called structural regulations) and correspond to the long-term development of the subject (linked to conceptualisation).
2.a The local level
At the local level, the analysis focuses on students' activities in the situation, in the form of tasks, their context, and their completion by students with or without direct help from the teacher. The initial step is an a priori analysis of the tasks given to students (by the teacher, the computer…), which is closely linked to the situational context (e.g. the students' academic level and age). We use [START_REF] Robert | Outil d'analyse des contenus mathématiques à enseigner au lycée et à l'université[END_REF] categorisation to characterise these tasks.
First, we identify the mathematical knowledge to be used for a given task: the representation(s) of a concept, theorem(s), definition(s), method(s), formula(s), types of proof, etc. The analysis aims to answer several crucial questions: does the mathematical knowledge to be used already exist for students, or is it new? Do students have to find the knowledge to be used by themselves? Does the task only require the direct application of this knowledge without any adjustment (technical task), or does it require adaptations and/or carrying out subtasks? A list of such adaptations can be found in Horoks and [START_REF] Robert | Tasks Designed to Highlight Task-Activity Relationships[END_REF]: mixing of knowledge, the use of intermediaries, changes of register [START_REF] Duval | Sémiosis et pensée humaine: registres sémiotiques et apprentissages intellectuels[END_REF], changes of mathematical domain or setting [START_REF] Douady | Jeux de cadre et dialectique outil-objet[END_REF], introduction of steps, choices, use of different points of view, etc. Tasks that require the adaptation of knowledge are referred to as complex tasks; they encourage conceptualisation, as students become able to access the relevant knowledge more readily and flexibly, depending however on the implementation in the classroom.
The a priori analysis of tasks leads us to describe what we have called the intended students' activities associated with the tasks. Here we draw upon [START_REF] Galperine | Essai sur la formation par étapes des actions et des concepts[END_REF] functions of operations, and adapt them to mathematical activities. Galperine distinguishes three functions: orientation, execution and control. Next, we use three "critical" mathematical activities that are characteristic of complex tasks [START_REF] Vandebrouck | Proximités en acte mises en jeu en classe par les enseignants du secondaire et ZPD des élèves : analyses de séances sur des tâches complexes[END_REF].
First, recognizing activities refer mainly to orientation and control. They occur when students have to recognize mathematical concepts as objects or tools that can be used to solve the tasks they are given.
Students may also be asked to recognize modalities of application or adaptation of these tools.
Second, organizing activities refer mainly to orientation: students have to identify the logical and temporal steps in their mathematical reasoning, together with any intermediaries.
Third, treatment activities refer to all of the mathematical activities associated with execution on mathematical objects. Students may be asked to draw a figure, compute, substitute, transform expressions (with or without giving the steps), change registers, change mathematical domains, etc.
Following Vygotsky, we supplement our local analysis of intended students' activities by developing ways to analyse classroom teaching (a posteriori), and to approach effective students' activities as functions of the different mediations that occur. For this, we use videos and observations in the classroom. We also record students' discussions, teacher's discourses and writings, and capture students' computer screens to identify observable activities. The data that is collected concerns how long students spend working on tasks, the format of their work (the whole class, in small groups, by pairs of students etc.), its nature (copying, reading, calculation, investigation, written or oral, graded or not, etc.) and all elements of the context that may modify intended activities. This highlights, at least partially, the autonomy given to students, the nature of mediations, and opportunities for students to show initiative, in relation to the adaptation and availability of knowledge.
Multiple aspects of mediations are analysed with respect to their assumed influence on student activities. Some relate to their format (interactions with students, between students, with teacher, with computers, etc.), while others concern the specific ways of taking into account the mathematical content (mathematical aids, assessment, reminders, explanations, corrections and evaluations, presentation of knowledge, direct mathematical content, etc.).
Two types of mediations have already been introduced, depending on whether they modify intended activities or whether they add to activities (effective, or at least observed). The first are object-oriented; here we use the term procedural mediations. These mediations modify intended activities and correspond to instructions given by the teacher, the screen or other students, directly or indirectly, before or during task completion. They are often seen in open-ended questions from the teacher such as 'What theorem can you use?'. They can also be given by the computer, through feedback which transforms the task to be performed, or through limitations of the provided tools which give students indirect indications about the way to achieve the task. These procedural mediations may lead to the subdivision of a complex task into subtasks. They usually change knowledge adaptations in complex tasks and simplify the intended activities in such a way that they become more like technical tasks (for instance, students having to apply a contextualised method).
The second type of mediations is more subject-oriented; here we use the term constructive mediations. They are designed to add something to the students' activities and to the knowledge that can emerge from these activities. They can take the form of a simple summary of what has been developed by students, an explanation of choices, a partial decontextualisation or generalisation, assessments and feedback, a discussion of results, etc. On some computers, the way a geometrical figure has been constructed by a student can be replayed to remind him/her of the order in which instructions were given, without the wrong attempts.
It should be noted here that our framework leads to the hypothesis that there is an internal transformation of the subject in the learning process: constructive mediations aim to contribute to this process. However, mediations can be constructive for some students and remain procedural for others. Conversely, some procedural mediations can become constructive for some students, for instance if they extract a generalisation from a local indication by themselves. Moreover, some constructive mediations - but also perhaps productive ones - can belong to some students' ZPD in the sense of Vygotsky, or they can remain outside the ZPD. When they belong to the ZPD, they can be identified to appreciate the explicit links between the expression of the general concepts to be learned and their precise applications in contextualised tasks, according to the necessary dynamic between them. Distinguishing between the kinds of mediations, and the way they belong or not to some students' ZPD, can be very difficult.
2.b The global level
The local level can be extended to a global level that takes into account the set of mathematical activities, the link with the intended conceptualisation (long-term constructive loops), and teaching practices in the long term. We link students' mathematical activities to the intended conceptualisation of the relevant mathematical notion, establishing a "relief map" of this notion. This relief map is developed from an epistemological and mathematical analysis of the notion, the study of the curricula, and didactical analyses (e.g. students' common difficulties). This global analysis focuses on the similarity between students' activities (intended, observed, effective) and the set of activities that characterise the intended conceptualisation of the relevant notion.
However, the didactical analysis of one teaching session is insufficient. It is necessary to take into account, on a day-to-day basis, all of the tasks students are asked to complete, and teachers' interventions. We use the term scenario to describe a sequence of lessons and exercises on a given topic. The global scenario could be understood as a long term "cognitive road" [START_REF] Robert | A cross-analysis of the mathematics teacher's activity. An example in a French 10th-grade class[END_REF].
Example of application: the 'shop sign' situation
To illustrate the utilisation of our Activity Theory, this section presents an example of a situation that aims to contribute to students' conceptualisations of the notion of function. Then we outline some limitations of the methodology at the global level.
The example relates in fact to the GeoGebra 'shop sign' family for learning functions. This family refers to mathematical situations that lie at the interface between two mathematical domains: geometry and functions.
There are many examples of shop sign situations, but they share the idea that a coloured area is the lit area of a shop sign [START_REF] Artigue | The challenge of developing a European course for supporting teachers' use ICT[END_REF] which depends on some moving variables in the shop sign. The task is set for grade 10 students (15 years old). One solution is to identify DE as an independent variable x. Then f(x), the sum of the two areas, is equal to x² (for the square) plus 4(4-x)/2 (for the triangle): equivalent to x²-2x + 8. In the French curriculum at grade 10, the derivative is not known and students must compute and understand the canonical form (x-1)² + 7 as a way to identify the minimum 7 for the distance DE=1 (which is the actual position on the figure).
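For the reader, the computation sketched above can be written out in full (with DE = x and 0 ≤ x ≤ 4); it only restates the derivation already given:
\[
f(x) \;=\; x^2 + \frac{4(4-x)}{2} \;=\; x^2 - 2x + 8 \;=\; (x-1)^2 + 7 \;\ge\; 7,
\]
with equality exactly when x = 1, so the minimal total area is 7, reached for DE = 1.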
Students work in pairs on computers. They have already worked with functions in the traditional pencil and paper context, and they have also used GeoGebra for geometrical tasks that do not refer to functions. In this new situation, GeoGebra helps them to begin the task by making conjectures about the minimum. Students can also trace the graph of the function, as shown in Figure 6. Then, in the algebraic register, they can find the canonical form of the function f(x) and the characteristics of the minimum.
We first identify the relief map on the notion of function and the intended conceptualisation. Then we give the a priori analysis of the task and the intended students' activities. We finish with the observation of two pairs of students, to identify observable and effective activities.
3.a The global level: relief map on the notion of function and intended conceptualisation
The function is a central concept in mathematics, linking it to other scientific fields and to real-life situations. It both formalises and unifies [START_REF] Robert | Why and how to understand what is at stake in a mathematics class?[END_REF] a diversity of objects and situations that students encounter in secondary school: proportionality, geometrical transformations, linear and polynomial growth, etc. A diversity of systems of representations (numerical, graphical, algebraic, formal, etc.) and a diversity of perspectives (pointwise, local and global) are frequently combined when working with functions [START_REF] Duval | Sémiosis et pensée humaine: registres sémiotiques et apprentissages intellectuels[END_REF][START_REF] Maschietto | Graphic Calculators and Micro Straightness: Analysis of a Didactic Engineering[END_REF][START_REF] Vandebrouck | Points de vue et domaines de travail en analyse[END_REF]. As summarised by [START_REF] Artigue | Mathematics thinking and learning at post-secondary level[END_REF], the processes of teaching and learning functions entail various intertwining difficulties that reinforce one another in complex ways.
Educational research [START_REF] Bergen | A theory of mathematical growth through embodiment, symbolism and proof[END_REF][START_REF] Gueudet | Investigating the secondary-tertiary transition[END_REF] (Hitt and Gonzalez-Martin, 2016) shows that an efficient conceptualisation of the notion requires a rich experience reflecting the diversity described above, and the diversity of settings in which functions are used [START_REF] Douady | Jeux de cadre et dialectique outil-objet[END_REF]. It also means that functions are available as tools for solving tasks, and can be flexibly linked with other concepts. There must be a progression from embodied conceptualisations (where functions are highly dependent on physical experience) to proceptual conceptualisations (where they are considered dialectically and work both as processes and objects), paving the way for more formal conceptualisations [START_REF] Tall | Thinking through three worlds of mathematics[END_REF][START_REF] Bergen | A theory of mathematical growth through embodiment, symbolism and proof[END_REF].
At grade 10, the intended conceptualisation can be characterized by a set of tasks in which functions are used as tools and objects. They can be combined and used to link different settings (including geometrical and functional), numerical, algebraic and graphical representations, and the dialectic between pointwise and global perspectives. The shop sign task is useful in this respect, as students have to engage in such mathematical activities. A priori, optimisation tasks in geometrical modelling help to build the intended functional experience, and link geometrical and functional settings.
Technology provides a new support for physical experience, as the modelling process provides new systems of representation and helps to identify the dynamic connections between them. It also offers a new way to approach and connect pointwise and global perspectives on functional objects, and supports the building of rich functional experiences. A famous contribution is that of [START_REF] Arzarello | Approaching functions through motion experiments[END_REF], who use sensors to introduce students to the functional domain. Their framework is already an activity theoretical framework, combined with more semiotic approaches, but it is not set in a context of dynamic geometry. Many studies exist about learning functions through dynamic geometry situations. For instance, [START_REF] Falcade | Approaching functions: Cabri tools as instruments of semiotic mediation[END_REF] study the potential of a didactical engineering with Cabri-Géomètre. The authors take a Vygotskian perspective on semiotic mediations which is more precise than our adaptation of Vygotsky inside Activity Theory, but which is also more restrictive in the sense that they do not consider deep connections between given tasks and mathematical activities. Moreover, it does not concern ordinary classrooms. More recently, [START_REF] Minh | Connected functional working spaces: a framework for the teaching and learning of functions at upper secondary level[END_REF] analyse students' activities on functions using Casyopée. This software is directly built for the learning of functions, and the authors adopt the model of Mathematical Working Spaces [START_REF] Kuzniak | Mathematical Working Spaces in Schooling[END_REF]. They build on three important challenges for students in the learning of functions: to consider functional dependencies, to understand the idea of independent variable, and to make sense of functional symbolism. The aims of the 'shop sign' family are consistent with such a progression, which is close to Tall's progression introduced above.
3.b The local level: a priori analysis of the task and intended students' activities
The task is to identify the position of E on [DC] such that the sum of the areas DFGE and AGB is minimal (Figure 2). It requires actual knowledge about geometrical figures and functions. However, it assumes that the notion of function is available, i.e. students have to identify the need for a function by themselves.
In a traditional pencil and paper environment, students first draw a generic figure. They can try to estimate - by geometrical measurements - some values of the areas for different positions of E. They can draw up a table of values, but this kind of procedure is usually not enough to obtain a good conjecture about the minimum value. Moreover, such a procedure can reinforce the pointwise perspective, because it does not bring out the continuous aspects of the function at stake. Usually, the teacher quickly asks students to produce algebraic expressions of the areas. Students try by themselves to introduce an algebraic variable (DE = x), or the teacher gives them procedural aids.
In the example given here, the teacher provided students with a sheet of paper showing a figure similar to the one given in Figure 2, and the instructions as summarized in Figure 3. Figure 3 shows that the overall task is divided into three subtasks. Organizing activities are directed by procedural mediations (functional regulation), which is a way to ensure that most students can engage in productive activity.
A priori analysis of the first subtask: the construction of the figure
In the geometrical subtask, students have to identify the fixed points (A, B, C, D), the free point (E) on [DC], and the dependent points (F and G). The order of construction is crucial to the robustness of the final figure, but is not important in the paper and pencil environment. Consequently, organizing activities - the order of instructions - are more important in the GeoGebra environment.
The subtask also requires students to make choices. It is possible to draw either G or F first, and the sequence of instructions is not the same. Moreover, there are other choices that have no equivalent in the paper and pencil environment: whether to define the polygons (the square and the triangle) with the polygon instruction or by the length of their sides; whether to use analytic coordinates of fixed points or a geometrical construction; whether to use a cursor to define E; etc. These choices refer not just to mathematical knowledge, but also to instrumental knowledge (following the instrumental genesis approach). This means that treatment activities include instrumental knowledge and are more complex than in the traditional environment. Once the construction is in place, students can verify its robustness - a treatment that is also specific to the dynamic environment.
A priori analysis of the second subtask: the conjecture
There is no task really equivalent to this subtask in the paper and pencil environment. This again leads to specific treatment activities. These are engaged with the feedback provided by the software, which displays the numerical values of the areas DFGE and AGB according to the position of E. However, students are required to redefine DFGE and AGB as polygons if they have not already used this instruction to complete subtask 1 (Figure 5). They also have to create, in the GeoGebra environment, a new numerical value that is the sum of the two areas in order to refine their conjecture. It is not clear to what extent these specific treatment activities refer to mathematical knowledge, and we will return to this point later.
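For illustration only, the numerical work this subtask asks for can be emulated outside GeoGebra by the short Python sketch below (ours; the names poly1, poly2 and poly3 simply mimic the GeoGebra values mentioned above, and the sampling step is an arbitrary choice). It tabulates the two areas for sampled positions of E and locates the position giving the smallest sum.

# Numerical emulation of the conjecture subtask: sample positions of E on [DC]
# (DE = x, 0 <= x <= 4), compute the two areas and their sum, and locate the minimum.

def poly1(x):
    """Area of the square DFGE, of side DE = x."""
    return x ** 2

def poly2(x):
    """Area of the triangle ABG: base 4, height 4 - x."""
    return 4 * (4 - x) / 2

def poly3(x):
    """Sum of the two areas, the quantity the students have to minimise."""
    return poly1(x) + poly2(x)

positions = [i / 10 for i in range(41)]   # DE = 0, 0.1, ..., 4 (arbitrary sampling step)
table = [(x, poly1(x), poly2(x), poly3(x)) for x in positions]
for x, a1, a2, total in table:
    print(f"DE = {x:4.2f}   square = {a1:5.2f}   triangle = {a2:5.2f}   sum = {total:5.2f}")
best = min(table, key=lambda row: row[3])
print(f"conjectured minimum: sum = {best[3]:.2f} at DE = {best[0]:.2f}")

Such a table only supports a conjecture - which is exactly the status given to it in the a priori analysis - and the algebraic subtask remains necessary to prove that the minimum is indeed reached.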
A priori analysis of the third subtask: the algebraic proof
This subtask appears similar to its equivalent in the paper and pencil environment. However, as students already know the value of the minimum, the motivation for the activity is different and only relates to the proof itself. The most important step is the introduction of x, as a way to pass from the geometrical setting to the functional setting. This step brings recognizing activities (students must recognize that the functional setting is needed), which are triggered by a procedural mediation (the instructions given on the sheet).
Students have to determine the algebraic expression of the function. Existing knowledge about the area of polygons must be available. They also have to recognize a second-order polynomial function, associated with specific treatments. Treatment activities remain to obtain the canonical form (as students have not been taught about derivatives, they must be helped in this by the teacher). Finally, the recognition of the canonical form as a way to obtain the minimum of the area, and the position of E which corresponds to this minimum, is correlated with the importance of the dialectic between pointwise and global perspectives on functions.
3.c A posteriori analysis: observable and effective activities
Students worked in pairs. The teacher only intervened at the beginning of the session (to ensure that all students were working), and at the end (to summarise the session). Students mostly worked autonomously, although the teacher helped individual pairs of students. The following observations are based on two pairs of students: Aurélien and Arnaud, and Lolita and Farah.
Analysis of the first pair of students' activities: Aurélien and Arnaud
This pair took a long time to construct their figure (more than 20 minutes). They began with A, B, C, D, in sequence, using coordinates, and then drew lines between pairs of points. This approach is closest to the paper and pencil situation, and while it is time-consuming it is not crucial for the global reasoning. They then introduced a cursor - a numerical variable j that took a value between 0 and 4 - in order to position E on [D, C]. However, the positioning of F (at (0, 3)) was achieved without the cursor, which led to a wrong square (Figure 4). G was drawn correctly. After they had completed their construction, they moved the cursor in order to verify that their figure was robust; an operation which revealed that it was not (Figure 4). This mediation from the screen is supposed to be a constructive mediation: it does not change the nature of the task and it is supposed to permit a constructive regulation of students' activities (lower loop in reference to Figure 1). However, the mediation does not encounter the students' ZPD and it is insufficient for them to regulate their activity on their own. The mediation in fact supposes new recognizing activities, specific to dynamic geometry on computers, that these students are not able to develop.
In this case, the teacher makes a procedural mediation and helps the students to rebuild their figure ("You use the polygon instruction to make DFGE […] then again to make the polygon ABG"). Once the two polygons have been correctly drawn, the values of their areas appear in the numerical window of GeoGebra (called poly1 and poly2, shown on the left-hand side of the screens presented in Figure 5). In the conjecture phase (second subtask, 8 minutes), the students made the conjecture that the sum is always 8 ("Look, it's always 8…"), by computing poly1+poly2 in their heads. The numerical window of GeoGebra now shows 18 different pieces of information, including the areas of DFGE (poly1) and ABG (poly2). Students must introduce another numerical variable (e.g. poly3) that is equal to the sum poly1+poly2. However, this requires new organizing activities that GeoGebra does not help with. In fact, there is already too much information in the numerical window. Here again, the teacher provides direct procedural assistance ("introduce poly3=poly1+poly2").
In the algebraic phase (third subtask, 20 minutes), the students are unable to express the areas DFGE and ABG as functions of x. The analyses reveal that new recognizing activities are again required to switch from the computer environment to the paper and pencil environment. These new recognizing activities are not obvious for the students. They suppose both mathematical knowledge and instrumental knowledge about the potentialities of the software, as well as the mathematical way of proving the existence and the value of the minimum. The students' attempt to implement DE=x in the input bar then leads to feedback from GeoGebra (in the form of a syntax error), which informs them that their procedure is wrong - but does not provide any guidance about what to do instead.
It is difficult to know whether to categorise this kind of mediation as procedural or constructive as it does not add any mathematical knowledge.
The teacher asks the students to try to find a solution with pencil and paper (procedural assistance). However, the introduction of x, which is linked to the change of mathematical setting (adaptation of knowledge), seems very artificial. The students start working on their algebraic formula by looking at their static figure, with E positioned at (1, 4). The base of the triangle measures 4 and its height is 3. One of the pair suggests that "it depends on x" means that each algebraic expression ends in x, as the following exchange between the two students shows: "This is 4x…"
At this point, the teacher provides another direct procedural assistance. This once again shows that although the mediation of GeoGebra helps students to discuss and progress, it is insufficient for them to correctly regulate their activity. Without procedural assistance from the teacher, they are unable to find the formula for the area of the triangle. In the end, the students do not have enough time to finish the task by themselves.
At the end of the session, the teacher gives a procedural explanation to the whole class of how to find the canonical form (as "x²-2x+8 = (x-_)²+_"). Although Aurélien and Arnaud write it down, they do not make the link between it and their classroom work. Consequently, they do not understand the motivation for the activity and cannot make sense of the explanation of the canonical transformation given by the teacher.
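Filling the blanks of the teacher's template amounts to the following step, written out here for the reader:
\[
x^2 - 2x + 8 \;=\; (x^2 - 2x + 1) + 7 \;=\; (x-1)^2 + 7 .
\]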
Then the teacher gives a constructive explanation about the meaning of the coefficients in the canonical form and the way they give the minimum and the corresponding value of x. But in view of Aurélien and Arnaud's activities, it comes too early, and they do not make the link with their numerical conjecture. In other words, the collective mediation of the teacher seems too far from these students' ZPD, and it is not at all constructive for this pair of students.
Analysis of the second pair of students' activities: Lolita and Farah
Lolita and Farah are better students and quickly draw a robust figure. Their numerical conjecture is correct, and the teacher gives them another subtask: to find a graphical confirmation of their conjecture. The procedural instruction is to create a new point, M, whose abscissa is the same as that of E and whose ordinate is the value poly1+poly2. However, Lolita and Farah do not recognize the trace of M as the graph of a function. One says "this is not a curve" and then "the minima, we have seen this for functions but here…".
They only recognize the trace as a part of a parabola (geometrical setting) and associate its lowest point with the value of the minimum area.
The graphical observation confirms to Lolita and Farah that their numerical conjecture was correct. However, this is a proof for them and they do not understand the motivation of the third subtask which does not make sense to them. Although they succeed in defining the algebraic expression of the function and they find the canonical expression, they do not make the link with their graphical observation.
Here again, the teacher's summary of how to obtain the canonical form of the function, the value of the minimum and the corresponding value of x is not useful for this pair as it is not the problem they encountered.
A constructive intervention about the motivation for the third subtask and how the canonical form was linked to the conjecture would have been a mediation closer to their ZPD.
What does this tell us about students' mathematical activities?
The main result concerns complex activity involving technology: here the complexity is introduced by mathematical activities that require either mathematical or instrumental knowledge, particularly knowledge about the real potentialities of the technology in contrast with what is supposed to be solved within the paper and pencil environment. This also leads to new treatment activities (e.g. in the construction and conjecture subtasks) and new recognizing activities. New onscreen representations appear, typically dynamic ones, and students must recognize them as mathematical objects (or not). The example of Aurélien and Arnaud shows how difficult it was for them to recognize a robust figure, and dynamic and numerical representations of variations in areas. Similarly, it was difficult for Lolita and Farah to recognize the trace of M as a special part of the graph of a function.
The second main result concerns the increase in recognizing activities and the new balance between the three types of critical activities. While in a traditional session the teacher can point out the mathematical objects to use, the screen presents far more information to students, meaning that they have to recognize what is most important in their treatment activities. Organizing activities also increase, both before treatment activities related to construction, and during the conjecture. For instance, Aurélien and Arnaud failed in the conjecture task because they were not able to introduce a third numerical variable by themselves. Classroom observation [START_REF] Vandebrouck | Proximités en acte mises en jeu en classe par les enseignants du secondaire et ZPD des élèves : analyses de séances sur des tâches complexes[END_REF] has led to the idea that most of students' effective activities are treatment activities, as the teacher must make productive interventions before most students can begin the task.
Recognizing and organizing activities are mostly activities for the best students. These students often have an idea of how to begin the resolution of the task, they are able to adapt their knowledge quickly, and they develop all three types of critical mathematical activities, whereas weaker students find it difficult to engage in the task, waiting for some procedural assistance from the teacher. In classroom sessions that use technology, students are confronted alone with all of these critical activities, which may help to explain the difficulty of weaker students.
A further finding concerns mediations. In such sessions, the teacher's mediations are mostly procedural and clearly aim to foster productive activity. Onscreen mediation leads to specific, new recognizing activities (dynamism) but is insufficient for students (not only the weaker ones) to regulate their own activity. It appears that most of the time this mediation is not procedural or constructive enough, leading to more teacher intervention. Moreover, it seems that onscreen mediation is always associated with treatment activities and does not help students in their recognizing or organizing activities.
The last point concerns constructive mediation and the heterogeneity of the students' knowledge (and ZPD).
Student activities in classroom sessions that use technology are difficult for the teacher to evaluate. Even if he/she tries to manage the best "average" constructive mediations for all students, our examples show that this is very challenging. This raises the question of the real impact of such sessions with respect to the intended conceptualisation. The availability and recognition of functions as tools to complete such tasks were not really investigated, in the sense that the independent variable x was given to students (on paper), and none of them returned to the geometrical setting as in the traditional modelling cycle - in reference to Kaiser and Blum [START_REF] Maas | What are modelling competencies?[END_REF]. Moreover, Aurélien and Arnaud did not explore the dynamic numerical-graphical-algebraic flexibility, which was one of the aims of the session; on the other hand Lolita and Farah did, but lacked the constructive mediations needed to complete the cycle.
Conclusion
We have presented Activity Theory in the context of French didactics, notably the dual regulation found in the activity model, which was first developed in ergonomic psychology and then adapted to didactics of mathematics, for studying students' activities. Other works, which we have not discussed here, have looked at teachers' practices in some different ways [START_REF] Robert | A didactical framework for studying students' and teachers' activities when learning and teaching mathematics[END_REF][START_REF] Robert | A cross-analysis of the mathematics teacher's activity. An example in a French 10th-grade class[END_REF]). An important component of this model is the impact of activity on subjects, which represents the developmental dimension of students'activity. This focus highlights the commonalities and complementarities of the constructivist theories of Piaget (extended to Vergnaud's conceptual fields) and Vygotsky. The connection between Activity Theory, the work of Piaget and Vygotsky, and didactics of mathematics, provides a theoretical foundation for a dual approach to students' activity from the viewpoint of mathematics (the didactical approach) and subjects (the cognitive approach).
Our analysis does not provide a model of students' activity (or of teachers' practices). However, it leads to the identification of similarities and differences in terms of the relations between subtasks, students' ways of working, mediations and mathematical activities, and compares this complex task with the traditional paper-and-pencil environment. One of the specificities of our approach is the deep connection between the analysis of students' activities and the a priori analysis of tasks, including the mathematical content. But we do not look for the teacher's own intention, unlike what is done in some English research (for instance, [START_REF] Jaworski | Bridging the macro-and micro-divide: using an activity theory model to capture sociocultural complexity in mathematics teaching and its development[END_REF]). Moreover, we do not attempt to address the global dynamic between individual and collective interactions and learning. We should now take a threefold approach to the investigation of students' practices: didactical, cognitive and socio-cultural. As [START_REF] Radford | The epistemic, the cognitive, the human: a commentary on the mathematical working space approach[END_REF] argues, with respect to Mathematical Working Space [START_REF] Kuzniak | Mathematical Working Spaces in Schooling[END_REF], the individual-collective dynamic remains difficult to understand in both our Activity Theory and MWS when they are discussed together. This represents a new opportunity to better investigate the socio-cultural dimension of Activity Theory, especially the one developed by Engeström, and to integrate it into our didactical and cognitive approach.
Figure 1: Codetermination of activity and twofold regulatory loop
Figure 2: Shop sign
Figure 3: Main instructions given to students
Figure 4: Exploring the robustness of the shop sign
Figure 5: Exploration of varying areas by moving the point E on [DC]
Figure 6: The shop sign task showing part of the graph of the function
01682735 | en | [ "chim" ] | 2024/03/05 22:32:15 | 2017 | https://hal.science/hal-01682735/file/2017-buffeteau-et-al.pdf | Thierry Buffeteau
Delphine Pitrat
Nicolas Daugey
Nathalie Calin
Marion Jean
Nicolas Vanthuyne
Laurent Ducasse
Frank Wien
Thierry Brotin
Chiroptical properties of cryptophane-111 †
The two enantiomers of cryptophane-111 (1), which possesses the most simplified chemical structure of cryptophane derivatives and exhibits the highest binding constant for xenon encapsulation in organic solution, were separated by HPLC using chiral stationary phases. The chiroptical properties of [CD(+)254]-1 and [CD(−)254]-1 were determined in CH2Cl2 and CHCl3 solutions by polarimetry, electronic circular dichroism (ECD), vibrational circular dichroism (VCD), and Raman optical activity (ROA) experiments and were compared to those of the cryptophane-222 derivative (2). Synchrotron Radiation Circular Dichroism (SRCD) spectra were also recorded for the two enantiomers of 1 to investigate low-lying excited states in the 1Bb region. Time-dependent density functional theory (TDDFT) calculations of the ECD and SRCD as well as DFT calculations of the VCD and ROA allowed the [CD(−)254]-PP-1 and [CD(+)254]-MM-1 absolute configurations to be assigned for 1 in CH2Cl2 and CHCl3 solutions. Similar configurations
were found in the solid state from X-ray crystals of the two enantiomers but the chemical structures are significantly different from the one calculated in solution. In addition, the chiroptical properties of the two enantiomers of 1 were independent of the nature of the solvent, which is significantly different to that observed for cryptophane-222 compound. The lack of solvent molecule (CH 2 Cl 2 or CHCl 3 ) within the cavity of 1 can explain this different behaviour between 1 and 2. Finally, we show in this article that the encapsulation of xenon by 1 can be evidenced by ROA following the symmetric breathing mode of the cryptophane-111 skeleton at 150 cm À1 . † Electronic supplementary information (ESI) available: Synthesis of (rac)-1.
Separation of the two enantiomers of 1 by HPLC using chiral stationary phase. 1 H and 13 C NMR spectra of the two enantiomers of 1 in CD 2 Cl 2 solution. Crystallographic data and pictures of the X-ray crystals of the two enantiomers of 1. UV-vis, ECD, SRCD, IR, VCD, Raman and ROA spectra of the two enantiomers of 1 in various solvents. ROA spectrum calculated at the B3PW91/6-31G** level (IEFPCM = CHCl 3 ) for conformer A of MM-1. Experimental SOR values measured at several wavelengths in various solvents and SOR values calculated at the B3PW91/6-31G** level (IEFPCM = CHCl 3 and CH 2 Cl 2 ) for conformer A of MM-1. CCDC 1537585 and 1537591.
Introduction
Scheme 1 Chemical structures of PP-1 and PP-2. [START_REF]IUPAC Tentative Rules for the Nomenclature of Organic Chemistry. Section E. Fundamental Stereochemistry[END_REF],19
a Bordeaux University, Institut des Sciences Moléculaires, CNRS UMR 5255, 33405 Talence, France. E-mail: [email protected]
b Lyon 1 University, Ecole Normale Supérieure de Lyon, CNRS UMR 5182, Laboratoire de Chimie, 69364 Lyon, France. E-mail: [email protected]
c Aix-Marseille University, CNRS, Centrale Marseille, iSm2, Marseille, France
d Synchrotron SOLEIL, L'Orme des Merisiers, 91192 Gif sur Yvette, France
The cryptophane backbone displays a very simple and easily recognizable chemical structure, which is composed of six aromatic rings positioned into a rigid molecular frame. 1,2 The six aromatic rings are assembled into two independent cyclotribenzylene (CTB) sub-units connected together by three linkers, whose length and nature can be varied. This structure generates a lipophilic cavity that can accommodate a large variety of guest molecules, such as halogenomethanes and ammonium salts, or atoms in organic or aqueous solutions. 2 The cryptophane-111 skeleton (compound 1 in Scheme 1) appears as the most simplified structure of cryptophane derivatives and its synthesis was reported for the first time in 2007. 3 This compound exhibits the highest binding constant (10^4 M−1 at 293 K) for xenon encapsulation in organic solvent but it does not bind halogenomethanes due to its small internal cavity. 3,4 In 2010, Rousseau and co-workers published a high-yielding scalable synthesis of this derivative by optimizing the cyclotriphenolene unit dimerization, 5 whereas Holman and co-workers reported the X-ray structure of the racemate of 1 and the first water-soluble cryptophane-111 with Ru complexes. 6 Later, in 2011, Rousseau and co-workers published the synthesis of a metal-free water-soluble cryptophane-111. 7 Finally, Holman and co-workers reported the first rim-functionalized derivatives of cryptophane-111 ((MeO)3-111 and Br3-111), which limit the range of achievable conformations of the cryptophane-111 skeleton, 8 and they also showed the very high thermal stability (up to about 300 °C) of the Xe@1 complex in the solid state. 9 Besides their interesting binding properties, most of the cryptophane derivatives exhibit an inherently chiral structure due to the anti arrangement of the linkers or to the presence of two different CTB caps. Thus, the anti arrangement of the methylenedioxy linkers makes 1 a chiral molecule. During the past decade, we have thoroughly investigated enantiopure cryptophanes using several techniques such as polarimetry, electronic circular dichroism (ECD), vibrational circular dichroism (VCD), and Raman optical activity (ROA) because the chiroptical properties of these derivatives are extremely sensitive to the encapsulation of guest molecules. [10][11][12][13][14][15][16][17] For instance, water-soluble cryptophanes display unique chiroptical properties depending on the nature of the guest (neutral or charged species) present within the cavity of the host. [10][11][12][13][14] In addition, cryptophane-222 (compound 2 in Scheme 1) possesses unusual chiroptical properties in organic solvents never observed before with cryptophane derivatives. 17 Indeed, a very different behaviour of the specific optical rotation (SOR) values was observed in the nonresonance region above 365 nm in CHCl3 and CH2Cl2 solutions.
This feature was related to conformational changes of the three ethylenedioxy linkers upon encapsulation of the two solvent molecules by 2. This explanation could be confirmed by investigating the chiroptical properties of the new derivative 1. Indeed, 1 differs from 2 only by the length of the three linkers connecting the two CTB units, leading to a smaller size of the cavity. Moreover, the three portals of 1 are too small to allow any solvent molecules to enter the cavity of the host. Even CH 2 Cl 2 (V vdW = 52 Å 3 ) is too large to cross the portals of 1, leaving the cavity only accessible for smaller guests such as methane or xenon. 4 In addition, the replacement of ethylenedioxy by methylenedioxy linkers presents the advantage to decrease the number of conformations for the three bridges. Thus, we believe that the two enantiomers of 1 are important compounds for understanding the role of the solvent on the chiroptical properties of cryptophane derivatives in general. A change of the chiroptical properties of 1 regardless of the nature of the solvent will tend to demonstrate that the bulk solvent plays an important role in the chiroptical properties of cryptophane. In contrast, a lack of modification on the chiroptical properties of 1 will show that only the solvent molecule present within the cavity of the cryptophanes (that is the case for 2) has an effect on their chiroptical properties.
In this article we focus our attention on the chiroptical properties of 1 since they have never been reported in the literature, probably due to the difficulties encountered for the optical resolution of 1 into its two enantiomers (+)-1 and (À)-1.
In addition, the simplified chemical structure of 1 (87 atoms) allows more sophisticated theoretical calculations (better basis set) for the prediction of the VCD, ROA and ECD spectra by using density functional theory (DFT and time-dependent DFT) methods.
We report in this article the separation of the two enantiomers of 1 by high-performance liquid chromatography (HPLC) using chiral stationary phases and the detailed study of their chiroptical properties in CHCl 3 and CH 2 Cl 2 solutions by polarimetry, ECD, VCD, and ROA spectroscopy. Synchrotron Radiation Circular Dichroism (SRCD) spectra of the two enantiomers of 1 were also recorded in the two solvents to investigate the low-lying excited states in the 1 B b region (190-220 nm). The chiroptical properties of 1 were compared to those recently published for 2. 17 DFT and TD-DFT calculations were performed to predict SOR values as well as the ECD, VCD, and ROA spectra for several geometries of 1. The X-ray structures of these two enantiomers were also reported and compared to the optimized geometries of 1 calculated by DFT. Finally, the xenon encapsulation by 1 was followed by VCD and ROA spectroscopy.
Experimental
X-ray crystallography X-ray structures of the two enantiomers of 1 were obtained from crystals mounted on a Kappa geometry diffractometer (Cu radiation) and using the experimental procedure previously published. 16 CCDC 1537591 and 1537585 contain the crystallographic data of [CD(+) 254 ]-1 and [CD(À) 254 ]-1, respectively. †
Polarimetric, UV-vis and ECD measurements
Optical rotations of the two enantiomers of 1 were measured in two solvents (CHCl 3 , CH 2 Cl 2 ) at several wavelengths (589, 577, 546, 436, and 365 nm) using a polarimeter with a 10 cm cell thermostated at 25 1C. Concentrations used for the polarimetric measurements were typically in the range 0.22-0.27 g/ 100 mL. ECD spectra of the two enantiomers of 1 were recorded in four solvents (CHCl 3 , CH 2 Cl 2 , tetrahydrofuran (THF) and CH 3 CN) at 20 1C with a 0.2 cm path length quartz cell (concentrations were in the range 5 Â 10 À5 -1 Â 10 À4 M). Spectra were recorded in the wavelength ranges of 210-400 nm (THF and CH 3 CN) or 230-400 nm (CH 2 Cl 2 and CHCl 3 ) with a 0.5 nm increment and a 1 s integration time. Spectra were processed with standard spectrometer software, baseline corrected and slightly smoothed by using a third order least square polynomial fit. UV-vis spectra of the two enantiomers of 1 were recorded in CH 2 Cl 2 (230-400 nm) and THF (210-400 nm) at 20 1C with a 0.5 and 0.2 cm path lengths quartz cell, respectively.
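For completeness, the SOR values quoted in this work follow the usual polarimetric convention (a standard definition, not specific to this study):

[α]_λ^T = 100 α / (l · c)

where α is the measured rotation in degrees, l the optical path length in dm (here 1 dm) and c the concentration in g per 100 mL, which gives the 10−1 deg cm2 g−1 units used for the SOR values below.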
SRCD measurements
Synchrotron Radiation Circular Dichroism (SRCD) measurements were carried out at the DISCO beam-line, SOLEIL synchrotron. 20,21 Samples of the two enantiomers of 1 were dissolved in CH 2 Cl 2 and CHCl 3 . Serial dilutions of the concentrations in view of data collection in three spectral regions were chosen between 100 g L À1 , 10 g L À1 to 2.5 g L À1 . Accurate concentrations were reassessed by absorption measurements allowing the scaling of spectral regions to each other. Samples were loaded in circular demountable CaF 2 cells of 3.5 mm path lengths, using 2-4 mL. 22 Two consecutive scans for each spectral region of the corresponding dilution, were carried out for consistency and repeatability. CD-spectral acquisitions of 1 nm steps and 1 nm bandwith, between 320-255 nm, 260-216 nm and 232-170 nm were performed at 1.2 s integration time per step for the samples. Averaged spectra were then subtracted from corresponding averaged baselines collected three times. The temperature was set to 20 1C with a Peltier controlled sample holder. Prior, (+)-camphor-10-sulfonic acid was used to calibrate amplitudes and wavelength positions of the SRCD experiment. Data-treatment including averaging, baseline subtraction, smoothing, scaling and standardisation were carried out with CDtool. 23
IR and VCD measurements
The IR and VCD spectra were recorded on an FTIR spectrometer equipped with a VCD optical bench, 24 following the experimental procedure previously published. 16 Samples were held in a 250 mm path length cell with BaF 2 windows. IR and VCD spectra of the two enantiomers of 1 were measured in CDCl 3 and CD 2 Cl 2 solvents at a concentration of 0.015 M. Additional spectra were measured in CDCl 3 in presence of xenon.
ROA measurements
Raman and ROA spectra were recorded on a ChiralRAMAN spectrometer, following the experimental procedure previously published. 15 The two enantiomers of 1 were dissolved in CDCl3 and CD2Cl2 solvents at a concentration of 0.1 M and filled into a fused silica microcell (4 × 3 × 10 mm). The laser power was 200 mW (~80 mW at the sample). The presented spectra in CDCl3 (CD2Cl2) are an average over about 32 (52) h. Additional experiments were performed in the two solvents in presence of xenon.
Theoretical calculations
All DFT and TDDFT calculations were carried out with Gaussian 09. [START_REF] Frisch | Gaussian 09[END_REF] Preliminary conformer distribution search of 1 was performed at the molecular mechanics level of theory, employing MMFF94 force fields incorporated in ComputeVOA software package. Twenty one conformers were found within roughly 8 kcal mol À1 of the lowest energy conformer. Their geometries were optimized at the DFT level using B3PW91 functional [START_REF] Perdew | [END_REF] and 6-31G** basis set, 27 leading to ten different conformers within a energetic window of 7.5 kcal mol À1 . Finally, only the three lowest energetic geometries were kept, and reoptimized with the use of IEFPCM model of solvent (CH 2 Cl 2 and CHCl 3 ). 28,29 Vibrational frequencies, IR and VCD intensities, and ROA intensity tensors (excitation at 532 nm) were calculated at the same level of theory. For comparison to experiment, the calculated frequencies were scaled by 0.968 and the calculated intensities were converted to Lorentzian bands with a full-width at half-maximum (FWHM) of 9 cm À1 . Optical rotation calculations have been carried out at several standard wavelengths (365, 436, 532 and 589 nm) by means of DFT methods (B3PW91/6-31G**) for the three conformers reoptimized with the use of PCM solvent model.
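To illustrate the band-shape convention used for comparison with experiment, a minimal sketch of the stick-to-spectrum conversion is given below (the function name, wavenumber grid and variable names are ours, chosen for illustration; the authors' own scripts are not reproduced here):

import numpy as np

def lorentzian_broadening(freqs, intensities, scale=0.968, fwhm=9.0):
    """Scale harmonic frequencies and sum area-normalised Lorentzian bands."""
    grid = np.arange(900.0, 1800.0, 0.5)      # wavenumber grid in cm-1 (illustrative range)
    gamma = fwhm / 2.0                        # half-width at half-maximum
    spectrum = np.zeros_like(grid)
    for f, a in zip(scale * np.asarray(freqs, float), intensities):
        spectrum += a * (gamma / np.pi) / ((grid - f) ** 2 + gamma ** 2)
    return grid, spectrum

The same construction applies to the computed ECD rotational strengths, with Gaussian bands of 0.1 eV FWHM on an energy axis instead of Lorentzians on a wavenumber axis.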
ECD spectra were calculated at the time-dependent density functional theory (TDDFT) level using the MPW1K functional 30 and the 6-31+G* basis set. Calculations were performed for the three conformers reoptimized with the use of PCM solvent model (IEFPCM = CH 2 Cl 2 ), considering 120 excited states. For comparison to experiment, the rotational strengths were converted to Gaussian bands with a FWHM of 0.1 eV.
Results
Synthesis and HPLC separation of the two enantiomers of 1
The racemic mixture of 1, (rac)-1, was prepared according to a known procedure (Fig. S1 in the ESI†). 3 Compound 1 does not possess any substituent that could be exploited for separating the two enantiomers of 1 by the formation of diastereomeric derivatives. Consequently, the two enantiomers of 1 were separated using a chiral HPLC column (Chiralpak ID, eluent: heptane/EtOH/CHCl3 50/30/20, 1 mL min−1), which allowed the efficient separation of the two enantiomers of 1 with an excellent resolution factor (Rs = 3.24), as shown in Fig. 1. A circular dichroism detector provided the sign of each enantiomer at 254 nm. It was observed that enantiomer [CD(−)254]-1 was first eluted at t = 6 min (see the ESI†). Thus, from 350 mg of racemic material, 160 mg of each enantiomer were obtained with an excellent enantiomeric excess (ee > 99% for [CD(−)254]-1 and ee > 99.5% for [CD(+)254]-1). In order to improve the chemical purity of the two compounds, an additional purification step was conducted on both enantiomers. Thus, compounds [CD(+)254]-1 and [CD(−)254]-1 were purified on silica gel (eluent: CH2Cl2/acetone 90/10) and then recrystallized in a mixture of CH2Cl2/EtOH. These additional purification steps provide both enantiomers with high chemical purity. The 1H NMR and 13C NMR spectra (Fig. S3-S6 in the ESI†) are identical to those reported for (rac)-1.
X-ray crystallographic structures of the two enantiomers of 1
X-ray crystals of [CD(+)254]-1 and [CD(−)254]-1 were obtained in a CH2Cl2/EtOH mixture and in pyridine, respectively (Fig. S7a and b in the ESI†). The crystallographic data of the two X-ray crystal structures are reported in the ESI† (Table S1).
Compounds [CD(−)254]-1 and [CD(+)254]-1 crystallize in the P212121 and P21 space groups, respectively. No disorder was observed in the two X-ray structures and the cavities do not contain any substrate (solvent or gas molecules). The two enantiomers adopt a contracted conformation of the bridges that minimizes the internal cavity volume. Using a probe radius of 1.4 Å, the estimated cavity volumes of [CD(+)254]-1 and [CD(−)254]-1 were 30 and 32 Å3, respectively. It is noteworthy that these two X-ray structures are significantly different from the one reported for the racemate. 6 Indeed, the X-ray structures of the two enantiopure derivatives adopt a more flattened shape with respect to the X-ray structure of the racemate, characterized by a large twist angle of 55.3° between the two CTB caps. [START_REF]average dihedral angles between the arene ring centroids of OCH 2 O-connected arenes with respect to the C 3 axis of the host[END_REF] For the racemate, a twist angle of 18.1° was found between the two CTB caps, associated with a cavity volume of 72 Å3. 6 Interestingly, these structures are also less symmetrical than the one observed for the racemate, and a top view of these two structures reveals that the six benzene rings are totally eclipsed (Fig. S8 in the ESI†). In contrast, the X-ray structure of the racemic derivative shows a strong overlapping of the phenyl rings.
Polarimetry and electronic circular dichroism
The two enantiomers of 1 are well soluble in CH 2 Cl 2 and CHCl 3 but unfortunately they show poor solubility in other organic solvents. Thus, polarimetric experiments were performed only in CH 2 Cl 2 and CHCl 3 solutions. The specific optical rotation (SOR) values of the two enantiomers of 1 are reported in the ESI, † (Table S2) and the wavelength dependence of [CD(+) 254 ]-1 is shown in Fig. 2. In CH 2 Cl 2 , the SOR values of [CD(+) 254 ]-1 are slightly positive at 589 and 577 nm, close to zero at 546 nm and significantly negative at 436 and 365 nm. Nevertheless, despite the low values measured for this compound at 589, 577 and 546 nm, SOR values with opposite sign are obtained for the two enantiomers of 1 (Table S2, ESI †). In CHCl 3 , the wavelength dependence of the SOR values evolves similarly with values slightly higher. This result contrasts with the measurements performed with compound 2 that exhibited significant differences in CH 2 Cl 2 and CHCl 3 solutions. Finally, as previously observed with compound 2, 17 a change of the SOR sign is observed in the nonresonant region (around 546 nm in CH 2 Cl 2 and 475 nm in CHCl 3 ).
UV-Vis and ECD experiments require lower concentration of solute and consequently this allows us to extend the range of solvents. Thus, UV-Vis and ECD spectra of [CD(+) 254 ]-1 and [CD(À) 254 ]-1 were successfully recorded in CH 2 Cl 2 , CHCl 3 , THF, and CH 3 CN solvents. The UV-Vis spectra measured in THF and CH 2 Cl 2 solvents are reported in Fig. S9 in the ESI. † These spectra are very similar to those published for compound 2. 17 The ECD spectra of the two enantiomers are reported in Fig. S10 in the ESI, † for the four solvents. A perfect mirror image is observed in all solvents for the two enantiomers as expected for enantiomers with high enantiomeric excess. For CH 2 Cl 2 and CHCl 3 solutions, the ECD spectra give only access to the bands corresponding to the two forbidden 1 L a and 1 L b transitions in the UV-visible region (230-300 nm). For THF and CH 3 CN solutions, the spectral range can be extended up to 210 nm. This allows us to have access to another spectral region corresponding to the allowed 1 B b transition. This spectral region usually gives rise to intense ECD signals in organic solution. It was observed that the ECD spectra of [CD(+) 254 ]-1 and [CD(À) 254 ]-1 are very similar in shape and intensity, regardless of the nature of the solvent used in these experiments. For instance, in CH 2 Cl 2 the ECD spectrum of the [CD(+) 254 ]-1 enantiomer shows four bands, as shown in Fig. 3. Three ECD bands (two negative and one slightly positive from high to low wavelengths) are observed in the spectral region related to the 1 L b transition (260-300 nm). At shorter wavelengths, only a single positive ECD band was observed between 230 and 255 nm ( 1 L a transition). Interestingly, it can be noticed that the ECD spectra of [CD(+) 254 ]-1 and [CD(À) 254 ]-2 show a lot of similarities even though some significant spectral differences are observed especially in the 1 L a region. Indeed, the bisignate ECD signal usually observed in the 1 L a region for cryptophane derivatives and observed for [CD(À) 254 ]-2 is no longer present in the ECD spectrum of [CD(+) 254 ]-1. In the past, the sign of this bisignate ECD signal was exploited to determine the absolute configuration (AC) of cryptophane-A molecule. [START_REF] Canceill | [END_REF] Then, we have confirmed that this approach could be used to assign the AC of other cryptophane derivatives in organic solution. This study shows that the approach can not be used for cryptophane-111.
Synchrotron radiation circular dichroism experiments were also performed to obtain additional information at lower wavelengths, in the 1 B b region (180-230 nm). The SRCD spectra of the two enantiomers of 1 recorded in CH 2 Cl 2 and CHCl 3 are reported in Fig. S11 in the ESI. † For wavelengths higher than 230 nm, the SRCD spectra of [CD(+) 254 ]-1 and [CD(À) 254 ]-1 are identical in shape and intensities to the ECD spectra described above. For wavelengths lower than 230 nm, the SRCD spectra reveal two additional bands with opposite sign. For instance, the [CD(+) 254 ]-1 enantiomer exhibits in CH 2 Cl 2 a positivenegative bisignate pattern from short to long wavelengths related to the 1 B b transition. It is noteworthy that similar (in shape and in intensities) SRCD spectra were recorded in CHCl 3 solution, in contrast to what was observed for compound 2. 17
VCD and ROA spectroscopy
The chiroptical properties of enantiopure cryptophane 1 have been also investigated by VCD in CDCl 3 and CD 2 Cl 2 solutions. The IR spectra of the [CD(+) 254 ]-1 enantiomer are similar in the 1700-1000 cm À1 region for the two solutions (Fig. S12 in ESI †). In addition, the presence of xenon in the CDCl 3 solution does not modify the IR spectrum in this spectral range. The VCD spectra of the two enantiomers of 1 measured in CDCl 3 and CD 2 Cl 2 solvents are reported in Fig. S13 in ESI, † whereas the comparison of experimental VCD spectra of [CD(+) 254 ]-1 in the two solvents is presented in Fig. 4. As shown in Fig. 4, the VCD spectra of 1 seem independent of the nature of the solvent, even though slight spectral differences are observed in the 1050-1010 cm À1 region. In addition, a slightly lower intensity of the VCD bands is observed in CD 2 Cl 2 solution, which can be related to the lower molar absorptivities measured in CD 2 Cl 2 with respect to CDCl 3 solution. Finally, the presence of xenon in the CDCl 3 solution does not modify the VCD spectrum of [CD(+) 254 ]-1 (Fig. S14 in ESI †).
The ROA spectra of the two enantiomers of 1 measured in CDCl 3 solution (0.1 M), in presence or not of xenon, are shown in Fig. S15 in ESI. † These ROA spectra are nearly perfect mirror images (Fig. S15a andb in ESI †), as expected for enantiopure materials. The ROA spectra measured in CD 2 Cl 2 solution were similar (Fig. S15c andd in ESI †), indicating that the ROA spectra of 1 is independent of the solvent, as already mentioned for ECD and VCD experiments. On the other hand, the ROA spectra of [CD(+) 254 ]-1 in CD 2 Cl 2 solution in presence or not of xenon reveal a clear spectral difference at wavenumbers lower than 200 cm À1 , as shown in Fig. 5. Indeed, in presence of xenon, we observe an important decrease of the intensity of the band at 150 cm À1 . The same effect is observed on ROA spectra measured in CDCl 3 solution (Fig. S16 in ESI †). The vibrational assignment of this mode was made by visual inspection of modes represented and animated by using the Agui program. All the displacement vectors of carbon atoms point towards the center of the cavity, indicating that this mode corresponds to the symmetric breathing mode of the cryptophane-111 skeleton. This result clearly indicates that the presence of a guest inside the cavity of a cryptophane derivative modifies the intensity of its symmetric breathing mode. The examination of this mode could be used in the future to reveal the complexation of guest molecules by cryptophane derivatives. However, it is noteworthy that it would not be possible to observe this effect for the compound 2 in CHCl 3 or CH 2 Cl 2 solutions, since these two solvent molecules can enter the cavity of 2 and would be therefore strong competitors for xenon.
Discussion
AC and conformational analysis of 1
As it is now recommended, different techniques have been used to assign unambiguously the absolute configuration (AC) of the two enantiomers of 1. [33][34][35] Thanks to the determination of the Flack and Hooft parameters, the X-ray crystallography analysis provides an easy way to determine the AC of the two [CD(+) 254 ]-1 and [CD(À) 254 ]-1 enantiomers. Thus, based on the analysis of the two X-ray structures, the following assignment [CD(+) 254 ]-MM-1 and [CD(À) 254 ]-PP-1 has been found for the two enantiomers of 1. Consequently, considering the specific optical rotation measured at 589 nm the AC become (+) 589 -MM-1 and (À) 589 -PP-1. It is noteworthy that these last descriptors are identical to those determined for compound 2, as suggested by the similarity observed in their experimental ECD spectra (Fig. 3).
To confirm the AC of the two enantiomers of 1, determined by X-ray crystallography, we have used VCD and ROA spectroscopy associated with DFT calculations, which are known to be a valuable approach to assign the AC of organic compounds. A conformer distribution search was performed at the molecular mechanics level of theory for the MM-1 configuration, starting from the more symmetrical structure obtained from X-ray analysis of the racemic compound. 6 Twenty-one conformers within roughly 8 kcal mol−1 of the lowest energy conformer were found and their geometries optimized at the DFT level (B3PW91/6-31G**), leading to ten different conformers. The electronic and Gibbs energies as well as the twist angle between the two CTB caps for the three most stable conformers are reported in Table 1 and compared to those calculated from the optimized geometries of the enantiomer crystals. Conformer A leads to the lowest Gibbs free energy and represents more than 99% of the Boltzmann population of conformers at 298 K. This conformer exhibits the most symmetrical structure, with an average value of the twist angle between the two CTB caps of 19.1° (dihedral angles [START_REF]average dihedral angles between the arene ring centroids of OCH 2 O-connected arenes with respect to the C 3 axis of the host[END_REF] of 19.0, 19.1 and 19.1°). Conformers B and C present higher twist angles, with average values of 23.6° and 28.7°, respectively. It is noteworthy that their structures are less symmetrical than that of conformer A, with one dihedral angle differing from the two others (21.1°, 21.5° and 28.3° for conformer B and 23.8°, 31.1° and 31.2° for conformer C). As shown in Fig. 6, the VCD spectrum predicted for the MM configuration of conformer A reproduces very well the sign of most of the bands observed in the experimental spectrum of [CD(+)254]-1, confirming the AC assignment [CD(+)254]-MM-1 determined by X-ray crystallography. A very good agreement between the predicted ROA spectrum of conformer A and the experimental ROA spectrum is also obtained (Fig. S17 in ESI†). This conformational analysis shows that only one conformer is present for 1, contrary to the conformational equilibrium observed for 2 due to the ethylenedioxy linkers (i.e. the possibility of trans and gauche conformations of the three linkers). This lack of conformational equilibrium for 1 may explain the overall higher intensities of the VCD bands for 1 and the lower FWHM (9 cm−1 for 1 vs. 14 cm−1 for 2) used to reproduce the experimental VCD spectrum for 1.
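The population quoted here follows directly from Boltzmann weighting of the relative Gibbs free energies of Table 1; a short numerical check (our own illustration using the tabulated ΔG values):

import numpy as np

dG = np.array([0.0, 3.38, 5.26, 7.51])   # kcal/mol: conformers A, B, C and the crystal geometry (Table 1)
RT = 1.987e-3 * 298.0                    # kcal/mol at 298 K
w = np.exp(-dG / RT)
print(np.round(100.0 * w / w.sum(), 1))  # -> [99.7, 0.3, 0.0, 0.0] %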
As above mentioned, the bisignate pattern observed in the 1 L a region (230-260 nm) of the ECD spectra of cryptophane derivatives can be another way to determine the AC of these derivatives in organic solvents. 16,17 Using the Khun-Kirkwood excitonic model, Gottarelli and co-workers have shown that this bisignate resulted from different excited states (one A 2 and two degenerate E components) for cryptophane possessing a D 3 -symmetry. [START_REF] Canceill | [END_REF] For the 1 L a transition, the A 2 component is always located at lower energy and the two E components show opposite rotational strengths. This model leads to a positive/ negative bisignate pattern (from short to long wavelength) for the MM configuration of cryptophane-A derivatives. For cryptophane-111, this bisignate pattern is not observed and this rule does not apply. Indeed, in the case of compound 1, TD-DFT calculations show that the A 2 component located at high wavelength possesses a lower negative rotational strength (R = À0.38 cgs) than the one measured for compound 2 (R = À0.70 cgs). Thus, the contribution of the A 2 component in the experimental ECD spectrum is embedded in the two E components exhibiting a larger rotational strength (R = 1.05 cgs), leading to a broader positive band in the 1 L a region. The strong decrease of the negative A 2 component of the 1 L a transition suggests that the classical excitonic coupling model can not be used to determine the AC of cryptophane-111 molecule and that other contributions should be involved in the interpretation of the ECD spectrum. For instance, as it has been reported by Pescitelli and co-workers in some cases, 36,37 the coupling between the electric and magnetic transition moments (mm term) can contribute significantly to the overall rotational strength for a given excited state. This contribution, which is usually neglected in the case of the classical excitonic coupling model, can dominate the electric-electric coupling (mm term). Nevertheless, the bisignate pattern observed in the 1 B b region (190-230 nm) of the SRCD spectra can be used to determine the AC of 1. Indeed, the positivenegative sequence from short to long wavelength observed for [CD(+) 254 ]-1 was associated with the MM configuration by TD-DFT calculations (Fig. S18 in ESI †).
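For reference, the rotational strengths discussed here (in cgs units) correspond to the usual definition for a transition 0 → n,

R_0n = Im( <0| μ |n> · <n| m |0> ),

so that both the electric-electric (μμ) coupling of the excitonic model and the electric-magnetic (μm) contribution mentioned above enter through this same quantity; this is a textbook relation quoted only to fix notation.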
Comparison between the chiroptical properties of 1 and 2
In a recent article, different behaviours of the chiroptical properties (in particular, polarimetric properties) were observed for 2 in CHCl 3 and CH 2 Cl 2 solutions. 17 These modifications were interpreted by a subtle conformational equilibrium change of the ethylenedioxy linkers upon encapsulation of CHCl 3 and CH 2 Cl 2 molecules. A preferential G À conformation of the linkers was found in CH 2 Cl 2 solution, in order to decrease the cavity size and to favour hostguest interactions. In contrast, a higher proportion of G + conformation of the linkers was found in CHCl 3 solution, increasing the size of the cavity suitable for the complexation of chloroform molecule. The comparison of the chiroptical properties of 1 and 2 is very interesting because these two compounds possess identical CTB units and differ only by the nature of the alkyl linkers. The conformational equilibrium observed for compound 2 due to the possibility of trans and gauche (G + and G À ) conformations of the three ethylenedioxy linkers is not possible for compound 1 which possess methylenedioxy linkers. In addition, it has been shown that (rac)-1 does not bind halogenomethane molecules so that neither CH 2 Cl 2 nor CHCl 3 can enter its cavity. 3,4 Thus, no spectral modification in the ECD (or SCRD) and VCD (or ROA) spectra is expected for 1 in these two solvents. This assumption is confirmed by our experiments, as shown in the result section.
Our results reveal also that the SOR values of 1 behave similarly in the two solvents. We observe a change of the sign of the SOR values in the nonresonant region, as shown in Fig. 3. This surprising effect has been previously reported with compound 2 for experiments in chloroform, acetone and dimethylformamide. Calculations of the SOR at the B3PW91/6-31G** level (IEFPCM = CHCl 3 ) reproduce perfectly the experimental data measured in CHCl 3 solution (Fig. S19 in ESI †).
Conclusions
This article reports a thorough study of the chiroptical properties of the two enantiomers of cryptophane-111 (1) by X-ray crystallography, polarimetry, ECD (SRCD), VCD, and ROA spectroscopy. The absolute configuration of the two enantiomers has been determined based on X-ray crystallographic data. Thus, the (+) 589 -MM-1 ((À) 589 -PP-1) AC has been assigned. This result has been confirmed by the combined analysis of the VCD (ROA) spectra and DFT calculations. In a second part of this article, the chiroptical properties of 1 have been compared to those of cryptophane-222 (2). Despite the similarity in the two structures, derivatives 1 and 2 exhibit different behaviours of their chiroptical properties with respect to CH 2 Cl 2 and CHCl 3 solvents. In these two solvents, polarimetric measurements and SRCD spectra are clearly different for compound 2, whereas they remain almost unchanged for 1. This different behaviour can be explained by the incapacity of compound 1 to encapsulate a solvent molecule within its cavity, regardless of the nature of the solvent. Consequently, the nature of the solvent has almost no influence on the conformation of the methylenedioxy linkers. This result confirm our previous assumption that the different chiroptical properties observed for 2 in chloroform and dichloromethane solutions are certainly due to the conformation equilibrium change of the ethylenedioxy linkers upon encapsulation of CH 2 Cl 2 or CHCl 3 molecules.
Thus, the comparison of the chiroptical properties of cryptophanes 1 and 2 sheds light on the importance of the solvent present within the cavity for understanding the chiroptical properties of cryptophane derivatives in general. In addition, our results show that the bulk solvent has no significant effect on the chiroptical properties of 1.
Fig. 1 Separation of the two enantiomers of 1 using an analytical chiral HPLC column.
Fig. 2 Specific optical rotation values (10−1 deg cm2 g−1) of [CD(+)254]-1 recorded at several wavelengths (365, 436, 546, 577 and 589 nm) in chloroform (c = 0.22) and dichloromethane (c = 0.27) solvents.
Fig. 3 Comparison of experimental ECD spectra of [CD(+)254]-1 (black spectrum) and [CD(−)254]-2 (red spectrum) in CH2Cl2 solution.
Fig. 4 Comparison of experimental VCD spectra of [CD(+)254]-1 in CDCl3 (black spectrum) and in CD2Cl2 (red spectrum) solutions.
Fig. 5 Comparison of experimental ROA spectra of [CD(+)254]-1 in CD2Cl2 solution in presence (red spectrum) or not (black spectrum) of xenon.
Fig. 6 Comparison of the experimental VCD spectrum of [CD(+)254]-1 recorded in CDCl3 solution with the calculated spectrum at the B3PW91/6-31G** level (IEFPCM = CHCl3) for conformer A of MM-1.
Table 1 Conformations, twist angles and energies of the three conformers of MM-1 calculated from the crystal of (rac)-1, and of the one calculated from the crystal of MM-1

Conformer      Twist angle (deg)   Electronic energy (hartree)   Gibbs energy (hartree)   ΔG (kcal/mol)   Population (%)
A              19.1                -2187.01467627                -2186.367759             0               99.7
B              23.6                -2187.00811518                -2186.362379             3.38            0.3
C              28.7                -2187.00412196                -2186.359201             5.26            0.0
Crystal MM-1   55.3                -2187.00057700                -2186.355790             7.51            0.0
Acknowledgements
Support from the French Ministry of Research (project ANR-12-BSV5-0003 MAX4US) is acknowledged. The authors are indebted to the CNRS (Chemistry Department) and to Région Aquitaine for financial support for the VCD and ROA equipment. They also acknowledge computational facilities provided by the MCIA (Mésocentre de Calcul Intensif Aquitain) of the Université de Bordeaux and of the Université de Pau et des Pays de l'Adour, financed by the Conseil Régional d'Aquitaine and the French Ministry of Research and Technology. The GDR 3712 Chirafun is acknowledged for allowing a collaborative network between the partners of this project.
00176692 | en | [ "chim", "sdu", "phys" ] | 2024/03/05 22:32:15 | 2008 | https://hal.science/hal-00176692/file/inpress_JNCS_Massiot.pdf | Dominique Massiot
email: [email protected]
Franck Fayon
Valérie Montouillout
Nadia Pellerin
Julien Hiet
Claire Roiland
Pierre Florian
Jean-Pierre P Coutures
Laurent Cormier
Daniel R Neuville
Structure and dynamics of Oxyde Melts
whether they are published or not. The documents may come L'archive ouverte pluridisciplinaire
Introduction
Oxide glasses have been known and used for thousands of years, and the tuning of properties like colour, durability or viscosity of the molten state was long mastered empirically by glass makers. Despite this millenary knowledge, the range of glass-forming systems of interest is still expanding and many non-elucidated points remain in the understanding of glass and melt structure and properties. The aim of this contribution is to underline, from the experimental point of view provided by Nuclear Magnetic Resonance, the relations existing between the structure and dynamics of high temperature molten oxide systems and the short and medium range order of their related glasses.
The strength of Nuclear Magnetic Resonance for describing the structure and dynamics of amorphous or disorganised systems like oxide glasses or melts comes firstly from its ability to selectively observe the environment of the different constitutive atoms (provided that they bear a nuclear spin) and secondly from its sensitivity to small variations in the first and second coordination spheres of the observed nucleus. This often provides spectral separation of the different types of environment [START_REF] Mackenzie | MultiNuclear Solid State NMR of Inorganic Materials[END_REF]. The information derived from NMR experiments is thus complementary to that obtained by other means: optical spectroscopies, IR or Raman, X-ray absorption, X-ray or neutron elastic or inelastic scattering, etc. It is important to remark that NMR has a much slower characteristic time scale (ranging from Hz to MHz) than most of the above mentioned methods, leading to fundamental differences in the signatures of the viscous high temperature molten states.
One dimensional NMR experiments
In liquid state in general, and in the high temperature molten state in the case of oxide glass forming systems, the mobility is such that only the isotropic traces of the anisotropic interactions express in their NMR spectra. Fluctuation of these interactions leads to relaxation mechanisms that can allow discussion of the characteristic times of rearrangement and overall mobility of the system. In solid state materials and in glasses the anisotropy of the different interaction fully express in the static NMR spectra giving broad and often featureless line shapes accounting for all the different orientations of the individual structural motifs of the glass. Although these broad spectra contain many different information on the conformation of the structural motifs (Chemical Shift Anisotropy -CSA), spatial proximity between spins (homo-and hetero-nuclear Dipolar interactions), chemical bonds (indirect J coupling), electric field gradient at the nucleus position (Quadrupolar interaction for I>1/2 nuclei), these information are often hardly evidenced. Magic Angle Spinning is this unique tool that solid state NMR has at hand to average out all (or most) of the anisotropic part of the interactions only leaving their traces mimicking (or giving a coarse approach of) the Brownian reorientation of the liquid phase. Under rapid Magic Angle Spinning, Chemical Shift is averaged to its isotropic value and distribution directly given by the line position and width in the case of a dipolar (I=1/2) spin, while the traceless Dipolar interaction is averaged out, and the scalar (or isotropic) part of J-coupling is usually small enough to be completely masked in a 1D spectrum, even in crystalline phases.
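As a reminder of the geometry behind this averaging (textbook material, not specific to this work): the secular parts of the rank-2 anisotropic interactions scale with the second Legendre polynomial of the angle θ between the rotor axis and the static field,

P_2(cos θ) = (3 cos²θ − 1)/2 = 0  ⇒  θ_m = arccos(1/√3) ≈ 54.74°,

so that fast spinning about θ_m leaves only the isotropic averages, e.g. δ_iso = (δ_xx + δ_yy + δ_zz)/3 for the chemical shift.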
Phosphate, silicate, alumino-silicate or aluminate oxide glass structures are mostly based on tetrahedral species whose polymerization is characterized by their number of bridging oxygens (Q n: Q = P, Si, Al and n the number of bridging oxygens). Figure 1 presents the 31P MAS NMR 1D spectrum of a (60% PbO-40% P2O5) glass. It shows two broad but resolved resonances in a 1/1 ratio that can be unambiguously ascribed to end-chain group (Q1: 750 Hz, 6.2 ppm width) and middle-chain group (Q2: 1100 Hz, 9 ppm width) environments.
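Such Q1/Q2 proportions are typically extracted by decomposing the MAS spectrum into two broad components; the sketch below shows one possible way to do this (illustrative only: the peak positions in p0 are placeholders and the authors' actual fitting procedure may differ):

import numpy as np
from scipy.optimize import curve_fit

def gauss(x, area, x0, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return area * np.exp(-0.5 * ((x - x0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def two_sites(x, a1, x1, w1, a2, x2, w2):
    return gauss(x, a1, x1, w1) + gauss(x, a2, x2, w2)

# ppm, signal = ppm axis and intensity of the processed 31P MAS spectrum
# widths in p0 use the values quoted above (6.2 and 9 ppm); positions are placeholders
# popt, _ = curve_fit(two_sites, ppm, signal, p0=[1.0, -10.0, 6.2, 1.0, -25.0, 9.0])
# q1_fraction = popt[0] / (popt[0] + popt[3])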
Both these lines are considerably broader than those of the corresponding crystalline sample (Pb3P4O13, linewidth < 1 ppm) due to the disorder in the glass structure and the loss of long range order. In the case of simple binary phosphate or silicate glasses, the broad lines corresponding to the various Q n tetrahedral sites are often resolved enough to allow quantification of their respective abundance and evaluation of the disproportionation equilibrium constants (K n: 2Q n <-> Q n-1 + Q n+1) [START_REF] Stebbins | [END_REF]. Figure 2 reports these quantitative results for the PbO-SiO2 [3] and PbO-P2O5 [4] binary systems. In lead-phosphate glasses the K n values remain very small, which corresponds to a binary distribution and indicates that only two types of Q n environments can co-exist at a given composition, while in lead-silicate glasses the equilibrium constants are much higher, close to those of a randomly constructed network with a competition between lead based and silicon based covalent networks. 207Pb NMR and L III-EXAFS experiments confirmed this interpretation by showing that the coordination number of Pb in silicates is 3 to 4 oxygens, with short covalent bonds and a very asymmetric environment (pyramid with lead at the top), while it is more than 6 in phosphate glasses with a more symmetric environment, lead behaving more as a network modifier [3][4][5].
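Written on species fractions, the disproportionation equilibrium quoted above corresponds to

K_n = [Q^{n-1}][Q^{n+1}] / [Q^n]^2,

so that K_n → 0 describes a strictly binary distribution of Q^n species (as found here for the lead phosphate glasses), while larger values approach the random-mixing limit (as for the lead silicate glasses).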
Polyatomic molecular motifs
Although this information already gives important details on the structure of these phosphate or silicate binary glasses, it would be of great interest to obtain a larger scale image of the polyatomic molecular motifs constituting these glasses and especially to evaluate the length of the phosphate chains possibly present in the glass, which makes the difference between the long range ordered crystalline phase and the amorphous phase. That type of information can be obtained by implementing multidimensional NMR experiments that evidence the Dipolar [4] or J-coupling [6,7,8] interactions and further use them to build correlation experiments separating the different contributions of well defined molecular motifs. Figure 1 gives a general picture of the possibilities offered by the J-coupling mediated experiments that directly evidence the P-O-P bonds bridging phosphate units through the 2J P-O-P interaction. Let us consider the example of the 60% PbO-40% P2O5 glass already introduced above. Its 1D spectrum (fig. 1a) shows partly resolved Q1 and Q2 lines with strong broadening (750 and 1100 Hz) signing the glass disorder that remains to be understood. Because the Q1 and Q2 line width is essentially inhomogeneous, due to a distribution of frequencies for each individual motif, this broadening can be refocused in an echo which is modulated by the small (and unresolved) isotropic 2J P-O-P coupling [7]. Figure 1b shows the J-resolved spectrum of the glass, which reveals the J coupling patterns consisting of a doublet for Q1 and a triplet for Q2, thus justifying the spectral attribution previously made on the basis of the 31P isotropic chemical shift. It is also of importance to notice that this experiment clearly shows that the isotropic J-coupling does vary across the 1D lines, typically increasing with decreasing chemical shift. This new type of information is likely to bear important information on the covalent bond hybridisation state and geometry. Because this isotropic J-coupling can be measured, it can also be used to reveal - or to spectrally edit - different polyatomic molecular units in the glass. Figures 1c and 1d respectively show the two-dimensional correlation spectra that enable the identification of through-bond connectivity between two linked PO4 tetrahedra (fig. 1c) [6] and between three linked PO4 tetrahedra (fig. 1d) [8]. These experiments, fully described in the referenced papers, allow spectral separation of dimers, end-chain groups and middle-chain groups when selecting Q-Q pairs (fig. 1c), and of trimers, end-chain triplets or centre-chain triplets when selecting Q-Q-Q triplets (fig. 1d). From these experiments it becomes possible to identify the different structural motifs constituting these glasses in terms of molecular building blocks extending over 6 chemical bonds (O-P-O-P-O-P-O), over lengths up to nearly 10 Å if we consider a linear chain. Other experiments of the same type now allow the description of hetero-nuclear structural motifs of different types involving Al-O [9], Al-O-Si or P-O-Si [10] bonds, opening the possibility of a more detailed description of glasses or disordered solids at larger length scale.
High Temperature NMR experiments
Even if most of the resolution is lost when going to static NMR spectra in the general case, the very different chemical shift anisotropy of Q3 and Q4 silicon environments can be a source of enough resolution for evidencing dynamic processes occurring close to or above the glass transition temperature, as shown by Stebbins and Farnan in the case of a binary K2O-4SiO2 composition [11]. They showed that while the two Q3 and Q4 contributions can be resolved from their different chemical shift anisotropy or from their isotropic chemical shift below the glass temperature, they begin to exchange just above the glass transition with characteristic times of the order of seconds [12] and finally end up merging into a unique line in the high temperature molten state. This experiment underlines two important points: first, although silicate glasses can be regarded as SiO2 based polymers, the melting of silicate glasses implies rapid reconfiguration of the structural motifs through a mechanism that was proposed to involve a higher SiO5 coordination state of silicon with oxygen; second, the characteristic time scales of NMR spectroscopy allow the exploration of a large range of time scales involved in this mechanism.
We can remark that this has been recently extended to below T g structural reorganisation of BO 3 and BO 4 configurations in borosilicate glasses [START_REF] Sen | XI International Conference on Physics on Non Crystalline Solids[END_REF]. The existence of higher (and previously unexpected) SiO 5 coordination state of silicon was proved experimentally by acquiring high quality 29 Si NMR spectra [START_REF] Stebbins | [END_REF] with clear effects of quenchrates and pressure stabilizing these high coordination silicon environments.
The high temperature NMR setup developed in our laboratory, combining CO 2 laser heating and aerodynamic levitation allows acquisition of 27 Al resolved NMR spectra in molten oxide at high temperature with a good sensitivity [15,16]. Figure 3a shows the experimental setting and an example of a 27 Al spectrum acquired in one scan for a liquid molten sample CaAl 2 O 4 at ~2000°C [17]. The sensitivity of this experiment is such that one can follow in a time-resolved manner the evolution of the 27 Al signal when cooling the sample from high temperature, until disappearance of the signal when the liquid becomes too viscous.
As in the case of the high temperature molten silicates discussed above, we only have a single sharp line giving the average chemical shift signature of the rapidly exchanging chemical species. This latter point is confirmed by independent T1 (spin-lattice) and T2 (spin-spin) relaxation measurements giving similar values, reliably measured in the 1D spectrum from the linewidth. This relaxation time can be modelled using a simple model of quadrupolar relaxation, which requires knowledge of the instantaneous quadrupolar coupling; this can be estimated from the 27Al MAS NMR spectrum of the corresponding glass at room temperature. The obtained correlation times, corresponding to the characteristic time of the rearrangement of aluminium-bearing structural units, can be directly compared to the characteristic times of the macroscopic viscosity, with a convincing correspondence in the case of aluminate melts [18] (Figure 3b and c).
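The relaxation model referred to here is, in the extreme narrowing limit relevant for these fluid melts (ω_0 τ_c << 1), commonly written as

1/T_1 = 1/T_2 = (3π²/10) · (2I+3)/(I²(2I−1)) · (1 + η²/3) · C_Q² · τ_c,

with I = 5/2 for 27Al, C_Q = e²qQ/h the instantaneous quadrupolar coupling constant (estimated from the 27Al MAS spectrum of the glass), η the asymmetry parameter and τ_c the correlation time; the measured linewidth therefore gives τ_c directly, which is the quantity compared with the macroscopic viscosity in Figure 3b and c.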
Structure and dynamics of alumino-silicates
In alumino-silicate glasses of more complex composition, aluminium is able to substitute for silicon in tetrahedral network-forming positions, providing charge compensation by a neighbouring cation. In such a case, the NMR signature of 29Si spectra is much more complex and difficult to interpret [19]. Because 29Si Q n species isotropic chemical shifts depend upon Al substitution in neighbouring tetrahedra, 29Si spectra are usually broad Gaussian lines covering the full range of possible environments. Similarly, 27Al spectra are broadened by the combination of a distribution of chemical shifts and a distribution of second order quadrupolar interactions [21] and give only average pictures of the structure, with possible resolution of different coordination states but no resolution of the different Al based Q n species, except in the case of binary CaO-Al2O3 glasses in which NMR and XANES both show proofs of the depolymerization of the AlO4 based network [20]. In alumino-silicate glasses, aluminium species with higher coordination were evidenced [22] and quantified [21] using a detailed modelling of the 27Al MAS and MQMAS NMR spectra obtained at high principal fields. One can also remark that no SiO5 environments have ever been evidenced in alumino-silicate compositions. Going further and examining the whole SiO2-Al2O3-CaO phase diagram [23], we showed that these AlO5 environments are not confined to the charge compensation line or to the hyper-aluminous region of the ternary diagram, where there exists a deficit of charge compensators, but that AlO5 species are present, at a level of ~5%, for any alumino-silicate composition of this ternary diagram, including those presenting the smallest fraction of alumina but excluding the calcium aluminates of the CaO-Al2O3 join, which nearly exclusively show aluminium in the AlO4 coordination state. For the C3A composition, XANES unambiguously shows that Al occupies Q2 environments both in the crystal and in the glass [20,23].
This finding that there exist no or very few AlO5 in compositions close to the CaO-Al2O3 composition is somehow in contradiction with our previous interpretation of the chemical shift temperature dependence with a negative slope [17]. At that time we proposed to consider that there could exist significant amounts of five-fold coordinated aluminium in the high temperature molten state, based on the thermal dependence of the chemical shift and on state of the art MD computations. A more detailed study shows that, across the CaO-Al2O3 join, the slope of the temperature dependence of the average chemical shift in the high temperature molten state drastically changes from a positive value for Al2O3 to very negative values (~-4 to -5 ppm) for compositions around CaAl2O4. Indeed, we can even remark that all compositions able to vitrify in aerodynamic levitation contactless conditions have a slope smaller than -2 ppm/1000°C (Figure 4a). Stebbins and coworkers recently studied the 17O NMR signature of similar compositions [24]. They evidenced a significant amount of non-bridging oxygen atoms and discussed the possibility of a seldom observed µ3 tricluster oxygen linking three tetrahedral Al sites, which exists in the closely related CA2 (CaAl4O7, Grossite) crystalline phase. Thanks to the development of new methods of hetero-nuclear correlation between quadrupolar nuclei through J-coupling at high principal field (750 MHz) [9], we could reexamine this question and show that a {17O}27Al experiment carried out on a CaAl2O4 glass was able to clearly evidence the signature of ~5% µ3 tricluster oxygens linked to aluminium, with a chemical shift decreased by 5 ppm per linked tricluster (Figure 4b). It thus appears that molecular motifs of the type µ3[AlO3]3 can be quenched in the glass and do exist in the molten state while AlO5 remains negligible, prompting a new interpretation of the thermal dependence of the 27Al isotropic chemical shift in the CaO-Al2O3 melts.
Conclusion
From the above discussed experimental results we can draw several important points about the relations between structure and properties of oxide glasses and their related molten states which appear to be closely related. It first clearly appears that in many cases, even if most of the structure of the glasses, and consequently of their related high temperature molten states are built around a network of µ 2 connected tetrahedra (P, Si, Al…), there exist in many cases unexpected environments showing up as minor contributions in the glass structures (~5% or less) but significantly present and relevant to molecular motifs that can be identified. This is the case of SiO 5 species in binary alkali silicates [START_REF] Stebbins | [END_REF], AlO 5 (AlO 6 ) [21][22][23] in aluminosilicates, violations to Al avoidance principle [25] or tricluster µ 3 oxygens [9]. This implies that modelling of these complex materials in their solid or molten state will often be difficult using limited box sizes. Just consider that 5% of Aluminum species in a glass containing 5% Al 2 O 3 in a Calcium Silicate only represent 1 to 2 atoms over 1000 or that 5% µ 3 oxygens in a CaAl 2 O 4 composition represent less than 3 occurrences in a box of 100 atoms.
Furthermore Charpentier and coworkers [26] recently showed that a proper rendering of NMR parameters from all electrons ab-initio computations in glasses requires a combination of classical and ab-initio MD simulation. Going further we also emphasize that an important part of what we qualify with the general term of disorder can be described in terms of distribution of poly-atomic molecular motifs extending over a much larger length scale than the usual concept of coordination.
Figure Captions :
Summary of NMR experiments on a 60%PbO-40%P 2 O 5 glass evidencing polyatomic molecular motifs with : (a) 1D spectrum, (b) the J resolved spectrum showing doublet for Q 1 and triplet for Q 2 [7], (c) the INADEQUATE experiment evidencing pairs of phosphates (Q-Q) [6], and (d) the 3Quantum spectrum evidencing triplets of phosphates (Q-Q-Q) [8].
Figure 2
Quantitative interpretation of 29 Si and 31 P 1D spectra allowing the measurement of disproportionation constants for (a) lead silicate [3] and (b) lead phosphate glasses [4].
Q 1 -Q 1 Q 1 -Q 2 Q 2 -Q 2 Q-Q pairs Chemical bond Q 1 Q 2 (a) (b) Q-Q-Q triplets Q 1 -Q 2 -Q 1 Q 2 -Q 2 -Q 2 Q 1 -Q 2 -Q 2 (d) (c)
). From these experiments it becomes possible to identify the different structural motifs constituting these glasses in terms of molecular building blocks extending over 6 chemical bonds (O-P-O-P-O-P-O) over lengths up to nearly 10Å if we consider a linear chain. Other experiments of the same type now allow to describe hetero-nuclear structural motifs of different types involving Al-O[9], Al-O-Si, P-O-Si[10] or opening the possibilities of more detailed description of glasses or disordered solids at large length scale.High Temperature NMR experimentsEven if most of the resolution is lost when going to static NMR spectra in the general case, the very different chemical shift anisotropy of Q 3 and Q 4 silicon environment can be source of enough resolution for evidencing dynamic process occurring close to or above glass transition temperature as shown by Stebbins and Farnan in the case of a binary K 2 O-4SiO 2
Al substitution in neighbouring tetrahedra, 29 Si silicon spectra are usually broad Gaussian lines covering the full range of possible environments. Similarly, 27 Al aluminium spectra are broadened by the combination of a distribution of chemical shifts and a distribution of second-order quadrupolar interactions [21] and give only average pictures of the structure, with possible resolution of the different coordination states but no resolution of the different Al-based Q n species, except in the case of binary CaO-Al 2 O 3 glasses in which NMR and XANES both
temperature molten state, based on the thermal dependence of the chemical shift and on state-of-the-art MD computations. A more detailed study shows that, across the CaO-Al 2 O 3 join, the slope of the temperature dependence of the average chemical shift in the high-temperature molten state changes drastically from a positive value for Al 2 O 3 to very negative values (~-4 to -5 ppm) for compositions around CaAl 2 O 4 . Indeed, we can even remark that all compositions able to vitrify under the contactless conditions of aerodynamic levitation have a slope smaller than -2 ppm/1000°C (Figure 4a). Stebbins and coworkers recently studied the 17 O NMR signature of similar compositions [24]. They evidenced a significant amount of non-bridging oxygen atoms and discussed the possibility of a seldom observed µ 3 tricluster oxygen linking three tetrahedral Al sites, which exists in the closely related CA 2 (CaAl 4 O 7 -Grossite) crystalline phase. Thanks to the development of new methods of hetero-nuclear correlation between quadrupolar nuclei through J-coupling at high principal field (750 MHz) [9], we could reexamine this question and show that a { 17 O} 27 Al experiment carried out on a CaAl 2 O 4 glass was able to clearly evidence the signature of ~5% µ 3 tricluster oxygens linked to aluminium, with the chemical shift decreased by 5 ppm per linked tricluster (Figure 4b). It thus appears that molecular motifs of the µ 3 [AlO 3 ] 3 type can be quenched in the glass and do exist in the molten state while AlO 5 remains negligible, raising a new interpretation of the thermal dependence of the 27 Al isotropic chemical shift in CaO-Al 2 O 3 melts.
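The slopes quoted here are simply linear-regression coefficients of the average isotropic shift against temperature, expressed in ppm per 1000°C. A minimal sketch of such an extraction is given below; the data points are invented placeholders, not the measured values shown in Figure 4a.

import numpy as np

# Illustrative only: synthetic (temperature, average 27Al shift) pairs standing in
# for high-temperature melt data; the real values are displayed in Figure 4a.
T = np.array([1700.0, 1800.0, 1900.0, 2000.0])      # temperature, °C
delta = np.array([68.0, 67.6, 67.1, 66.7])          # average isotropic shift, ppm

slope_per_degC, intercept = np.polyfit(T, delta, 1)
print(f"d(delta)/dT = {1000 * slope_per_degC:.1f} ppm per 1000 °C")   # about -4.4 with these numbers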
Figure 3 (a) High-temperature aerodynamic levitation NMR setup and a characteristic one-shot spectrum, (b) temperature dependence of the chemical shift and (c) viscosity and NMR correlation times [adapted from ref. 17]
Figure 4 (a) Slope of the thermal dependence of the average chemical shift at high temperature versus composition for the CaO-Al 2 O 3 join. (b) { 17 O} 27 Al HMQC experiment on a CaO-Al 2 O 3 glass at 750 MHz showing a clear signature of µ 3 tricluster oxygens [adapted from ref. 9].
D. Massiot, XI PNCS, Rhodos Nov 2006
Acknowledgements
We acknowledge financial support from CNRS UPR4212, FR2950, Région Centre, MIIAT-BP and ANR contract RMN-HRHC. | 19,574 | [
"5931",
"737184",
"739260",
"8712"
] | [
"450",
"450",
"450",
"450",
"450",
"450",
"450",
"57022",
"3204",
"1852",
"21996"
] |
01694219 | en | [
"spi"
] | 2024/03/05 22:32:15 | 2017 | https://hal.science/hal-01694219/file/FULLTEXT01.pdf | Hatim Alnoor
email: [email protected]
Adrien Savoyant
Xianjie Liu
Galia Pozina
Magnus Willander
Omer Nur
An effective low-temperature solution synthesis of Co-doped [0001]-oriented ZnO nanorods
Keywords: Low-temperature aqueous chemical synthesis, ZnO NRs, Co-doping, EPR, intrinsic point defects
We demonstrate an efficient route to synthesize vertically aligned pure zinc oxide (ZnO) and Co-doped ZnO nanorods (NRs) using low-temperature aqueous chemical synthesis (90 ºC). Two different mixing methods of the synthesis solutions were investigated for the Co-doped samples. The synthesized samples were compared to pure ZnO NRs regarding Co incorporation and crystal quality. Electron paramagnetic resonance (EPR) measurements confirmed the substitution of Co 2+ inside the ZnO NRs, giving a highly anisotropic magnetic Co 2+ signal. The substitution of Zn 2+ by Co 2+ was observed to be accompanied by a drastic reduction in the core-defect (CD) signal (g ~ 1.956) which is seen in pure ZnO NRs. As revealed by cathodoluminescence (CL), the incorporation of Co causes a slight red-shift of the UV peak position combined with an enhancement in the intensity of the defect-related yellow-orange emission compared to pure ZnO NRs. Furthermore, the EPR and CL measurements allow a possible model of the defect configuration in the samples to be proposed. It is proposed that the as-synthesized pure ZnO NRs likely contain Zn interstitials (Zni + ) as CDs and oxygen vacancies (VO) or oxygen interstitials (Oi) as surface defects. As a result, Co was found to likely occupy the Zni + sites, leading to the observed CD reduction and hence enhancing the crystal quality. These results open the possibility of synthesizing ZnO NRs-based diluted magnetic semiconductors (DMSs) of high crystalline quality using the low-temperature aqueous chemical method.
1-Introduction
Zinc oxide (ZnO) is a direct wide-band-gap (3.4 eV at room temperature) semiconductor with a relatively large exciton binding energy of 60 meV and possesses significant luminescence covering the whole visible region. [1][2][3][4] Moreover, ZnO can easily be synthesized in a diversity of one-dimensional (1D) nanostructure morphologies on any substrate, be it crystalline or amorphous. [1][2][3][4][5][6][7] In particular, 1D ZnO nanostructures such as nanowires (NWs) and nanorods (NRs) have recently attracted considerable research interest due to their potential for the development of many optoelectronic devices, such as light-emitting diodes (LEDs), ultraviolet (UV) photodetectors and solar cells. [2][3][4][8][9][10] Also, ZnO NRs-based diluted magnetic semiconductors (DMSs), where a low concentration of magnetic elements (such as manganese (Mn) and cobalt (Co)) is diluted in the ZnO crystal lattice, show great promise for the development of spintronic and magneto-optical devices. [11][12][13][14] Among the different synthesis techniques utilized for ZnO NRs, the low-temperature solution-based methods are promising due to many advantages: low cost, the possibility of large-scale production, and the fact that the properties of the final product can be varied by tuning the synthesis parameters. [5][6][7] However, synthesizing ZnO NRs with optimized morphology, orientation, electronic and optical properties by low-temperature solution-based methods remains a challenge. The potential of ZnO NRs in all above-mentioned applications would require the synthesis of high-crystal-quality ZnO NRs with controlled optical and electronic properties. [2][3][4]15 It is known that the optical and electronic properties of ZnO NRs are mostly affected by the presence of native (intrinsic) and impurity (extrinsic) defects. [1][2][3][4] Therefore, understanding the nature of these intrinsic and extrinsic defects and their spatial distribution is critical for optimizing the optical and electronic properties of ZnO NRs. [1][2][3][4][16][17][18] However, identifying the origin of such defects is a complex matter, especially in nanostructures, where the information on anisotropy is usually lost due to the lack of coherent orientation. Recently, we have shown that by optimizing synthesis parameters such as stirring times and the seed layer properties, the concentration of intrinsic point defects (i.e. vacancies and interstitial defects) along the NRs and at the interface between the NRs and the substrate can be tuned significantly. 8,19,20 Thus, the ability to tune such point defects along the NRs could further enable the incorporation of Co ions, where these ions could occupy such vacancies through substitutional or interstitial doping, e.g. a Co ion can replace a Zn atom or be incorporated into interstitial sites in the lattice. 21 Here, by developing these synthesis methods, we obtained well-oriented ZnO NRs, and by studying them at low temperature, we can access the magnetic anisotropy of such defects. Furthermore, by incorporating a relatively low amount of diluted Co into ZnO NRs, the crystal structure of the as-synthesized well-oriented ZnO NRs can be significantly improved. The well-oriented pure ZnO and Co-doped ZnO NRs were synthesized by low-temperature aqueous chemical synthesis (90 ºC).
The structural, optical, electronic, and magnetic properties of the as-synthesized well-oriented NRs have been systematically investigated by means of field-emission scanning electron microscopy (SEM), X-ray powder diffraction (XRD), electron paramagnetic resonance (EPR), cathodoluminescence (CL) and X-ray photoelectron spectroscopy (XPS).
2-Experimental
The pure ZnO and Co-doped ZnO NRs were synthesized by low-temperature aqueous chemical synthesis at 90 ºC on sapphire substrates. For pure ZnO NRs, a 0.075 M synthesis solution was prepared by dissolving hexamethylenetetramine (HMTA) and zinc nitrate hexahydrate in deionized (DI) water and then stirring for three hours at room temperature (later denoted as the M0 sample). After that, sapphire substrates precoated with a ZnO seed layer 8,19,20 were submerged horizontally inside the above-mixed solutions and kept in a preheated oven at 90 °C for 5 hours.
Afterward, the samples were rinsed with DI water to remove any residuals and finally dried using blowing nitrogen. The synthesis process of the pure ZnO NRs is described in more detail in Refs. 8,19,20 The Co-doped ZnO NRs were grown under similar conditions, where two different approaches were used to prepare the synthesis solution. The first synthesis solution was prepared by mixing 0.075 M concentrations of HMTA and zinc nitrate and stirring for 15 hours. Then a diluted solution of cobalt(II) nitrate hexahydrate with an atomic concentration of 7% was added dropwise to the above solution and stirred for an extra 3 hours (later denoted as M1). The second synthesis solution was prepared by mixing a 7% diluted solution of cobalt(II) nitrate hexahydrate with 0.075 M HMTA and stirring for 15 hours; then a 0.075 M solution of zinc nitrate hexahydrate was added dropwise to the above-mentioned solution and stirred for an extra 3 hours (later denoted as M2).
The morphology of the as-synthesized pure ZnO and Co-doped ZnO NRs was characterized using field-emission scanning electron microscopy (FE-SEM, Gemini LEO 1550). The crystalline and electronic structure were investigated by XRD using a Philips PW1729 diffractometer equipped with Cu-Kα radiation (λ = 1.5418 Å) and EPR, respectively. The EPR measurements were performed using a conventional Bruker ELEXSYS continuous wave spectrometer operating at X-band (ν = 9.38 GHz) equipped with a standard TE102 mode cavity. The angle between the static magnetic field and the NRs axis, denoted by θ, was monitored by a manual goniometer. The optical properties were examined by cathodoluminescence (CL) using Gatan MonoCL4 system combined with Gemini LEO 1550 FE-SEM. The CL measurements were performed on aggregated nanorods using an acceleration voltage of 5 kV. The chemical composition was analyzed by XPS measurements recorded by Scienta ESCA-200 spectrometer using monochromator Al Kα X-ray source (1486.6 eV). All the measurements were carried out at room temperature (RT) except the EPR measurements which were performed at 6 K.
3-Results and discussion
Fig. 1 shows the top-view FE-SEM images of the as-synthesized pure ZnO (M0) and Co-doped ZnO NRs (M1) and (M2), respectively. The SEM images reveal that all the as-synthesized NRs were vertically aligned with a hexagonal shape. The average diameter of the NRs was found to be ~160, ~400 and ~200 nm for M0, M1, and M2, respectively. The significant (M1) and slight (M2) increase in the average NR diameter compared to M0 is likely due to Co doping. 22,23 Fig. 1: SEM images of pure ZnO (M0) and Co-doped ZnO NRs as-synthesized using approaches M1 and M2, respectively.
The structural quality of the as-synthesized pure ZnO and Co-doped ZnO NRs has been confirmed by the XRD measurements, as illustrated in Fig. 2. The XRD patterns showed that all the as-synthesized samples have a wurtzite structure and possess good crystal quality with a preferred growth orientation along the c-axis, as demonstrated by the intensity of the (002) peak. 15,[23][24][25] Also, it should be noted that no secondary phase related to Co was observed in the XRD patterns of any of the three NR samples. As shown in the inset of Fig. 2, the position of the (002) peak is slightly shifted toward lower 2θ angle in M1, and toward higher 2θ angle in M2, as compared to M0. Peak position shifts toward lower and higher 2θ angles are reported to be a confirmation of the successful incorporation of Co into the ZnO crystal lattice. 15,23,26 The peak position shift is also reported to be due to the variation of oxygen vacancies (Vo) and zinc interstitials (Zni) caused by Co doping. 27,28 In this study, the Co concentration in the synthesis solution is the same (7 %) for both M1 and M2. Thus, the observed shifts in the peak position could be attributed either to Co incorporation or to the variation of the defect concentration, e.g. vacancies and interstitials induced by Co doping. These results show that the way of preparing the synthesis solution has a significant influence on the Co incorporation in the synthesized ZnO NRs. The inset shows the normalized XRD data for the (002) peaks, indicating peak shifts.
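As a reading aid (ours, not part of the original analysis), the sketch below shows how a (002) peak position converts into the c lattice parameter through Bragg's law with the Cu-Kα wavelength quoted in the Experimental section; the 2θ values used are purely illustrative, since the measured positions are only shown graphically in the inset of Fig. 2.

import math

lam = 1.5418          # Cu-Kalpha wavelength, Angstrom (as quoted in the Experimental section)

def c_from_002(two_theta_deg):
    """c lattice parameter (Angstrom) from the wurtzite (002) reflection: c = 2*d(002) = lam/sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return lam / math.sin(theta)

# Illustrative 2-theta values only (not the measured positions):
for two_theta in (34.35, 34.42, 34.50):
    print(f"2theta = {two_theta:.2f} deg  ->  c = {c_from_002(two_theta):.4f} A")
# A shift of ~0.1 deg toward lower angle corresponds to an expansion of c by roughly 0.01-0.02 A.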
Further, to confirm the crystal quality and the incorporation of Co into the ZnO crystal lattice as suggested by the XRD results, EPR spectra were recorded at 6 K, and the results are shown in Fig. 3 (a)-(b). The EPR spectrum of pure ZnO NRs (M0) is characterized by the well-known defect signal from ZnO apparent at ~350 mT (g ~1.956) [29][30][31][32][33][34] as shown in Fig. 3 (a), commonly attributed to core defects (CDs) arising from ZnO nanostructures rather than shell defects. 32,33 However, the identification of the exact nature of this CD (1.956) signal is controversial, 35 and to date no experiment can give a concrete answer. Previously, many defect signals close to this value (1.956) have been reported, and Zn interstitials (Zni + ) and the so-called D * center were proposed. 31,34 Indeed, the angle-dependent spectra of the CD signal shown in Fig. 3(a) display a slight easy-axis magnetic anisotropy and are composed of two overlapping lines. This observed anisotropy is compatible with a Zni + defect (easy axis) but not with the D* defect (easy plane), so that Zni + appears to be the most probable defect. 31,34 In our previous study, these CDs were characterized by three lines, which supports our hypothesis that these lines are likely variations of the same defect, i.e. the same defect with slightly different parameters. 36 The successful substitution of Co 2+ was confirmed by the observed Co-related signal characterized by an eight-line structure at g ~ 2.239 (θ = 0º) and a broad asymmetric signal at g ~ 4.517 (θ = 90º), 21,36,37 as shown in Fig. 3 (b). The observed magnetic anisotropy of the Co 2+ signal is a clear indication that the as-synthesized NRs are single crystalline and well-aligned and that Co is highly diluted along the NRs. 36 Interestingly, the substitution of Co 2+ caused a drastic reduction of the CD signal (g ~1.956), as indicated by the dashed line (Fig. 3(b)), compared to that in the pure ZnO NRs (M0) (as shown in Fig. 3 (a)), as previously observed in similar samples. 36 In fact, this could suggest that a certain amount of the incorporated Co is involved in the CD neutralization. This neutralization could be due to substitutional doping, where a Zn atom is replaced by a Co atom (CoZn), or to interstitial doping, where a Co atom is incorporated into interstitial sites in the lattice (Coi). 21 As shown in Fig. 3(b), the intensity of the Co 2+ signal of M2 at θ = 90º and θ = 0º is significantly higher than that of M1. Moreover, the line width of the Co 2+ signal at θ = 0º for M2 is found to be slightly smaller (4 G) than that of M1 (5 G). As the Co concentration in the synthesis solution is the same (7 %) for both M1 and M2, and assuming uniform doping and the same coverage of the NRs, these results clearly show that the way of preparing the synthesis solution has a significant influence on the Co incorporation in the synthesized ZnO NRs, in agreement with the XRD results shown in Fig. 2. It should be noted that the hyperfine constant (the spacing between two hyperfine lines) is ~15.3 G in both samples, which is the same value as for bulk Co-doped ZnO. 36 Thus, we can deduce that the observed EPR signal comes from substitutional Co 2+ inside the NRs, and not from ions on the surface. However, this observation alone does not exclude the presence of some Co on the surface of the as-synthesized Co-doped ZnO NRs.
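For the reader's convenience (this check is ours and only restates the standard EPR resonance condition hν = gµB·B), the field positions expected for the g values quoted above at the X-band frequency of 9.38 GHz given in the Experimental section can be computed as follows.

h = 6.62607015e-34        # Planck constant, J s
mu_B = 9.2740100783e-24   # Bohr magneton, J/T
nu = 9.38e9               # microwave frequency used here, Hz (X-band)

def field_from_g(g):
    """Resonance field (tesla) expected for a given effective g factor: B = h*nu / (g*mu_B)."""
    return h * nu / (mu_B * g)

for g in (1.956, 2.239, 4.517):
    print(f"g = {g:5.3f}  ->  B ~ {1e3 * field_from_g(g):.0f} mT")
# g = 1.956 -> ~343 mT (the CD line near 350 mT), g = 2.239 -> ~299 mT, g = 4.517 -> ~148 mT.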
In the solution-based synthesis method, it is possible that Co 2+ can be incorporated in the core of ZnO nanostructures or can be adsorbed at their surface. 21 Furthermore, in order to get more information on the defects in the as-synthesized pure ZnO and Co-doped ZnO NRs, room-temperature CL measurements were carried out, and the results are shown in Fig. 4. The emission spectra of all samples were dominated by a UV emission peak centered at ~382 nm (3.24 eV) due to near-band-edge (NBE) emission and a strong broad yellow-orange emission centered at ~610 nm (2.03 eV) associated with deep-level defect-related emission in ZnO. [1][2][3][4][38][39][40] The CL spectra of Co-doped NRs exhibited a small red-shift of the UV peak position from 382 nm to 384 nm (as shown in the inset of Fig. 4) as compared to pure ZnO NRs, which is likely due to the change in the energy of the band structure as a result of doping. 22,41 It is important to note that the CL defect-related yellow-orange emission intensity decreases from M1 to M2 (Fig. 4) while the Co EPR signal increases from M1 to M2 (Fig. 3 (b)). This observation suggests that the way of preparing the synthesis solution has a significant influence on the Co incorporation and defect formation in the as-synthesized ZnO NRs, in agreement with the XRD results shown in Fig. 2.
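The photon energies given in parentheses follow from the usual conversion E [eV] ≈ 1239.84/λ [nm]; the two-line check below (ours) reproduces them, with small rounding differences from the quoted values to be expected.

for wavelength_nm in (382.0, 384.0, 610.0):
    print(f"{wavelength_nm:.0f} nm  ->  {1239.84 / wavelength_nm:.2f} eV")   # 3.25, 3.23, 2.03 eV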
The physical origin of the intrinsic-defect-related yellow-orange emission is controversial, and it has been proposed to be associated with Vo, Oi and Zni. 23,[38][39][40] Recently, it was proposed that the defect-related orange emission likely arises from Zni in the core of ZnO NRs. 39 In this study, we believe that the defect-related yellow-orange emission is likely to originate from Zni in the core of the ZnO NRs or from Vo and Oi on the surface of the ZnO NRs. As a consequence, the intensity of the defect-related yellow-orange emission is significantly enhanced by the Co doping (Fig. 4), which is probably due to the increase in Vo and Oi in the NRs or to Co-related defects. 15,23,41 Moreover, this suggests that the above-observed red-shift of the UV peak could be attributed to the variation of the Zni + concentration in the Co-doped samples (M1 and M2) compared with the pure ZnO NRs (M0). In fact, these results indicate that the bulk quality of the ZnO NRs is improved by the substitution of Co, while the doping has an adverse effect on the surface-defect-related emission, in agreement with previous results. 15,41 Fig. 4: CL spectra of the as-synthesized pure ZnO and Co-doped ZnO NRs synthesized using different synthesis preparation approaches as indicated. The inset shows the red-shift in the UV peak. For clarity, the spectra are normalized to the near-band-edge intensity.
In view of the EPR and CL results, a defect distribution model for ZnO NRs is shown in Fig. 5, 32 which proposes that the incorporation of Co during the synthesis process probably results in the occupation of Zni + sites through substitutional or interstitial doping and, subsequently, enhances the crystal quality. The other possibility is that a substitutional Co 2+ very close to a Zni + interstitial may form a non-magnetic complex, which is then no longer EPR detectable. Also, the incorporation of Co was found to lead to an increased concentration of surface defects such as VO and Oi. Further experimental studies combined with detailed theoretical calculations are necessary to fully understand the observed phenomena.
To elaborate more on the surface-related defect concentration, XPS spectra of all samples have been investigated. Figure 6 (a) shows the Zn 2p core level spectra of all samples, which are composed of two peaks centered at ~1022.2 and 1045.0 eV corresponding to the binding energy lines of Zn 2p3/2 and Zn 2p1/2, respectively, with a spin-orbit splitting of 23.1 eV, suggesting that Zn is present as Zn 2+ . 22 A Co signal in the ZnO NRs was not detected by XPS; this could be attributed to the surface sensitivity of the XPS technique with the Co 2+ located in the inner core of the ZnO NRs, as indicated in Fig. 5, and also to the low Co concentration, as suggested by the EPR measurements in Fig. 3 (b). The O1s core level peak for all samples exhibits an asymmetric profile, which can be decomposed into three Gaussian peaks, denoted OI, OII, and OIII, respectively, as shown in Fig. 6 (b). The OI peak at low binding energy at ~530.9 eV is attributed to the Zn-O bond within the ZnO crystal lattice.
The OII peak centered at ~532.2 eV is commonly assigned to oxygen deficiency in the ZnO crystal lattice. 16,42 Finally, the OIII peak centered at ~533.1 eV is related to oxygen-containing species adsorbed on the ZnO surface, e.g. H2O, O2. 16,42 Fig. 5: Schematic illustration of the cross-sectional view of the as-synthesized pure ZnO and Co-doped ZnO NRs containing Zni + as core defects and oxygen vacancies/interstitials as surface defects, respectively.
The relative concentration of oxygen vacancies is estimated from the OII/OI intensity ratio using the integrated XPS peak areas and the element sensitivity factors of O and Zn. 42 The OII/OI ratios were found to be 0.54, 0.52 and 0.49 for M0, M1, and M2, respectively, suggesting that M2 has a lower concentration of oxygen vacancies than M1 and M0. However, there is no obvious relationship between the defect compositions of the samples estimated from the CL and from the XPS measurements. For instance, M0 shows a lower CL defect emission intensity and a higher OII/OI ratio.
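A minimal sketch of this kind of estimate is given below (our illustration with invented fit parameters, not the actual fitted spectra): each O 1s component is modelled as a Gaussian, the component areas follow analytically from the fitted amplitudes and widths, and the OII/OI ratio is read off directly (for a ratio of two O 1s components the elemental sensitivity factor cancels out).

import math

def gaussian_area(amplitude, sigma):
    """Area under A*exp(-(x-x0)^2 / (2*sigma^2)) = A * sigma * sqrt(2*pi)."""
    return amplitude * sigma * math.sqrt(2.0 * math.pi)

# Hypothetical fit parameters (amplitude, center in eV, sigma in eV) for one sample;
# the centers match the component positions quoted in the text, the rest is illustrative.
components = {
    "OI":   (1.00, 530.9, 0.80),
    "OII":  (0.45, 532.2, 0.95),
    "OIII": (0.15, 533.1, 0.90),
}

areas = {name: gaussian_area(a, s) for name, (a, c, s) in components.items()}
print(f"OII/OI area ratio = {areas['OII'] / areas['OI']:.2f}")   # ~0.53 with these numbers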
4-Conclusions
The optical properties of ZnO NRs are commonly dominated by the presence of native intrinsic point defects, and identifying these defects is a difficult matter, especially in nanostructures, where the information on anisotropy is usually lost due to the lack of coherent orientation. Here, by studying well-oriented ZnO NRs at low temperature, we were able to access the magnetic anisotropy of these defects. Furthermore, by incorporating a relatively low amount of diluted Co inside ZnO NRs, the crystal structure of the as-synthesized well-oriented ZnO NRs is significantly improved. Pure ZnO and Co-doped ZnO NRs were synthesized by the low-temperature aqueous chemical method, where the crystal structure, orientation, and incorporation of the Co ions are tuned by the preparation procedure of the synthesis solution. The SEM and XRD measurements showed that the as-synthesized pure ZnO and Co-doped ZnO NRs are vertically aligned along the c-axis and have a wurtzite crystal structure of high quality, as demonstrated by the intensity of the (002) diffraction peak. Moreover, the (002) peak position was observed to be shifted to lower or higher 2θ angle depending on the synthesis solution mixing procedure used. This is probably attributed either to Co incorporation or to the variation of the defect concentration in the samples, e.g. vacancies and interstitials induced by Co doping. EPR measurements have confirmed the substitution of Co 2+ inside the ZnO NRs, giving a highly anisotropic magnetic Co 2+ signal characterized by eight lines, indicating that the as-synthesized NRs are single crystalline and well-aligned and that the Co is homogeneously distributed along the NRs. Also, the substitution of Co 2+ was observed to be accompanied by a drastic reduction in the CD signal (g ~ 1.956) found in pure ZnO NRs. As revealed by CL, the incorporation of Co causes a red shift in the UV peak position with an observed enhancement in the intensity of defect-related emission as compared to pure ZnO NRs. In view of the different results from these complementary measurements, we propose that the as-synthesized pure ZnO NRs likely contain Zn interstitials (Zni + ) as CDs and oxygen vacancies (VO) or interstitials (Oi) as surface defects. These results open the possibility of synthesizing ZnO-based DMSs of high crystalline quality using the low-temperature aqueous chemical method.
Fig. 2: XRD patterns of the as-synthesized pure ZnO (M0) and Co-doped ZnO NRs (M1 and M2).
Fig. 3: (a) EPR spectra showing the anisotropy of the CD signal in the pure ZnO sample (M0). (b) EPR spectra of Co-doped ZnO NRs (M1 and M2) for parallel (θ = 0º) and perpendicular (θ = 90º) orientation of the magnetic field, recorded at T = 5 K. The upper axis gives the corresponding g factor values.
Fig. 6: XPS core level spectra of the (a) Zn 2p peak and (b) O 1s peak of the as-synthesized pure and Co-doped ZnO NRs as indicated.
Acknowledgement:
This work was supported by the NATO project Science for Peace (SfP) 984735, Novel magnetic nanostructures. | 22,339 | [
"738839"
] | [
"199957"
] |
01767018 | en | [
"sdv",
"sdu"
] | 2024/03/05 22:32:15 | 2016 | https://hal.science/hal-01767018/file/Croitor%26Cojocaru2016_author.pdf | Roman Croitor
email: [email protected]
Ion Cojocaru
An Antlered Skull of a Subfossil Red Deer, Cervus elaphus L., 1758 (Mammalia: Cervidae), from Eastern Romania
Keywords: Carpathian red deer, Cervus elaphus maral, morphology, systematics, taxonomy, Romania 1898: p. 79
A subfossil antlered braincase of red deer discovered in the Holocene gravel deposits of Eastern Romania is described. The morphology of antlers suggests that the studied specimen is related to the Caucasian and Caspian stags and belongs to the oriental subspecies Cervus elaphus maral OGILBY, 1840. An overview and discussion of taxonomical issues regarding modern red deer from South-eastern Europe and some fossil forms of the region are proposed. The so-called Pannonian red deer (Cervus elaphus pannoniensis BANWELL, 1997) is considered a junior synonym of Cervus elaphus maral OGILBY, 1840. Cervus elaphus aretinus AZZAROLI, 1961 from the last interglacial stage of Italy seems to be very close to Cervus elaphus maral.
Introduction
The subspecies status and systematic position of the red deer from the Carpathian Mts. is still a matter of discussions. The comparatively larger Carpathian red deer has massive antlers with less developed crown tines as compared to the red deer subspecies from Western Europe. It was assigned to two subspecies, C. vulgaris montanus BOTEZAT, 1903 (the "mountain common deer") and C. vulgaris campestris BOTEZAT, 1903 (the "lowland common deer"). [START_REF] Botezat E | Gestaltung und Klassifikation der Geweihe des Edelhirsches, nebst einem Anhange über die Stärke der Karpathenhirsche und die zwei Rassen derselben[END_REF] proposed for red deer species the name Cervus vulgaris, since, according to his opinion, the Linnaean Greek-Latin name Cervus elaphus is tautological. [START_REF] Lydekker | Artiodactyla, Families Cervidae (Deer), Tragulidae (Chevrotains), Camelidae (Camels and Llamas), Suidae (Pigs and Peccaries), and Hippopotamidae (Hippopotamuses)[END_REF] and [START_REF] Grubb P | Valid and invalid nomenclature of living and fossil deer, Cervidae[END_REF] considered the name C. vulgaris as a junior synonym of C. elaphus. LYDEKKER (1898) included the Eastern Carpathians in the geographical range of the Caspian red deer Cervus elaphus maral OGILBY. Nonetheless, in his later publication, [START_REF] Lydekker | Artiodactyla, Families Cervidae (Deer), Tragulidae (Chevrotains), Camelidae (Camels and Llamas), Suidae (Pigs and Peccaries), and Hippopotamidae (Hippopotamuses)[END_REF] generally accepted BOTEZAT's viewpoint on the taxonomical distinctiveness between the two Carpathian forms of red deer. However, LYDEKKER (1915) indicated that C. vulgaris campestris is preoccupied since it has been used as Cervus campestris CUVIER, 1817 (a junior synonym of Odocoileus virginianus). Therefore, [START_REF] Lydekker | Artiodactyla, Families Cervidae (Deer), Tragulidae (Chevrotains), Camelidae (Camels and Llamas), Suidae (Pigs and Peccaries), and Hippopotamidae (Hippopotamuses)[END_REF] considered the red deer from the typical locality Marmoros and Bukovina districts of the Hungarian and Galician Carpathians as Cervus elaphus ssp. According to [START_REF] Lydekker | Artiodactyla, Families Cervidae (Deer), Tragulidae (Chevrotains), Camelidae (Camels and Llamas), Suidae (Pigs and Peccaries), and Hippopotamidae (Hippopotamuses)[END_REF] this deer may be to some degree intermediate between Cervus elaphus germanicus from Central Europe and Cervus elaphus maral from Northern Iran and Caucasus. With some doubts, [START_REF] Lydekker | Artiodactyla, Families Cervidae (Deer), Tragulidae (Chevrotains), Camelidae (Camels and Llamas), Suidae (Pigs and Peccaries), and Hippopotamidae (Hippopotamuses)[END_REF] included C. vulgaris montanus in the synonymy of Cervus elaphus maral and suggested that both Carpathian red deer forms described by BOTEZAT may represent recently immigrated dwarfed forms of C. elaphus maral. [START_REF] Heptner | Deer of the USSR (Systematics and Zoogeography). -Transactions on Study of Fauna and Flora of the USSR[END_REF] also rejected BOTEZAT's subspecies name campestris as preoccupied; however, they recognized the validity of Cervus elaphus montanus BOTEZAT with type locality in Bukovina (Romania) and the vast area of distribution that included the entire Carpathian-Balkan region. 
This subspecies is characterised by underdeveloped neck mane, the missing black stripe bordering the rump patch (or caudal disk), generally grayish colour of pelage, poorly developed distal crown in antlers, and comparatively larger body size [START_REF] Heptner | Deer of the USSR (Systematics and Zoogeography). -Transactions on Study of Fauna and Flora of the USSR[END_REF]. [START_REF] Flerov | Musk deer and deer. The Fauna of USSR[END_REF] and [START_REF] Sokolov | Hoofed animals (Orders Perissodactyla and Artiodactyla). Fauna of the USSR[END_REF] placed the Carpathian red deer in the nominotypical subspecies Cervus elaphus elaphus Linnaeus since the diagnostic characters of antler morphology, pelage colour as well as body size used for the description of the Carpathian red deer are not constant characters and, therefore, are not suitable for subspecies designation. According to [START_REF] Flerov | Musk deer and deer. The Fauna of USSR[END_REF], the morphological peculiarities of the Carpathian and Crimean red deer are insignificant and do not permit to place those populations in any separate subspecies. ALMAŞAN et al. (1977) referred the Carpathian red deer to the Central European subspecies Cervus elaphus hippelaphus ERXLEBEN, 1777. According to [START_REF] Danilkin | Deer (Cervidae). (Series: Mammals of Russia and adjacent regions)[END_REF], the "Carpathian race" montanus is a transitional form between the Western European C. elaphus elaphus and the Caucasian C. elaphus maral. [START_REF] Tatarinov | Mammals of Western Regions of Ukraine[END_REF] applied a new subspecies name Cervus elaphus carpathicus for the red deer from the Ukrainian part of the Carpathian Mts. [START_REF] Heptner | Artiodactyla and Perissodactyla[END_REF] regarded TATARINOV's subspecies as a junior synonym of campestris and montanus and considered it as a nomen nudum. [START_REF] Grubb P | Valid and invalid nomenclature of living and fossil deer, Cervidae[END_REF] considered C. vulgaris campestris BOTEZAT and C. vulgaris montanus BOTEZAT as homonyms of Cervus campestris CUVIER, 1817 and Cervus montanus CATON, 1881, respectively, and, therefore, both names were suggested to be invalid. [START_REF] Banwell | The Pannonians -Cervus elaphus pannoniensis -a race apart[END_REF] proposed another new subspecies name, Cervus elaphus pannoniensis, for red deer from Hungary, Romania and the Balkan Peninsula. [START_REF] Banwell | The Pannonians -Cervus elaphus pannoniensis -a race apart[END_REF][START_REF] Banwell | Identification of the Pannonian, or Danubian, Red Deer[END_REF] described a set of specific morphological characters that distinguish the so-called "maraloid" Pannonian red deer from the Western European red deer. However, BANWELL did not provide the diagnostic characters distinguishing Cervus elaphus pannoniensis from Cervus elaphus maral. Nonetheless, BANWELL 's subspecies C. elaphus pannoniensis was accepted by several authors (GROVES & GRUBB 2011; MARKOV 2014) and even its taxonomic status was raised to the species level [START_REF] Groves C | Ungulate Taxonomy[END_REF]. [START_REF] Zachos | Species inflation and taxonomic artefacts -A critical comment on recent trends in mammalian classification[END_REF] regard the full-species status for the Pannonian red deer as an objectionable "taxonomic inflation". [START_REF] Geist | Deer of the World: Their Evolution, Behaviour and Ecology[END_REF], in his comprehensive publication on evolution, biology and systematics of red deer and wapiti (C. 
elaphus canadensis ERXLEBEN, 1777, or Cervus canadensis according to the latest genetic studies, see e.g. [START_REF] Polziehn | A Phylogenetic Comparison of Red Deer and Wapiti Using Mitochondrial DNA[END_REF], did not indicate explicitly the systematical position of the Carpathian red deer. However, he supported BOTEZAT's idea on the presence of two forms of red deer in the Carpathian region. According to [START_REF] Geist | Deer of the World: Their Evolution, Behaviour and Ecology[END_REF] 1777) who first applied this name for the red deer from Germany and Ardennes and gave its scientific description supplemented with synonymy and detailed bibliographic references. Later, KERR (1792) applied the species and subspecies name Cervus elaphus hippelaphus ("maned stag") with a reference to ERXLEBEN's (1777) work.].
The recently published results on genetic analysis of red deer populations from Western Eurasia bring new views on systematic position and taxonomical status of red deer from the Carpathian region. According to [START_REF] Ludt Ch | Mitochondrial DNA phylogeography of red deer (Cervus elaphus)[END_REF], the analysis of mtDNA cytochrome b sequence could not distinguish the red deer from the Balkan-Carpathian region from the red deer forms of Central and Western Europe. However, the study of [START_REF] Ludt Ch | Mitochondrial DNA phylogeography of red deer (Cervus elaphus)[END_REF] confirmed the subspecies status of C. elaphus barbarus from North Africa, C. elaphus maral from the Caspian Region, and C. elaphus bactrianus and C. elaphus yarkandensis from Central Asia. All the mentioned subspecies and forms of red deer are included in the so-called Western group of red deer. KUZNETZOVA et al. (2007) confirmed that the molecular-genetic analysis of red deer from Eastern Europe did not support the validity of red deer subspecies C. elaphus montanus from the Balkan-Carpathian area and C. elaphus brauneri from Crimea as well as C. elaphus maral from North Caucasus. The genetic integrity of the Carpathian populations of red deer was confirmed through the haplotype distribution, private alleles and genetic distances [START_REF] Feulner | Mitochondrial DNA and microsatellite analyses of the genetic status of the presumed subspecies Cervus elaphus montanus (Carpathian red deer)[END_REF]. Therefore, the complicated ancestral pattern for Carpathian red deer suggested by [START_REF] Geist | Deer of the World: Their Evolution, Behaviour and Ecology[END_REF] was not supported. [START_REF] Skog | Phylogeography of red deer (Cervus elaphus) in Europe[END_REF] and [START_REF] Zachos | Phylogeography, population genetics and conservation of the European red deer Cervus elaphus[END_REF] suggested that the modern Carpathian red deer had originated from the Balkan Late Glacial refugium. [START_REF] Skog | Phylogeography of red deer (Cervus elaphus) in Europe[END_REF] also assumed that the Balkan Late Glacial refugium could extend further to the south-east (Turkey and Middle East). [START_REF] Sommer | Late Quaternary distribution dynamics and phylogeography of the red deer (Cervus elaphus) in Europe[END_REF] regarded Moldova (East Carpathian foothills) as a part of the East European Late Glacial refugium.
However, a certain caution is needed with the results of the genetic analysis. [START_REF] Micu | Ungulates and their management in Romania[END_REF] reported that the Austrian red deer with multi-tine crowns were introduced to Romania in the 19th and early 20th centuries in order to "improve" the quality of antlers of the local red deer race. Therefore, although the level of genetic introgression may be low, the modern populations of Carpathian red deer are not truly natural anymore (ZACHOS & HARTL 2011).
The taxonomic status and systematic position of the Carpathian red deer is complicated further by the fact that the previously published data on morphology of Cervus elaphus from the Carpathian Region are poor and quite superficial (ALMAŞAN et al. 1977;[START_REF] Saraiman | Prezenţa speciilor Bos primigenius Boj. şi Cervus elaphus L., în terasa de 8-10 m a Siretului. -Analele ştiinţifice ale Universităţii[END_REF].
In the context of the above-mentioned controversies, the new subfossil material of red deer from the Carpathian Region represents a special interest and may elucidate the systematic position of the aboriginal red deer forms. In the present work, we propose a morphological description of the well preserved antlered braincase from Holocene gravel deposits in Eastern Romania and a discussion on the systematic position of the original red deer from the Eastern Carpathian area.
Material and Methods
The studied specimen represents an antlered braincase with almost complete left antler and proximal part of the right antler. The specimen was discovered in a gravel pit located in the area of Răchiteni Village, Iasi County, north of the Roman town (Fig. 1). Most likely, the gravel deposits from Răchiteni are of Post-Glacial (Holocene) age (Paul TIBULEAC, personal communication). The cranial measurements are taken according to von den DRIESCH (1976). The antler measurements are taken following [START_REF] Heintz E | Les cervidés villafranchiens de France et d'Espagne. -Mémoires du Muséum[END_REF]. The terminology of antler morphology is according LISTER (1996).
Results
Systematics
Genus Cervus LINNAEUS, 1758
Cervus elaphus LINNAEUS, 1758
Cervus elaphus maral OGILBY, 1840
Description
The antlered skull of red deer from Răchiteni belongs to a mature but not old male individual: its pedicles are rather short and robust (their height is significantly smaller than their diameter; Table 1, Fig. 2), the bone sutures of neurocranium are still visible but in some places (the area between pedicles) are completely obliterated and, therefore, indicate the fully mature age (MYSTKOWSKA 1966). We assume, therefore, that the antlers of the red deer from Răchiteni most probably attained their maximal development.
The cranial measurements of the specimen suggest that the individual from Răchiteni was rather large, exceeding the body size of modern red deer from Bialowieza Forest and the Caucasus. The greatest breadth of the skull across the orbits in males of Cervus elaphus hippelaphus from Bialowieza Forest (three individuals) ranges 165-181 mm; the breadth of the occipital condyles ranges 72-76 mm [START_REF] Heptner | Deer of the USSR (Systematics and Zoogeography). -Transactions on Study of Fauna and Flora of the USSR[END_REF]. The analogous measurements of males of Cervus elaphus maral from the Caucasus (nine individuals) range from 145 mm to 187 mm and from 67 mm to 80 mm, respectively [START_REF] Heptner | Deer of the USSR (Systematics and Zoogeography). -Transactions on Study of Fauna and Flora of the USSR[END_REF]. The corresponding measurements of the skull from Răchiteni exceeded those of the largest Caucasian stag reported by HEPTNER & ZALKIN (1947) by ca. 1 cm (the greatest breadth across the orbits and the breadth of the occipital condyles were 198.0 mm and 87.8 mm, respectively).
The antlers from Răchiteni were characterized by a comparatively long curved brow (first) tine situated at a short distance from the burr, a missing bez (second) tine, and a rather long and strong trez (third) tine, which is, however, shorter than the brow tine (Table 2). The antler beam was somewhat bent toward the posterior at the level of the trez tine insertion and then, slightly arched, acquired an upright orientation in lateral view. The distal portion of the antler formed a crown that consisted of six tines (Fig. 3). Therefore, the total number of antler tines amounted to eight. The crown of the antler was formed by two transversely oriented forks, an additional prong and the apical tine (broken). The antler beam was curved towards the posterior in the area of the distal crown and formed the pointed posterior axis of the crown, recalling the morphological pattern typical of the Caucasian and Caspian red deer C. elaphus maral (LYDEKKER 1915: 127, fig. 23). The antler surface was covered with a characteristic "pearling" specific for the so-called Western group of red deer [START_REF] Geist | Deer of the World: Their Evolution, Behaviour and Ecology[END_REF].
Discussion
According to LYDEKKER (1898), the number of tines of Cervus elaphus maral seldom exceeded eight. GEIST (1998) described the antlers of Carpathian stags as large, heavy but poorly branched as compared to Western European red deer. LYDEKKER (1898) reported also a frequent poor development of bez tine in Cervus elaphus maral. According to LYDEKKER (1898), the bez tine was often much shorter than brow tine or even might be absent in the Carpathian red deer, as could be seen in the case of the specimen from Răchiteni.
The antlers of red deer from Prăjeşti (Siret Valley) described by [START_REF] Saraiman | Prezenţa speciilor Bos primigenius Boj. şi Cervus elaphus L., în terasa de 8-10 m a Siretului. -Analele ştiinţifice ale Universităţii[END_REF] also show a rather weak bez tine, which is less developed than the brow tine and much shorter than the trez tine. The distal crown in the two better preserved larger antlers from Prăjeşti (SARAIMAN & ŢARĂLUNGĂ 1978: Pl. V, figs. 1, 2) is rather weak. It consists of four tines, of which the first crown tine is quite distinct within the crown, as in modern Caspian deer (see the description in LYDEKKER 1898). Therefore, the crown shape of the red deer from Prăjeşti resembles the typical morphological condition seen in the Caucasian and Caspian red deer. The remains of red deer from Prăjeşti have been found together with a fragment of a skull of Bos primigenius. [START_REF] Saraiman | Prezenţa speciilor Bos primigenius Boj. şi Cervus elaphus L., în terasa de 8-10 m a Siretului. -Analele ştiinţifice ale Universităţii[END_REF] have suggested a Würmian age for the osteological remains from Prăjeşti. [START_REF] Spassov | The Remains of Wild and Domestic Animals from the Late Chalcolithic Tell Settlement of Hotnitsa (Northern Bulgaria)[END_REF] described, from the Late Chalcolithic (4100-4500 BC) of North Bulgaria, remains of a very large form of red deer that rivalled the size of the Siberian maral Cervus canadensis. Besides the larger size, the subfossil red deer from Bulgaria was characterised by massive antler beams, a simplified antler crown and a relatively limited number of tines. This brief description generally corresponds to the characteristics of the Caucasian and Caspian red deer Cervus elaphus maral LYDEKKER, 1898, and suggests its close resemblance to the Romanian subfossil red deer. The larger size of the subfossil red deer, as compared to the modern forms from the same area, is explained by the long tradition of trophy hunting that has likely led to dwarfing of the populations of game species [START_REF] Spassov | The Remains of Wild and Domestic Animals from the Late Chalcolithic Tell Settlement of Hotnitsa (Northern Bulgaria)[END_REF].
Understanding the significance of the observed peculiarities of antler morphology of fossil and subfossil red deer from Eastern Romania and neighbouring countries, and their resemblance to the Caucasian and Caspian modern red deer, requires a discussion of already described taxa of red deer from Southeastern Europe. A conspicuously weak bez tine may be also noticed in the modern Crimean deer, which is often regarded as a true subspecies: Cervus elaphus brauneri [START_REF] Charlemagne N | Les Mammiferes de l'Oukraine. Court manuel de determination, collection et observation des mammiferes de l'Oukraine[END_REF]. DANILKIN (1999: fig. 122-2) presented the antlered skull of Crimean red deer from the collection of the Zoological Museum of the Moscow State University that shows a very weak bez tine on the left antler and a missing bez tine on the right antler, while its distal crown reminds the morphological condition of Cervus elaphus maral.
The origin of the modern Crimean population is not clear and its taxonomic status is controversial. FLEROV (1952: 162) placed the Crimean stag in an informal group together with the Balkan and Carpathian red deer within the European subspecies Cervus elaphus elaphus, since, according to his opinion, the morphological peculiarities of the above mentioned populations are not taxonomically significant. SOKOLOV (1959: 219) also considered that the separation of the Crimean subspecies brauneri is not justified. Nonetheless, HEPTNER et al. (1988) believed that the Crimean deer represented a taxonomically independent form that occupied an intermediate position between the Carpathian and Caucasian red deer. [START_REF] Danilkin | Deer (Cervidae). (Series: Mammals of Russia and adjacent regions)[END_REF] regarded the Crimean population of red deer as a small-sized "insular" form of North-Caucasian red deer that was introduced in Crimea in the early 20th century. Finally, VOLOKH (2012) reported multiple and uncontrolled introductions of red deer individuals to Crimea at least from the times of the Crimean Khanate until very recent times. Therefore, the debates on the taxonomical status of the modern Crimean red deer become useless. [START_REF] Ludt Ch | Mitochondrial DNA phylogeography of red deer (Cervus elaphus)[END_REF] discovered that the modern red deer from Crimea belongs to the haplogroup of Western European red deer. However, this conclusion was based only on two modern specimens from Crimea. Obviously, the adequate results of genetic analysis could be obtained only from subfossil and archaeozoological remains. [START_REF] Stankovic | First ancient DNA sequences of the Late Pleistocene red deer (Cervus elaphus) from the Crimea, Ukraine[END_REF] analysed the ancient DNA sequences of Late Pleistocene red deer from Crimea and revealed a very interesting fact: the Crimean Peninsula was colonized several times by various forms of red deer of different zoogeographic origin: the youngest form of red deer from Crimea (two specimens dated 33.100 ± 400 BP and 42.000 ± 1200 BP) are genetically close to C. elaphus songaricus from China, while the older specimen (>47,000 BP) is close to the Balkan populations of red deer. The origin of indigenous Holocene Crimean population of red deer still remains unclear. It is necessary to mention that the subfossil red deer from Crimea (early Iron Age, settlement of Uch-Bash, Sevastopol) is characterised by a peculiar high frequency of primitive unmolarised lower fourth premolar (P4), which distinguishes this population from Cervus elaphus of Western Europe (CROITOR, 2012).
The recently established new subspecies
Cervus elaphus pannoniensis BANWELL, 1997 from the Middle Danube area also requires a special discussion here. Although [START_REF] Banwell | The Pannonians -Cervus elaphus pannoniensis -a race apart[END_REF][START_REF] Banwell | Identification of the Pannonian, or Danubian, Red Deer[END_REF] had the opportunity to see the red deer from Anatolia and the Balkan Peninsula, the description of his new subspecies was based only on morphological differences between the so-called Pannonian red deer and Western European ("Atlantic") Cervus elaphus hippelaphus, while a differential diagnosis between Cervus elaphus pannoniensis and Cervus elaphus maral and a comparison of these two subspecies were not provided. The antlered skull from Southern Hungary (displayed in the Chateau Chambord) presented by [START_REF] Banwell | The Pannonians -Cervus elaphus pannoniensis -a race apart[END_REF], should be considered as a type specimen (lectotype according to GROVES & GRUBB 2011). Its extremely large antlers bear additional long tines on its beams and crowns, well-developed both brow and bez tines and apparently represent an exceptional hunter's trophy specimen. BANWELL (1998) provides a good and very detailed morphological description of the Pannonian red deer, which are distinguished from the Western European forms, according to the description, by the larger size and elongated Romannosed face (obviously, these two characters are correlated), poorly developed mane, underdeveloped caudal disk, large antlers with poorly developed distal crown. Finally, as BANWELL (1997,1998,2002) reasonably noticed, the Pannonian red deer belongs to the Oriental "maraloid" type. The area of distribution of the new Pannonian subspecies includes, according to BANWELL (1998), Hungary, Romania, the Western Balkan states, Bulgaria, and may range until Crimea, Eastern Turkey and Iran. One can notice that the assumed area of distribution of BANWELL's subspecies broadly overlaps with the known area of distribution of Cervus elaphus maral. Although [START_REF] Groves C | Ungulate Taxonomy[END_REF] affirm that BANWELL has provided a set of characters (colour, spotting, mane and antlers) distinguishing Cervus elaphus pannoniensis from Cervus elaphus maral, such data are not available. The latter subspecies was ignored in BANWELL's (1994,1997,1998,2002) publications. Therefore, taking in consideration the absence of distinguishing diagnostic characters and the overlapping of claimed areas of distribution, we regard Cervus elaphus pannoniensis BANWELL as a junior synonym of Cervus elaphus maral OGILBY.
Most probably, the studied fossil and sub-fossil Carpathian red deer are also closely related to Cervus elaphus aretinus AZZAROLI, 1961 from the last interglacial phase of Val di Chiana (Central Italy). The Italian fossil red deer is characterised by the presence of only one basal tine (the brow tine) and a massive distal crown, which, however, still resembles the maral type (Fig. 4). It is necessary to mention here the development of slight distal palmation in the so-called Pannonian red deer observed by [START_REF] Banwell | Identification of the Pannonian, or Danubian, Red Deer[END_REF][START_REF] Banwell | In defence of the Pannonian Cervus elaphus pannoniensis[END_REF]; in our opinion, this also makes it similar to Cervus elaphus aretinus. One of the authors of the present study [START_REF] Croitor R | Functional morphology of small-sized deer from the early and middle Pleistocene of Italy: implication to the paleolandscape reconstruction[END_REF][START_REF] Croitor R | Early Pleistocene small-sized deer of Europe[END_REF] assumed in his previous publications that Cervus elaphus aretinus (or Cervus aretinus) represents a local archaic specialized form. However, the morphological resemblance between the fossil form Cervus elaphus aretinus and the modern Cervus elaphus maral is, in our opinion, obvious, and one cannot exclude that those two subspecies could even be synonymous. Another antlered skull fragment that strongly recalls the morphology of Cervus elaphus maral is reported from the Late Pleistocene of Liguria (Le Prince, Italy; BARRAL & SIMONE 1968: 87, Figs. 14-1).
Apparently, the origin of the indigenous Carpathian red deer is linked to the Balkan-Anatolian-Caucasian glacial refugium [START_REF] Sommer | Late Quaternary distribution dynamics and phylogeography of the red deer (Cervus elaphus) in Europe[END_REF][START_REF] Skog | Phylogeography of red deer (Cervus elaphus) in Europe[END_REF][START_REF] Meiri | Late-glacial recolonization and phylogeography of European red deer[END_REF]. The Italian Cervus elaphus aretinus could also be very close to the red deer form from the glacial refugium in Eastern Europe. The placement of the postglacial Carpathian red deer in the subspecies Cervus elaphus maral is, in our opinion, supported by the antler morphology reported in the present study. Nonetheless, the history of the red deer from the Carpathian-Balkan area and the adjacent regions requires more complex and extensive interdisciplinary research combining zoological, archaeozoological, palaeontological and genetic data in the future.
Fig. 1. Geographical location of the Răchiteni site, Iaşi County, Romania
Fig. 2. Cervus elaphus maral OGILBY from Răchiteni: A, lateral view of the braincase; B, occipital view of the braincase; C, basal view of the braincase.
Fig. 3. Cervus elaphus maral OGILBY from Răchiteni: A, frontal view; B, lateral view; C, medial view of the left antler.
Fig. 4. Cervus elaphus aretinus AZZAROLI from the last interglacial phase of Val di Chiana, Italy (adapted from AZZAROLI 1961): A, frontal view of the antlered frontlet; B, lateral view of the antler crown.
, European west (C. elaphus elaphus) and east (C. elaphus maral) types of red deer meet in the Balkans. Within this context, GEIST (1998) also discussed the so-called "cave stag", Strongyloceros spelaeus OWEN, 1846 from Western Europe, a Glacial Age wapiti that rivalled the size of the giant deer Megaloceros giganteus Blumenbach, 1799. GEIST (1998), taking into consideration PHILIPOWICZ's (1961) description of the Carpathian red deer, presumed that the largest European red deer with somewhat simplified smooth antlers (not pearled as in West European red deer) from the Carpathian Alpine meadows is a descendant of the giant Glacial Age wapiti. Later, [START_REF] Geist V | Defining subspecies, invalid taxonomic tools, and the fate of the woodland caribou[END_REF] placed the Carpathian red deer in the Central European subspecies Cervus elaphus hippelaphus KERR, 1792 [Sic! The authorship of the subspecies Cervus elaphus hippelaphus belongs to ERXLEBEN (
Table 1. Measurements of the skull of Cervus elaphus maral OGILBY from Răchiteni (measurements are numbered according to von den Driesch 1976: fig. 11).
Measurements mm notes
dorsal view
(10) Median frontal length 198.0 incompletely preserved
(11) Lambda -Nasion 152.0 incompletely preserved
(31) Least frontal breadth 178.0 orbits incompletely preserved
(32) Greatest breadth across the orbits 198.0 orbits incompletely preserved
(41) distal circumference of the burr 211.0 in both antlers
Distance between antler burrs 79.8
Distance between pedicles and nucal crest 113.0
lateral view
(38) basion -the highest point of the superior nuchal crest 97.0
(40) proximal circumference of the burr 190.0
basal view
(6) basicranial axis 130.0
basicranium length 91.0 taken from the visible suture to the posterior edge
(26) Greatest breadth of occipital condyles 87.8
(28) Greatest breadth of the foramen magnum 35.4
(27) Greatest breadth at the bases of the paraoccipital processes 158.0 incompletely preserved
Table 2. Measurements of the antlers of Cervus elaphus maral OGILBY from Răchiteni.
Acknowledgements:
We thank Adrian LISTER, Nikolai SPASSOV and Stefano MATTIOLI for their kindness while providing missing bibliographical sources used in this research. | 30,594 | [
"1031012"
] | [
"208163",
"532670"
] |
00475834 | en | [
"phys"
] | 2024/03/05 22:32:15 | 2010 | https://hal.science/hal-00475834/file/Cu-Co%20CHATAIN.pdf | I Egry
D M Herlach
L Ratke
M Kolbe
D Chatain
S Curiotto
L Battezzati
E Johnson
N Pryds
Interfacial properties of immiscible Co-Cu alloys
Keywords: miscibility gap, interfacial tension, surface tension, levitation, oscillating drop, microgravity
Using electromagnetic levitation under microgravity conditions, the interfacial properties of an Cu 75 Co 25 alloy have been investigated in the liquid phase. This alloy exhibits a metastable liquid miscibility gap and can be prepared and levitated in a configuration consisting of a liquid cobalt-rich core surrounded by a liquid copper-rich shell. Exciting drop oscillations and analysing the frequency spectrum, both surface and (liquid-liquid) interfacial tension can be derived from the observed oscillation frequencies. This paper briefly reviews the theoretical background and reports on a recent experiment carried out on board the TEXUS 44 sounding rocket.
Introduction
Alloys with a metastable miscibility gap are fascinating systems due to the interplay between phase separation and solidification. In contrast to systems with a stable miscibility gap, the demixed microstructure can be frozen in by rapid solidification from the undercooled melt. Electromagnetic levitation offers the possibility to study compound drops consisting of a liquid core, encapsulated by a second liquid phase. The oscillation spectrum of such a compound drop contains information about both, the surface and the interface tension. The binary monotectic alloy CuCo is an ideal model system for such investigations. Its phase diagram is shown in Figure 1.
In order to study this system, including potential industrial applications, the European Space Agency ESA funded a European project, COOLCOP [1]. In the past years, this team devoted a lot of effort to understand the behaviour of such systems, starting from phase diagram calculations [2], drop dynamics [3], modelling of interfacial properties [4], and extending to solidification theories and experiments [5]. The investigations laid the ground for microgravity experiments. First results for a Co 25 Cu 75 alloy onboard a sounding rocket are reported here.
As the temperature of a homogeneous melt of the alloy is lowered below the binodal temperature, demixing sets in and small droplets of one liquid, L1, in the matrix of the other liquid, L2, are formed. These two immiscible liquids do not consist of the pure components, but have concentrations according to the phase boundary of the miscibility gap; therefore, L1 is rich in component 1, while L2 is rich in component 2. Initially, depending on the nucleation kinetics, a large number of liquid droplets is created. This initial phase is energetically very unfavourable, due to the high interface area created between the different drops. In the next stage, Ostwald ripening sets in [START_REF] Ratke | Immiscible Liquid Metals and Organics[END_REF]. This diffusive mechanism leads to the growth of large drops at the expense of the small ones, thereby coarsening the structure of the dispersion and finally leading to two separated liquid phases. For a levitated drop, without contact to a substrate, the liquid with the lower surface tension, in the present case the copper-rich liquid, encapsulates the liquid (cobalt-rich) core. Terrestrial levitation experiments suffer from the detrimental side effects of the levitation field, in particular by electromagnetic stirring effects which destroy the separated two-phase configuration. Therefore, it was decided to perform such an experiment under microgravity conditions on board a TEXUS sounding rocket. As will be discussed below, the drawback of this carrier is the short available experiment time of about 160 s. Due to a specific preparation of the sample, it was nevertheless possible to conduct three melting cycles during this short time.
Drop Dynamics
Generally speaking, the interfacial tension between two liquids is difficult to measure, and only few data exist [START_REF] Merkwitz | [END_REF]. The oscillating drop technique [8] is a non-contact measurement technique for surface tension measurements of levitated liquid drops. In its original form, it assumes a homogeneous non-viscous drop, free of external forces. In this ideal case, the frequency of surface oscillations is simply related to the surface tension σ₀ by Rayleigh's formula [9]:
ω₀² = 8σ₀/(ρ₀R₀³) (1)
where ρ₀ is the density of the drop and R₀ its radius. By substituting ρ₀R₀³ = 3M/(4π), the apparent density dependence of the frequency disappears, which makes this equation particularly easy to use.
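As an illustration, the following Python sketch evaluates this density-free form of eqn (1); the sample mass and frequency used in the example are the values quoted later in the text, and the sketch is not part of the original data-evaluation chain.

```python
import math

def rayleigh_surface_tension(nu0_hz, mass_kg):
    """Surface tension from the Rayleigh frequency of a levitated drop.

    Uses omega_0^2 = 8*sigma_0/(rho_0*R_0^3) together with
    rho_0*R_0^3 = 3*M/(4*pi), so the density cancels:
        sigma_0 = 3*M*omega_0^2/(32*pi)
    """
    omega0 = 2.0 * math.pi * nu0_hz
    return 3.0 * mass_kg * omega0 ** 2 / (32.0 * math.pi)

# Example with the flight-sample values quoted later in the paper.
print(rayleigh_surface_tension(28.0, 1.31e-3))  # ~1.21 N/m
```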
The oscillating drop technique can be extended to the measurement of the interfacial tension between two immiscible liquids [10]. Following Saffren et al. [START_REF] Saffren | Proceedings of the 2 nd International Colloquium on Drops and Bubbles[END_REF], the theory was worked out for force-free, concentric spherical drops. The geometry considered is summarized in Figure 2. Due to the presence of the interface between liquids L1 and L2, this system possesses two fundamental frequencies, driven by the surface tension σ₀ and the interfacial tension σ₁₂.
Adopting the nomenclature of ref [START_REF] Saffren | Proceedings of the 2 nd International Colloquium on Drops and Bubbles[END_REF], the normal mode frequencies ω± of a concentric, force-free, inviscid compound drop read:
ω±² = W K±/J (2)
K± and J are dimensionless, while W is a frequency squared. W/J is given by:
W/J = (ω₀² τ⁸ σ / 2) [1 + (Δρᵢ τ¹⁰/3)(1 + Δρᵢ)] (3)
Here, a number of symbols has been introduced which are defined as follows: ω₀ is the unperturbed Rayleigh frequency (eqn (1)) of a simple drop with density ρ₀, radius R₀ and surface tension σ₀ (see also Figure 2 for the definition of the symbols).
τ = (R₀/Rᵢ)^(1/2) (4)
is the square root of the ratio between outer and inner radius,
σ = (σ₀/σ₁₂)^(1/2) (5)
is the square root of the ratio of the surface tension and the interface tension, and
Δρᵢ = (3/5)(ρᵢ − ρ₀)/ρ₀ (6)
is the weighted relative density difference between liquid L1 and liquid L2. It remains to write down the expression for K±. It is given by:
K± = (1/2)[m₀στ³ + mᵢ/(στ³)] ± {(1/4)[m₀στ³ − mᵢ/(στ³)]² + 1}^(1/2) (7)
where two additional symbols have been introduced, namely:
m₀ = (3τ⁵ + 2τ⁻⁵)/5 (8)
and
mᵢ = (1 + Δρᵢ)τ⁵ − Δρᵢ τ⁻⁵ (9)
For large σ and small Δρᵢ, approximate equations can be derived for the two frequencies ω₊ and ω₋:
ω₊² = ω₀² [1 + 1/(σ²τ⁴)] (10)
ω₋² = (3/5) ω₀² (τ⁶/σ²)(1 − τ⁻¹⁰) (11)
From an experimental point of view it is interesting to discuss the frequencies as a function of the initial, homogeneous composition of the drop. To this end we introduce the relative mass fraction of component 2, i.e. the component with the lower surface tension, which will eventually constitute the outer liquid shell. It is given by:
m_rel = m_shell/m = [1 + (ρᵢ/ρ₀) Rᵢ³/(R₀³ − Rᵢ³)]⁻¹ (12)
In Figure 3, the frequency spectrum is shown as a function of m rel for parameters corresponding to the Cu-Co system.
Figure 3. The normalized normal mode frequencies ω±/ω₀ as a function of the relative mass fraction m_rel. For the figure, the following parameters were chosen: σ₀ = 1.3 N/m, σ₁₂ = 0.5 N/m, ρ₀ = 7.75 g/cm³, ρᵢ = 7.86 g/cm³.
Although the oscillations of the inner radius, R i , cannot be observed optically for nontransparent liquid metals, both eigenfrequencies can be determined from the oscillations of the outer radius, R 0 , alone. This is due to the coupling of the two oscillators via the common velocity field in the melt. The relative amplitudes of the oscillations of the outer and inner surface are shown in Figure 4 for both oscillatory modes. The larger the value of |dR 0 /dR i |, the better the detectability. Consequently, the optimal choice to detect both modes, lies between 0.7 < m rel < 0.8.
Results
The experiments were carried out using the TEXUS-EML module during the TEXUS 44 campaign. Two experiments, one on demixing of CuCo, described here, and one on calorimetry and undercooling of an Al-Ni alloy were accommodated. The allotted time span of microgravity for the present experiment was 160 s.
As this time is much too short for undercooling and complete phase separation, it was decided to perform the experiment on a sample which was prepared ex-situ as a two-phase compound drop using a DTA furnace and a melt flux technique which allows deep undercooling and subsequent phase separation of the Cu-Co sample [START_REF] Willnecker | [END_REF]. Of course, such a system is not in equilibrium when it is remelted, but it takes some time to destroy the interface between the two liquids L1 and L2, and this time is sufficient to excite and observe oscillations of the (unstable) interface.
The experiment consisted of three heating cycles:
• one cycle with a completely phase-separated sample
• one cycle with a homogenised sample
• one cycle for maximum undercooling
The experiment was conducted with one sample of the composition Cu 75 Co 25 with a pre-separated microstructure. Careful heating should melt the Cu shell first and then, at higher temperature, the Co core. The aim of the experiment was to observe the oscillations of this separated microstructure in the liquid and to experimentally determine the interface energy of Cu-Co. In a second heating cycle, the microstructure was homogenized and two pulses were applied to observe oscillations of a homogeneous sample for comparison. A third heating cycle was used to investigate growth of a droplet dispersion starting after undercooling at the binodal.
The most important parameter in the preparation of the experiment was the choice of the maximum temperature in the first heating cycle. It had to be chosen such that both, the outer copper shell, and the inner cobalt core are fully molten, but not intermixed. Microstructure analysis of the samples from previous parabolic flights showed that a maximum temperature of 1800°C of the heating cycle was too high, as the pre-separated microstructure has been destroyed. On the other hand, a minimum temperature of about 1500 °C is required to melt the cobalt-rich core. Consequently, a maximum temperature of 1600°C has been chosen for the TEXUS experiment.
The experiment was successful and three heating cycles could be conducted. The second cycle led probably to homogenisation of the liquid sample (T max » 1850°C). Two heating pulses for excitation of the homogeneous droplet oscillations have been applied in the high temperature region. The third cycle led to an undercooling of the melt and a recalescence due to release of latent heat, which is indicated by an arrow in the temperature-time profile in Figure 5. The sample was saved and the experiment with Cu-Co was finished.
Discussion
Temperature Calibration
The emissivity of the sample changes depending on whether or not it is phase separated. The pyrometer data were calibrated for ε = 0.1, corresponding to a demixed sample. It is assumed that the second heating homogenizes the sample, leading to an emissivity of ε = 0.13. Therefore, the pyrometer signal had to be corrected according to:
1/T₁ − 1/T₂ = (λ₀/c₂) ln(ε₁/ε₂) (13)
where λ₀ is the operating wavelength of the pyrometer, and c₂ = 1.44 × 10⁴ µm K.
The pyrometer operates in a band of 1.45-1.8 µm. Assuming an effective wavelength of λ₀ = 1.5 µm results in a correction of −2.733 × 10⁻⁵ K⁻¹.
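A small Python sketch of this recalibration, using the emissivity values and constants quoted above; the direction in which the correction is applied to the raw reading is an assumption of the sketch.

```python
import math

# Wien-approximation recalibration of the pyrometer signal for a change of
# emissivity, eqn (13); values as quoted in the text.
lambda0 = 1.5        # effective pyrometer wavelength in micrometers
c2 = 1.44e4          # second radiation constant in micrometer*K
eps_demixed = 0.10   # emissivity used for the original calibration
eps_homog = 0.13     # emissivity assumed after homogenisation

correction = (lambda0 / c2) * math.log(eps_demixed / eps_homog)
print(correction)    # ~ -2.73e-5 1/K

def recalibrate(T_kelvin):
    """Corrected temperature from 1/T1 - 1/T2 = (lambda0/c2) ln(eps1/eps2)."""
    return 1.0 / (1.0 / T_kelvin + correction)
```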
Taking this correction into account, the pyrometer signal was recalibrated and is shown in Figure 5. Also shown (dashed line) is the heater control voltage, controlling the heating power in the coil system of the EML module. The sample is molten within 30 s, between 45260 and 45290 s. During cooling, short heater pulses are applied to excite oscillations of the liquid drop. Due to the time resolution of the data acquisition, not all such pulses are shown in Figure 5.
The temperature signal is rather noisy, especially during heating. This is due to sample movement and sample rotation. As explained above, the sample was prepared in a melt flux, and part of this flux was still attached to the sample surface. This flux has a much higher emissivity than the metallic sample. Whenever such a clod of glass entered the measuring spot of the pyrometer, its signal went up, resulting in spikes. In fact, the temporal distance of these spikes is a quantitative measure of the sample's rotation frequency.
The solidus temperature is, according to the phase diagram, T s = 1080 °C and is visible in the signal around 45280 s. The liquidus temperature, T L , of the phase-separated sample is around 1437 °C, also visible at 45290 s. The liquidus temperature of the homogeneous, single-phase sample is determined as T L = 1357 °C. The binodal temperature is located at T b = 1248 °C and can be recognised around 45350 s. After the final sequence the sample undercooled and solidified at 45410 s, displaying a recalescence peak. Undercooling relative to the corresponding liquidus temperature was about ΔT = 200 K.
Oscillation Spectra
For the analysis of the spectra, a number of sample parameters need to be known. First of all, the mass was determined before and after the flight. The sample mass before the flight was M₀ = 1.31 g and M_∞ = 1.30 g after the flight, resulting in a small weight loss due to evaporation of ΔM = 0.01 g, which can be assumed to be mainly copper.
The initial masses of copper and cobalt were M Cu = 1.00566 g, M Co = 0.3064 g, resulting in 76.65 wt% copper. Due to evaporation, the copper content decreased to 76.38 wt% after flight. Therefore, the concentration changed only by 0.27 wt%, which is acceptable.
For the evaluation of the oscillation frequencies, the radius in the liquid phase is required. This cannot be measured directly, and we estimate it from the sample mass according to
R_eff³ = 3M/(4πρ) (14)
The densities of liquid copper and cobalt were measured by Saito and coworkers [13,14]. At the melting point, the quoted values are: ρ_Cu(T_m) = 7.86 g/cm³, ρ_Co(T_m) = 7.75 g/cm³. The temperature dependent densities are as follows:
ρ_Co(T) = 9.71 − 1.11 × 10⁻³ T g/cm³
ρ_Cu(T) = 8.75 − 0.675 × 10⁻³ T g/cm³
At T = 2000 K, we obtain ρ_Co(2000 K) = 7.49 g/cm³ and ρ_Cu(2000 K) = 7.44 g/cm³. As these two densities are very close, we have decided to neglect the density difference and to assume ρ_Co = ρ_Cu = 0.765 ρ_Cu(2000 K) + 0.235 ρ_Co(2000 K) = 7.45 g/cm³ throughout the analysis. Inserting this value into the above equation, we obtain
R_eff = R₀ = 3.48 mm
We still need to determine m_rel and Rᵢ. For these two quantities we need to know the compositions of the two separated liquids L1 and L2. This of course depends on the solidification path and is not known a priori. From EDX analysis of samples prepared identically to the flight sample we estimate that the L2 liquid consists of app. 90 wt% copper and 10 wt% cobalt, while L1 is composed of 16 wt% copper and 84 wt% cobalt. We therefore estimate m_rel = (1.31)⁻¹ and obtain
Rᵢ = R₀ (1 − m_rel)^(1/3) = 2.149 mm
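The following Python sketch reproduces this estimate from eqns (12) and (14), under the simplifying assumption ρᵢ = ρ₀ made above.

```python
import math

M = 1.31            # sample mass in g
rho = 7.45          # assumed common density in g/cm^3
m_rel = 1.0 / 1.31  # estimated mass fraction of the copper-rich shell

R_eff = (3.0 * M / (4.0 * math.pi * rho)) ** (1.0 / 3.0)  # eqn (14), in cm
# With rho_i = rho_0, the core radius follows from eqn (12):
R_i = R_eff * (1.0 - m_rel) ** (1.0 / 3.0)

print(R_eff * 10.0, R_i * 10.0)  # ~3.48 mm and ~2.15 mm
```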
In order to get a feeling for the oscillation frequencies, we need estimates for the surface and interfacial tensions. The surface tensions of the Cu-Co system have been measured by Eichel et al. [15]. For the composition Cu 70 Co 30 which is very close to our sample, their result is:
σ₀(T) = 1.22 − 0.29 × 10⁻³ (T − 1365 °C) N/m
For T = 1665 °C this yields σ₀ = 1.13 N/m. Inserting this into the Rayleigh equation, eqn (1), we obtain a Rayleigh frequency ν₀ = ω₀/(2π) = 27.06 Hz.
For the interfacial tension, we assume complete wetting, yielding σ₁₂ = σ_L1 − σ_L2, with the surface tensions of the two separated liquids L1 and L2 taken from ref 15.
Surface Tension
As pointed out before, sample oscillations were excited by short current pulses through the heating coil, which led to a compression and subsequent damped oscillations of the sample. The sample shape was recorded by a video camera, looking along the symmetry axis of the sample (top view) operating at 196 Hz.
The obtained images were analysed off-line by image processing with respect to a number of geometrical parameters; the most important ones are: area of the visible cross section and radii in two orthogonal directions. From the latter, two more parameters can be constructed, namely the sum and the difference of these two radii. In case of non-spherical samples, the latter should have slightly different peaks in their oscillation spectra [16], while the Fourier spectrum of the area signal should contain all peaks. Although there were no big differences between the signals, the area signal was used for further analysis. The time signals of these oscillations are shown in Figure 6 for the first melting cycle. In the first cycle, three oscillations are clearly visible, but the first one is somewhat disturbed. The second cycle also shows three oscillations; they are not shown here. In order to obtain the oscillation frequencies, each oscillation was analysed separately by performing a Fourier transformation. The result is shown in Figure 7 for all pulses analysed. Except for the first pulse of the first cycle, all spectra display a single peak around 28 Hz. The first pulse of the first cycle displays two peaks at 28 Hz, and a small peak around 15 Hz. Positions and corresponding temperatures of the main peaks are shown in Table 1. Assuming that, after the first pulse, the sample is single-phase, these frequencies correspond to the Rayleigh frequency, eqn(1). We then obtain the surface tension as a function of temperature, as shown in Figure 8. Linear fit to the data yields
σ(T) = 1.29 − 2.77 × 10⁻⁴ (T − 1357 °C) N/m (15)
This is in excellent agreement with the data measured by Eichel [15].
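A possible off-line evaluation of the area signal can be sketched as follows in Python; the windowing and peak-picking details are illustrative assumptions, not the exact image-processing chain used for the flight data.

```python
import numpy as np

def dominant_frequencies(area_signal, fs=196.0, n_peaks=2):
    """Estimate the strongest oscillation frequencies of a drop from the
    time signal of its visible cross section (sampled at the camera rate fs)."""
    sig = np.asarray(area_signal, dtype=float)
    sig = sig - sig.mean()                       # remove the constant area
    spectrum = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    order = np.argsort(spectrum)[::-1]
    return freqs[order[:n_peaks]]

# Synthetic check: a damped 28 Hz oscillation is recovered.
t = np.arange(0, 2.0, 1.0 / 196.0)
fake = 1.0 + 0.05 * np.exp(-t) * np.cos(2 * np.pi * 28.0 * t)
print(dominant_frequencies(fake, n_peaks=1))
```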
As is evident from Figure 6 and Figure 7, the oscillations during the first pulse of the first cycle are more complex than for the other pulses. We have therefore analysed this pulse in greater detail, as shown in Figure 9. Regardless of the parameter analysed, two peaks around 29 Hz and 28 Hz and a small peak at 15 Hz are clearly visible. Therefore, we conclude that the liquid drop was initially phase separated, giving rise to two peaks around 15 Hz and 29 Hz, and homogenized in the course of the oscillations, yielding the Rayleigh frequency at 28 Hz. If this is correct, we must be able to fit all three frequencies by two values for the surface tension σ₀ and the interfacial tension σ₁₂. This is shown in Table 2. From the fit we obtain:
σ₀ = 1.21 N/m, σ₁₂ = 0.17 N/m
The value for the surface tension corresponds to 1590 °C and agrees well with the fit obtained from the other pulses, see Figure 8. The value of the interfacial tension is somewhat lower than previously estimated. This may be due to a slight shift in the compositions of the two liquids L1 and L2.
Summary
Using the EML module on board the TEXUS 44 microgravity mission, a Co 25 Cu 75 sample was successfully processed. The following results were obtained:
• surface tension as a function of temperature
• interfacial tension at 1590 °C
• size distribution of precipitated Co drops
The interfacial tension could not be measured as a function of temperature because the unstable interface was destroyed during the first pulse. The final and decisive experiment will have to be performed on board the ISS, when time is sufficient to keep the sample in the undercooled phase until complete phase separation is obtained and a metastable interface exists between the two liquid phases.
Figure 1. Phase diagram of Cu-Co showing the metastable miscibility gap. Symbols indicate experimentally determined liquidus and binodal temperatures.
Figure 2. Cross section of a spherical, concentric compound drop consisting of two immiscible liquids with densities ρᵢ and ρ₀, radii R₀ and Rᵢ, surface tension of the outer liquid σ₀, and interfacial tension σ₁₂.
Figure 4. Relative amplitudes of the oscillations of inner and outer surface as a function of mass fraction for both modes.
Figure 5. Temperature-time profile of the Cu-Co TEXUS 44 experiment. Dotted lines show the heater activity, not all pulses are shown due to the time resolution of the display. The arrow indicates final solidification.
Figure 6. Oscillations of the visible cross section during the first melting cycle.
Figure 7. Fourier transforms of the area signal for all evaluated pulses. The spectra are shifted vertically for clarity. From bottom to top: cycle 1/pulse 1, cycle 1/pulse 2, cycle 1/pulse 3, cycle 2/pulse 1, cycle 2/pulse 2.
Figure 8. Surface tension of the Cu 75 Co 25 alloy.
Figure 9. Fourier spectra of the 1st pulse in the 1st cycle. FFT of cross section (top) and radius sum (bottom) is shown. Spectra are shifted vertically for clarity.
Table 1. Temperatures and peak positions of the pulsed oscillations.
Pulse | Temperature (°C) | Frequency (Hz) | Remarks
cycle 1, pulse 1 | 1590 | 28.6 | split peak, (28.1 + 29.06)/2
cycle 1, pulse 2 | 1490 | 28.7 |
cycle 1, pulse 3 | 1410 | 28.8 |
cycle 2, pulse 0 | 1750 | 27.8 |
cycle 2, pulse 1 | 1660 | 28.0 |
cycle 2, pulse 2 | 1570 | 28.3 |
Table 2. Measured and calculated frequencies for the first pulse of the first cycle.
 | measured (Hz) | calculated (Hz)
ν₀ | 28.1 | 28.0
ν₊ | 29.1 | 29.1
ν₋ | 15.5 | 15.4
Acknowledgements
Our sincere thanks go to the EADS team in Bremen and Friedrichshafen, to the launch team at Esrange, and, last but not least, to the DLR-MUSC team for their continuous and excellent support. We also would like to thank ESA for providing this flight opportunity. Without their help, this experiment would not have been possible. | 21,887 | [
"18129",
"16536"
] | [
"32956",
"215109",
"215109",
"32956"
] |
01767057 | en | [
"nlin",
"math"
] | 2024/03/05 22:32:15 | 2017 | https://hal.science/hal-01767057/file/Lozi_Garasym_Lozi_Industrial_Mathematics_2017.pdf | Jean-Pierre Lozi
email: [email protected]
Oleg Garasym
René Lozi
J.-P Lozi
The Challenging Problem of Industrial Applications of Multicore-Generated Iterates of Nonlinear Mappings
Keywords: Chaos, Cryptography, Mappings, Chaotic pseudorandom numbers, Attractors. AMS Subject Classification: 37N30, 37D45, 65C10, 94A60
The study of nonlinear dynamics is relatively recent with respect to the long historical development of early mathematics since the Egyptian and the Greek civilization, even if one includes in this field of research the pioneer works of Gaston Julia and Pierre Fatou related to one-dimensional maps with a complex variable, nearly a century ago. In France, Igor Gumosky and Christian Mira began their mathematical researches in 1958; in Japan, the Hayashi' School (with disciples such as Yoshisuke Ueda and Hiroshi Kawakami), a few years later, was motivated by applications to electric and electronic circuits. In Ukraine, Alexander Sharkovsky found the intriguing Sharkovsky's order, giving the periods of periodic orbits of such nonlinear maps in 1962, although these results were only published in 1964. In 1983, Leon O. Chua invented a famous electronic circuit that generates chaos, built with only two capacitors, one inductor and one nonlinear negative resistance. Since then, thousands of papers have been published on the general topic of chaos. However, the pace of mathematics is slow, because any progress is based on strictly rigorous proof. Therefore, numerous problems still remain unsolved. For example, the long-term dynamics of the Hénon map, the first example of a strange attractor for mappings, remain unknown close to the classical parameter values from a strictly mathematical point of view, 40 years after its original publication. In spite of this lack of rigorous mathematical proofs, nowadays, engineers are actively working on applications of chaos for several purposes: global optimization, genetic algorithms, CPRNG (Chaotic Pseudorandom Number Generators), cryptography, and so on. They use nonlinear maps for practical applications without the need of sophisticated theorems. In this chapter, after giving some prototypical examples of the industrial
Introduction
The last few decades have seen the tremendous development of new IT technologies that incessantly increase the need for new and more secure cryptosystems.
For instance, the recently invented Bitcoin cryptocurrency is based on the secure Blockchain system that involves hash functions [START_REF] Delahaye | Cryptocurrencies and blockchains[END_REF]. This technology, used for information encryption, is pushing forward the demand for more efficient and secure pseudorandom number generators [START_REF] Menezes | Handbook of Applied Cryptography[END_REF] which, in the scope of chaos-based cryptography, were first introduced by Matthews in the 1990s [START_REF] Matthews | On the derivation of chaotic encryption algorithm[END_REF]. Contrarily to most algorithms that are used nowadays and based on a limited number of arithmetic or algebraic methods (like elliptic curves), networks of coupled chaotic maps offer quasi-infinite possibilities to generate parallel streams of pseudorandom numbers (PRN) at a rapid pace when they are executed on modern multicore processors. Chaotic maps are able to generate independent and secure pseudorandom sequences (used as information carriers or directly involved in the process of encryption/decryption [START_REF] Lozi | Noise-resisting ciphering based on a chaotic multi-stream pseudorandom number generator[END_REF]). However, the majority of well-known chaotic maps are not naturally suitable for encryption [START_REF] Li | Period extension and randomness enhancement using high-throughput reseeding-mixing PRNG[END_REF] and most of them do not exhibit even satisfactory properties for such a purpose.
In this chapter, we explore the novel idea of coupling a symmetric tent map with a logistic map, following several network topologies. We add a specific injection mechanism to capture the escaping orbits. In the goal of extending our results to industrial mathematics, we implement these networks on multicore machines and we test up to 100 trillion iterates of such mappings, in order to make sure that the obtained results are firmly grounded and able to be used in industrial contexts such as e-banking, e-purchasing, or the Internet of Things (IoT).
The chaotic maps, when used in the proper way, could generate not only chaotic numbers, but also pseudorandom numbers, as shown in [START_REF] Noura | Design of a fast and robust chaos-based cryptosystem for image encryption[END_REF] and as we show in this chapter with more sophisticated numerical experiments.
Various choices of PRN Generators (PRNGs) and crypto-algorithms are currently necessary to implement continuous, reliable security systems. We use a software approach because it is easy to change a cryptosystem to support protection, whereas
replacing hardware used for True Random Number Generators would be costly and time-consuming. For instance, after the secure software protocol Wi-Fi Protected Access (WPA) was broken, it was simply updated and no expensive hardware had to be replaced.
It is a very challenging task to design CPRNGs (Chaotic Pseudo Random Number Generators) that are applicable to cryptography: numerous numerical tests must ensure that their properties are satisfactory. We mainly focus on two- to five-dimension maps, although upper dimensions can be very easily explored with modern multicore machines. Nevertheless, in four and five dimensions, the studied CPRNGs are efficient enough for cryptography.
In Sect. 4.2, we briefly recall the dawn and the maturity of researches on chaos. In Sect. 4.3, we explore two-dimensional topologies of networks of coupled chaotic maps. In Sect. 4.4, we study more thoroughly a mapping in higher dimensions (up to 5), far beyond the NIST tests which are limited to a few million iterates and which seem not robust enough for industrial applications, although they are routinely used worldwide. In order to check the portability of the computations on multicore architectures, we have implemented all our numerical experiments on several different multicore machines. We conclude this chapter in Sect. 4.5.
The Dawn and the Maturity of Researches on Chaos
The study of nonlinear dynamics is relatively recent with respect to the long historical development of early mathematics since the Egyptian and the Greek civilizations (and even before). The first alleged artifact of mankind's mathematical thinking goes back to the Upper Paleolithic era. Dating as far back as 22,000 years ago, the Ishango bone is a dark brown bone which happens to be the fibula of a baboon, with a sharp piece of quartz affixed to one end for engraving. It was first thought to be a tally stick, as it has a series of what has been interpreted as tally marks carved in three columns running the length of the tool [START_REF] Bogoshi | The oldest mathematical artifact[END_REF].
Twenty thousand years later, the Rhind Mathematical Papyrus is the best example of Egyptian mathematics. It dates back to around 1650 BC. Its author is the scribe Ahmes who indicated that he copied it from an earlier document dating from the 12th dynasty, around 1800 BC. It is a practical handbook, whose first part consists of reference tables and a collection of 20 arithmetic and 20 algebraic problems and linear equations. Problem 32 for instance corresponds (in modern notation) to solving x + x/3 + x/4 = 2 for x [START_REF] Smith | History of Mathematics[END_REF]. Since those early times, mathematics has known great improvements, flourishing in many different fields such as geometry, algebra (both linked, thanks to the invention of Cartesian coordinates by René Descartes [START_REF] Descartes | Discours de la méthode[END_REF]), analysis, probability, number and set theory, and so on.
However, nonlinear problems are very difficult to handle, because, as shown by Galois' theory of algebraic equations which provides a connection between field theory and group theory, it is impossible to solve a general polynomial equation of degree equal to or greater than 5 using only the usual algebraic operations (addition, subtraction, multiplication, division) and the application of radicals (square roots, cube roots, etc.) [START_REF] Galois | Mémoire sur les conditions de résolubilité des équations par radicaux (mémoire manuscrit de 1830)[END_REF].
The beginning of the study of nonlinear equation systems goes back to the original works of Gaston Julia and Pierre Fatou regarding one-dimensional maps with a complex variable, nearly a century ago [START_REF] Julia | Mémoire sur l'itération des fonctions rationnelles[END_REF][START_REF] Fatou | Sur l'itération des fonctions transcendantes entières[END_REF]. Compared to thousands of years of mathematical development, a century is a very short period. In France, 30 years later, Igor Gumowski and Christian Mira began their mathematical researches with the help of a computer in 1958 [START_REF] Gumowski | Recurrence and Discrete Dynamics systems[END_REF]. They developed very elaborate studies of iterations. One of the best-known formulas they published is
x n+1 = f (x n ) + b y n , y n+1 = f (x n+1 ) - x n , with f (x) = a x + 2(1 - a) x² / (1 + x²) (4.1)
which can be considered as a non-autonomous mapping from the plane R 2 onto itself that exhibits esthetic chaos. Surprisingly, slight variations of the parameter value lead to very different shapes of the attractor (Fig. 4.1).
In Ukraine, Alexander Sharkovsky found the intriguing Sharkovsky's order, giving the periods of periodic orbits of such nonlinear maps in 1962, although these results were only published in 1964 [START_REF] Sharkovskiȋ | Coexistence of cycles of a continuous map of the line into itself[END_REF]. In Japan the Hayashi' School (with disciples like Yoshisuke Ueda and Hiroshi Kawakami), a few years later, was motivated by applications to electric and electronic circuits. Ikeda proposed the Ikeda attractor [START_REF] Ikeda | Multiple-valued stationary state and its instability of the transmitted light by a ring cavity system[END_REF][START_REF] Ikeda | Optical turbulence: chaotic behavior of transmitted light from a ring cavity[END_REF] which is a chaotic attractor for u ≥ 0.6 (Fig. 4.2).
x n+1 = 1 + u(x n cos t n - y n sin t n ), y n+1 = u(x n sin t n + y n cos t n ), with t n = 0.4 - 6/(1 + x n ² + y n ²) (4.2)
In 1983, Leon O. Chua invented a famous electronic circuit that generates chaos built with only two capacitors, one inductor and one nonlinear negative resistance [START_REF] Chua | The double scroll family[END_REF]. Since then, thousands of papers have been published on the general topic of chaos. However the pace of mathematics is slow, because any progress is based on strictly rigorous proof. Therefore numerous problems still remain unsolved. For example, the long-term dynamics of the Hénon map [START_REF] Hénon | Two-dimensional mapping with a strange attractor[END_REF], the first example of a strange attractor for mappings, remains unknown close to the classical parameter values from a strictly mathematical point of view, 40 years after its original publication.
Nevertheless, in spite of this lack of rigorous mathematical results, nowadays, engineers are actively working on applications of chaos for several purposes: global optimization, genetic algorithms, CPRNG, cryptography, and so on. They use nonlinear maps for practical applications without the need of sophisticated theorems. During the last 20 years, several chaotic image encryption methods have been proposed in the literature.
Dynamical systems which present a mixing behavior and that are highly sensitive to initial conditions are called chaotic. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems. This effect, popularly known as the butterfly effect, renders long-term predictions impossible in general [START_REF] Lorenz | Deterministic nonperiodic flow[END_REF]. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. Mastering the global properties of those dynamical systems is a challenging issue nowadays that we try to fix by exploring several network topologies of coupled maps.
In this chapter, after giving some prototypical examples of industrial applications of iterations of nonlinear maps, we focus on the exploration of topologies of coupled nonlinear maps that have a very rich potential of complex behavior. Very long computations on multicore machines are used, generating up to one hundred trillion iterates, in order to assess such topologies. We show the emergence of randomness from chaos and discuss the promising future of chaos theory for cryptographic security.
Miscellaneous Network Topologies of Coupled Chaotic Maps
Tent-Logistic Entangled Map
In this section we consider only two 1-D maps: the logistic map
f µ (x) ≡ L µ (x) = 1 -µx 2 (4.3)
and the symmetric tent map
f µ (x) ≡ T µ (x) = 1 -µ|x| (4.4)
both associated to the dynamical system
x n+1 = f µ (x n ), ( 4.5)
where µ is a control parameter which impacts the chaotic degree. Both mappings are sending the one-dimensional interval [-1,1] onto itself.
Since the first study by R. May [START_REF] May | Stability and Complexity of Models Ecosystems[END_REF][START_REF] May | Biological populations with nonoverlapping generations: stable points, stable cycles, and chaos[END_REF] of the logistic map in the frame of nonlinear dynamical systems, both the logistic (4.3) and the symmetric tent map (4.4) have been fully explored with the aim to easily generate pseudorandom numbers [START_REF] Lozi | Giga-periodic orbits for weakly coupled tent and logistic discretized maps[END_REF].
However, the collapse of iterates of dynamical systems [START_REF] Yuan | Collapsing of chaos in one dimensional maps[END_REF] or at least the existence of very short periodic orbits, their non-constant invariant measure, and the easilyrecognized shape of the function in the phase space, could lead to avoid the use of such one-dimensional maps (logistic, baker, tent, etc.) or two-dimensional maps (Hénon, Standard, Belykh, etc.) as PRNGs (see [START_REF] Lozi | Can we trust in numerical computations of chaotic solutions of dynamical systems?[END_REF] for a survey). Yet, the very simple implementation as computer programs of chaotic dynamical systems led some authors to use them as a base for cryptosystems [25,[START_REF] Ariffin | Modified baptista type chaotic cryptosystem via matrix secret key[END_REF]. Even if the logistic and tent maps are topologically conjugates (i.e., they have similar topological properties: distribution, chaoticity, etc.), their numerical behavior differs drastically due to the structure of numbers in computer realization [START_REF] Lanford | Informal remarks on the orbit structure of discrete approximations to chaotic maps[END_REF].
As said above, both logistic and tent maps are never used in serious cryptography articles because they have weak security properties (collapsing effect) if applied alone. Thus, these maps are often used in modified form to construct CPRNGs [START_REF] Wong | A modified chaotic cryptographic method[END_REF][START_REF] Nejati | A realizable modified tent map for true random number generation[END_REF][START_REF] Lozi | Mathematical chaotic circuits: an efficient tool for shaping numerous architectures of mixed chaotic/pseudo random number generator[END_REF].
Recently, Lozi et al. proposed innovative methods in order to increase randomness properties of the tent and logistic maps over their coupling and sub-sampling [START_REF] Lozi | Emergence of randomness from chaos[END_REF][START_REF] Rojas | New alternate ring-coupled map for multirandom number generation[END_REF][START_REF] Garasym | Robust PRNG based on homogeneously distributed chaotic dynamics[END_REF]. Nowadays, hundreds of publications on industrial applications of chaosbased cryptography are available [START_REF] Jallaouli | Design and analyses of two stream ciphers based on chaotic coupling and multiplexing techniques[END_REF][START_REF] Garasym | Application of observer-based chaotic synchronization and identifiability to the original CSK model for secure information transmission[END_REF][START_REF] Farajallah | Fast and secure chaos-based cryptosystem for images[END_REF][START_REF] Taralova | Chaotic generator synthesis: dynamical and statistical analysis[END_REF].
In this chapter, we explore more thoroughly the original idea of combining features of tent (T µ ) and logistic (L µ ) maps to produce a new map with improved properties, through combination in several network topologies. This idea was recently introduced [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF]39] in order to improve previous CPRNGs. Looking at both Eqs. (4.3) and (4.4), it is possible to reverse the shape of the graph of the tent map T and to entangle it with the graph of the logistic map L. We obtain the combined map
f µ (x) ≡ T L µ (x) = µ|x| -µx 2 = µ(|x| -x 2 ) (4.6)
When used in more than one dimension, the T L µ map can be considered as a twovariable map
T L µ (x (i) , x ( j) ) = µ(|x (i) | -(x ( j) ) 2 ), i = j (4.7)
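For illustration, the three elementary maps can be written as a few lines of Python; this is a sketch with µ = 2 as default value, not the optimized multicore implementation used for the long runs reported below.

```python
def T(x, mu=2.0):
    """Symmetric tent map (4.4) on [-1, 1]."""
    return 1.0 - mu * abs(x)

def L(x, mu=2.0):
    """Logistic map (4.3) on [-1, 1]."""
    return 1.0 - mu * x * x

def TL(xi, xj, mu=2.0):
    """Tent-logistic entangled term (4.6)-(4.7), used as a coupling term."""
    return mu * (abs(xi) - xj * xj)

# A few iterates of the plain tent map, for illustration.
x = 0.123456789
for _ in range(5):
    x = T(x)
    print(x)
```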
Moreover, we can combine again the T L µ map with T µ in various ways. If we choose, for instance, a network with a ring shape (Fig. 4.3), it is possible to define a mapping M µ, p : J p → J p , where J p = [-1, 1] p ⊂ R p , which maps (x (1) n , x (2) n , . . . , x ( p) n ) to (x (1) n+1 , x (2) n+1 , . . . , x ( p) n+1 ) according to
x (1) n+1 = T µ (x (1) n ) + T L µ (x (1) n , x (2) n )
x (2) n+1 = T µ (x (2) n ) + T L µ (x (2) n , x (3) n )
. . .
x ( p) n+1 = T µ (x ( p) n ) + T L µ (x ( p) n , x (1) n ) (4.8)
However, if used in this form, system (4.8) has unstable dynamics and the iterated points x (1) n , x (2) n , . . . , x ( p) n quickly spread out. Therefore, to solve the problem of keeping the dynamics in the torus J p = [-1, 1] p ⊂ R p , the following injection mechanism has to be used in conjunction with (4.8):
if (x (i) n+1 < -1) then add 2
if (x (i) n+1 > 1) then subtract 2, for i = 1, 2, . . . , p. (4.9)
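A minimal Python sketch of one iteration step of (4.8) together with the injection mechanism (4.9) is given below; the helper functions repeat the definitions of the previous sketch, and the seed values are arbitrary.

```python
def T(x, mu=2.0): return 1.0 - mu * abs(x)                  # tent map (4.4)
def TL(xi, xj, mu=2.0): return mu * (abs(xi) - xj * xj)     # coupling term (4.7)

def M_ring(x, mu=2.0):
    """One step of the ring-coupled system (4.8) with the injection (4.9).

    x is a list [x1, ..., xp] in [-1, 1]^p; the torus is enforced by
    adding/subtracting 2 whenever a component escapes."""
    p = len(x)
    y = [T(x[i], mu) + TL(x[i], x[(i + 1) % p], mu) for i in range(p)]
    for i in range(p):
        if y[i] > 1.0:
            y[i] -= 2.0
        elif y[i] < -1.0:
            y[i] += 2.0
    return y

state = [0.1, -0.35, 0.77]          # p = 3 example, arbitrary seed
for _ in range(3):
    state = M_ring(state)
print(state)
```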
The T L µ function is a powerful tool to change dynamics. Used in conjunction with T µ , the map T L µ makes it possible to establish mutual influence between system components x (i) n in M µ, p . This multidimensional coupled mapping is interesting because it performs contraction and distance stretching between components, improving chaotic distribution.
The coupling of components has an excellent effect in achieving chaos, because they interact with global system dynamics, being a part of them. Component interaction has a global effect. In order to study this new mapping, we use a graphical approach, however other theoretical assessing functions are also involved.
Note that system (4.8) can be made more generic by introducing constants k i which generalize considered topologies. Let k = (k 1 , k 2 , . . . , k p ), we define
M k µ, p maps (x (1) n , x (2) n , . . . , x ( p) n ) to (x (1) n+1 , x (2) n+1 , . . . , x ( p) n+1 ) according to
x (1) n+1 = T µ (x (1) n ) + k 1 × T L µ (x (i) n , x ( j) n ), with i, j = (1, 2) or (2, 1)
x (2) n+1 = T µ (x (2) n ) + k 2 × T L µ (x (i) n , x ( j) n ), with i, j = (2, 3) or (3, 2)
. . .
x ( p) n+1 = T µ (x ( p) n ) + k p × T L µ (x (i) n , x ( j) n ), with i, j = ( p, 1) or (1, p) (4.10)
System (4.10) is called alternate if k i = (-1) i or k i = (-1) i+1 , 1 ≤ i ≤ p, or non-alternate if k i = +1 or k i = -1.
Table 4.1
# | k 1 | k 2 | i | j | i' | j'
#1 | +1 | +1 | 1 | 2 | 1 | 2
#2 | +1 | -1 | 1 | 2 | 1 | 2
#3 | -1 | +1 | 1 | 2 | 1 | 2
#4 | -1 | -1 | 1 | 2 | 1 | 2
#5 | +1 | +1 | 2 | 1 | 2 | 1
#6 | +1 | -1 | 2 | 1 | 2 | 1
#7 | -1 | +1 | 2 | 1 | 2 | 1
#8 | -1 | -1 | 2 | 1 | 2 | 1
#9 | +1 | +1 | 1 | 2 | 2 | 1
#10 | +1 | -1 | 1 | 2 | 2 | 1
#11 | -1 | +1 | 1 | 2 | 2 | 1
#12 | -1 | -1 | 1 | 2 | 2 | 1
#13 | +1 | +1 | 2 | 1 | 1 | 2
#14 | +1 | -1 | 2 | 1 | 1 | 2
#15 | -1 | +1 | 2 | 1 | 1 | 2
#16 | -1 | -1 | 2 | 1 | 1 | 2
Two-Dimensional Network Topologies
We first consider the simplest coupling case, in which only two equations are coupled.
The first condition needed to obtain a multidimensional mapping, in the aim of building a new CPRNG, is to obtain excellent uniform distribution of the iterated points. The second condition is that the CPRNG must be assessed positively by the NIST tests [START_REF] Rukhin | Statistical test suite for random and pseudorandom number generators for cryptographic applications[END_REF]. In [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF]39] this two-dimensional case is studied in detail. Using a bifurcation diagram and computation of Lyapunov exponents, it is shown that the best value for the parameter is µ = 2. Therefore, in the rest of this chapter we use this parameter value and we only briefly recall the results found with this value in both of those articles. The general form of
M k 2,2 is then
x (1) n+1 = T 2 (x (1) n ) + k 1 × T L 2 (x (i) n , x ( j) n )
x (2) n+1 = T 2 (x (2) n ) + k 2 × T L 2 (x (i') n , x ( j') n ) (4.11)
with i, j, i', j' = 1 or 2, i ≠ j, and i' ≠ j'. Considering this general form, it is possible to define 16 different maps (Table 4.1). Among this set of maps, we study case #3 and case #13. The map of case #3 is called Single-Coupled alternate due to the shape of the corresponding network and denoted T T L SC 2 ,
T T L SC 2 :
x (1) n+1 = 1 - 2|x (1) n | - 2(|x (1) n | - (x (2) n ) 2 ) = T 2 (x (1) n ) - T L 2 (x (1) n , x (2) n )
x (2) n+1 = 1 - 2|x (2) n | + 2(|x (1) n | - (x (2) n ) 2 ) = T 2 (x (2) n ) + T L 2 (x (1) n , x (2) n ) (4.12)
and case #13 is called Ring-Coupled non-alternate and denoted T T L RC 2 ,
T T L RC 2 :
x (1) n+1 = 1 - 2|x (1) n | + 2(|x (2) n | - (x (1) n ) 2 ) = T 2 (x (1) n ) + T L 2 (x (2) n , x (1) n )
x (2) n+1 = 1 - 2|x (2) n | + 2(|x (1) n | - (x (2) n ) 2 ) = T 2 (x (2) n ) + T L 2 (x (1) n , x (2) n ) (4.13)
Both systems were selected because they have balanced contraction and stretching processes between components. They allow achieving uniform distribution of the chaotic dynamics. Equations (4.12) and (4.13) are used, of course, in conjunction with injection mechanism (4.9). The largest torus where points mapped by (4.12) and (4.13) are sent is [-2, 2] 2 . The confinement from torus [-2, 2] 2 to torus [-1, 1] 2 of the dynamics obtained by this mechanism is shown in Figs. 4.5 and 4.6: dynamics cross from the negative region (in blue) to the positive one, and conversely to the negative region, if the points stand in the positive regions (in red). Through this operation, the system's dynamics are trapped inside [-1, 1] 2 . In addition, after this operation is done, the resulting system exhibits more complex dynamics with additional nonlinearity, which is advantageous for chaotic encryption (since it improves security).
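Both two-dimensional maps can be sketched in Python as follows; this is illustrative code in which the injection helper implements (4.9) componentwise, not the production implementation used for the statistics below.

```python
def TTL_SC2(x1, x2):
    """Single-Coupled alternate map (4.12), mu = 2, before injection."""
    y1 = 1.0 - 2.0 * abs(x1) - 2.0 * (abs(x1) - x2 * x2)
    y2 = 1.0 - 2.0 * abs(x2) + 2.0 * (abs(x1) - x2 * x2)
    return y1, y2

def TTL_RC2(x1, x2):
    """Ring-Coupled non-alternate map (4.13), mu = 2, before injection."""
    y1 = 1.0 - 2.0 * abs(x1) + 2.0 * (abs(x2) - x1 * x1)
    y2 = 1.0 - 2.0 * abs(x2) + 2.0 * (abs(x1) - x2 * x2)
    return y1, y2

def inject(y):
    """Injection mechanism (4.9): bring an escaping component back to [-1, 1]."""
    if y > 1.0:
        return y - 2.0
    if y < -1.0:
        return y + 2.0
    return y

x1, x2 = 0.3, -0.6                      # arbitrary seed
for _ in range(4):
    x1, x2 = map(inject, TTL_RC2(x1, x2))
print(x1, x2)
```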
A careful distribution analysis of both T T L SC 2 and T T L RC 2 has been performed using approximated invariant measures.
Figs. 4.5 and 4.6 illustrate the injection mechanism: if x (1) n > 1 then x (1) n ≡ x (1) n - 2; if x (1) n < -1 then x (1) n ≡ x (1) n + 2, and if x (2) n > 1 then x (2) n ≡ x (2) n - 2; if x (2) n < -1 then x (2) n ≡ x (2) n + 2 (from [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF])
Approximated Invariant Measures
We recall in this section the definition of approximated invariant measures which are important tools for assessing the uniform distribution of iterates. We have previously introduced them for the first studies of the weakly coupled symmetric tent map [START_REF] Lozi | Giga-periodic orbits for weakly coupled tent and logistic discretized maps[END_REF]. We first define an approximation P M,N (x) of the invariant measure, also called the probability distribution function linked to the one-dimensional map f (Eq. (4.5)) when computed with floating numbers (or numbers in double precision). To this goal, we consider a regular partition of M small intervals (boxes) r i of J = [-1, 1] defined by
s i = -1 + 2i/M , i = 0, M, (4.14)
r i = [s i , s i+1 [, i = 0, M - 2, (4.15)
r M-1 = [s M-1 , 1], (4.16)
J = ∪ i=0,M-1 r i . (4.17)
The length of each box r i is equal to
s i+1 - s i = 2/M (4.18)
All iterates f (n) (x) belonging to these boxes are collected (after a transient regime of Q iterations decided a priori, i.e., the first Q iterates are discarded). Once the computation of N + Q iterates is completed, the relative number of iterates with respect to N /M in each box r i represents the value P N (s i ). The approximated P N (x) defined in this article is therefore a step function, with M steps. Since M may vary, we define
P M,N (s i ) = (1/2) (M/N) (#r i ) (4.19)
where #r i is the number of iterates belonging to the interval r i and the constant 1/2 allows the normalisation of P M,N (x) on the interval J .
P M,N (x) = P M,N (s i ), ∀ x ∈ r i (4.20)
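In practice, P M,N can be computed with a simple histogram; the following Python sketch (using numpy, with a uniform test sequence only as a sanity check) illustrates the normalisation of (4.19).

```python
import numpy as np

def approx_density(samples, M=200):
    """Approximated invariant measure P_{M,N} of (4.19)-(4.20) on J = [-1, 1].

    Returns the step values P_{M,N}(s_i); they are normalised so that a
    perfectly uniform (Lebesgue) distribution gives 0.5 in every box."""
    counts, _ = np.histogram(samples, bins=M, range=(-1.0, 1.0))
    return 0.5 * (M / len(samples)) * counts

# Quick check with uniform pseudorandom samples: every box is close to 0.5.
rng = np.random.default_rng(1)
print(approx_density(rng.uniform(-1, 1, 10**6))[:5])
```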
In the case of p-coupled maps, we are more interested by the distribution of each component x (1) , x (2) , . . . , x ( p) of the vector X = (x (1) , x (2) , . . . , x ( p) )
rather than by the distribution of the variable X itself in J p . We then consider the approximated probability distribution function P M,N (x ( j) ) associated to one component of X . In this chapter, we use either N disc for M or N iter for N , depending on which is more explicit. The discrepancies E 1 (in norm L 1 ), E 2 (in norm L 2 ), and E ∞ (in norm L ∞ ) between P N disc ,N iter (x) and the Lebesgue measure, which is the invariant measure associated to the symmetric tent map, are defined by
E 1,N disc ,N iter (x) = ‖P N disc ,N iter (x) - 0.5‖ L 1 (4.21)
E 2,N disc ,N iter (x) = ‖P N disc ,N iter (x) - 0.5‖ L 2 (4.22)
E ∞,N disc ,N iter (x) = ‖P N disc ,N iter (x) - 0.5‖ L ∞ (4.23)
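The corresponding discrepancies can then be evaluated as follows; the integration weight 2/M used here for the L 1 and L 2 norms is an assumption of this sketch.

```python
import numpy as np

def discrepancies(P, M=200):
    """E_1, E_2 and E_infinity of (4.21)-(4.23) for a step density P on J = [-1, 1].

    The L1 and L2 norms use the box length 2/M as integration weight; the exact
    normalisation convention is an assumption of this sketch."""
    d = P - 0.5
    box = 2.0 / M
    e1 = np.sum(np.abs(d)) * box
    e2 = np.sqrt(np.sum(d * d) * box)
    einf = np.max(np.abs(d))
    return e1, e2, einf

# Combined with approx_density above, this yields the kind of values reported
# in Tables 4.2-4.4 (up to the normalisation convention).
```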
In the same way, an approximation of the correlation distribution function C M,N (x, y) is obtained by numerically building a regular partition of M 2 small squares (boxes) of J 2 , embedded in the phase subspace (x l , x m )
s i = -1 + 2i/M , t j = -1 + 2 j/M , i, j = 0, M (4.24)
r i, j = [s i , s i+1 [ × [t j , t j+1 [, i, j = 0, M - 2 (4.25)
r M-1, j = [s M-1 , 1] × [t j , t j+1 ], j = 0, M - 2 (4.26)
r i,M-1 = [s i , s i+1 [ × [t M-1 , 1], i = 0, M - 2 (4.27)
r M-1,M-1 = [s M-1 , 1] × [t M-1 , 1] (4.28)
The measure of the area of each box is
(s i+1 - s i ) · (t j+1 - t j ) = (2/M) 2 (4.29)
Once N + Q iterated points (x l n , x m n ) belonging to these boxes are collected, the relative number of iterates with respect to N /M 2 in each box r i, j represents the value C N (s i , t j ). The approximated probability distribution function C N (x, y) defined here is then a two-dimensional step function, with M 2 steps. Since M can take several values in the next sections, we define
C M,N (s i , t j ) = (1/4) (M 2 /N) (#r i, j ) (4.30)
where #r i, j is the number of iterates belonging to the square r i, j and the constant 1/4 allows the normalisation of C M,N (x, y) on the square J 2 .
C M,N (x, y) = C M,N (s i , t j ) ∀(x, y) ∈ r i, j (4.31)
The discrepancies E C 1 (in norm L 1 ), E C 2 (in norm L 2 ) and E C ∞ (in norm L ∞ ) between C N disc ,N iter (x, y) and the uniform distribution on the square are defined by
E C 1 ,N disc ,N iter (x, y) = ‖C N disc ,N iter (x, y) - 0.25‖ L 1 (4.32)
E C 2 ,N disc ,N iter (x, y) = ‖C N disc ,N iter (x, y) - 0.25‖ L 2 (4.33)
E C ∞ ,N disc ,N iter (x, y) = ‖C N disc ,N iter (x, y) - 0.25‖ L ∞ (4.34)
Finally, let AC N disc ,N iter be the autocorrelation distribution function, which is the correlation function C N disc ,N iter of (4.31) defined in the delay space (x (i) n , x (i) n+1 ) instead of the phase space (x l , x m ). We define in the same manner as (4.32), (4.33), and (4.34) the discrepancies E AC 1 ,N disc ,N iter (x, y), E AC 2 ,N disc ,N iter (x, y), and E AC ∞ ,N disc ,N iter (x, y).
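The two-dimensional counting of (4.30) and its use in the delay space can be sketched in the same way; this is again a simple numpy illustration, not the production code used for 10 14 iterates.

```python
import numpy as np

def approx_correlation(xs, ys, M=200):
    """Approximated correlation distribution C_{M,N} of (4.30)-(4.31) on J^2.

    Normalised so that an uncorrelated uniform cloud gives 0.25 in every box."""
    counts, _, _ = np.histogram2d(xs, ys, bins=M, range=[[-1, 1], [-1, 1]])
    return 0.25 * (M * M / len(xs)) * counts

def autocorrelation_error_AC1(x, M=200, delay=1):
    """E_{AC1}: L1 discrepancy of C_{M,N} built in the delay space (x_n, x_{n+delay})."""
    C = approx_correlation(x[:-delay], x[delay:], M)
    return np.sum(np.abs(C - 0.25)) * (2.0 / M) ** 2

rng = np.random.default_rng(7)
u = rng.uniform(-1, 1, 10**6)
print(autocorrelation_error_AC1(u))   # small for an uncorrelated sequence
```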
Study of Randomness of T T L SC 2 , T T L RC 2 , and Other Topologies
Using numerical computations, we assess the randomness properties of the two-dimensional maps T T L SC 2 and T T L RC 2 . If all requirements 1-8 of Fig. 4.7 are verified, the dynamical systems associated to those maps can be considered as pseudorandom and their application to cryptosystems is possible.
Whenever one among the eight criteria is not satisfied for a given map, one cannot consider that the associated dynamical system is a good CPRNG candidate. As said above, when µ = 2, the Lyapunov exponents of both considered maps are positive.
In the phase space, we plot the iterates in the system of coordinates x (1) n versus x (2) n in order to analyze the density of the points' distribution. Based on such an analysis, it is possible to assess the complexity of the behavior of dynamics, noticing any weakness or inferring on the nature of randomness. We also use the approximate invariant measures to assess more precisely the distribution of iterates.
The graphs of the attractor in phase space for the T T L RC 2 non-alternate (Fig. 4.8) and T T L SC 2 alternate (Fig. 4.9) maps are different. The T T L SC 2 map has wellscattered points in the whole pattern, but there are some more "concentrated" regions forming curves on the graph. Instead, the map T T L RC 2 has good repartition. Some other numerical results we do not report in this chapter show that even if those maps have good random properties, it is possible to improve mapping randomness by modifying slightly network topologies.
T T L SC 2 (x (1) n , x (2) n ) = x (1) n+1 = 1 + 2(x (2) n ) 2 -4|x (1) n | x (2) n+1 = 1 -2(x (2) n ) 2 + 2(|x (1) n | -|x (2) n |) (4.35)
In [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF], it is shown that if the impact of component x (1) n is reduced, randomness is improved. Hence, the following MT T L SC 2 map is introduced
MT T L SC 2 (x (1) n , x (2) n ) = x (1) n+1 = 1 + 2(x (2) n ) 2 -2|x (1) n | x (2) n+1 = 1 -2(x (2) n ) 2 + 2(|x (1) n | -|x (2) n |) (4.36)
and the injection mechanism (4.9) is used as well, but it is restricted to three phases:
if (x (1) n+1 > 1) then subtract 2
if (x (2) n+1 < -1) then add 2
if (x (2) n+1 > 1) then subtract 2 (4.37)
This injection mechanism allows the regions containing iterates to match excellently (Fig. 4.10).
The change of topology leading to MT T L SC 2 greatly improves the density of iterates in the phase space (Fig. 4.11) where 10 9 points are plotted. The point distribution of iterates in phase delay for the variable x (2) is quite good as well (Fig. 4.12). On both pictures, a grid of 200 × 200 boxes is generated to use the box counting method defined in Sect. 4.3.3. Moreover, the largest Lyapunov exponent is equal to 0.5905, indicating a strong chaotic behavior.
Fig. 4.11 Approximate density function of the MT T L SC 2 alternative map, on the (x (1) , x (2) ) plane (from [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF])
However, regarding the phase delay for the variable x (1) , results are not satisfactory. We have plotted in Fig. 4.13 10 9 iterates of MT T L SC 2 in the delay plane, and in Fig. 4.14 the same iterates using the counting box method.
When such a great number of iterates is computed, one has to be cautious with raw graphical methods because irregularities of the density repartition are masked due to the huge number of plotted points. Therefore, these figures highlight the necessity of using the tools we have defined in Sect. 4.3.3.
Nevertheless, NIST tests were used to check randomness properties of MT T L SC 2 . Since they only require binary sequences, we generated 4 × 10 6 iterates whose 5 × 10 5 first ones were cut off. The rest of the sequence was converted to binary form according to the IEEE-754 standard (32-bit single-precision floating point).
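A possible conversion of the iterates to the binary input expected by the NIST suite is sketched below; the bit-packing convention (big-endian, full 32-bit words) is an assumption of this sketch.

```python
import struct

def to_bits_ieee754(values):
    """Convert iterates in [-1, 1] to a binary string, one 32-bit
    single-precision float per value, before feeding the NIST suite."""
    out = []
    for v in values:
        (word,) = struct.unpack(">I", struct.pack(">f", v))
        out.append(format(word, "032b"))
    return "".join(out)

print(to_bits_ieee754([0.5, -0.25])[:64])
```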
Fig. 4.12 Approximate density function of the MT T L SC 2 alternative map, on the (x (1) n , x (1) n+1 ) plane (from [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF])
As said in the introduction, networks of coupled chaotic maps offer quasi-infinite possibilities to generate parallel streams of pseudorandom numbers. For example, in [39], the following modification of MT T L SC 2 is also studied and shows good randomness properties
N T T L SC 2 (x (1) n , x (2) n ):
x (1) n+1 = 1 - 2|x (2) n | = T 2 (x (2) n )
x (2) n+1 = 1 - 2(x (2) n ) 2 - 2(|x (2) n | - |x (1) n |) = L 2 (x (2) n ) + T 2 (x (2) n ) - T 2 (x (1) n ) (4.38)
Mapping in Higher Dimension
Higher dimensional systems make it possible to achieve better randomness and uniform point distribution, because more perturbations and nonlinear mixing are involved. In this section, we focus on a particular realization of the M k µ, p map (4.10) from dimension two to dimension five.
Usually, three or four dimensions are complex enough to create robust random sequences as we show here. Thus, it is advantageous if the system can increase its dimension. Since the MT T L SC 2 alternative map cannot be nested in higher dimensions, we describe how to improve randomness and to obtain the best distribution of points, and how to produce more complex dynamics than the T T L SC 2 (x (2) , x (1) ) alternative map in dimension greater than 2. Let
T T L RC, pD 2 :
x (1) n+1 = 1 - 2|x (1) n | + 2(|x (2) n | - (x (1) n ) 2 )
x (2) n+1 = 1 - 2|x (2) n | + 2(|x (3) n | - (x (2) n ) 2 )
. . .
x ( p) n+1 = 1 - 2|x ( p) n | + 2(|x (1) n | - (x ( p) n ) 2 ) (4.39)
be this realization. We show in Figs. 4.17 and 4.18 successful NIST tests for T T L RC, pD 2 in 3-D and 4-D, for the variable x (1) .
Fig. 4.17 NIST test for T T L RC,3D 2 for x (1) (from [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF])
Fig. 4.18 NIST test for T T L RC,4D 2 for x (1) (from [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF])
Numerical Experiments
All NIST tests for dimensions three to five for every variable are successful, showing that these realizations in 3-D up to 5-D are good CPRNGs. In addition to those tests, we study the mapping more thoroughly, far beyond the NIST tests which are limited to a few million iterates and which seem not robust enough for industrial mathematics, although they are routinely used worldwide.
In order to check the portability of the computations on multicore architectures, we have implemented all our numerical experiments on several different multicore machines.
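The following Python sketch shows one possible way of running independent streams of the 5-D map (4.39) on several cores and merging their box counts; the seeding scheme and chunk sizes are illustrative assumptions, and the actual experiments were run with dedicated multicore implementations.

```python
import math
from multiprocessing import Pool

def T(x, mu=2.0): return 1.0 - mu * abs(x)
def TL(xi, xj, mu=2.0): return mu * (abs(xi) - xj * xj)

def stream_chunk(args):
    """Iterate the 5-D ring-coupled map (4.39) from a given seed and count
    how many x^(1) iterates fall into each of M boxes of [-1, 1]."""
    seed, n_iter, M = args
    x = [math.sin(seed + 1.2345 * k) * 0.9 for k in range(5)]  # arbitrary distinct seed
    counts = [0] * M
    for _ in range(n_iter):
        y = [T(x[i]) + TL(x[(i + 1) % 5], x[i]) for i in range(5)]
        x = [v - 2.0 if v > 1.0 else v + 2.0 if v < -1.0 else v for v in y]
        counts[min(int((x[0] + 1.0) * M / 2.0), M - 1)] += 1
    return counts

if __name__ == "__main__":
    with Pool(4) as pool:   # one independent stream per core
        partial = pool.map(stream_chunk, [(s, 10**6, 200) for s in range(4)])
    total = [sum(c) for c in zip(*partial)]
    print(sum(total))
```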
Checking the Uniform Repartition of Iterated Points
We first compute the discrepancies E 1 (in norm L 1 ), E 2 (in norm L 2 ), and E ∞ (in norm L ∞ ) between P N disc ,N iter (x) and the Lebesgue measure, which is the uniform measure on the interval J = [-1, 1]. We set M = N disc = 200, and vary the number N iter of iterated points in the range 10 4 to 10 14 . From our knowledge, this article is the first one that checks such a huge number of iterates (in conjunction with [39]). We compare E 1,200,N iter (x (1) ) for T T L RC, pD 2 with p = 2 to 5 (Table 4.2, Fig. 4.19).
As shown in Fig. 4.19, E_{1,200,N_iter}(x^(1)) decreases steadily when N_iter increases. However, for p = 2 the decrease quickly reaches a lower bound (with respect to N_iter). This is also the case for the other values of p, but the bound decreases with p, showing better randomness properties for the higher-dimensional mappings.
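A possible implementation of this check is sketched below: the iterates are binned into N_disc boxes on J = [-1, 1] and the resulting histogram, normalized as a density, is compared with the uniform density 1/2. The exact normalization constants used for E_1 in the chapter are not reproduced here, so they are an assumption of this sketch; numpy is used only for the binning.

```python
import numpy as np

def discrepancy_e1(samples, n_disc=200):
    # Empirical density of the iterates on J = [-1, 1], using n_disc boxes.
    counts, _ = np.histogram(samples, bins=n_disc, range=(-1.0, 1.0))
    box_width = 2.0 / n_disc
    density = counts / (len(samples) * box_width)
    # L1 distance to the uniform (Lebesgue) density 1/2 on [-1, 1]
    # (normalization chosen here as an assumption).
    return np.sum(np.abs(density - 0.5)) * box_width

# Usage with the orbit sketched earlier (first component only):
# e1 = discrepancy_e1(np.array([x[0] for x in orbit]))
```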
Table 4.3 compares E_{1,200,N_iter}(x^(i)) for the components x^(1), x^(2), ..., x^(5) of TTL^{RC,5D}_2, for different values of N_iter. The same quality of randomness is obtained for each one of them, contrary to the results obtained for MTTL^SC_2.
Table 4.3 E_{1,200,N_iter}(x^(i)) for TTL^{RC,5D}_2 for i = 1 to 5 (only the last three rows were recovered)
N_iter | x^(1) | x^(2) | x^(3) | x^(4) | x^(5)
10^12 | 0.000160547 | 0.000159192 | 0.000160014 | 0.000159213 | 0.000159159
10^13 | 5.04473e-05 | 5.03574e-05 | 5.05868e-05 | 5.04694e-05 | 5.01681e-05
10^14 | 1.59929e-05 | 1.60291e-05 | 1.59282e-05 | 1.59832e-05 | 1.60775e-05
Table 4.4 Comparison between E_{1,200,N_iter}(x^(1)), E_{2,200,N_iter}(x^(1)), and E_{infinity,200,N_iter}(x^(1)) for TTL^{RC,5D}_2 (only the last three rows were recovered)
N_iter | E_1 | E_2 | E_infinity
10^12 | 0.000160547 | 0.000201102 | 0.0008602
10^13 | 5.04473e-05 | 6.32233e-05 | 0.00026894
10^14 | 1.59929e-05 | 2.00533e-05 | 9.89792e-05
Fig. 4.20 Comparison between E_{1,200,N_iter}(x^(1)), E_{2,200,N_iter}(x^(1)), and E_{infinity,200,N_iter}(x^(1)) (vertical axis) for TTL^{RC,5D}_2 with respect to N_iter (horizontal axis, logarithmic value)
The comparisons between E_{1,200,N_iter}(x^(1)), E_{2,200,N_iter}(x^(1)), and E_{infinity,200,N_iter}(x^(1)) for TTL^{RC,5D}_2 in Table 4.4 and Fig. 4.20 show that
E_{1,200,N_iter}(x^(1)) < E_{2,200,N_iter}(x^(1)) < E_{infinity,200,N_iter}(x^(1))   (4.40)
for every value of N_iter.
Autocorrelation Study in the Delay Space
In this section, we assess the autocorrelation errors E_{AC_1,N_disc,N_iter}(x, y), E_{AC_2,N_disc,N_iter}(x, y), and E_{AC_infinity,N_disc,N_iter}(x, y) in the delay space. We have performed experiments for M = 20 to 20,000; however, in this chapter, we only present the results for M = 200. We first compare E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+1}) with E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+2}) and E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+3}) for TTL^{RC,pD}_2 when the dimension of the system is within the range p = 2 to 5 (Tables 4.5, 4.6, 4.7 and 4.8). Better randomness properties are obtained for the higher-dimensional mappings.
Table 4.5 compares E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+1}), E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+2}), and E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+3}) for TTL^{RC,2D}_2; Table 4.6 gives the same comparison for TTL^{RC,3D}_2, and Table 4.7 for TTL^{RC,4D}_2 (only a few entries were recovered from the extraction: 0.000160547, 0.000159144, 0.000159246 for the row before last, then 5.0394e-05 at 10^13 and 1.59929e-05 at 10^14 for the first column).
The comparison between E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+1}), E_{AC_2,200,N_iter}(x^(1)_n, x^(1)_{n+1}), and E_{AC_infinity,200,N_iter}(x^(1)_n, x^(1)_{n+1}) for TTL^{RC,5D}_2 in Table 4.9 shows that numerically
E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+1}) < E_{AC_2,200,N_iter}(x^(1)_n, x^(1)_{n+1}) < E_{AC_infinity,200,N_iter}(x^(1)_n, x^(1)_{n+1})   (4.41)
Equation (4.41) is not only valid for M = 200, but also for other values of M and every component of X.
Table 4.9 Comparison between E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+1}), E_{AC_2,200,N_iter}(x^(1)_n, x^(1)_{n+1}), and E_{AC_infinity,200,N_iter}(x^(1)_n, x^(1)_{n+1}) for TTL^{RC,5D}_2
10^12 | 0.000160547 | 0.000201102 | 0.0008602
10^13 | 5.0394e-05 | 6.31756e-05 | 0.000280168
10^14 | 1.59929e-05 | 2.00533e-05 | 9.89792e-05
In order to illustrate the numerical results displayed in these tables, we plot in Fig. 4.21 the repartition of iterates of TTL^{RC,5D}_2 in the delay plane (x^(1)_n, x^(1)_{n+1}), using the box counting method. On a grid of 200 x 200 boxes (N_disc = M = 200), we have generated 10^6 points. The horizontal axis is x^(1)_n, and the vertical axis is x^(1)_{n+1}. In order to check very carefully the repartition of the iterates of TTL^{RC,5D}_2, we have also plotted the repartition in the delay planes (x^(1)_n, x^(1)_{n+2}), (x^(1)_n, x^(1)_{n+3}), and (x^(1)_n, x^(1)_{n+4}) (Figs. 4.22, 4.23, and 4.24). This repartition is uniform everywhere, as shown also in Table 4.8.
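The box-counting picture of Figs. 4.21-4.28 can be reproduced along the following lines: consecutive iterates are paired into delay coordinates and accumulated on an N_disc x N_disc grid. This is only a sketch of the counting step (the rendering of the figures is not reproduced), and the function name is ours.

```python
import numpy as np

def delay_plane_counts(x, lag=1, n_disc=200):
    # Pair x_n with x_{n+lag} and count the pairs falling in each of the
    # n_disc x n_disc boxes covering [-1, 1] x [-1, 1].
    x = np.asarray(x)
    pairs_x, pairs_y = x[:-lag], x[lag:]
    counts, _, _ = np.histogram2d(pairs_x, pairs_y,
                                  bins=n_disc,
                                  range=[[-1.0, 1.0], [-1.0, 1.0]])
    return counts

# A uniform repartition corresponds to counts that are all close to
# N_iter / n_disc**2; the deviation from that value is what the
# autocorrelation errors E_AC measure.
```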
We find the same regularity for every component x^(2), x^(3), x^(4), and x^(5), as shown in Figs. 4.25, 4.26, 4.27, 4.28, and in Table 4.10.
Autocorrelation Study in the Phase Space
Finally, in this section, we assess the autocorrelation errors E_{C_1,N_disc,N_iter}(x, y), E_{C_2,N_disc,N_iter}(x, y), and E_{C_infinity,N_disc,N_iter}(x, y), defined by Eqs. (4.32), (4.33), and (4.34), in the phase space. We checked all combinations of the components. Due to space limitations, we only provide part of the numerical computations we have performed to carefully check the randomness of TTL^{RC,pD}_2 for p = 2, ..., 5 and i = 1, ..., p. As in the previous section, we only provide the results for M = 200. We first compare E_{C_1,200,N_iter}(x^(1)_n, x^(2)_n), E_{C_2,200,N_iter}(x^(1)_n, x^(2)_n), and E_{C_infinity,200,N_iter}(x^(1)_n, x^(2)_n) (Table 4.11), and our other results verified that
E_{C_1,N_disc,N_iter}(x^(1)_n, x^(2)_n) < E_{C_2,N_disc,N_iter}(x^(1)_n, x^(2)_n) < E_{C_infinity,N_disc,N_iter}(x^(1)_n, x^(2)_n)   (4.42)
Fig. 4.25 Repartition of iterates in the delay plane (x^(2)_n, x^(2)_{n+1}) of TTL^{RC,5D}_2; box counting method, 10^6 points are generated on a grid of 200 x 200 boxes, the horizontal axis is x^(2)_n, and the vertical axis is x^(2)_{n+1}
We have also assessed the autocorrelation errors E_{C_1,N_disc,N_iter}(x^(i)_n, x^(j)_n) for i, j = 1, ..., 5, i != j, and various values of the number of iterates for TTL^{RC,5D}_2 (Table 4.12). We have performed the same experiments for E_{C_1,N_disc,N_iter}(x^(1)_n, x^(2)_n) for p = 2, ..., 5 (Table 4.13).
Our numerical experiments all show a similar trend: TTL^{RC,pD}_2 is a good candidate for a CPRNG, and the randomness performance of such mappings increases in higher dimensions.
Checking the Influence of Discretization in Computation of Approximated Invariant Measures
In order to verify that the computations we have performed using the discretization M = N_disc = 200 of the phase space and the delay space are not biased by this particular choice of M, we have carried out the same computations for other values of M = N_disc (20, 2,000 and 20,000); the results for TTL^{RC,4D}_2 are reported in Table 4.14 and Fig. 4.29.
Computation Time of PRNs
The numerical experiments performed in this section have involved several multicore machines. We show in Table 4.15 different computation times (in seconds) for the generation of N iter PRNs for T T L RC, pD 2 with p = 2 to 5, and various values of the number of iterates (N iter ). The machine used is a laptop computer with a Core i7 4980HQ processor with eight logical cores. Table 4.16 shows the computation time of only one PRN in the same experiment. Time is expressed in 10 -10 s.
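Timing measurements of the kind reported in Tables 4.15 and 4.16 can be reproduced with a simple harness such as the one below. The wall-clock timer, the per-PRN division and the seed are the only ingredients; the function ttl_rc_orbit refers to the sketch given earlier in this chapter and is an assumption of ours, not the authors' implementation.

```python
import time

def time_generation(p, n_iter):
    # Measure the wall-clock time needed to produce n_iter iterates of the
    # p-dimensional ring-coupled map, and the time per generated PRN.
    seed = [(-1.0) ** j * (0.1 + 0.05 * j) for j in range(p)]
    start = time.perf_counter()
    ttl_rc_orbit(seed, n_iter)          # sketch defined earlier
    elapsed = time.perf_counter() - start
    # Each iterate yields p PRNs (one per parallel stream), which matches
    # the per-PRN times of Table 4.16.
    return elapsed, elapsed / (n_iter * p)

# total_seconds, seconds_per_prn = time_generation(p=5, n_iter=10**6)
```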
These results show that the pace of computation is very high. When TTL^{RC,5D}_2 is the mapping tested, and the machine used is a laptop computer with a Core i7 4980HQ processor with 8 logical cores, computing 10^11 iterates with five parallel streams of PRNs leads to around 2 billion PRNs being produced per second. Since these PRNs are computed in the standard double precision format, it is possible to extract 50 random bits from each (the size of the mantissa being 52 bits for a double-precision floating-point number in standard IEEE-754).
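Extracting the random bits from the double-precision iterates can be done directly on the IEEE-754 representation, for instance as below. Keeping 50 of the 52 mantissa bits follows the count given in the text; discarding the two lowest-order bits is an assumption of this sketch, and the helper name is ours.

```python
import struct

def mantissa_bits(x, n_bits=50):
    # Reinterpret the double as a 64-bit integer and keep the top n_bits
    # of the 52-bit mantissa (sign and exponent are discarded).
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    mantissa = as_int & ((1 << 52) - 1)
    return mantissa >> (52 - n_bits)

# Example: 50 random bits taken from one iterate of one stream.
# bits = mantissa_bits(orbit[-1][0])
```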
Conclusion
In this chapter, we thoroughly explored the novel idea of combining features of a tent map (T µ ) and a logistic map (L µ ) to produce a new map with improved properties, through combination in several network topologies. This idea was recently introduced [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF]39] in order to improve previous CPRNGs. We have summarized the previously explored topologies in dimension two. We have presented new results of numerical experiments in higher dimensions (up to five) for the mapping T T L RC, pD 2 on multicore machines and shown that T T L RC,5D 2 is a very good CPRNG which is fit for industrial applications. The pace of generation of random bits can be incredibly high (up to 200 billion random bits per second).
Fig. 4.1, Fig. 4.2 Gumowski-Mira attractor for parameter values a = 0.92768 and a = 0.93333
Fig. 4.3 Auto and ring-coupling of the TL_mu and T_mu maps (from [38])
Fig. 4.4 Return mechanism from the [-2, 2]^p torus to [-1, 1]^p (from [38])
Fig. 4.5 Injection mechanism of the iterates from torus [-2, 2]^2 to torus [-1, 1]^2. If x^(1)_n > 1 then x^(1)_n = x^(1)_n - 2; if x^(1)_n < -1 then x^(1)_n = x^(1)_n + 2
Fig. 4.6 If x^(2)_n > 1 then x^(2)_n = x^(2)_n - 2; if x^(2)_n < -1 then x^(2)_n = x^(2)_n + 2
Fig. 4.7 The main criteria for assessing CPRNG (from [34])
Fig. 4.9 Phase space behavior of the TTL^SC_2 alternative map (4.18), plot of 20,000 points
Fig. 4.10 Injection mechanism (4.21) of the MTTL^SC_2 alternative map (from [38])
Fig. 4.13 Plot of one billion iterates of MTTL^SC_2 in the delay plane
Fig. 4.14 Plot of one billion iterates of MTTL^SC_2 using the counting box method
Fig. 4.16 NIST tests for the variable x^(2) (from [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF])
Fig. 4.18 NIST test for TTL^{RC,4D}_2 for x^(1)
Fig. 4.21 Repartition of iterates in the delay plane (x^(1)_n, x^(1)_{n+1}) of TTL^{RC,5D}_2
Fig. 4.22 Repartition of iterates in the delay plane (x^(1)_n, x^(1)_{n+2}) of TTL^{RC,5D}_2, as in Fig. 4.21
Fig. 4.24 Repartition of iterates in the delay plane (x^(1)_n, x^(1)_{n+4}) of TTL^{RC,5D}_2, as in Fig. 4.21
Fig. 4.26 Repartition of iterates in the delay plane (x^(3)_n, x^(3)_{n+1}) of TTL^{RC,5D}_2, as in Fig. 4.25
Fig. 4.28 Repartition of iterates in the delay plane (x^(5)_n, x^(5)_{n+1}) of TTL^{RC,5D}_2, as in Fig. 4.25
Fig. 4.29 Comparison between E_{C_1,N_disc,N_iter}(x^(1)_n, y^(2)_n), for TTL^{RC,4D}_2, M = N_disc = 20, 200, 2000, 20,000, and various values of the number of iterates
Table 4.1 The sixteen maps defined by Eq. (4.11); they can be a mix of alternate and non-alternate if k_i = +1 or -1 randomly
Table 4.2 E_{1,200,N_iter}(x^(1)) for TTL^{RC,pD}_2 with p = 2 to 5
N iter p = 2 p = 3 p = 4 p = 5
10 4 1.5631 1.5553 1.5587 1.5574
10 5 0.55475 0.5166 0.51315 0.5154
10 6 0.269016 0.159306 0.158548 0.158436
10 7 0.224189 0.050509 0.0501934 0.0505558
10 8 0.219427 0.0164173 0.0159175 0.0160018
10 9 0.218957 0.00640196 0.00505021 0.00509754
10 10 0.218912 0.00420266 0.00160505 0.00160396
10 11 0.218913 0.00392507 0.000513833 0.000505591
10 12 0.218913 0.00389001 0.000189371 0.000160547
10 13 0.218914 0.00388778 0.000112764 5.04473e-05
10 14 0.218914 0.003887 0.000101139 1.59929e-05
Fig. 4.19 Graph of E_{1,200,N_iter}(x^(1)) for TTL^{RC,pD}_2 with p = 2 to 5, with respect to N_iter (horizontal axis, logarithmic value)
Table 4.4 Comparison between E_{1,200,N_iter}(x^(1)), E_{2,200,N_iter}(x^(1)), and E_{infinity,200,N_iter}(x^(1)) for TTL^{RC,5D}_2
Table 4.5 Comparison between E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+1}), E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+2}), and E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+3}) for TTL^{RC,2D}_2
Table 4.6 Comparison between E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+1}), E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+2}), and E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+3}) for TTL^{RC,3D}_2
Table 4.7 Comparison between E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+1}), E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+2}), and E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+3}) for TTL^{RC,4D}_2
Table 4.9 Comparison between E_{AC_1,200,N_iter}(x^(1)_n, x^(1)_{n+1}), E_{AC_2,200,N_iter}(x^(1)_n, x^(1)_{n+1}), and E_{AC_infinity,200,N_iter}(x^(1)_n, x^(1)_{n+1}) for TTL^{RC,5D}_2
Table 4.10 Comparison between E_{AC_1,200,N_iter}(x^(i)_n, x^(i)_{n+1}), E_{AC_1,200,N_iter}(x^(i)_n, x^(i)_{n+2}), and E_{AC_1,200,N_iter}(x^(i)_n, x^(i)_{n+3}) for TTL^{RC,5D}_2 for i = 1 to 5
Table 4.11 Comparison between E_{C_1,200,N_iter}(x^(1)_n, x^(2)_n), E_{C_2,200,N_iter}(x^(1)_n, x^(2)_n), and E_{C_infinity,200,N_iter}(x^(1)_n, x^(2)_n)
Table 4.12 Comparison between E_{C_1,200,N_iter}(x^(i)_n, x^(j)_n), for i, j = 1 to 5, i != j, and for various values of the number of iterates for TTL^{RC,5D}_2
N iter 10 6 10 8 10 10 10 12 10 14
x(1), x(2) 0.158058 0.0160114 0.0015927 0.000158795 1.60489e-05
x(1), x(3) 0.158956 0.0159261 0.00159456 0.000159326 1.73852e-05
x(1), x(4) 0.15943 0.0160321 0.00160091 0.000160038 1.74599e-05
x(1), x(5) 0.159074 0.0158962 0.00160204 0.000159048 1.59133e-05
x(2), x(3) 0.15825 0.0159754 0.00159442 0.000160659 1.60419e-05
x(2), x(4) 0.159248 0.0159668 0.00159961 0.000160313 1.73507e-05
x(2), x(5) 0.15889 0.0160116 0.0015934 0.000160462 1.73496e-05
x(3), x(4) 0.159136 0.0158826 0.00158123 0.000158758 1.59451e-05
x(3), x(5) 0.159216 0.0159341 0.00161268 0.000159079 1.75013e-05
x(4), x(5) 0.158918 0.0160516 0.0016008 0.000159907 1.59445e-05
Table 4.13 Comparison between E_{C_1,200,N_iter}(x^(i)_n, x^(j)_n), for TTL^{RC,pD}_2 for p = 2, ..., 5, and various values of the number of iterates
N iter p = 2 p = 3 p = 4 p = 5
10 4 1.5624 1.5568 1.55725 1.55915
10 5 0.57955 0.5163 0.51083 0.514
10 6 0.330084 0.160282 0.158256 0.158058
10 7 0.294918 0.0509584 0.0504002 0.0505508
10 8 0.291428 0.0176344 0.0157924 0.0160114
10 9 0.291012 0.00911485 0.00506758 0.00507915
10 10 0.291025 0.00783204 0.00159046 0.0015927
10 11 0.291033 0.00771201 0.000521561 0.000506086
10 12 0.291036 0.00769998 0.000209109 0.000158795
10 13 0.00769867 0.000150031 5.03666e-05
10 14 0.00769874 0.000144162 1.60489e-05
Table 4.14 Comparison of E_{C_1,N_disc,N_iter}(x^(1)_n, x^(2)_n), for TTL^{RC,4D}_2, M = N_disc = 20, 200, 2,000, 20,000
Table 4.15 Comparison of computation times (in seconds) for the generation of N_iter PRNs for TTL^{RC,pD}_2 with p = 2 to 5, and various values of N_iter
N_iter   p = 2   p = 3   p = 4   p = 5
10 4 0.000146 0.000216 0.000161 0.000142
10 5 0.000216 0.000277 0.000262 0.000339
10 6 0.001176 0.002403 0.001681 0.002467
10 7 0.011006 0.016195 0.018968 0.022351
10 8 0.113093 0.161776 0.166701 0.227638
10 9 1.09998 1.58949 1.60441 2.29003
10 10 11.4901 18.0142 18.537 26.1946
10 11 123.765 183.563 185.449 257.244
Table 4.16 Comparison of computation times (in 10^-10 s) for the generation of only one PRN for TTL^{RC,pD}_2 with p = 2 to 5, and various values of the number of iterates
N_iter   p = 2   p = 3   p = 4   p = 5
10 4 73.0 72.0 40.25 28.4
10 5 10.8 9.233 6.55 6.78
10 6 5.88 8.01 4.2025 4.934
10 7 5.503 5.39833 4.742 4.702
10 8 5.65465 4.0444 4.16753 4.55276
10 9 5.4999 5.2983 4.01103 4.58006
10 10 5.74505 4.50335 4.63425 5.23892
10 11 6.18825 6.11877 4.63622 5.14488
Therefore, TTL^{RC,5D}_2 can produce 100 billion random bits per second, an incredible pace! With a machine with 4 Intel Xeon E7-4870 processors having a total of 80 logical cores, the computation is twice as fast, producing 2 x 10^11 random bits per second. | 51,497 | [
"980387",
"8896"
] | [
"117617",
"21439",
"26"
] |
01767096 | en | [
"shs"
] | 2024/03/05 22:32:15 | 2018 | https://shs.hal.science/halshs-01767096/file/TCRCPv2b.pdf | Nicolas Drouhin
Theoretical considerations on the retirement consumption puzzle and the optimal age of retirement
Keywords: C61 D91 J26 life cycle theory of consumption and saving; optimal retirement, retirement consumption puzzle, discontinuous optimal control
principle of optimality, it provides a very general and parsimonious formula for determining the optimal age of retirement taking into account the possible discontinuity of the optimal consumption profile at the age of retirement.
Introduction
In this article, I build a model that address at the same time the retirement consumption puzzle and and the optimal age of retirement.
Since Hamermesh (1984a) many empirical studies document a drop in consumption at retirement, the retirement consumption puzzle [START_REF] Banks | Is there a retirement-savings puzzle?[END_REF][START_REF] Bernheim | What accounts for the variation in retirement wealth among us households?[END_REF]Battistin et al., 2009, among others). This phenomena is seen as puzzling and "paradoxical" because it seems in contradiction with the idea that, within the intertemporal choice model, which is the backbone of modern economics, when preferences are convex, consumption smoothing is the rule. Then, explanation of this paradox has been searched in relaxing some assumptions of the model of a fully rational forward looking agent. For example the agent may systematically underestimate the drop in earnings associated with retirement (Hamermesh, 1984a). Or, the agent may not be fully time consistent as in the hyperbolic discounting model [START_REF] Angeletos | The hyperbolic consumption model: Calibration, simulation, and empirical evaluation[END_REF].
Without denying that those phenomena may be important traits of "real" agents behavior, building on an insight of [START_REF] Banks | Is there a retirement-savings puzzle?[END_REF] in their conclusion, this paper will emphasize the point that a closer look at the intertemporal choice model of consumption and savings in continuous time allows to understand that what is smooth in the model is not necessarily consumption, but marginal utility of consumption. Of course, if consumption is the only variable of the utility function the two properties are equivalent. But if utility is multi-variate, any discontinuity in a dimension, may imply an optimal discontinuity response in the others. I will illustrate that insight into a very general model of inter-temporal choice that can be considered as a realistic generalisation of the basic one. Two ingredients will be required. First, I will assume a bi-variate, additively intertemporaly separable utility function that depends on consumption and leisure. Second I will assume, realistically, that retirement is not a smooth process with a per period duration of labor that tend progressively to zero, but a discontinuous process.
I will show that, as long as the per-period utility function is not additively separable in consumption and leisure, discontinuity of the consumption function is the rule in this general model. However, as insightful as the preceding statement is, it is not so easy to prove formally in full generality, because the assumptions imply a discontinuous payoff function, a case that is not standard for the usual intertemporal optimization techniques in continuous time. I will provide a general and simple lemma that makes the problem tractable and its resolution at the same time rigorous and insightful.
So, if we want to solve the paradox within a quite standard model of intertemporal choice, we have to drop additive separability of utility of consumption and leisure. And if we want to extend the problem to the choice of the optimal retirement age, we have to carry on with this non-separability. However, as pointed by d 'Albis et al. (2012) most of the study addressing the question has been made precisely under the assumption of additive separability in consumption and leisure (see d [START_REF] Albis | Endogenous retirement and monetary cycles[END_REF][START_REF] Bloom | Optimal retirement with increasing longevity[END_REF][START_REF] Boucekkine | Vintage human capital, demographic trends, and endogenous growth[END_REF][START_REF] Hazan | Longevity and lifetime labor supply: Evidence and implications[END_REF][START_REF] Heijdra | The individual life-cycle, annuity market imperfections and economic growth[END_REF][START_REF] Heijdra | Retirement, pensions, and ageing[END_REF][START_REF] Kalemli-Ozcan | Mortality change, the uncertainty effect, and retirement[END_REF][START_REF] Prettner | Increasing life expectancy and optimal retirement in general equilibrium[END_REF]Sheshinski, 1978, among others). And if there are some important papers that study a general life cycle model of consumption and savings, without additive separability of consumption and leisure [START_REF] Heckman | Life cycle consumption and labor supply: An explanation of the relationship between income and consumption over the life cycle[END_REF][START_REF] Heckman | A life-cycle model of earnings, learning, and consumption[END_REF][START_REF] Bütler | Neoclassical life-cycle consumption: a textbook example[END_REF] they are mostly focused on the the explanation of co-movement of earnings and consumption all over the life-cycle. Hamermesh (1984b) and [START_REF] Chang | Uncertain lifetimes, retirement and economic welfare[END_REF] study the retirement decision with non separability of consumption and leisure, but they fully endogenize the work decision, without any granularity concerning the per-period duration of worktime, and thus without any discontinuity of per period labor supply, implying model that are unable to explain at the same time retirement consumption paradox and the retirement decision. The model I propose can easily be expanded to endogenize the retirement decision and provide very general condition that fulfills the optimal age of retirement.
I will show that when optimal consumption is discontinuous at the age of retirement, this condition is qualitatively very different than in the traditional case.
The retirement consumption paradox
Let's assume that we are in a very standard continuous time life-cycle model of consumption and savings with preference for leisure and retirement.
P:   max_c \int_t^T e^{-\theta(s-t)} u(c(s), l(s)) ds
s.t. \forall s \in [t, T], \dot{a}(s) = r a(s) + w(s)(1 - l(s)) + b(s) - c(s),  a(t) given and a(T) >= 0.
t is the decision date and T is life duration, u is the per-period bi-variate utility function that depends on consumption and leisure. c is the intertemporal consumption profile, the control variable of the program; l is the intertemporal leisure profile, which I assume, in a first stance, to be exogenous. a is a life-cycle asset, the state variable of the program, that earns interest at the rate r. w is labor income per period when the individual spends all of his time working. b is the social security income profile, interpreted as a social security benefit when positive (typically after retirement) and as a social security contribution when negative (typically before retirement). l, c, w and b are assumed to be piecewise continuous and a is assumed to be piecewise smooth, assumptions that are fully compatible with the use of standard optimal control theory. I assume that the utility function satisfies the standard minimum requirements of the microeconomic theory of the consumption/leisure trade-off: u_1 > 0, u_2 > 0, u_{11} < 0, u_{22} < 0 and quasi-concavity (i.e. the indifference curves are convex). It implies that -u_{11} u_2^2 + 2 u_{12} u_1 u_2 - u_{22} u_1^2 > 0.
It is important to notice that, without further assumptions, the sign of the second order crossed derivative is undetermined.
I will assume that there exists a retirement age t R such that:
\forall s \in [t, t_R), l(s) = \kappa < 1;   \forall s \in [t_R, T], l(s) = 1.
Of course this assumption is a simplification, but it allows to characterize directly the central idea of the paper: retirement is fundamentally a discontinuity in the labor/leisure profile. This assumption seems much more realistic than usual idea that retirement is the smooth process with per period work duration tending to zero at the age of retirement.1
I denote c * the optimal consumption profile, solution of the program P and a * the associated value of the state variable. Of course those optimal functions are parameterized by all the given of the problem (t, t R , T, a(t), r, w, b, l).
I denote V * the optimal value of the problem i. e.
V * (t, t R , T, a(t), r, w) = T t e -θ(s-t) u (c * (t, t R , T, a(t), r, w, b, l, s), l(s)) ds
Because of the discontinuity of the instantaneous payoff function in t R , the problem is non standard. Therefore it is useful to decompose the problem in two separate ones:
P^0:  max_c \int_t^{t_R} e^{-\theta(s-t)} u(c(s), \kappa) ds   s.t. \dot{a}(s) = r a(s) + (1-\kappa) w(s) + b(s) - c(s),  a(t), a(t_R) given.
P^1:  max_c \int_{t_R}^T e^{-\theta(s-t)} u(c(s), 1) ds   s.t. \dot{a}(s) = r a(s) + b(s) - c(s),  a(t_R) given and a(T) >= 0.
I denote c^0 and c^1 the optimal consumption profiles, solutions of P^0 and P^1 respectively. As c^*, they are also implicit functions of the parameters of their respective program, and I can define the value functions of P^0 and P^1.
V 0 (t, t R , a(t), a(t R ), r, w, b, κ) = t R t e -θ(s-t) u c 0 (t, t R , a(t), a(t R ), r, w, b, κ), κ ds V 1 (t R , T, a(t R ), r, b) = T t R e -θ(s-t) u c 1 (t R , T, a(t R ), r, b, s), 1 ds
The two programs are linked by the asset level at the age of retirement. By application of the optimality principle, I can deduce:
Lemma 1 (A Principle of Optimality).
If (c^*, a^*) is an admissible pair solution of program P then we have:
1. V^*(t, t_R, T, a(t), r, w) = V^{0*}(t, t_R, a(t), a^*(t_R), r, w) + V^{1*}(t_R, T, a^*(t_R), r, w)
2. a^*(t_R) = argmax_{a(t_R)} {V^0(t, t_R, a(t), a(t_R), r, w) + V^1(t_R, T, a(t_R), r, w)}
Proof: It is a direct application of the [START_REF] Bellman | Dynamic programming[END_REF] principle of optimality.
I have now all the material to solve the program P.
Proposition 1 (Discontinuity of the consumption profile).
If I denote c^0(t_R) := lim_{s -> t_R} c^0(s), and restrict my analysis to per-period utility with a second order cross derivative that is either everywhere strictly positive, everywhere strictly negative, or everywhere equal to zero:
1. The optimal consumption profile solution of program P is unique.
2. The optimal consumption profile solution of program P is continuous for every age s in [t, t_R) \cup (t_R, T].
3. In t_R, u_1(c^0(t_R), \kappa) = u_1(c^1(t_R), 1) and the continuity of the optimal consumption profile is determined solely by the cross derivative of the per-period utility function:
(a) c^0(t_R) > c^1(t_R) <=> u_{12}(c, l) < 0
(b) c^0(t_R) = c^1(t_R) <=> u_{12}(c, l) = 0
(c) c^0(t_R) < c^1(t_R) <=> u_{12}(c, l) > 0
Proof: Relying on Lemma 1, I start by solving the program P 0 and P 1 for a given a(t R ). Denoting µ 0 the costate variable, the Hamiltonian of the Program P 0 is:
H 0 (c(s), a(s), µ 0 (s), s) = e -θ(s-t) u (c (s) , κ) + µ 0 (s) [r a(s) + (1 -κ)w(s) + b(s) -c(s)] (1)
According to Pontryagin maximum principle the necessary condition for optimality is:
\forall s \in [t, t_R),  \partial H^0(.)/\partial c(s) = 0  =>  \mu^0(s) = e^{-\theta(s-t)} u_1(c(s), \kappa)   (2)
\forall s \in [t, t_R),  \partial H^0(.)/\partial a(s) = -\dot{\mu}^0(s)  =>  \dot{\mu}^0(s) = -r \mu^0(s)   (3)
\forall s \in [t, t_R),  \dot{a}(s) = r a(s) + (1-\kappa) w(s) + b(s) - c(s)   (4)
Moreover by construction of the Hamiltonian and Pontryagin maximum principle it is well known that:
∂V 0 (t, t R , a(t), a(t R ), r, w, b, κ) ∂a(t R ) = -µ 0 (t R ) (5)
Similarly for program P 1 , we have:
H^1(c(s), a(s), \mu^1(s), s) = e^{-\theta(s-t)} u(c(s), 1) + \mu^1(s) [r a(s) + b(s) - c(s)]   (6)
\forall s \in (t_R, T],  \partial H^1(.)/\partial c(s) = 0  =>  \mu^1(s) = e^{-\theta(s-t)} u_1(c(s), 1)   (7)
\forall s \in (t_R, T],  \partial H^1(.)/\partial a(s) = -\dot{\mu}^1(s)  =>  \dot{\mu}^1(s) = -r \mu^1(s)   (8)
\forall s \in (t_R, T],  \dot{a}(s) = r a(s) + b(s) - c(s)   (9)
\partial V^1(t_R, T, a(t_R), r, b)/\partial a(t_R) = \mu^1(t_R)   (10)
Moreover, P 1 being a constrained endpoint problem, we have to fulfill the transversality condition:
µ 1 (T )a(T ) = 0 ⇒ a(T ) = 0 (11) P 0 and P 1 verifying the standard strict concavity condition of their respective Hamiltonian, they both admit continuous and unique solution on their respective domain.
Let us now turn to the determination of the optimal value of the asset at the retirement date, a^*(t_R). Relying on the principle of optimality (Lemma 1), a necessary condition for a^*(t_R) to be a maximum of (V^0(.) + V^1(.)) is:
\partial V^0(.)/\partial a(t_R) + \partial V^1(.)/\partial a(t_R) = -\mu^0(t_R) + \mu^1(t_R) = 0  <=>  u_1(c^0(t_R), \kappa) = u_1(c^1(t_R), 1)   (12)
It is easy to check that the left hand term of the last equality is increasing in a(t_R) while the right hand one is decreasing, ensuring the uniqueness of a^*(t_R). If for all (c, l) in R_+ x [0, 1], u_{12} < 0, then u_1(c^0(t_R), 1) < u_1(c^0(t_R), \kappa). Because u_{11} < 0, we can then have u_1(c^0(t_R), \kappa) = u_1(c^1(t_R), 1) if and only if c^0(t_R) > c^1(t_R).
The reasoning is the same for the two other cases.
In this setting, a negative cross derivative of the per-period utility of consumption and leisure is necessary to obtain a discontinuous drop in consumption at the age of retirement, i.e. to resolve the retirement consumption puzzle. It means that, if we believe that the model is a proper simplification of the intertemporal choice of agents in the real world, the observation of that kind of drop informs us of the negative sign of the cross derivative. It may seem strange because many workhorse utility functions in labor economics, such as the Cobb-Douglas or the CES utility function, are characterized by a positive cross derivative.
However, it is important to notice that relying on a different model of intertemporal choice with full endogeneity of labor, [START_REF] Heckman | Life cycle consumption and labor supply: An explanation of the relationship between income and consumption over the life cycle[END_REF] also conclude that a negative cross derivative of the per period utility of consumption and leisure was required to explain the hump shape of the intertemporal consumption profile.
In this part, I have given a complete theoretical treatment of an idea that was alluded in [START_REF] Banks | Is there a retirement-savings puzzle?[END_REF] and in the "back-of-the-envelope calculation" in [START_REF] Battistin | The retirement consumption puzzle: Evidence from a regression discontinuity approach[END_REF]. This calculation was grounded on the following parametrical form:
u(c, l) = (c^{\alpha} l^{1-\alpha})^{1-\gamma} / (1 - \gamma)
with \gamma > 0 interpreted as the reciprocal of the intertemporal elasticity of substitution. They rightfully conclude that to solve the retirement consumption puzzle in this model, \gamma > 1 is required, but they miss the right insight for explaining it: in this model, \gamma fully captures the intensity of the response of consumption to a variation of the rate of interest only when leisure is fully endogenous, but in this case there will be no discontinuity in the consumption function. As we have shown, explaining such a discontinuity requires leisure to be exogenous at the age of retirement 2 ; then it is -c u_{11}/u_1 = \alpha(\gamma - 1) + 1 that captures the intensity of the response of consumption to a change of the rate of interest. Moreover, if the model is based on a Cobb-Douglas utility function, it is in fact a power transformation of a Cobb-Douglas, a transformation that can alter the sign of the second order cross derivative.
We have u 12 = α(1 -α)(1 -γ)c α(1-γ)-1 l (1-α)(1-γ)-1 . With this special parametrical form, the sign of the cross derivative of utility is fully given by the position of γ with respect to unity. When γ is higher than one, this cross derivative is negative explaining the downward discontinuity in consumption, as confirmed by the general statement of Proposition 1.3. The effect has nothing to do with the the intertemporal elasticity of substitution per se.
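A short worked computation makes the sign statement explicit; it only restates the differentiation of the parametric utility quoted above.

```latex
\begin{aligned}
u(c,l) &= \frac{\bigl(c^{\alpha} l^{1-\alpha}\bigr)^{1-\gamma}}{1-\gamma}
        = \frac{c^{\alpha(1-\gamma)}\, l^{(1-\alpha)(1-\gamma)}}{1-\gamma},\\
u_{1} &= \frac{\partial u}{\partial c}
       = \alpha\, c^{\alpha(1-\gamma)-1}\, l^{(1-\alpha)(1-\gamma)},\\
u_{12} &= \frac{\partial u_{1}}{\partial l}
        = \alpha(1-\alpha)(1-\gamma)\, c^{\alpha(1-\gamma)-1}\, l^{(1-\alpha)(1-\gamma)-1}.
\end{aligned}
```

Since \alpha \in (0, 1), the sign of u_{12} is the sign of (1 - \gamma): it is negative exactly when \gamma > 1, which is what drives the downward jump of consumption at retirement.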
2 Or at least a constraint for a minimum per-period work duration that is binding.
3 Optimal age of retirement
I have solved the program P with the age of retirement, t_R, being a parameter. I have all the material to characterize the optimal age of retirement, the one that maximizes the value of the program. In particular, the decomposition of the general program into two sub-programs delimited by the age of retirement allows to derive this optimal age of retirement in a parsimonious and elegant manner.
Proposition 2 (The optimal age of retirement).
When an interior solution exists, and denoting b^0(t_R) := lim_{s -> t_R^-} b(s) < 0, the optimal age of retirement \hat{t}_R is such that:
u(c^1(\hat{t}_R), 1) - u(c^0(\hat{t}_R), \kappa) = u_1(c^0(\hat{t}_R), \kappa) [ (1-\kappa) w(\hat{t}_R) + b^0(\hat{t}_R) - b(\hat{t}_R) + (c^1(\hat{t}_R) - c^0(\hat{t}_R)) ]   (13)
Proof: \hat{t}_R is a solution of max_{t_R}
V^*(t, t_R, T, a(t), r, w). Because V^* is continuous and differentiable in t_R, a necessary condition for having an interior solution is:
\partial V^*(t, t_R, T, a(t), r, w)/\partial t_R = 0   (14)
Relying on Lemma 1 and noting that, by construction of the Hamiltonian and the Pontryagin maximum principle,
\partial V^0(t, t_R, a(t), r, w)/\partial t_R = H^0(c^0(t_R), a^0(t_R), \mu^0(t_R), t_R)   and   \partial V^1(t_R, T, a(t), r, w)/\partial t_R = -H^1(c^1(t_R), a^1(t_R), \mu^1(t_R), t_R),
we can easily conclude that \hat{t}_R is such that:
H^0(c^0(\hat{t}_R), a^*(\hat{t}_R), \mu^0(\hat{t}_R), \hat{t}_R) = H^1(c^1(\hat{t}_R), a^*(\hat{t}_R), \mu^1(\hat{t}_R), \hat{t}_R)   (15)
Using the definitions of the Hamiltonian and first order conditions of program P 0 and P 1 and remembering that, in any case, a is continuous in t R , we get the right hand side.
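For the reader's convenience, the algebra behind this last step can be spelled out. Dividing both sides of (15) by e^{-\theta(\hat{t}_R - t)} and using \mu^0(\hat{t}_R) = \mu^1(\hat{t}_R) = e^{-\theta(\hat{t}_R - t)} u_1(c^0(\hat{t}_R), \kappa), which follows from (2), (7) and (12), equation (15) becomes

```latex
u\bigl(c^{0}(\hat t_R),\kappa\bigr)
+u_{1}\bigl(c^{0}(\hat t_R),\kappa\bigr)
 \bigl[r a^{*}(\hat t_R)+(1-\kappa)\,w(\hat t_R)+b^{0}(\hat t_R)-c^{0}(\hat t_R)\bigr]
=
u\bigl(c^{1}(\hat t_R),1\bigr)
+u_{1}\bigl(c^{0}(\hat t_R),\kappa\bigr)
 \bigl[r a^{*}(\hat t_R)+b(\hat t_R)-c^{1}(\hat t_R)\bigr].
```

The term r a^*(\hat{t}_R) cancels on both sides, and rearranging the remaining terms gives exactly Equation (13).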
This is a standard marginal condition for optimality. The left hand side of Equation ( 13) is the direct cost in utility of a marginal increase in the retirement age, while the right hand side is the indirect gain in utility due to supplementary resources generated by a longer work duration. The important and innovative point is that when taking into account the retirement consumption puzzle, the endogenous drop of consumption implies that less resources are required to maintain a same level of utility. Thus the earnings differential can be higher when the agents decide to retire.
Proposition 2 provides a very general characterisation of the optimal retirement age.
Moreover, when expanding consumption before and after retirement as implicit function of the parameters of the problem, and when endogenizing the budgetary constraint of the social security system, it allows to derive comparative static results on the optimal age of retirement.
Conclusion
This short paper provides a general methodology to resolve the retirement consumption puzzle and the choice of the optimal age of retirement. The principle is illustrated in a simple model of intertemporal choice in which utility depends on consumption and leisure, with a certain horizon. To solve the puzzle we need only two assumptions: 1. retirement implies a discontinuity in the intertemporal leisure profile and, 2. the cross-derivative of the utility function is negative. The method is general and can easily be extended to more realistic models with uncertain lifetime 3 .
This idea could be generalized by endogenizing per-period work duration while taking into account a granularity assumption: in general, for organizational reasons, work duration can either be zero or something significantly different from zero.
In a companion paper, I am actually working on a calibrated version of the model taking into account a realistic modeling of uncertain lifetime in the spirit of[START_REF] Drouhin | A rank-dependent utility model of uncertain lifetime[END_REF] and the possibility of a non stationary intertemporal utility, allowing for per period utility to change with age in the spirit of[START_REF] Drouhin | Non stationary additive utility and time consistency[END_REF] | 19,881 | [
"749195"
] | [
"2579",
"523723"
] |
01767230 | en | [
"info"
] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01767230/file/QCGA2018.pdf | Stéphane Breuils
Vincent Nozick
Akihiro Sugimoto
Eckhard Hitzer
Quadric Conformal Geometric Algebra of R 9,6
Keywords: Mathematics Subject Classification (2010). Primary 99Z99; Secondary 00A00 Quadrics, Geometric Algebra, Conformal Geometric Algebra, Clifford Algebra
Introduction
Geometric Algebra provides useful and, more importantly, intuitively understandable tools to represent, construct and manipulate geometric objects. Intensively explored by physicists, Geometric Algebra has been applied in quantum mechanics and electromagnetism [START_REF] Doran | Geometric algebra for physicists[END_REF]. Geometric Algebra has also found some interesting applications in data manipulation for Geographic Information Systems (GIS) [START_REF] Luo | A Hierarchical Representation and Computation Scheme of Arbitrary-dimensional Geometrical Primitives Based on CGA[END_REF]. More recently, it turns out that Geometric Algebra can be applied even in computer graphics, either to basic geometric primitive manipulations [START_REF] Vince | Geometric algebra for computer graphics[END_REF] or to more complex illumination processes as in [START_REF] Papaefthymiou | Real-time rendering under distant illumination with conformal geometric algebra[END_REF] where spherical harmonics are substituted by Geometric Algebra entities.
This paper presents a Geometric Algebra framework to handle quadric surfaces which can be applied to detect collision in computer graphics, and to calibrate omnidirectional cameras, usually embedding a mirror with a quadric surface, in computer vision. Handling quadric surfaces in Geometric Algebra requires a high dimensional space to work as seen in subsequent sections. Nevertheless, no low-dimensional Geometric Algebra framework is yet introduced that handles general orientation quadric surfaces and their construction using contact points.
High dimensional Geometric Algebras
Following Conformal Geometric Algebra (CGA) and its well-defined properties [START_REF] Dorst | Geometric algebra for computer science, an object-oriented approach to geometry[END_REF], the Geometric Algebra community recently started to explore new frameworks that work in higher dimensional spaces. The motivation for this direction is to increase the dimension of the relevant Euclidean space (R n with n > 3) and/or to investigate more complex geometric objects.
The Geometric Algebra of R p,q is denoted by G p,q where p is the number of basis vectors squared to +1 and q is that of basis vectors squared to -1. Then, the CGA of R 3 is denoted by G 4,1 . Extending from dimension 5 to 6 leads to G 3,3 defining either 3D projective geometry (see Dorst [START_REF] Dorst | 3d oriented projective geometry through versors of R 3,3[END_REF]) or line geometry (see Klawitter [START_REF] Klawitter | A Clifford algebraic approach to line geometry[END_REF]). Conics in R 2 are represented by the conic space of Perwass [START_REF] Perwass | Geometric algebra with applications in engineering[END_REF] with G 5,3 . Conics in R 2 are also defined by the Double Conformal Geometric Algebra (DCGA) with G 6,2 introduced by Easter and Hitzer [START_REF] Easter | Double conformal geometric algebra[END_REF]. DCGA is extended to handle cubic curves (and some other even higher order curves) in the Triple Conformal Geometric Algebra with G 9,3 [START_REF] Easter | Triple conformal geometric algebra for cubic plane curves[END_REF] and in the Double Conformal Space-Time Algebra with G 4,8 [START_REF] Benjamin | Double conformal space-time algebra[END_REF]. We note that the dimension of the algebras generated by any n-dimensional vector spaces (n = p + q) grows exponentially as they have 2 n basis elements. Although most multivectors are extremely sparse, very few implementations exist that can handle high dimensional Geometric Algebras. This problem is discussed further in Section 7.2.
Geometric Algebra and quadric surfaces
A framework to handle quadric surfaces was introduced by Zamora [START_REF] Zamora-Esquivel | G 6,3 geometric algebra; description and implementation[END_REF] for the first time. Though this framework constructs a quadric surface from control points, it supports only axis-aligned quadric surfaces.
There exist two main Geometric Algebra frameworks to manipulate general quadric surfaces.
On one hand, DCGA with G 8,2 , defined by Easter and Hitzer [START_REF] Easter | Double conformal geometric algebra[END_REF], constructs quadric and more general surfaces from their implicit equation coefficients specified by the user. A quadric (torus, Dupin-or Darboux cyclide) is represented by a bivector containing 15 coefficients that are required to construct the implicit equation of the surface. This framework preserves many properties of CGA and thus supports not only object transformations using versors but also differential operators. However, it is incapable of handling the intersection between two general quadrics and, to our best knowledge, cannot construct general quadric surfaces from control points.
On the other hand, quadric surfaces are also represented in a framework of G 4,4 as first introduced by Parkin [START_REF] Spencer | A model for quadric surfaces using geometric algebra[END_REF] and developed further by Du et al. [START_REF] Du | Modeling 3D Geometry in the Clifford Algebra R 4,4[END_REF]. Since this framework is based on a duplication of the projective geometry of R 3 , it is referred to as Double Perspective Geometric Algebra (DPGA) hereafter. DPGA represents quadric surfaces by bivector entities. The quadric expression, however, comes from a so-called "sandwiching" duplication of the product. DPGA handles quadric intersection and conics. It also handles versors transformations. However, to our best knowledge, it cannot construct general quadric surfaces from control points. This incapability seems true because, for example, wedging 9 control points together in this space results in 0 due to its vector space dimension.
Contributions
Our proposed framework, referred to as Quadric Conformal Geometric Algebra (QCGA) hereafter, is a new type of CGA, specifically dedicated to quadric surfaces. Through generalizing the conic construction in R 2 by Perwass [START_REF] Perwass | Geometric algebra with applications in engineering[END_REF], QCGA is capable of constructing quadric surfaces using either control points or implicit equations. Moreover, QCGA can compute the intersection of quadric surfaces, the surface tangent, and normal vectors for a quadric surface point.
Notation
We use the following notation throughout the paper. Lower-case bold letters denote basis blades and multivectors (multivector a). Italic lower-case letters refer to multivector components (a 1 , x, y 2 , • • • ). For example, a i is the i th coordinate of the multivector a. Constant scalars are denoted using lowercase default text font (constant radius r). The superscript star used in x * represents the dualization of the multivector x. Finally, subscript on x refers to the Euclidean vector associated with the point x of QCGA.
Note that in geometric algebra, the inner product, contractions and outer product have priority over the full geometric product. For instance, a ∧ bI = (a ∧ b)I.
QCGA definition
This section introduces QCGA. We specify its basis vectors and give the definition of a point.
QCGA basis and metric
The QCGA G 9,6 is defined over a 15-dimensional vector space. The base vectors of the space R 9,6 are basically divided into three groups: {e 1 , e 2 , e 3 } (corresponding to the Euclidean vectors in R 3 ), {e o1 , e o2 , e o3 , e o4 , e o5 , e o6 }, and {e ∞1 , e ∞2 , e ∞3 , e ∞4 , e ∞5 , e ∞6 }. The inner products between them are as defined in Table 1.
Table 1. Inner products between the QCGA basis vectors: e_i . e_j = delta_ij for i, j in {1, 2, 3}; e_oi . e_infi = e_infi . e_oi = -1 for i in {1, ..., 6}; all other inner products vanish.
For some computation constraints, a diagonal metric matrix may be required. The orthonormal vector basis of R^{9,6} composed of {e_1, e_2, e_3} together with six basis vectors {e_{+1}, ..., e_{+6}}, each of which squares to +1, along with six other basis vectors {e_{-1}, e_{-2}, e_{-3}, e_{-4}, e_{-5}, e_{-6}}, each of which squares to -1, corresponds to a diagonal metric matrix. The transformation from the original basis to this new basis (with diagonal metric) can be defined as follows:
e ∞i = e +i + e -i , e oi = 1 2 (e -i -e +i ), i ∈ {1, • • • , 6}. (2.1)
For clarity, we also define the 6-blades
I ∞ = e ∞1 ∧ e ∞2 ∧ e ∞3 ∧ e ∞4 ∧ e ∞5 ∧ e ∞6 , I o = e o1 ∧ e o2 ∧ e o3 ∧ e o4 ∧ e o5 ∧ e o6 , (2.2)
the 5-blades
I ∞ = (e ∞1 -e ∞2 ) ∧ (e ∞2 -e ∞3 ) ∧ e ∞4 ∧ e ∞5 ∧ e ∞6 , I o = (e o1 -e o2 ) ∧ (e o2 -e o3 ) ∧ e o4 ∧ e o5 ∧ e o6 , (2.3)
the pseudo-scalar of
R 3 I = e 1 ∧ e 2 ∧ e 3 , (2.4)
and the pseudo-scalar
I = I ∧ I ∞ ∧ I o . (2.5)
The inverse of the pseudo-scalar results in
I -1 = -I. (2.6)
The dual of a multivector indicates division by the pseudo-scalar, e.g., a * = -aI, a = a * I. From eq. (1.19) in [START_REF] Hitzer | Carrier method for the general evaluation and control of pose, molecular conformation, tracking, and the like[END_REF], we have the useful duality between outer and inner products of non-scalar blades a and b in Geometric Algebra:
(a ∧ b) * = a • b * , a ∧ (b * ) = (a • b) * , a ∧ (bI) = (a • b)I, (2.7)
which indicates that
a ∧ b = 0 ⇔ a • b * = 0, a • b = 0 ⇔ a ∧ b * = 0. (2.8)
Point in QCGA
The point x of QCGA corresponding to the Euclidean point x = xe 1 +ye 2 + ze 3 ∈ R 3 is defined as
x = (x e_1 + y e_2 + z e_3) + (1/2)(x^2 e_inf1 + y^2 e_inf2 + z^2 e_inf3) + xy e_inf4 + xz e_inf5 + yz e_inf6 + e_o1 + e_o2 + e_o3.
(2.9) Note that the null vectors e o4 , e o5 , e o6 are not present in the definition of the point. This is merely to keep the convenient properties of CGA points, namely, the inner product between two points is identical with the squared distance between them. Let x 1 and x 2 be two points, their inner product is from which together with Table 1, it follows that
x_1 . x_2 = (x_1 e_1 + y_1 e_2 + z_1 e_3 + (1/2) x_1^2 e_inf1 + (1/2) y_1^2 e_inf2 + (1/2) z_1^2 e_inf3 + x_1 y_1 e_inf4 + x_1 z_1 e_inf5 + y_1 z_1 e_inf6 + e_o1 + e_o2 + e_o3) . (x_2 e_1 + y_2 e_2 + z_2 e_3 + (1/2) x_2^2 e_inf1 + (1/2) y_2^2 e_inf2 + (1/2) z_2^2 e_inf3 + x_2 y_2 e_inf4 + x_2 z_2 e_inf5 + y_2 z_2 e_inf6 + e_o1 + e_o2 + e_o3),   (2.10)
from which together with Table 1, it follows that
x 1 • x 2 = x 1 x 2 + y 1 y 2 + z 1 z 2 - 1 2 x 2 1 - 1 2 x 2 2 - 1 2 y 2 1 - 1 2 y 2 2 - 1 2 z 2 1 - 1 2 z 2 2 = - 1 2 x 1 -x 2 2 .
(2.11)
We see that the inner product is equivalent to minus half the squared Euclidean distance between x 1 and x 2 .
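As a sanity check, the distance property (2.11) can be verified numerically by representing vectors of R^{9,6} as coordinate arrays over the basis (e_1, e_2, e_3, e_o1, ..., e_o6, e_inf1, ..., e_inf6) and encoding the inner products of Table 1 in a 15 x 15 matrix. The sketch below is ours and does not rely on any particular Geometric Algebra library.

```python
import numpy as np

# Basis order: e1, e2, e3, eo1..eo6, einf1..einf6 (15 basis vectors).
G = np.zeros((15, 15))
G[0, 0] = G[1, 1] = G[2, 2] = 1.0          # e_i . e_i = 1 for i = 1, 2, 3
for k in range(6):                          # e_ok . e_infk = e_infk . e_ok = -1
    G[3 + k, 9 + k] = G[9 + k, 3 + k] = -1.0

def qcga_point(x, y, z):
    # Embedding (2.9): Euclidean part, eo1 + eo2 + eo3, halved squares on
    # einf1..einf3 and the cross terms xy, xz, yz on einf4..einf6.
    p = np.zeros(15)
    p[0:3] = [x, y, z]
    p[3:6] = 1.0
    p[9:12] = [0.5 * x * x, 0.5 * y * y, 0.5 * z * z]
    p[12:15] = [x * y, x * z, y * z]
    return p

def inner(a, b):
    return a @ G @ b

x1, x2 = qcga_point(1.0, -2.0, 0.5), qcga_point(-0.3, 0.7, 2.0)
dist2 = (1.0 + 0.3) ** 2 + (-2.0 - 0.7) ** 2 + (0.5 - 2.0) ** 2
print(np.isclose(inner(x1, x2), -0.5 * dist2))   # True: x1 . x2 = -|x1 - x2|^2 / 2
```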
QCGA objects
QCGA is an extension of CGA, thus the objects defined in CGA are also defined in QCGA. The following sections explore the plane, the line, and the sphere to show their definitions in QCGA, and similarity between these objects in CGA and their counterparts in QCGA.
3.1. Plane 3.1.1. Primal plane. As in CGA, a plane π in QCGA is computed using the wedge of three linearly independent points x 1 , x 2 , and x 3 on the plane:
π = x 1 ∧ x 2 ∧ x 3 ∧ I ∞ ∧ I o . (3.1)
The multivector π corresponds to the primal form of a plane in QCGA, with grade 14, composed of six components. The e_o2o3, e_o1o3, e_o1o2 components have the same coefficient and can thus be factorized, resulting in a form defined with only four coefficients x_n, y_n, z_n and h:
π = x_n (...)   (3.2)
Proposition 3.1. A point x lies on the plane π iff x ∧ π = 0.
Proof. Using distributivity and anticommutativity of the outer product, we obtain
x ∧ π = (x x_n + y y_n + z z_n - (1/3) h (1 + 1 + 1)) I = (x x_n + y y_n + z z_n - h) I = (x . n - h) I,   (3.4)
which corresponds to the Hessian form of the plane with Euclidean normal n = x_n e_1 + y_n e_2 + z_n e_3 and with orthogonal distance h from the origin.
3.1.2. Dual plane. Proposition 3.2. A point x lies on the dual plane π* iff x . π* = 0.
Proof. Consequence of (2.8).
Because of (2.11), a plane can also be obtained as the bisection plane of two points x 1 and x 2 in a similar way as in CGA.
Proposition 3.3. The dual plane π * = x 1 -x 2 is the dual orthogonal bisecting plane between the points x 1 and x 2 .
Proof. From Proposition 3.2, every point x on π * satisfies x • π * = 0,
x • (x 1 -x 2 ) = x • x 1 -x • x 2 = 0. (3.6)
As seen in (2.11), the inner product between two points results in the squared Euclidean distance between the two points. We thus have
x • (x 1 -x 2 ) = 0 ⇔ x -x 1 2 = x -x 2 2 . (3.7)
This corresponds to the equation of the orthogonal bisecting dual plane between x 1 and x 2 .
3.2. Line 3.2.1. Primal line. A primal line l is a 13-vector constructed from two linearly independent points x 1 and x 2 as follows:
l = x 1 ∧ x 2 ∧ I ∞ ∧ I o . (3.8)
The outer product between the 6-vector I ∞ and the two points x 1 and x 2 removes all their e ∞i components (i ∈ {1, • • • , 6}). Accordingly, they can be reduced to x 1 = (e o1 + e o2 + e o3 + x 1 ) and x 2 = (e o1 + e o2 + e o3 + x 2 ) respectively. For clarity, (3.8) is simplified "in advance" as
l = x 1 ∧ (e o1 + e o2 + e o3 + x 2 ) ∧ I ∞ ∧ (e o1 -e o2 ) ∧ (e o2 -e o3 ) ∧ e o4o5o6 = x 1 ∧ (x 2 ∧ (e o1 -e o2 ) ∧ (e o2 -e o3 ) + 3e o1 ∧ e o2 ∧ e o3 ) ∧ I ∞ ∧ e o4o5o6 = 3e o1 ∧ e o2 ∧ e o3 ∧ (x 2 -x 1 ) + x 1 ∧ x 2 ∧ (e o1 -e o2 ) ∧ (e o2 -e o3 ) ∧ I ∞ ∧ e o4o5o6 .
(3.9)
Setting u = x 2 -x 1 and v = x 1 ∧ x 2 gives l = 3e o1 ∧ e o2 ∧ e o3 ∧ u + v ∧ (e o1o2 -e o1o3 + e o2o3 ) ∧ I ∞ ∧ e o4o5o6 = -3 u I ∞ ∧ I o + v I ∞ ∧ I o . (3.10)
Note that u and v correspond to the 6 Plücker coefficients of a line in R 3 . More precisely, u is the support vector of the line and v is its moment.
Proposition 3.4. A point x with Euclidean coordinates x lies on the line l iff x ∧ l = 0.
Proof.
x ∧ l = (x + e o1 + e o2 + e o3 ) ∧ (-
3 u I ∞ ∧ I o + v I ∞ ∧ I o ) = -3x ∧ u I ∞ ∧ I o + x ∧ v I ∞ ∧ I o + v I ∞ ∧ (e o1 + e o2 + e o3 ) ∧ I o = -3(x ∧ u -v ) I ∞ ∧ I o + x ∧ v I ∞ ∧ I o . (3.11)
The 6-blade I ∞ ∧ I o and the 5-blade I ∞ ∧ I o are linearly independent. Therefore, x ∧ l = 0 yields
x ∧ l = 0 ⇔ x ∧ u = v , x ∧ v = 0. (3.12)
As x , u and v are Euclidean entities, (3.12) corresponds to the Plücker equations of a line [START_REF] Kanatani | Understanding geometric algebra: Hamilton, Grassmann, and Clifford for Computer Vision and Graphics[END_REF].
Dual line.
Dualizing the entity l consists in computing with duals: Proof. Consequence of (2.8).
l * = (-3 u I ∞ ∧ I o + v I ∞ ∧ I o )(-I) = 3 u I + (e ∞3 + e ∞2 + e ∞1 ) ∧ v I . ( 3
Note that a dual line l * can also be constructed from the intersection of two dual planes as follows:
l * = π * 1 ∧ π * 2 .
(3.14)
3.3. Sphere 3.3.1. Primal sphere. We define a sphere s using four points as the 14-blade
s = x 1 ∧ x 2 ∧ x 3 ∧ x 4 ∧ I ∞ ∧ I o . (3.15)
The outer product of the points with I ∞ removes all e ∞4 , e ∞5 , e ∞6 components of these points, i.e., the cross terms (xy, xz, and yz). The same remark holds for I o and e o4 , e o5 , e o6 . For clarity, we omit these terms below. We thus have
s = x 1 ∧x 2 ∧x 3 ∧ 1
s =x 1 ∧ x 2 ∧ (x 3 ∧ x 4 I o ∧ I ∞ + 3(x 3 -x 4 )I o ∧ I ∞ + 1 2 x 4 2 x 3 I ∞ ∧ I o - 1 2 x 3 2 x 4 I ∞ ∧ I o (3.17) + 3 2 ( x 4 2 -x 3 2 )I o ∧ I ∞ .
Again we remark that the resulting entity has striking similarities with a point pair of CGA. More precisely, let c be the Euclidean midpoint between the two entities x 3 and x 4 , d be the unit vector from x 3 to x 4 , and r be half of the Euclidean distance between the two points in exactly the same way as Hitzer et al in [START_REF] Hitzer | Carrier method for the general evaluation and control of pose, molecular conformation, tracking, and the like[END_REF], namely
2r = |x 3 -x 4 | , d = x 3 -x 4 2r , c = x 3 + x 4 2 . (3.18)
Then, (3.17) can be rewritten by
s =x 1 ∧ x 2 ∧ 2r d ∧ c I o ∧ I ∞ (3.19) + 3d I o ∧ I ∞ + 1 2 (c 2 + r 2 )d -2c c • d I ∞ ∧ I o .
The bottom part corresponds to a point pair, as defined in [START_REF] Hitzer | Carrier method for the general evaluation and control of pose, molecular conformation, tracking, and the like[END_REF], that belongs to the round object family. Applying the same development to the two points x 1 and x 2 again results in round objects:
s = - 1 6 x c 2 -r 2 I ∧ I ∞ ∧ I o + e 123 ∧ I ∞ ∧ I o + (x c I ) ∧ I ∞ ∧ I o . (3.20)
Note that x c corresponds to the center point of the sphere and r to its radius. It can be further simplified into
s = x c - 1 6 r 2 (e ∞1 + e ∞2 + e ∞3 ) I, (3.21)
which is dualized to
s * = x c - 1 6 r 2 (e ∞1 + e ∞2 + e ∞3 ), (3.22)
where x c corresponds to x c without the cross terms xy, xz, yz. Since a QCGA point has no e o4 , e o5 , e o6 components, building a sphere with these cross terms is also valid. However, inserting these cross terms (that actually do not appear in the primal form) raises some issues in computing intersections with other objects.
Proposition 3.6. A point x lies on the sphere s iff x ∧ s = 0.
Proof. Since the components e ∞4 , e ∞5 and e ∞6 of x are removed by the outer product with s of (3.17), we ignore them to obtain
x ∧ s = x ∧ (s * I) = x • s * I (3.23) = x + e o1 + e o2 + e o3 + 1 2 x 2 e ∞1 + 1 2 y 2 e ∞2 + 1 2 z 2 e ∞3 (3.24) • x c - 1 6 r 2 (e ∞1 + e ∞2 + e ∞3 ) I,
which can be rewritten by
x ∧ s = xx c + yy c + zz c - 1 2 x 2 c - 1 6 r 2 - 1 2 y 2 c - 1 6 r 2 - 1 2 z 2 c - 1 6 r 2 - 1 2 x 2 - 1 2 y 2 - 1 2 z 2 I = 0. (3.25)
This can take a more compact form defining a sphere Proof. Consequence of (2.8).
(x -x c ) 2 + (y -y c ) 2 + (z -z c ) 2 = r 2 . ( 3
Quadric surfaces
This section describes how QCGA handles quadric surfaces. All QCGA objects defined in Section 3 become thus part of a more general framework.
Primal quadric surfaces
The implicit formula of a quadric surface in R 3 is F (x, y, z) = ax 2 + by 2 + cz 2 + dxy + exz + fyz + gx + hy + iz + j = 0. (4.1)
A quadric surface is constructed by wedging 9 points together with 5 null basis vectors as follows
q = x 1 ∧ x 2 ∧ • • • ∧ x 9 ∧ I o . (4.2)
The multivector q corresponds to the primal form of a quadric surface with grade 14 and 12 components. Again 3 of these components have the same coefficient and can be combined together into the form defined by 10 coefficients a, b, . . . , j, as in q = e 123 2ae o1 + 2be o2 + 2ce o3 + de o4 + ee o5 + fe o6 where in the second equality we used the duality property. The expression for the dual quadric vector is therefore q * = -2ae o1 + 2be o2 + 2ce o3 + de o4 + ee o5 + fe o6
+ ge 1 + he 2 + ie 3 -j 3 (e ∞1 + e ∞2 + e ∞3 ). (4.4)
Proposition 4.1. A point x lies on the quadric surface q iff x ∧ q = 0.
Proof.
x ∧ q = x ∧ (q* I) = (x . q*) I = x . [ -(2a e_o1 + 2b e_o2 + 2c e_o3 + d e_o4 + e e_o5 + f e_o6) + g e_1 + h e_2 + i e_3 - (j/3)(e_inf1 + e_inf2 + e_inf3) ] I = (a x^2 + b y^2 + c z^2 + d xy + e xz + f yz + g x + h y + i z + j) I.   (4.5)
This corresponds to the formula (4.1) representing a general quadric surface.
Dual quadric surfaces
The dualization of a primal quadric surface leads to the 1-vector dual quadric surface q * of (4.4). We have the following proposition whose proof is a consequence of (2.8).
Proposition 4.2. A point x lies on the dual quadric surface q * iff x • q * = 0.
Normals and tangents
This section presents the computation of the normal Euclidean vector n and the tangent plane π * of a point x (associated to the Euclidean point x = xe 1 + ye 2 + ye 3 ) on a dual quadric surface q * . The implicit formula of the dual quadric surface is considered as the following scalar field
F (x, y, z) = x • q * . ( 5.1)
The normal vector n of a point x is computed as the gradient of the implicit surface (scalar field) at x:
n = ∇F (x, y, z) = ∂F (x, y, z) ∂x e 1 + ∂F (x, y, z) ∂y e 2 + ∂F (x, y, z) ∂z e 3 . (5.2)
Since the partial derivative with respect to the x component is defined by
∂F (x, y, z) ∂x = lim h →0 F (x + h, y, z) -F (x, y, z) h , (5.3)
we have
∂F (x, y, z) ∂x = lim h →0 x 2 • q * -x • q * h = lim h →0 x 2 -x h • q * , (5.4)
where x 2 is the point obtained by translating x along the x-axis by the value h. Note that x 2 -x represents the dual orthogonal bisecting plane spanned by x 2 and x (see Proposition 3.3). Accordingly, we have
lim h →0 x 2 -x h = xe ∞1 + ye ∞4 + ze ∞5 + e 1 = (x • e 1 )e ∞1 + (x • e 2 )e ∞4 + (x • e 3 )e ∞5 + e 1 . (5.5)
This argument can also be applied to the partial derivative with respect to the y and z components. Therefore, we obtain
n = (x • e 1 )e ∞1 + (x • e 2 )e ∞4 + (x • e 3 )e ∞5 + e 1 • q * e 1 + (x • e 2 )e ∞2 + (x • e 1 )e ∞4 + (x • e 3 )e ∞6 + e 2 • q * e 2 + (x • e 3 )e ∞3 + (x • e 1 )e ∞5 + (x • e 2 )e ∞6 + e 3 • q * e 3 . (5.6)
On the other hand, the tangent plane at a surface point x can be computed from the Euclidean normal vector n and the point x. Since the plane orthogonal distance from the origin is -2(e_o1 + e_o2 + e_o3) . x, the tangent plane π* is obtained as
π* = n + (1/3)(e_inf1 + e_inf2 + e_inf3) (-2 (e_o1 + e_o2 + e_o3) . x).
(5.7)
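In coordinates, the gradient in (5.6) reduces to differentiating the implicit equation (4.1); the small helper below works directly on the ten coefficients (a, ..., j) rather than on multivectors, which is an implementation choice of this sketch, not the QCGA formulation itself.

```python
def quadric_normal(coeffs, x, y, z):
    # coeffs = (a, b, c, d, e, f, g, h, i, j) of
    # F = a x^2 + b y^2 + c z^2 + d xy + e xz + f yz + g x + h y + i z + j.
    a, b, c, d, e, f, g, h, i, j = coeffs
    nx = 2 * a * x + d * y + e * z + g
    ny = 2 * b * y + d * x + f * z + h
    nz = 2 * c * z + e * x + f * y + i
    return (nx, ny, nz)   # gradient of F, i.e. the (unnormalized) surface normal

# Tangent plane at (x, y, z): the points p with n . (p - (x, y, z)) = 0,
# matching the Hessian-form construction used for (5.7).
```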
Intersections
Let us consider two geometric objects corresponding to dual quadrics1 a * and b * . Assuming that the two objects are linearly independent, i.e., a * and b * are linearly independent, we consider the outer product c * of these two objects c * = a * ∧ b * . (6.1) If a point x lies on c * , then
x • c * = x • (a * ∧ b * ) = 0. (6.2)
The inner product computation of (6.2) leads to
x • c * = (x • a * )b * -(x • b * )a * = 0. (6.3)
Our assumption of linear independence between a * and b * indicates that (6.3) holds if and only if x • a * = 0 and x • b * = 0, i.e. the point x lies on both quadrics. Thus, c * = a * ∧ b * represents the intersection of the linearly independent quadrics a * and b * , and a point x lies on this intersection if and only if x • c * = 0.
6.1. Quadric-Line intersection For example, in computer graphics, making a Geometric Algebra compatible with a raytracer requires only to be able to compute a surface normal and a line-object intersection. This section defines the line-quadric intersection.
Similarly to (6.1), the intersection x_± between a dual line l* and a dual quadric q* is computed from l* ∧ q*. Any point x lying on the line l defined by two points x_1 and x_2 can be represented by the parametric formula x = α(x_1 - x_2) + x_2 = αu + x_2. Note that u could also be computed directly from the dual line l* (see (3.13)). Any point x_2 ∈ l can be used, in particular the closest point of l to the origin, i.e. x_2 = v • u^{-1}. Accordingly, computing the intersection between the dual line l* and the dual quadric q* becomes equivalent to finding α such that x lies on the dual quadric, i.e., x • q* = 0, which leads to a second-degree equation in α of the form (4.1). The problem thus reduces to computing the roots of this equation. However, four cases have to be considered: the line may be tangent to the quadric, the intersection may be empty, the line may intersect the quadric in two points, or one of the two points may lie at infinity. To identify each case, we use the discriminant δ defined as:
δ = β² - 4 (x_2 • q*) Σ_{i=1}^{6} (u • e_oi)(q* • e_∞i), (6.4)
where
β = 2u • ( a(x_2 • e_1)e_1 + b(x_2 • e_2)e_2 + c(x_2 • e_3)e_3 ) + d( (u ∧ e_1) • (x_2 ∧ e_2) + (x_2 ∧ e_1) • (u ∧ e_2) ) + e( (u ∧ e_1) • (x_2 ∧ e_3) + (x_2 ∧ e_1) • (u ∧ e_3) ) + f( (u ∧ e_2) • (x_2 ∧ e_3) + (x_2 ∧ e_2) • (u ∧ e_3) ) + q* • u. (6.5)
If δ < 0, the line does not intersect the quadric (the solutions are complex). If δ = 0, the line and the quadric are tangent. If δ > 0 and Σ_{i=1}^{6} (u • e_oi)(q* • e_∞i) = 0, we have only one intersection point (linear equation). Otherwise, we have two different intersection points x_± computed by
x_± = u(-β ± √δ) / ( 2 Σ_{i=1}^{6} (u • e_oi)(q* • e_∞i) ) + x_2. (6.6)
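The same case analysis can be reproduced at the level of the ten scalar coefficients by substituting x(α) = x_2 + αu into (4.1) and solving the resulting quadratic in α; the sketch below (ours, not the QCGA implementation) parallels (6.4)-(6.6):

import math

def line_quadric_intersection(coeffs, x2, u, eps=1e-12):
    """Intersect the line alpha -> x2 + alpha*u with the quadric F = 0; returns the intersection points."""
    a, b, c, d, e, f, g, h, i, j = coeffs
    ux, uy, uz = u
    px, py, pz = x2
    # quadratic, linear and constant terms of F(x2 + alpha*u) as a polynomial in alpha
    A2 = a*ux*ux + b*uy*uy + c*uz*uz + d*ux*uy + e*ux*uz + f*uy*uz
    A1 = (2*a*px*ux + 2*b*py*uy + 2*c*pz*uz
          + d*(px*uy + py*ux) + e*(px*uz + pz*ux) + f*(py*uz + pz*uy)
          + g*ux + h*uy + i*uz)
    A0 = (a*px*px + b*py*py + c*pz*pz + d*px*py + e*px*pz + f*py*pz
          + g*px + h*py + i*pz + j)
    if abs(A2) < eps:                      # degenerate case: linear equation, at most one finite point
        if abs(A1) < eps:
            return []
        alpha = -A0 / A1
        return [tuple(pi + alpha*ui for pi, ui in zip(x2, u))]
    delta = A1*A1 - 4*A2*A0
    if delta < 0:                          # no real intersection
        return []
    roots = [(-A1 + s*math.sqrt(delta)) / (2*A2) for s in (+1, -1)]
    if delta == 0:                         # tangency: one double point
        roots = roots[:1]
    return [tuple(pi + alpha*ui for pi, ui in zip(x2, u)) for alpha in roots]

sphere = (1, 1, 1, 0, 0, 0, 0, 0, 0, -1)
print(line_quadric_intersection(sphere, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # [(1,0,0), (-1,0,0)]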
7.1. Limitations
The construction of quadric surfaces by the wedge of conformal points presented in Sections 3 and 4 is a distinguished property of QCGA that is missing in DPGA and DCGA. However, QCGA also faces some limitations that do not affect DPGA and DCGA, as summarized in Table 2. First, DPGA and DCGA are known to be capable of transforming all objects by versors [START_REF] Du | Modeling 3D Geometry in the Clifford Algebra R 4,4[END_REF][START_REF] Easter | Double conformal geometric algebra[END_REF] whereas it is not yet clear whether objects in QCGA can be transformed using versors. An extended version of CGA versors can be used to transform lines in QCGA (and probably all round and flat objects of CGA), but more investigation is needed. Second, the number of basis elements spanned by QCGA is 2^15 (32,768) for a full multivector. Although multivectors of QCGA are in reality almost always very sparse, this large number of elements may cause implementation issues (see Section 7.2). It also requires some numerical care in computation, especially during the wedge of 9 points, because some components may be raised to the ninth power.
Implementations
There exist many different implementations of Geometric Algebra; however, very few can handle dimensions higher than 8 or 10. This is because higher dimensions involve multivectors with a very large number of components, resulting in expensive computations. In many cases, the computation then becomes impossible in practice. QCGA has a vector space dimension of 15 and hence requires some specific care during the computation.
We conducted our tests with an enhanced version of Breuils et al. [START_REF] Breuils | A geometric algebra implementation using binary tree[END_REF][START_REF] Breuils | A hybrid approach for computing products of high-dimensional geometric algebras[END_REF], which is based on a recursive framework. We remark that most of the products involved in our tests were outer products between 14-vectors and 1-vectors, which are among the less time-consuming products of QCGA. Indeed, QCGA, with a vector space dimension of 15, has 2^15 basis elements, roughly 1,000 times as many as CGA with vector space dimension 5 (CGA with vector space dimension 5 is what is needed for the operations equivalent to those of QCGA with dimension 15). The computation time required by QCGA, however, was not 1,000 times but only about 70 times that of CGA. This means that the computation of QCGA runs in reasonable time on the enhanced version of Breuils et al. [START_REF] Breuils | A geometric algebra implementation using binary tree[END_REF][START_REF] Breuils | A hybrid approach for computing products of high-dimensional geometric algebras[END_REF]. More detailed analysis in this direction is left for future work. Figure 1 depicts a few examples generated with our OpenGL renderer based on the outer product null-space voxels and our ray-tracer. From left to right: a dual hyperboloid built from its equation, an ellipsoid built from its control points (in yellow), the intersection between two cylinders, and a hyperboloid with an ellipsoid and planes (the last one was computed with our ray-tracer).
Conclusion
This paper presented a new Geometric Algebra framework, Quadric Conformal Geometric Algebra (QCGA), that handles the construction of quadric surfaces using the implicit equation and/or control points. QCGA naturally includes CGA objects and generalizes some dedicated constructions. The intersection between objects in QCGA is computed using only outer products. This paper also detailed the computation of the tangent plane and the normal vector at a point on a quadric surface. Although QCGA is defined in a high dimensional space, most of the computations run in relatively low dimensional subspaces of this framework. Therefore, QCGA can be used for numerical computations in applications such as computer graphics.
3.1.2. Dual plane. The dualization of the primal form of the plane is π* = n + (1/3) h (e_∞1 + e_∞2 + e_∞3). (3.5) Proposition 3.2. A point x with Euclidean coordinates x lies on the dual plane π* iff x • π* = 0.
Proposition 3.5. A point x lies on the dual line l* iff x • l* = 0.
3.3.2. Dual sphere. The dualization of the primal sphere s gives: s* = x_c - (1/6) r² (e_∞1 + e_∞2 + e_∞3). (3.27) Proposition 3.7. A point x lies on the dual sphere s* iff x • s* = 0.
Figure 1. Example of our construction of QCGA objects. From left to right: a dual hyperboloid built from its equation, an ellipsoid built from its control points (in yellow), the intersection between two cylinders, and a hyperboloid with an ellipsoid and planes (the last one was computed with our ray-tracer).
Table 1. Inner product between QCGA basis vectors e_1, e_2, e_3, e_o1, e_∞1, e_o2, e_∞2, e_o3, e_∞3, e_o4, e_∞4, e_o5, e_∞5, e_o6, e_∞6 (QCGA is built over R^{9,6}, with the Euclidean basis {e_1, e_2, e_3} and 6 additional basis vectors {e_+1, e_+2, e_+3, e_+4, e_+5, e_+6}).
Proposition 3.1. A point x with Euclidean coordinates x lies on the plane π iff x ∧ π = 0.
Table 2. Comparison of properties between DPGA, DCGA, and QCGA (quadrics intersection, quadric-plane intersection, versors, Darboux cyclides); each property is marked as capable, incapable, or unknown.
The term "quadric" (without being followed by surface) encompasses quadric surfaces and conic sections. | 30,274 | [
"170065",
"1129804",
"865126",
"791220"
] | [
"3210",
"229050",
"3210",
"6501",
"533238"
] |
01767264 | en | ["spi"] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01767264/file/bGMCA.pdf |
C Kervazo
J Bobin
C Chenot
Blind separation of a large number of sparse sources
Keywords: Blind source separation, sparse representations, block-coordinate optimization strategies, matrix factorization
Introduction
Problem statement
Blind source separation (BSS) is the major analysis tool to retrieve meaningful information from multichannel data. It has been particularly successful in a very wide range of signal processing applications ranging from astrophysics [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF] to spectroscopic data in medicine [START_REF] Biswal | Blind source separation of multiple signal sources of fMRI data sets using independent component analysis[END_REF] or nuclear physics [START_REF] Nuzillard | Application of blind source separation to 1-D and 2-D nuclear magnetic resonance spectroscopy[END_REF], to name only a few. In this framework, the observations {x i } i=1,...,m are modeled as a linear combination of n unknown elementary sources {s j } j=1,...,n :
x_i = Σ_{j=1}^{n} a_ij s_j + z_i. The coefficients a_ij measure the contribution of the j-th source to the observation x_i, while z_i models an additive noise as well as model imperfections. Each datum x_i and source s_j is supposed to have t entries. This problem can be readily recast in a matrix formulation:
X = AS + N (1)
where X is a matrix composed of the m row observations and t columns corresponding to the entries (or samples), the mixing matrix A is built from the {a ij } i=1,...,m,j=1,...,n coefficients and S is a n × t matrix containing the sources. Using this formulation, the goal of BSS is to estimate the unknown matrices A and S from the sole knowledge of X.
Blind source separation methods
It is well-known that BSS is an ill-posed inverse problem, which requires additional prior information on either A or S to be tackled [START_REF] Comon | Handbook of Blind Source Separation: Independent component analysis and applications[END_REF]. Making BSS a better-posed problem is performed by promoting some discriminant information or diversity among the sources. A first family of standard techniques, such as Independent Component Analysis (ICA), assumes that the sources are statistically independent [START_REF] Comon | Handbook of Blind Source Separation: Independent component analysis and applications[END_REF].
In this study, we will specifically focus on the family of algorithms dealing with the case of sparse BSS problems (i.e. where the sources are assumed to be sparse), which have attracted a lot of interest during the last two decades [START_REF] Zibulevsky | Blind source separation by sparse decomposition in a signal dictionary[END_REF][START_REF] Bronstein | Sparse ICA for blind separation of transmitted and reflected images[END_REF][START_REF] Li | Underdetermined blind source separation based on sparse representation[END_REF]. Sparse BSS has mainly been motivated by the success of sparse signal modeling for solving very large classes of inverse problems [START_REF] Starck | Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity[END_REF]. The Generalized Morphological Component Analysis (GMCA) algorithm [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF] builds upon the concept of morphological diversity to disentangle sources that are assumed to be sparsely distributed in a given dictionary. The morphological diversity property states that sources with different morphologies are unlikely to have similar large value coefficients. This is the case of sparse and independently distributed sources, with high probability. In the framework of Independent Component Analysis (ICA), Efficient FastICA (EFICA) [START_REF] Koldovsky | Efficient variant of algorithm Fas-tICA for independent component analysis attaining the Cramér-Rao lower bound[END_REF] is a FastICA-based algorithm that is especially adapted to retrieve sources with generalized Gaussian distributions, which includes sparse sources. In the seminal paper [START_REF] Zibulevsky | Blind source separation with relative Newton method[END_REF], the author also proposed a Newton-like method for ICA called Relative Newton Algorithm (RNA), which uses quasi-maximum likelihood estimation to estimate sparse sources. A final family of algorithms builds on the special case where it is known that A and S are furthermore non-negative, which is often the case on real world data [START_REF] Gillis | Accelerated multiplicative updates and hierarchical ALS algorithms for nonnegative matrix factorization[END_REF].
However, the performances of most of these methods decline when the number of sources n becomes large. As an illustration, Fig. 1 shows the evolution of the mixing matrix criterion (cf. sec. 3.1, [START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF]) as a function of the number of sources for various BSS methods. This experiment illustrates that most methods do not perform correctly in the "large-scale"regime. In this case, the main source of deterioration is very likely related to the non-convex nature of BSS. Indeed, for a fixed number of samples t, an increasing number of sources n will make these algorithms more prone to be trapped in spurious local minima, which tends to hinder the applicability of BSS on practical issues with a large n. Consequently, the optimization strategy has a huge impact on the separation performances.
Contribution
In a large number of applications such as astronomical [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF] or biomedical signals [START_REF] Biswal | Blind source separation of multiple signal sources of fMRI data sets using independent component analysis[END_REF], designing BSS methods that are tailored to precisely retrieve a large number of sources is of paramount importance. For that purpose, the goal of this article is to introduce a novel algorithm dubbed bGMCA (block-Generalized Morphological Component Analysis) to specifically tackle sparse BSS problems when a large number of sources need to be estimated.
In this setting, which we will later call the large-scale regime, the algorithmic strategy has a huge impact on the separation quality since BSS requires solving highly challenging non-convex problems. For that purpose, the proposed bGMCA algorithm builds upon the sparse modeling of the sources, as well as an efficient minimization scheme based on block-coordinate descent. In contrast to state-of-the-art methods [START_REF] Zibulevsky | Blind source separation with relative Newton method[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF][START_REF] Rapin | NMF with sparse regularizations in transformed domains[END_REF][START_REF] Gillis | Accelerated multiplicative updates and hierarchical ALS algorithms for nonnegative matrix factorization[END_REF], we show that taking advantage of block-based minimization with intermediate block sizes allows the bGMCA to dramatically enhance the separation performances, particularly when the number of sources to be estimated becomes large. Comparisons with the state-of-the-art methods have been carried out on various simulation scenarios. The last part of the article will show the flexibility of bGMCA, with an application to sparse and non-negative BSS in the context of spectroscopy.
Optimization problem and bGMCA
General problem
Sparse BSS [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF] aims to estimate the mixing matrix A and the sources S by minimizing a penalized least-squares of the form:
min_{A,S} (1/2) ||X - AS||_F² + J(A) + G(S)   (2)
The first term is a classical data fidelity term that measures the discrepancy between the data and the mixture model. The ||.||_F norm refers to the Frobenius norm, whose use stems from the assumption that the noise is Gaussian. The penalizations J and G enforce some desired properties on A and S (e.g. sparsity, non-negativity). In the following, we will consider that the proximal operators of J and G are defined, and that J and G are convex. However, the whole matrix factorization problem (2) is non-convex.
Consequently, the strategy of optimization has a critical impact on the separation performances, especially to avoid spurious local minimizers and to reduce the sensitivity to initialization. A common idea of several strategies (Block Coordinate Relaxation -BCR [START_REF] Tseng | Convergence of a block coordinate descent method for nondifferentiable minimization[END_REF], Proximal Alternating Linearized Minimization -PALM [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF], Alternating Least Squares -ALS) is to benefit from the multi-convex structure of (2) by using blocks [START_REF] Xu | A globally convergent algorithm for nonconvex optimization based on block coordinate update[END_REF] in which each sub-problem is convex. The minimization is then performed alternately with respect to one of the coordinate blocks while the other coordinates stay fixed, which entails solving a sequence of convex optimization problems. Most of the already existing methods can then be categorized in one of two families, depending on the block sizes:
-Hierarchical or deflation methods: these algorithms use a block of size 1. For instance, Hierarchical ALS (HALS) ( [START_REF] Gillis | Accelerated multiplicative updates and hierarchical ALS algorithms for nonnegative matrix factorization[END_REF] and references therein)
updates only one specific column of A and one specific row of S at each iteration. The main advantage of this family is that each subproblem is often much simpler as their minimizer generally admits a closed-form expression. Moreover, the matrices involved being small, the computation time is much lower. The drawback is however that the errors on some sources/mixing matrix columns propagate from one iteration to the other since they are updated independently.
-Full-size blocks: these algorithms use as blocks the whole matrices A and S (the block size is thus equal to n). For instance, GMCA [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF],
which is reminiscent of the projected Alternating Least Squares (pALS) algorithm, is part of this family. One problem compared to hierarchical or deflation methods is that the problem is more complex due to the simultaneous estimation of a high number of sources. Moreover, the computational cost increases quickly with the number of sources.
The gist of the proposed bGMCA algorithm is to adopt an alternative approach that uses intermediate block sizes. The underlying intuition is that using blocks of intermediate size can be recast as small-scale source separation problems, which are simpler to solve as testified by Fig. 1. As a byproduct, small-size subproblems are also less costly to tackle.
Block based optimization
In the following, bGMCA minimizes the problem in eq. (2) with blocks, which are indexed by a set of indices I of size r, 1 ≤ r ≤ n. In practice, the minimization is performed at each iteration on submatrices of A (keeping only the columns indexed by I) and S (keeping only the rows indexed by I).
Minimizing multi-convex problems
Block coordinate relaxation (BCR, [START_REF] Tseng | Convergence of a block coordinate descent method for nondifferentiable minimization[END_REF]) is performed by minimizing (2) with respect to a single block while the others remain fixed. In this setting,
Tseng [START_REF] Tseng | Convergence of a block coordinate descent method for nondifferentiable minimization[END_REF] proved the convergence of BCR to minimize non-smooth optimization problems of the form (2). Although we adopted this strategy to tackle sparse NMF problems in [START_REF] Rapin | NMF with sparse regularizations in transformed domains[END_REF], BCR requires an exact minimization for one block at each iteration, which generally leads to a high computational cost. We therefore opted for Proximal Alternating Linearized Minimization (PALM), which was introduced in [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF]. It rather performs a single proximal gradient descent step for each coordinate at each iteration. Consequently, the PALM algorithm is generally much faster than BCR and its convergence to a stationary point of the multi-convex problem is guaranteed under mild conditions. In the framework of the proposed bGMCA algorithm, a PALMbased algorithm requires minimizing at each iteration eq. ( 2) over blocks of size 1 r n and alternating between the update of some submatrices of A and S (these submatrices will be noted A I and S I ). This reads at iteration (k) as:
1 -Update of a submatrix of S using a fixed A:
S_I^(k) = prox_{γG(.)/||A_I^(k-1)T A_I^(k-1)||_2} ( S_I^(k-1) - (γ/||A_I^(k-1)T A_I^(k-1)||_2) A_I^(k-1)T (A^(k-1) S^(k-1) - X) )   (3)
2 -Update of a submatrix of A using a fixed S:
A_I^(k) = prox_{δJ(.)/||S_I^(k) S_I^(k)T||_2} ( A_I^(k-1) - (δ/||S_I^(k) S_I^(k)T||_2) (A^(k-1) S^(k) - X) S_I^(k)T )   (4)
In eq. (3) and (4), the operator prox_f is the proximal operator of f (cf. Appendix and [17] [18]). The scalars γ and δ are the gradient path lengths. The ||.||_2 norm is the matrix norm induced by the l2 norm for vectors. More specifically, if x is a vector and ||.||_2 is the l2 norm for vectors, the induced matrix norm is defined as:
||M||_2 = sup_{x ≠ 0} ||Mx||_2 / ||x||_2   (5)
Block choice
Several strategies for selecting at each iteration the block indices I have been investigated: i) Sequential : at each iteration, r sources are selected sequentially in a cyclic way; ii) Random: at each iteration, r indices in [1, n] are randomly chosen following a uniform distribution and the corresponding sources updated; iii) Random sequential : this strategy combines the sequential and the random choices to ensure that all sources are updated an equal number of times. In the experiments, random strategies tended to provide better results. Indeed, compared to a sequential choice, randomness is likely to make the algorithm more robust with respect to spurious local minima.
Since the results between the random strategy and the random sequential one are similar, the first was eventually selected.
Examined cases and corresponding proximal operators
In several practical examples, an explicit expression can be computed for the proximal operators. In the next, the following penalizations have been considered:
1 -Penalizations G for the sources S:
-l1 sparsity constraint in some transformed domain: The sparsity constraint on S is enforced with an l1-norm penalization: G(S) = ||Λ_S ⊙ (SΦ_S^T)||_1, where the matrix Λ_S contains regularization parameters and ⊙ denotes the Hadamard product. Φ_S is a transform into a domain in which S can be sparsely represented. In the following, Φ_S will be supposed to be orthogonal. The proximal operator for G in (3) is then explicit and corresponds to the soft-thresholding operator with threshold Λ_S, which we shall denote S_{Λ_S}(.) (cf. Appendix). Using γ = 1 and assuming Φ_S orthogonal, the update is then:
S_I^(k) = S_{Λ_S}( S_I^(k-1) Φ_S^T - (1/||A_I^(k-1)T A_I^(k-1)||_2) A_I^(k-1)T (A^(k-1) S^(k-1) - X) Φ_S^T ) Φ_S   (6)
-Non-negativity in the direct domain and l1 sparsity constraint in some transformed domain: due to the non-negativity constraint, all coefficients in S must be non-negative in the direct domain in addition to the sparsity constraint in a transformed domain Φ_S. It can be formulated as
G(S) = ||Λ_S ⊙ (SΦ_S^T)||_1 + ι_{∀j,k: S[j,k] ≥ 0}(S),
where ι_U is the indicator function of the set U. The difficulty is to enforce at the same time two constraints in two different domains, since the proximal operator of G is not explicit. It can either be roughly approximated by composing the proximal operators of the individual penalizations to produce a cheap update or computed accurately using the Generalized Forward-Backward splitting algorithm [START_REF] Raguet | A generalized forward-backward splitting[END_REF].
-Penalizations J for the mixing matrix A:
-Oblique constraint: to avoid obtaining degenerated A and S matrices (||A|| → ∞ and ||S|| → 0), the columns of A are constrained to be in the l2 ball, i.e. ∀j ∈ [1, n], ||A^j||_2 ≤ 1. More specifically, J can be written as J(A) = ι_{∀i: ||A^i||_2² ≤ 1}(A). Following this constraint, the proximal operator for J in eq. (4) is explicit and can be shown to be the projection Π_{||.||_2 ≤ 1} (cf. Appendix) of each column of the input onto the l2 unit ball. The update (4) of A_I becomes:
A_I^(k) = Π_{||.||_2 ≤ 1}( A_I^(k-1) - (1/||S_I^(k) S_I^(k)T||_2) (A^(k-1) S^(k) - X) S_I^(k)T )   (7)
-Non-negativity and oblique constraint: Adding the non-negativity constraint on A reads:
J(A) = ι_{∀i: ||A^i||_2² ≤ 1}(A) + ι_{∀i,j: A[i,j] ≥ 0}(A).
The proximal operator can be shown to be the composition of the proximal operator corresponding to non-negativity followed by Π_{||.||_2 ≤ 1}. The proximal operator corresponding to non-negativity is the projection Π_{K+} (cf. Appendix) on the positive orthant K+.
The update is then:
A_I^(k) = Π_{||.||_2 ≤ 1}( Π_{K+}( A_I^(k-1) - (1/||S_I^(k) S_I^(k)T||_2) (A^(k-1) S^(k) - X) S_I^(k)T ) )   (8)
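As an illustration, one refinement-stage iteration on a block I, with the l1 penalty applied directly in the sample domain (Φ_S = Id) and the oblique constraint on A, can be sketched as follows (NumPy; a simplified transcription of (6)-(7) with a scalar threshold, not the authors' implementation):

import numpy as np

def soft_threshold(U, lam):
    return np.sign(U) * np.maximum(np.abs(U) - lam, 0.0)

def project_columns_l2_ball(A):
    norms = np.maximum(np.linalg.norm(A, axis=0, keepdims=True), 1.0)
    return A / norms

def palm_block_update(X, A, S, I, lam, gamma=1.0, delta=1.0):
    """One PALM step on the sub-matrices A[:, I] and S[I, :] (sparsity on S, oblique constraint on A)."""
    AI = A[:, I]
    LS = np.linalg.norm(AI.T @ AI, 2)                  # Lipschitz constant of the S gradient step
    grad_S = AI.T @ (A @ S - X)
    S[I, :] = soft_threshold(S[I, :] - (gamma / LS) * grad_S, gamma * lam / LS)
    SI = S[I, :]
    LA = np.linalg.norm(SI @ SI.T, 2)                  # Lipschitz constant of the A gradient step
    grad_A = (A @ S - X) @ SI.T
    A[:, I] = project_columns_l2_ball(A[:, I] - (delta / LA) * grad_A)
    return A, S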
Minimization: introduction of a warm-up stage
While being provably convergent to a stationary point of (2), the above PALM-based algorithm suffers from a lack of robustness with regards to a bad initialization, which makes it more prone to be trapped in spurious local minima. Moreover, it is quite difficult to automatically tune the thresholds Λ so that it yields reasonable results. On the other hand, algorithms based on GMCA [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF] have been shown to be robust to initialization. Furthermore, in this framework, fixing the parameters Λ can be done in an automatic manner. However, GMCA-like algorithms are based on heuristics, which preclude provable convergence to a minimum of (2).
The proposed strategy consists in combining the best of both approaches to build a two-stage minimization procedure (cf. Algorithm 1): i) a warm-up stage building upon the GMCA algorithm to provide a fast and reliable first guess, and ii) a refinement stage based on the above PALM-based algorithm that provably yields a minimizer of (2). Moreover, the thresholds Λ in the refinement stage will be naturally derived from the first stage. Based on the GMCA algorithm [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF], the warm-up stage is summarized below: 0 -Initialize the algorithm with random A. For each iteration (k):
1 -The sources are first updated assuming a fixed A. A submatrix S I is however now updated instead of S. This is performed using a projected least square solution:
S_I^(k) = prox_{G(.)}( A_I^(k-1)† R_I )   (9)
where R_I is the residual term defined by R_I = X - A_{I^C} S_{I^C} (with I^C the indices of the sources outside the block), which is the part of X to be explained by the sources in the current block I, and A_I^(k)† is the pseudo-inverse of A_I^(k), the estimate of A_I at iteration (k).
2 -The mixing sub-matrix A_I is similarly updated with a fixed S:
A_I^(k) = prox_{J(.)}( R_I S_I^(k)† )   (10)
The warm-up stage stops after a given number of iterations. Since the penalizations are the same as in the refinement stage, the proximal operators can be computed with the formulae described previously, depending on the implemented constraints. For S, eq. ( 6) can be used to enforce sparsity. To enforce non-negativity and sparsity in some transformed domain, the cheap update described in section 2.2.1 consisting in composing the proximal operators of the individual penalizations can be used. For A, equations ( 7) and ( 8) can be used depending on the implemented constraint.
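A condensed sketch of one warm-up iteration on a block I is given below (NumPy; the per-source thresholds are assumed given, and normalising the columns of A stands in here for the oblique projection prox_J):

import numpy as np

def soft_threshold(U, lam):
    return np.sign(U) * np.maximum(np.abs(U) - lam, 0.0)

def warmup_block_update(X, A, S, I, thresholds):
    """GMCA-like update of S[I, :] and A[:, I] via projected least squares (eqs. (9)-(10)).
    thresholds: array of length len(I), one threshold per source of the block."""
    Ic = np.setdiff1d(np.arange(A.shape[1]), I)
    R = X - A[:, Ic] @ S[Ic, :]                      # part of X to be explained by the block
    S[I, :] = soft_threshold(np.linalg.pinv(A[:, I]) @ R, thresholds[:, None])
    AI = R @ np.linalg.pinv(S[I, :])
    A[:, I] = AI / np.maximum(np.linalg.norm(AI, axis=0, keepdims=True), 1e-12)
    return A, S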
Heuristics for the warm-up stage
In the spirit of GMCA, the bGMCA algorithm exploits heuristics to make the separation process more robust to initialization, which mainly consists in making use of a decreasing thresholding strategy. In brief, the entries of the threshold matrix Λ first start with large values and then decrease along the iterations towards final values that only depend on the noise level. This strategy has been shown to significantly improve the performances of the separation process [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF] as it provides: i) a better unmixing, ii) an increased robustness to noise, and iii) an increased robustness to spurious local minima.
In the bGMCA algorithm, this strategy is deployed by first identifying the coefficients of each source in I that are not statistically consistent with noise.
Assuming that each source is contaminated with a Gaussian noise with standard deviation σ, this is performed by retaining only the entries whose amplitude is larger than τσ, where τ ∈ [2, 3]. In practice, the noise standard deviation is estimated empirically using the Median Absolute Deviation (MAD)
estimator. For each source in I, the actual threshold at iteration k is fixed based on a given percentile of the available coefficients with the largest amplitudes. Decreasing the threshold at each iteration is then performed by linearly increasing the percentage of retained coefficients at each iteration:
Percentage = (k / number of iterations) × 100.
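One possible implementation of this heuristic for a single source reads as follows (NumPy; the Gaussian consistency factor 0.6745 in the MAD and the exact percentile rule are our reading of the strategy, not a verbatim transcription):

import numpy as np

def mad_sigma(x):
    """Robust noise level estimate via the Median Absolute Deviation."""
    return np.median(np.abs(x - np.median(x))) / 0.6745

def decreasing_threshold(coeffs, k, n_iterations, tau=3.0):
    """Keep only entries above tau*sigma, then threshold at the value that lets through a
    linearly increasing percentage of the largest remaining coefficients."""
    sigma = mad_sigma(coeffs)
    significant = np.abs(coeffs)[np.abs(coeffs) > tau * sigma]
    if significant.size == 0:
        return tau * sigma
    percentage = min(100.0, (k + 1) / n_iterations * 100.0)
    return np.percentile(significant, 100.0 - percentage)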
Convergence
The bGMCA algorithm combines sequentially the above warm-up stage and the PALM-based refinement stage. Equipped with the decreasing thresholding strategy, it cannot be proved that the warm-up stage neither converges to a stationary point of eq. ( 2) nor converges at all. In practice, after consecutive iterates, the warm-up stage tends to stabilize. However, it plays a key role to provide a reasonable starting point, as well as threshold values Λ for the refinement procedure. In the refinement stage, the thresholds are computed from the matrices estimated in the warm-up and fixed for the whole refinement step. Based on the PALM algorithm, and with these fixed thresholds, the refinement stage converges to a stationary point of eq. ( 2).
The convergence is also guaranteed with the proposed block-based strategy, as long as the blocks are updated following an essentially cyclic rule [START_REF] Chouzenoux | A block coordinate variable metric forward-backward algorithm[END_REF] or even if they are chosen randomly and updated one by one [START_REF] Patrascu | Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization[END_REF].
Required number of iterations
Intuitively, the required number of iterations should be inversely proportional to r, since only r sources are updated at each iteration, requiring n/r times the number of iterations needed by an algorithm using the full matrices. As will be emphasized later on, the number of required iterations will be smaller than expected, which reduces the computation time.
In the refinement stage, the stopping criterion is based on the angular distance for each column of A, i.e. the angle between the current column and that of the previous iteration. Then, the mean over all the columns is taken:
∆ = (1/n) Σ_{j∈[1,n]} |A_j^(k) · A_j^(k-1)|   (11)
The stopping criterion itself is then a threshold τ used to stop the algorithm when ∆ > τ . In addition, we also fixed a maximal number of iterations.
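With unit-norm columns, (11) is simply the mean absolute scalar product between corresponding columns of two consecutive iterates, e.g.:

import numpy as np

def angular_criterion(A_new, A_old):
    """Mean of |<A_j^(k), A_j^(k-1)>| over the columns; close to 1 when the directions stabilise."""
    return float(np.mean(np.abs(np.einsum('ij,ij->j', A_new, A_old))))

# stop the refinement stage when angular_criterion(A_new, A_old) > tau (tau close to 1)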
Numerical experiments on simulated data
In this part, we present our results on simulated data. The goal is to show and to explain on simple data how bGMCA works.
Experimental protocol
The simulated data were generated in the following way:
1 -Source matrix S: the sources are sparse in the sample domain without requiring any transform (the results would however be identical for any source sparse in an orthogonal representation). The sources in S are exactly sparse and drawn randomly according to a Bernoulli-Gaussian distribution: among the t samples (t = 1,000), a proportion p (called sparsity degree; unless specified, p = 0.1) of the samples is taken non-zero, with an amplitude drawn according to a standard normal distribution.
2 -Mixing matrix A: the mixing matrix is drawn randomly according to a standard normal distribution and modified to have unit columns and a given condition number C_d (unless specified, C_d = 1).
The number of observations m is taken equal to the number of sources: m = n.
In this first simulation, no noise is added. The algorithm was launched with 10, 000 iterations. It has to be emphasized that since neither A nor S are non-negative, the corresponding proximal operators we used did not enforce non-negativy. Thus, we used soft-thresholding for S and the oblique constraint for A according to section 2.2.1.
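As an illustration of this protocol, the sketch below (NumPy; helper names and the random seed are arbitrary choices of ours) draws Bernoulli-Gaussian sources and a unit-column mixing matrix and forms the noiseless observations X = AS; shaping A to a prescribed condition number C_d (e.g. through its SVD) is omitted here:

import numpy as np

rng = np.random.default_rng(0)

def generate_data(n=20, t=1000, p=0.1):
    """Bernoulli-Gaussian sources and a random mixing matrix with unit columns (m = n, no noise)."""
    support = rng.random((n, t)) < p                  # active samples (proportion p)
    S = support * rng.standard_normal((n, t))         # Gaussian amplitudes on the support
    A = rng.standard_normal((n, n))
    A /= np.linalg.norm(A, axis=0, keepdims=True)     # unit columns
    X = A @ S
    return X, A, S

X, A, S = generate_data()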
To measure the accuracy of the separation, we followed the definition in [START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF] to use a global criterion on A:
C_A = median(|P A† A*| - I_d),
where A* is the true mixing matrix and A is the solution given by the algorithm, corrected through P for the permutation and scale factor indeterminacies. I_d is the identity matrix. This criterion quantifies the quality of the estimation of the mixing directions, that is the columns of A. If they are perfectly estimated, |PA†A*| is equal to I_d and C_A = 0. The data matrices being drawn randomly, each experiment was performed several times (typically 25 times) and the median of -10 log(C_A) over the experiments will be displayed. The logarithm is used to simplify the reading of the plots despite the high dynamics.
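In practice this criterion can be evaluated as sketched below (NumPy; the greedy assignment used to build the permutation P is a simplification of the correction for the permutation and scale indeterminacies):

import numpy as np

def mixing_matrix_criterion(A_est, A_true):
    """C_A = median(|P A_est^+ A_true - Id|) after correcting permutation and scale."""
    M = np.linalg.pinv(A_est) @ A_true
    M = M / np.max(np.abs(M), axis=0, keepdims=True)   # scale correction: dominant entry of each column -> 1
    n = M.shape[0]
    P = np.zeros((n, n))
    for j in range(n):                                  # greedy assignment of each true column
        i = int(np.argmax(np.abs(M[:, j])))
        P[j, i] = 1.0
    C = np.abs(P @ M) - np.eye(n)
    return float(np.median(np.abs(C)))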
Modeling block minimization
In this section, a simple model is introduced to describe the behavior of the bGMCA algorithm. As described in section 2.2, updating a given block is performed at each iteration from the residual R_I = X - A_{I^C} S_{I^C}. If the estimation were perfect, the residual would be equal to the part of the data explained by the true sources in the current block indexed by I, which would read: R_I = A*_I S*_I, A* and S* being the true matrices. It is nevertheless mandatory to take into account the noise N, as well as a variety of flaws in the estimation, by adding a term E to model the estimation error. This entails:
R_I = X - A_{I^C} S_{I^C} = A*_I S*_I + E + N   (12)
A way to further describe the structure of E is to decompose the S matrix into the true matrix plus an error: S_I = S*_I + ε_I and S_{I^C} = S*_{I^C} + ε_{I^C}, where S is the estimated matrix and ε is the error on S*. Assuming that the errors are small and neglecting the second-order terms, the residual R_I can now be written as:
R_I = X - A_{I^C} S_{I^C} = A*_I S*_I + A*_{I^C} S*_{I^C} - A_{I^C} S*_{I^C} - A_{I^C} ε_{I^C} + N   (13)
This implies that:
E = (A*_{I^C} - A_{I^C}) S*_{I^C} - A_{I^C} ε_{I^C}   (14)
Equation ( 14) highlights two terms. The first term can be qualified as interferences in that it comes from a leakage of the true sources that are outside the currently updated block. This term vanishes when A I C is perfectly estimated. The second term corresponds to interferences as well as artefacts.
It originates indeed from the error on the sources outside the block I. The artefacts are the errors on the sources induced by the soft thresholding corresponding to the 1 -norm.
Equation (14) also allows us to understand how the choice of a given block size r ≤ n will impact the separation process:
-Updating small-size blocks can be recast as a small-size source separation problem where the actual number of sources is equal to r. The residual of the sources that are not part of the block I then plays the role of extra noise. As testified by Fig. 1, updating small-size block problems should be easier to tackle.
-Small-size blocks should also yield larger errors E. It is intuitively due to the fact that many potentially badly estimated sources in I^C are used for the estimation of A_I and S_I through the residual, deteriorating this estimation. It can be explained in more details using equation (14): with more sources in I^C, the energy of A_{I^C}, A*_{I^C}, S*_{I^C} and ε_{I^C} increases, yielding bigger error terms (A*_{I^C} - A_{I^C}) S*_{I^C} and -A_{I^C} ε_{I^C}. Therefore the errors E become higher, deteriorating the results.
Experiment
In this section, we investigate the behavior of the proposed block-based GMCA algorithm with respect to various parameters such as the block size, the number of sources, the conditioning of the mixing matrix and the sparsity level of the sources.
Study of the impact of r and n
In this subsection, bGMCA is evaluated for different numbers of sources n = 20, 50, 100. Each time the block sizes vary in the range 1 ≤ r ≤ n. In this experiment and to complete the description of section 3.1, the parameters for the matrices generation were: p = 0.1, t = 1, 000, C d = 1, m = n, with a Bernoulli-Gaussian distribution for the sources. These results are displayed in Fig. 2a. Interestingly, three different regimes characterize the behavior of the bGMCA algorithm:
-For intermediate and relatively large block sizes (typically r > 5 and r < n -5): we first observe that after an initial deterioration around r = 5 , the separation quality does not vary significantly for increasing block sizes. A degradation of several dB can then be observed for r close to n. In all this part of the curve, the error term E is composed of residuals of sparse sources, and thus E will be rather sparse when the block size is large. Based on the MAD, the thresholds are set according to dense and not to sparse noise. Consequently the automatic thresholding strategy of the bGMCA algorithm will not be sensitive to the estimation errors.
-A very prominent peak can be observed when the block size is of the order of 3. Interestingly, the maximum yields a mixing matrix criterion of about 10 -16 , which means that perfect separation is reached up to numerical errors. This value of 160 dB is at least 80 dB larger than in the standard case r = n, for which the values for the different n are all below 80 dB. In this regime, error propagation is composed of the mixture of a larger number of sparse sources, which eventually entails a densely distributed contribution that can be measured by the MAD-based thresholding procedure. Therefore, the threshold used to estimate the sources is able to filter out both the noise and the estimation errors. Moreover, r = 5 is quite small compared to n. Following the modeling introduced in section 3.2, small block sizes can be recast as a sequence of low-dimensional blind source separation problems, which are simpler to solve.
-For small block sizes (typically r < 4), the separation quality is deteriorated when the block size decreases, especially for large n values. In this regime, the level of estimation error E becomes large, which entails large values for the thresholds Λ. Consequently, the bias induced by the soft-thresholding operator increases, which eventually hampers the performance quality. Furthermore, for a fixed block size r, E increases with the number of sources n, making this phenomenon more pronounced for higher n values.
Condition number of the mixing matrix
In this section, we investigate the role played by the conditioning of the mixing matrix on the performances of the bGMCA algorithm. Fig. 2b displays the empirical results for several condition numbers C d of the A matrix.
There are n = 50 sources generated in the same way as in the previous experiment: with a Bernoulli-Gaussian distribution and p = 0.1, t = 1, 000.
One can observe that when C d increases, the peak present for r close to 5 tends to be flattened, which is probably due to higher projection errors. At some iteration k, the sources are estimated by projecting X -A I c S I c onto the subspace spanned by A I . In the orthogonal case, the projection error is low since A I c and A I are close to orthogonality at the solution. However, this error increases with the condition number C d .
Sparsity level p
In this section, the impact of the sparsity level of the sources is investigated. The sources are still following a Bernoulli-Gaussian distribution.
The parameters are: n = 50, t = 1,000, C_d = 1. As featured in Figure 3, the separation performances at the maximum value decrease slightly with larger p, while a slow shift of the transition between the small/large block size regimes towards larger block sizes operates. Furthermore, the results tend to deteriorate quickly for small block sizes (r < 4). Indeed, owing to the model of subsection 3.2, the contribution of S*_{I^C} and ε_{I^C} in the error term (14) increases with p, this effect being even more important for small r (which could also explain the shift of the peak for p = 0.3, by a deterioration of the results at its beginning, r = 3). When p increases, the sources in S_I become denser. Instead of being mainly sensitive to the noise and E, the MAD-based thresholding tends to be perturbed by S_I, resulting in more artefacts, which eventually hampers the separation performances. This effect increases when the sparsity level of the sources decreases.
Complexity and computation time
Beyond improving the separation performances, the use of small block sizes decreases the computational cost of each iteration of the bGMCA algorithm. Since it is iterative, the final running time will depend on both the complexity of each iteration and the number of iterations. In this part, we focus only on the warm-up stage, which is empirically the most computationally expensive stage. Each iteration of the warm-up stage can be decomposed into the following elementary steps: i) a residual term is computed with a complexity of O(mtr), where m is the number of observations, t the number of samples and r the block size; ii) the pseudo-inverse is performed with the singular value decomposition of a r × r matrix, which yields an overall complexity of O(r³ + r²m + m²r); iii) the thresholding strategy first requires the evaluation of the threshold values, which has a complexity of rt; iv) then the soft-thresholding step which has complexity O(rt); and v) updating A is finally performed using a conjugate gradient algorithm, whose complexity is known to depend on the number of non-zero entries in S and on the condition number of this matrix, C_d(S). An upper bound for this complexity is thus O(rt C_d(S)). The final estimate of the complexity of a single iteration is finally given by:
r [mt + rm + m² + r² + t C_d(S)]   (15)
where C_d(S) is the condition number of S. Thus, both the r factor and the r³ behavior show that small r values will lower the computational budget of each iteration. We further assess the actual number of iterations required by the warm-up stage to yield a good initialization. To this end, the following experiment has been conducted:
1. First, the algorithm is launched with a large number of iterations (e.g. 10000) to give a good initialization for the A and S matrices. The corresponding value of C A is saved and called C * A .
2. Using the same initial conditions, the warm-up stage is re-launched and stops when the mixing matrix criterion reaches 1.05 × C * A (i.e. 5% of the "optimal"initialization for a given setting).
The number of iterations needed to reach the 5% accuracy is reported in Fig. 4. Intuitively, one would expect that when the block size decreases, the required number of iterations should increase by about n/r to keep the number of updates per source constant. This trend is displayed with the straight curve of Fig. 4. Interestingly, Fig. 4 shows that the actual number of iterations to reach the 5% accuracy criterion almost does not vary with r.
Consequently, on top of leading to computationally cheaper iterations, using small block sizes does not require more iterations for the warm-up stage to give a good initialization. Therefore, the use of blocks allows a huge decrease of the computational cost of the warm-up stage and thus of sparse BSS.
Experiment using realistic sources
Context
The goal of this part is to evaluate the behavior of bGMCA and show its efficiency in a more realistic setting. Our data come from a simulated LC-1H NMR (Liquid Chromatography - 1H Nuclear Magnetic Resonance) experiment. The objective of such an experiment is to identify each of the chemical compounds present in a fluid, as well as their concentrations. The LC-1H NMR experiment enables a first imperfect physical separation, during which the fluid goes through a chromatography column and its chemicals are separated according to their speeds (which themselves depend on their physical properties). Then, the spectrum of the output of the column is measured at a given time frequency. These measurements of the spectra at different times can be used to feed a bGMCA algorithm to refine the imperfect physical separation.
The fluids on which we worked could for instance correspond to drinks. The goal of bGMCA is then to identify the spectra of each compound (e.g. caffein, saccharose, menthone...) and the mixing coefficients (which are proportional to their concentrations) from the LC -1 H NMR data. BSS has already been successfully applied [START_REF] Toumi | Effective processing of pulse field gradient NMR of mixtures by blind source separation[END_REF] to similar problems but generally with lower number of sources n.
The sources (40 sources with 10,000 samples each) are composed of elementary sparse non-negative theoretical spectra of chemical compounds taken from the SDBS database 1, which are further convolved with a Laplacian having a width of 3 samples to simulate a given spectral resolution. Therefore, each convolved source becomes an approximately sparse non-negative row of S. The mixing matrix A of size (m,n) = (320,40) is composed of Gaussians (see Fig. 5), the objective being to have a matrix that could be consistent with the first imperfect physical separation. It is designed in two parts: the first columns have relatively spaced Gaussian means while the others have a larger overlap to simulate compounds for which the physical separation is less discriminative. More precisely, an index m̄ ∈ [1, m] is chosen, with m̄ > m/2 (typically, m̄ = 0.75 m). A set of n/2 indices (m_k)_{k=1,...,n/2} is then uniformly chosen in [0, m̄] and another set of n/2 indices (m_k)_{k=n/2,...,n} is chosen in [m̄ + 1, m]. Each column of A is then created as a Gaussian whose mean is m_k. Monte-Carlo simulations have been carried out by randomly assigning the sources and the mixing matrix columns. The median over the results of the different experiments will be displayed.
Experiments
There are two main differences with the previous experiments of section 3: i) the sources are sparse in the undecimated wavelet domain Φ S , which is These results show that non-negativity yields a huge improvement for all block sizes r, which is expected since the problem is more constrained. This is probably due to the fact that all the small negative coefficients are set to 0, thus artificially allowing lower thresholds and therefore less artefacts. This is especially advantageous in the present context with very low noise2 (the Signal to Noise Ratio -SNR -has a value of 120 dB) where the thresholds do not need to be high to remove noise. Furthermore, the separation quality tends to be constant for r ≥ 10. In this particular setting, non-negativity helps curing the failure of sparse BSS when large blocks are used. However, using smaller block sizes still allows reducing the computation cost while preserving the separation quality. The bGMCA with non-negativity also compares favorably with respect to other tested standard BSS methods (cf. Section 1 for more details), yielding better results for all values of r. In particular, it is always better than HALS, which also uses non-negativity. As an illustration, a single original source is displayed in the right panel of Fig. 6 after its convolution with a Laplacian.
Its estimation using bGMCA with a non-negativity constraint is plotted in dashed line on the same graph, showing the high separation quality because of the nearly perfect overlap between the two curves. Both sources are drawn in the direct domain.
The robustness of the bGMCA algorithm with respect to additive Gaussian noise has further been tested. Fig. 7 reports the evolution of the mixing matrix criterion for varying values of the signal-to-noise ratio. It can be observed that bGMCA yields the best performances for all values of SNR.
Although it seems to particularly benefit from high SNR compared to HALS and EFICA, it still yields better results than the other algorithms for low SNR despite the small block size used (r = 10), which could have been particularly prone to error propagations.
Conclusion
While being central in numerous applications, tackling sparse BSS problems when the number of sources is large is highly challenging. In this article, we describe the block-GMCA algorithm, which is specifically tailored to solve sparse BSS in the large-scale regime. In this setting, the minimization strategy has a strong impact on the separation quality. All the numerical comparisons conducted show that bGMCA performs at least as well as standard sparse BSS on mixtures of a high number of sources and most of the experiments even show dramatically enhanced separation performances. As a byproduct, the proposed block-based strategy yields a significant decrease of the computational cost of the separation process.
Figure 1: Evolution of the mixing matrix criterion (whose computation is detailed in sec. 3.1) of four standard BSS algorithms for an increasing n. For comparison, the results of the proposed bGMCA algorithm are presented, showing that its use allows the good results of GMCA for low n (around 160 dB for n = 3) to persist for n < 50 and to stay much better than GMCA for n > 50. The experiment was conducted using exactly sparse sources S, with 10% non-zero coefficients, the other coefficients having a Gaussian amplitude. The mixing matrix A was taken to be orthogonal. Both A and S were generated randomly, the experiments being done 25 times and the median used to draw the figure.
Algorithm 1: bGMCA
Warm-up stage
for 0 ≤ k < n_max do
  Choose a set of indices I
  Estimation of S with a fixed A: S_I^(k) = prox_{G(.)}( A_I^(k-1)† R_I )
  Estimation of A with a fixed S: A_I^(k) = prox_{J(.)}( R_I S_I^(k)† )
  Choice of a new threshold Λ^(k) (heuristic, see section 2.2.3)
end for
Refinement step
while ∆ > τ and k < n_max do
  Choose a set of indices I
  S_I^(k) = prox_{γG(.)/||A_I^(k-1)T A_I^(k-1)||_2} ( S_I^(k-1) - (γ/||A_I^(k-1)T A_I^(k-1)||_2) A_I^(k-1)T (A^(k-1) S^(k-1) - X) )
  A_I^(k) = prox_{δJ(.)/||S_I^(k) S_I^(k)T||_2} ( A_I^(k-1) - (δ/||S_I^(k) S_I^(k)T||_2) (A^(k-1) S^(k) - X) S_I^(k)T )
  ∆ = (1/n) Σ_{j∈[1,n]} |A_j^(k) · A_j^(k-1)|
  k = k + 1
end while
return A, S
Figure 2: Mixing matrix criterion as a function of r. Left (a): for different numbers of sources n. Right (b): for different condition numbers C_d.
Figure 3: Mixing matrix criterion as a function of r for different sparsity degrees.
Figure 4: Right: number of iterations in logarithmic scale as a function of r.
Figure 5: Example of A matrix with 8 columns: the four first columns have spaced means, while the last ones are more correlated.
Figure 6: Left: mixing criterion on realistic sources, with and without a non-negativity constraint. Right: example of a retrieved source, which is almost perfectly superimposed on the true source, therefore showing the quality of the results.
Figure 7: Mixing criterion on realistic sources, using a non-negative constraint with r = 10.
National Institute of Advanced Industrial Science and Technology (AIST), Spectral database for organic compounds: http://sdbs.db.aist.go.jp
Depending on the instrumentation, high SNR values can be reached in such an experiment
Acknowledgement
This work is supported by the European Community through the grant LENA (ERC StG -contract no. 678282).
Appendix
Definition of proximal operators
The proximal operator of an extended-valued proper and lower semicontinuous convex function f : R^n → (-∞, ∞] is defined as: prox_f(x) = argmin_{y ∈ R^n} f(y) + (1/2) ||x - y||_2².
Definition of the soft thresholding operator
The soft thresholding operator S_λ(.) is defined component-wise as S_λ(x) = sign(x) max(|x| - λ, 0); it is the proximal operator of λ||.||_1.
Definition of the projection of the columns of a matrix M on the l2 ball: the projection Π_{||.||_2 ≤ 1} maps each column M^j to M^j / max(1, ||M^j||_2).
Definition of the projection of a matrix M on the positive orthant: the projection Π_{K+} on the positive orthant K+ is given entry-wise by (Π_{K+}(M))[i,j] = max(M[i,j], 0).
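These operators admit the following direct implementations (NumPy sketch, ours):

import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def project_columns_l2_ball(M):
    """Column-wise projection onto the l2 unit ball."""
    norms = np.maximum(np.linalg.norm(M, axis=0, keepdims=True), 1.0)
    return M / norms

def project_positive_orthant(M):
    """Projection onto the positive orthant."""
    return np.maximum(M, 0.0)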
"752855",
"858908"
] | [
"554512",
"554445",
"2068"
] |
01767321 | en | ["info"] | 2024/03/05 22:32:15 | 2015 | https://inria.hal.science/hal-01767321/file/978-3-319-19195-9_8_Chapter.pdf |
Ferruccio Damiani
email: [email protected]
Mirko Viroli
email: [email protected]
Danilo Pianini
email: [email protected]
Jacob Beal
email: [email protected]
Code Mobility Meets Self-organisation: a Higher-order Calculus of Computational Fields
Self-organisation mechanisms, in which simple local interactions result in robust collective behaviors, are a useful approach to managing the coordination of large-scale adaptive systems. Emerging pervasive application scenarios, however, pose an openness challenge for this approach, as they often require flexible and dynamic deployment of new code to the pertinent devices in the network, and safe and predictable integration of that new code into the existing system of distributed self-organisation mechanisms. We approach this problem of combining self-organisation and code mobility by extending "computational field calculus", a universal calculus for specification of self-organising systems, with a semantics for distributed first-class functions. Practically, this allows selforganisation code to be naturally handled like any other data, e.g., dynamically constructed, compared, spread across devices, and executed in safely encapsulated distributed scopes. Programmers may thus be provided with the novel firstclass abstraction of a "distributed function field", a dynamically evolving map from a network of devices to a set of executing distributed processes.
Introduction
In many different ways, our environment is becoming ever more saturated with computing devices. Programming and managing such complex distributed systems is a difficult challenge and the subject of much ongoing investigation in contexts such as cyberphysical systems, pervasive computing, robotic systems, and large-scale wireless sensor networks. A common theme in these investigations is aggregate programming, which aims to take advantage of the fact that the goal of many such systems are best described in terms of the aggregate operations and behaviours, e.g., "distribute the new version of the application to all subscribers", or "gather profile information from everybody in the festival area", or "switch on safety lights on fast and safe paths towards the emergency exit". Aggregate programming languages provide mechanisms for building systems in terms of such aggregate-level operations and behaviours, and a global-to-local mapping that translates such specifications into an implementation in terms of the actions and interactions of individual devices. In this mapping, self-organisation techniques provide an effective source of building blocks for making such systems robust to device faults, network topology changes, and other contingencies. A wide range of such aggregate programming approaches have been proposed [START_REF] Beal | Organizing the aggregate: Languages for spatial computing[END_REF]: most of them share the same core idea of viewing the aggregate in terms of dynamically evolving fields, where a field is a function that maps each device in some domain to a computational value. Fields then become first-class elements of computation, used for tasks such as modelling input from sensors, output to actuators, program state, and the (evolving) results of computation.
Many emerging pervasive application scenarios, however, pose a challenge to these approaches due to their openness. In these scenarios, there is need to flexibly and dynamically deploy new or revised code to pertinent devices in the network, to adaptively shift which devices are running such code, and to safely and predictably integrate it into the existing system of distributed processes. Prior aggregate programming approaches, however, have either assumed that no such dynamic changes of code exist (e.g., [START_REF] Beal | Infrastructure for engineered emergence in sensor/actuator networks[END_REF][START_REF] Viroli | A calculus of computational fields[END_REF]), or else provide no safety guarantees ensuring that dynamically composed code will execute as designed (e.g., [START_REF] Mamei | Programming pervasive and mobile computing applications: The tota approach[END_REF][START_REF] Viroli | Linda in space-time: an adaptive coordination model for mobile ad-hoc environments[END_REF]). Accordingly, our goal in this paper is develop a foundational model that supports both code mobility and the predictable composition of self-organisation mechanisms. Moreover, we aim to support this combination such that these same self-organisation mechanisms can also be applied to manage and direct the deployment of mobile code.
To address the problem in a general and tractable way, we start from the field calculus [START_REF] Viroli | A calculus of computational fields[END_REF], a recently developed minimal and universal [START_REF] Beal | Towards a unified model of spatial computing[END_REF] computational model that provides a formal mathematical grounding for the many languages for aggregate programming. In field calculus, all values are fields, so a natural approach to code mobility is to support fields of first-class functions, just as with first-class functions in most modern programming languages and in common software design patterns such as MapReduce [START_REF] Dean | Mapreduce: simplified data processing on large clusters[END_REF]. By this mechanism, functions (and hence, code) can be dynamically consumed as input, passed around by device-to-device communication, and operated upon just like any other type of program value. Formally, expressions of the field calculus are enriched with function names, anonymous functions, and application of function-valued expressions to arguments, and the operational semantics properly accommodates them with the same core field calculus mechanisms of neighbourhood filtering and alignment [START_REF] Viroli | A calculus of computational fields[END_REF]. This produces a unified model supporting both code mobility and self-organisation, greatly improving over the independent and generally incompatible mechanisms which have typically been employed in previous aggregate programming approaches. Programmers are thus provided with a new first-class abstraction of a "distributed function field": a dynamically evolving map from the network to a set of executing distributed processes.
Section 2 introduces the concepts of higher-order field calculus; Section 3 formalises their semantics; Section 4 illustrates the approach with an example; and Section 5 concludes with a discussion of related and future work.
Fields and First-Class Functions
The defining property of fields is that they allow us to see computation from two different viewpoints. On the one hand, by the standard "local" viewpoint, computation is seen as occurring in a single device, and it hence manipulates data values (e.g., numbers) and communicates such data values with other devices to enable coordination. On the other hand, by the "aggregate" (or "global") viewpoint [START_REF] Viroli | A calculus of computational fields[END_REF], computation is seen as occurring on the overall network of interconnected devices: the data abstraction manipulated is hence a whole distributed field, a dynamically evolving data structure having extent over a subset of the network. This latter viewpoint is very useful when reasoning about aggregates of devices, and will be used throughout this document. Put more precisely, a field value φ may be viewed as a function φ : D → L that maps each device δ in the domain D to an associated data value in range L . Field computations then take fields as input (e.g., from sensors) and produce new fields as outputs, whose values may change over time (e.g., as inputs change or the computation progresses). For example, the input of a computation might be a field of temperatures, as perceived by sensors at each device in the network, and its output might be a Boolean field that maps to true where temperature is greater than 25 • C, and to false elsewhere.
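To make the two viewpoints concrete, the following is a minimal sketch (plain OCaml written for illustration, with integer device identifiers; it is not part of the calculus) of a field as a finite map from devices to values, together with a pointwise computation such as the temperature threshold described above.

```ocaml
(* A field maps each device identifier to a local value. *)
module DeviceMap = Map.Make (Int)

type 'a field = 'a DeviceMap.t

(* Pointwise lifting of a local function to a field, i.e. the "aggregate view"
   of an operation that each device performs on its own value. *)
let map_field (f : 'a -> 'b) (phi : 'a field) : 'b field =
  DeviceMap.map f phi

(* Example: a field of temperatures, one per device ... *)
let temperatures : float field =
  DeviceMap.of_seq (List.to_seq [ (1, 21.0); (2, 27.5); (3, 30.2) ])

(* ... and the Boolean field that is true exactly where temperature > 25. *)
let too_hot : bool field = map_field (fun t -> t > 25.0) temperatures
```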
Field Calculus
The field calculus [START_REF] Viroli | A calculus of computational fields[END_REF] is a tiny functional calculus capturing the essential elements of field computations, much as λ -calculus [START_REF] Church | A set of postulates for the foundation of logic[END_REF] captures the essence of functional computation and FJ [START_REF] Igarashi | Featherweight Java: A minimal core calculus for Java and GJ[END_REF] the essence of object-oriented programming. The primitive expressions of field calculus are data values denoted (Boolean, numbers, and pairs), representing constant fields holding the value everywhere, and variables x, which are either function parameters or state variables (see the rep construct below). These are composed into programs using a Lisp-like syntax with five constructs: (1) Built-in function call (o e 1 • • • e n ): A built-in operator o is a means to uniformly model a variety of "point-wise" operations, i.e. involving neither state nor communication. Examples include simple mathematical functions (e.g., addition, comparison, sine) and context-dependent operators whose result depends on the environment (e.g., the 0-ary operator uid returns the unique numerical identifier δ of the device, and the 0-ary nbr-range operator yields a field where each device maps to a subfield mapping its neighbours to estimates of their current distance from the device). The expression
(o e 1 • • • e n ) thus produces a field mapping each device identifier δ to the result of applying o to the values at δ of its n ≥ 0 arguments e 1 , . . . , e n . (2) Function call (f e 1 . . . e n ): Abstraction and recursion are supported by function definition: functions are declared as (def f(x 1 . . . x n ) e) (where elements x i are formal parameters and e is the body), and expressions of the form (f e 1 . . . e n ) are the way of calling function f passing n arguments.
(a) (if x (f (sns)) (g (sns)))    (b) ((if x f g) (sns))
Fig. 1: Field calculus functions are evaluated over a domain of devices. E.g., in (a) the if operation partitions the network into two subdomains, evaluating f where field x is true and g where it is false (both applied to the output of sensor sns). With first-class functions, however, domains must be constructed dynamically based on the identity of the functions stored in the field, as in (b), which implements an equivalent computation.
(3) Time evolution (rep x e 0 e): The "repeat" construct supports dynamically evolving fields, assuming that each device computes its program repeatedly in asynchronous rounds. It initialises state variable x to the result of initialisation expression e 0 (a value or a variable), then updates it at each step by computing e against the prior value of x. For instance, (rep x 0 (+ x 1)) is the (evolving) field counting in each device how many rounds that device has computed. (4) Neighbourhood field construction (nbr e): Device-to-device interaction is encapsulated in nbr, which returns a field φ mapping each neighbouring device to its most recent available value of e (i.e., the information available if devices broadcast the value of e to their neighbours upon computing it). Such "neighbouring" fields can then be manipulated and summarised with built-in operators, e.g., (min-hood (nbr e)) outputs a field mapping each device to the minimum value of e amongst its neighbours.
(5) Domain restriction (if e 0 e 1 e 2 ): Branching is implemented by this construct, which computes e 1 in the restricted domain where e 0 is true, and e 2 in the restricted domain where e 0 is false.
Any field calculus computation may thus be viewed as a function f taking zero or more input fields and returning one output field, i.e., having the signature f : (D → L ) k → (D → L ). Figure 1a illustrates this concept, showing an example with complementary domains on which two functions are evaluated. This aggregate-level model of computation over fields can then be "compiled" into an equivalent system of local operations and message passing actually implementing the field calculus program on a distributed system [START_REF] Viroli | A calculus of computational fields[END_REF].
Higher-order Field Calculus
The higher-order field calculus (HFC) is an extension of the field calculus with embedded first-class functions, with the primary goal of allowing it to handle functions just like any other value, so that code can be dynamically injected, moved, and executed in network (sub)domains. If functions are "first class" in the language, then: (i) functions can take functions as arguments and return a function as result (higher-order functions); (ii) functions can be created "on the fly" (anonymous functions); and (iii) functions can be passed around by device-to-device communication and executed, just like any other value.
How can we evaluate a function call with such a heterogeneous field of functions? It would seem excessive to run a separate copy of function f for every device that has f as its value in the field. At the opposite extreme, running f over the whole domain is problematic for implementation, because it would require devices that may not have a copy of f to help in evaluating f . Instead, we will take a more elegant approach, in which making a function call acts as a branch, with each function in the range applied only on the subspace of devices that hold that function. Formally, this may be expressed as transforming a function-valued field φ into a function f φ that is defined as:
f φ (ψ 1 , ψ 2 , . . . ) = ⋃ f ∈ φ(D) f (ψ 1 | φ −1 (f) , ψ 2 | φ −1 (f) , . . . )    (1)
where the ψ i are the input fields, φ(D) is the set of all functions held as data values by some device in the domain D of φ, and ψ i | φ −1 (f) is the restriction of ψ i to the subspace of only those devices that φ maps to function f . In fact, when the field of functions is constant, this reduces to be precisely equivalent to a standard function call. This means that we can view ordinary evaluation of function f as equivalent to creating a function-valued field with a constant value f , then making a function call applying that field to its argument fields. This elegant transformation is the key insight of this paper, enabling first-class functions to be implemented with a minimal change to the existing semantics while also ensuring compatibility with the prior semantics as well, thus also inheriting its previously established desirable properties.
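As a way to see how Equation (1) operates, here is a small OCaml sketch (an illustration written for this text, not the calculus implementation): a function-valued field is applied by grouping the domain by the function each device holds, restricting the argument field to each group, and merging the disjoint partial results. Closures are compared with physical equality, a deliberate simplification.

```ocaml
module DeviceMap = Map.Make (Int)
type 'a field = 'a DeviceMap.t

(* Devices of [funs] that hold (physically) the same function as [f]. *)
let subdomain_of f (funs : _ field) : int -> bool =
 fun d -> match DeviceMap.find_opt d funs with Some g -> g == f | None -> false

let restrict keep (phi : 'a field) : 'a field =
  DeviceMap.filter (fun d _ -> keep d) phi

(* Apply a field of (field-to-field) functions to an argument field, in the
   spirit of Eq. (1): each function f is evaluated only against the argument
   restricted to the devices that hold f, and the disjoint results are merged.
   For clarity f is re-applied per device; a real implementation would group
   the devices holding the same function first. *)
let apply_field (funs : ('a field -> 'b field) field) (arg : 'a field) : 'b field =
  DeviceMap.fold
    (fun dev f acc ->
       let partial = f (restrict (subdomain_of f funs) arg) in
       match DeviceMap.find_opt dev partial with
       | Some v -> DeviceMap.add dev v acc
       | None -> acc)
    funs DeviceMap.empty
```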
3 The Higher-order Field Calculus: Dynamic and Static Semantics
Dynamic Semantics (Big-Step Operational Semantics)
As for the field calculus [START_REF] Viroli | A calculus of computational fields[END_REF], devices undergo computation in rounds. In each round, a device sleeps for some time, wakes up, gathers information about messages received from neighbours while sleeping, performs an evaluation of the program, and finally emits a message to all neighbours with information about the outcome of computation before going back to sleep.
The scheduling of such rounds across the network is fair and non-synchronous. This section presents a formal semantics of device computation, which is aimed to represent a specification for any HFC-like programming language implementation. The syntax of the HFC calculus has been introduced in Section 2 (Fig. 2). In the following, we let meta-variable δ range over the denumerable set D of device identifiers (which are numbers). To simplify the notation, we shall assume a fixed program P. We say that "device δ fires", to mean that the main expression of P is evaluated on δ .
We model device computation by a big-step operational semantics where the result of evaluation is a value-tree θ , which is an ordered tree of values, tracking the result of any evaluated subexpression. Intuitively, the evaluation of an expression at a given time in a device δ is performed against the recently-received value-trees of neighbours, namely, its outcome depends on those value-trees. The result is a new value-tree that is conversely made available to δ 's neighbours (through a broadcast) for their firing; this includes δ itself, so as to support a form of state across computation rounds (note that any implementation might massively compress the value-tree, storing only enough information for expressions to be aligned). A value-tree environment Θ is a map from device identifiers to value-trees, collecting the outcome of the last evaluation on the neighbours. This is written δ → θ as short for δ 1 → θ 1 , . . . , δ n → θ n .
The syntax of field values, value-trees and value-tree environments is given in Fig. 3 (top). Figure 3 (middle) defines: the auxiliary functions ρ and π for extracting the root value and a subtree of a value-tree, respectively (further explanations about function π will be given later); the extension of functions ρ and π to value-tree environments; and the auxiliary functions args and body for extracting the formal parameters and the body of a (user-defined or anonymous) function, respectively. The computation that takes place on a single device is formalised by the big-step operational semantics rules given in Fig. 3 (bottom). The derived judgements are of the form δ ;Θ e ⇓ θ , to be read "expression e evaluates to value-tree θ on device δ with respect to the value-tree environment Θ ", where: (i) δ is the identifier of the current device; (ii) Θ is the field of the value-trees produced by the most recent evaluation of (an expression corresponding to) e on δ 's neighbours; (iii) e is a run-time expression (i.e., an expression that may contain field values); (iv) the value-tree θ represents the values computed for all the expressions encountered during the evaluation of e-in particular ρ(θ ) is the resulting value of expression e. The first firing of a device δ after activation or reset is performed with respect to the empty tree environment, while any other firing must consider the outcome of the most recent firing of δ (i.e., whenever Θ is not empty, it includes the value of the most recent evaluation of e on δ )-this is needed to support the stateful semantics of the rep construct.
Field values, value-trees, and value-tree environments:
φ ::= δ → ℓ   (field value)        θ ::= v⟨θ̄⟩   (value-tree)        Θ ::= δ → θ   (value-tree environment)
Auxiliary functions:
ρ(v⟨θ̄⟩) = v
π i (v⟨θ 1 , . . . , θ n ⟩) = θ i if 1 ≤ i ≤ n;    π i (θ) = • otherwise
π ℓ,n (v⟨θ 1 , . . . , θ n+2 ⟩) = θ n+2 if ρ(θ n+1 ) = ℓ;    π ℓ,n (θ) = • otherwise
For aux ∈ {ρ, π i , π ℓ,n }:   aux(δ → θ) = δ → aux(θ) if aux(θ) ≠ •;   aux(δ → θ) = • if aux(θ) = •;   aux(Θ, Θ′) = aux(Θ), aux(Θ′)
args(f) = x̄ if (def f(x̄) e)     body(f) = e if (def f(x̄) e)     args((fun (x̄) e)) = x̄     body((fun (x̄) e)) = e
Rules for expression evaluation (judgement δ; Θ ⊢ e ⇓ θ):
[E-LOC]  δ; Θ ⊢ ℓ ⇓ ℓ⟨⟩
[E-FLD]  φ′ = φ| dom(Θ)∪{δ}   implies   δ; Θ ⊢ φ ⇓ φ′⟨⟩
[E-B-APP]  δ; π n+1 (Θ) ⊢ e n+1 ⇓ θ n+1 ,  ρ(θ n+1 ) = o,  δ; π i (Θ) ⊢ e i ⇓ θ i (i = 1..n),  v = ε o δ;Θ (ρ(θ 1 ), . . . , ρ(θ n ))   implies   δ; Θ ⊢ e n+1 (e 1 , . . . , e n ) ⇓ v⟨θ 1 , . . . , θ n+1 ⟩
[E-D-APP]  δ; π n+1 (Θ) ⊢ e n+1 ⇓ θ n+1 ,  ρ(θ n+1 ) = ℓ,  args(ℓ) = x 1 , . . . , x n ,  δ; π i (Θ) ⊢ e i ⇓ θ i (i = 1..n),  body(ℓ) = e,  δ; π ℓ,n (Θ) ⊢ e[x 1 := ρ(θ 1 ) . . . x n := ρ(θ n )] ⇓ θ n+2 ,  v = ρ(θ n+2 )   implies   δ; Θ ⊢ e n+1 (e 1 , . . . , e n ) ⇓ v⟨θ 1 , . . . , θ n+2 ⟩
[E-REP]  ℓ 0 = ρ(Θ(δ)) if Θ ≠ ∅, ℓ 0 = ℓ otherwise;  δ; π 1 (Θ) ⊢ e[x := ℓ 0 ] ⇓ θ 1 ,  ℓ 1 = ρ(θ 1 )   implies   δ; Θ ⊢ (rep x ℓ e) ⇓ ℓ 1 ⟨θ 1 ⟩
[E-NBR]  Θ 1 = π 1 (Θ),  δ; Θ 1 ⊢ e ⇓ θ 1 ,  φ = ρ(Θ 1 )[δ → ρ(θ 1 )]   implies   δ; Θ ⊢ (nbr e) ⇓ φ⟨θ 1 ⟩
[E-THEN]  δ; π 1 (Θ) ⊢ e ⇓ θ 1 ,  ρ(θ 1 ) = true,  δ; π true,0 (Θ) ⊢ e′ ⇓ θ 2 ,  ℓ = ρ(θ 2 )   implies   δ; Θ ⊢ (if e e′ e″) ⇓ ℓ⟨θ 1 , θ 2 ⟩
[E-ELSE]  δ; π 1 (Θ) ⊢ e ⇓ θ 1 ,  ρ(θ 1 ) = false,  δ; π false,0 (Θ) ⊢ e″ ⇓ θ 2 ,  ℓ = ρ(θ 2 )   implies   δ; Θ ⊢ (if e e′ e″) ⇓ ℓ⟨θ 1 , θ 2 ⟩
Fig. 3: Big-step operational semantics for expression evaluation.
The operational semantics rules are based on rather standard rules for functional languages, extended so as to be able to evaluate a subexpression e′ of e with respect to the value-tree environment Θ′ obtained from Θ by extracting the corresponding subtree (when present) in the value-trees in the range of Θ. This process, called alignment, is modelled by the auxiliary function π, defined in Fig. 3 (middle). The function π has two different behaviours (specified by its subscript or superscript): π i (θ) extracts the i-th subtree of θ, if it is present; and π ℓ,n (θ) extracts the (n + 2)-th subtree of θ, if it is present and the root of the (n + 1)-th subtree of θ is equal to the local value ℓ.
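For readers who prefer code to inference rules, the following OCaml sketch (an illustrative reconstruction, not the reference implementation) shows the shape of value-trees and of the extraction functions ρ and π i used for alignment.

```ocaml
(* A value-tree: the value computed for an expression, together with the
   value-trees of its evaluated subexpressions. *)
type value = Int of int | Bool of bool | Op of string   (* local values, simplified *)
type vtree = Node of value * vtree list

(* rho: the root, i.e. the overall result of the evaluated expression. *)
let rho (Node (v, _)) = v

(* pi_i: the i-th subtree, if present (1-based, as in Fig. 3). *)
let pi i (Node (_, subtrees)) : vtree option =
  List.nth_opt subtrees (i - 1)

(* Alignment of a value-tree environment: for each neighbour, descend into the
   subtree that corresponds to the subexpression being evaluated; neighbours
   whose tree has no such subtree are dropped. *)
let align i (env : (int * vtree) list) : (int * vtree) list =
  List.filter_map
    (fun (dev, t) -> Option.map (fun t' -> (dev, t')) (pi i t))
    env
```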
Rules [E-LOC] and [E-FLD] model the evaluation of expressions that are either a local value or a field value, respectively. For instance, evaluating the expression 1 produces (by rule [E-LOC]) the value-tree 1 (), while evaluating the expression + produces the value-tree + (). Note that, in order to ensure that domain restriction is obeyed (cf.
Section 2), rule [E-FLD] restricts the domain of the value field φ to the domain of Θ augmented by δ .
Rule [E-B-APP] models the application of built-in functions. It is used to evaluate expressions of the form (e n+1 e 1 • • • e n ) such that the evaluation of e n+1 produces a value-tree θ n+1 whose root ρ(θ n+1 ) is a built-in function o. It produces the value-tree v (θ 1 , . . . , θ n , θ n+1 ), where θ 1 , . . . , θ n are the value-trees produced by the evaluation of the actual parameters e 1 , . . . , e n (n ≥ 0) and v is the value returned by the function.
Rule [E-B-APP] exploits the special auxiliary function ε, whose actual definition is abstracted away. This is such that ε o δ ;Θ (v) computes the result of applying built-in function o to values v in the current environment of the device δ . In particular, we assume that the built-in 0-ary function uid gets evaluated to the current device identifier (i.e., ε uid δ ;Θ () = δ ), and that mathematical operators have their standard meaning, which is independent from δ and Θ (e.g., ε + δ ;Θ (1, 2) = 3). The ε function also encapsulates measurement variables such as nbr-range and interactions with the external world via sensors and actuators. In order to ensure that domain restriction is obeyed, for each built-in function o we assume that:
ε o δ ;Θ (v 1 , • • • , v n ) is defined only if all the field values in v 1 , . . . , v n have domain dom(Θ ) ∪ {δ }; and if ε o δ ;Θ (v 1 , • • • , v n ) returns a field value φ , then dom(φ ) = dom(Θ ) ∪ {δ }.
For instance, evaluating the expression (+ 1 2) produces the value-tree 3 (1 (), 2 (), + ()). The value of the whole expression, 3, has been computed by using rule [E-B-APP] to evaluate the application of the sum operator + (the root of the third subtree of the value-tree) to the values 1 (the root of the first subtree of the value-tree) and 2 (the root of the second subtree of the value-tree). In the following, for sake of readability, we sometimes write the value v as short for the value-tree v (). Following this convention, the value-tree 3 (1 (), 2 (), + ()) is shortened to 3 (1, 2, +).
Rule [E-D-APP] models the application of user-defined or anonymous functions, i.e., it is used to evaluate expressions of the form (e n+1 e 1 • • • e n ) such that the evaluation of e n+1 produces a value-tree θ n+1 whose root = ρ(θ n+1 ) is a user-defined function name or an anonymous function. It is similar to rule [E-B-APP], however it produces a value-tree which has one more subtree, θ n+2 , which is produced by evaluating the body of the function with respect to the value-tree environment π ,n (Θ ) containing only the value-trees associated to the evaluation of the body of the same function .
To illustrate rule [E-REP] (rep construct), as well as computational rounds, we consider program (rep x 0 (+ x 1)) (cf. Section 2). The first firing of a device δ after activation or reset is performed against the empty tree environment. Therefore, according to rule [E-REP], to evaluate (rep x 0 (+ x 1)) means to evaluate the subexpression (+ 0 1), obtained from (+ x 1) by replacing x with 0. This produces the value-tree θ 1 = 1 (1 (0, 1, +)), where root 1 is the overall result as usual, while its sub-tree is the result of evaluating the third argument. Any subsequent firing of the device δ is performed with respect to a tree environment Θ that associates to δ the outcome of the most recent firing of δ . Therefore, evaluating (rep x 0 (+ x 1)) at the second firing means to evaluate the subexpression (+ 1 1), obtained from (+ x 1) by replacing x with 1, which is the root of θ 1 . Hence the results of computation are 1, 2, 3, and so on.
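A compact way to visualise rule [E-REP] over successive rounds is the following OCaml sketch (again only illustrative): the root of the value-tree produced in one round is fed back, through the device's own entry in the environment, as the value of x in the next round.

```ocaml
(* One round of (rep x 0 (+ x 1)) for a single device: the previous root (if
   any) plays the role of rho(Theta(delta)) in rule [E-REP]. *)
let rep_round (previous_root : int option) : int =
  let x0 = match previous_root with Some v -> v | None -> 0 in
  x0 + 1

(* Iterating the rounds yields 1, 2, 3, ... as in the text. *)
let rec run n prev =
  if n = 0 then []
  else
    let r = rep_round prev in
    r :: run (n - 1) (Some r)

let first_rounds = run 5 None   (* [1; 2; 3; 4; 5] *)
```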
Value-trees also support modelling information exchange through the nbr construct, as of rule [E-NBR]. Consider the program e = (min-hood (nbr (sns-num))), where the 1-ary built-in function min-hood returns the lower limit of values in the range of its field argument, and the 0-ary built-in function sns-num returns the numeric value measured by a sensor. Suppose that the program runs on a network of three fully connected devices δ A , δ B , and δ C where sns-num returns 1 on δ A , 2 on δ B , and 3 on δ C . Considering an initial empty tree-environment / 0 on all devices, we have the following: the evaluation of (sns-num) on δ A yields 1 (sns-num) (by rules [E-LOC] and [E-B-APP], since ε sns-num δ A ; / 0 () = 1); the evaluation of (nbr (sns-num)) on δ A yields (δ A → 1) (1 (sns-num)) (by rule [E-NBR]); and the evaluation of e on δ A yields
θ A = 1 ((δ A → 1) (1 (sns-num)), min-hood) (by rule [E-B-APP], since ε min-hood δ A ; / 0 ((δ A → 1)) = 1)
. Therefore, after its first firing, device δ A produces the value-tree θ A . Similarly, after their first firing, devices δ B and δ C produce the value-trees
θ B = 2 ((δ B → 2) (2 (sns-num)), min-hood) θ C = 3 ((δ C → 3) (3 (sns-num)), min-hood)
respectively. Suppose that device δ B is the first device that fires a second time. Then the evaluation of e on δ B is now performed with respect to the value tree environment
Θ B = (δ A → θ A , δ B → θ B , δ C → θ C
) and the evaluation of its subexpressions (nbr(sns-num)) and (sns-num) is performed, respectively, with respect to the following value-tree environments obtained from Θ B by alignment:
Θ B = π 1 (Θ B ) = (δ A → (δ A → 1) (1 (sns-num)), δ B → • • • , δ C → • • • ) Θ B = π 1 (Θ B ) = (δ A → 1 (sns-num), δ B → 2 (sns-num), δ C → 3 (sns-num))
We have that ε sns-num δ B ;Θ B () = 2; the evaluation of (nbr (sns-num)) on δ B with respect to Θ B yields φ (2 (sns-num)) where φ = (δ A → 1, δ B → 2, δ C → 3); and ε min-hood δ B ;Θ B (φ ) = 1. Therefore the evaluation of e on δ B produces the value-tree 1 (φ (2 (sns-num)), min-hood). Namely, the computation at device δ B after the first round yields 1, which is the minimum of sns-num across neighbours-and similarly for δ A and δ C . We now present an example illustrating first-class functions. Consider the program ((pick-hood (nbr (sns-fun)))), where the 1-ary built-in function pick-hood returns at random a value in the range of its field argument, and the 0-ary built-in function sns-fun returns a 0-ary function returning a value of type num. Suppose that the program runs again on a network of three fully connected devices δ A , δ B , and δ C where sns-fun returns 0 = (fun () 0) on δ A and δ B , and returns 1 = (fun () e ) on δ C , where e = (min-hood (nbr (sns-num))) is the program illustrated in the previous example. Assume that sns-num returns 1 on δ A , 2 on δ B , and 3 on δ C . Then after its first firing, device δ A produces the value-tree
θ A = 0 ( 0 ((δ A → 0 ) ( 0 (sns-fun)), pick-hood), 0)
where the root of the first subtree of θ A is the anonymous function 0 (defined above), and the second subtree of θ A , 0, has been produced by the evaluation of the body 0 of 0 . After their first firing, devices δ B and δ C produce the value-trees
θ B = 0 ( 0 ((δ B → 0 ) ( 0 (sns-fun)), pick-hood), 0) θ C = 3 ( 1 ((δ C → 1 ) ( 1 (sns-fun)), pick-hood), θ C )
respectively, where θ C is the value-tree for e given in the previous example.
Suppose that device δ A is the first device that fires a second time. The computation is performed with respect to the value tree environment
Θ A = (δ A → θ A , δ B → θ B , δ C → θ C
) and produces the value-tree 1 ( 1 (φ ( 1 (sns-fun)), pick-hood), θ A ), where
φ = (δ A → 1 , δ C → 1 ) and θ A = 1 ((δ A → 1, δ C → 3) (1 (sns-num)), min-hood),
since, according to rule [E-D-APP], the evaluation of the body e of 1 (which produces the value-tree θ A ) is performed with respect to the value-tree environment π 1 ,0 (Θ A ) = (δ C → θ C ). Namely, device δ A executed the anonymous function 1 received from δ C , and this was able to correctly align with execution of 1 at δ C , gathering values perceived by sns-num of 1 at δ A and 3 at δ C .
Static Semantics (Type-Inference System)
We have developed a variant of the Hindley-Milner type system [START_REF] Damas | Principal type-schemes for functional programs[END_REF] for the HFC calculus. This type system has two kinds of types, local types (the types for local values) and field types (the types for field values), and is aimed to guarantee the following two properties:
Type Preservation If a well-typed expression e has type T and e evaluates to a value tree θ , then ρ(θ ) also has type T. Domain Alignment The domain of every field value arising during the evaluation of a well-typed expression on a device δ consists of δ and of the aligned neighbours.
Alignment is key to guarantee that the semantics correctly relates the behaviour of if, nbr, rep and function application, namely that two fields with different domains are never allowed to be combined. Besides performing standard checks (i.e., in a function application expression (e n+1 e 1 • • • e n ) the arguments e 1 , . . . e n have the expected type; in an if-expression (if e 0 e 1 e 2 ) the condition e 0 has type bool and the branches e 1 and e 2 have the same type; etc.) the type system performs additional checks in order to ensure domain alignment. In particular, the type rules check that:
-In an anonymous function (fun (x) e) the free variables y of e that are not in x have local type. This prevents a device δ from creating a closure e′ = (fun (x) e)[y := φ] containing field values φ (whose domain is by construction equal to the subset of the aligned neighbours of δ). The closure e′ may lead to a domain alignment error since it may be shifted (via the nbr construct) to another device δ′ that may use it (i.e., apply e′ to some arguments); and the evaluation of the body of e′ may involve use of a field value in φ such that the set of aligned neighbours of δ′ is different from the domain of that field value.
-In a rep-expression (rep x w e) it holds that x, w and e have (the same) local type. This prevents a device δ from storing in x a field value φ that may be reused in the next computation round of δ, when the set of aligned neighbours may be different from the domain of φ.
-In a nbr-expression (nbr e) the expression e has local type. This prevents the attempt to create a "field of fields" (i.e., a field that maps device identifiers to field values), which is pragmatically often overly costly to maintain and communicate.
-In an if-expression (if e 0 e 1 e 2 ) the branches e 1 and e 2 have (the same) local type. This prevents the if-expression from evaluating to a field value whose domain is different from the subset of the aligned neighbours of δ.
We now illustrate the application of first-class functions using a pervasive computing example. In this scenario, people wandering a large environment (like an outdoor festival, an airport, or a museum) each carry a personal device with short-range pointto-point ad-hoc capabilities (e.g. a smartphone sending messages to others nearby via Bluetooth or Wi-Fi). All devices run a minimal "virtual machine" that allows runtime injection of new programs: any device can initiate a new distributed process (in the form of a 0-ary anonymous function), which the virtual machine spreads to all other devices within a specified range (e.g., 30 meters). For example, a person might inject a process that estimates crowd density by counting the number of nearby devices or a process that helps people to rendezvous with their friends, with such processes likely implemented via various self-organisation mechanisms. The virtual machine then executes these using the first-class function semantics above, providing predictable deployment and execution of an open class of runtime-determined processes.
Virtual Machine Implementation The complete code for our example is listed in Figure 4, with syntax coloring to increase readability: grey for comments, red for field calculus keywords, blue for user-defined functions, and green for built-in operators. In this code, we use the following naming conventions for built-ins: functions sns-* embed sensors that return a value perceived from the environment (e.g., sns-injection-point returns a Boolean indicating whether a device's user wants to inject a function); functions *-hood yield a local value obtained by aggregating over the field value φ in input (e.g., sum-hood sums all values in each neighbourhood); functions *-hood+ behave the same but exclude the value associated with the current device; and built-in functions pair, fst, and snd respectively create a pair of locals and access a pair's first and second component. Additionally, given a built-in o that takes n ≥ 1 locals an returns a local, the built-ins o[*,...,*] are variants of o where one or more inputs are fields (as indicated in the bracket, l for local or f for field), and the return value is a field, obtained by applying operator o in a point-wise manner. For instance, as = compares two locals returning a Boolean, =[f,f] is the operator taking two field inputs and returns a Boolean field where each element is the comparison of the corresponding elements in the inputs, and similarly =[f,l] takes a field and a local and returns a Boolean field where each element is the comparison of the corresponding element of the field in input with the local. The first two functions in Figure 4 implement frequently used self-organisation mechanisms. Function distance-to, also known as gradient [START_REF] Clement | Self-assembly and self-repairing topologies[END_REF][START_REF] Lin | The gradient model load balancing method[END_REF], computes a field of minimal distances from each device to the nearest "source" device (those mapping to true in the Boolean input field). This is computed by repeated application of the triangle inequality (via rep): at every round, source devices take distance zero, while all others update their distance estimates d to the minimum distance estimate through their neighbours (min-hood+ of each neighbour's distance estimate (nbr d) plus the distance to that neighbour nbr-range); source and non-source are discriminated by mux, a builtin "multiplexer" that operates like an if but differently from it always evaluates both branches on every device. Repeated application of this update procedure self-stabilises into the desired field of distances, regardless of any transient perturbations or faults [START_REF] Kutten | Time-adaptive self stabilization[END_REF]. The second self-organisation mechanism, gradcast, is a directed broadcast, achieved by a computation identical to that of distance-to, except that the values are pairs (note that pair[f,f] produces a field of pairs, not a pair of fields), with the second element set to the value of v at the source: min-hood operates on pairs by applying lexicographic ordering, so the second value of the pair is automatically carried along shortest paths from the source. The result is a field of pairs of distance and most recent value of v at the nearest source, of which only the value is returned.
The latter two functions in Figure 4 use these self-organisation methods to implement our simple virtual machine. Code mobility is implemented by function deploy, which spreads a 0-ary function g via gradcast, keeping it bounded within distance range from sources, and holding 0-ary function no-op elsewhere. The corresponding field of functions is then executed (note the double parenthesis). The virtual-machine then simply calls deploy, linking its arguments to sensors configuring deployment range and detecting who wants to inject which functions (and using (fun () 0) as no-op function).
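The deployment logic just described can be pictured with the following sketch (hypothetical OCaml written for illustration; the actual virtual machine is the field calculus code of Figure 4): each device selects either the injected thunk or a no-op, depending on its distance from the injection point, and the resulting function-valued field is then applied as in Section 2.

```ocaml
(* Per-device choice made by deploy: run the injected thunk [g] within [range]
   of the source, and a no-op elsewhere. The distances would come from a
   distance-to computation; here they are simply given as inputs. *)
let deploy ~range ~(distance : float) ~(g : unit -> int) : unit -> int =
  if distance <= range then g else fun () -> 0

(* Example: only devices within 30.0 of the injection point execute [g];
   the others hold the constant no-op (fun () -> 0). *)
let chosen = deploy ~range:30.0 ~distance:12.5 ~g:(fun () -> 42)
let result = chosen ()   (* 42 on this device *)
```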
In essence, this virtual machine implements a code-injection model much like those used in a number of other pervasive computing approaches (e.g., [START_REF] Mamei | Programming pervasive and mobile computing applications: The tota approach[END_REF][START_REF] Gelernter | Generative communication in linda[END_REF][START_REF] Butera | Programming a Paintable Computer[END_REF])-though of course it has much more limited features, since it is only an illustrative example. With these previous approaches, however, code shares lexical scope and cannot have its network domain externally controlled. Thus, injected code may spread through the network unpredictably and may interact unpredictably with other injected code that it encounters. The extended field calculus semantics that we have presented, however, ensures that injected code moves only within the range specified to the virtual machine and remains lexically isolated from different injected code, so that no variable can be unexpectedly affected by interactions with neighbours.
Simulated Example Application
We further illustrate the application of first-class functions with an example in a simulated scenario. Consider a museum, whose docents monitor their efficacy in part by tracking the number of patrons nearby while they are working. To monitor the number of nearby patrons, each docent's device injects the following anonymous function (of type: () → num):
(fun () (low-pass 0.5 (converge-sum (distance-to (sns-injection-point)) (sns-patron))))
This counts patrons using the function converge-sum defined in Figure 4(bottom), a simple version of another standard self-organisation mechanism [START_REF] Beal | Building blocks for aggregate programming of self-organising applications[END_REF] which operates like an inverse broadcast, summing the values sensed by sns-patron (1 for a patron, 0 for a docent) down the distance gradient back to its source-in this case the docent at the injection point. In particular, each device's local value is summed with those identifying it as their parent (their closest neighbour to the source, breaking ties with device unique identifiers from built-in function uid), resulting in a relatively balanced spanning tree of summations with the source at its root. This very simple version of summation is somewhat noisy on a moving network of devices, so its output is passed through a simple low-pass filter, the function low-pass, also defined in Figure 4(bottom), in order to smooth its output and improve the quality of estimate. Figure 5a shows a simulation of a docent and 250 patrons in a large 100x30 meter museum gallery. Of the patrons, 100 are a large group of school-children moving together past the stationary docent from one side of the gallery to the other, while the rest are wandering randomly. In this simulation, people move at an average 1 m/s, the docent and all patrons carry personal devices running the virtual machine, executing asynchronously at 10Hz, and communicating via low-power Bluetooth to a range of 10 meters. The simulation was implemented using the ALCHEMIST [START_REF] Pianini | Chemical-oriented simulation of computational systems with Alchemist[END_REF] simulation framework and the Protelis [START_REF] Pianini | Practical aggregate programming with PROTELIS[END_REF] incarnation of field calculus, updated to the extended version of the calculus presented in this paper.
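The smoothing step can be understood as an exponentially weighted moving average; the sketch below (plain OCaml, written for this text and not taken from Figure 4) shows one simple realisation of the kind of filter that (low-pass 0.5 ...) denotes.

```ocaml
(* Exponentially weighted moving average with weight [alpha] in [0, 1]:
   each new sample is blended with the previous smoothed estimate. *)
let low_pass alpha previous sample =
  (alpha *. sample) +. ((1.0 -. alpha) *. previous)

(* Smoothing a noisy sequence of patron counts with alpha = 0.5. *)
let smooth samples =
  match samples with
  | [] -> []
  | first :: rest ->
      let _, acc =
        List.fold_left
          (fun (prev, out) s ->
             let v = low_pass 0.5 prev s in
             (v, v :: out))
          (first, [ first ]) rest
      in
      List.rev acc
```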
In this simulation, at time 10 seconds, the docent injects the patron-counting function with a range of 25 meters, and at time 70 seconds removes it. Figure 5a shows two snapshots of the simulation, at times 11 (top) and 35 (bottom) seconds, while Figure 5b compares the estimated value returned by the injected process with the true value. Note that upon injection, the process rapidly disseminates and begins producing good estimates of the number of nearby patrons, then cleanly terminates upon removal.
Conclusion, Related and Future Work
Conceiving emerging distributed systems in terms of computations involving aggregates of devices, and hence adopting higher-level abstractions for system development, is a thread that has recently received a good deal of attention. A wide range of aggregate programming approaches have been proposed, including Proto [START_REF] Beal | Infrastructure for engineered emergence in sensor/actuator networks[END_REF], TOTA [START_REF] Mamei | Programming pervasive and mobile computing applications: The tota approach[END_REF], the (bio)chemical tuple-space model [START_REF] Viroli | Spatial coordination of pervasive services through chemical-inspired tuple spaces[END_REF], Regiment [START_REF] Newton | Region streams: Functional macroprogramming for sensor networks[END_REF], the σ τ-Linda model [START_REF] Viroli | Linda in space-time: an adaptive coordination model for mobile ad-hoc environments[END_REF], Paintable Computing [START_REF] Butera | Programming a Paintable Computer[END_REF], and many others included in the extensive survey of aggregate programming languages given in [START_REF] Beal | Organizing the aggregate: Languages for spatial computing[END_REF]. Those that best support self-organisation approaches to robust and environment-independent computations have generally lacked well-engineered mechanisms to support openness and code mobility (injection, update, etc.). Our contribution has been to develop a core calculus, building on the work presented in [START_REF] Viroli | A calculus of computational fields[END_REF], that smoothly combines for the first time self-organisation and code mobility, by means of the abstraction of "distributed function field". This combination of first-class functions with the domain-restriction mechanisms of field calculus allows the predictable and safe composition of distributed self-organisation mechanisms at runtime, thereby enabling robust operation of open pervasive systems. Furthermore, the simplicity of the calculus enables it to easily serve as both an analytical framework and a programming framework, and we have already incorporated this into Protelis [START_REF] Pianini | Practical aggregate programming with PROTELIS[END_REF], thereby allowing these mechanisms to be deployed both in simulation and in actual distributed systems.
Future plans include consolidation of this work, by extending the calculus and its conceptual framework, to support an analytical methodology and a practical toolchain for system development, as outlined in [START_REF] Beal | Building blocks for aggregate programming of self-organising applications[END_REF]. First, we aim to apply our approach to support various application needs for dynamic management of distributed processes [START_REF] Beal | Dynamically defined processes for spatial computers[END_REF], which may also impact the methods of alignment for anonymous functions. Second, we plan to isolate fragments of the calculus that satisfy behavioural properties such as self-stabilisation, quasi-stabilisation to a dynamically evolving field, or density independence, following the approach of [START_REF] Viroli | A calculus of self-stabilising computational fields[END_REF]. Finally, these foundations can be applied in developing APIs enabling the simple construction of complex distributed applications, building on the work in [START_REF] Beal | Building blocks for aggregate programming of self-organising applications[END_REF] to define a layered library of self-organisation patterns, and applying these APIs to support a wide range of practical distributed applications.
Fig. 2: Syntax of HFC (differences from field calculus are highlighted in grey).
Fig. 4: Virtual machine code (top) and application-specific code (bottom).
Fig. 5: (a) Two snapshots of museum simulation: patrons (grey) are counted (black) within 25 meters of the docent (green). (b) Estimated number of nearby patrons (grey) vs. actual number (black) in the simulation.
This work has been partially supported by HyVar (www.hyvar-project.eu, this project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 644298 -Damiani), by EU FP7 project SAPERE (www.sapere-project.eu, under contract No 256873 -Viroli), by ICT COST Action IC1402 ARVI (www.cost-arvi.eu -Damiani), by ICT COST Action IC1201 BETTY (www.behavioural-types.eu -Damiani), by the Italian PRIN 2010/2011 project CINA (sysma.imtlucca.it/cina -Damiani & Viroli), by Ateneo/CSP project SALT (salt.di.unito.it -Damiani), and by the United States Air Force and the Defense Advanced Research Projects Agency under Contract No. FA8750-10-C-0242 (Beal). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
01767327 | en | info | 2015 | https://inria.hal.science/hal-01767327/file/978-3-319-19195-9_1_Chapter.pdf
Luca Padovani
Luca Novara
Types for Deadlock-Free Higher-Order Programs
Type systems for communicating processes are typically studied using abstract models -e.g., process algebras -that distill the communication behavior of programs but overlook their structure in terms of functions, methods, objects, modules. It is not always obvious how to apply these type systems to structured programming languages. In this work we port a recently developed type system that ensures deadlock freedom in the π-calculus to a higher-order language.
Introduction
In this article we develop a type system that guarantees well-typed programs that communicate over channels to be free from deadlocks. Type systems ensuring this property already exist [START_REF] Kobayashi | A type system for lock-free processes[END_REF][START_REF] Kobayashi | A new type system for deadlock-free processes[END_REF][START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF], but they all use the π-calculus as the reference language. This choice overlooks some aspects of concrete programming languages, like the fact that programs are structured into compartmentalized blocks (e.g., functions) within which only the local structure of the program (the body of a function) is visible to the type system, and little if anything is know about the exterior of the block (the callers of the function). The structure of programs may hinder some kinds of analysis: for example, the type systems in [START_REF] Kobayashi | A type system for lock-free processes[END_REF][START_REF] Kobayashi | A new type system for deadlock-free processes[END_REF][START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] enforce an ordering of communication events and to do so they take advantage of the nature of π-calculus processes, where programs are flat sequences of communication actions. How do we reason on such ordering when the execution order is dictated by the reduction strategy of the language rather than by the syntax of programs, or when events occur within a function, and nothing is known about the events that are supposed to occur after the function terminates? We answer these questions by porting the type system in [START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] to a higher-order functional language.
To illustrate the key ideas of the approach, let us consider the program send a (recv b) | send b (recv a) (1.1) consisting of two parallel threads. The thread on the left is trying to send the message received from channel b on channel a; the thread on the right is trying to do the opposite. The communications on a and b are mutually dependent, and the program is a deadlock. The basic idea used in [START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] and derived from [START_REF] Kobayashi | A type system for lock-free processes[END_REF][START_REF] Kobayashi | A new type system for deadlock-free processes[END_REF] for detecting deadlocks is to assign each channel a number -which we call level -and to verify that channels are used in order according to their levels. In (1.1) this mechanism requires b to have smaller level than a in the leftmost thread, and a to have a smaller level than b in the rightmost thread. No level assignment can simultaneously satisfy both constraints. In order to perform these checks with a type system, the first step is to attach levels to channel types. We therefore assign the types ![int] m and ?[int] n respectively to a and b in the leftmost thread of (1.1), and ?[int] m and ![int] n to the same channels in the rightmost thread of (1.1). Crucially, distinct occurrences of the same channel have types with opposite polarities (input ? and output !) and equal level. We can also think of the assignments send : ∀ı.![int] ı → int → unit and recv : ∀ı.?[int] ı → int for the communication primitives, where we allow polymorphism on channel levels. In this case, the application send a (recv b) consists of two subexpressions, the partial application send a having type int → unit and its argument recv b having type int. Neither of these types hints at the I/O operations performed in these expressions, let alone at the levels of the channels involved. To recover this information we pair types with effects [START_REF] Amtoft | Type and Effect Systems: Behaviours for Concurrency[END_REF]: the effect of an expression is an abstract description of the operations performed during its evaluation. In our case, we take as effect the level of channels used for I/O operations, or ⊥ in the case of pure expressions that perform no I/O. So, the judgment
b : ?[int] n recv b : int & n
states that recv b is an expression of type int whose evaluation performs an I/O operation on a channel with level n. As usual, function types are decorated with a latent effect saying what happens when the function is applied to its argument. So,
a : ![int] m send a : int → m unit & ⊥
states that send a is a function that, applied to an argument of type int, produces a result of type unit and, in doing so, performs an I/O operation on a channel with level m. By itself, send a is a pure expression whose evaluation performs no I/O operations, hence the effect ⊥. Effects help us detecting dangerous expressions: in a call-by-value language an application e 1 e 2 evaluates e 1 first, then e 2 , and finally the body of the function resulting from e 1 . Therefore, the channels used in e 1 must have smaller level than those occurring in e 2 and the channels used in e 2 must have smaller level than those occurring in the body of e 1 . In the specific case of send a (recv b) we have ⊥ < n for the first condition, which is trivially satisfied, and n < m for the second one. Since the same reasoning on send b (recv a) also requires the symmetric condition (m < n), we detect that the parallel composition of the two threads in (1.1) is ill typed, as desired.
It turns out that the information given by latent effects in function types is not sufficient for spotting some deadlocks. To see why, consider the function
f def = λ x.(send a x; send b x)
which sends its argument x on both a and b and where ; denotes sequential composition. The level of a (say m) should be smaller than the level of b (say n), for a is used before b (we assume that communication is synchronous and that send is a potentially blocking operation). The question is, what is the latent effect that decorates the type of f , of the form int → h unit? Consider the two obvious possibilities: if we take h = m, then
recv a | f 3; recv b (1.2)
is well typed because the effect m of f 3 is smaller than the level of b in recv b, which agrees with the fact that f 3 is evaluated before recv b; if we take h = n, then
recv a; f 3 | recv b (1.3)
is well typed for similar reasons. This is unfortunate because both (1.3) and (1.2) reduce to a deadlock. To flag both of them as ill typed, we must refine the type of f to int → m,n unit where we distinguish the smallest level of the channels that occur in the body of f (that is m) from the greatest level of the channels that are used by f when f is applied to an argument (that is n). The first annotation gives information on the channels in the function's closure, while the second annotation is the function's latent effect, as before. So (1.2) is ill typed because the effect of f 3 is the same as the level of b in recv b and (1.3) is ill typed because the effect of recv a is the same as the level of f in f 3.
In the following, we define a core multithreaded functional language with communication primitives (Section 2), we present a basic type and effect system, extend it to address recursive programs, and state its properties (Section 3). Finally, we briefly discuss closely related work and a few extensions (Section 4). Proofs and additional material can be found in long version of the paper, on the first author's home page.
Language syntax and semantics
In defining our language, we assume a synchronous communication model based on linear channels. This assumption limits the range of systems that we can model. However, asynchronous and structured communications can be encoded using linear channels: this has been shown to be the case for binary sessions [START_REF] Dardha | Session types revisited[END_REF] and for multiparty sessions to a large extent [10, technical report].
We use a countable set of variables x, y, . . . , a countable set of channels a, b, . . . , and a set of constants k. Names u, . . . are either variables or channels. We consider a language of expressions and processes as defined below: We write _ for unused/fresh variables. Constants include the unitary value (), the integer numbers m, n, . . . , as well as the primitives fix, fork, new, send, recv whose semantics will be explained shortly. Processes are either threads e , or the restriction (νa)P of a channel a with scope P, or the parallel composition P | Q of processes. The notions of free and bound names are as expected, given that the only binders are λ 's and ν's. We identify terms modulo renaming of bound names and we write fn(e) (respectively, fn(P)) for the set of names occurring free in e (respectively, in P).
The reduction semantics of the language is given by two relations, one for expressions, another for processes. We adopt a call-by-value reduction strategy, for which we need to define reduction contexts E , . . . and values v, w, . . . respectively as:
E ::= [ ] E e vE v, w ::= k a λ x.e send v
The reduction relation -→ for expressions is defined by standard rules
(λ x.e)v -→ e{v/x} fix λ x.e -→ e{fix λ x.e/x}
and closed under reduction contexts. As usual, e{e /x} denotes the capture-avoiding substitution of e for the free occurrences of x in e.
Table 1. Reduction semantics of expressions and processes.
E [send a v] | E [recv a] a - -→ E [()] | E [v] E [fork v] τ - -→ E [()] | v() E [new()] τ - -→ (νa) E [a] a ∈ fn(E ) e -→ e e τ - -→ e P - -→ P P | Q - -→ P | Q P - -→ Q (νa)P - -→ (νa)Q = a P a - -→ Q (νa)P τ - -→ Q P ≡ - -→≡ Q P - -→ Q
The reduction relation of processes (Table 1) has labels , . . . that are either a channel name a, signalling that a communication has occurred on a, or the special symbol τ denoting any other reduction. There are four base reductions for processes: a communication occurs between two threads when one is willing to send a message v on a channel a and the other is waiting for a message from the same channel; a thread that contains a subexpression fork v spawns a new thread that evaluates v(); a thread that contains a subexpression new() creates a new channel; the reduction of an expression causes a corresponding τ-labeled reduction of the thread in which it occurs. Reduction for processes is then closed under parallel compositions, restrictions, and structural congruence. The restriction of a disappears as soon as a communication on a occurs: in our model channels are linear and can be used for one communication only; structured forms of communication can be encoded on top of this simple model (see Example 2 and [5]). Structural congruence is defined by the standard rules rearranging parallel compositions and channel restrictions, where () plays the role of the inert process.
We conclude this section with two programs written using a slightly richer language equipped with let bindings, conditionals, and a few additional operators. All these constructs either have well-known encodings or can be easily accommodated. The fresh channels a and b are used to collect the results from the recursive, parallel invocations of fibo. Note that expressions are intertwined with I/O operations. It is relevant to ask whether this version of fibo is deadlock free, namely if it is able to reduce until a result is computed without blocking indefinitely on an I/O operation.
Example 2 (signal pipe). In this example we implement a function pipe that forwards signals received from an input stream x to an output stream y:
let cont = λx.let c = new() in (fork λ_.send x c); c in
let pipe = fix λpipe.λx.λy.pipe (recv x) (cont y)
Note that this pipe is only capable of forwarding handshaking signals. A more interesting pipe transmitting actual data can be realized by considering data types such as records and sums [START_REF] Dardha | Session types revisited[END_REF]. The simplified realization we consider here suffices to illustrate a relevant family of recursive functions that interleave actions on different channels.
Since linear channels are consumed after communication, each signal includes a continuation channel on which the subsequent signals in the stream will be sent/received. In particular, cont x sends a fresh continuation c on x and returns c, so that c can be used for subsequent communications, while pipe x y sends a fresh continuation on y after it has received a continuation from x, and then repeats this behavior on the continuations. The program below connects two pipes: Even if the two pipes realize a cyclic network, we will see in Section 3 that this program is well typed and therefore deadlock free. Forgetting cont on line 4 or not forking the send on line 1, however, produces a deadlock.
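The connecting program itself (whose line 1 and line 4 the remark above refers to) is not included here; the following is one plausible way of tying two pipes into a cycle, in the same illustrative OCaml notation as the other sketches, with cont and pipe transcribed from the definitions above. The Next wrapper exists only to satisfy OCaml's type checker for channels that carry channels of the same kind.

```ocaml
(* Streams of handshaking signals: each message on a channel is itself a fresh
   channel on which the next signal will travel (cf. cont and pipe above). *)
type signal = Next of signal Event.channel

let new_chan () : signal Event.channel = Event.new_channel ()
let send c v = Event.sync (Event.send c v)
let recv c = match Event.sync (Event.receive c) with Next c' -> c'
let fork f = ignore (Thread.create f ())

(* cont y: emit a fresh continuation on y (in a forked thread) and return it. *)
let cont y =
  let c = new_chan () in
  fork (fun () -> send y (Next c));
  c

(* pipe x y: forward each signal received from x onto y, forever. *)
let rec pipe x y =
  let x' = recv x in
  let y' = cont y in
  pipe x' y'

(* One plausible cyclic connection of two pipes (a reconstruction, not the
   paper's listing): signals forwarded from a to b are fed back from b to a.
   The program loops forever without blocking. *)
let () =
  let a = new_chan () in
  let b = new_chan () in
  fork (fun () -> pipe a b);
  pipe b (cont a)
```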
Type and effect system
We present the features of the type system gradually, in three steps: we start with a monomorphic system (Section 3.1), then we introduce level polymorphism required by Examples 1 and 2 (Section 3.2), and finally recursive types required by Example 2 (Section 3.3). We end the section studying the properties of the type system (Section 3.4).
Core types
Let L def = Z ∪ {⊥, ⊤} be the set of channel levels ordered in the obvious way (⊥ < n < ⊤ for every n ∈ Z); we use ρ, σ, . . . to range over L and we write ρ ⊓ σ (respectively, ρ ⊔ σ) for the minimum (respectively, the maximum) of ρ and σ. Polarities p, q, . . . are non-empty subsets of {?, !}; we abbreviate {?} and {!} with ? and ! respectively, and {?, !} with #. Types t, s, . . . are defined by t, s ::= B | p[t] n | t → ρ,σ s where basic types B, . . . include unit and int. The type p[t] n denotes a channel with polarity p and level n. The polarity describes the operations allowed on the channel: ? means input, ! means output, and # means both input and output. Channels are linear resources: they can be used once according to each element in their polarity. The type t → ρ,σ s denotes a function with domain t and range s. The function has level ρ (its closure contains channels with level ρ or greater) and, when applied, it uses channels with level σ or smaller. If ρ = ⊤, the function has no channels in its closure; if σ = ⊥, the function uses no channels when applied. We write → as an abbreviation for → ⊤,⊥ , so → denotes pure functions not containing and not using any channel.
Recall from Section 1 that levels are meant to impose an order on the use of channels: roughly, the lower the level of a channel, the sooner the channel must be used. We extend the notion of level from channel types to arbitrary types: basic types have level ⊤ because there is no need to use them as far as deadlock freedom is concerned; the level of functions is written in their type. Formally, the level of t, written |t|, is defined as:
|B| def = ⊤        |p[t] n | def = n        |t → ρ,σ s| def = ρ        (3.1)
Levels can be used to distinguish linear types, denoting values (such as channels) that must be used to guarantee deadlock freedom, from unlimited types, denoting values that have no effect on deadlock freedom and may be disregarded. We say that t is linear if |t| ∈ Z; we say that t is unlimited, written un(t), if |t| = ⊤.
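As an informal reading aid, and not part of the formal development, the following Python sketch (all names are ours) represents types as plain dataclasses and computes |t| and un(t) exactly as in (3.1); TOP and BOT stand for ⊤ and ⊥.

from dataclasses import dataclass
from typing import Union

TOP = float('inf')   # stands for the level ⊤
BOT = float('-inf')  # stands for the level ⊥

@dataclass
class Base:          # basic types such as unit and int
    name: str

@dataclass
class Chan:          # p[t]^n : polarity p ⊆ {'?','!'}, payload t, level n
    polarity: frozenset
    payload: 'Type'
    level: int

@dataclass
class Arrow:         # t ->^{rho,sigma} s : level rho, latent effect sigma
    dom: 'Type'
    cod: 'Type'
    rho: Union[int, float]
    sigma: Union[int, float]

Type = Union[Base, Chan, Arrow]

def level(t: Type):
    """|t| as defined in (3.1)."""
    if isinstance(t, Base):
        return TOP
    if isinstance(t, Chan):
        return t.level
    return t.rho

def unlimited(t: Type) -> bool:
    """un(t) holds iff |t| = ⊤."""
    return level(t) == TOP

# The pure identity type int ->^{⊤,⊥} int is unlimited, a channel type is linear.
ident = Arrow(Base('int'), Base('int'), TOP, BOT)
chan = Chan(frozenset({'!'}), Base('int'), 3)
assert unlimited(ident) and not unlimited(chan)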
Below are the type schemes of the constants that we consider. Some constants have many types (constraints are on the right); we write types(k) for the set of types of k.
() : unit
n : int
fix : (t → t) → t
fork : (unit → ρ,σ unit) → unit
new : unit → #[t] n        (n < |t|)
recv : ?[t] n → ⊤,n t        (n < |t|)
send : ![t] n → t → n,n unit        (n < |t|)
The type of (), of the numbers, and of fix are ordinary. The primitive new creates a fresh channel with the full set # of polarities and arbitrary level n. The primitive recv takes a channel of type ?[t] n , blocks until a message is received, and returns the message. The primitive itself contains no free channels in its closure (hence the level ) because the only channel it manipulates is its argument. The latent effect is the level of the channel, as expected. The primitive send takes a channel of type ![t] n , a message of type t, and sends the message on the channel. Note that the partial application send a is a function whose level and latent effect are both the level of a. Note also that in new, recv, and send the level of the message must be greater than the level of the channel: since levels are used to enforce an order on the use of channels, this condition follows from the observation that a message cannot be used until after it has been received, namely after the channel on which it travels has been used. Finally, fork accepts a thunk with arbitrary level ρ and latent effect σ and spawns the thunk into an independent thread (see Table 1). Note that fork is a pure function with no latent effect, regardless of the level and latent effect of the thunk. This phenomenon is called effect masking [START_REF] Amtoft | Type and Effect Systems: Behaviours for Concurrency[END_REF], whereby the effect of evaluating an expression becomes unobservable: in our case, fork discharges effects because the thunk runs in parallel with the code executing the fork.
We now turn to the typing rules. A type environment Γ is a finite map u 1 : t 1 , . . . , u n : t n from names to types. We write / 0 for the empty type environment, dom(Γ ) for the domain of Γ , and Γ (u) for the type associated with u in Γ ; we write Γ 1 , Γ 2 for the union of Γ 1 and Γ 2 when dom(Γ 1 )∩dom(Γ 2 ) = / 0. We also need a more flexible way of combining type environments. In particular, we make sure that every channel is used linearly by distributing different polarities of a channel to different parts of the program. To this aim, following [START_REF] Kobayashi | Linearity and the pi-calculus[END_REF], we define a partial combination operator + between types:
t + t def = t   if un(t)
p[t] n + q[t] n def = (p ∪ q)[t] n   if p ∩ q = ∅        (3.2)
that we extend to type environments, thus:
Γ + Γ' def = Γ , Γ'   if dom(Γ ) ∩ dom(Γ') = ∅
(Γ , u : t) + (Γ', u : s) def = (Γ + Γ'), u : t + s        (3.3)
For example, we have
(x : int, a : ![int] n ) + (a : ?[int] n ) = x : int, a : #[int] n
, so we might have some part of the program that (possibly) uses a variable x of type int along with channel a for sending an integer and another part of the program that uses the same channel a but this time for receiving an integer. The first part of the program would be typed in the environment x : int, a : ![int] n and the second one in the environment a : ?[int] n . Overall, the two parts would be typed in the environment x : int, a : #[int] n indicating that a is used for both sending and receiving an integer.
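A small Python sketch of (3.2)-(3.3), reusing the illustrative representation introduced earlier (again, the encoding and names are ours, not the paper's): it shows how polarities are split across program parts, and it raises an error when the combination is undefined.

def combine_types(t, s):
    if t == s and unlimited(t):
        return t
    if (isinstance(t, Chan) and isinstance(s, Chan)
            and t.payload == s.payload and t.level == s.level
            and not (t.polarity & s.polarity)):
        return Chan(t.polarity | s.polarity, t.payload, t.level)
    raise ValueError('combination undefined')

def combine_envs(g1: dict, g2: dict) -> dict:
    out = dict(g1)
    for u, s in g2.items():
        out[u] = combine_types(out[u], s) if u in out else s
    return out

# (x : int, a : ![int]^n) + (a : ?[int]^n) = x : int, a : #[int]^n
n = 4  # an arbitrary level
g = combine_envs({'x': Base('int'), 'a': Chan(frozenset({'!'}), Base('int'), n)},
                 {'a': Chan(frozenset({'?'}), Base('int'), n)})
assert g['a'].polarity == frozenset({'?', '!'})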
We extend the function | • | to type environments so that |Γ | def = ⊓ u∈dom(Γ ) |Γ (u)|, with the convention that |∅| = ⊤; we write un(Γ ) if |Γ | = ⊤.
Table 2. Core typing rules for expressions and processes.

Typing of expressions
[T-NAME]   Γ , u : t ⊢ u : t & ⊥        (un(Γ ))
[T-CONST]  Γ ⊢ k : t & ⊥        (un(Γ ), t ∈ types(k))
[T-FUN]    Γ , x : t ⊢ e : s & ρ  implies  Γ ⊢ λ x.e : t → |Γ |,ρ s & ⊥
[T-APP]    Γ 1 ⊢ e 1 : t → ρ,σ s & τ 1  and  Γ 2 ⊢ e 2 : t & τ 2  implies  Γ 1 + Γ 2 ⊢ e 1 e 2 : s & σ ⊔ τ 1 ⊔ τ 2        (τ 1 < |Γ 2 |, τ 2 < ρ)

Typing of processes
[T-THREAD] Γ ⊢ e : unit & ρ  implies  Γ ⊢ ⟨e⟩
[T-PAR]    Γ 1 ⊢ P  and  Γ 2 ⊢ Q  implies  Γ 1 + Γ 2 ⊢ P | Q
[T-NEW]    Γ , a : #[t] n ⊢ P  implies  Γ ⊢ (νa)P
We are now ready to discuss the core typing rules, shown in Table 2. Judgments of the form Γ ⊢ e : t & ρ denote that e is well typed in Γ , it has type t and effect ρ; judgments of the form Γ ⊢ P simply denote that P is well typed in Γ .
Axioms [T-NAME] and [T-CONST] are unremarkable: as in all substructural type systems the unused part of the type environment must be unlimited. Names and constants have no effect (⊥); they are evaluated expressions that do not use (but may contain) channels.
In rule [T-FUN], the effect ρ caused by evaluating the body of the function becomes the latent effect in the arrow type of the function and the function itself has no effect. The level of the function is determined by that of the environment Γ in which the function is typed. Intuitively, the names in Γ are stored in the closure of the function; if any of these names is a channel, then we must be sure that the function is eventually used (i.e., applied) to guarantee deadlock freedom. In fact, |Γ | gives a slightly more precise information, since it records the smallest level of all channels that occur in the body of the function. We have seen in Section 1 why this information is useful. A few examples:
- the identity function λ x.x has type int → ⊤,⊥ int in any unlimited environment;
- the function λ _.a has type unit → n,⊥ ![int] n in the environment a : ![int] n ; it contains channel a with level n in its closure (whence the level n in the arrow), but it does not use a for input/output (whence the latent effect ⊥); it is nonetheless well typed because a, which is a linear value, is returned as result;
- the function λ x.send x 3 has type ![int] n → ⊤,n unit; it has no channels in its closure but it performs an output on the channel it receives as argument;
- the function λ x.(recv a + x) has type int → n,n int in the environment a : ?[int] n ; note that neither the domain nor the codomain of the function mention any channel, so the fact that the function has a channel in its closure (and that it performs some I/O) can only be inferred from the annotations on the arrow;
- the function λ x.send x (recv a) has type ![int] n+1 → n,n+1 unit in the environment a : ?[int] n ; it contains channel a with level n in its closure and performs input/output operations on channels with level n + 1 (or smaller) when applied.
Rule [T-APP] deals with applications e 1 e 2 . The first thing to notice is the type environments in the premises for e 1 and e 2 . Normally, these are exactly the same as the type environment used for the whole application. In our setting, however, we want to distribute polarities in such a way that each channel is used for exactly one communication. For this reason, the type environment Γ 1 + Γ 2 in the conclusion is the combination of the type environments in the premises. Regarding effects, τ i is the effect caused by the evaluation of e i . As expected, e 1 must result in a function of type t → ρ,σ s and e 2 in a value of type t. The evaluation of e 1 and e 2 may however involve blocking I/O operations on channels, and the two side conditions make sure that no deadlock can arise. To better understand them, recall that reduction is call-by-value and applications e 1 e 2 are evaluated sequentially from left to right. Now, the condition τ 1 < |Γ 2 | makes sure that any I/O operation performed during the evaluation of e 1 involves only channels whose level is smaller than that of the channels occurring free in e 2 (the free channels of e 2 must necessarily be in Γ 2 ). This is enough to guarantee that the functional part of the application can be fully evaluated without blocking on operations concerning channels that occur later in the program. In principle, this condition should be paired with the symmetric one τ 2 < |Γ 1 | making sure that any I/O operation performed during the evaluation of the argument does not involve channels that occur in the functional part. However, when the argument is being evaluated, we know that the functional part has already been reduced a value (see the definition of reduction contexts in Section 2). Therefore, the only really critical condition to check is that no channels involved in I/O operations during the evaluation of e 2 occur in the value of e 1 . This is expressed by the condition τ 2 < ρ, where ρ is the level of the functional part. Note that, when e 1 is an abstraction, by rule [T-FUN] ρ coincides with |Γ 1 |, but in general ρ may be greater than |Γ 1 |, so the condition τ 2 < ρ gives better accuracy. The effect of the whole application e 1 e 2 is, as expected, the combination of the effects of evaluating e 1 , e 2 , and the latent effect of the function being applied. In our case the "combination" is the greatest level of any channel involved in the application. Below are some examples:
- (λ x.x) a is well typed, because both λ x.x and a are pure expressions whose effect is ⊥, hence the two side conditions of [T-APP] are trivially satisfied;
- (λ x.x) (recv a) is well typed in the environment a : ?[int] n : the effect of recv a is n (the level of a) which is smaller than the level of the function;
- send a (recv a) is ill typed in the environment a : #[int] n because the effect of evaluating recv a, namely n, is the same as the level of send a;
-(recv a) (recv b) is well typed in the environment a : ?[int → int] 0 , b : ?[int] 1 . The effect of the argument is 1, which is not smaller than the level of the environment a : ?[int → int] 0 used for typing the function. However, 1 is smaller than , which is the level of the result of the evaluation of the functional part of the application. This application would be illegal had we used the side condition
τ 2 < |Γ 1 | in [T-APP].
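The two side conditions are simple level comparisons, which the following illustrative Python helper makes explicit (our own naming; TOP and BOT are the ⊤ and ⊥ of the earlier sketches). The three asserts replay the examples just listed.

def app_ok(tau1, level_gamma2, tau2, rho) -> bool:
    # side conditions of [T-APP]: tau1 < |Γ2|  and  tau2 < rho
    return tau1 < level_gamma2 and tau2 < rho

n = 0
# (λx.x) (recv a) with a : ?[int]^n: effects ⊥ and n, the function has level ⊤.
assert app_ok(BOT, n, n, TOP)
# send a (recv a) with a : #[int]^n: the argument's effect n is not below the
# level n of the partial application send a.
assert not app_ok(BOT, n, n, n)
# (recv a) (recv b) with a : ?[int→int]^0 and b : ?[int]^1:
# tau1 = 0 < |Γ2| = 1, and tau2 = 1 < ⊤, the level of the received function.
assert app_ok(0, 1, 1, TOP)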
The typing rules for processes are standard: [T-PAR] splits contexts for typing the processes in parallel, [T-NEW] introduces a new channel in the environment, and [T-THREAD] types threads. The effect of threads is ignored: effects are used to prevent circular dependencies between channels used within the sequential parts of the program (i.e., within expressions); circular dependencies that arise between parallel threads are indirectly detected by the fact that each occurrence of a channel is typed with the same level (see the discussion of (1.1) in Section 1).
Level polymorphism
Looking back at Example 1, we notice that fibo n c may generate two recursive calls with two corresponding fresh channels a and b. Since the send operation on c is blocked by recv operations on a and b (line 5), the level of a and b must be smaller than that of c. Also, since expressions are evaluated left-to-right and recv a + recv b is syntactic sugar for the application (+) (recv a) (recv b), the level of a must be smaller than that of b. Thus, to declare fibo well typed, we must allow different occurrences of fibo to be applied to channels with different levels. Even more critically, this form of level polymorphism of fibo is necessary within the definition of fibo itself, so it is an instance of polymorphic recursion [START_REF] Amtoft | Type and Effect Systems: Behaviours for Concurrency[END_REF].
The core typing rules in Table 2 do not support level polymorphism. Following the previous discussion on fibo, the idea is to realize level polymorphism by shifting levels in types. We define level shifting as a type operator ⇑ n , thus:
⇑ n B def = B
⇑ n p[t] m def = p[⇑ n t] n+m
⇑ n (t → ρ,σ s) def = ⇑ n t → n+ρ,n+σ ⇑ n s        (3.4)
where + is extended from integers to levels so that n + ⊤ = ⊤ and n + ⊥ = ⊥. The effect of ⇑ n t is to shift all the finite level annotations in t by n, leaving ⊤ and ⊥ unchanged. Now, we have to understand in which cases we can use a value of type ⇑ n t where one of type t is expected. More specifically, when a value of type ⇑ n t can be passed to a function expecting an argument of type t. This is possible if the function has level ⊤. We express this form of level polymorphism with an additional typing rule for applications:
[T-APP-POLY]
Γ 1 ⊢ e 1 : t → ⊤,σ s & τ 1  and  Γ 2 ⊢ e 2 : ⇑ n t & τ 2  implies  Γ 1 + Γ 2 ⊢ e 1 e 2 : ⇑ n s & (n + σ) ⊔ τ 1 ⊔ τ 2        (τ 1 < |Γ 2 |, τ 2 < ⊤)
This rule admits an arbitrary mismatch n between the level the argument expected by the function and that of the argument supplied to the function. The type of the application and the latent effect are consequently shifted by the same amount n.
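The shift operator itself is purely mechanical. The fragment below (an illustration in the same ad hoc Python encoding as before, not the paper's formalism) implements (3.4) and checks the uniform shift by 2 used in the fwd example that follows.

def add_level(n, l):
    # n + ⊤ = ⊤ and n + ⊥ = ⊥
    return l if l in (TOP, BOT) else n + l

def shift(n, t):
    if isinstance(t, Base):
        return t
    if isinstance(t, Chan):
        return Chan(t.polarity, shift(n, t.payload), n + t.level)
    return Arrow(shift(n, t.dom), shift(n, t.cod),
                 add_level(n, t.rho), add_level(n, t.sigma))

# Shifting fwd's expected argument type ?[int]^0 by 2 yields ?[int]^2,
# which is how the application fwd a is accepted by [T-APP-POLY].
assert shift(2, Chan(frozenset({'?'}), Base('int'), 0)).level == 2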
Soundness of [T-APP-POLY] can be intuitively explained as follows: a function with level ⊤ has no channels in its closure. Therefore, the only channels possibly manipulated by the function are those contained in the argument to which the function is applied or channels created within the function itself. Then, the fact that the argument has level n + k rather than level k is completely irrelevant. Conversely, if the function has channels in its closure, then the absolute level of the argument might have to satisfy specific ordering constraints with respect to these channels (recall the two side conditions in [T-APP]). Since level polymorphism is a key distinguishing feature of our type system, and one that accounts for much of its expressiveness, we elaborate more on this intuition using an example. Consider the term fwd def = λ x.λ y.send y (recv x) which forwards on y the message received from x. The derivation
[T-APP]  y : ![int] 1 ⊢ send y : int → 1,1 unit & ⊥
[T-APP]  x : ?[int] 0 ⊢ recv x : int & 0
[T-APP]  x : ?[int] 0 , y : ![int] 1 ⊢ send y (recv x) : unit & 1
[T-FUN]  x : ?[int] 0 ⊢ λ y.send y (recv x) : ![int] 1 → 0,1 unit & ⊥
[T-FUN]  ⊢ fwd : ?[int] 0 → ![int] 1 → 0,1 unit & ⊥
does not depend on the absolute values 0 and 1, but only on the level of x being smaller than that of y, as required by the fact that the send operation on y is blocked by the recv operation on x. Now, consider an application fwd a, where a has type ?[int] 2 . The mismatch between the level of x (0) and that of a (2) is not critical, because all the levels in the derivation above can be uniformly shifted up by 2, yielding a derivation for
fwd : ?[int] 2 → ![int] 3 → 2,3 unit & ⊥
This shifting is possible because fwd has no free channels in its body (indeed, it is typed in the empty environment). Therefore, using [T-APP-POLY], we can derive
a : ?[int] 2 ⊢ fwd a : ![int] 3 → 2,3 unit & ⊥
Note that (fwd a) is a function having level 2. This means that (fwd a) is not level polymorphic and can only be applied, through [T-APP], to channels with level 3. If we allowed (fwd a) to be applied to a channel with level 2 using [T-APP-POLY] we could derive
a : #[int] 2 ⊢ fwd a a : unit & 2
which reduces to a deadlock. Example 3. To show that the term in Example 1 is well typed, consider the environment
Γ def = fibo : int → ![int] 0 → ⊤,0 unit, n : int, c : ![int] 0
In the proof derivation for the body of fibo, this environment is eventually enriched with the assignments a : #[int] -2 and b : #[int] -1 . Now we can derive . . .
[T-APP]       Γ ⊢ fibo (n -2) : ![int] 0 → ⊤,0 unit & ⊥
[T-NAME]      a : ![int] -2 ⊢ a : ![int] -2 & ⊥
[T-APP-POLY]  Γ , a : ![int] -2 ⊢ fibo (n -2) a : unit & -2
where the application fibo (n -2) a is well typed despite the fact that fibo (n -2) expects an argument of type ![int] 0 , while a has type ![int] -2 . A similar derivation can be obtained for fibo (n -1) b, and the proof derivation can now be completed.
Recursive types
Looking back at Example 2, we see that in a call pipe x y the channel recv x is used in the same position as x. Therefore, according to [T-APP-POLY], recv x must have the same type as x, up to some shifting of its level. Similarly, channel c is both sent on y and then used in the same position as y, suggesting that c must have the same type as y, again up to some shifting of its level. This means that we need recursive types in order to properly describe x and y. Instead of adding explicit syntax for recursive types, we just consider the possibly infinite trees generated by the productions for t shown earlier. In light of this broader notion of types, the inductive definition of type level (3.1) is still well founded, but type shift (3.4) must be reinterpreted coinductively, because it has to operate on possibly infinite trees. The formalities, nonetheless, are well understood.
It is folklore that, whenever infinite types are regular (that is, when they are made of finitely many distinct subtrees), they admit finite representations either using type variables and the familiar µ notation, or using systems of type equations [START_REF] Courcelle | Fundamental properties of infinite trees[END_REF]. Unfortunately, a careful analysis of Example 2 suggests that -at least in principle -we also need non-regular types. To see why, let a and c be the channels to which (recv x) and (cont y) respectively evaluate on line 2 of the example. Now:
x must have smaller level than a since a is received from x (cf. the types of recv).
y must have smaller level than c since c is sent on y (cf. the types of send).
x must have smaller level than y since x is used in the functional part of an application in which y occurs in the argument (cf. line 2 and [T-APP-POLY]).
Overall, in order to type pipe in Example 2 we should assign x and y the types t n and s n that respectively satisfy the equations
t n = ?[t n+2 ] n s n = ![t n+3 ] n+1 (3.5)
Unfortunately, these equations do not admit regular types as solutions. We recover typeability of pipe with regular types by introducing a new type constructor
t ::= • • • | ⌈t⌉ n
that wraps types with a pending shift: intuitively ⌈t⌉ n and ⇑ n t denote the same type, except that in ⌈t⌉ n the shift ⇑ n on t is pending. For example, ⌈?[int] 0 ⌉ 1 and ⌈?[int] 2 ⌉ -1 are both possible wrappings of ?[int] 1 , while int → 0,⊥ ![int] 0 is the unwrapping of ⌈int → 1,⊥ ![int] 1 ⌉ -1 .
To exclude meaningless infinite types such as ⌈⌈⌈• • •⌉ n ⌉ n ⌉ n we impose a contractiveness condition requiring every infinite branch of a type to contain infinite occurrences of channel or arrow constructors. To see why wraps help finding regular representations for otherwise non-regular types, observe that the equations
t n = ?[ ⌈t n ⌉ 2 ] n        s n = ![ ⌈t n+1 ⌉ 2 ] n+1        (3.6)
denote -up to pending shifts -the same types as the ones in (3.5), with the key difference that (3.6) admit regular solutions and therefore finite representations. For example, t n could be finitely represented as a familiar-looking µα.?[ ⌈α⌉ 2 ] n term. We should remark that ⌈t⌉ n and ⇑ n t are different types, even though the former is morally equivalent to the latter: wrapping is a type constructor, whereas shift is a type operator. Having introduced a new constructor, we must suitably extend the notions of type level (3.1) and type shift (3.4) we have defined earlier. We postulate
| ⌈t⌉ n | def = n + |t|        ⇑ n ⌈t⌉ m def = ⌈⇑ n t⌉ m
in accordance with the fact that ⌈•⌉ n denotes a pending shift by n (note that | • | extended to wrappings is well defined thanks to the contractiveness condition).
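Continuing the earlier illustrative Python encoding (again our own sketch, not the formal system), wrappings can be represented by one more constructor carrying the pending shift, with level and shift extended exactly as postulated above.

@dataclass
class Wrap:          # ⌈t⌉^n : type t with a pending shift by n
    inner: 'Type'
    pending: int

def level_w(t):
    if isinstance(t, Wrap):
        return t.pending + level_w(t.inner)      # |⌈t⌉^n| = n + |t|
    return level(t)

def shift_w(n, t):
    if isinstance(t, Wrap):
        return Wrap(shift_w(n, t.inner), t.pending)  # ⇑n ⌈t⌉^m = ⌈⇑n t⌉^m
    return shift(n, t)

# A wrapped channel ⌈?[int]^1⌉^2 has level 3.
assert level_w(Wrap(Chan(frozenset({'?'}), Base('int'), 1), 2)) == 3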
We also have to define introduction and elimination rules for wrappings. To this aim, we conceive two constants, wrap and unwrap, having the following type schemes:
wrap : ⇑ n t → ⌈t⌉ n        unwrap : ⌈t⌉ n → ⇑ n t
We add wrap v to the value forms. Operationally, we want wrap and unwrap to annihilate each other. This is done by enriching reduction for expressions with the axiom

unwrap (wrap v) --> v

Example 4. We suitably dress the code in Example 2 using wrap and unwrap:

1 let cont = λx.let c = new() in (fork λ_.send x (wrap c)); c in
2 let pipe = fix λpipe.λx.λy.pipe (unwrap (recv x)) (cont y)

We are now able to find a typing derivation for it that uses regular types. In particular, we assign cont the type s n → s n+2 and pipe the type t n → s n → n,⊤ unit, where t n and s n are the types defined in (3.6). Note that cont is a pure function because its effects are masked by fork and that pipe has latent effect ⊤ since it loops performing recv operations on channels with increasing level. Because of the side conditions in [T-APP] and [T-APP-POLY], this means that pipe can only be used in tail position, which is precisely what happens above and in Example 2.
Properties
To formulate subject reduction, we must take into account that linear channels are consumed after communication (last but one reduction in Table 1). This means that when a process P communicates on some channel a, a must be removed from the type environment used for typing the residual of P. To this aim, we define a partial operation Γ - ℓ that removes ℓ from Γ , when ℓ is a channel. Formally:
Γ - τ def = Γ        (Γ , a : #[t] n ) - a def = Γ

Theorem 1 (subject reduction). If Γ ⊢ P and P --ℓ--> Q, then Γ - ℓ ⊢ Q.
Note that Γ - a is undefined if a ∉ dom(Γ ). This means that well-typed programs never attempt to use the same channel twice, namely that channels in well-typed programs are indeed linear channels. This property has important practical consequences, since it allows the efficient implementation (and deallocation) of channels [START_REF] Kobayashi | Linearity and the pi-calculus[END_REF].
Deadlock freedom means that if the program halts, then there must be no pending I/O operations. In our language, the only halted program without pending operations is (structurally equivalent to) ⟨()⟩. We can therefore define deadlock freedom thus:

Definition 1. We say that P is deadlock free if P --τ-->* Q -/-> implies Q ≡ ⟨()⟩.

As usual, --τ-->* is the reflexive, transitive closure of --τ--> and Q -/-> means that Q is unable to reduce further. Now, every well-typed, closed process is free from deadlocks:

Theorem 2 (soundness). If ∅ ⊢ P, then P is deadlock free.
Theorem 2 may look weaker than desirable, considering that every process P (even an ill-typed one) can be "fixed" and become part of a deadlock-free system if composed in parallel with the diverging thread ⟨fix λ x.x⟩. It is not easy to state an interesting property of well-typed partial programs (programs that are well typed in uneven environments) or of partial computations (computations that have not reached a stable, i.e. irreducible, state). One might think that well-typed programs eventually use all of their channels. This property is false in general, for two reasons. First, our type system does not ensure termination of well-typed expressions, so a thread like ⟨send a (fix λ x.x)⟩ never uses channel a, because the evaluation of the message diverges. Second, there are threads that continuously generate (or receive) new channels, so that the set of channels they own is never empty; this happens in Example 2. What we can prove is that, assuming that a well-typed program does not internally diverge, then each channel it owns is eventually used for a communication or is sent to the environment in a message. To formalize this property, we need a labeled transition system describing the interaction of programs with their environment. Labels π, . . . of transitions are defined by π ::= a?e | a!v and the transition relation --π--> extends reduction with the rules

C[send a v] --a!v--> C[()]        (a ∉ bn(C))
C[recv a] --a?e--> C[e]        (a ∉ bn(C), fn(e) ∩ bn(C) = ∅)

where C ranges over process contexts C ::= ⟨E⟩ | (C | P) | (P | C) | (νa)C. Messages of input transitions have the form a?e where e is an arbitrary expression instead of a value. This is just to allow a technically convenient formulation of Definition 2 below. We formalize the assumption concerning the absence of internal divergences as a property that we call interactivity. Interactivity is a property of typed processes, which we write as pairs Γ ⊢ P, since the messages exchanged between a process and the environment in which it executes are not arbitrary in general.
Definition 2 (interactivity). Interactivity is the largest predicate on well-typed processes such that Γ ⊢ P interactive implies Γ ⊢ P and:
1. P has no infinite reduction P --ℓ1--> P1 --ℓ2--> P2 --ℓ3--> • • • , and
2. if P --ℓ--> Q, then Γ - ℓ ⊢ Q is interactive, and
3. if P --a!v--> Q and Γ = Γ', a : ![t] n , then Γ'' ⊢ Q is interactive for some Γ'' ⊆ Γ', and
4. if P --a?x--> Q and Γ = Γ', a : ?[t] n , then Γ'' ⊢ Q{v/x} is interactive for some v and Γ'' ⊇ Γ' such that n < |Γ'' \ Γ'|.

Clause (1) says that an interactive process does not internally diverge: it will eventually halt either because it terminates or because it needs interaction with the environment in which it executes. Clause (2) states that internal reductions preserve interactivity. Clause (3) states that a process with a pending output on a channel a must reduce to an interactive process after the output is performed. Finally, clause (4) states that a process with a pending input on a channel a may reduce to an interactive process after the input of a particular message v is performed. The definition looks demanding, but many conditions are direct consequences of Theorem 1. The really new requirements besides well typedness are convergence of P (1) and the existence of v (4). It is now possible to prove that well-typed, interactive processes eventually use their channels.

Theorem 3 (interactivity). Let Γ ⊢ P be an interactive process such that a ∈ fn(P). Then P --π1--> P1 --π2--> • • • --πn--> Pn for some π1, . . . , πn such that a ∉ fn(Pn).
Concluding remarks
We have demonstrated the portability of a type system for deadlock freedom of πcalculus processes [START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] to a higher-order language using an effect system [START_REF] Amtoft | Type and Effect Systems: Behaviours for Concurrency[END_REF]. We have shown that effect masking and polymorphic recursion are key ingredients of the type system (Examples 1 and 2), and also that latent effects must be paired with one more annotation -the function level. The approach may seem to hinder program modularity, since it requires storing levels in types and levels have global scope. In this respect, level polymorphism (Section 3.2) alleviates this shortcoming of levels by granting them a relative -rather than absolute -meaning at least for non-linear functions.
Other type systems for higher-order languages with session-based communication primitives have been recently investigated [START_REF] Gay | Linear type theory for asynchronous session types[END_REF][START_REF] Wadler | Propositions as sessions[END_REF][START_REF] Bono | Polymorphic Types for Leak Detection in a Session-Oriented Functional Language[END_REF]. In addition to safety, types are used for estimating bounds in the size of message queues [START_REF] Gay | Linear type theory for asynchronous session types[END_REF] and for detecting memory leaks [START_REF] Bono | Polymorphic Types for Leak Detection in a Session-Oriented Functional Language[END_REF]. Since binary sessions can be encoded using linear channels [START_REF] Dardha | Session types revisited[END_REF], our type system can address the same family of programs considered in these works with the advantage that, in our case, well-typed programs are guaranteed to be deadlock free also in presence of session interleaving. For instance, the pipe function in Example 2 interleaves communications on two different channels. The type system described by Wadler [START_REF] Wadler | Propositions as sessions[END_REF] is interesting because it guarantees deadlock freedom without resorting to any type annotation dedicated to this purpose. In his case the syntax of (well-typed) programs prevents the modeling of cyclic network topologies, which is a necessary condition for deadlocks. However, this also means that some useful program patterns cannot be modeled. For instance, the program in Example 2 is ill typed in [START_REF] Wadler | Propositions as sessions[END_REF].
The type system discussed in this paper lacks compelling features. Structured data types (records, sums) have been omitted for lack of space; an extended technical report [START_REF] Padovani | Types for Deadlock-Free Higher-Order Concurrent Programs[END_REF] and previous works [START_REF] Padovani | Type Reconstruction for the Linear π-Calculus with Composite and Equi-Recursive Types[END_REF][START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] show that they can be added without issues. The same goes for non-linear channels [START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF], possibly with the help of dedicated accept and request primitives as in [START_REF] Gay | Linear type theory for asynchronous session types[END_REF]. True polymorphism (with level and type variables) has also been studied in the technical report [START_REF] Padovani | Types for Deadlock-Free Higher-Order Concurrent Programs[END_REF]. Its impact on the overall type system is significant, especially because level and type constraints (those appearing as side conditions in the type schemes of constants, Section 3.1) must be promoted from the metatheory to the type system. The realization of level polymorphism as type shifting that we have adopted in this paper is an interesting compromise between impact and flexibility. Our type system can also be relaxed with subtyping: arrow types are contravariant in the level and covariant in the latent effect, whereas channel types are invariant in the level. Invariance of channel levels can be relaxed refining levels to pairs of numbers as done in [START_REF] Kobayashi | A type system for lock-free processes[END_REF][START_REF] Kobayashi | A new type system for deadlock-free processes[END_REF]. This can also improve the accuracy of the type system in some cases, as discussed in [START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] and [START_REF] Carbone | Progress as compositional lock-freedom[END_REF]. It would be interesting to investigate which of these features are actually necessary for typing concrete functional programs using threads and communication/synchronization primitives.
Type reconstruction algorithms for similar type systems have been defined [START_REF] Padovani | Type Reconstruction for the Linear π-Calculus with Composite and Equi-Recursive Types[END_REF][START_REF] Padovani | Type Reconstruction Algorithms for Deadlock-Free and Lock-Free Linear π-Calculi[END_REF]. We are confident to say that they scale to type systems with arrow types and effects.
Acknowledgments. The authors are grateful to the reviewers for their detailed comments and useful suggestions. The first author has been supported by Ateneo/CSP project SALT, ICT COST Action IC1201 BETTY, and MIUR project CINA.
| 46,718 | [
"966685"
] | [
"47709",
"47709"
] |
01767335 | en | [
"info"
] | 2024/03/05 22:32:15 | 2015 | https://inria.hal.science/hal-01767335/file/978-3-319-19195-9_3_Chapter.pdf | Ritwika Ghosh
email: <[email protected]
Sayan Mitra
email: mitras>@illinois.edu
A Strategy for Automatic Verification of Stabilization of Distributed Algorithms
Automatic verification of convergence and stabilization properties of distributed algorithms has received less attention than verification of invariance properties. We present a semi-automatic strategy for verification of stabilization properties of arbitrarily large networks under structural and fairness constraints. We introduce a sufficient condition that guarantees that every fair execution of any (arbitrarily large) instance of the system stabilizes to the target set of states. In addition to specifying the protocol executed by each agent in the network and the stabilizing set, the user also has to provide a measure function or a ranking function. With this, we show that for a restricted but useful class of distributed algorithms, the sufficient condition can be automatically checked for arbitrarily large networks, by exploiting the small model properties of these conditions. We illustrate the method by automatically verifying several well-known distributed algorithms including linkreversal, shortest path computation, distributed coloring, leader election and spanning-tree construction.
Introduction
A system is said to stabilize to a set of states X * if all its executions reach some state in X * [START_REF] Dolev | Self-stabilization[END_REF]. This property can capture common progress requirements like absence of deadlocks and live-locks, counting to infinity, and achievement of selfstabilization in distributed systems. Stabilization is a liveness property, and like other liveness properties, it is generally impossible to verify automatically. In this paper, we present sufficient conditions which can be used to automatically prove stabilization of distributed systems with arbitrarily many participating processes.
A sufficient condition we propose is similar in spirit to Tsitsiklis' conditions given in [START_REF] Johnn | On the stability of asynchronous iterative processes[END_REF] for convergence of iterative asynchronous processes. We require the user to provide a measure function, parameterized by the number of processes, such that its sub-level sets are invariant with respect to the transitions and there is a progress making action for each state. 1 Our point of departure is a non-interference condition that turned out to be essential for handling models of distributed systems. Furthermore, in order to handle non-deterministic communication patterns, our condition allows us to encode fairness conditions and different underlying communication graphs.
Next, we show that these conditions can be transformed to a forall-exists form with a small model property. That is, there exists a cut-off number N 0 such that if the condition(s) is(are) valid in all models of sizes up to N 0 , then it is valid for all models. We use the small model results from [START_REF] Johnson | A small model theorem for rectangular hybrid automata networks[END_REF] to determine the cutoff parameter and apply this approach to verify several well-known distributed algorithms.
We have a Python implementation based on the sufficient conditions for stabilization we develop in Section 3. We present precondition-effect style transition systems of algorithms in Section 4 and they serve as pseudo-code for our implementation. The SMT solver is provided with the conditions for invariance, progress and non-interference as assertions. We encode the distributed system models in Python and use the Z3 theorem prover through its Python bindings [START_REF] Moura | Z3: An efficient smt solver[END_REF] to check the conditions for stabilization for different model sizes.
We have used this method to analyze a number of well-known distributed algorithms, including a simple distributed coloring protocol, a self-stabilizing algorithm for constructing a spanning tree of the underlying network graph, a link-reversal routing algorithm, and a binary gossip protocol. Our experiments suggest that this method is effective for constructing a formal proof of stabilization of a variety of algorithms, provided the measure function is chosen carefully. Among other things, the measure function should be locally computable: changes from the measure of the previous state to that of the current state only depend on the vertices involved in the transition. It is difficult to determine whether such a measure function exists for a given problem. For instance, consider Dijkstra's self-stabilizing token ring protocol [START_REF] Edsger | Self-stabilization in spite of distributed control[END_REF]. The proof of correctness relies on the fact that the leading node cannot push for a value greater than its previous unique state until every other node has the same value. We were unable to capture this in a locally computable measure function because if translated directly, it involves looking at every other node in the system.
Related Work
The motivation for our approach is from the paper by John Tsitsiklis on convergence of asynchronous iterative processes [START_REF] Johnn | On the stability of asynchronous iterative processes[END_REF], which contains conditions for convergence similar to the sufficient conditions we state for stabilization. Our use of the measure function to capture stabilization is similar to the use of Lyapunov functions to prove stability as explored in [START_REF] Oliver | Exploitation of lyapunov theory for verifying self-stabilizing algorithms[END_REF], [START_REF] Oehlerking | Towards automatic convergence verification of self-stabilizing algorithms[END_REF] and [START_REF] Oliver | A new verification technique for self-stabilizing distributed algorithms based on variable structure systems and lyapunov theory[END_REF]. In [START_REF] Dhama | A tranformational approach for designing scheduler-oblivious self-stabilizing algorithms[END_REF], Dhama and Theel present a progress monitor based method of designing self-stabilizing algorithms with a weakly fair scheduler, given a self-stabilizing algorithm with an arbitrary, possibly very restrictive scheduler. They also use the existence of a ranking function to prove convergence under the original scheduler. Several authors [START_REF] Ghosh | Distributed systems: an algorithmic approach[END_REF] employ functions to prove termination of distributed algorithms, but while they may provide an idea of what the measure function can be, in general they do not translate exactly to the measure functions that our verification strategy can employ. The notion of fairness we have is also essential in dictating what the measure function should be, while not prohibiting too many behaviors. In [START_REF] Oehlerking | Towards automatic convergence verification of self-stabilizing algorithms[END_REF], the assumption of serial execution semantics is compatible with our notions of fair executions.
The idea central to our proof method is the small model property of the sufficient conditions for stabilization. The small model nature of certain invariance properties of distributed algorithms (eg. distributed landing protocols for small aircrafts as in [START_REF] Umeno | Safety verification of an aircraft landing protocol: A refinement approach[END_REF]) has been used to verify them in [START_REF] Johnson | Invariant synthesis for verification of parameterized cyber-physical systems with applications to aerospace systems[END_REF]. In [START_REF] Emerson | Reducing model checking of the many to the few[END_REF], Emerson and Kahlon utilize a small model argument to perform parameterized model checking of ring based message passing systems.
Preliminaries
We will represent distributed algorithms as transition systems. Stabilization is a liveness property and is closely related to convergence as defined in the works of Tsitsiklis [START_REF] Johnn | On the stability of asynchronous iterative processes[END_REF]; it is identical to the concept of region stability as presented in [START_REF] Sridhar | Abstraction refinement for stability[END_REF]. We will use measure functions in our definition of stabilization. A measure function on a domain provides a mapping from that domain to a well-ordered set. A well-ordered set W is one on which there is a total ordering <, such that there is a minimum element with respect to < on every non-empty subset of W . Given a measure function C : A → B, there is a partition of A into sub level-sets. All elements of A which map to the same element b ∈ B under C are in the same sub level-set L b .
We are interested in verifying stabilization of distributed algorithms independent of the number of participating processes or nodes. Hence, the transition systems are parameterized by N, the number of nodes. Given a non-negative integer N, we use [N] to denote the set of indices {1, 2, . . . , N}. Definition 1. For a natural number N and a set Q, a transition system A(N) with N nodes is defined as a tuple (X, A, D) where a) X is the state space of the system. If the state space of each node is Q, then X = Q N . b) A is a set of actions. c) D : X × A → X is a transition function that maps a system-state action pair to a system-state.
For any x ∈ X , the i th component of x is the state of the i th node and we refer to it as x[i]. Given a transition system A(N ) = (X , A, D) we refer to the state obtained by the application of the action a on a state x ∈ X i.e, D(x, a), by a(x).
An execution of A(N ) records a particular run of the distributed system with N nodes. Formally, an execution α of A(N ) is a (possibly infinite) alternating sequence of states and actions x 0 , a 1 , x 1 , . . ., where each x i ∈ X and each a i ∈ A such that D(x i , a i+1 ) = x i+1 . Given that the choice of actions is nondeterministic in the execution, it is reasonable to expect that not all executions may stabilize. For instance, an execution in which not all nodes participate, may not stabilize. Definition 2. A fairness condition F for A(N ) is a finite collection of subsets of actions {A i } i∈I , where I is a finite index set. An action-sequence σ = a 1 , a 2 , . . . is F-Fair if every A i in F is represented in σ infinitely often, that is,
∀ A ∈ F, ∀i ∈ N, ∃k > i, a k ∈ A .
For instance, if the fairness condition is the collection of all singleton subsets of A, then each action occurs infinitely often in an execution. This notion of fairness is similar to action based fairness constraints in temporal logic model checking [START_REF] Huth | Logic in Computer Science: Modelling and reasoning about systems[END_REF]. The network graph itself enforces whether an action is enabled: every pair of adjacent nodes determines a continuously enabled action. An execution is strongly fair, if given a set of actions A such that all actions in A are infinitely often enabled; some action in A occurs infinitely often in the it. An F-fair execution is an infinite execution such that the corresponding sequence of actions is F-fair. Definition 3. Given a system A(N ), a fairness condition F, and a set of states
X * ⊆ X , A(N ) is said to F-stabilize to X * iff for any F-fair execution α = x 0 , a 1 , x 1 , a 2 , . . ., there exists k ∈ N such that x k ∈ X * . X * is called a stabilizing set for A and F.
It is different from the definition of self-stabilization found in the literature [START_REF] Dolev | Self-stabilization[END_REF], in that the stabilizing set X * is not required to be an invariant of A(N ). We view proving the invariance of X * as a separate problem that can be approached using one of the available techniques for proving invariance of parametrized systems in [START_REF] Johnson | A small model theorem for rectangular hybrid automata networks[END_REF], [START_REF] Johnson | Invariant synthesis for verification of parameterized cyber-physical systems with applications to aerospace systems[END_REF].
Example 1. (Binary Gossip) We look at binary gossip in a ring network composed of N nodes. The nodes are numbered clockwise from 1, and nodes 1 and N are also neighbors. Each node has one of two states : {0, 1}. A pair of neighboring nodes communicates to exchange their values, and the new state is set to the binary Or (∨) of the original values. Clearly, if all the interactions happen infinitely often, and the initial state has at least one node state 1, this transition system stabilizes to the state x = 1 N . The set of actions is specified by the set of edges of the ring. We first represent this protocol and its transitions using a standard precondition-effect style notation similar to one used in [START_REF] Mitra | A verification framework for hybrid systems[END_REF].
Automaton Gossip[N : N] type indices : [N ] type values : {0, 1} variables x[ indices → values ] transitions step (i: indices , j : indices ) pre True eff x[i] = x[j] = x[i] ∨ x[j] measure func C : x → Sum(x)
The above representation translates to the transition system A(N ) = (X , A, D) where 1. The state space of each node is
Q = {0, 1}, i.e X = {0, 1} N . 2. The set of actions is A = {step(i, i + 1) | 1 ≤ i < N } ∪ {(N, 1)}. 3. The transition function is D(x, step(i, j)) = x where x [i] = x [j] = x[i] ∨ x[j].
We define the stabilizing set to be X * = {1 N }, and the fairness condition is
F = {{(i, i+1} | 1 < i < N }∪{1, N },
which ensures that all possible interactions take place infinitely often. In Section 3 we will discuss how this type of stabilization can be proven automatically with a user-defined measure function.
3 Verifying Stabilization
A Sufficient Condition for Stabilization
We state a sufficient condition for stabilization in terms of the existence of a measure function. The measure functions are similar to Lyapunov stability conditions in control theory [START_REF] Hassan | Nonlinear systems[END_REF] and well-founded relations used in proving termination of programs and rewriting systems [START_REF] Dershowitz | Termination of rewriting[END_REF].
Theorem 1. Suppose A(N ) = X , A, D is a transition system parameterized by N , with a fairness condition F, and let X * be a subset of X . Suppose further that there exists a measure function C : X → W , with minimum element ⊥ such that the following conditions hold for all states x ∈ X:
- (invariance) ∀ a ∈ A, C(a(x)) ≤ C(x),
- (progress) ∃ A x ∈ F, ∀a ∈ A x , C(x) ≠ ⊥ ⇒ C(a(x)) < C(x),
- (noninterference) ∀a, b ∈ A, C(a(x)) < C(x) ⇒ C(a(b(x))) < C(x), and
-(minimality) C(x) = ⊥ ⇒ x ∈ X * . Then, A[N ] F-stabilizes to X * .
Proof. Consider an F-fair execution α = x 0 a 1 x 1 . . . of A(N ) and let x i be an arbitrary state in that execution. If C(x i ) = ⊥, then by minimality, we have x i ∈ X * . Otherwise, by the progress condition we know that there exists a set of actions
A xi ∈ F and k > i, such that a k ∈ A xi , and C(a k (x i )) < C(x i ).
We perform induction on the length of the sub-sequence
x i a i+1 x i+1 . . . a k x k and prove that C(x k ) < C(x i ). For any sequence β of intervening actions of length n, C(a k (x i )) < C(x i ) ⇒ C(a k (β(x i ))) < C(x i ).
The base case of the induction is n = 0, which is trivially true. By induction hypothesis we have: for any j < n, with length of β equal to j,
C(a k (β(x i )) < C(x i ).
We have to show that for any action b ∈ A,
C(a k (β(b(x i ))) < C(x i ).
There are two cases to consider. If C(b(x i )) < C(x i ) then the result follows from the invariance property. Otherwise, let x' = b(x i ). From the invariance of b we have C(x') = C(x i ). From the noninterference condition we have
C(a(b(x i )) < C(x i ),
which implies that C(a(x')) < C(x'). By applying the induction hypothesis to x' we have the required inequality C(a k (β(b(x i )))) < C(x i ). So far we have proved that either a state x i in an execution is already in the stabilizing set, or there is a state
x k , k > i such that C(x k ) < C(x i ).
Since < is a well-ordering on C(X ), there cannot be an infinite descending chain. Thus
∃j (j > i ∧ C(x j ) = ⊥).
By minimality , x j ∈ X * . By invariance again, we have F-stabilization to X *
We make some remarks on the conditions of Theorem 1. It requires the measure function C and the transition system A(N ) to satisfy four conditions. The invariance condition requires the sub-level sets of C to be invariant with respect to all the transitions of A(N ). The progress condition requires that for every state x for which the measure function is not already ⊥, there exists a fair set of actions A x that takes x to a lower value of C.
The minimality condition asserts that C(x) drops to ⊥ only if the state is in the stabilizing set X * . This is a part of the specification of the stabilizing set.
The noninterference condition requires that if a results in a decrease in the value of the measure function at state x, then application of a to another state x that is reachable from x also decreases the measure value below that of x. Note that it doesn't necessarily mean that a decreases the measure value at x , only that either x has measure value less than x at the time of application of a or it drops after the application. In contrast, the progress condition of Theorem 1 requires that for every sub-level set of C there is a fair action that takes all states in the sub-level set to a smaller sub-level set.
To see the motivation for the noninterference condition, consider a sub-level set with two states x 1 and x 2 such that b(x 1 ) = x 2 , a(x 2 ) = x 1 and there is only one action a such that C(a(x 1 )) < C(x 1 ). But as long as a does not occur at x 1 , an infinite (fair) execution x 1 bx 2 ax 1 bx 2 . . . may never enter a smaller sub-level set.
In our examples, the actions change the state of a node or at most a small set of nodes while the measure functions succinctly captures global progress conditions such as the number of nodes that have different values. Thus, it is often impossible to find actions that reduce the measure function for all possible states in a level-set. In Section 4, we will show how a candidate measure function can be checked for arbitrarily large instances of a distributed algorithm, and hence, lead to a method for automatic verification of stabilization.
Automating Stabilization Proofs
For finite instances of a distributed algorithm, we can use formal verification tools to check the sufficient conditions in Theorem 1 to prove stabilization. For transition systems with invariance, progress and noninterference conditions that can be encoded appropriately in an SMT solver, these checks can be performed automatically. Our goal, however, is to prove stabilization of algorithms with an arbitrary or unknown number of participating nodes. We would like to define a parameterized family of measure functions and show that ∀N ∈ N, A(N ) satisfies the conditions of Theorem 1. This is a parameterized verification problem and most of the prior work on this problem has focused on verifying invariant properties (see Section 1 for related works). Our approach will be based on exploiting the small model nature of the logical formulas representing these conditions.
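For a fixed finite instance, the check amounts to plain enumeration. The following Python sketch (our own naming; it is meant only to illustrate the shape of the check, not our actual implementation) tests the four conditions of Theorem 1 against an explicitly given transition system, measure function, and fairness condition, using the natural order on the measure values.

from itertools import product

def check_conditions(states, actions, apply_action, C, fairness, bottom, Xstar):
    for x in states:
        # invariance: no action may increase the measure
        if any(C(apply_action(a, x)) > C(x) for a in actions):
            return False
        # minimality: measure value ⊥ only inside the stabilizing set
        if C(x) == bottom and x not in Xstar:
            return False
        # progress: some fair action set decreases the measure
        if C(x) != bottom and not any(
                all(C(apply_action(a, x)) < C(x) for a in A) for A in fairness):
            return False
        # noninterference
        for a, b in product(actions, actions):
            if C(apply_action(a, x)) < C(x) and \
               not C(apply_action(a, apply_action(b, x))) < C(x):
                return False
    return True

For instance, instantiating it with the ring gossip system, the zero-counting variant of its measure, and the state space restricted to configurations containing at least one 1, the check succeeds for small N.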
Suppose we want to check the validity of a logical formula of the form ∀ N ∈ N, φ(N ). Of course, this formula is valid iff the negation ∃ N ∈ N, ¬φ(N ) has no satisfying solution. In our context, checking if ¬φ(N ) has a satisfying solution over all integers is the (large) search problem of finding a counter-example. That is, a particular instance of the distributed algorithm and specific values of the measure function for which the conditions in Theorem 1 do not hold. The formula ¬φ(N ) is said to have a small model property if there exists a cut-off value N 0 such that if there is no counter-example found in any of the instances A(1), A(2), . . . , A(N 0 ), then there are no counter-examples at all. Thus, if the conditions of Theorem 1 can be encoded in such a way that they have these small model properties then by checking them over finite instances, we can infer their validity for arbitrarily large systems.
In [START_REF] Johnson | A small model theorem for rectangular hybrid automata networks[END_REF], a class of ∀∃ formulas with small model properties were used to check invariants of timed distributed systems on arbitrary networks. In this paper, we will use the same class of formulas to encode the sufficient conditions for checking stabilization. We use the following small model theorem as presented in [START_REF] Johnson | A small model theorem for rectangular hybrid automata networks[END_REF]:
Theorem 2. Let Γ(N) be an assertion of the form ∀i 1 , . . . , i k ∈ [N ] ∃j 1 , . . . , j m ∈ [N ], φ(i 1 , . . . , i k , j 1 , . . . , j m )
where φ is a quantifier-free formula involving the index variables, global and local variables in the system. Then, ∀N ∈ N : Γ (N ) is valid iff for all n ≤ N 0 = (e + 1)(k + 2), Γ (n) is satisfied by all models of size n, where e is the number of index array variables in φ and k is the largest subscript of the universally quantified index variables in Γ (N ).
Computing the Small Model Parameter
Computing the small model parameter N 0 for verifying a stability property of a transition system first requires expressing all the conditions of Theorem 1 using formulas which have the structure specified by Theorem 2. There are a few important considerations while doing so.
Translating the sufficient conditions In their original form, none of the conditions of Theorem 1 have the structure of ∀∃-formulas as required by Theorem 2. For instance, a leading ∀x ∈ X quantification is not allowed by Theorem 2, so we transform the conditions into formulas with implicit quantification. Take for instance the invariance condition: ∀x ∈ X , ∀a ∈ A, (C(a(x)) ≤ C(x)). Checking the validity of the invariance condition is equivalent to checking the satisfiability of ∀a ∈ A, (a(x) = x ⇒ C(x ) ≤ C(x)), where x and x are free variables, which are checked over all valuations. Here we need to check that x and x are actually states and they satisfy the transition function. For instance in the binary gossip example, we get
Invariance: ∀x ∈ X, ∀a ∈ A, C(a(x)) ≤ C(x) is verified as
∀a ∈ A, x' = a(x) ⇒ C(x') ≤ C(x)
≡ ∀i, j ∈ [N], x' = step(i, j)(x) ⇒ Sum(x') ≤ Sum(x).
Progress: ∀x ∈ X, ∃a ∈ A, C(x) ≠ ⊥ ⇒ C(a(x)) < C(x) is verified as
C(x) ≠ 0 ⇒ ∃i, j ∈ [N], x' = step(i, j)(x) ∧ Sum(x') < Sum(x).
Noninterference: ∀x ∈ X, ∀a, b ∈ A, (C(a(x)) < C(x) ⇒ C(a(b(x))) < C(x)) is verified as
∀i, j, k, l ∈ [N], x' = step(i, j)(x) ∧ x'' = step(k, l)(x) ∧ x''' = step(i, j)(x'')
⇒ (C(x') < C(x) ⇒ C(x''') < C(x)).
Interaction graphs In distributed algorithms, the underlying network topology dictates which pairs of nodes can interact, and therefore the set of actions. We need to be able to specify the available set of actions in a way that is in the format demanded by the small-model theorem. In this paper we focus on specific classes of graphs like complete graphs, star graphs, rings, k-regular graphs, and k-partite complete graphs, as we know how to capture these constraints using predicates in the requisite form. For instance, we use edge predicates E(i, j) : i and j are node indices, and the predicate is true if there is an undirected edge between them in the interaction graph. For a complete graph, E(i, j) = true. In the Binary Gossip example, the interaction graph is a ring, and
E(i, j) = (i < N ∧ j = i + 1) ∨ (i > 1 ∧ j = i -1) ∨ i = 1 ∧ j = N ).
If the graph is a d-regular graph, we use d arrays, reg 1 , . . . , reg d , where ∃i, reg i [k] = l if there is an edge between k and l, and i = j
≡ reg i [k] = reg j [k]
. This only expresses that the degree of each vertex is d, but there is no information about the connectivity of the graph. For that, we can have a separate index-valued array which satisfies certain constraints if the graph is connected. These constraints need to be expressed in a format satisfying the small model property as well. Other graph predicates can be introduced based on the model requirements, for instance, P arent(i, j), Child(i, j), Direction(i, j). In our case studies we verify stabilization under the assumption that all pairs of nodes in E interact infinitely often. For the progress condition, the formula simplifies to ∃a ∈ A, C(x) = ⊥ ⇒ C(a(x)) < C(x)). More general fairness constraints can be encoded in the same way as we encode graph constraints.
Case studies
In this section, we will present the details of applying our strategy to various distributed algorithms. We begin by defining some predicates that are used in our case studies. Recall that we want to check the conditions of Theorem 1 using the transformation outlined in Section 3.3, involving x, x', etc., representing states of a distributed system that are related by the transitions. These conditions are encoded using the following predicates, which we illustrate using the binary gossip example given in Section 2:
-isState(x) returns true iff the array variable x represents a state of the system. In the binary gossip example, isState(x) = ∀i ∈ [N], x[i] = 0 ∨ x[i] = 1.
-isAction(a) returns true iff a is a valid action for the system. Again, for the binary gossip example, isAction(step(i, j)) = True for all i, j ∈ [N] in the case of a complete communication graph.
-isTransition(x, step(i, j), x') returns true iff the state x goes to x' when the transition function for action step(i, j) is applied to it. In the case of the binary gossip example, isTransition(x, step(i, j), x') is
(x'[j] = x'[i] = x[i] ∨ x[j]) ∧ (∀p, p ∉ {i, j} ⇒ x[p] = x'[p]).
-Combining the above predicates, we define P(x, x', i, j) as
isState(x) ∧ isState(x') ∧ isTransition(x, step(i, j), x') ∧ isAction(step(i, j)).
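To make the encoding concrete, the following sketch phrases these predicates in Python with the Z3 bindings for a fixed instance size, representing a state as a list of integer variables; the function names are ours and only illustrative, and the index variables i, j are handled as concrete values, which is what the small-model check does anyway.

# Illustrative z3py rendering of the binary gossip predicates (names are ours).
from z3 import And, Or, If, IntVal

def is_state(x):
    # isState: every cell of the array holds 0 or 1
    return And([Or(v == 0, v == 1) for v in x])

def is_transition(x, xp, i, j):
    # isTransition for step(i, j): x'[i] = x'[j] = x[i] "or" x[j]; all other cells unchanged
    combined = If(Or(x[i] == 1, x[j] == 1), IntVal(1), IntVal(0))
    frame = And([xp[p] == x[p] for p in range(len(x)) if p not in (i, j)])
    return And(xp[i] == combined, xp[j] == combined, frame)

def P(x, xp, i, j):
    # complete interaction graph, so isAction(step(i, j)) is trivially true here
    return And(is_state(x), is_state(xp), is_transition(x, xp, i, j))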
Using these constructions, we rewrite the conditions of Theorem 1 as follows:
Invariance : ∀i, j, P(x, x', i, j) ⇒ C(x') ≤ C(x). (1)
Progress : C(x) ≠ ⊥ ⇒ ∃i, j, P(x, x', i, j) ∧ C(x') < C(x). (2)
Noninterference : ∀p, q, s, t, P(x, x', p, q) ∧ P(x, x'', s, t) ∧ P(x'', x''', p, q)
⇒ (C(x') < C(x) ⇒ C(x''') < C(x)). (3)
Minimality : C(x) = ⊥ ⇒ x ∈ X*. (4)
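Under Theorem 2 each condition only has to be checked on instances of size at most N0, and for a fixed size the universally quantified indices can simply be enumerated. The sketch below shows how condition (1) could be discharged with the Z3 Python bindings: the condition's negation is asserted and must come back unsatisfiable. The helpers trans and measure are parameters (for binary gossip they could be the P and Sum encodings sketched above); the code and its names are ours, not the tool used in the paper. The other conditions are encoded analogously.

# Illustrative sketch: checking the invariance condition (1) for one instance size.
from z3 import Int, Not, Solver, unsat

def invariance_holds(n, trans, measure):
    x  = [Int("x_%d" % p)  for p in range(n)]    # state before the step
    xp = [Int("xp_%d" % p) for p in range(n)]    # state after the step
    for i in range(n):                           # enumerate the quantified indices
        for j in range(n):
            if i == j:
                continue
            s = Solver()
            # validity of P(x, x', i, j) => C(x') <= C(x): its negation must be unsat
            s.add(trans(x, xp, i, j), Not(measure(xp) <= measure(x)))
            if s.check() != unsat:
                return False
    return True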
Graph Coloring
This algorithm colors a given graph in d + 1 colors, where d is the maximum degree of a vertex in the graph [START_REF] Ghosh | Distributed systems: an algorithmic approach[END_REF]. Two nodes are said to have a conflict if they have the same color. A transition is made by choosing a single vertex, and if it has a conflict with any of its neighbors, then it sets its own state to be the least available value which is not the state of any of its neighbours. We want to verify that the system stabilizes to a state with no conflicts. The measure function is chosen as the set of pairs with conflicts.
Automaton Coloring[N : N]
  type indices : [N]
  type values : {1, . . . , N}
  variables x[ indices → values ]
  transitions
    internal step(i : indices)
      pre ∃j ∈ [N] (E(j, i) ∧ x[j] = x[i])
      eff x[i] = min(values \ {c | j ∈ [N] ∧ E(i, j) ∧ x[j] = c})
  measure func C : x → {(i, j) | E(i, j) ∧ x[i] = x[j]}
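Before encoding the conditions, it can help to see the transition and the measure operationally; the small Python simulation below (ours, not part of the verification) applies one step of the automaton and reports the conflict set.

# Illustrative simulation of one Coloring step and of the conflict-set measure.
def conflicts(x, edges):
    # measure C(x): the set of edges whose endpoints currently share a colour
    return {(i, j) for (i, j) in edges if x[i] == x[j]}

def step(x, i, edges, colours):
    # node i recolours itself with the least colour not used by any neighbour
    neigh = {b for (a, b) in edges if a == i} | {a for (a, b) in edges if b == i}
    if any(x[j] == x[i] for j in neigh):              # precondition: i has a conflict
        x = dict(x)
        x[i] = min(c for c in colours if c not in {x[j] for j in neigh})
    return x

edges = {(1, 2), (2, 3), (1, 3)}                      # a triangle
x0 = {1: 1, 2: 1, 3: 2}
x1 = step(x0, 1, edges, {1, 2, 3})
print(conflicts(x0, edges), "->", conflicts(x1, edges))   # {(1, 2)} -> set()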
Here, the ordering on the image of the measure function is set inclusion.
Invariance : ∀i ∈ [N], P(x, x', i) ⇒ C(x') ⊆ C(x). (From (1))
≡ ∀i, j, k ∈ [N], P(x, x', i) ⇒ ((j, k) ∈ C(x') ⇒ (j, k) ∈ C(x)).
≡ ∀i, j, k ∈ [N], P(x, x', i) ⇒ (E(j, k) ∧ x'[j] = x'[k] ⇒ x[j] = x[k]).
(E is the set of edges in the underlying graph)
Progress : ∃m ∈ [N], C(x) ≠ ∅ ⇒ C(step(m)(x)) < C(x).
≡ ∀i, j ∈ [N], ∃m, n ∈ [N], ¬(E(i, j) ∧ x[i] = x[j]) ∨ (P(x, x', m) ∧ E(m, n) ∧ x[m] = x[n] ∧ x'[m] ≠ x'[n]).
Noninterference : ∀q, r, s, t ∈ [N], (P(x, x', q) ∧ P(x, x'', s) ∧ P(x'', x''', q))
⇒ (E(q, r) ∧ x[q] = x[r] ∧ x'[q] ≠ x'[r] ⇒ E(s, t) ∧ (x''[s] = x''[t] ⇒ x'''[s] ≠ x'''[t]) ∧ x'''[r] ≠ x'''[q]).
(from (3) and expansion of the ordering)
Minimality : C(x) = ∅ ⇒ x ∈ X * .
From the above conditions, using Theorem 2, N0 is calculated to be 24.
Leader Election
This algorithm is a modified version of the Chang-Roberts leader election algorithm [START_REF] Ghosh | Distributed systems: an algorithmic approach[END_REF]. We apply Theorem 1 directly by defining a straightforward measure function. The state of each node in the network consists of a) its own uid, b) the index and uid of its proposed candidate, and c) the status of the election according to the node (0 : the node itself is elected, 1 : the node is not the leader, 2 : the node is still waiting for the election to finish). A node i communicates its state to its clockwise neighbor j (i + 1 if i < N , 0 otherwise) and if the UID of i's proposed candidate is greater than j, then j is out of the running. The proposed candidate for each node is itself to begin with. When a node gets back its own index and uid, it sets its election status to 0. This status, and the correct leader identity propagates through the network, and we want to verify that the system stabilizes to a state where a leader is elected. The measure function is the number of nodes with state 0.
Automaton LeaderElection[N : N]
  type indices : [N]
  variables uid[ indices → [N] ] candidate[ indices → [N] ] leader[ indices → {0, 1, 2} ]
  transitions
    internal step(i : indices, j : indices)
      pre leader[i] = 1 ∧ uid[candidate[i]] > uid[candidate[j]]
        eff leader[j] = 1 ∧ candidate[j] = candidate[i]
      pre leader[j] = 2 ∧ candidate[i] = j
        eff leader[j] = 0 ∧ candidate[j] = j
      pre leader[i] = 0
        eff leader[j] = 1 ∧ candidate[j] = i
  measure func C : x → Sum(x.leader[i])
The function Sum() represents the sum of all elements in the array, and it can be updated when a transition happens by just looking at the interacting nodes. We encode the sufficient conditions for stabilization of this algorithm using the strategy outlined in Section 3.2.
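A minimal sketch of that incremental update (ours, purely illustrative): since only the two interacting nodes can change, the sum is patched locally rather than recomputed.

# Illustrative incremental maintenance of Sum across a step on nodes i and j.
def updated_sum(current_sum, leader, leader_after, i, j):
    return current_sum - leader[i] - leader[j] + leader_after[i] + leader_after[j]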
Invariance : ∀i, j ∈ [N], P(x, x', i, j) ⇒ (Sum(x'.leader) ≤ Sum(x.leader)).
≡ ∀i, j ∈ [N], P(x, x', i, j) ⇒ (Sum(x.leader) - x.leader[i] - x.leader[j] + x'.leader[i] + x'.leader[j] ≤ Sum(x.leader)). (difference only due to interacting nodes)
≡ ∀i, j ∈ [N], P(x, x', i, j) ⇒ (x'.leader[i] + x'.leader[j] ≤ x.leader[i] + x.leader[j]).
Progress : ∃m, n ∈ [N], Sum(x.leader) ≠ N - 1 ⇒ Sum(step(m, n)(x).leader) < Sum(x.leader).
≡ ∀p ∈ [N], x.leader[p] = 2 ⇒ ∃m, n ∈ [N], (P(x, x', m, n) ∧ E(m, n) ∧ x'.leader[m] + x'.leader[n] < x.leader[m] + x.leader[n]). (one element still waiting for the election to end)
Noninterference : ∀q, r, s, t ∈ [N], P(x, x', q, r) ∧ P(x, x'', s, t) ∧ P(x'', x''', q, r)
⇒ (x'[q] + x'[r] < x[q] + x[r] ⇒ (x'''[q] + x'''[r] + x'''[s] + x'''[t] < x[q] + x[r] + x[s] + x[t])).
(expanding out Sum)
Minimality : C(x) = N -1 ⇒ x ∈ X * .
From the above conditions, using Theorem 2, N 0 is calculated to be 35.
Shortest path
This algorithm computes the shortest path to every node in a graph from a root node. It is a simplified version of the Chandy-Misra shortest path algorithm [START_REF] Ghosh | Distributed systems: an algorithmic approach[END_REF].
We are allowed to distinguish the nodes with indices 1 or N in the formula structure specified by Theorem 2. The state of the node represents the distance from the root node. The root node (index 1) has state 0. Each pair of neighboring nodes communicates their states to each other, and if one of them has a lesser value v, then the one with the larger value updates its state to v + 1. This stabilizes to a state where all nodes have the shortest distance from the root stored in their state. We don't have an explicit value of ⊥ for the measure function for this, but it can be seen that we don't need it in this case. Let the interaction graph be a d-regular graph. The measure function is the sum of distances.
  transitions
    internal step(i : indices, j : indices)
      pre x[j] > x[i] + 1
        eff x[j] = x[i] + 1
      pre x[i] = 0
        eff x[j] = 1
  measure func C : x → Sum(x[i])
The ordering on the image of the measure function is the usual one on natural numbers.
Invariance : ∀i, j ∈ [N], P(x, x', i, j) ⇒ Sum(x') ≤ Sum(x).
≡ ∀i, j ∈ [N], P(x, x', i, j) ⇒ Sum(x) - x[i] - x[j] + x'[i] + x'[j] ≤ Sum(x).
≡ ∀i, j ∈ [N], P(x, x', i, j) ⇒ x'[i] + x'[j] ≤ x[i] + x[j].
Progress : ∃m, n ∈ [N], C(x) ≠ ⊥ ⇒ P(x, x', m, n) ∧ Sum(x') < Sum(x).
≡ ∀k, l ∈ [N], (E(k, l) ⇒ x[k] ≤ x[l] + 1) ∨ ∃m, n ∈ [N] (P(x, x', m, n) ∧ E(m, n) ∧ x[m] + x[n] > x'[m] + x'[n]).
(C(x) = ⊥ if no pair of neighbouring vertices is more than distance 1 apart from each other)
Noninterference : ∀q, r, s, t ∈ [N], P(x, x', q, r) ∧ P(x, x'', s, t) ∧ P(x'', x''', q, r)
⇒ (x'[q] + x'[r] < x[q] + x[r] ⇒ (x'''[q] + x'''[r] + x'''[s] + x'''[t] < x[q] + x[r] + x[s] + x[t])).
Minimality : C(x) = ⊥ ⇒ x ∈ X*.
≡ (∀i, j, E(i, j) ⇒ x[i] - x[j] ≤ 1) ⇒ x ∈ X*. (definition)
N0 is 7(d + 1) where the graph is d-regular.
Link Reversal
We describe the full link reversal algorithm as presented by Gafni and Bertsekas in [START_REF] Eli | Distributed algorithms for generating loopfree routes in networks with frequently changing topology[END_REF], where, given a directed graph with a distinguished sink vertex, it outputs a graph in which there is a path from every vertex to the sink. There is a distinguished sink node (index N). Any other node which detects that it has only incoming edges reverses the direction of all its edges with its neighbours. For termination, we use the vector of reversal distances (the least number of edges required to be reversed for a node to have a path to the sink). The states store the reversal distances, and the measure function is the identity.
  transitions
    internal step(i : indices)
      pre i ≠ N ∧ ∀j ∈ [N] (E(i, j) ⇒ direction(i, j) = -1)
      eff ∀j ∈ [N] (E(i, j) ⇒ Reverse(i, j)) ∧ x(i) = min(x(j))
  measure func C : x → x
The ordering on the image of the measure function is component-wise comparison:
V1 < V2 ⇔ ∀i (V1[i] < V2[i])
We mentioned earlier that the image of C has a well-ordering. That is a condition formulated with the idea of continuous spaces in mind. The proposed ordering for this problem works because the image of the measure function is discrete and has a lower bound (specifically, 0 N ). We elaborate a bit on P here, because it needs to include the condition that the reversal distances are calculated accurately. The node N has reversal distance 0. Any other node has reversal distance rd(i) = min(rd(j 1 ), . . . rd(j m ), rd(k 1 ) + 1, . . . rd(k n ) + 1) where j p (p = 1 . . . m) are the nodes to which it has outgoing edges, and k q (q = 1 . . . n) are the nodes it has incoming edges from. P also needs to include the condition that in a transition, reversal distances of no other nodes apart from the transitioning nodes change.
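A small fixpoint computation makes this recurrence concrete; the Python sketch below (ours, not part of P's encoding) iterates the defining equations from the sink outwards until they stabilize.

# Illustrative computation of reversal distances:
# rd(sink) = 0 and rd(i) = min over out-neighbours j of rd(j) and in-neighbours k of rd(k) + 1.
def reversal_distances(nodes, edges, sink):
    INF = float("inf")
    rd = {v: (0 if v == sink else INF) for v in nodes}
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v == sink:
                continue
            outs = [rd[w] for (u, w) in edges if u == v]       # edges leaving v
            ins  = [rd[u] + 1 for (u, w) in edges if w == v]   # edges entering v
            best = min(outs + ins, default=INF)
            if best < rd[v]:
                rd[v] = best
                changed = True
    return rd

print(reversal_distances({1, 2, 3}, {(3, 1), (3, 2), (1, 2)}, 3))   # {1: 1, 2: 1, 3: 0}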
The interaction graph in this example is complete.
Invariance : ∀i, j ∈ [N], P(x, x', i) ⇒ x'[j] ≤ x[j]. (ordering)
Progress : ∃m ∈ [N], C(x) ≠ ⊥ ⇒ (C(step(m)(x)) < C(x)).
≡ ∀n ∈ [N], (x[n] = 0) ∨ ∃m ∈ [N] (P(x, x', m) ∧ x'[m] < x[m]).
Noninterference : ∀i, j ∈ [N], P(x, x', i) ∧ P(x, x'', j) ∧ P(x'', x''', i)
⇒ (x'[i] < x[i] ⇒ x'''[i] < x[i]). (decreasing measure)
Minimality : C(x) = 0 N ⇒ x ∈ X * .
From the above conditions, using Theorem 2, N 0 is calculated to be 21.
Experiments and Discussion
We verified that instances of the aforementioned systems with sizes up to the small model parameter N0 satisfy the four conditions (invariance, progress, non-interference, minimality) of Theorem 1 using the Z3 SMT-solver [START_REF] Moura | Z3: An efficient smt solver[END_REF]. The models are checked by symbolic execution. The interaction graphs were complete graphs in all the experiments. In Figure 1, the x-axis represents the problem instance sizes, and the y-axis is the log of the running time (in seconds) for verifying Theorem 1 for the different algorithms. We observe that the running times grow rapidly with the increase in the model sizes. For the binary gossip example, the program completes in ∼ 17 seconds for a model size of 7, which is the N0 value. In the case of link reversal, for a model size of 13, the program completes in ∼ 30 minutes. We have used complete graphs in all our experiments, but as we mentioned earlier in Section 3.2, we can encode more general graphs as well.

This method is a general approach to automated verification of stabilization properties of distributed algorithms under specific fairness constraints and structural constraints on graphs. The small-model nature of the conditions to be verified is crucial to the success of this approach. We saw that many distributed graph algorithms, routing algorithms and symmetry-breaking algorithms can be verified using the techniques discussed in this paper. The problem of finding a suitable measure function which satisfies Theorem 2 is a non-trivial one in itself; however, for the problems we study, the natural measure function of the algorithm seems to work.
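The overall experimental loop can be pictured as follows; this is an illustrative Python harness of ours (not the actual scripts), where each check_* callback discharges one of the four conditions for a given size, e.g. along the lines sketched in the previous section.

# Illustrative harness: check all four conditions for every instance size up to N0 and time it.
import time

def verify_up_to(n0, checks):
    for n in range(2, n0 + 1):
        start = time.time()
        results = {name: check(n) for name, check in checks.items()}
        print("size %2d: %s  (%.1fs)" % (n, results, time.time() - start))
        if not all(results.values()):
            return False
    return True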
Fig. 1. Instance size vs log10(T), where T is the running time in seconds.
A sub-level set of a function consists of all points in the domain which map to the same value or less. | 36,790 | [
"1030900",
"1009005"
] | [
"303576",
"303576"
] |
01767336 | en | [
"info"
] | 2024/03/05 22:32:15 | 2015 | https://inria.hal.science/hal-01767336/file/978-3-319-19195-9_14_Chapter.pdf | Benoit Claudel
Quentin Sabah
Jean-Bernard Stefani
Simple Isolation for an Actor Abstract Machine
Introduction
Motivations. The actor model of concurrency [START_REF] Agha | Actors: A Model of Concurrent Computation in Distributed Systems[END_REF], where isolated sequential threads of execution communicate via buffered asynchronous message-passing, is an attractive alternative to the model of concurrency adopted e.g. for Java, based on threads communicating via shared memory. The actor model is both more congruent to the constraints of increasingly distributed hardware architectures -be they local as in multicore chips, or global as in the world-wide web -, and more adapted to the construction of long-lived dynamic systems, including dealing with hardware and software faults, or supporting dynamic update and reconfiguration, as illustrated by the Erlang system [START_REF] Armstrong | [END_REF]. Because of this, we have seen in the recent years renewed interest in implementing the actor model, be that at the level of experimental operating systems as in e.g. Singularity [START_REF] Fahndrich | Language Support for Fast and Reliable Messagebased Communication in Singularity OS[END_REF], or in language libraries as in e.g. Java [START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF] and Scala [START_REF] Haller | Actors that unify threads and events[END_REF].
When combining the actor model with an object-oriented programming model, two key questions to consider are the exact semantics of message passing, and its efficient implementation, in particular on multiprocessor architectures with shared physical memory. To be efficient, an implementation of message passing on a shared memory architecture ought to use data transfer by reference, where the only data exchanged is a pointer to the part of the memory that contains the message. However, with data transfer by reference, enforcing the share-nothing semantics of actors becomes problematic: once an arbitrary memory reference is exchanged between sender and receiver, how do you ensure the sender can no longer access the referenced data ? Usual responses to this question, typically involve restricting the shape of messages, and controlling references (usually through a reference uniqueness scheme [START_REF] Minsky | Towards alias-free pointers[END_REF]) by various means, including runtime support, type systems and other static analyses, as in Singularity [START_REF] Fahndrich | Language Support for Fast and Reliable Messagebased Communication in Singularity OS[END_REF], Kilim [START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF], Scala actors [START_REF] Haller | Capabilities for uniqueness and borrowing[END_REF], and SOTER [START_REF] Negara | Inferring ownership transfer for efficient message passing[END_REF].
Contributions. In this paper, we study a point in the actor model design space which, despite its simplicity, has never, to our knowledge, been explored before. It features a very simple programming model that places no restriction on the shape and type of messages, and does not require special types or annotations for references, yet still enforces the share nothing semantics of the actor model. Specifically, we introduce an actor abstract machine, called Siaam. Siaam is layered on top of a sequential object-oriented abstract machine, has actors running concurrently using a shared heap, and enforces strict actor isolation by means of run-time barriers that prevent an actor from accessing objects that belong to a different actor. The contributions of this paper can be summarized as follows. We formally specify the Siaam model, building on the Jinja specification of a Java-like sequential language [START_REF] Klein | A machine-checked model for a java-like language, virtual machine, and compiler[END_REF]. We formally prove, using the Coq proof assistant, the strong isolation property of the Siaam model. We describe our implementation of the Siaam model as a modified Jikes RVM [16]. We present a novel static analysis, based on a combination of points-to, alias and liveness analyses, which is used both for improving the run-time performance of Siaam programs, and for providing useful debugging support for programmers. Finally, we evaluate the performance of our implementation and of our static analysis.
Outline. The paper is organized as follows. Section 2 presents the Siaam machine and its formal specification. Section 3 presents the formal proof of its isolation property. Section 4 describes the implementation of the Siaam machine. Section 5 presents the Siaam static analysis. Section 6 presents an evaluation of the Siaam implementation and of the Siaam analysis. Section 7 discusses related work and concludes the paper. Because of space limitations, we present only some highlights of the different developments. Interested readers can find all the details in the second author's PhD thesis [START_REF] Sabah | SIAAM: Simple Isolation for an Abstract Actor Machine[END_REF], which is available online along with the Coq code [25].
Siaam: model and formal specification
Informal presentation. Siaam combines actors and objects in a programming model with a single shared heap. Actors are instances of a special class. Each actor is equipped with at least one mailbox for queued communication with other actors, and has its own logical thread of execution that runs concurrently with other actor threads. Every object in Siaam belongs to an actor, we call its owner. An object has a unique owner. Each actor is its own owner. At any point in time the ownership relation forms a partition of the set of objects. A newly created object has its owner set to that of the actor of the creating thread.
Siaam places absolutely no restriction on the references between objects, including actors. In particular, objects with different owners may reference each other. Siaam also places no constraint on what can be exchanged via messages: the contents of a message can be an arbitrary object graph, defined as the graph of objects reachable (following object references in object fields) from a root object specified when sending a message. Message passing in Siaam has a zero-copy semantics, meaning that the object graph of a message is not copied from the sender actor to the receiver actor: only the reference to the root object of a message is communicated. An actor is only allowed to send objects it owns, and it cannot send itself as part of a message content.
Mail system. Each actor may have zero, one, or several mailboxes from which it can retrieve messages at will. Mailboxes are created dynamically and may be communicated without restriction. Any actor of the system may send messages through a mailbox. However, each mailbox is associated with a receiver actor, such that only the receiver may retrieve messages from that mailbox.
Actors. The local state of each actor is represented by an object, and the associated behaviour is a method of that object. The behaviour method is free to implement any algorithm; the actor terminates when that method returns. Siaam's actors thus deviate from the classical definition of an actor, which only reacts to received communications: they are more active, in the sense that they can arbitrarily choose when to receive a message, and from which mailbox. It is nevertheless possible to replicate Agha's actor model with the Siaam actor model, and conversely: simply fix a unique mailbox for each Siaam actor and write an infinite-loop behaviour processing messages one by one to obtain Agha's model.
Fig. 1. Ownership and ownership transfer in Siaam
To ensure isolation, Siaam enforces the following invariant: an object o (in fact an executing thread) can only access fields of an object that has the same owner than o; any attempt to access the fields of an object of a different owner than the caller raises a run-time exception. To enforce this invariant, message exchange in Siaam involves twice changing the owner of all objects in a message contents graph: when a message is enqueued in a receiver mailbox, the owner of objects in the message contents is changed atomically to a null owner ID that is never assigned to any actor ; when the message is dequeued by the receiver actor, the owner of objects in the message contents is changed atomically to the receiver actor. This scheme prevents pathological situations where an object passed in a message m may be sent in another message m by the receiver actor without the latter having dequeued (and hence actually received) message m. Since Siaam does not modify object references in any way, the sender actor can still have references to objects that have been sent, but any attempt from this sender actor to access them will raise an exception.
Siaam: model and formal specification. The formal specification of the Siaam model defines an operational semantics for the Siaam language, in the form of a reduction semantics. The Siaam language is a Java-like language, for its sequential part, extended with special classes with native methods corresponding to operations of the actor model, e.g. sending and receiving messages. The semantics is organized in two layers, the single-actor semantics and the global semantics. The single-actor semantics deals with evolutions of individual actors, and reduces actor-local state. The global semantics maintains a global state not directly accessible from the single-actor semantics. In particular, the effect of reading or updating object fields by actors belongs to the single-actor semantics, but whether it is allowed is controlled by the global semantics. Communications are handled by the global semantics.
The single actor semantics extends the Jinja formal specification in HOL of the reduction semantics of a (purely sequential) Java-like language [START_REF] Klein | A machine-checked model for a java-like language, virtual machine, and compiler[END_REF]. Jinja gives a reduction semantics for its Java-like language via judgments of the form P ⊢ e, (lv, h) → e', (lv', h'), which means that in the presence of program P (a list of class declarations), expression e with a set of local variables lv and a heap h reduces to expression e' with local variables lv' and heap h'.
We extend Jinja judgments for our single-actor semantics to take the form P, w e, (lv, h) -wa → e , (lv , h ) where e, lv corresponds to the local actor state, h is the global heap, w is the identifier of the current actor (owner), and wa is the actor action requested by the reduction. Actor actions embody the Siaam model per se. They include creating new objects (with their initial owner), including actors and mailboxes, checking the owner of an object, sending and receiving messages. For instance, succesfully accessing an object field is governed by rule Read in Figure 2. Jinja objects are pairs (C, f s) of the object class name C and the field table f s. A field table is a map holding a value for each field of an object, where fields are identified by pairs (F, D) of the field name F and the name D of the declaring class. The premisses of rule Read retrieve the object referenced by a from the heap (hp s a = Some (C, f s) -where hp is the projection function that retrieves the heap component of a local actor state, and the heap itself is an association table modelled as a function that given an object reference returns an object), and the value v held in field F . In the conclusion of rule Read, reading the field F from a returns the value v, with the local state s (local variables and heap) unchanged. The actor action OwnerCheck a T rue indicates that object a has the current actor as its owner. Apart from the addition of the actor action label, rule Read is directly lifted from the small step semantics of Jinja in [START_REF] Klein | A machine-checked model for a java-like language, virtual machine, and compiler[END_REF]. In the case of field access, the rule Read is naturally complemented with rule ReadX, that raises an exception if the owner check fails, and which is specific to Siaam. Actor actions also include a special silent action, that corresponds to single-actor reductions (including exception handling) that require no access to the global state. Non silent actor actions are triggered by object creation, object field access, and native calls, i.e. method calls on the special actor and mailbox classes.
The global semantics is defined by the rule Global in Figure 2. The judgment, written P s → s , means in presence of program P , global state s reduces to global state s . The global state (xs, ws, ms, h) of a Siaam program execution comprises four components: the actor table xs, an ownership relation ws, the mailbox table ms, and a shared heap h. The projection functions acs, ows, mbs, shp return respectively the actor table, the ownerships relation, the mailbox table, and the shared heap component of the global state. The actor table associates an actor identifier to an actor local state consisting of a pair e, lv of expression and local variables. The rule Global reduces the global state by applying a single step of the single-actor semantics for actor w. In the premises of the rule, the shared heap shp s and the current local state x (expression and local variables) for w are retrieved from the global state. The actor can reduce to x with new shared heap h and perform the action wa. ok act tests the actor action precondition against s. If it is satisfiable, upd act applies the effects of wa to the global state, yielding the new tuple of state components (xs , ws , ms , ) where the heap is left unchanged. The new state s is assembled from the new mailbox table, the new ownership relation, the new heap from the single actor reduction and the new actor table where the state for actor w is updated with its new local state x . We illustrate the effect of actor actions in the next section.
Siaam: Proof of isolation
The key property we expect the Siaam model to uphold is the strong isolation (or share nothing) property of the actor model, meaning actors can only exchange information via message passing. We have formalized this property and proved it using the Coq proof assistant (v8.4) [START_REF]Coq development team[END_REF]. We present in this section some key elements of the formalization and proof, using excerpts from the Coq code. The formalization uses an abstraction of the operational semantics presented in the previous section. Specifically, we abstract away from the single-actor semantics. The local state of an actor is abstracted as being just a table of local variables (no expression), which may change in obvious ways: adding or removing a local variable, changing the value held by a local variable. The formalization (which we call Abstract Siaam) is thus a generalization of the Siaam operational semantics.
A message is just a pair consisting of a message identifier and a reference to a root object. A value can be either the null value (vnull), the mark value (vmark ), an integer (vnat), a boolean (vbool), an object reference, an actor id or a mailbox id. The special mark value is simply a distinct value used to formalize the isolation property.
Abstract Siaam: Transition rules. Evolutions of a Siaam system are modeled in Abstract Siaam as transitions between configurations, which are in turn governed by transition rules. Each transition rule in Abstract Siaam corresponds to an instance of the Global rule in the Siaam operational semantics, specialized for dealing with a given actor action. For instance, the rule governing field access, which abstracts the global semantics reduction picking the OwnerCheck a True action offered by a Read reduction of the single-actor semantics (cf. Figure 2), carrying the identifier of actor e and accessing field f of object o referenced by a, is defined as follows:
Inductive redfr : conf → aid → conf → Prop := | redfr_step : ∀ (c1 c2 : conf)(e : aid)(l1 l2 : locals)(i j : vid)(v w : value)(a: addr) (o : object)(f: fid), set_In (e,l1) (acs c1) → set_In (i, w) l1 → set_In (j,vadd a) l1 → set_In (a,o) (shp c1) → set_In (f,v) o → set_In (a, Some e) (ows c1) → v_compat w v → l2 = up_locals i v l1 → c2 = mkcf (up_actors e l2 (acs c1)) (ows c1) (mbs c1) (shp c1) →
c1 =fr e ⇒ c2 where " t '=fr' a '⇒' t' " := (redfr t a t').
The conclusion of the rule, c1 =fr e ⇒ c2, states that configuration c1 can evolve into configuration c2 by actor e doing a field access fr. The premises of the rule are the obvious ones: e must designate an actor of c1; the table l1 of local variables of actor e must have two local variables i and j, one holding a reference a to the accessed object (set_In (j,vadd a) l1), the other some value w (set_In ( i, w) l1) compatible with that read in the accessed object field (v_compat w v); a must point to an object o in the heap of c1 (set_In (a,o) (shp c1) ), which must have a field f, holding some value v (set_In (f,v) o) ; and actor e must be the owner of object o for the field access to succeed (set_In (a, Some e) (ows c1)). The final configuration c2 has the same owernship relation, mailbox table and shared heap than the initial one c1, but its actor table is updated with new local state of actor e (c2 = mkcf (up_actors e l2 (acs c1)) (ows c1) (mbs c1) (shp c1)), where variable i now holds the read value v (l2 = up_locals i v l1).
Another key instance of the Abstract Siaam transition rules is the rule presiding over message send:
Inductive redsnd : conf → aid → conf → Prop := | redsnd_step : ∀ (c1 c2 : conf)(e : aid) (a : addr) (l : locals) (ms: msgid)(mi: mid) (mb mb': mbox)(owns : owners), set_In (e,l) (acs c1) → set_In (vadd a) (values_from_locals l) → trans_owner_check (shp c1) (ows c1) (Some e) a = true → set_In (mi,mb) (mbs c1) → not (set_In ms (msgids_from_mbox mb)) → Some owns = trans_owner_update (shp c1) (ows c1) None a → mb' = mkmb (own mb) ((ms,a)::(msgs mb)) → c2 = mkcf (acs c1) owns (up_mboxes mi mb' (mbs c1)) (shp c1) → c1 =snd e ⇒ c2
where " t '=snd' a '⇒' t' " := (redsnd t a t').
The conclusion of the rule, c1 =snd e ⇒ c2, states that configuration c1 can evolve into configuration c2 by actor e doing a message send snd. The premises of the rule expects the owner of the objects reachable from the root object (referenced by a) of the message to be e; this is checked with function trans_owner_check : trans_owner_check (shp c1) (ows c1) (Some e) a = true. When placing the message in the mailbox mb of the receiver actor, the owner of all the objects reachable is set to None; this is done with function trans_owner_update: Some owns = trans_owner_update (shp c1) (ows c1) None a. Placing the message with id ms and root object referenced by a in the mailbox is just a matter of queuing it in the mailbox message queue: mb' = mkmb (own mb) ((ms,a)::(msgs mb)).
The transition rules of Abstract Siaam also include a rule governing silent transitions, i.e. transitions that abstract from local actor state reductions that elicit no change on other elements of a configuration (shared heap, mailboxes, ownership relation, other actors). The latter are just modelled as transitions arbitrarily modifying a given actor local variables, with no acquisition of object references that were previously unknown to the actor.
Isolation proof. The Siaam model ensures that the only means of information transfer between actors is message exchange. We can formalize this isolation property using mark values. We call an actor a clean if its local variables do not hold a mark, and if all objects reachable from a and belonging to a hold no mark in their fields. An object o is reachable from an actor a if a has a local variable holding o's reference, or if, recursively, an object o' is reachable from a which holds o's reference in one of its fields. The isolation property can now be characterized as follows: a clean actor in any configuration remains clean during an evolution of the configuration if it never receives any message. In Coq:
Theorem ac_isolation : ∀ (c1 c2 : conf) (a1 a2: actor), wf_conf c1 → set_In a1 (acs c1) → ac_clean (shp c1) a1 (ows c1) → c1 =@ (fst a1) ⇒ * c2 → Some a2 = lookup_actor (acs c2) (fst a1) → ac_clean (shp c2) a2 (ows c2).
The theorem states that, in any well-formed configuration c1, an actor a1 which is clean (ac_clean (shp c1) a1 (ows c1)), remains clean in any evolution of c1 that does not involve a reception by a1. This is expressed as c1 =@ (fst a1) ⇒ * c2 and ac_clean (shp c2) a2 (ows c2), where fst a1 just extracts the identifier of actor a1, and a2 is the descendant of actor a1 in the evolution (it has the same actor identifier than a1: Some a2 = lookup_actor (acs c2) (fst a1)). The relation =@ a ⇒ * , which represents evolutions not involving a message receipt by actor a, is defined as the reflexive and transitive closure of relation =@ a ⇒, which is a one step evolution not involving a receipt by a. The isolation theorem is really about transfer of information between actors, the mark denoting a distinguished bit of information held by an actor. At first sight it appears to say nothing about about ownership, but notice that a clean actor a is one such that all objects that belong to a are clean, i.e. hold no mark in their fields. Thus a corollary of the theorem is that, in absence of message receipt, actor a cannot acquire an object from another actor (if that was the case, transferring the ownership of an unclean object would result in actor a becoming unclean).
A well-formed configuration is a configuration where each object in the heap has a single owner, all identifiers are indeed unique, where mailboxes hold messages sent by actors in the actor table, and all objects referenced by actors (directly or indirectly, through references in object fields) belong to the heap. To prove theorem ac_isolation, we first prove that well-formedness is an invariant in any configuration evolution:
Theorem red_preserves_wf : ∀ (c1 c2 : conf), c1 ⇒ c2 → wf_conf c1 → wf_conf c2.
The theorem red_preserves_wf is proved by induction on the derivation of the assertion c1 ⇒ c2. To prove the different cases, we rely mostly on simple reasoning with sets, and a few lemmas characterizing the correctness of table manipulation functions, of the trans_owner_check function which verifies that all objects reachable from the root object in a message have the same owner, and of the trans_owner_update function which updates the ownership table during message transfers. Using the invariance of well-formedness, theorem ac_isolation is proved by induction on the derivation of the assertion c1 =@ (fst a1) ⇒ * c2. To prove the different cases, we rely on several lemmas dealing with reachability and cleanliness.
The last theorem, live_mark, is a liveness property that shows that the isolation property is not vacuously true. It states that marks can flow between actors during execution. In Coq:
Theorem live_mark : ∃ (c1 c2 : conf)(ac1 ac2 : actor), c1 ⇒ * c2 ∧ set_In ac1 (acs c1) ∧ ac_clean (shp c1) ac1 (ows c1) ∧ Some ac2 = lookup_actor (acs c2) (fst ac1) ∧ ac_mark (shp c2) ac2 (ows c2).
Siaam: Implementation
We have implemented the Siaam abstract machine as a modified Jikes RVM [16]. Specifically, we extended the Jikes RVM bytecode and added a set of core primitives supporting the ownership machinery, which are used to build trusted APIs implementing particular programming models. The Siaam programming model is available as a trusted API that implements the formal specification presented in Section 2. On top of the Siaam programming model, we implemented the ActorFoundry API as described in [START_REF] Karmani | Actor frameworks for the JVM platform: a comparative analysis[END_REF], which we used for some of our evaluation. Finally we implemented a trusted event-based actor programming model on top of the core primitives, which can dispatch thousand of lightweight actors over pools of threads, and enables to build high-level APIs similar to Kilim with Siaam's ownership-based isolation.
Bytecode. The standard Java VM instructions are extended to include: a modified object creation instruction New, which creates an object on the heap and sets its owner to that of the creating thread; modified field read and write access instructions getfield and putfield with owner check; and modified array load and store instructions aload and astore with owner check.
Virtual machine core. Each heap object and each thread of execution have an owner reference, which points to an object implementing the special Owner interface. A thread can only access objects belonging to the Owner instance referenced by its owner reference. Core primitives include operations to retrieve and set the owner of the current thread, to retrieve the owner of an object, to withdraw and acquire ownership over objects reachable from a given root object. In the Jikes RVM, objects are represented in memory by a sequence of bytes organized into a leading header section and the trailing scalar object's fields or array's length and elements. We extended the object header with two reference-sized words, OWNER and LINK. The OWNER word stores a reference to the object owner, whereas the LINK word is introduced to optimize the performance of object graph traversal operations.
Contexts. Since the Jikes RVM is fully written in Java, threads seamlessly execute application bytecode and the virtual machine's internal bytecode. We have introduced a notion of execution context in the VM to avoid subjecting VM bytecode to the owner-checking mechanisms. A method in the application context is instrumented with all the isolation mechanisms, whereas methods in the VM context are not. If a method can be used in both contexts, it must be compiled in two versions, one for each context. When a method is invoked, the context of the caller is used to deduce which version of the method should be called. The decision is taken statically when the invoke instruction is compiled.
Ownership transfer. Central to the performance of the Siaam virtual machine are the operations implementing ownership transfer, withdraw and acquire. In the formal specification, owner-checking an object graph and updating the owner of objects in the graph is done atomically (see e.g. the message send transition rule in Section 3). However, implementing the withdraw operation as an atomic operation would be costly. Furthermore, an implementation of ownership transfer must minimize graph traversals. We have implemented an iterative algorithm for withdraw that chains objects that are part of a message through their LINK word. The list thus obtained is maintained as long as the message exists, so that the acquire operation can efficiently traverse the objects of the message.
The algorithm leverages specialized techniques, initially introduced in the Jikes RVM to optimize the reference scanning phase during garbage collection [START_REF] Garner | A comprehensive evaluation of object scanning techniques[END_REF], to efficiently enumerate the reference offsets for a given base object.
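The following sketch, in Python pseudocode of our own (the real implementation works directly on Jikes RVM object headers and is written in Java), illustrates the idea: withdraw owner-checks every object reachable from the message root and threads it into a list through its LINK slot, and acquire later walks that list without re-traversing the object graph.

# Illustrative sketch of the withdraw / acquire pair; `owner` and `link` stand for the
# OWNER and LINK header words, `refs` for the outgoing references of an object.
class Obj:
    def __init__(self, refs=()):
        self.owner, self.link, self.refs = None, None, list(refs)

def withdraw(root, sender):
    head, stack, seen = None, [root], set()
    while stack:                          # iterative traversal, no recursion
        o = stack.pop()
        if id(o) in seen:
            continue
        seen.add(id(o))
        if o.owner is not sender:         # owner check: the whole graph must belong to sender
            raise RuntimeError("ownership violation")
        o.owner = None                    # in transit: no actor may access the object
        o.link, head = head, o            # thread the object into the message's LINK chain
        stack.extend(o.refs)
    return head                           # kept with the message until it is dequeued

def acquire(head, receiver):
    while head is not None:               # walk the LINK chain built by withdraw
        head.owner, head.link, head = receiver, None, head.link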
Siaam: Static Analysis
We describe in this section some elements of Siaam static analysis to optimize away owner-checking on field read and write instructions. The analysis is based on the observation that an instruction accessing an object's field does not need an owner-checking if the object accessed belongs to the executing actor. Any object that has been allocated or received by an actor and has not been passed to another actor ever since, belongs to that actor. The algorithm returns an under-approximation of the owner-checking removal opportunities in the analyzed program. Considering a point in the program, we say an object (or a reference to an object) is safe when it always belongs to the actor executing that point, regardless of the execution history. By opposition, we say an object is unsafe when sometimes it doesn't belong to the current actor. We extend the denomination to instructions that would respectively access a safe object or an unsafe object. A safe instruction will never throw an OwnerException, whereas an unsafe instruction might.
Analysis. The Siaam analysis is structured in two phases. First the safe dynamic references analysis employs a local must-alias analysis to propagate owner-checked references along the control-flow edges. It is optionally refined with an inter-procedural pass propagating safe references through method arguments and returned values. Then the safe objects analysis tracks safe runtime objects along call-graph and method control-flow edges by combining an interprocedural points-to analysis and an intra-procedural live variable analysis. Both phases depend on the transfered abstract objects analysis that propagates unsafe abstract objects from the communication sites downward the call graph edges.
By combining results from the two phases, the algorithm computes conservative approximations of unsafe runtime objects and safe variables at any controlflow point in the program. The owner-check elimination for a given instruction s accessing the reference in variable V proceeds as illustrated in Figure 3. First the unsafe objects analysis is queried to know whether V may points-to an unsafe runtime object at s. If not, the instruction can skip the owner-check for V . Otherwise, the safe reference analysis is consulted to know whether the reference in variable V is considered safe at s, thanks to dominant owner-checks of the reference in the control-flow graph.
The Siaam analysis makes use of several standard intra- and inter-procedural program analyses: a call-graph representation, an inter-procedural points-to analysis, an intra-procedural liveness analysis, and an intra-procedural must-alias analysis. Each of these analyses exists in many different variants offering various tradeoffs between result accuracy and algorithmic complexity, but regardless of the implementation, they provide a rather standard querying interface. Our analysis is implemented as a framework that can make use of different instances of these analyses. The two phases of the analysis are also independent: it is possible to disable one and replace it with a very conservative approximation, allowing faster computation at the price of less accurate results.
Implementations. The intra-procedural safe reference analysis which is part of the Siaam analysis has been included in the Jikes RVM optimizing compiler. Despite its relative simplicity and its very conservative assumptions, it efficiently eliminates about half of the owner-check barriers introduced by application bytecode and the standard library for the benchmarks we have tested (see Section 6). The safe reference analysis and the safe object analyses from the Siaam analysis have been implemented in their inter-procedural versions as an offline tool written in Java. The tool interfaces with the Soot analysis framework [23], that provides the program representation, the call graph, the interprocedural pointer analysis, the must-alias analysis and the liveness analysis we use.
Programming assistant. The Siaam programming model is quite simple, requiring no programmer annotation, and placing no constraint on messages. However, it may generate hard to understand runtime exceptions due to failed owner-checks. The Siaam analysis is therefore used as the basis of a programming assistant that helps application developers understand why a given program statement is potentially unsafe and may throw an owernship exception at runtime. The Siaam analysis guarantees that there will be no false negative, but to limit the amount of false positives it is necessary to use a combination of the most accurate standard (points-to, must-alias and liveness) analyses. The programming assistant tracks a program P backward, starting from an unsafe statement s with a non-empty set of unverified ownerhip preconditions (as given by the ok act function in Section 2), trying to find every program points that may explain why a given precondition is not met at s. For each unsatisfied precondition, the assistant can exhibit the shortest execution paths that result in an exception being raised at s. An ownership precondition may comprise requirements that a variable or an object be safe. When a requirement is not satisfied before s, it raises one or several questions of the form "why is x unsafe before s?". The assistant traverses the control-flow backward, looks for immediate answers at each statement reached, and propagates the questions further if necessary, until all questions have found an answer.
Siaam Implementation. We present first an evaluation of the overall performance of our Siaam implementation based on the DaCapo benchmark suite [START_REF] Blackburn | The DaCapo benchmarks: Java benchmarking development and analysis[END_REF], representative of various real industrial workloads. These applications use regular Java. The bytecode is instrumented with Siaam's owner-checks and all threads share the same owner. With this benchmark we measure the overhead of the dynamic ownership machinery, encompassing the object owner initialization and the owner-checking barriers, plus the allocation and collection costs linked to the object header modifications.
We benchmarked five configurations. no siaam is the reference Jikes RVM without modifications. opt designates the modified Jikes RVM with JIT ownerchecks elimination. noopt designates the modified Jikes RVM without JIT ownerchecks elimination. sopt is the same as opt but the application bytecode has safety annotations issued by the offline Siaam static analysis tool. Finally soptnc is the same as sopt without owner-check barriers for the standard library bytecode. We executed the 2010-MR2 version of the DaCapo benchamrks, with two workloads, the default and the large. Table 1 shows the results for the Dacapo 2010-MR2 runs. The results were obtained using a machine equipped with an Intel Xeon W3520 2.67Ghz processor. The execution time results are normalized with respect to the no-siaam configuration for each program of the suite: lower is better. The geometric mean summarizes the typical overhead for each configuration. The opt figures in Table 1 show that the modified virtual machine including JIT barrier elimination has an overhead of about 30% compared to the not-isolated reference. The JIT elimination improves the performances by about 20% compared to the noopt configuration. When the bytecode is annotated by the whole-program static analysis the performance is 10% to 20% better than with the runtime-only optimization. However, the DaCapo benchmarks use the Java reflection API to load classes and invoke methods, meaning our static analysis was not able to process all the bytecode with the best precision. We can expect better results with other programs for which the call graph can be entirely built with precision. Moreover we used for the benchmarks a context-insensitive, flow-insensitive pointer analysis, meaning the Siaam analysis could be even more accurate with sensitive standard analyses. Finally the standard library bytecode is not annotated by our tool, it is only treated by the JIT elimination optimization. The soptnc configuration provides a good indication of what the full optimization would yield. The results show an overhead (w.r.t. application) with a mean of 15%, which can be considered as an acceptable price to pay for the simplicity of developing isolated programs with Siaam.
The Siaam virtual machine consumes more heap space than the unmodified Jikes RVM due to the duplication of the standard library used by both the virtual machine and the application, and because of the two words we add in every object's header. The average object size in the DaCapo benchmarks is 62 bytes, so our implementations increases it by 13%. We have measured a 13% increase in the full garbage collection time, which accounts for the tracing of the two additional references and the memory compaction. Siaam Analysis. We compare the efficiency of the Siaam whole-program analysis to the SOTER algorithm, which is closest to ours. Table 2 contains the results that we obtained for the benchmarks reported in [START_REF] Negara | Inferring ownership transfer for efficient message passing[END_REF], that use Actor-Foundry programs. For each analyzed application we give the total number of owner-checking barriers and the total number of message passing sites in the bytecode. The columns "Ideal safe" show the expected number of safe sites for each criteria. The column " Siaam safe" gives the result obtained with the Siaam analysis. The analysis execution time is given in the third main colum. The last column compares the result ratio to ideal for both SOTER and Siaam. Our analysis outperforms SOTER significantly. SOTER relies on an inter-procedural live-analysis and a points-to analysis to infer message passing sites where a byreference semantics can applies safely. Given an argument a i of a message passing site s in the program, SOTER computes the set of objects passed by a i and the set of objects transitively reachable from the variables live after s. If the intersection of these two sets is empty, SOTER marks a i as eligible for by-reference argument passing, otherwise it must use the default by-value semantic. The weakness to this pessimistic approach is that among the live objects, a significant part won't actually be accessed in the control-flow after s. On the other hand, Siaam do care about objects being actually accessed, which is a stronger evidence criterion to incriminate message passing sites. Although Siaam's algorithm wasn't designed to optimize-out by-value message passing, it is perfectly adapted for that task. For each unsafe instruction detected by the algorithm, there is one or several guilty dominating message passing sites. Our diagnosis algorithm tracks back the application control-flow from the unsafe instruction to the incriminated message passing sites. These sites represent a subset of the sites where SOTER cannot optimize-out by-value argument passing.
Related Work and Conclusion
Enforcing isolation between different groups of objects, programs or threads in presence of a shared memory has been much studied in the past two decades. Although we cannot give here a full survey of the state of the art (a more in depth analysis is available in [START_REF] Sabah | SIAAM: Simple Isolation for an Abstract Actor Machine[END_REF]), we can point out three different kinds of related works: those relying on type annotations to ensure isolation, those relying on run-time mechanisms, and those relying on static analyses. Much work has been done on controlling aliasing and encapsulation in objectoriented languages and systems, in a concurrent context or not. Much of the works in these areas rely on some sort of reference uniqueness, that eliminates object sharing by making sure that there is only one reference to an object at any time, e.g. [START_REF] Clarke | External uniqueness is unique enough[END_REF][START_REF] Haller | Capabilities for uniqueness and borrowing[END_REF][START_REF] Hogg | Islands: aliasing protection in object-oriented languages[END_REF][START_REF] Minsky | Towards alias-free pointers[END_REF][START_REF] Müller | Ownership transfer in universe types[END_REF]. All these systems restrict the shape of object graphs or the use of references in some way. In contrast, Siaam makes no such restriction. A number of systems rely on run-time mechanisms for achieving isolation, most using either deep-copy or special message heaps for communication, e.g. [START_REF] Czajkowski | Multitasking without compromise: A virtual machine evolution[END_REF][START_REF] Fahndrich | Language Support for Fast and Reliable Messagebased Communication in Singularity OS[END_REF][START_REF] Geoffray | I-JVM: a java virtual machine for component isolation in osgi[END_REF][START_REF] Gruber | Ownership-based isolation for concurrent actors on multicore machines[END_REF]. Of these, O-Kilim [START_REF] Gruber | Ownership-based isolation for concurrent actors on multicore machines[END_REF], which builds directly on the PhD work of the first author of this paper [START_REF] Claudel | Mécanismes logiciels de protection mémoire[END_REF], is the closest to Siaam: it places no constraint on transferred object graphs, but at the expense of a complex programming model and no programmer support, in contrast to Siaam. Finally several works develop static analyses for efficient concurrency or ownership transfer, e.g. [START_REF] Carlsson | Message analysis for concurrent programs using message passing[END_REF][START_REF] Negara | Inferring ownership transfer for efficient message passing[END_REF][START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF]. Kilim [START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF] relies in addition on type annotations to ensure tree-shaped messages. The SOTER [START_REF] Negara | Inferring ownership transfer for efficient message passing[END_REF] analysis is closest to the Siaam analysis and has been discussed in the previous section.
With its annotation-free programming model, which places no restriction on object references and message shape, we believe Siaam to be really unique compared to other approaches in the literature. In addition, we have not found an equivalent of the formal proof of isolation we have conducted for Siaam. Our evaluations demonstrate that the Siaam approach to isolation is perfectly viable: it suffers only from a limited overhead in performance and memory consumption, and our static analysis can significantly improve the situation. The one drawback of our programming model, raising possibly hard to understand runtime exceptions, is greatly alleviated by the use of the Siaam analysis in a programming assistant.
Figure 5.1: Owner-check elimination decision diagram. The left-most question is answered by the safe objects analysis. The right-most question is answered by the safe references analysis.
Fig. 3. Owner-check elimination decision diagram.
Table 1. DaCapo benchmarks.

                   default workload                 large workload
Benchmark        opt   noopt  sopt  soptnc      opt   noopt  sopt  soptnc
antlr            1.20  1.32   1.09  1.11        1.21  1.33   1.11  1.10
bloat            1.24  1.41   1.17  1.05        1.40  1.59   1.14  0.96
hsqldb           1.24  1.36   1.09  1.06        1.45  1.60   1.29  1.10
jython           1.52  1.73   1.41  1.24        1.45  1.70   1.45  1.15
luindex          1.25  1.46   1.09  1.05        1.25  1.43   1.09  1.03
lusearch         1.31  1.45   1.17  1.18        1.33  1.49   1.21  1.21
pmd              1.32  1.37   1.29  1.24        1.34  1.44   1.39  1.30
xalan            1.24  1.39   1.33  1.35        1.29  1.41   1.38  1.40
geometric mean   1.28  1.43   1.20  1.16        1.34  1.50   1.25  1.15
Table 2. ActorFoundry analyses.
Columns: owner-check sites, owner-check safe (Ideal), owner-check safe (Siaam); message-passing sites, safe (Ideal), safe (Siaam); analysis time (sec); ratio to Ideal for Siaam and SOTER.

                          Owner-check                 Message passing           Time   Ratio to Ideal
                      Sites  Ideal  Siaam         Sites  Ideal  Siaam          (sec)   Siaam   SOTER
ActorFoundry
  threadring            24     24     24             8      8      8            0.1    100%    100%
  (1) concurrent        99     99     99            15     12     10            0.1     98%     58%
  (2) copymessages      89     89     84            22     20     15            0.1     91%     56%
  performance           54     54     54            14     14     14            0.2    100%     86%
  pingpong              28     28     28            13     13     13            0.1    100%     89%
  refmessages            4      4      4             6      6      6            0.1    100%     67%
Benchmarks
  chameneos             75     75     75            10     10     10            0.1    100%     33%
  fibonacci             46     46     46            13     13     13            0.2    100%     86%
  leader                50     50     50            10     10     10            0.1    100%     17%
  philosophers          35     35     35            10     10     10            0.2    100%    100%
  pi                    31     31     31             8      8      8            0.1    100%     67%
  shortestpath         147    147    147            34     34     34            1.2    100%     88%
Synthetic
  quicksortCopy         24     24     24             8      8      8            0.2    100%    100%
  (3) quicksortCopy2    56     56     51            10     10      5            0.1     85%     75%
Real world
  clownfish            245    245    245           102    102    102            2.2    100%     68%
  (4) rainbow fish     143    143    143            83     82     82            0.2     99%     99%
  swordfish            181    181    181           136    136    136            1.7    100%     97%
Siaam enforces the constraint that all objects reachable from a message root object have the same owner, namely the sending actor. If the constraint is not met, sending the message fails. However, this constraint, which makes for a simple design, is just a design option. An alternative would be to consider that a message's contents consist of all the objects reachable from the root object which have the sending actor as their owner. This alternate semantics would not change the actual mechanics of the model nor the strong isolation it enforces.
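To make the rule and the alternative concrete, here is a minimal sketch of the two semantics (written in Python only for brevity; owner_of and references_of are hypothetical helpers standing in for the JVM-level owner field and reference traversal, not actual Siaam APIs):

```python
def siaam_send_check(root, sender, owner_of, references_of):
    """Siaam rule: every object reachable from the message root must be owned
    by the sending actor, otherwise the send fails with a runtime exception."""
    seen, stack = set(), [root]
    while stack:
        obj = stack.pop()
        if id(obj) in seen:
            continue
        seen.add(id(obj))
        if owner_of(obj) is not sender:
            raise RuntimeError("owner check failed: reachable object not owned by sender")
        stack.extend(references_of(obj))
    return seen  # the whole reachable graph is transferred


def alternative_message_contents(root, sender, owner_of, references_of):
    """Alternative semantics sketched above: the message consists only of the
    reachable objects owned by the sender; foreign-owned objects are simply
    not traversed, and the send never fails."""
    message, stack = set(), [root]
    while stack:
        obj = stack.pop()
        if id(obj) in message or owner_of(obj) is not sender:
            continue
        message.add(id(obj))
        stack.extend(references_of(obj))
    return message
```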
Jinja, as described in [START_REF] Klein | A machine-checked model for a java-like language, virtual machine, and compiler[END_REF], only covers a subset of the Java language. It does not have class member qualifiers, interfaces, generics, or concurrency.
"992"
] | [
"311165",
"253810",
"209151"
] |
01401473 | en | [
"phys"
The last refuge of mixed wino-Higgsino dark matter

M. Beneke, A. Bharucha, A. Hryczuk, S. Recksiegel, P. Ruiz-Femenía

https://hal.science/hal-01401473/file/1611.00804.pdf
We delineate the allowed parameter and mass range for a wino-like dark matter particle containing some Higgsino admixture in the MSSM by analysing the constraints from diffuse gamma-rays from the dwarf spheroidal galaxies, galactic cosmic rays, direct detection and cosmic microwave background anisotropies. A complete calculation of the Sommerfeld effect for the mixed-neutralino case is performed. We find that the combination of direct and indirect searches poses significant restrictions on the thermally produced wino-Higgsino dark matter with correct relic density. For µ > 0 nearly the entire parameter space considered is excluded, while for µ < 0 a substantial region is still allowed, provided conservative assumptions on astrophysical uncertainties are adopted.
1 Introduction
Many remaining regions in the parameter space of the Minimal Supersymmetric Standard Model (MSSM), which yield the observed thermal relic density for neutralino dark matter, rely on very specific mechanisms, such as Higgs-resonant annihilation in the socalled funnel region, or sfermion co-annihilation. In [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF] we identified new regions, where the dark matter particle is a mixed-as opposed to pure-wino, has mass in the TeV region, and yields the observed relic density. These new regions are driven to the correct relic abundance by the proximity of the resonance of the Sommerfeld effect due to electroweak gauge boson exchange. In such situations, the annihilation cross section is strongly velocity dependent, and the present-day annihilation cross section is expected to be relatively large, potentially leading to observable signals in indirect searches for dark matter (DM). On the other hand, a substantial Higgsino fraction of a mixed dark matter particle leads to a large, potentially observable dark matter-nucleon scattering cross section.
In this paper we address the question of which part of this region survives the combination of direct and indirect detection constraints. For the latter we consider diffuse gamma-rays from the dwarf spheroidal galaxies (dSphs), galactic cosmic rays (CRs) and cosmic microwave background (CMB) anisotropies. These have been found to be the most promising channels for detecting or excluding the pure-wino DM model [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF]. Stronger limits can be obtained only from the non-observation of the gamma-line feature and to a lesser extent from diffuse gamma-rays both originating in the Galactic Centre (GC). Indeed, it has been shown [START_REF] Cohen | Wino Dark Matter Under Siege[END_REF][START_REF] Fan | In Wino Veritas? Indirect Searches Shed Light on Neutralino Dark Matter[END_REF] that the pure-wino model is ruled out by the absence of an excess in these search channels, unless the galactic dark matter profile develops a core, which remains a possibility. Since the viability of wino-like DM is a question of fundamental importance, we generally adopt the weaker constraint in case of uncertainty, and hence we take the point of view that wino-like DM is presently not excluded by gamma-line and galactic diffuse gamma-ray searches. Future results from the Čerenkov Telescope Array (CTA) are expected to be sensitive enough to resolve this issue (see e.g. [START_REF] Roszkowski | Prospects for dark matter searches in the pMSSM[END_REF][START_REF] Lefranc | Dark Matter in γ lines: Galactic Center vs dwarf galaxies[END_REF]), and will either observe an excess in gamma-rays or exclude the dominantly wino DM MSSM parameter region discussed in the present paper.
Imposing the observed relic density as a constraint, the pure-wino DM model has no free parameters and corresponds to the limit of the MSSM when all other superpartner particles and non-standard Higgs bosons are decoupled. Departing from the pure wino in the MSSM introduces many additional dimensions in the MSSM parameter space and changes the present-day annihilation cross section, branching ratios (BRs) for particular primary final states, and the final gamma and CR spectra leading to a modification of the limits. The tools for the precise computation of neutralino dark matter (co-) annihilation in the generic MSSM when the Sommerfeld enhancement is operative have been developed in [START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF][START_REF] Hellmann | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos II. P-wave and next-to-next-to-leading order S-wave coefficients[END_REF][START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos III. Computation of the Sommerfeld enhancements[END_REF] and applied to relic density computations in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF][START_REF] Beneke | Heavy neutralino relic abundance with Sommerfeld enhancements -a study of pMSSM scenarios[END_REF]. The present analysis is based on an extension of the code to calculate the annihilation cross sections for all exclusive two-body final states separately, rather than the inclusive cross section.
Further motivation for the present study is provided by the spectrum of the cosmic antiproton-to-proton ratio reported by the AMS-02 collaboration [START_REF] Aguilar | Antiproton Flux, Antiproton-to-Proton Flux Ratio, and Properties of Elementary Particle Fluxes in Primary Cosmic Rays Measured with the Alpha Magnetic Spectrometer on the International Space Station[END_REF], which appears to be somewhat harder than expected from the commonly adopted cosmic-ray propagation models. In [START_REF] Ibe | Wino Dark Matter in light of the AMS-02 2015 Data[END_REF] it has been shown that pure-wino DM can improve the description of this data. Although our understanding of the background is insufficient to claim the existence of a dark matter signal in antiprotons, it is nevertheless interesting to check whether the surviving mixed-wino DM regions are compatible with antiproton data.

The outline of this paper is as follows. In Section 2 we summarize the theoretical input, beginning with a description of the dominantly wino MSSM parameter region satisfying the relic-density constraint, then providing some details on the computation of the DM annihilation rates to primary two-body final states. The following Section 3 supplies information about the implementation of the constraints from diffuse gamma-rays from the dSphs, galactic CRs, direct detection and the CMB, and the data employed for the analysis. The results of the indirect detection analysis are presented in Section 4 as constraints in the plane of the two most relevant parameters of the MSSM, the wino mass parameter M 2 and |µ| - M 2 , where µ is the Higgsino mass parameter. In Section 5 the indirect detection constraints are combined with that from the non-observation of dark matter-nucleon scattering. For the case of µ < 0 we demonstrate the existence of a mixed wino-Higgsino region satisfying all constraints, while for µ > 0 we show that there is essentially no remaining parameter space left. Section 6 concludes.
2 CR fluxes from wino-like dark matter

2.1 Dominantly-wino DM with thermal relic density in the MSSM
In [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF] the Sommerfeld corrections to the relic abundance computation for TeV-scale neutralino dark matter in the full MSSM have been studied. The ability to perform the computations for mixed dark matter at a general MSSM parameter space point [START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF][START_REF] Hellmann | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos II. P-wave and next-to-next-to-leading order S-wave coefficients[END_REF][START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos III. Computation of the Sommerfeld enhancements[END_REF][START_REF] Beneke | Heavy neutralino relic abundance with Sommerfeld enhancements -a study of pMSSM scenarios[END_REF] revealed a large neutralino mass range with the correct thermal relic density, which opens mainly due to the proximity of the resonance of the Sommerfeld effect and its dependence on MSSM parameters. In this subsection we briefly review the dominantlywino parameter region identified in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF], which will be studied in this paper. "Dominantlywino" or "wino-like" here refers to a general MSSM with non-decoupled Higgs bosons, sfermions, bino and Higgsinos as long as the mixed neutralino dark matter state is mainly wino. We also require that its mass is significantly larger than the electroweak scale.
The well-investigated pure-wino model refers to the limit in this parameter space, when all particles other than the triplet wino are decoupled. Despite the large number of parameters needed to specify a particular MSSM completely, in the dominantly-wino region, the annihilation rates depend strongly only on a subset of parameters. These are the wino, bino and Higgsino mass parameters M 2 , M 1 and µ, respectively, which control the neutralino composition and the chargino-neutralino mass difference, and the common sfermion mass parameter M sf . In this work we assume that the bino is much heavier than the wino, that is, the lightest neutralino is a mixed wino-Higgsino. Effectively a value of |M 1 | larger than M 2 by a few 100 GeV is enough to decouple the bino in the TeV region. 1 The wino mass parameter determines the lightest neutralino (LSP) mass, and the difference |µ| - M 2 the wino-Higgsino admixture. In the range M 2 = 1 - 5 TeV considered here, the relation m LSP ≈ M 2 remains accurate to a few GeV, when some Higgsino fraction is added to the LSP state, and values of |µ| - M 2 ≳ 500 GeV imply practically decoupled Higgsinos. Increasing the Higgsino component of the wino-like LSP lowers its coupling to charged gauge bosons, to which wino-like neutralinos annihilate predominantly, and therefore increases the relic density. Larger mixings also imply that the mass difference between the lightest chargino and neutralino increases, which generically reduces the size of the Sommerfeld enhancement of the annihilation cross section. These features are apparent in the contours of constant relic density in the |µ|-M 2 vs. M 2 plane for the wino-Higgsino case shown in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF], which are almost straight for large |µ|-M 2 , but bend to lower values of m LSP as |µ| - M 2 is reduced. A representative case is reproduced in Fig. 1. The contours also bend towards lower M 2 when sfermions become lighter, as they mediate the t- and u-channel annihilation into SM fermions, which interferes destructively with the s-channel annihilation, effectively lowering the co-annihilation cross section. By choosing small values of M sf (but larger than 1.25 m LSP to prevent sfermion co-annihilation, not treated by the present version of the code), LSP masses as low as 1.7 TeV are seen to give the correct thermal density, to be compared with the pure-wino result, m LSP ≈ 2.8 TeV.
For M 2 > 2.2 TeV a resonance in the Sommerfeld-enhanced rates is present, which extends to larger M 2 values as the Higgsino fraction is increased. The enhancement of the cross section in the vicinity of the resonance makes the contours of constant relic density cluster around it and develop a peak that shifts m LSP to larger values. In particular, the largest value of M 2 , which gives the correct thermal relic density, is close to 3.3 TeV, approximately 20% higher than for the pure-wino scenario. The influence of the less relevant MSSM Higgs mass parameter M A is also noticeable when the LSP contains some Higgsino admixture, which enhances the couplings to the Higgs (and Z) bosons in s-channel annihilation. This is more pronounced if M A is light enough such that final states containing heavy Higgs bosons are kinematically accessible. The corresponding increase in the annihilation cross section results in positive shifts of around 100 to 250 GeV in the value of M 2 giving the correct relic density on decreasing M A from 10 TeV to 800 GeV. In summary, a large range of lightest neutralino masses, 1.7 -3.5 TeV, provides the correct relic density for the mixed wino-Higgsino state as a consequence of the Sommerfeld corrections.
The MSSM parameter points considered in this paper have passed standard collider, flavour and theoretical constraints as discussed in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF]. In the dominantly-wino parameter space, most of the collider and flavour constraints are either satisfied automatically or receive MSSM corrections that are suppressed or lie within the experimental and theoretical uncertainties. Ref. [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF] further required compatibility with direct dark matter detection constraints by imposing that the DM-nucleon spin-independent cross section was less than twice the LUX limits reported at the time of publication [START_REF] Akerib | First results from the LUX dark matter experiment at the Sanford Underground Research Facility[END_REF]. This did not affect the results significantly, see Fig. 1, as in most of the parameter space of interest the scattering cross section was predicted to be much above those limits. Recently the LUX collaboration has presented a new limit, stronger than the previous one by approximately a factor of four [START_REF] Akerib | Results from a search for dark matter in LUX with 332 live days of exposure[END_REF], potentially imposing more severe constraints on the dominantly-wino neutralino region of the MSSM parameter space. The details of the implementation of the limits from indirect detection searches for the mixed wino, which were not included in our previous analysis, and from the new LUX results are given in Section 3.

Figure 1: Contours of constant relic density in the M 2 vs. (µ - M 2 ) plane for µ > 0, as computed in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF]. The (green) band indicates the region within 2σ of the observed dark matter abundance. Parameters are as given in the header, and the trilinear couplings are set to A i = 8 TeV for all sfermions except for that of the stop, which is fixed by the Higgs mass value. The black solid line corresponds to the old LUX limit [START_REF] Akerib | First results from the LUX dark matter experiment at the Sanford Underground Research Facility[END_REF] on the spin-independent DM-nucleon cross section, which excludes the shaded area below this line. Relaxing the old LUX limit by a factor of two to account for theoretical uncertainties eliminates the direct detection constraint on the shown parameter space region.
2.2 Branching fractions and primary spectra
The annihilation of wino-like DM produces highly energetic particles, which subsequently decay, fragment and hadronize into stable SM particles, producing the CR fluxes.
The primary particles can be any of the SM particles, and the heavy MSSM Higgs bosons, H 0 , A 0 and H ± , when they are kinematically accessible. We consider neutralino dark matter annihilation into two primary particles. The number of such exclusive twobody channels is 31, and the corresponding neutralino annihilation cross sections are computed including Sommerfeld loop corrections to the annihilation amplitude as described in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF][START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF][START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos III. Computation of the Sommerfeld enhancements[END_REF]. As input for this calculation we need to provide the tree-level exclusive annihilation rates of all neutral neutralino and chargino pairs, since through Sommerfeld corrections the initial LSP-LSP state can make transitions to other virtual states with heavier neutralinos or a pair of conjugated charginos, which subsequently annihilate into the primaries. The neutralino and chargino tree-level annihilation rates in the MSSM have been derived analytically in [START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF], and including v2 -corrections in [START_REF] Hellmann | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos II. P-wave and next-to-next-to-leading order S-wave coefficients[END_REF], in the form of matrices, where the off-diagonal entries refer to the interference of the short-distance annihilation amplitudes of different neutralino/chargino two-particle states into the same final state. For the present analysis the annihilation matrices have been generalized to vectors of matrices, such that the components of the vector refer to the 31 exclusive final states. The large number of different exclusive final states can be implemented without an increase in the CPU time for the computation relative to the inclusive case.
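Schematically, the bookkeeping just described can be pictured as follows; this is only an illustrative sketch of the data layout (the channel list, the number of two-particle states and the contraction with the Sommerfeld-corrected amplitudes are placeholders, not the actual implementation of the code referenced above):

```python
import numpy as np

# Illustrative only: a vector of annihilation matrices, one per exclusive
# two-body final state I, indexed by the coupled neutralino/chargino
# two-particle states that mix into the LSP-LSP channel.
FINAL_STATES = ["W+W-", "ZZ", "Zgamma", "gammagamma", "tt", "bb", "h0h0"]  # ... 31 in total
N_TWO_PARTICLE_STATES = 4  # e.g. chi0 chi0, chi+ chi-, ...; placeholder value

Gamma = {I: np.zeros((N_TWO_PARTICLE_STATES, N_TWO_PARTICLE_STATES), dtype=complex)
         for I in FINAL_STATES}

def exclusive_sigma_v(psi, Gamma):
    """Schematic contraction: psi collects the Sommerfeld-corrected amplitudes
    of the two-particle states (computed once, independently of the final
    state); each Gamma[I] is the short-distance annihilation matrix of
    channel I.  Returns a dictionary of per-channel sigma*v values."""
    return {I: float(np.real(np.conj(psi) @ G @ psi)) for I, G in Gamma.items()}
```

Because the wave functions do not depend on the final state, adding exclusive channels costs essentially nothing beyond filling more matrices, which is the point made above.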
Since the information about the exclusive annihilation rates only enters through the (short-distance) annihilation matrices, the two-particle wave-functions that account for the (long-distance) Sommerfeld corrections only need to be computed once. In contrast, since the v²-corrections to the annihilation of DM in the present Universe are very small, they can be neglected, which results in a significant reduction in the time needed to compute the annihilation matrices. 2 It further suffices to compute the present-day annihilation cross section for a single dark matter velocity, and we choose v = 10⁻³ c. The reason for this choice is that the Sommerfeld effect saturates for very small velocities, and the velocity dependence is negligible for velocities smaller than 10⁻³ c. The energy spectrum dN f /dx of a stable particle f at production per DM annihilation can be written as
\frac{dN_f}{dx} = \sum_I \mathrm{Br}_I \,\frac{dN_{I\to f}}{dx}\,, \qquad (1)
where x = E f /m LSP , and dN I→f /dx represents the contribution from each two-body primary final state I with branching fraction Br I to the spectrum of f after the decay, fragmentation and hadronization processes have taken place. We compute Br I from our MSSM Sommerfeld code as described above and use the tables for dN I→f /dx provided with the PPPC4DMID code [START_REF] Cirelli | PPPC 4 DM ID: A Poor Particle Physicist Cookbook for Dark Matter Indirect Detection[END_REF], which include the leading logarithmic electroweak corrections through the electroweak fragmentation functions [START_REF] Ciafaloni | Weak Corrections are Relevant for Dark Matter Indirect Detection[END_REF]. Two comments regarding the use of the spectra provided by the PPPC4DMID code are in order. The code only considers primary pairs I of a particle together with its antiparticle, both assumed to have the same energy spectrum. For wino-like DM there exist primary final states with different species, i.e. I = ij with j = ī, such as Zγ and Zh 0 . In this case, we compute the final number of particles f produced from that channel as one half of the sum of those produced by channels I = i ī and I = j j. This is justified, since the fragmentation of particles i and j is independent. A second caveat concerns the heavy MSSM Higgs bosons that can be produced for sufficiently heavy neutralinos. These are not considered to be primary channels in the PPPC4DMID code, which only deals with SM particles. A proper treatment of these primaries would first account for the decay modes of the heavy Higgs bosons, and then consider the fragmentation and hadronization of the SM multi-particle final state in an event generator. Instead of a full treatment, we replace the charged Higgs H ± by a longitudinal-polarized W ± -boson, and the neutral heavy Higgses H 0 , A 0 by the light Higgs h 0 when computing the spectra in x. This approximation is not very well justified. However, the branching ratios of the dominantly-wino neutralino to final states with heavy Higgses are strongly suppressed, and we could equally have set them to zero without a noticeable effect on our results.
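A minimal sketch of how (1) and the two caveats above could be assembled in practice is shown below; the table interface dN_table and the channel labels are hypothetical stand-ins for interpolations of the PPPC4DMID tables, not their actual API:

```python
# crude mapping of heavy MSSM Higgs primaries onto SM particles, as described above
SUBSTITUTE = {"H+": "WL", "H-": "WL", "H0": "h", "A0": "h"}  # hypothetical labels

def spectrum_per_annihilation(x, branching, dN_table):
    """dN_f/dx per annihilation, cf. Eq. (1).

    branching : dict {(i, j): Br_I}; a channel is labelled by its two primary
                species, e.g. ("W", "W") for W+W- or ("Z", "gamma") for Z gamma
    dN_table  : dict {i: callable}; PPPC4DMID-style spectrum of the stable
                particle f from the self-conjugate pair i i-bar, as a function of x
    """
    dNdx = 0.0
    for (i, j), Br in branching.items():
        i, j = SUBSTITUTE.get(i, i), SUBSTITUTE.get(j, j)
        if i == j:
            dNdx += Br * dN_table[i](x)
        else:
            # mixed channels (Z gamma, Z h, ...): the two legs fragment
            # independently, so take half of (i i-bar) plus half of (j j-bar)
            dNdx += Br * 0.5 * (dN_table[i](x) + dN_table[j](x))
    return dNdx
```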
The branching fractions of primary final states obtained from our code are shown in the left panel of Fig. 2 as a function of the Higgsino fraction for a wino-like LSP with 2 TeV mass. The pure wino annihilates mostly to W + W -and to a lesser extent to other pairs of gauge bosons, including the loop-induced photon final state, which is generated by the Sommerfeld correction. The annihilation to fermions is helicity or p-wave suppressed. The suppression is lifted only for the t t final state as the Higgsino admixture increases, in which case this final state becomes the second most important. Except for this channel, the dominant branching fractions are largely independent of the Higgsino fraction. The annihilation to W + W -is always dominant and above 75%.
The final spectra of photons, positrons and antiprotons per annihilation at production for small (solid lines) and large (dashed lines) Higgsino mixing are plotted in the right panel of Fig. 2. The spectra in these two extreme cases are very similar, because W + W - is the dominant primary final state largely independent of the wino-Higgsino composition, and also the numbers of final stable particles produced by the sub-dominant primary channels do not differ significantly from each other. The inset in the right-hand plot shows that the relative change between the mixed and pure wino case varies from about +40% to about -40% over the considered energy range. Concerning the variation with respect to the DM mass, the most important change is in the total annihilation cross section, not in the spectra dN f /dx. The branching ratios Br I to primaries depend on the LSP mass in the TeV regime only through the Sommerfeld corrections, which can change the relative size of the different channels. However, since for wino-like neutralinos annihilation into W + W - dominates the sum over I in (1), the dependence of the final spectra on m LSP is very mild.
Figure 2: Left: Branching fractions of the primary annihilation final states as a function of the Higgsino fraction for a wino-like LSP with 2 TeV mass (M 2 = 2 TeV, M 1 = 4.02 TeV, M sf = 30 TeV, tanβ = 15) [START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF]. Right: Comparison of p̄, e + and gamma-ray spectra per annihilation at production of a 50% mixed wino-Higgsino (dashed) to the pure-wino (solid) model, as a function of E k [GeV]. The gamma-line component is not shown. In the inset at the bottom of the plot the relative differences between the two spectra are shown.

3 Indirect and direct searches

In this section we discuss our strategy for determining the constraints on mixed-wino dark matter from various indirect searches. While the analysis follows that for the pure wino [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF], here we focus on the most relevant search channels: the diffuse gamma-ray emission from dSphs, antiprotons and positron CRs, and the CMB. Moreover, since we consider wino-like DM with a possibly significant Higgsino admixture, we implement the direct detection constraints as well.
3.1 Charged cosmic rays
3.1.1 Propagation
The propagation of charged CRs in the Galaxy is best described within the diffusion model with possible inclusion of convection. In this framework the general propagation equation takes the form [17]
\frac{\partial N_i}{\partial t} - \nabla\cdot\left(D_{xx}\nabla - \mathbf{v}_c\right)N_i + \frac{\partial}{\partial p}\left[\dot{p} - \frac{p}{3}\,\nabla\cdot\mathbf{v}_c\right]N_i - \frac{\partial}{\partial p}\,p^2 D_{pp}\,\frac{\partial}{\partial p}\,\frac{N_i}{p^2} = Q_i(p,r,z) + \sum_{j>i} c\beta\, n_\mathrm{gas}(r,z)\,\sigma_{ij} N_j - c\beta\, n_\mathrm{gas}(r,z)\,\sigma_\mathrm{in} N_i - \sum_{j<i}\frac{N_i}{\tau_{i\to j}} + \sum_{j>i}\frac{N_j}{\tau_{j\to i}}\,, \qquad (2)
where N i (p, r, z) is the number density of the i-th particle species with momentum p and corresponding velocity v = cβ, written in cylindrical coordinates (r, z), σ_in the inelastic scattering cross section, σ ij the production cross section of species i by the fragmentation of species j, and τ i→j , τ j→i are the lifetimes related to decays of i and production from heavier species j, respectively. We solve (2) with the help of the DRAGON code [START_REF] Evoli | Cosmic-Ray Nuclei, Antiprotons and Gamma-rays in the Galaxy: a New Diffusion Model[END_REF], assuming cylindrical symmetry and no convection, v c = 0. With the galacto-centric radius r, the height from the Galactic disk z and rigidity R = pc/Ze, we adopt the following form of the spatial diffusion coefficient:
D_{xx}(R,r,z) = D_0\,\beta^{\eta}\left(\frac{R}{R_0}\right)^{\delta} e^{|z|/z_d}\, e^{(r-r_\odot)/r_d}\,. \qquad (3)
The momentum-space diffusion coefficient, also referred to as reacceleration, is related to it via D_{pp} D_{xx} = p^2 v_A^2/9, where the Alfvén velocity v A represents the characteristic velocity of a magnetohydrodynamic wave. The free parameters are the normalization D 0 , the spectral indices η and δ, the parameters setting the radial scale r d and thickness z d of the diffusion zone, and finally v A . We fix the normalization at R 0 = 3 GV. The diffusion coefficient is assumed to grow with r, as the large scale galactic magnetic field gets weaker far away from the galactic center.
The source term is assumed to have the form
Q_i(R,r,z) = f_i(r,z)\left(\frac{R}{R_i}\right)^{-\gamma_i}, \qquad (4)
where f i (r, z) parametrizes the spatial distribution of supernova remnants normalized at R i , and γ i is the injection spectral index for species i. For protons and Helium we modify the source term to accommodate two breaks in the power-law, as strongly indicated by observations. Leptons lose energy very efficiently, thus those which are very energetic need to be very local, while we do not observe nor expect many local sources of TeV scale leptons. This motivates multiplying (4) by an additional exponential cut-off in energy, e^{-E/E_c}, with E c set to 50 TeV for electron and positron injection spectra. We employ the gas distribution n gas derived in [START_REF] Tavakoli | Three Dimensional Distribution of Atomic Hydrogen in the Milky Way[END_REF][START_REF] Pohl | 3D Distribution of Molecular Gas in the Barred Milky Way[END_REF] and adopt the standard force-field approximation [START_REF] Gleeson | Solar Modulation of Galactic Cosmic Rays[END_REF] to describe the effect of solar modulation. The modulation potential is assumed to be a free parameter of the fit and is allowed to be different for different CR species.
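For orientation, the ingredients entering the fit can be written down compactly; the sketch below implements the diffusion coefficient of (3), the D_pp-D_xx relation and the source spectrum of (4) with the leptonic cut-off, using placeholder parameter values rather than the fitted ones of Table 1:

```python
import numpy as np

# Placeholder values for illustration only; the fitted values are those of Table 1.
D0, DELTA, ETA, R0 = 2.0e28, 0.5, 1.0, 3.0     # cm^2/s, -, -, GV
Z_D, R_D, R_SUN    = 4.0, 20.0, 8.3            # kpc
V_A                = 20.0                       # km/s
E_CUT              = 50.0e3                     # GeV, lepton injection cut-off

def D_xx(R, r, z, beta=1.0):
    """Spatial diffusion coefficient of Eq. (3)."""
    return D0 * beta**ETA * (R / R0)**DELTA * np.exp(abs(z) / Z_D) * np.exp((r - R_SUN) / R_D)

def D_pp(R, r, z, p, beta=1.0):
    """Reacceleration coefficient, from D_pp * D_xx = p^2 v_A^2 / 9 (schematic units)."""
    return p**2 * V_A**2 / (9.0 * D_xx(R, r, z, beta))

def Q(R, f_spatial, R_i, gamma_i, E=None):
    """Source term of Eq. (4); the exponential cut-off is applied only to the
    electron/positron injection spectra (pass their energy E in GeV)."""
    q = f_spatial * (R / R_i)**(-gamma_i)
    if E is not None:
        q *= np.exp(-E / E_CUT)
    return q
```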
3.1.2 Background models
In [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF] 11 benchmark propagation models with varying diffusion zone thickness, from z d = 1 kpc to z d = 20 kpc, were identified by fitting to the B/C, proton, Helium, electron and e + + e - data. Since then the AMS-02 experiment provided CR spectra with unprecedented precision, which necessitates modifications of the above benchmark models. Following the same procedure as in [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF] we choose three representative models, which give a reasonable fit to the AMS-02 data, denoted Thin, Med and Thick, corresponding to the previous z d = 1 kpc, z d = 4 kpc and z d = 10 kpc models. 3 The relevant parameters are given in Table 1. In Fig. 3 we show the fit to the B/C and the AMS-02 proton data [START_REF] Oliva | AMS results on light nuclei: Measurement of the cosmic rays boron-to-carbon ration with AMS-02[END_REF][START_REF] Haino | Precision measurement of he flux with AMS[END_REF][START_REF] Aguilar | Precision Measurement of the Proton Flux in Primary Cosmic Rays from Rigidity 1 GV to 1.8 TV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF] and superimpose the older data from PAMELA [START_REF] Adriani | PAMELA Measurements of Cosmic-ray Proton and Helium Spectra[END_REF][START_REF] Adriani | Measurement of boron and carbon fluxes in cosmic rays with the PAMELA experiment[END_REF]. In all these cases, as well as for the lepton data [START_REF] Accardo | High Statistics Measurement of the Positron Fraction in Primary Cosmic Rays of 0.5-500 GeV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF][START_REF] Aguilar | Precision Measurement of the (e + + e -) Flux in Primary Cosmic Rays from 0.5 GeV to 1 TeV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF], the measurements used in the fits were from AMS-02 results only.

Table 1: Benchmark propagation models. The radial length is always r d = 20 kpc and convection is neglected, v c = 0. The second break in the proton injection spectra is at 300 GV. For primary electrons we use a broken power-law with spectral indices 1.6/2.65 and a break at 7 GV, while for heavier nuclei we assumed one power-law with index 2.25. R^i_{0,1} refer to the positions of the first and second break, respectively, and γ^i_{1,2,3} to the power-law in the three regions separated by the two breaks. The propagation parameters were obtained by fitting to B/C, proton and He data and cross-checked with antiproton data, while the primary electrons were obtained from the measured electron flux. (Columns: model; z d [kpc]; δ; D 0 /10^28 [cm² s⁻¹]; v A [km s⁻¹]; η; γ^p_1/γ^p_2/γ^p_3; R^p_{0,1} [GV]; γ^He_1/γ^He_2/γ^He_3; R^He_{0,1} [GV].)
In the fit we additionally assumed that the normalization of the secondary CR antiprotons can freely vary by 10% with respect to the result given by the DRAGON code. This is motivated by the uncertainty in the antiproton production cross sections. The impact of this and other uncertainties has been studied in detail in e.g. [START_REF] Kappl | AMS-02 Antiprotons Reloaded[END_REF][START_REF] Evoli | Secondary antiprotons as a Galactic Dark Matter probe[END_REF][START_REF] Giesen | AMS-02 antiprotons, at last! Secondary astrophysical component and immediate implications for Dark Matter[END_REF].
As we will show below, the DM contribution to the lepton spectra is of much less importance for constraining the parameter space of our interest, therefore, we do not discuss the lepton backgrounds explicitly. All the details of the implementation of the lepton limits closely follow [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF], updated to the published AMS-02 data [START_REF] Accardo | High Statistics Measurement of the Positron Fraction in Primary Cosmic Rays of 0.5-500 GeV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF][START_REF] Aguilar | Precision Measurement of the (e + + e -) Flux in Primary Cosmic Rays from 0.5 GeV to 1 TeV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF].
3 [...] models were optimized for pre-AMS data and are based on a semi-analytic diffusion model. Since we rely on the full numerical solution of the diffusion equation, we follow the benchmark models of [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF]. This comes at the expense of no guarantee that the chosen models really provide the minimal and maximal number of antiprotons. However, as in this work we are not interested in setting precise limits from antiproton data, we consider this approach as adequate.

Figure 3: Comparison of the benchmark propagation models: B/C (left) and protons (right). The fit was performed exclusively to the AMS-02 [START_REF] Oliva | AMS results on light nuclei: Measurement of the cosmic rays boron-to-carbon ration with AMS-02[END_REF][START_REF] Haino | Precision measurement of he flux with AMS[END_REF][START_REF] Aguilar | Precision Measurement of the Proton Flux in Primary Cosmic Rays from Rigidity 1 GV to 1.8 TV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF] measurements, while the other data sets are shown only for comparison: PAMELA [START_REF] Adriani | PAMELA Measurements of Cosmic-ray Proton and Helium Spectra[END_REF][START_REF] Adriani | Measurement of boron and carbon fluxes in cosmic rays with the PAMELA experiment[END_REF], HEAO-3 [START_REF] Engelmann | Charge composition and energy spectra of cosmic-ray for elements from Be to NI -Results from HEAO-3-C2[END_REF], CREAM [START_REF] Ahn | Measurements of cosmic-ray secondary nuclei at high energies with the first flight of the CREAM balloon-borne experiment[END_REF], CRN [START_REF] Swordy | Relative abundances of secondary and primary cosmic rays at high energies[END_REF], ACE [START_REF] George | Elemental composition and energy spectra of galactic cosmic rays during solar cycle 23[END_REF].

3.2 Diffuse gamma-rays from dSphs

Recently the Fermi-LAT and MAGIC collaborations released limits from the combination of their stacked analyses of 15 dwarf spheroidal galaxies [START_REF] Ahnen | Limits to dark matter annihilation cross-section from a combined analysis of MAGIC and Fermi-LAT observations of dwarf satellite galaxies[END_REF]. Here we use the results of this analysis to constrain the parameter space of the mixed wino-Higgsino neutralino. To this end we compute all exclusive annihilation cross sections for present-day DM annihilation in the halo and take a weighted average of the limits provided by the experimental collaborations. As discussed in Section 2.2, the TeV scale wino-like neutralino annihilates predominantly into W + W - , ZZ and t t̄, with much smaller rates into leptons and the lighter quarks. In the results from [START_REF] Ahnen | Limits to dark matter annihilation cross-section from a combined analysis of MAGIC and Fermi-LAT observations of dwarf satellite galaxies[END_REF] only the W + W - , b b̄, µ + µ - and τ + τ - final states are given. However, as the predicted spectrum and number of photons from a single annihilation is not significantly different for the hadronic or leptonic final states, we adopt the approximation that the limits from annihilation into ZZ are the same as from W + W - , while those from t t̄ and c c̄ are the same as from b b̄. The differences in the number of photons produced between these annihilation channels in the relevant energy range are at most of order O(20%) for W + W - vs. ZZ and t t̄ vs. b b̄. Comparing b b̄ to light quarks these can rise up to a factor of 2; however, due to helicity suppression these channels have negligible branching fractions. Hence, the adopted approximation is expected to be very good, and the corresponding uncertainty is significantly smaller than that related to the astrophysical properties of the dSphs (parametrised by the J-factors).
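One simple way to implement the channel mapping and the weighted combination of the published per-channel limits is sketched below; the branching-fraction-weighted harmonic mean shown here is an assumption about the precise averaging prescription, and the per-channel limit functions are placeholders for interpolations of the published curves:

```python
# Map exclusive channels onto the final states for which limits are published.
CHANNEL_MAP = {"WW": "WW", "ZZ": "WW", "tt": "bb", "cc": "bb",
               "bb": "bb", "mumu": "mumu", "tautau": "tautau"}

def combined_dsph_limit(mass, branching, published_limits):
    """Effective <sigma v> limit for a mixed final-state composition.

    branching        : dict {channel: Br} for the dominant channels
    published_limits : dict {channel: callable}; per-channel 95% CL limits on
                       <sigma v> as a function of the DM mass

    A branching-fraction-weighted harmonic mean is used: the model is excluded
    once sum_I Br_I * <sigma v> / limit_I(mass) exceeds one.
    """
    inverse = sum(Br / published_limits[CHANNEL_MAP.get(ch, "bb")](mass)
                  for ch, Br in branching.items())
    return 1.0 / inverse
```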
3.3 CMB constraints
The annihilation of dark matter at times around recombination can affect the recombination history of the Universe by injecting energy into the pre-recombination photon-baryon plasma and into the post-recombination gas and background radiation, which has consequences for the power and polarization spectra of the CMB [START_REF] Padmanabhan | Detecting dark matter annihilation with CMB polarization: Signatures and experimental prospects[END_REF][START_REF] Galli | CMB constraints on Dark Matter models with large annihilation cross-section[END_REF][START_REF] Slatyer | CMB Constraints on WIMP Annihilation: Energy Absorption During the Recombination Epoch[END_REF]. In particular, it can result in the attenuation of the temperature and polarization power spectra, more so on smaller scales, and in a shift of the TE and EE peaks. These effects can be traced back to the increased ionization fraction and baryon temperature, resulting in a broadening of the surface of last scattering, which suppresses perturbations on scales less than the width of this surface. Therefore the CMB temperature and polarization angular power spectra can be used to infer upper bounds on the annihilation cross section of dark mat-ter into a certain final state for a given mass. When Majorana dark matter particles annihilate, the rate at which energy E is released per unit volume V can be written as
\frac{dE}{dt\,dV}(z) = \rho^2_\mathrm{crit}\,\Omega^2\,(1+z)^6\, p_\mathrm{ann}(z) \qquad (5)
where ρ crit is the critical density of the Universe today, and experiment provides constraints on p ann (z), which describes the effects of the DM. These effects are found to be well enough accounted for when the z dependence of p ann (z) is neglected, such that a limit is obtained for the constant p ann . The latest 95% C.L. upper limit on p ann was obtained by Planck [START_REF] Ade | Planck 2015 results. XIII. Cosmological parameters[END_REF], and we adopt their most significant limit 3.4 • 10 -28 cm 3 s -1 GeV -1 from the combination of TT, TE, EE + lowP + lensing data. The constant p ann can further be expressed via
p_\mathrm{ann} = \frac{1}{M_\chi}\, f_\mathrm{eff}\, \sigma v\,, \qquad (6)
where f eff , parametrizing the fraction of the rest mass energy that is injected into the plasma or gas, must then be calculated in order to extract bounds on the DM annihilation cross section in the recombination era. In our analysis, for f eff we use the quantities f I eff,new from [START_REF] Madhavacheril | Current Dark Matter Annihilation Constraints from CMB and Low-Redshift Data[END_REF] for a given primary annihilation channel I. We then extract the upper limit on the annihilation cross section at the time of recombination by performing a weighted average over the contributing annihilation channels, as done for the indirect detection limits discussed in Section 3.2. As the Sommerfeld effect saturates before this time, σv at recombination is the same as the present-day cross section. In the future the cross section bound can be improved by almost an order of magnitude, until p ann is ultimately limited by cosmic variance.
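Combining (5) and (6) with the Planck limit, the bound on the annihilation cross section follows by a one-line inversion; a minimal sketch (the channel-wise f_eff values are placeholders for the tabulated ones):

```python
P_ANN_LIMIT = 3.4e-28   # cm^3 s^-1 GeV^-1, Planck TT,TE,EE+lowP+lensing, 95% CL

def cmb_sigma_v_limit(m_chi, branching, f_eff_per_channel):
    """Upper limit on sigma*v from p_ann = f_eff * sigma*v / M_chi, Eq. (6).

    m_chi             : DM mass in GeV
    branching         : dict {channel: Br}
    f_eff_per_channel : dict {channel: f_eff}, channel-wise efficiencies
                        (placeholders for the tabulated values)
    """
    f_eff = sum(Br * f_eff_per_channel[ch] for ch, Br in branching.items())
    # Since the Sommerfeld effect has saturated long before recombination,
    # this bound applies directly to the present-day cross section.
    return P_ANN_LIMIT * m_chi / f_eff   # cm^3/s
```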
3.4 Direct detection
Direct detection experiments probe the interaction of the dark matter particle with nucleons. For the parameter space of interest here, the bounds on spin-independent interactions, sensitive to the t-channel exchange of the Higgs bosons and to s-channel sfermion exchange are more constraining than those on spin-dependent interactions. The coupling of the lightest neutralino to a Higgs boson requires both a Higgsino and gaugino component, and is therefore dependent on the mixing. Note that the relative size of the Higgs Yukawa couplings means that the contribution due to the Higgs coupling to strange quarks dominates the result.
In the pure-wino limit, when the sfermions are decoupled and the coupling to the Higgs bosons vanishes, the direct detection constraints are very weak as the elastic scattering takes place only at the loop level [START_REF] Hisano | Direct Detection of Electroweak-Interacting Dark Matter[END_REF]. Allowing for a Higgsino admixture and/or non-decoupled sfermions introduces tree-level scattering processes mediated by Higgs or sfermion exchange. Direct detection experiments have recently reached the sensitivity needed to measure such low scattering cross sections and with the new data released by the LUX [START_REF] Akerib | Results from a search for dark matter in LUX with 332 live days of exposure[END_REF] and PandaX [START_REF] Tan | Dark Matter Results from First 98.7-day Data of PandaX-II Experiment[END_REF] collaborations, a portion of the discussed parameter space is now being probed.
In the analysis below we adopt the LUX limits [START_REF] Akerib | Results from a search for dark matter in LUX with 332 live days of exposure[END_REF], being the strongest in the neutralino mass range we consider. In order to be conservative, in addition to the limit presented by the collaboration we consider a weaker limit by multiplying by a factor of two. This factor two takes into account the two dominant uncertainties affecting the spin-independent cross section, i.e. the local relic density of dark matter and the strange quark content of the nucleon. The former, ρ = 0.3 ± 0.1 GeV/cm 3 , results in an uncertainty of 50% [START_REF] Bovy | On the local dark matter density[END_REF] and the latter result contributes an uncertainty on the cross section of about 20% [START_REF] Dürr | Lattice computation of the nucleon scalar quark contents at the physical point[END_REF], which on combination result in weakening the bounds by a factor of two (denoted as ×2 on the plots). For the computation of the spin-independent scattering cross section for every model point we use micrOMEGAs [START_REF] Belanger | Indirect search for dark matter with micrOMEGAs2.4[END_REF][START_REF] Belanger | micrOMEGAs 3: A program for calculating dark matter observables[END_REF]. Note that the Sommerfeld effect does not influence this computation and the tree-level result is expected to be accurate enough.
Since only mixed Higgsino-gaugino neutralinos couple to Higgs bosons, the limits are sensitive to the parameters affecting the mixing. To be precise, for the case that the bino is decoupled (|M 1 | ≫ M 2 , |µ|) and |µ| - M 2 ≫ m Z , the couplings of the Higgs bosons h, H to the lightest neutralino are proportional to
c_h = m_Z c_W\, \frac{M_2 + \mu\sin 2\beta}{\mu^2 - M_2^2}\,, \qquad c_H = -\,m_Z c_W\, \frac{\mu\cos 2\beta}{\mu^2 - M_2^2}\,, \qquad (7)
where c W ≡ cos θ W , and it is further assumed that M A is heavy such that c h,H can be computed in the decoupling limit cos(α -β) → 0. When tan β increases, the light Higgs coupling c h decreases for µ > 0 and increases for µ < 0. On the other hand the coupling c H increases in magnitude with tan β for both µ > 0 and µ < 0, but is positive when µ > 0 and negative for µ < 0. In addition, in the decoupling limit the coupling of the light Higgs to down-type quarks is SM-like, and the heavy Higgses couple to down-type quarks proportionally to tan β. The sfermion contribution is dominated by the gauge coupling of the wino-like component neutralino to the sfermion and the quarks. We remark that for the parameter range under consideration there is destructive interference between the amplitude for the Higgs and sfermion-exchange diagrams for µ > 0, and for µ < 0 when [49]
\frac{m_H^2\,(1 - 2/t_\beta)}{m_h^2} < t_\beta\,, \qquad (8)
provided M 2 < |µ| and t β ≡ tan β ≫ 1. For these cases lower values of the sfermion masses reduce the scattering cross section.
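For orientation, the couplings (7) and the condition (8) are straightforward to evaluate numerically; the following sketch does so (overall normalizations and numerical inputs are illustrative only):

```python
import math

M_Z = 91.19                      # GeV
COS_W = math.sqrt(1.0 - 0.2312)  # cos(theta_W) from sin^2(theta_W) ~ 0.2312

def higgs_couplings(M2, mu, tan_beta):
    """Lightest-neutralino couplings to h and H of Eq. (7), valid for a
    decoupled bino and |mu| - M2 >> m_Z (overall factors omitted)."""
    beta = math.atan(tan_beta)
    denom = mu**2 - M2**2
    c_h = M_Z * COS_W * (M2 + mu * math.sin(2 * beta)) / denom
    c_H = -M_Z * COS_W * mu * math.cos(2 * beta) / denom
    return c_h, c_H

def destructive_interference(mu, M_H, tan_beta, m_h=125.0):
    """Sign criterion discussed above: always for mu > 0; for mu < 0 only if
    the condition (8) holds."""
    if mu > 0:
        return True
    return M_H**2 * (1.0 - 2.0 / tan_beta) / m_h**2 < tan_beta

# Example: for tan(beta) = 15 condition (8) is satisfied up to M_H of roughly
# 520 GeV, consistent with the ~500 GeV quoted in the text below.
```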
Figure 4: Limits from the LUX data in the M 2 vs. |µ| - M 2 plane for different choices of t β , M A and M sf . Where not stated, the parameter choices correspond to those for the black line. The area below the lines is excluded. The left panel shows the case of µ > 0, while the right that of µ < 0.

In Fig. 4 we show the resulting limits from LUX data in the |µ| - M 2 vs. M 2 plane for different choices of t β , M A , M sf , and the sign of µ. The above discussion allows us to understand the following trends observed:

• On decreasing t β and M A the direct detection bound becomes stronger for positive µ and weaker for negative µ. Note that for µ < 0 the cross section decreases, and the bound weakens, due to the destructive interference between the h and H contributions as the relative sign between the couplings c h and c H changes.
• The direct detection bound weakens for less decoupled sfermions when there is destructive interference between the t-channel Higgs-exchange and s-channel sfermionexchange diagrams. This always occurs for µ > 0, while for µ < 0 one requires small heavy Higgs masses. For instance, for t β = 15 the maximum value of M A giving destructive interference is slightly above 500 GeV, while for t β = 30 one needs M A < 700 GeV.
Since we consider a point in the |µ|-M 2 vs. M 2 plane to be excluded only if it is excluded for any (allowed) value of the other MSSM parameters, this means that the bounds from direct detection experiments are weakest for µ < 0 in combination with low values of M sf , M A and tan β, and for µ > 0 in combination with high values of M A and tan β but low values of M sf .
4 Results: indirect detection and CMB limits
In this section we first determine the region of the |µ|-M 2 vs. M 2 plane which satisfies the relic density constraint and is allowed by the gamma-ray limits from dwarf spheroidals, the positron limits from AMS-02, and the CMB limits. 4 We also determine the regions preferred by fits to AMS-02 antiproton results. Over a large part of the considered |µ| -M 2 vs. M 2 plane, the observed relic density can be obtained for some value of the sfermion masses and other MSSM parameters. For the remaining region of the plane, where the relic density constraint is not fulfilled for thermally produced neutralino dark matter, we consider both, the case where the dark matter density is that observed throughout the plane, in which case it cannot be produced thermally, and the case where it is always thermally produced, for which the neutralino relic density does not always agree with that observed, and the limits must be rescaled for each point in the plane by the relic density calculated accordingly. That the neutralino dark matter is not thermally produced, or that it only constitutes a part of the total dark matter density are both viable possibilities.
We then consider various slices through this plane for fixed values of |µ| -M 2 , and show the calculated present-day annihilation cross section as a function of M 2 ∼ m χ 0 1 together with the same limits and preferred regions as above, both for the case that the limits are and are not rescaled according to the thermal relic density.
4.1 Limits on mixed-wino DM
In this section we present our results on the limits from indirect searches for wino-like DM in the MSSM, assuming the relic density is as observed. That is, for most parameter points the DM must be produced non-thermally or an additional mechanism for late entropy production is at play. We show each of the considered indirect search channels separately in the |µ| -M 2 vs. M 2 plane (including both µ > 0 and µ < 0), superimposing on this the contours of the correct relic density for three choices of the sfermion mass. Note that while the indirect detection limits are calculated for M sf = 8 TeV, the effect of the choice of sfermion mass on them is minimal, and therefore we display only the relic density contours for additional values of M sf .
In Fig. 5 we show the exclusions from dSphs, e + , and the CMB separately in the |µ| - M 2 vs. M 2 plane. For the positrons we show two limits, obtained on assuming the Thin and Thick propagation models described in Section 3.1.2. We see that the most relevant exclusions come from the diffuse gamma-ray searches from dSphs. Here we show three lines corresponding to the limit on the cross section assuming the Navarro-Frenk-White profile in dSphs, and rescaling this limit up and down by a factor 2. This is done in order to estimate the effect of the uncertainty in the J-factors. For instance, the recent reassessment [START_REF] Ullio | A critical reassessment of particle Dark Matter limits from dwarf satellites[END_REF] of the J-factor for Ursa Minor inferred from observational data suggests 2 to 4 times smaller limits than those commonly quoted. In order to provide conservative bounds, we adopt the weakest of the three as the reference limit. We then compare (lower right plot) this weakest limit from dSphs to the preferred region obtained on fitting to the AMS-02 antiproton results, showing the results for both Thin and Thick propagation models. 5

Figure 5: Exclusions from the dSph diffuse gamma-rays, positrons and the CMB, and the regions preferred by the antiproton fits, in the M 2 vs. |µ| - M 2 plane.

We find that there are parts of the mixed wino-Higgsino and dominantly wino neutralino parameter space both below and above the Sommerfeld resonance region, where the relic density is as observed and which are compatible with the non-observation of dark matter signals in indirect searches. In the lower right plot of Fig. 5 we see that these further overlap with the regions preferred by fits to the antiproton results. In the smaller region above the resonance, this overlap occurs when the sfermions are decoupled, and hence corresponds to an almost pure-wino case, whereas below the resonance the overlap region is spanned by varying the sfermion masses from 1.25M 2 to being decoupled. The latter region requires substantial Higgsino-mixing of the wino, and extends from M 2 = 1.7 TeV to about 2.5 TeV, thus allowing dominantly-wino dark matter in a significant mass range. Let us comment on the improvement of the fit to the antiproton measurements found for some choices of the parameters. In Fig. 6 we show examples of antiproton-to-proton ratio fits to the data from the background models (left) and including the DM component (right). Although the propagation and antiproton production uncertainties can easily resolve the apparent discrepancy of the background models vs. the observed data [START_REF] Kappl | AMS-02 Antiprotons Reloaded[END_REF][START_REF] Evoli | Secondary antiprotons as a Galactic Dark Matter probe[END_REF][START_REF] Giesen | AMS-02 antiprotons, at last! Secondary astrophysical component and immediate implications for Dark Matter[END_REF], it is nevertheless interesting to observe that the spectral shape of the DM component matches the observed data for viable mixed-wino dark matter particles.

Figure 6: The antiproton-to-proton ratio: background propagation models (left) and comparison of three DM models with relic density within the observational range and assuming the "Med" propagation (right). The shown data is from AMS-02 [START_REF] Kounine | Latest results from the alpha magnetic spectrometer: positron fraction and antiproton/proton ratio, presentation at[END_REF] and PAMELA [START_REF] Adriani | Measurement of the flux of primary cosmic ray antiprotons with energies of 60-MeV to 350-GeV in the PAMELA experiment[END_REF].

4 [...] in our analysis, because for the DM models under consideration, the strongest lepton limits arise from energies below about 100 GeV, in particular from the observed positron fraction (see Fig. 7 of [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF]).

5 The actual analysis was finalized before the recent antiproton results were published [START_REF] Aguilar | Antiproton Flux, Antiproton-to-Proton Flux Ratio, and Properties of Elementary Particle Fluxes in Primary Cosmic Rays Measured with the Alpha Magnetic Spectrometer on the International Space Station[END_REF] and hence was based on earlier data presented by the AMS collaboration [START_REF] Kounine | Latest results from the alpha magnetic spectrometer: positron fraction and antiproton/proton ratio, presentation at[END_REF]. This is expected to have a small effect on the antiproton fit presented in this work, with no significant consequences for the overall results.
4.2 Indirect search constraints on the MSSM parameter space
In this section we present our results for the limits from indirect searches on wino-like DM, assuming the relic density is always thermally produced. In other words, for the standard cosmological model, these constitute the limits on the parameter space of the MSSM, since even if the neutralino does not account for all of the dark matter, its thermal population can give large enough signals to be seen in indirect searches. In this case a parameter-space point is excluded, if
(\sigma v)^0_\mathrm{th} > \left(\frac{\Omega h^2|_\mathrm{obs}}{\Omega h^2|_\mathrm{thermal}}\right)^{2} (\sigma v)^0_\mathrm{exp\,lim} \qquad (9)
where (σv) 0 th is the theoretically predicted present-day cross section and (σv) 0 exp lim the limit quoted by the experiment. This is because the results presented by the experiments assume the DM particle to account for the entire observed relic density. Therefore if one wishes to calculate the limits for dark matter candidates which only account for a fraction of the relic density, one needs to rescale the bounds by the square of the ratio of observed relic density Ωh 2 | obs to the thermal relic density Ωh 2 | thermal . Viewed from another perspective, the results below constitute astrophysical limits on a part of the MSSM parameter space, which is currently inaccessible to collider experiments, with the only assumption that there was no significant entropy production in the early Universe after the DM freeze-out. In Fig. 7, as in the previous subsection, we show the exclusions from dSphs, e + , and the CMB individually in the |µ| -M 2 vs. M 2 plane. The limits are calculated as for Fig. 5. We then compare the weakest limit from dSphs to the preferred region obtained on fitting to the AMS-02 antiproton results, where we show the results for both Thin and Thick propagation models. Again we find that parameter regions exist where the relic density is correct and which are not excluded by indirect searches. The marked difference between the previous and present results is that in Fig. 7 the region of the plots for lower M 2 is not constrained by the indirect searches, because in this region the thermal relic density is well below the measured value and therefore the searches for relic neutralinos are much less sensitive. In the bottom lower plot of Fig. 7 we see that the unconstrained regions overlap with the regions preferred by fits to the antiproton results. While the limits themselves do not depend on the sfermion mass, the thermal relic density does, and therefore the rescaling of the limits via (9) induces a dependence on the sfermion mass. Therefore the intersection of the lines of correct relic density for M sf = 8 TeV with the preferred region from antiproton searches is not meaningful, and we do not show them in the plots.
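The rescaling in (9) amounts to a simple exclusion test; a minimal sketch, with an illustrative value for the observed abundance:

```python
OMEGA_H2_OBS = 0.1188   # observed DM abundance; illustrative Planck-like value

def excluded(sigma_v_today, sigma_v_limit, omega_h2_thermal):
    """Exclusion test of Eq. (9): the published limit, derived assuming the full
    observed relic density, is rescaled by (Omega_obs / Omega_thermal)^2 when
    the thermal neutralino abundance differs from the observed one."""
    rescale = (OMEGA_H2_OBS / omega_h2_thermal)**2
    return sigma_v_today > rescale * sigma_v_limit
```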
Limits on the present-day cross section for fixed |µ| -M 2
In order to understand how the limits and the present-day annihilation cross section depend on the mass of the DM candidate, we take slices of the |µ| - M 2 vs. M 2 plane for fixed values of |µ| - M 2 , and plot (σv)^0 (black) as a function of M 2 , which is approximately equal to the LSP mass m χ 0 1 in the range shown in Figs. 8 and 9. As in Figs. 5 and 7 we show the limits from dSphs (brown), positrons (blue dashed) and the CMB (magenta dot-dashed), along with the preferred regions from antiproton searches (pale green) adopting the Thin and Thick propagation models. We consider three choices of |µ| - M 2 : a very mixed neutralino LSP, |µ| - M 2 = 50 GeV where µ is negative, a mixed case |µ| - M 2 = 220 GeV where µ is positive, and an almost pure-wino scenario, |µ| - M 2 = 1000 GeV. The blue shaded region indicates where the relic density can correspond to the observed value by changing M sf . For Fig. 8 we adopt the unrescaled limit, that is, two sections of Fig. 5. In the case of the very mixed wino-Higgsino shown in the upper panel there is a wide range of neutralino masses for which the black curve lies below the conservative dSphs limit and simultaneously within the range of correct relic density spanned by the variation of the sfermion mass. This is different for the almost pure-wino scenario shown in the lower panel, where only a small mass region survives the requirement that the conservative dSphs limit is respected and the observed relic density is predicted. Moreover, in this mass region the sfermions must be almost decoupled. Fig. 9 shows two cases of mixed wino-Higgsino dark matter, which exhibit similar features, but now for the case of assumed thermal relic density, such that the limits are rescaled according to (9).
It is evident from both figures that for lower values of |µ| - M 2 , larger regions in M 2 can provide both the correct relic density and a present-day cross section below the dSphs bounds. We also see that while the correct relic density can be attained at the Sommerfeld resonance, the mass regions compatible with indirect search constraints typically lie below the Sommerfeld resonance, as was evident from Figs. 5 and 7.
Results: including direct detection limits
We have seen in the previous section that there is a sizeable mixed wino-Higgsino MSSM parameter space where the lightest neutralino has the correct relic abundance and evades indirect detection constraints. A significant Higgsino fraction might, however, be in conflict with the absence of a direct detection signal. In this section we therefore combine the exclusion limits from indirect searches studied in the previous section with those coming from the latest LUX results for direct detection, in order to determine the allowed mixed wino-Higgsino or dominantly-wino dark matter parameter space. To this end we first determine the maximal region in this space that passes relic density and indirect detection limits in the following way. For a given |µ| - M 2 we identify two points in M A , M sf and tan β within the considered parameter ranges, i.e. M A ∈ {0.5 TeV, 10 TeV}, M sf ∈ {1.25 M 2 , 30 TeV} and tan β ∈ {5, 30}, corresponding to maximal and minimal values of M 2 , for which the relic density matches the observed value. Two distinct areas of parameter space arise: the first is larger and corresponds to a mixed wino-Higgsino, whereas the second is narrower and corresponds approximately to the pure wino. The relic density criterion therefore defines one (almost pure wino) or two (mixed wino-Higgsino) sides of the two shaded regions, shown in Figs. 10 and 11, corresponding to the pure and mixed wino. The dSphs limit defines the other side in the almost pure-wino region, while the remaining sides of the mixed wino-Higgsino area are determined by the dSphs limit (upper), the condition |µ| - M 2 = 0, and the antiproton search (the arc on the lower side of the mixed region beginning at M 2 ≈ 1.9 TeV). We recall that we consider the central dSphs limit and those obtained by rescaling up and down by a factor of two; the shading in grey within each region is used to differentiate between these three choices.
Next we consider the exclusion limits in the M 2 vs. |µ|-M 2 plane from the 2016 LUX results, which have been obtained as outlined in Section 3.4. As discussed there, the sign of µ can strongly influence the strength of the direct detection limits and consequently the allowed parameter space for mixed wino-Higgsino DM. We therefore consider the two cases separately.
µ > 0
Out of the two distinct regions described above, the close-to-pure wino and the mixed wino-Higgsino, only the former survives after imposing the direct detection constraints, see Fig. 10. If conservative assumptions are adopted for the direct detection and dSphs limits, a small triangle at the top of the mixed region is still allowed. The fact that the direct detection constraints mainly impact the mixed rather than the pure wino region was discussed in Section 3.4, and is understood from the fact that the Higgs bosons only couple to mixed gaugino-Higgsino neutralinos.
Figure 10: Shaded areas denote the maximal region in the M 2 vs |µ| -M 2 plane for µ > 0 where the relic density is as observed and the limit from dSphs diffuse gamma searches is respected within parameter ranges considered. The darker the grey region, the more stringent is the choice of the bound as described in the text. The grey lines mark the weakest possible limit of the region excluded by the 2016 LUX results and the same limit weakened by a factor of two as indicated. The limit from the previous LUX result is the dotted line. The different bounds are calculated at different parameter sets p1, p2 and p3, as indicated.
Note that the direct detection limits presented on the plot are for the choice of MSSM parameters giving the weakest possible constraints. This is possible because the boundaries of the maximal region allowed by indirect searches do not depend as strongly on the parameters governing the wino-Higgsino mixing as the spin-independent scattering cross section does. The only exceptions are the boundaries of the mixed-wino region, arising from the relic density constraint, which indeed depend strongly on M sf . However, as varying these boundaries does not significantly change the allowed region, since it is mostly in the part excluded by the LUX data, we choose to display the LUX bound for a value of M sf different from that defining these boundaries. Therefore, all in all, the case of the mixed wino-Higgsino with µ > 0 is verging on being excluded by a combination of direct and indirect searches, when imposing that the lightest neutralino accounts for the entire thermally produced dark matter density of the Universe. Note, however, that the small close-to-pure wino region is not affected by direct detection constraints.
Figure 11: Maximal region in the M 2 vs µ -M 2 plane for µ < 0, obtained as in Fig. 10. The limit from the 2016 LUX result weakened by a factor of two is not visible within the ranges considered in the plot. The different bounds are calculated at different parameter sets p1, p2 and p3, as indicated.
µ < 0
When µ < 0 the spin-independent cross section decreases, particularly for smaller values of tan β. This allows for parameter choices with small |µ| - M 2 giving viable neutralino DM, in agreement with the direct detection constraint. Indeed, for appropriate parameter choices the direct detection limits are too weak to constrain any of the relevant regions of the studied parameter space. In particular, the weakest possible limits correspond to M sf = 1.25 M 2 , M A = 0.5 TeV and tan β = 15. Note that for M A = 0.5 TeV a significantly lower value of tan β would be in conflict with constraints from heavy Higgs searches at the LHC. The result of varying M A , M sf and tan β is a sizeable mass region for viable mixed-wino dark matter in the MSSM, ranging from M 2 = 1.6 to 3 TeV, as shown in Fig. 11. The parameter |µ| - M 2 for the Higgsino admixture varies from close to 0 GeV to 210 GeV below the Sommerfeld resonance, and from 200 GeV upwards above, when the most conservative dSphs limit (shown in light grey) is adopted.
We note that in determining the viable mixed-wino parameter region we did not include the diffuse gamma-ray and gamma line data from observations of the Galactic center, since the more conservative assumption of a cored dark matter profile would not provide a further constraint. However, future gamma data, in particular CTA observations of the Galactic center, are expected to increase the sensitivity to the parameter region in question to the extent (cf. [START_REF] Roszkowski | Prospects for dark matter searches in the pMSSM[END_REF]) that either a dominantly-wino neutralino dark matter would be seen, or the entire plane shown in Fig. 11 would be excluded even for a cored profile.
Conclusions
This study was motivated by the wish to delineate the allowed parameter (in particular mass) range for a wino-like dark matter particle in the MSSM, only allowing some mixing with the Higgsino. More generically, this corresponds to the case where the dark matter particle is the lightest state of a heavy electroweak triplet with potentially significant doublet admixture and the presence of a scalar mediator. The Sommerfeld effect is always important in the TeV mass range, where the observed relic density can be attained, and has been included in this study extending previous work [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF][START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos III. Computation of the Sommerfeld enhancements[END_REF][START_REF] Beneke | Heavy neutralino relic abundance with Sommerfeld enhancements -a study of pMSSM scenarios[END_REF]. Our main results are summarized in Figs. 10 and 11, which show the viable parameter region for the dominantly-wino neutralino for the cases µ > 0 and µ < 0, respectively. After imposing the collider and flavour constraints (both very weak), we considered the limits from diffuse gamma-rays from the dwarf spheroidal galaxies (dSphs), galactic cosmic rays and cosmic microwave background anisotropies. We also calculated the antiproton flux in order to compare with the AMS-02 results. The choice of indirect search constraints is influenced by the attitude that the fundamental question of the viability of wino-like dark matter should be answered by adopting conservative assumptions on astrophysical uncertainties. The non-observation of an excess of diffuse gamma-rays from dSphs then provides the strongest limit.
It turns out that in addition to these indirect detection bounds, the direct detection results have a significant impact on the parameter space, particularly for the µ > 0 case where the mixed Higgsino-wino region is almost ruled out as shown in Fig. 10. In the µ < 0 case the limits are weaker as seen in Fig. 11, and a sizeable viable region remains. Note that the region of the |µ|-M 2 vs. M 2 plane constrained by direct detection is complementary to that constrained by indirect detection. Therefore while for µ > 0, (almost) the entire mixed region is ruled out, for µ < 0 there is a part of parameter space where M 2 = 1.7 -2.7 TeV which is in complete agreement with all current experimental constraints.
Let us conclude by commenting on the limits from line and diffuse photon spectra from the Galactic center. If a cusped or mildly cored DM profile were assumed, the H.E.S.S. observations of diffuse gamma emission [START_REF] Collaboration | Search for dark matter annihilations towards the inner Galactic halo from 10 years of observations with H.E.S.S[END_REF] would exclude nearly the entire parameter space considered in this paper, leaving only a very narrow region with close to maximal wino-Higgsino mixing. The limits from searches for a line-like feature [START_REF] Abramowski | Search for Photon-Linelike Signatures from Dark Matter Annihilations with H[END_REF] would be even stronger, leaving no space for mixed-wino neutralino DM. However, a cored DM profile remains a possibility, and hence we did not include the H.E.S.S. results. In other words, adopting a less conservative approach, one would conclude that not only the pure-wino limit of the MSSM, but also the entire parameter region of the dominantly-wino neutralino, even with very large Higgsino or bino admixture, was in strong tension with the indirect searches. Therefore, the forthcoming observations by CTA should either discover a signal of or definitively exclude the dominantly-wino neutralino.
Figure 1: Contours of constant relic density in the M 2 vs. (µ - M 2 ) plane for µ > 0, as computed in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF]. The (green) band indicates the region within 2σ of the observed dark matter abundance. Parameters are as given in the header, and the trilinear couplings are set to A i = 8 TeV for all sfermions except for that of the stop, which is fixed by the Higgs mass value. The black solid line corresponds to the old LUX limit [START_REF] Akerib | First results from the LUX dark matter experiment at the Sanford Underground Research Facility[END_REF] on the spin-independent DM-nucleon cross section, which excludes the shaded area below this line. Relaxing the old LUX limit by a factor of two to account for theoretical uncertainties eliminates the direct detection constraint on the shown parameter space region.
Figure 2: Left: Branching fractions of present-day wino-like neutralino annihilation vs. the Higgsino fraction for decoupled M A and sfermions. |Z 31 |^2 + |Z 41 |^2 refers to the Higgsino fraction of the lightest neutralino in the convention of [START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF]. Right: Comparison of the p̄, e+ and gamma-ray spectra per annihilation at production of a 50% mixed wino-Higgsino (dashed) to the pure-wino (solid) model. The gamma-line component is not shown. In the inset at the bottom of the plot the relative differences between the two spectra are shown.
Figure 4: Direct detection limits for different choices of the MSSM parameters, assuming the neutralino is completely responsible for the measured dark matter density of the Universe. Where not stated, the parameter choices correspond to those for the black line. The area below the lines is excluded. The left panel shows the case of µ > 0, the right that of µ < 0.
Figure 5: Results in the M 2 vs. |µ| - M 2 plane. Left: limits from dSphs (upper) and the CMB (lower). The shaded regions are excluded; different shadings correspond to the DM profile uncertainty. Right: the region excluded by AMS-02 leptons (upper), and the best fit contours for antiprotons (lower), where the green solid lines show the Thin and Thick propagation models, while the dotted lines around them denote the 1σ confidence intervals. Contours where the observed relic density is obtained for the indicated value of the sfermion mass are overlaid.
Figure 6: The antiproton-to-proton ratio: background propagation models (left) and comparison of three DM models with relic density within the observational range, assuming the "Med" propagation (right). The data shown are from AMS-02 [START_REF] Kounine | Latest results from the alpha magnetic spectrometer: positron fraction and antiproton/proton ratio, presentation at[END_REF] and PAMELA [START_REF] Adriani | Measurement of the flux of primary cosmic ray antiprotons with energies of 60-MeV to 350-GeV in the PAMELA experiment[END_REF].
Figure 7: Results in the M 2 vs. |µ| - M 2 plane for the case where the limits are rescaled according to the thermal relic density for a given point in the plane. Details are as in Fig. 5.
Figure 8: The predicted present-day annihilation cross section (σv)^0 (black) is shown as a function of M 2 ∼ m χ 0 1 for the Higgsino admixture |µ| - M 2 as indicated. This is compared with exclusion limits from dSphs (brown), positrons (blue dashed) and the CMB (magenta dot-dashed), along with the preferred regions from antiproton searches (pale green) adopting the Thin and Thick models. We also show the dSphs exclusion limits multiplied and divided by 2 (brown), the weaker of which is the thicker line. The observed relic density is assumed. The blue shaded region indicates where the relic density can correspond to the observed value by changing M sf .
Figure 9: As in Fig. 8, but the thermal relic density is assumed and the limits are rescaled according to (9). Note the different value of |µ| - M 2 in the lower plot compared to the previous figure. The black-dashed vertical line indicates where the relic density is equal to that observed for the sfermion mass value M sf = 8 TeV.
Allowing for significant bino admixture leads to other potentially interesting, though smaller regions, as described in[START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF].
Since we also computed the relic density for every parameter point, which requires including the v 2 -corrections, we did not make use of this simplification in the present analysis.
We loosely follow here the widely adopted MIN, MED, MAX philosophy[START_REF] Donato | Antiprotons in cosmic rays from neutralino annihilation[END_REF], choosing models with as large variation in the DM-originated antiproton flux as possible. However, the MIN, MED, MAX
For the combined e+ + e- flux several earlier observations provide data extending to higher energies than the AMS-02 experiment, though with much larger uncertainties. We do not include these data in our analysis.
Moving the lower limit M A = 500 GeV to 800 GeV would result in a barely noticeable change to the boundaries marked by p2.
Acknowledgements
We thank A. Ibarra for comments on the manuscript, and A. Goudelis and V. Rentala for helpful discussions. This work is supported in part by the Gottfried Wilhelm Leibniz programme of the Deutsche Forschungsgemeinschaft (DFG) and the Excellence Cluster "Origin and Structure of the Universe" at Technische Universität München. AH is supported by the University of Oslo through the Strategic Dark Matter Initiative (SDI). We further gratefully acknowledge that part of this work was performed using the facilities of the Computational Center for Particle and Astrophysics (C2PAP) of the Excellence Cluster. | 77,102 | [
"6994"
] | [
"132871",
"179898",
"407859",
"50791",
"132871",
"6747"
] |
01767476 | en | [
"info"
] | 2024/03/05 22:32:15 | 2016 | https://inria.hal.science/hal-01767476/file/433330_1_En_13_Chapter.pdf | Peter Csaba Ölveczky
Formalizing and Validating the P-Store Replicated Data Store in Maude
P-Store is a well-known partially replicated transactional data store that combines wide-area replication, data partition, some fault tolerance, serializability, and limited use of atomic multicast. In addition, a number of recent data store designs can be seen as extensions of P-Store. This paper describes the formalization and formal analysis of P-Store using the rewriting logic framework Maude. As part of this work, this paper specifies group communication commitment and defines an abstract Maude model of atomic multicast, both of which are key building blocks in many data store designs. Maude model checking analysis uncovered a non-trivial error in P-Store; this paper also formalizes a correction of P-Store whose analysis did not uncover any flaw.
Introduction
Large cloud applications-such as Google search, Gmail, Facebook, Dropbox, eBay, online banking, and card payment processing-are expected to be available continuously, even under peak load, congestion in parts of the network, server failures, and during scheduled hardware or software upgrades. Such applications also typically manage huge amounts of (potentially important user) data. To achieve the desired availability, the data must be replicated across geographically distributed sites, and to achieve the desired scalability and elasticity, the data store may have to be partitioned across multiple partitions.
Designing and validating cloud storage systems are hard, as the design must take into account wide-area asynchronous communication, concurrency, and fault tolerance. The use of formal methods during the design and validation of cloud storage systems has therefore been advocated recently [START_REF] Newcombe | How Amazon Web Services uses formal methods[END_REF][START_REF] Ölveczky | Design and validation of cloud computing data stores using formal methods[END_REF]. In [START_REF] Newcombe | How Amazon Web Services uses formal methods[END_REF], engineers at the world's largest cloud computing provider, Amazon Web Services, describe the use of TLA+ during the development of key parts of Amazon's cloud infrastructure, and conclude that the use of formal methods at Amazon has been a success. They report, for example, that: (i) "formal methods find bugs in system designs that cannot be found though any other technique we know of"; (ii) "formal methods [...] give good return on investment"; (iii) "formal methods are routinely applied to the design of complex real-world software, including public cloud services"; (iv) formal methods can analyze "extremely rare" combination of events, which the engineer cannot do, as "there are too many scenarios to imagine"; and (v) formal methods allowed Amazon to "devise aggressive optimizations to complex algorithms without sacrificing quality."
This paper describes the application of the rewriting-logic-based Maude language and tool [START_REF] Clavel | All About Maude[END_REF] to formally specify and analyze the P-Store data store [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF]. P-Store is a well-known partially replicated transactional data store that provides both serializability and some fault tolerance (e.g., transactions can be validated even when some nodes participating in the validation are down).
Members of the University of Illinois Center for Assured Cloud Computing have used Maude to formally specify and analyze complex industrial cloud storage systems such as Google's Megastore and Apache Cassandra [START_REF] Grov | Formal modeling and analysis of Google's Megastore in Real-Time Maude[END_REF][START_REF] Liu | Formal modeling and analysis of Cassandra in Maude[END_REF]. Why is formalizing and analyzing P-Store interesting? First, P-Store is a well-known data store design in its own right with many good properties that combines widearea replication, data partition, some fault tolerance, serializability, and limited use of atomic multicast. Second, a number of recent data store designs can be seen as extensions and variations of P-Store [START_REF] Sovran | Transactional storage for georeplicated systems[END_REF][START_REF] Ardekani | Non-monotonic snapshot isolation: Scalable and strong consistency for geo-replicated transactional systems[END_REF][START_REF] Ardekani | G-DUR: a middleware for assembling, analyzing, and improving transactional protocols[END_REF]. Third, it uses atomic multicast to order concurrent transactions. Fourth, it uses "group communication" for atomic commit. The point is that both atomic multicast and group communication commit are key building blocks in cloud storage systems (see, e.g., [START_REF] Ardekani | G-DUR: a middleware for assembling, analyzing, and improving transactional protocols[END_REF]) that have not been formalized in previous work. Indeed, one of the main contributions of this paper is an abstract Maude model of atomic multicast that allows any possible ordering of message reception consistent with atomic multicast.
I have modeled (both versions of) P-Store, and performed model checking analysis on small system configurations. Maude analysis uncovered some significant errors in the supposedly-verified P-Store algorithm, like read-only transactions never getting validated in certain cases. An author of the original P-Store paper [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] confirmed that I had indeed found a nontrivial mistake in their algorithm and suggested a way of correcting the mistake. Maude analysis of the corrected algorithm did not find any error. I also found that a key assumption was missing from the paper, and that an important definition was very easy to misunderstand because of how it was phrased in English. All this emphasizes the need for a formal specification and formal analysis in addition to the standard prose-and-pseudo-code descriptions and informal correctness proofs.
The rest of the paper is organized as follows. Section 2 gives a background on Maude. Section 3 defines an abstract Maude model of the atomic multicast "communication primitive." Section 4 gives an overview of P-Store. Sections 5 and 6 present the Maude model and the Maude analysis, respectively, of P-Store, and Section 7 describes a corrected version of P-Store. Section 8 discusses some related work, and Section 9 gives some concluding remarks.
Due to space limitations, only parts of the specifications and analyses are given. I refer to the longer report [START_REF] Ölveczky | Formalizing and validating the P-Store replicated data store in Maude[END_REF] for more details. Furthermore, the executable Maude specifications of P-Store, together with analysis commands, are available at http://folk.uio.no/peterol/WADT16.
Preliminaries: Maude
Maude [START_REF] Clavel | All About Maude[END_REF] is a rewriting-logic-based formal language and simulation and model checking tool. A Maude module specifies a rewrite theory (Σ, E ∪ A, R), where:
-Σ is an algebraic signature; that is, a set of declarations of sorts, subsorts, and function symbols.
-(Σ, E ∪ A) is a membership equational logic theory, with E a set of possibly conditional equations and membership axioms, and A a set of equational axioms such as associativity, commutativity, and identity. The theory (Σ, E ∪ A) specifies the system's state space as an algebraic data type.
-R is a set of labeled conditional rewrite rules l : t → t' if u_1 = v_1 ∧ ... ∧ u_m = v_m specifying the system's local transitions. The rules are universally quantified by the variables in the terms, and are applied modulo the equations E ∪ A.

I briefly summarize the syntax of Maude and refer to [START_REF] Clavel | All About Maude[END_REF] for more details. Operators are introduced with the op keyword: op f : s 1 . . . s n -> s. They can have user-definable syntax, with underbars '_' marking the argument positions, and equational attributes, such as assoc, comm, and id, stating, for example, that the operator is associative and commutative and has a certain identity element. Equations and rewrite rules are introduced with, respectively, keywords eq, or ceq for conditional equations, and rl and crl. The mathematical variables in such statements are declared with the keywords var and vars, or can be introduced on the fly having the form var:sort. An equation f(t 1 , . . . , t n ) = t with the owise ("otherwise") attribute can be applied to a term f(. . .) only if no other equation with left-hand side f(u 1 , . . . , u n ) can be applied. A class declaration

class C | att 1 : s 1 , ... , att n : s n .
declares a class C with attributes att 1 to att n of sorts s 1 to s n . An object of class C is represented as a term < O : C | att 1 : val 1 , ..., att n : val n > of sort Object, where O, of sort Oid, is the object's identifier, and where val 1 to val n are the current values of the attributes att 1 to att n . A message is a term of sort Msg.
The state is a term of the sort Configuration, and is a multiset made up of objects and messages. Multiset union for configurations is denoted by a juxtaposition operator (empty syntax) that is declared associative and commutative, so that rewriting is multiset rewriting supported directly in Maude.
The dynamic behavior of concurrent object systems is axiomatized by specifying each of its transition patterns by a rewrite rule; a sketch of a typical such rule, in which an object consumes a message, updates its state, and sends a new message, is given below. A subclass inherits all the attributes and rules of its superclasses.
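The following is a minimal sketch of such a rule; the class, message, and attribute names are purely illustrative and are not taken from the P-Store specification:

  rl [l] : m(O, w)
           < O : C | a1 : x, a2 : O', a3 : z >
        =>
           < O : C | a1 : x + w, a2 : O', a3 : z >
           m'(O', x) .

This rule can be applied whenever the configuration contains an object O of class C together with a message m(O, w) addressed to it; the message is then consumed, the attribute a1 of O is incremented by w, and a new message m'(O', x) is sent to the object O' stored in O's a2 attribute.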
Formal Analysis in Maude. A Maude module is executable under some conditions, such as the equations being confluent and terminating, modulo the structural axioms, and the theory being coherent [START_REF] Clavel | All About Maude[END_REF]. Maude provides a range of analysis methods, including simulation for prototyping, search for reachability analysis, and LTL model checking. This paper uses Maude's search command
(search [[n]] t0 =>* pattern [such that cond ] .)
which uses a breadth-first strategy to search for at most n states that are reachable from the initial state t 0 , match the pattern pattern (a term with variables), and satisfy the (optional) condition cond . If '[n]' is omitted, then Maude searches for all solutions. If the arrow '=>!' is used instead of '=>*', then Maude searches for final states; i.e., states that cannot be further rewritten.
Atomic Multicast in Maude
Messages that are atomically multicast from (possibly) different nodes in a distributed system must be read in (pairwise) the same order: if nodes n 3 and n 4 both receive the atomically multicast messages m 1 and m 2 , they must receive (more precisely: "be served") m 1 and m 2 in the same order. Note that m 2 may be read before m 1 even if m 2 is atomically multicast after m 1 . Atomic multicast is typically used to order events in a distributed system. In distributed data stores like P-Store, atomic multicast is used to order (possibly conflicting) concurrent transactions: When a node has finished its local execution of a transaction, it atomically multicasts a validation request to other nodes (to check whether the transaction can commit). The validation requests therefore impose an order on concurrent transactions.
Atomic multicast does not necessarily provide a global order of all events. If each of the messages m 1 , m 2 , and m 3 is atomically multicast to two of the receivers A, B, and C, then A can read m 1 before m 2 , B can read m 2 before m 3 , and C can read m 3 before m 1 . These reads satisfy the pairwise total order requirement of atomic multicast, since there is no conflict between any pair of receivers. Nevertheless, atomic multicast has failed to globally order the messages m 1 , m 2 , and m 3 . If atomic multicast is used to impose something resembling a global order (e.g., on transactions), it should also satisfy the following uniform acyclic order property: the relation < on (atomic-multicast) messages is acyclic, where m < m holds if there exists a node that reads m before m . Atomic multicast is an important concept in distributed systems, and there are a number of well-known algorithms for achieving atomic multicast [START_REF] Guerraoui | Genuine atomic multicast in asynchronous distributed systems[END_REF]. To model P-Store, which uses atomic multicast, I could of course formalize a specific algorithm for atomic multicast and include it in a model of P-Store. Such a solution would, however, have a number of disadvantages, including:
1. Messy non-modular specifications. Atomic multicast algorithms involve some complexity, including maintaining Lamport clocks during system execution, keeping buffers of received messages that cannot be served, and so on. This solution could also easily yield a messy non-modular specification that fails to separate the specification of P-Store from that of atomic multicast. 2. Increased state space. Running a distributed algorithm concurrently with P-Store would also lead to much larger state spaces during model checking analyses, since also the states generated by the rewrites involving the atomic multicast algorithm would contribute to new states. 3. Lack of generality. Implementing a particular atomic multicast algorithm might exclude behaviors possible with other algorithms. That would mean that model checking analysis might not cover all possible behaviors of P-Store, but only those possible with the selected atomic multicast algorithm.
I therefore instead define, for each of the two "versions" of atomic multicast, a general atomic multicast primitive, which allows all possible ways of reading messages that are consistent with the selected version of atomic multicast. In particular, such a solution will not add states during model checking analysis.
Atomic Multicast in Maude: "User Interface"
To define an atomic multicast primitive, the system maintains a "table" of read and sent-but-unread atomic-multicast messages for each node. This table must be consulted before reading an atomic-multicast message, to check whether it can be read/served already, and must be updated when the message is read.
The "user interface" of my atomic multicast "primitive" is as follows:
-Atomically multicasting a message. A node n that wants to atomically multicast a message m to a set of nodes {n 1 , . . . , n k } just "sends" the "message" atomic-multicast m from n to n 1 ... n k , using the wrapper described in the next subsection.
-Reading an atomically multicast message. Before a node reads/serves such a message, the function okToRead must be consulted to check that reading the message now is consistent with atomic multicast, and the global table must be updated (using the function update) when the message is read.
-The user must add the term [emptyAME] (denoting the "empty" atomic multicast table) to the initial state.
Maude Specification of Atomic Multicast
To keep track of atomic-multicast messages sent and received, the table contains, for each node, a record with the list of atomic-multicast messages the node has already read and the set of such messages that have been multicast to the node but are not yet read. The "wrapper" used for atomic multicast takes as arguments the message (content), the sender's identifier, and the (identifiers of the) set of receivers. A rewrite rule "distributes" such an atomic-multicast msg from o to o 1 ... o n message by (1) "dissolving" the above multicast message into a set of messages, one for each receiver, and (2) recording msg as unread in each receiver's record in the table. The update function, which updates the atomic-multicast table when O reads a message MC, just moves MC from the set of unread messages to the end of the list of read messages in O's record in the table.
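The following is a minimal sketch of how this bookkeeping could be declared; the record syntax [_|_|_], the sorts MsgCont, MsgContList, and AM-Entry, and the wrapper [_] are my assumptions and need not coincide with the actual specification (only AM-Table, MsgContSet, emptyAME, okToRead, and update are names taken from the surrounding text):

  --- one record per object: the messages already read (a list, in reading order)
  --- and the messages multicast to the object but not yet read (a set)
  sorts MsgCont MsgContList MsgContSet AM-Entry AM-Table .
  subsort MsgCont < MsgContList .   subsort MsgCont < MsgContSet .
  op nil : -> MsgContList [ctor] .
  op __ : MsgContList MsgContList -> MsgContList [ctor assoc id: nil] .
  op noMsg : -> MsgContSet [ctor] .
  op _;_ : MsgContSet MsgContSet -> MsgContSet [ctor assoc comm id: noMsg] .
  op [_|_|_] : Oid MsgContList MsgContSet -> AM-Entry [ctor] .
  subsort AM-Entry < AM-Table .
  op emptyAME : -> AM-Table [ctor] .
  op __ : AM-Table AM-Table -> AM-Table [ctor assoc comm id: emptyAME] .
  op [_] : AM-Table -> Msg [ctor] .  --- wrapper placing the table in the configuration

  var O : Oid .   var MC : MsgCont .   var ML : MsgContList .
  var MCS : MsgContSet .   var AM-TABLE : AM-Table .

  --- when O reads MC, move MC from O's unread set to the end of O's read list
  op update : MsgCont Oid AM-Table -> AM-Table .
  eq update(MC, O, [O | ML | MC ; MCS] AM-TABLE) = [O | ML MC | MCS] AM-TABLE .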
The expression okToRead(mc, o, amTable) is used to check whether the object o can read the atomic-multicast message mc with the given global atomic-multicast table amTable. The function okToRead is defined differently depending on whether atomic multicast must satisfy the uniform acyclic order requirement.
okToRead for Pairwise Total Order Atomic Multicast. The equations defining okToRead first characterize the cases when the message cannot be read; the last equation uses Maude's owise construct to specify that the message can be read in all other cases. In the first such case, O wants to read MC, and its AM-entry shows that O has not read message MC2; however, another object O2 has already read MC2 before MC, which implies that O cannot read MC. In the second case, some object O2 has read MC2 and has MC in its set of unread atomic-multicast messages, which implies that O cannot read MC yet (it must read MC2 first). A sketch of these equations is given below.
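The sketch below expresses these two cases, using the table representation assumed in the previous sketch (so the entry syntax, and hence the exact patterns, are my assumptions; the additional variables MC2, O2, ML2, ML3, ML4, and MCS2 are assumed to be declared with the obvious sorts):

  op okToRead : MsgCont Oid AM-Table -> Bool .

  --- O still has MC2 unread, but O2 has already read MC2 before reading MC,
  --- so O would violate the pairwise order by reading MC now:
  eq okToRead(MC, O, [O | ML | MC ; MC2 ; MCS]
                     [O2 | ML2 MC2 ML3 MC ML4 | MCS2] AM-TABLE) = false .

  --- O2 has read MC2 and still has MC unread, so O2 will read MC2 before MC;
  --- O, which has both MC and MC2 unread, must therefore read MC2 first:
  eq okToRead(MC, O, [O | ML | MC ; MC2 ; MCS]
                     [O2 | ML2 MC2 ML3 | MC ; MCS2] AM-TABLE) = false .

  --- in all other cases the message can be read:
  eq okToRead(MC, O, AM-TABLE) = true [owise] .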
okToRead for Uniform Acyclic Order Atomic Multicast. To define atomic multicast which satisfies the uniform acyclic order requirement, the above definition must be generalized to consider the induced relation < instead of pairwise reads. The above definition checks whether a node o can read a message m 1 by checking whether it has some other unread message m 2 pending such that reading m 1 before m 2 would conflict with the m 1 /m 2 -reading order of another node. This happens if another node has read m 2 before reading m 1 , or if it has read m 2 and has m 1 pending (which implies that eventually, that object would read m 2 before m 1 ). In the more complex uniform acyclic order setting, that solution must be generalized to check whether reading m 1 before any other pending message m 2 would violate the current or the (necessary) future "global order." That is, has m 1 been read, or must it eventually be read, after m 2 somewhere else? If so, node o obviously cannot read m 1 at the moment.
The function receivedAfter takes a set of messages and the global AM-table as arguments, and computes the <*-closure of the original set of messages; i.e., the messages that cannot be read before the original set of messages. It is defined by two equations that extend the closure, plus an owise equation. In the first case, there is a message MC in the current set of messages in the closure; furthermore, the global atomic-multicast table shows that some node O2 has read MC2 right after reading MC, and MC2 is not yet in the closure. Therefore, MC2 is added to the closure.
op receivedAfter : MsgContSet AM-Table -> MsgContSet .
In the second case, there is a message MC in the closure; furthermore, some object O2 has already read MC. This implies that all unread messages MCS2 of O2 must eventually be read after MC, and hence they are added to the closure. Finally, the current set is returned when it cannot be extended:
eq receivedAfter(MCS, AM-TABLE) = MCS [owise] .
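For concreteness, the two extending equations described above, together with the resulting definition of okToRead described next, could look roughly as follows; this is only a sketch, reusing the entry syntax assumed in the earlier sketches and assuming membership (_in_) and inclusion (_subset_) tests on message sets:

  --- some node O2 has read MC2 right after MC, and MC2 is not yet in the closure:
  ceq receivedAfter((MC ; MCS), [O2 | ML MC MC2 ML2 | MCS2] AM-TABLE)
    = receivedAfter((MC ; MC2 ; MCS), [O2 | ML MC MC2 ML2 | MCS2] AM-TABLE)
    if not (MC2 in (MC ; MCS)) .

  --- some node O2 has already read MC, so all of O2's unread messages MCS2
  --- must eventually be read after MC and are added to the closure:
  ceq receivedAfter((MC ; MCS), [O2 | ML MC ML2 | MCS2] AM-TABLE)
    = receivedAfter((MC ; MCS ; MCS2), [O2 | ML MC ML2 | MCS2] AM-TABLE)
    if not (MCS2 subset (MC ; MCS)) .

  --- O can read the pending message MC if MC is not forced to be read after
  --- any of O's other pending messages MCS:
  eq okToRead(MC, O, [O | ML | MC ; MCS] AM-TABLE)
   = not (MC in receivedAfter(MCS, [O | ML | MC ; MCS] AM-TABLE)) .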
The function okToRead can then be defined as expected: O can read the pending message MC if MC is not (forced to be) read after any other pending message (in the set MCS). I have model-checked both specifications of atomic multicast on a number of scenarios and found no deadlocks or inconsistent multicast read orders.
P-Store
P-Store [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] is a partially replicated data store for wide-area networks developed by Schiper, Sutra, and Pedone that provides transactions with serializability. P-Store executes transactions optimistically: the execution of a transaction T at site s (which may involve remote reads of data items not replicated at s) proceeds without worrying about conflicting concurrent transactions at other sites. After the transaction T has finished executing, a certification process is executed to check whether or not the transaction T was in conflict with a concurrent transaction elsewhere, in which case T might have to be aborted. More precisely, in the certification phase the site s atomically multicasts a request to certify T to all sites storing data accessed by T . These sites then perform a voting procedure to decide whether T can commit or has to be aborted.
P-Store has a number of attractive features: (i) it is a genuine protocol: only the sites replicating data items accessed by a transaction T are involved in the certification of T ; and (ii) P-Store uses atomic multicast at most once per transaction. Another issue in the certification phase: in principle, the sites certify the transactions in the order in which the certification requests are read. However, if for some reason the certification of the first transaction in a site's certification queue takes a long time (maybe because other sites involved in the voting are still certifying other transactions), then the certification of the next transaction in line will be delayed accordingly, leading to the dreaded convoy effect. P-Store has an "advanced" version that tries to mitigate this problem by allowing a site to start the certification also of other transactions in its certification queue, as long as they are not in a possible conflict with "older" transactions in that queue.
The authors of [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] claim that they have proved the P-Store algorithm correct.
P-Store in Detail
This section summarizes the description of P-Store in [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF].
System Model and Assumptions. A database is a set of triples (k, v, ts), where k is a key, v its value, and ts its time stamp. Each site holds a partial copy of the database, with Items(s) denoting the keys replicated at site s. I do not consider failures in this paper (as failure treatment is not described in the algorithms in [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF]). A transaction T is a sequence of read and write operations, and is executed locally at site proxy(T ). Items(T ) is the set of keys read or written by T ; WReplicas(T ) and Replicas(T ) denote the sites replicating a key written, respectively read or written, by T . A transaction T "is local iff for any site s in Replicas(T ), Items(T ) ⊆ Items(s); otherwise, T is global." Each site ensures order-preserving serializability of its local executions of transactions. As already mentioned, P-Store assumes access to an atomic multicast service that guarantees uniform acyclic order.
Executing a Transaction. While a transaction T is executing (at site proxy(T )), a read on key k is executed at some site that stores k; k and the item time stamp ts read are stored as a pair (k, ts) in T 's read set T.rs. Every write of value v to key k is stored as a pair (k, v) in T 's set of updates T.up. If T reads a key that was previously updated by T , the corresponding value in T.up is returned.
When T has finished executing, it can be committed immediately if T is read-only and local. Otherwise, we need to run the certification protocol, which also propagates T 's updates to the other (write-) replicating sites.
If the certification process, described next, decides that T can commit, all sites in WReplicas(T ) apply T 's updates. In any case, proxy(T ) is notified about the outcome (commit or abort) of the certification. Certification Phase. When T is submitted for certification, T is atomically multicast to all sites storing keys read (to check for stale reads) or written (to propagate the updates) by T . When a site s reads such a request, it checks whether the values read by T are up-to-date by comparing their versions against those currently stored in the database. If they are the same, T passes the certification test; otherwise T fails at s. The site s may not replicate all keys read by T and therefore may not be able to certify T . In this case there is a voting phase where each site s replicating keys read by T sends the result of its local certification test to all sites s w replicating a key written by T . A site s w can decide on T 's outcome when it has received (positive) votes from a voting quorum for T , i.e., a set of sites that together replicate all keys read by T . If some site votes "no," the transaction must be aborted. The pseudo-code description of this certification algorithm in [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] is shown in Fig. 1.
As already mentioned, a site does not start the certification of another transaction until it is done certifying the first transaction in its certification queue. To avoid the convoy effect that this can lead to, the paper [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] also describes a version of P-Store where different transactions in a site's certification queue can be certified concurrently as long as they do not read-write conflict.
Formalizing P-Store in Maude
I have formalized both versions of P-Store (i.e., with and without sites initiating multiple concurrent certifications) in Maude, and present parts of the formalization of the simpler version. The executable specifications of both versions, with analysis commands, are available at http://folk.uio.no/peterol/WADT16, and the longer report [START_REF] Ölveczky | Formalizing and validating the P-Store replicated data store in Maude[END_REF] provides more detail.
Class Declarations
Transactions. Although the actual values of keys in the databases are sometimes ignored during analysis of distributed data stores, I choose for purposes of illustration to represent the concrete values of keys (or data items). This should not add new states that would slow down the model checking analysis.
A transaction (sometimes also called a transaction request) is modeled as an object of a class Transaction, whose declaration is sketched after this paragraph. The operations attribute denotes the list of read and write operations that remain to be executed. Such an operation is either a read operation x := read k, where x is a "local variable" that stores the value of the (data item with) key k read by the operation, or a write operation write(k, expr ), where expr in our case is a simple arithmetic expression involving the transaction's local variables. waitRemote(k, x) is an "internal operation" denoting that the transaction execution is awaiting the value of a key k (to be assigned to the local variable x) which is not replicated by the transaction's proxy. An operation list is a list of such operations, with list concatenation denoted by juxtaposition. destination denotes the (identity of the) proxy of the transaction; that is, the site that should execute the transaction. The readSet attribute denotes the ','-separated set of pairs versionRead(k, version), each such pair denoting that the transaction has read version version of the key k. The writeSet attribute denotes the write set of the transaction as a map (k 1 |-> val 1 ), ..., (k n |-> val n ). The status attribute denotes the commit state of the transaction, which is either commit, abort, or undecided. Finally, localVars is a map from the transaction's local variables to their current values.
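A declaration along the following lines would capture these attributes; the attribute names are those described in this paragraph, while the sorts of the attribute values (except Oid) are my guesses and may differ from those used in the actual specification:

  class Transaction | operations : OperationList,  destination : Oid,
                      readSet : ReadSet,           writeSet : WriteSet,
                      status : CommitStatus,       localVars : LocalVars .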
Replicas. A replicating site (or site or replica) stores parts of the database, executes the transactions for which it is the proxy, and takes part in the certification of other transactions. A replica is formalized as an object instance of a subclass Replica, whose declaration is sketched after this paragraph. The datastore attribute represents the replica's local database as a set < key 1 , val 1, ver 1 > , . . . , < key l , val l , ver l > of triples < key i , val i, ver i > denoting a version of the data item with key key i , value val i , and version number ver i . The attributes executing, submitted, committed, and aborted denote the transactions executed by the replica and which are/have been, respectively, currently executing, submitted for certification, committed, and aborted. The queue holds the certification queue of transactions to be certified by the replica (in collaboration with other replicas). transToCertify contains data used for the certification of the first element in the certification queue (in the simpler algorithm), and decidedTranses shows the status (aborted/committed) of the transactions that have previously been (partly) certified by the replica.
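A rough sketch of the corresponding class declaration could be the following; again, the attribute sorts are my guesses, and the superclass from which Replica inherits is not shown here:

  class Replica | datastore : DataStore,           executing : Configuration,
                  submitted : Configuration,       committed : Configuration,
                  aborted : Configuration,         queue : CertificationQueue,
                  transToCertify : CertifyRecord,  decidedTranses : DecidedTransSet .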
Clients. Finally, I add an "interface/application layer" to the P-Store specification in the form of clients that send transactions to be executed by P-Store:
class Client | txns : ObjectList, pendingTrans : TransIdSet .
txns denotes the list of transaction (objects) that the client wants P-Store to execute, and pendingTrans is either the empty set or (the identity of) the transaction the client has submitted to P-Store but whose execution is not yet finished.
Initial State.
The following shows an initial state init4 (with some parts replaced by '...') used in the analysis of P-Store. This system has: two clients, c1 and c2, that want P-Store to execute the two transactions t1 and t2; three replicating sites, r1, r2, and r3; and three data items/keys x, y, and z. Transaction t1 wants to execute the operations (xl :=read x) (yl :=read y) at replica r1, while transaction t2 wants to execute write(y, 5) write(x, 8) at replica r2. The initial state also contains the empty atomic multicast table and the table which assigns to each key the sites replicating this key. Initially, all keys hold the same initial value and have version 1. Site r2 replicates both x and y.
Local Execution of a Transaction
The execution of a transaction has two phases. In the first phase, the transaction is executed locally by its proxy: the transaction performs its reads and writes, but the database is not updated; instead, the reads are recorded in the transaction's read set, and its updates are stored in the writeSet attribute.
The second phase is the certification (or validation) phase, when all appropriate nodes together decide whether or not the transaction can be committed or must be aborted. If it can be committed, the replicas update their databases.
This section specifies the first phase, which starts when a client without pending transactions sends its next transaction to its proxy. I do not show the variable declarations (see [START_REF] Ölveczky | Formalizing and validating the P-Store replicated data store in Maude[END_REF]), but follow the convention that variables are written with (all) capital letters. P-Store assumes that the local executions of multiple transactions on a site are equivalent to some serialized executions. I model this assumption by executing the transactions one-by-one; therefore, a replica can only receive a transaction request if its set of currently executing transactions is empty (none). There are three cases to consider when executing a read operation X :=read K: (i) the transaction has already written to key K; (ii) the transaction has not written K and the proxy replicates K; or (iii) the transaction has not written K and the proxy does not replicate K. I only show the specification for case (i). I do not know what version number should be associated to the read, and I choose not to add the item to the read set. (The paper [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] does not describe what to do in this case; the problem disappears if we make the common assumption that a transaction always reads a key before updating it.) As an effect, the local variable X gets the value V. Write operations are easy: evaluate the expression EXPR to write and add the update to the transaction's writeSet; sketches of such rules are given below.
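For instance, rules for case (i) of a read and for a write operation could look roughly as follows; the attribute layout follows the class sketches above, insert denotes the usual map update, and eval is an assumed function evaluating an expression in the transaction's local variables:

  --- case (i): the transaction reads a key K it has already written; the value V
  --- from its own write set is returned and no read-set entry is added
  rl [readOwnWrite] :
     < RID : Replica | executing :
          < TID : Transaction | operations : (X :=read K) OPLIST,
                                writeSet : (K |-> V, WS), localVars : VARS > >
   =>
     < RID : Replica | executing :
          < TID : Transaction | operations : OPLIST,
                                writeSet : (K |-> V, WS),
                                localVars : insert(X, V, VARS) > > .

  --- a write: evaluate EXPR in the local variables and record the update in the
  --- transaction's write set (the replica's datastore is not touched)
  rl [executeWrite] :
     < RID : Replica | executing :
          < TID : Transaction | operations : write(K, EXPR) OPLIST,
                                writeSet : WS, localVars : VARS > >
   =>
     < RID : Replica | executing :
          < TID : Transaction | operations : OPLIST,
                                writeSet : insert(K, eval(EXPR, VARS), WS) > > .

The datastore is only updated later, if and when the transaction passes certification and commits.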
Certification Phase
When all the transaction's operations have been executed by the proxy, the proxy's next step is to try to commit the transaction. If the transaction is read-only and local, it can be committed directly; otherwise, it must be submitted to the certification protocol. Some colleagues and I found the definition of local in [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] (and quoted in Section 4) to be quite ambiguous. We thought that "for any site s in Replicas(T ), Items(T ) ⊆ Items(s)" means either "for each site s . . . " or that proxy(T ) replicates all items in T . The first author of [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF], Nicolas Schiper, told me that it actually means "for some s . . . ."
If the transaction T cannot be committed immediately, it is submitted for certification by atomically multicasting a certification request-with the transaction's identity TID, read set RS, and write set WS-to all replicas storing keys read or updated by T (lines 9-10 in Fig. 1). According to lines 7-8 in Fig. 1, a replica's local certification succeeds if, for each key in the transaction's read set that is replicated by the replica in question, the transaction read the same version stored by the replica:

  op certificationOk : ReadSet DataStore -> Bool .
  eq certificationOk((versionRead(K, VERSION) , READSET), (< K, V, VERSION2 > , DS))
   = (VERSION == VERSION2) and certificationOk(READSET, (< K, V, VERSION2 > , DS)) .
  eq certificationOk(RS, DS) = true [owise] .
If the transaction to certify is not local, the certifying sites must together decide whether or not the transaction can be committed. Each certifying site therefore checks whether the transaction passes the local certification test, and sends the outcome of this test to the other certifying sites (lines 13 and 19-22). If the local certification fails, the site sends an abort vote to the other write replicas and also notifies the proxy of the outcome. Otherwise, the site sends a commit vote to all other sites replicating an item written by the transaction. The voting phase ends when there is a voting quorum; that is, when the voting sites together replicate all keys read by the transaction. This means that a certifying site must keep track of the votes received during the certification of a transaction. The set of sites from which the site has received a (positive) vote is the fourth parameter of the certify record it maintains for each transaction. If a site receives a positive vote, it stores the sender of the vote (see Fig. 1). If a site receives a negative vote, it decides the fate of the transaction and notifies the proxy if it replicates an item written by the transaction (lines 28-29).
If a write replica has received positive votes from a voting quorum (lines 23-27 and 29), the transaction can be committed, and the write replica applies the updates and notifies the proxy. The rule modeling the behavior when a site has received votes from a voting quorum RIDS for transaction TID therefore applies the transaction's updates, records the decision, and sends the outcome to the proxy. Finally, the proxy of transaction TID receives the outcome from one or more sites in TID's certification set and notifies the client (the abort case is similar).
6 Formal Analysis of P-Store
In the absence of failures, P-Store is supposed to guarantee serializability of the committed transactions, and that a decision (commit/abort) is made on all transactions.
To analyze P-Store, I search for all final states (i.e., states that cannot be further rewritten) reachable from a given initial state, and inspect the result. This analysis therefore also discovers undesired deadlocks. In the future, I should instead automatically check serializability, possibly using the techniques in [START_REF] Grov | Formal modeling and analysis of Google's Megastore in Real-Time Maude[END_REF], which add to the state a "serialization graph" that is updated whenever a transaction commits, and then check whether the graph has cycles.
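For concreteness, a cycle check on such a serialization graph could look as follows in Python; this is only a sketch of the general technique, not the implementation used in the cited work, and the graph representation is our own:

def is_serializable(edges, transactions):
    # Depth-first search for a cycle in the conflict ("serialization") graph;
    # edges maps a transaction to the transactions that must come after it.
    WHITE, GREY, BLACK = 0, 1, 2
    color = {t: WHITE for t in transactions}
    def visit(t):
        color[t] = GREY
        for u in edges.get(t, ()):
            if color[u] == GREY or (color[u] == WHITE and visit(u)):
                return True          # found a cycle
        color[t] = BLACK
        return False
    return not any(color[t] == WHITE and visit(t) for t in transactions)

print(is_serializable({"t1": {"t2"}}, ["t1", "t2"]))                 # True
print(is_serializable({"t1": {"t2"}, "t2": {"t1"}}, ["t1", "t2"]))   # False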
The search for final states reachable from state init4 in Section 5.1 yields a state which shows that t1's proxy is not notified about the outcome of the certification (see [START_REF] Ölveczky | Formalizing and validating the P-Store replicated data store in Maude[END_REF] for details). The problem seems to be line 29 in the algorithm in Fig. 1: only sites replicating items written by transaction T (WReplicas(T )) send the outcome of the certification to T 's proxy. It is therefore not surprising that the outcome of the read-only transaction t1 does not reach t1's proxy.
The transactions in init4 are local. What about non-local transactions? The initial state init5 is the same as init4 in Section 5.1, except that item y is only replicated at site r3, which means that t1 and t2 become non-local transactions.
Searching for final states reachable from init5 shows a result where the certification process cannot reach a decision on the outcome of transaction t1: the fate of t1 is not decided, and both r2 and r3 are stuck in their certification process. The problem seems to be lines 22 and 23 in the P-Store certification algorithm: why are only write replicas involved in sending and receiving votes during the certification? Shouldn't both read and write replicas vote? Otherwise, it is impossible to certify non-local read-only transactions, such as t1 in init5.
Analysis of the Updated Specification. I have analyzed the corrected specification on five small initial configurations (3 sites, 3 data items, 2 transactions, 4 operations). All the final states were correct: the committed transactions were indeed serializable.
The Advanced Algorithm. I have also specified and successfully analyzed the (corrected) version of P-Store where multiple transactions can be certified concurrently. It is beyond the scope of this paper to describe that specification.
Related Work
Different communication forms/primitives have been defined in Maude, including wireless broadcast that takes into account the geographic location of nodes and the transmission strength/radius [START_REF] Ölveczky | Formal modeling, performance estimation, and model checking of wireless sensor network algorithms in Real-Time Maude[END_REF], as well as wireless broadcast in mobile systems [START_REF] Liu | Modeling and analyzing mobile ad hoc networks in Real-Time Maude[END_REF]. However, I am not aware of any model of atomic multicast in Maude. Maude has been applied to a number of industrial and academic cloud storage systems, including Google's Megastore [START_REF] Grov | Formal modeling and analysis of Google's Megastore in Real-Time Maude[END_REF], Apache Cassandra [START_REF] Liu | Formal modeling and analysis of Cassandra in Maude[END_REF], and UC Berkeley's RAMP [START_REF] Liu | Formal modeling and analysis of RAMP transaction systems[END_REF]. However, that work did not address issues like atomic multicast and group communication commit.
Lamport's TLA+ has also been used to specify and model check large industrial cloud storage systems like S3 at Amazon [START_REF] Newcombe | How Amazon Web Services uses formal methods[END_REF] and the academic TAPIR transaction protocol targeting large-scale distributed storage systems.
On the validation of P-Store and similar designs, P-Store itself has been proved to be correct using informal "hand proofs" [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF]. However, such hand proofs do not generate precise specifications of the systems and tend to be error-prone and rely on missing assumptions, as I show in this paper. I have not found any model checking validation of related designs, such as Jessy [START_REF] Ardekani | Non-monotonic snapshot isolation: Scalable and strong consistency for geo-replicated transactional systems[END_REF] and Walter [START_REF] Sovran | Transactional storage for georeplicated systems[END_REF].
Concluding Remarks
Cloud computing relies on partially replicated wide-area data stores to provide the availability and elasticity required by cloud systems. P-Store is a well-known such data store that uses atomic multicast, group communication commitment, concurrent certification of independent transactions, etc. Furthermore, many other partially replicated data stores are extensions and variations of P-Store.
I have formally specified and analyzed P-Store in Maude. Maude reachability analysis uncovered a number of errors in P-Store that were confirmed by one of the P-Store developers: both read and write replicas need to participate in the certification of transactions; write replicas are not enough. I have specified the proposed fix of P-Store, whose Maude analysis did not uncover any error.
Another main contribution of this paper is a general and abstract Maude "primitive" for both variations of atomic multicast.
One important advantage claimed by proponents of formal methods is that even precise-looking informal descriptions tend to be ambiguous and contain missing assumptions. In this paper I have pointed at a concrete case of ambiguity in a precise-looking definition, and at a crucial missing assumption in P-Store.
This work took place in the context of the University of Illinois Center for Assured Cloud Computing, within which we want to identify key building blocks of cloud storage systems, so that such systems can be built and verified in a modular way by combining these building blocks in different ways. Some of those building blocks are group communication commitment, certification, and atomic multicast. In the nearer term, this work should simplify the analysis of other state-of-the-art data stores, such as Walter and Jessy, that can be seen as extensions of P-Store.
The analysis was performed using reachability analysis; in the future, one should also be able to specify the desired consistency property "directly."
rl [l] : m(O,w) < O : C | a1 : x, a2 : O', a3 : z > => < O : C | a1 : x + w, a2 : O', a3 : z > m'(O',x) .
op atomic-multicast_from_to_ : MsgCont Oid OidSet -> Configuration . The equation eq (atomic-multicast MC from O to OS) [AM-ENTRIES] = (distribute MC from O to OS) [insert(MC, OS, AM-ENTRIES)] .
(msg msg from o to o1) ... (msg msg from o to on), one for each receiver o k in the set {o 1 , . . . , o n }; and (2) by adding, for each receiver o k , the message (content) msg to the set unread k of unread atomic-multicast messages in the atomic-multicast table.
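The bookkeeping can be illustrated by the following Python sketch (field names such as "unread" and "delivered" are ours; the Maude model uses the atomic-multicast table described above):

def atomic_multicast(msg, sender, receivers, network, am_table):
    # (1) one point-to-point message per receiver; (2) record msg as unread
    # in each receiver's entry of the atomic-multicast table.
    for r in receivers:
        network.append((msg, sender, r))
        am_table.setdefault(r, {"delivered": [], "unread": set()})["unread"].add(msg)

network, am_table = [], {}
atomic_multicast("certify(t1)", "r1", ["r2", "r3"], network, am_table)
print(len(network), sorted(am_table))   # 2 ['r2', 'r3']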
vars MC MC2 : MsgContent . vars MCS MCS2 : MsgContSet . vars MCL MCL2 MCL3 MCL4 : MsgContList . eq okToRead(MC, O, [am-entry(O, MCL, MCS MC MC2) am-entry(O2, MCL2 :: MC2 :: MCL3 :: MC :: MCL4, MCS2) AM-ENTRIES]) = false . eq okToRead(MC, O, [am-entry(O, MCL, MCS MC MC2) am-entry(O2, MCL2 :: MC2 :: MCL4, MCS2 MC) AM-ENTRIES]) = false . eq okToRead(MC, O, [AM-ENTRIES]) = true [owise] .
ceq receivedAfter(MC MCS, [am-entry(O2, MCL :: MC :: MC2 :: MCL2, MCS2) AM-ENTRIES]) = receivedAfter(MC MCS MC2, [am-entry(O2, MCL :: MC :: MC2 :: MCL2, MCS2) AM-ENTRIES]) if not (MC2 in MCS) .
ceq receivedAfter(MC MCS, [am-entry(O2, MCL2 :: MC :: MCL4, MCS2) AM-ENTRIES]) = receivedAfter(MC MCS MCS2, [am-entry(O2, MCL2 :: MC :: MCL4, emptyMsgContSet) AM-ENTRIES]) if MCS2 =/= emptyMsgContSet .
eq okToRead(MC, O, [am-entry(O, MCL, MCS MC) AM-ENTRIES]) = not (MC in receivedAfter(MCS, [am-entry(O, MCL, MCS) AM-ENTRIES])) .
Fig. 1. The P-Store certification algorithm in [14].
x, r2) ;; replicatingSites(y, (r2 , r3)) ;; replicatingSites(z, r1)] < c1 : Client | txns : < t1 : Transaction | operations : ((xl :=read x) (yl :=read y)), destination : r1, readSet : emptyReadSet, status : undecided, writeSet : emptyWriteSet, localVars : (xl |-> [0] , yl |-> [0]) >, pendingTrans : empty > < c2 : Client | txns : < t2 : Transaction | operations : (write(y, 5) write(x, 8)), destination : r2, ... > pendingTrans : empty > < r1 : Replica | datastore : (< z, [2], 1 >), committed : none, aborted : none, executing : none, submitted : none, queue : emptyTransList, transToCertify : noTrans, decidedTranses : noTS > < r2 : Replica | datastore : ((< x, [2], 1 >) , (< y, [2], 1 >)), ... > < r3 : Replica | datastore : (< y, [2], 1 >), ... > .
rl [sendTxn] : < C : Client | pendingTrans : empty, txns : < TID : Transaction | destination : RID > ; TXNS > => < C : Client | pendingTrans : TID, txns : TXNS > (msg executeTrans(< TID : Transaction | >) from C to RID) .
rl [receiveTxn] : (msg executeTrans(< TID : Transaction | >) from C to RID) < RID : Replica | executing : none > => < RID : Replica | executing : < TID : Transaction | > > .
rl [executeRead1] : < RID : Replica | executing : < TID : Transaction | operations : (X :=read K) OPLIST, writeSet : (K |-> V), WS, localVars : VARS > > => < RID : Replica | executing : < TID : Transaction | operations : OPLIST, localVars : insert(X, V, VARS) > > .
rl [executeWrite] : < RID : Replica | executing : < TID : Transaction | operations : write(K, EXPR) OPLIST, localVars : VARS, writeSet : WS > > => < RID : Replica | executing : < TID : Transaction | operations : OPLIST, writeSet : insert(K, eval(EXPR, VARS), WS) > > .
rl [readCommit] : (msg commit(TID) from RID2 to RID) < RID : Replica | submitted : < TID : Transaction | >, committed : TRANSES > => < RID : Replica | submitted : none, committed : (TRANSES < TID : Transaction | >) > done(TID) .
Maude> (search init5 =>! C:Configuration .) ... Solution 4 ... < r1 : Replica | submitted : < t1 : Transaction | localVars :(xl |->[8], yl |->[5]), operations : nil, readSet : versionRead(x,2), versionRead(y,2), ... > , transToCertify : noTrans > < r2 : Replica | committed : < t2 : Transaction | writeSet : (x |-> [8], y |-> [5]), ... >, datastore : < x,[8],2 >, decidedTranses : transStatus(t2,commit), transToCertify : certify(t1,r1,(versionRead(x,2),versionRead(y,2)), emptyWriteSet,r2) , ... > < r3 : Replica | aborted : none, committed : none, datastore : < y,[5],2 >, decidedTranses : transStatus(t2,commit), transToCertify : certify(t1, r1, ..., emptyWriteSet, r3) , ... >
defines a family of transitions in which a message m, with parameters O and w, is read and consumed by an object O of class C, the attribute a1 of the object
O is changed to x + w, and a new message m'(O',x) is generated. Attributes
whose values do not change and do not affect the next state of other attributes
or messages, such as a3, need not be mentioned in a rule. Likewise, attributes
that are unchanged, such as a2, can be omitted from right-hand sides of rules.
):
crl [commit/submit2] :
< RID : Replica | executing :
< TID : Transaction | operations : nil, readSet : RS, writeSet : WS >,
submitted : TRANSES >
REPLICA-TABLE
=>
< RID : Replica | executing : none, submitted : TRANSES < TID : Transaction | > >
REPLICA-TABLE
(atomic-multicast certify(TID, RS, WS) from RID
to replicas((keys(RS) , keys(WS)), REPLICA-TABLE))
if WS =/= emptyWriteSet or not localTrans(keys(RS), REPLICA-TABLE) .
Nicolas Schiper confirmed that the errors pointed out in Section 6 are indeed errors in P-Store. He also suggested the fix alluded to in Section 6: replace WReplicas(T ) with Replicas(T ) in lines 22, 23, and 29. The Maude specification of the proposed correction is given in http://folk.uio.no/peterol/WADT16/.
Missing Assumptions. One issue seems to remain: why can read-only local transactions be committed without certification? Couldn't such transactions have read stale values? Nicolas Schiper kindly explained that local read-only transactions are handled in a special way (all values are read from the same site and some additional concurrency control is used to ensure serializability), but admitted that this is indeed not mentioned anywhere in their paper. My specifications consider the algorithm as given in [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF], without taking the unstated assumptions into account, and also subject the local read-only transactions to certification.
7 Fixing P-Store
An equational condition ui = wi can also be a matching equation, written ui:= wi, which instantiates the variables in ui to the values that make ui = wi hold, if any.
Operationally, a term is reduced to its E-normal form modulo A before a rewrite rule is applied.
The paper[START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] does not specify whether a replica stores multiple versions of a key.
Acknowledgments. I would like to thank Nicolas Schiper for quick and friendly replies to my questions about P-Store, the anonymous reviewers for helpful comments, and Si Liu and José Meseguer for valuable discussions about P-Store and atomic multicast.
This work was partially supported by AFOSR/AFRL Grant FA8750-11-2-0084 and NSF Grant CNS 14-09416. | 48,269 | [
"1030362"
] | [
"50791",
"303576"
] |
01677442 | en | [
"math"
] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01677442v2/file/clough_t_splineshal2.pdf | Tom Lyche
Jean-Louis Merrien
email: [email protected]
J.-L Merrien
Simplex-Splines on the Clough-Tocher Element
Keywords: Triangle Mesh, Piecewise polynomials, Interpolation, Simplex Splines, Marsden-like Identity
We propose a simplex spline basis for a space of C 1 -cubics on the Clough-Tocher split on a triangle. The 12 elements of the basis give a nonnegative partition of unity. We derive two Marsden-like identities, three quasi-interpolants with optimal approximation order and prove L ∞ stability of the basis. The conditions for C 1 -junction to neighboring triangles are simple and similar to the C 1 conditions for the cubic Bernstein polynomials on a triangulation. The simplex spline basis can also be linked to the Hermite basis to solve the classical interpolation problem on the Clough-Tocher split.
Introduction
Piecewise polynomials over triangles have applications in several branches of the sciences, ranging from finite element analysis to surfaces in computer aided design and other engineering problems. For many of these applications, piecewise linear C 0 surfaces do not suffice. In some cases, we need smoother surfaces for modeling, or higher degrees to increase the approximation order. To obtain C 1 smoothness on an arbitrary triangulation, one needs piecewise quintic polynomials, [START_REF] Lai | Spline Functions on Triangulations[END_REF]. We can use lower degrees if we are willing to split each triangle into a number of subtriangles. Examples are the Clough-Tocher split (CT), [START_REF] Clough | Finite element stiffness matrices for analysis of plate bending[END_REF] and the Powell-Sabin 6 and 12-splits (PS6, PS12), [START_REF] Powell | Piecewise quadratic approximation on triangles[END_REF]. The number of subtriangles is 3, 6 and 12 for CT, PS6 and PS12, respectively.
Here we construct a B-spline basis for one triangle in the coarse triangulation and connect to neighboring triangles using Bernstein-Bézier techniques. This was done for PS12 using C 1 quadratics, [START_REF] Cohen | A B-spline-like basis for the Powell-Sabin 12-split based on simplex splines[END_REF], and C 2 and C 3 quintics, [START_REF] Lyche | A Hermite interpolatory subdivision scheme for C 2 -quintics on the Powell-Sabin 12-split[END_REF][START_REF] Lyche | Stable Simplex Spline Bases for C 3 Quintics on the Powell-Sabin 12-Split[END_REF]. These bases, consisting of simplex splines (see for example [START_REF] Micchelli | On a numerically efficient method for computing multivariate B-splines[END_REF] for a general introduction), all share attractive properties of univariate B-splines such as
• a differentiation formula
• a stable recurrence relation
• a knot insertion formula
• they constitute a nonnegative partition of unity
• simple explicit dual functionals
• L ∞ stability
• simple conditions for C 1 and C 2 joins to neighboring triangles
• well conditioned collocation matrices for Lagrange and Hermite interpolation using certain sites.
In this paper we consider the full 12 dimensional space of C 1 cubics on the CT-split. We will define a simplex spline basis for this split and show that it has all the B-spline and Bernstein-Bézier properties mentioned above.
The CT-split is interesting for many reasons. To obtain a space of C 1 piecewise polynomials of degree at most 3 on an arbitrary triangulation, we only need to divide each triangle into 3 subtriangles, while 6 and 12 subtriangles are needed for PS6 and PS12. Moreover, the approximation order of the space S 3 of piecewise C 1 cubics on CT is 4 and this is at least as good as for the spaces S 6 and S 12 of piecewise cubics on PS6 and piecewise quadratics on PS12. The degrees of freedom for S 6 are values and gradients of the vertices of the coarse triangulation while for S 3 and S 12 we need in addition cross boundary derivatives at the midpoint of the edges, see Figure 1 (left). For further comparisons of these three spaces see Section 6.6 in [START_REF] Lai | Spline Functions on Triangulations[END_REF].
This paper is organized as follows: In the remaining part of the introduction, we review some properties of CT, introduce our notation and recall the main properties of simplex splines. In Section 2, we construct a cubic simplex spline basis for CT, from which, in Section 3, we derive two Marsden identities and then, in Section 4, three quasi-interpolants, and show L ∞ stability of the basis. In Section 5, conditions to ensure C 0 and C 1 continuity through an edge between two triangles are derived. The conversion between the simplex spline basis and the Hermite basis for CT is considered in Section 6. Lagrange and Hermite interpolation on triangulations using C 1 cubics, quartics and higher degrees have also been considered in [START_REF] Davydov | Interpolation by Splines on Triangulations[END_REF]. We end the paper with numerical examples of interpolation on a triangulation.
The Clough-Tocher split
To describe this split, let T := p 1 , p 2 , p 3 be a nondegenerate triangle in R 2 . Using the barycenter p T := (p 1 + p 2 + p 3 )/3 we can split T into three subtriangles T 1 := p T , p 2 , p 3 , T 2 := p T , p 3 , p 1 and T 3 := p T , p 1 , p 2 . On T we consider the space
S 1 3 ( ) := {f ∈ C 1 (T ) : f |T i is a polynomial of degree at most 3, i = 1, 2, 3}.
(1) This is a linear space of dimension 12, [START_REF] Lai | Spline Functions on Triangulations[END_REF]. Indeed, each element in the space can be determined uniquely by specifying values and gradients at the 3 vertices and cross boundary derivatives at the midpoint of the edges, see Figure 1,(right).
We associate the half open edges
[p i , p T ) := {(1 -t)p i + tp T : 0 ≤ t < 1}, i = 1, 2, 3,
with subtriangles of T as follows
[p 1 , p T ) ⊂ T 2 , [p 2 , p T ) ⊂ T 3 , [p 3 , p T ) ⊂ T 1 , (2)
and we somewhat arbitrarily assume p T ∈ T 2 .
Notation
We let N be the set of natural numbers and N 0 := N ∪ {0} the set of nonnegative integers. For a given degree d ∈ N 0 , the space of polynomials of total degree at most d will be denoted by P d . The Bernstein polynomials of degree d on T are given by
$B^d_{ijk}(p) := B^d_{ijk}(\beta_1, \beta_2, \beta_3) := \frac{d!}{i!\,j!\,k!}\,\beta_1^i \beta_2^j \beta_3^k, \qquad i, j, k \in \mathbb{N}_0,\ i + j + k = d,$ (3)
where p ∈ R 2 and β 1 , β 2 , β 3 , given by
p = β 1 p 1 + β 2 p 2 + β 3 p 3 , β 1 + β 2 + β 3 = 1, (4)
are the barycentric coordinates of p. The set
B d := {B d ijk : i, j, k ∈ N 0 , i + j + k = d} (5)
is a partition of unity basis for P d . The points
$p^d_{ijk} := \frac{i\,p_1 + j\,p_2 + k\,p_3}{d}, \qquad i, j, k \in \mathbb{N}_0,\ i + j + k = d,$ (6)
are called the domain points of B d relative to T . In this paper, we will order the cubic Bernstein polynomials by going counterclockwise around the boundary, starting at p 1 with B 3 300 and ending with B 3 111 , see Figure 2,
{B 1 , B 2 , . . . , B 10 } := {B 3 300 , B 3 210 , B 3 120 , B 3 030 , B 3 021 , B 3 012 , B 3 003 , B 3 102 , B 3 201 , B 3 111 }, (7)
with corresponding domain points
{p * 1 , p * 2 , . . . , p * 10 } := p 1 , (2p 1 + p 2 )/3, (p 1 + 2p 2 )/3, p 2 , (2p 2 + p 3 )/3, (p 2 + 2p 3 )/3, p 3 , (2p 3 + p 1 )/3, (p 3 + 2p 1 )/3, p T . (8)
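As a purely illustrative aside (not part of the original text), barycentric coordinates (4) and Bernstein polynomials (3) can be evaluated numerically as follows; the triangle and evaluation point below are arbitrary choices of ours.

import numpy as np
from math import factorial

def barycentric(p, p1, p2, p3):
    # Solve (4): p = b1*p1 + b2*p2 + b3*p3 with b1 + b2 + b3 = 1.
    A = np.array([[p1[0], p2[0], p3[0]],
                  [p1[1], p2[1], p3[1]],
                  [1.0,   1.0,   1.0 ]])
    return np.linalg.solve(A, np.array([p[0], p[1], 1.0]))

def bernstein(d, i, j, k, b):
    # B^d_{ijk} of (3) at barycentric coordinates b = (b1, b2, b3).
    return factorial(d) / (factorial(i) * factorial(j) * factorial(k)) * b[0]**i * b[1]**j * b[2]**k

b = barycentric((0.2, 0.3), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
total = sum(bernstein(3, i, j, 3 - i - j, b) for i in range(4) for j in range(4 - i))
print(round(total, 12))   # 1.0: the cubic Bernstein polynomials sum to one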
The partial derivatives of a bivariate function
f = f (x 1 , x 2 ) are denoted ∂ 1,0 f := ∂f ∂x 1 , ∂ 0,1 f := ∂f ∂x 2 , and ∂ u f := (u 1 ∂ 1,0 + u 2 ∂ 0,1
)f is the derivative in the direction u := (u 1 , u 2 ). We denote by ∂ β j f , j = 1, 2, 3 the partial derivatives of f (β 1 , β 2 , β 3 ) with respect to the barycentric coordinates of f . The symbols S and S o are the closed and open convex hull of a set S ∈ R m . For k ≤ m, we let vol k (S) be the k dimensional volume of S and define 1 S : R m → R by
1 S (x) := 1, if x ∈ S, 0, otherwise.
By the association (2), we note that for any x ∈ T
1 T 1 (x) + 1 T 2 (x) + 1 T 3 (x) = 1 T (x). (9)
We write #K for the number of elements in a sequence K.
Bivariate simplex splines
In this section we recall some basic properties of simplex splines.
For n ∈ N, d ∈ N 0 , let m := n + d and k 1 , . . . , k m+1 ∈ R n be a sequence of points called knots. The multiplicity of a knot is the number of times it occurs in the sequence. Let σ = k 1 , . . . , k m+1 with vol m (σ) > 0 be a simplex in R m whose projection π : R m → R n onto the first n coordinates satisfies π(k i ) = k i , for i = 1, . . . , m + 1.
With [K] := [k 1 , . . . , k m+1 ], the unit integral simplex spline M [K] can be defined geometrically by
$M[K] : \mathbb{R}^n \to \mathbb{R}, \qquad M[K](x) := \frac{\mathrm{vol}_{m-n}\bigl(\sigma \cap \pi^{-1}(x)\bigr)}{\mathrm{vol}_m(\sigma)}.$
For properties of M [K] and proofs see for example [START_REF] Micchelli | On a numerically efficient method for computing multivariate B-splines[END_REF]. Here, we mention:
• If n = 1 then M [K]
is the univariate B-spline of degree d with knots K, normalized to have integral one.
• In general M [K] is a nonnegative piecewise polynomial of total degree d and support K .
• For d = 0 we have
M [K](x) := 1/vol n ( K ), x ∈ K o , 0, if x / ∈ K . (10)
• The value of M [K] on the boundary of K has to be delt with separately, see below.
• If vol n ( K ) = 0 then M [K] can be defined either as identically zero or as a distribution.
We will deal with the bivariate case n = 2, and for our purpose it is convenient to work with area normalized simplex splines, [START_REF] Lyche | Stable Simplex Spline Bases for C 3 Quintics on the Powell-Sabin 12-Split[END_REF]. They are defined by Q[K](x) = 0 for all x ∈ R 2 if vol 2 ( K ) = 0, and otherwise
$Q_T[K] = Q[K] := \mathrm{vol}_2(T)\,\binom{d+2}{2}\, M[K],$ (11)
where T in general is some subset of R 2 , and in our case will be the triangle T := p 1 , p 2 , p 3 . The knot sequence is [p 1 , p 2 , p 3 , p T ] taken with multiplicities. Using properties of M [K] and [START_REF] Powell | Piecewise quadratic approximation on triangles[END_REF], we obtain the following for
Q[K]:
• It is a piecewise polynomial of degree d = #K -3 with support K
• knot lines are the lines in the complete graph of K
• local smoothness: Across a knot line, Q[K] ∈ C d+1-µ
, where d is the degree and µ is the number of knots on that knot line, including multiplicities
• differentiation formula: $\partial_u Q[K] = d \sum_{j=1}^{d+3} a_j\, Q[K \setminus k_j]$, for any $u \in \mathbb{R}^2$ and any $a_1, \dots, a_{d+3}$ such that $\sum_j a_j k_j = u$, $\sum_j a_j = 0$ (A-recurrence)
• recurrence relation: $Q[K](x) = \sum_{j=1}^{d+3} b_j\, Q[K \setminus k_j](x)$, for any $x \in \mathbb{R}^2$ and any $b_1, \dots, b_{d+3}$ such that $\sum_j b_j k_j = x$, $\sum_j b_j = 1$ (B-recurrence)
• knot insertion formula: $Q[K] = \sum_{j=1}^{d+3} c_j\, Q[K \cup y \setminus k_j]$, for any $y \in \mathbb{R}^2$ and any $c_1, \dots, c_{d+3}$ such that $\sum_j c_j k_j = y$, $\sum_j c_j = 1$ (C-recurrence)
• degree zero: From ( 10) and ( 11) we obtain for d = 0
Q[K](x) := vol 2 (T )/vol 2 ( K ) if x ∈ K o , and Q[K](x) := 0 if x ∉ K . (12)
2 A simplex spline basis for the Clough-Tocher split
In this section we determine and study a basis of C 1 cubic simplex splines on the Clough-Tocher split on a triangle. For fixed x ∈ T we use the simplified notation
i j k := Q[p [i] 1 , p [j] 2 , p [k] 3 , p [l] T ](x), i, j, k, l ∈ N 0 , i + j + k + l ≥ 3,
where the notation p
[n] m denotes that p m is repeated n times. When one of the integers i, j, k, l is zero we have Lemma 1 For i, j, k, l ∈ N 0 , i+j +k+l = d ≥ 0 and x ∈ T with barycentric coordinates β 1 , β 2 , β 3 we have
i = 0, j+1 k+1 + = d! j!k!l! (β 2 -β 1 ) j (β 3 -β 1 ) k (3β 1 ) l 1 1 1 , j = 0, i+1 k+1 + = d! i!k!l! (β 1 -β 2 ) i (β 3 -β 2 ) k (3β 2 ) l 1 1 1 , k = 0, i+1 j+1
+ = d! i!j!l! (β 1 -β 3 ) i (β 2 -β 3 ) j (3β 3 ) l 1 1 1 , l = 0, i+1 j+1 k+1 = d! i!j!k! β i 1 β j 2 β k 3 1 1 1 = B d ijk (x), ( 13
)
where the constant simplex splines are given by
1 1 1 = 3 1 T 1 (x), 1 1 1 = 3 1 T 2 (x), 1 1 1 = 3 1 T 3 (x), 1 1 1 = 1 T (x). ( 14
)
Proof: Suppose i = 0. The first equation in (13) holds for d = 0. Suppose it holds for d -1 and let j + k + l = d. Let β 023 j , j = 0, 2, 3 be the barycentric coordinates of x with respect to T 1 = p 0 , p 2 , p 3 , where p 0 := p T . By the B-recurrence
j+1 k+1 + = β 023 2 j k+1 + + β 023 3 j+1 k + + β 023 0 j+1 k+1 .
It is easily shown that
β 023 2 = β 2 -β 1 , β 023 3 = β 3 -β 1 , β 023 0 = 3β 1 .
Therefore, by the induction hypothesis
j+1 k+1 + = (d -1)! j!k!l! (j + k + l)(β 023 2 ) j (β 023 3 ) k (β 023 0 ) l 1 1 1 Since j + k + l = d we obtain the first equation in (13).
The next two equations in (13) follow similarly using
β 031 1 = β 1 -β 2 , β 031 3 = β 3 -β 2 , β 031 0 = 3β 2 , β 012 1 = β 1 -β 3 , β 012 2 = β 2 -β 3 , β 012 0 = 3β 3 .
Using the B-recurrence repeatedly, we obtain the first equality for l = 0. The values of the constant simplex splines are a consequence of [START_REF] Speleers | A normalized basis for quintic Powell Sabin splines[END_REF].
Remark 2 For i = 0 we note that the expression
d! j!k!l! (β 2 -β 1 ) j (β 3 - β 1 ) k (3β 1 ) l in (13) is a Bernstein polynomial on T 1 . Similar remarks hold for j, k = 0. The set C1 := i j k ∈ S 1 3 ( ) : i j k = 0 (15)
of all nonzero simplex splines that can be used in a basis for S 1 3 ( ) contains precisely the following 13 simplex splines.
Lemma 3 We have
C1 = i j k : i, j, k ∈ N, i + j + k = 6 2 2 1 1 , 1 2 2 1 , 2 1 2 1
.
Proof: For l = 0 it follows from Lemma 1 that i j k ∈ S 1 3 ( ) for all i + j + k = 6. Consider next l = 1. By the local smoothness property, C 1 smoothness implies that each of i, j, k can be at most 2. But then
2 2 1 1 , 1 2 2 1 , 2 1 2 1
are the only possibilities. Now if l = 2 then i + j + k = 4 implies that one of i, j, k must be at least 2 and we cannot have C 1 smoothness. Similarly l > 2 is not feasible. Recall that S 1 3 ( ) is a linear space of dimension 12, [START_REF] Clough | Finite element stiffness matrices for analysis of plate bending[END_REF]. Thus, in order to obtain a possible basis for this space, we need to choose 12 of the 13 elements in C1. Since C1 contains the 10 cubic Bernstein polynomials we have to include at least two of
2 2 1 1 , 1 2 2 1 , 2 1 2 1
. We also want a symmetric basis and therefore, we have to include all of them. But then one of the Bernstein polynomials has to be excluded. To see which one to exclude, we insert the
knot p 3 = -p 1 -p 2 + 3p T into 2 2 1 1
and use the C-recurrence to obtain
2 2 1 1 = - 1 2 2 1 - 2 1 2 1 + 3 2 2 2
, or by (13)
2 2 1 1 + 1 2 2 1 + 2 1 2 1 = 3B 3 111 (x). ( 16
)
Thus, in order to have symmetry and hopefully obtain 12 linearly independent functions, we see that B 3 111 is the one that should be excluded. We obtain the following simplex spline basis for S 1 3 ( ).
Theorem 4 (CTS-basis) The 12 simplex splines S 1 , . . . , S 12 , where
S j (x) := B j (x), with B j given by (7), for j = 1, . . . , 9, and
$S_{10}(x) := \tfrac13\, Q[p_1,p_1,p_2,p_2,p_3,p_T](x) = (B^3_{210} - B^3_{300})\,1_{T_1} + (B^3_{120} - B^3_{030})\,1_{T_2} + (B^3_{111} - B^3_{102} - B^3_{012} + 2B^3_{003})\,1_{T_3},$
$S_{11}(x) := \tfrac13\, Q[p_1,p_2,p_2,p_3,p_3,p_T](x) = (B^3_{111} - B^3_{210} - B^3_{201} + 2B^3_{300})\,1_{T_1} + (B^3_{021} - B^3_{030})\,1_{T_2} + (B^3_{012} - B^3_{003})\,1_{T_3},$
$S_{12}(x) := \tfrac13\, Q[p_1,p_1,p_2,p_3,p_3,p_T](x) = (B^3_{201} - B^3_{300})\,1_{T_1} + (B^3_{111} - B^3_{120} - B^3_{021} + 2B^3_{030})\,1_{T_2} + (B^3_{102} - B^3_{003})\,1_{T_3},$
(17)
form a partition of unity basis for the space S 1 3 ( ) given by (1). This basis, which we call the CTS-basis, is the only symmetric simplex spline basis for S 1 3 ( ). On the boundary of T the functions S 10 , S 11 , S 12 have the value zero, while the elements of {S 1 , S 2 , . . . , S 9 } reduce to zero, or to univariate Bernstein polynomials.
Proof: By Lemma 1, it follows that the Bernstein polynomials B 1 , . . . , B 9 are cubic simplex splines, and the previous discussion implies that the functions in (17), apart from scaling, are the only candidates for a symmetric simplex spline basis for S 1 3 ( ).
We can find the explicit form of (see definitions at the end of Section 1) . Consider the C-recurrence. Insert-ing p 1 twice and using p 1 = -p 2p 3 + 3p T and (13) we find
2 2 1 1 = - 3 1 1 1 - 3 2 1 + 3 3 2 1 = 4 1 1 + 4 1 1 -3 4 1 1 - 3 2 1 + 3 3 2 1 = (β 1 -β 2 ) 3 1 1 1 + (β 1 -β 3 ) 3 1 1 1 -3β 3 1 1 1 1 -3(β 1 -β 3 ) 2 (β 2 -β 3 ) 1 1 1 + 9β 2 1 β 2 1 1 1 = (β 1 -β 2 ) 3 1 1 1 + [(β 1 -β 3 ) 3 -3(β 1 -β 3 ) 2 (β 2 -β 3 )] 1 1 1 + 3β 2 1 (3β 2 -β 1 ) 1 1 1 .
(18)
Using ( 9) and Lemma 1, we can write 3
1 1 1 = 1 1 1 + 1 1 1 + 1 1 1 , so that 2 2 1 1 = [(β 1 -β 2 ) 3 + β 2 1 (3β 2 -β 1 )] 1 1 1 + β 2 1 (3β 2 -β 1 ) 1 1 1 + [(β 1 -β 3 ) 2 (β 1 -3β 2 + 2β 3 ) + β 2 1 (3β 2 -β 1 )] 1 1 1 = (3β 2 1 β 2 -β 3 1 ) 1 1 1 + (3β 1 β 2 2 -β 3 2 ) 1 1 1 + (6β 1 β 2 β 3 -3β 1 β 2 3 -3β 2 β 2 3 + 2β 3 3 ) 1 1 1 . ( 19
)
By symmetry we obtain
1 2 2 1 = (6β 1 β 2 β 3 -3β 2 1 β 2 -3β 2 1 β 3 + 2β 3 1 ) 1 1 1 + (3β 2 2 β 3 -β 3 2 ) 1 1 1 + (3β 2 β 2 3 -β 3 3 ) 1 1 1 , 2 1 2 1 = (3β 2 1 β 3 -β 3 1 ) 1 1 1 + (3β 1 β 2 3 -β 3 3 ) 1 1 1 + (6β 1 β 2 β 3 -3β 1 β 2 2 -3β 2 2 β 3 + 2β 3 2 ) 1 1 1 . ( 20
)
The formulas for S 10 , S 11 and S 12 in (17) now follows from ( 19) and (20) using ( 3) and ( 14).
By the partition of unity for Bernstein polynomials we find
12 j=1 S j (x) = i+j+k=3 B 3 ijk (x) = 1, x ∈ T .
It is well known that B 3 ijk reduces to univariate Bernstein polynomials or zero on the boundary of T .
Clearly S j ∈ C(R 2 ), j = 10, 11, 12, since no edge contains more than 4 knots. This follows from general properties of simplex splines. By the local support property they must therefore be zero on the boundary. It also follows that S j ∈ C 1 (T ), j = 10, 11, 12, since no interior knot line contains more than 3 knots.
It remains to show that the 12 functions S j , j = 1, . . . , 12 are linearly independent on T . Suppose that 12 j=1 c j S j (x) = 0 for all x ∈ T and let (β 1 , β 2 , β 3 ) be the barycentric coordinates of x. On the edge p 1 , p 2 , where β 3 = 0, the functions S j , j = 5, . . . 12 vanish, and thus
12 j=1 c j S j (x) = c 1 B 3 300 (x) + c 2 B 3 210 (x) + c 3 B 3 120 (x) + c 4 B 3 030 (x) = 0.
On p 1 , p 2 this is a linear combination of linearly independent univariate Bernstein polynomials and we conclude that c 1 = c 2 = c 3 = c 4 = 0. Similarly c j = 0 for j = 5, . . . , 9. It remains to show that S 10 , S 11 and S 12 are linearly independent on T . For x ∈ T o 3 and β 3 = 0 we find
∂S 10 ∂β 3 | β 3 =0 = 6β 1 β 2 = 0, ∂S j ∂β 3 | β 3 =0 = 0, j = 11, 12.
We deduce that c 10 = 0 and similarly c 11 = c 12 = 0 which concludes the proof.
In Figure 3 we show graphs of the functions S 10 , S 11 , S 12 .
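Such graphs, and the partition-of-unity property, can be reproduced numerically from the explicit Bernstein expressions in (17). The following Python sketch (ours, for illustration only) evaluates S 1 , . . . , S 12 on an arbitrary triangle; it uses the fact, implicit in the proof of Lemma 1, that x lies in the sub-triangle T m of the CT-split exactly when β m is the smallest barycentric coordinate.

import numpy as np
from math import factorial

P1, P2, P3 = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])

def bary(x):
    A = np.vstack([np.column_stack([P1, P2, P3]), np.ones(3)])
    return np.linalg.solve(A, np.array([x[0], x[1], 1.0]))

def B(i, j, k, b):
    d = i + j + k
    return factorial(d) / (factorial(i) * factorial(j) * factorial(k)) * b[0]**i * b[1]**j * b[2]**k

def cts_basis(x):
    b = bary(x)
    # x lies in T_m (the sub-triangle opposite p_m) when beta_m is smallest.
    m = int(np.argmin(b))
    t1, t2, t3 = float(m == 0), float(m == 1), float(m == 2)
    S = [B(3,0,0,b), B(2,1,0,b), B(1,2,0,b), B(0,3,0,b), B(0,2,1,b),
         B(0,1,2,b), B(0,0,3,b), B(1,0,2,b), B(2,0,1,b)]                     # S_1 .. S_9
    S.append(t1*(B(2,1,0,b) - B(3,0,0,b)) + t2*(B(1,2,0,b) - B(0,3,0,b))
             + t3*(B(1,1,1,b) - B(1,0,2,b) - B(0,1,2,b) + 2*B(0,0,3,b)))     # S_10
    S.append(t1*(B(1,1,1,b) - B(2,1,0,b) - B(2,0,1,b) + 2*B(3,0,0,b))
             + t2*(B(0,2,1,b) - B(0,3,0,b)) + t3*(B(0,1,2,b) - B(0,0,3,b)))  # S_11
    S.append(t1*(B(2,0,1,b) - B(3,0,0,b))
             + t2*(B(1,1,1,b) - B(1,2,0,b) - B(0,2,1,b) + 2*B(0,3,0,b))
             + t3*(B(1,0,2,b) - B(0,0,3,b)))                                 # S_12
    return np.array(S)

print(round(cts_basis((0.25, 0.3)).sum(), 12))   # 1.0: partition of unity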
Two Marsden identities and representation of polynomials
We give both a barycentric and a Cartesian Marsden-like identity.
Theorem 5 (Barycentric Marsden-like identity) For u := (u 1 , u 2 , u 3 ), β := (β 1 , β 2 , β 3 ) ∈ R 3 with β i ≥ 0, i = 1, 2, 3, and β 1 + β 2 + β 3 = 1, we have
$(\beta^T u)^3 = u_1^3 S_1(\beta) + u_1^2 u_2 S_2(\beta) + u_1 u_2^2 S_3(\beta) + u_2^3 S_4(\beta) + u_2^2 u_3 S_5(\beta) + u_2 u_3^2 S_6(\beta) + u_3^3 S_7(\beta) + u_1 u_3^2 S_8(\beta) + u_1^2 u_3 S_9(\beta) + u_1 u_2 u_3\,\bigl(S_{10}(\beta) + S_{11}(\beta) + S_{12}(\beta)\bigr).$ (21)
Proof: By the multinomial expansion we obtain
(β 1 u 1 + β 2 u 2 + β 3 u 3 ) 3 = i+j+k=3 3! i!j!k! (β 1 u 1 ) i (β 2 u 2 ) j (β 3 u 3 ) k = i+j+k=3 u i 1 u j 2 u k 3 B 3 ijk (β).
Using B 3 111 = S 10 + S 11 + S 12 and the ordering in Theorem 4 we obtain (21).
Corollary 6 For l, m, n ∈ N 0 with l + m + n ≤ 3 we have an explicit representation for lower degree Bernstein polynomials in terms of the CTSbasis (17).
B l+m+n lmn = 3 l + m + n -1 3 l 0 m 0 n S 1 + 2 l 1 m 0 n S 2 + 1 l 2 m 0 n S 3 + 0 l 3 m 0 n S 4 + 0 l 2 m 1 n S 5 + 0 l 1 m 2 n S 6 + 0 l 0 m 3 n S 7 + 1 l 0 m 2 n S 8 + 2 l 0 m 1 n S 9 + 1 l 1 m 1 n S 10 + S 11 + S 12 , (22)
where 0 0 := 1 and r s := 0 if s > r.
Proof: Differentiating, for any d ∈ N 0 , (β 1 u 1 +β 2 u 2 +β 3 u 3 ) d a total of l, m, n times with respect to u 1 , u 2 , u 3 , respectively, and setting
u 1 = u 2 = u 3 = 1 we find d! (d -l -m -n)! β l 1 β m 2 β n 3 = i+j+k=d i(i -1) . . . (i -l + 1)j . . . (j -m + 1)k . . . (k -n + 1)B d ijk ,
and by a rescaling
B l+m+n lmn = d l + m + n -1 i+j+k=d i l j m k n B d ijk , l + m + n ≤ d.
(23) Using ( 17) with d = 3, we obtain (22).
As an example, we find
B 1 100 = 1 3 3S 1 + 2S 2 + S 3 + S 8 + 2S 9 + S 10 + S 11 + S 12 .
Theorem 7 (Cartesian Marsden-like identity) We have
(1 + x T v) 3 = 12 j=1 ψ j (v)S j (x), x ∈ T , v ∈ R 2 , ( 24
)
where the dual polynomials in Cartesian form are given by
ψ j (v) := 3 l=1 (1 + d T j,l v), j = 1, . . . , 12, v ∈ R 2 . ( 25
)
Here the dual points d j := [d j,1 , d j,2 , d j,3 ], are given as follows.
d 1 d 2 d 3 d 4 d 5 d 6 d 7 d 8 d 9 d 10 d 11 d 12 := p 1 p 1 p 1 p 1 p 1 p 2 p 1 p 2 p 2 p 2 p 2 p 2 p 2 p 2 p 3 p 2 p 3 p 3 p 3 p 3 p 3 p 1 p 3 p 3 p 1 p 1 p 3 p 1 p 2 p 3 p 1 p 2 p 3 p 1 p 2 p 3 . ( 26
)
The domain points p * j in (8) are the coefficients of x in terms of the CTSbasis
x = 12 j=1 p * j S j (x), ( 27
)
where p * 10 = p * 11 = p * 12 = p T .
Proof: We apply (21) with β 1 , β 2 , β 3 the barycentric coordinates of x and
u i = 1 + p T i v, i = 1, 2, 3.
Then
β 1 u 1 + β 2 u 2 + β 3 u 3 = β 1 + β 2 + β 3 + β 1 p T 1 v + β 2 p T 2 v + β 3 p T 3 v = 1 + x T v.
and ( 24), ( 25), (26) follow from (21). Taking partial derivatives in (24) with respect to v,
∂ v 1 , ∂ v 2 (1 + x T v) 3 = 3x(1 + x T v) 2 = 12 j=1 ∂ v 1 , ∂ v 2 ψ j (v)S j (x),
where
∂ v 1 , ∂ v 2 ψ j (v) := d j,1 (1+d T j,2 v)(1+d T j,3 v)+d j,2 (1+d T j,1 v)(1+d T j,3 v)+ d j,3 (1 + d T j,1 v)(1 + d T j,2 v).
Setting v = 0 we obtain (27). Note that the domain point p T for B 3 111 has become a triple domain point for the CTS-basis.
Following the proof of (27) we can give explicit representations of all the monomials x r y s spanning P 3 . We do not give details here.
Three quasi-interpolants
We consider three quasi-interpolants on S 1 3 ( ). They all use functionals based on point evaluations and the third one will be used to estimate the L ∞ condition number of the CTS-basis.
To start, we consider the following polynomial interpolation problem on T . Find g ∈ P 3 such that g(p * i ) = f i , where f := [f 1 , . . . , f 10 ] T is a vector of given real numbers and the p * i given by ( 8) are the domain points for the cubic Bernstein basis.
Using the ordering (7), we write g in the form 10 j=1 c j B j and obtain the linear system
A = A 1 0 A 2 A 3 , (28)
and if A 1 and A 3 are nonsingular then
A -1 = A -1 1 0 -A -1 3 A 2 A -1 1 A -1 3 = B 1 0 B 2 B 3 . ( 29
)
Using the barycentric form of the domain points in ( 8) we find
A 2 = [1, 3, 3, 1, 3, 3, 1, 3, 3]/27, A 3 = B 3 111 ( 1 3 , 1 3 , 1 3 ) = 2 9 , A 1 := 1 27
27 0 0 0 0 0 0 0 0 8 12 6 1 0 0 0 0 0 1 6 12 8 0 0 0 0 0 0 0 0 27 0 0 0 0 0 0 0 0 8 12 6 1 0 0 0 0 0 1 6 12 8 0 0 0 0 0 0 0 0 27 0 0 1 0 0 0 0 0 8 12 6 8 0 0 0 0 0 1 6 12
∈ R 9×9 (30)
and
B 1 := A -1 1 = 1 6 6 0 0 0 0 0 0 0 0 -5 18 -9 2 0 0 0 0 0 2 -9 18 -5 0 0 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 -5 18 -9 2 0 0 0 0 0 2 -9 18 -5 0 0 0 0 0 0 0 0 6 0 0 2 0 0 0 0 0 -5 18 -9 -5 0 0 0 0 0 2 -9 18 , B 3 = [ 9 2 ], B 2 := -B 3 A 2 B 1 = 1 12
λ P i (f )B i , λ P i (f ) := 10 j=1 α i,j f (p * j ), ( 32
)
where the matrix α := A -1 has elements α i,j in row i and column j, i, j = 1, . . . , 10. We have
λ P i (B j ) = 10 k=1 α i,k B j (p * k ) = 10 k=1 α i,k a k,j = δ i,j , i, j = 1, . . . , 10.
It follows that QI P (g) = g for all g ∈ P 3 . Since B j = S j , j = 1, . . . , 9 and B 10 = B 3 111 = S 10 + S 11 + S 12 the quasi-interpolant
QI P : C(T ) → S 1 3 ( ), QI P (f ) := 12 i=1 λ P i (f )S i , λ P 11 = λ P 12 = λ P 10 , ( 33
)
where λ P i (f ) is given by (32), i = 1, . . . , 10, reproduces P 3 . Moreover, since for any f ∈ C(T ) and x ∈ T
|QI P (f )(x)| ≤ max 1≤i≤12 |λ P i (f )| 12 i=1 S i (x) = max 1≤i≤10 |λ P i (f )|,
we obtain
QI P (f ) L∞(T ) ≤ α ∞ f L∞(T ) = 10 f L∞(T ) ,
independently of the geometry of T . Using the construction in [START_REF] Lyche | Stable Simplex Spline Bases for C 3 Quintics on the Powell-Sabin 12-Split[END_REF], we can derive another quasi-interpolant which also reproduces P 3 . It uses more points, but has a slightly smaller norm. Consider the map P : C(T ) → S 1 3 (T ) defined by P (f ) = 12 ℓ=1 M ℓ (f )S ℓ , where
M ℓ (f ) := 1 6 f (d ℓ,1 ) + f (d ℓ,2 ) + f (d ℓ,3 ) + 9 2 f (p * ℓ ) - 4 3 f d ℓ,1 + d ℓ,2 2 + f d ℓ,1 + d ℓ,3 2 + f d ℓ,2 + d ℓ,3 2 .
Here the d ℓ,m are the dual points given by (26) and the p * ℓ are the domain points given by (27). Note that this is an affine combination of function values of f .
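A direct transcription of this functional into Python (for illustration only; the function f, the dual points, and the domain point are supplied by the caller) is:

def M(f, d, p_star):
    # Affine combination of values of f: the three dual points d = (d1, d2, d3)
    # with weight 1/6 each, the domain point p*_l with weight 9/2, and the three
    # midpoints of pairs of dual points with weight -4/3 each.
    d1, d2, d3 = d
    mid = lambda a, b: tuple(0.5 * (ai + bi) for ai, bi in zip(a, b))
    return (f(d1) + f(d2) + f(d3)) / 6.0 + 4.5 * f(p_star) \
        - (4.0 / 3.0) * (f(mid(d1, d2)) + f(mid(d1, d3)) + f(mid(d2, d3)))

# The weights sum to one, so constant functions are reproduced exactly.
print(round(M(lambda p: 1.0, ((0, 0), (1, 0), (0, 1)), (1/3, 1/3)), 12))   # 1.0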
We have tested the convergence of the quasi-interpolant, sampling data from the function f (x, y) = e 2x+y + 5x + 7y on the triangle A = [0, 0],
B = h * [1, 0], C = h * [0.2, 1.2]
for h ∈ {0.05, 0.04, 0.03, 0.02, 0.01}. The following array indicates that the error $\|f - P(f)\|_{L_\infty(T)}$ is O(h 4 ):

h           0.05     0.04     0.03     0.02     0.01
error/h 4   0.0550   0.0547   0.0543   0.0540   0.0537

Using a standard argument, the following Proposition shows that the error is indeed O(h 4 ) for sufficiently smooth functions.
Proposition 8
The operator P is a quasi-interpolant that reproduces P 3 . For any f ∈ C(T )
P (f ) L∞(T ) ≤ 9 f L∞(T ) , (34)
independently of the geometry of T . Moreover,
f -P (f ) L∞(T ) ≤ 10 inf g∈P 3 f -g L∞(T ) . (35)
Proof: Since d 10 = d 11 = d 12 and B 3 111 = S 10 + S 11 + S 12 , B 3 ijk = S ℓ for (i, j, k) = (1, 1, 1) and some ℓ, we obtain
P (f ) = i+j+k=3 Mijk (f )B 3 ijk
where Mijk = M ℓ for (i, j, k) = (1, 1, 1) and corresponding ℓ and M111 = 3M 10 .
To prove that P reproduces polynomials up to degree 3, i.e., P (B 3 ijk ) = B 3 ijk , whenever i + j + k = 3, it is sufficient to prove the result for B
p 3 , p 2 + p 3 2 , it is easy to compute that M300 (B 3 300 ) = 1, M300 (B 3 ijk ) = 0 for (i, j, k) = (3, 0, 0), M210 (B 3 210 ) = 1, M210 (B 3 ijk ) = 0 for (i, j, k) = (2, 1, 0), M111 (B 3 111 ) = 1, M111 (B 3 ijk ) = 0 for (i, j, k) = (1, 1 , 1).
Therefore, by a standard argument, P is a quasi-interpolant that reproduces P 3 . Since the sum of the absolute values of the coefficients defining M ℓ (f ) is equal to 9, another standard argument shows (34) and (35).
The operators QI P and P do not reproduce the whole spline space S 1 3 ( ). Indeed, since λ P 10 (B 10 ) = M 10 (B 10 ) = 1, we have λ P 10 (S j ) = M 10 (S j ) = 1 3 , j = 10, 11, 12.
To give un upper bound for the condition number of the CTS-basis we need a quasi-interpolant which reproduces the whole spline space. We again use the inverse of the coefficient matrix of an interpolation problem to construct such an operator. We need 12 interpolation points and a natural choice is to use the first 9 cubic Bernstein domain points p * j , j = 1, . . . , 9 and split the barycenter p * 10 = p T into three points. After some experimentation we redefine p * 10 and choose p * 10 := (3, 3, 1)/7, p * 11 := (3, 1, 3)/7 and p * 12 := (1, 3, 3)/7. The problem is to find s = 12 j=1 c j S j such that s(p * i ) = f i , i = 1, . . . , 12. The coefficient matrix for this problem has again the block tridiagonal form (28), where A 1 ∈ R 9×9 and B 1 := A -1 1 are given by ( 30) and (31) as before. Moreover, using the formulas in Theorem 4 we find
A 3 = [S j (p * i )]
α S := A -1 = B 1 0 B 2 B 3 ,
where It follows that the quasi-interpolant QI given by
B 2 = -B 3 A 2 B 1 =
QI : C(T ) → S 1 3 ( ), QI(f ) := 12 i=1 λ S i (f )S i , λ S i (f ) = 12 j=1 α S i,j f (p * j ), (37)
is a projector onto the spline space S 1 3 ( ). In particular
s := 12 i=1 c i S i =⇒ c i = λ S i (s), i = 1, . . . , 12. (38)
The quasi-interpolant (37) can be used to show the L ∞ stability of the CTS-basis. For this we prove that the condition number is independent of the geometry of the triangle.
We define the ∞-norm condition number of the CTS-basis on T by
κ ∞ (T ) := max c =0 b T c L∞(T ) c ∞ max c =0 c ∞ b T c L∞(T )
,
where b T c := 12 j=1 c j S j ∈ S 1 3 ( ).
) |c i | = |λ S i (b T c)| ≤ α S ∞ b T c L∞(T ) . Therefore, c ∞ b T c L∞(T ) ≤ α S ∞ = 27 - 32 405 ,
and the upper bound κ ∞ < 27 follows.
5 C 0 and C 1 -continuity
In the following, we derive conditions to ensure C 0 and C 1 continuity through an edge between two triangles. The conditions are very similar to the classical conditions for continuity of Bernstein polynomials.
Theorem 10
Let s 1 = 12 j=1 c j S j and s 2 = 12 j=1 d j Sj be defined on the triangle T := p 1 , p 2 , p 3 and T := p 1 , p 2 , p3 , respectively, see Figure 4.
The function s
= s 1 on T s 2 on T is continuous on T ∪ T if d 1 = c 1 , d 2 = c 2 , d 3 = c 3 , d 4 = c 4 . (39)
Moreover, s ∈ C 1 (T ∪ T ) if in addition to (39) we have
d 5 = γ 1 c 3 + γ 2 c 4 + γ 3 c 5 , d 9 = γ 1 c 1 + γ 2 c 2 + γ 3 c 9 , d 10 = γ 1 c 2 + γ 2 c 3 + γ 3 c 10 .
(40) where γ 1 , γ 2 , γ 3 are the barycentric coordinates of p3 with respect to T . Suppose next (39) holds and s ∈ C 1 (T ∪ T ). By the continuity property we see that S j , j = 6, 7, 8, 11, 12 are zero and have zero cross boundary derivatives on p 1 , p 2 since they have at most 3 knots on that edge. We take derivatives in the direction u := p3p 1 using the A-recurrence (defined at the end of Section 1) with a := (γ 1 -1, γ 2 , γ 3 , 0) for s 1 and a := (-1, 0, 1, 0)
The Hermite basis
The classical Hermite interpolation problem on the Clough-Tocher split is to interpolate values and gradients at vertices and normal derivatives at the midpoint of edges, see Figure 1. These interpolation conditions can be described by the linear functionals
ρ(f ) = [ρ 1 (f ), . . . , ρ 12 (f )] T := [f (p 1 ), ∂ 1,0 f (p 1 ), ∂ 0,1 f (p 1 ), f (p 2 ), ∂ 1,0 f (p 2 ), ∂ 0,1 f (p 2 ), f (p 3 ), ∂ 1,0 f (p 3 ), ∂ 0,1 f (p 3 ), ∂ n 1 f (p 5 ), ∂ n 2 f (p 6 ), ∂ n 3 f (p 4 )] T ,
where p 4 , p 5 , p 6 , are the midpoints on the edges p 1 , p 2 , p 2 , p 3 , p 3 , p 1 , respectively, and ∂ n j f is the derivative in the direction of the unit normal to that edge in the direction towards p j . We let p j = (x j , y j ) be the coordinates of each point. The coefficient vector c := [c 1 , . . . , c 12 ] T of the interpolant g := 12 j=1 c j S j is solution of the linear system Ac = ρ(f ), where A ∈ R 12×12 with a i,j := ρ i (S j ).
Let H 1 , . . . , H 12 be the Hermite basis for S 1 3 ( ) defined by ρ i (H j ) = δ i,j . The matrix A transforms the Hermite basis to the CTS-basis. Since a basis transformation matrix is always nonsingular, we have
[S 1 , . . . , S 12 ] = [H 1 , . . . , H 12 ]A, [H 1 , . . . , H 12 ] = [S 1 , . . . , S 12 ]A -1 . (44)
To find the elements ρ i (S j ) of A we define for i, j, k = 1, 2, 3
ν ij := p ij 2 , p ij := p i -p j , x ij := x i -x j , y ij := y i -y j , ν ijk := p T i,j p j,k ν ij , for i = j, δ := 1 1 1 x 1 x 2 x 3 y 1 y 2 y 3 . (45)
We note that ν ijk is the length of the projection of p j,k in the direction of p i,j and that δ is twice the signed area of T . By the definition of the unit normals and the chain rule for j = 1, . . . , 12 we find ∂ 1,0 S j = (y 23 ∂ β 1 S j + y 31 ∂ β 2 S j + y 12 ∂ β 3 S j )/δ, ∂ 0,1 S j = (x 32 ∂ β 1 S j + x 13 ∂ β 2 S j + x 21 ∂ β 3 S j )/δ, ∂ n 1 S j = (y 23 ∂ 1,0 S j + x 32 ∂ 0,1 S j )/ν 32 = (ν 32 ∂ β 1 S j + ν 231 ∂ β 2 S j + ν 321 ∂ β 3 S j )/δ, ∂ n 2 S j = (y 31 ∂ 1,0 S j + x 13 ∂ 0,1 S j )/ν 31 = (ν 132 ∂ β 1 S j + ν 31 ∂ β 2 S j + ν 312 ∂ β 3 S j )/δ, ∂ n 3 S j = (y 12 ∂ 1,0 S j + x 21 ∂ 0,1 S j )/ν 21 = (ν 123 ∂ β 1 S j + ν 213 ∂ β 2 S j + ν 21 ∂ β 3 S j )/δ. This leads to
A := A 1 0 A 2 A 3 , with A 1 ∈ R 9×9 , A 2 ∈ R 3×9 , A 3 ∈ R 3×3 ,
where We find
A 1 := 3 δ
A -1 := B 1 0 B 2 B 3 = [b i,j ] 12 i,j=1 ,
where
B 1 := A -1 1 = 1 3
3 0 0 0 0 0 0 0 0 3 x 21 y 21 0 0 0 0 0 0 0 0 0 3 x 12 y 12 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 3 x 32 y 32 0 0 0 0 0 0 0 0 0 3 x 23 y 23 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 3 x 13 y 13 3 x 31 y 31 0 0 0 0 0 0
∈ R 9×9 ,
Figure 1: The PS12-split (left) and the CT-split (right). The C 1 quadratics on PS-12 and C 1 cubics on CT have the same degrees of freedom as indicated.
Figure 2: The cubic Bernstein basis (left) and the CTS-basis (right), where B 3 111 is replaced by S 10 , S 11 , S 12 .
the B-or C-recurrence
Figure 3: The CTS-basis functions S 10 , S 11 , S 12 on the triangle (0, 0), (1, 0), (0, 1) .
10 j=1c
10 j B j (p * i ) = f i , i = 1, . . . , 10, or in matrix form Ac = f for the unknown coefficient vector c := [c 1 , . . . , c 10 ] T . Since B 10 (p * i ) = B 3 111 (p * i ) = 0 for i = 1, . . . , 9 the coefficient matrix A is block triangular
4 - 9 -9 4 - 9 -9 4 - 9 - 9 .
4949499
Theorem 9
9 For any triangle T we have κ ∞ (T ) < 27. Proof: Since the S j form a nonnegative partition of unity it follows that max c =0 b T c L∞(T ) / c ∞ = 1. If s = 12 j=1 c j S j = b T c then by (38
c 10 ,c 11 ,c 12
Figure 4: C 1 -continuity and components
Figure 5: C 1 smoothness
Figure 6: The Hermite basis functions H 1 , H 2 , H 3 , H 10 on the unit triangle.
ν 32 , ν 231 , ν 231 -ν 32 , ν 321 -ν 32 , ν 321 , ν 32 , 0 , A 2 (2) := 3 4δ ν 132 , ν 31 , 0, 0, 0, ν 31 , ν 312 , ν 312 -ν 31 ν 132 -ν 31 , A 2 (3) := 3 4δ ν 123 , ν 123 -ν 21 , ν 213 -ν 21 , ν 213 , ν 21 , 0, 0, 0, ν 21 ,
B 3 :
3 3×3 , and the rows ofB 2 = -B 3 A 2 B 1 ∈ R 3×9 are given by B 2 (1) :=
Figure 7: The triangulation and the C 1 surface
Figure 8: A C 1 Hermite interpolating surface on the triangulation
1 6ν 21 -6ν 123 , x 12 ν 123 + ν 21 x 23 , y 12 ν 123 + ν 21 y 23 , -6ν 213 , x 21 ν 213 + ν 21 x 13 , y 21 ν 213 + ν 21 y 13 , 0, 0, 0 , -6ν 231 , x 23 ν 231 + ν 32 x 31 , y 23 ν 231 + ν 32 y 31 , -6ν 321 , x 23 ν 231 + ν 32 x 21 + ν 32 x 23 , y 23 ν 231 + ν 32 y 21 + ν 32 y 23 , 6ν 132 , x 13 ν 132 + ν 31 x 32 , y 13 ν 132 + ν 31 y 32 , 0, 0, 0, -6ν 312 , x 31 ν 312 + ν 31 x 12 , y 31 ν 312 + ν 31 y 12 .
P 6
P 4 P 5
P 3
B 2 (2) := 0, 0, 0, B 2 (3) := 1 6ν 32 1 6ν 31 -P 1 P 2
for s 2 . We find with x ∈ p 1 , p 2
The last equality follows from (13) since β 3 = 0 on p 1 , p 2 so that = 3B 2 110 (x). Consider next Sj . By the same argument as for S j , we see that Sj , j = 6, 7, 8, 11, 12 are zero and have zero cross boundary derivatives on p 1 , p 2 . We find for x ∈ p 1 , p 2
We note that on p 1 , p 2 , the polynomials B 2 101 , B2 101 , B 2 011 , B2 011 vanish and
As an example, on the unit triangle (p 1 , p 2 , p 3 ) = ((0, 0), (1, 0), (0, 1)) we find
Some of the Hermite basis functions are shown in Figure 6.
We have also tested the convergence of the Hermite interpolant, sampling again data from the function f (x, y) = e 2x+y + 5x + 7y on the triangle
Examples
Several examples have been considered for scattered data on the CT-split, see for example [START_REF] Farin | A modified Clough-Tocher interpolant[END_REF][START_REF] Mann | Cubic precision Clough-Tocher interpolation[END_REF]. Here, we consider a triangulation with vertices p 1 = (0, 0), p 2 = (1, 0), p 3 = (3/2, 1/2), p 4 = (-1/2, 1), p 5 = (1/4, 3/4), p 6 = (3/2, 3/2), p 7 = (1/2, 2) and triangles T 1 := p 1 , p 2 , p 5 , T 2 := p 2 , p 3 , p 5 , T 3 := p 4 , p 1 , p 5 , T 4 := p 3 , p 6 , p 5 , T 5 := p 6 , p 4 , p 5 , T 6 := p 4 , p 6 , p 7 ,. We divide each of the 6 triangles into 3 subtriangles using the Clough-Tocher split. We then obtain a space of C 1 piecewise polynomials of dimension 3V + E = 3 × 7 + 12 = 33, where V is the number of vertices and E the number of edges in the triangulation. We can represent a function s in this space by either using the Hermite basis or using CTS-splines on each of the triangles and enforcing the C 1 continuity conditions. The function s on T 1 depends on 12 components, while the C 1 -continuity through the edges gives only 5 free components for T 2 ,T 3 and T 4 . Closing the 1-cell at p 5 gives only one free component for T 5 and 5 free components for T 6 , Figure 7 left.
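The dimension count used above (3V + E for the C 1 Clough-Tocher space on a triangulation, with V vertices and E edges) is simple enough to encode directly; the following snippet is only a convenience, with our own function name:

def ct_spline_dimension(num_vertices, num_edges):
    # 3 degrees of freedom per vertex (value and gradient) plus one per edge
    # (cross-boundary derivative at the edge midpoint).
    return 3 * num_vertices + num_edges

print(ct_spline_dimension(7, 12))   # 33, as for the triangulation above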
In the following graph, Figure 7, right, once the 12 first components on T 1 were chosen, the other free ones are set to zero. Then, in Figure 8, we have plotted the Hermite interpolant of the function f (x, y) = e 2x+y +5x+7y and gradients using the CTS-splines. | 36,811 | [
"829737"
] | [
"75"
] |
01767456 | en | [
"phys"
] | 2024/03/05 22:32:15 | 2017 | https://hal.science/hal-01767456/file/1610.09377.pdf | Joel C Roediger
Laura Ferrarese
Patrick Côté
Lauren A Macarthur
Rúben Sánchez-Janssen
John P Blakeslee
Eric W Peng
Chengze Liu
Roberto Muñoz
Jean-Charles Cuillandre
Stephen Gwyn
Simona Mei
Samuel Boissier
Alessandro Boselli
Michele Cantiello
Stéphane Courteau
Pierre-Alain Duc
Ariane Lançon
J Christopher Mihos
Thomas H Puzia
James E Taylor
Patrick R Durrell
Elisa Toloba
Puragra Guhathakurta
Hongxin Zhang
The Next Generation Virgo Cluster
Keywords: galaxies: clusters: individual (Virgo), galaxies: dwarf, galaxies: evolution, galaxies: nuclei, galaxies: star clusters: general, galaxies: stellar content
published or not. The documents may come
INTRODUCTION
Despite the complexities of structure formation in a ΛCDM Universe, galaxies are well-regulated systems. Strong evidence supporting this statement are the many fundamental relations to which galaxies adhere: star formation rate versus stellar mass or gas density (Daddi et al. 2007;Elbaz et al. 2007;Noeske et al. 2007;Kennicutt & Evans 2012), rotational velocity versus luminosity or baryonic mass for disks (Courteau et al. 2007;McGaugh 2012;Lelli et al. 2016), the fundamental plane for spheroids (Bernardi et al. 2003;Zaritsky et al. 2012), and the mass of a central compact object versus galaxy mass (Ferrarese et al. 2006;Wehner & Harris 2006; [email protected] Beifiori et al. 2012;Kormendy & Ho 2013), to name several. Moreover, many of these relations are preserved within galaxy groups and clusters, demonstrating that such regulation is maintained in all environments (e.g. Blanton & Moustakas 2009). This paper focusses on the relationship between color and luminosity for quiescent ["quenched"] galaxies: the so-called red sequence [RS].
First identified by de Vaucouleurs (1961) and Visvanathan & Sandage (1977), the RS represents one side of the broader phenomenon of galaxy color bimodality (Strateva et al. 2001;Blanton et al. 2003;Baldry et al. 2004;Balogh et al. 2004;Driver et al. 2006;Cassata et al. 2008;Taylor et al. 2015), the other half being the blue cloud, with the green valley separating them. Based on the idea of passively evolving stellar pop-ulations, color bimodality is widely interpreted as an evolutionary sequence where galaxies transform their cold gas into stars within the blue cloud and move to the RS after star formation ends (e.g. Faber et al. 2007). This evolution has been partly observed through the increase of mass density along the RS towards low redshift (Bell et al. 2004;Kriek et al. 2008;Pozzetti et al. 2010), although the underlying physics of quenching remains a matter of active research. The standard view of color bimodality is a bit simplistic though insofar as the evolution does not strictly proceed in one direction; a fraction of galaxies in the RS or green valley have their stellar populations temporarily rejuvenated by replenishment of their cold gas reservoirs (Schawinski et al. 2014).
Crucial to our understanding of the RS is knowing when and how it formed. The downsizing phenomenon uncovered by spectroscopic analyses of nearby early-type galaxies (ETGs; Nelan et al. 2005;Thomas et al. 2005;Choi et al. 2014) implies that the RS was built over an extended period of time [∼5 Gyr], beginning with the most massive systems (e.g. Tanaka et al. 2005). These results support the common interpretation that the slope of the RS is caused by a decline in the metallicity [foremost] and age of the constituent stellar populations towards lower galaxy masses (Kodama & Arimoto 1997;Ferreras et al. 1999;Terlevich et al. 1999;Poggianti et al. 2001;De Lucia et al. 2007). Efforts to directly detect the formation of the RS have observed color bimodality to z ∼ 2 (Bell et al. 2004;Willmer et al. 2006;Cassata et al. 2008). More recently, legacy surveys such as GOODS, COSMOS, NEWFIRM, and UltraVISTA have shown that massive quiescent galaxies [M * 3 × 10 10 M ] begin to appear as early as z = 4 (Fontana et al. 2009;Muzzin et al. 2013;Marchesini et al. 2014) and finish assembling by z = 1-2 (Ilbert et al. 2010;Brammer et al. 2011). Growth in the stellar mass density of quiescent galaxies since z = 1, on the other hand, has occured at mass scales of M * and lower (Faber et al. 2007), consistent with downsizing.
Owing to their richness, concentration, and uniform member distances, galaxy clusters are an advantageous environment for studying the RS. Moreover, their characteristically high densities likely promote quenching and therefore hasten the transition of galaxies to the RS. In terms of formation, the RS has been identified in [proto-]clusters up to z ∼ 2 (Muzzin et al. 2009;Wilson et al. 2009;Gobat et al. 2011;Spitler et al. 2012;Stanford et al. 2012;Strazzullo et al. 2013;Cerulo et al. 2016). Much of the interest in z > 0 clusters has focussed on the growth of the faint end of the RS. Whereas scant evidence has been found for evolution of either the slope or scatter of the RS (Ellis et al. 1997;Gladders et al. 1998;Stanford et al. 1998;Blakeslee et al. 2003;Holden et al. 2004;Lidman et al. 2008;Mei et al. 2009;Papovich et al. 2010, but see Hao et al. 2009 andHilton et al. 2009), several groups have claimed an elevated ratio of bright-to-faint RS galaxies in clusters up to z = 0.8, relative to local measurements (Smail et al. 1998;De Lucia et al. 2007;Stott et al. 2007;Gilbank et al. 2008;Hilton et al. 2009;Rudnick et al. 2009, see also Boselli & Gavazzi 2014 and references therein). The increase in this ratio with redshift indicates that low-mass galaxies populate the RS at later times than high-mass systems, meaning that the former, on average, take longer to quench and/or are depleted via mergers/stripping at early epochs. These results are not without controversy, however, with some arguing that the inferred evolution may be the result of selection bias, small samples, or not enough redshift baseline (Crawford et al. 2009;Lu et al. 2009;De Propris et al. 2013;Andreon et al. 2014;Romeo et al. 2015;Cerulo et al. 2016).
As a tracer of star formation activity and stellar populations, colors also are a key metric for testing galaxy formation models. Until recently, only semi-analytic models [SAMs] had sufficient statistitcs to enable meaningful comparisons to data from large surveys. Initial efforts indicated that the fraction of red galaxies was too high in models, and thus quenching too efficient, which led to suggestions that re-accretion of SNe ejecta was necessary to maintain star formation in massive galaxies (Bower et al. 2006). Since then, a persistent issue facing SAMs has been that their RSs are shallower than observed (Menci et al. 2008;González et al. 2009;Guo et al. 2011). The common explanation for this is that the stellar metallicity-luminosity relation in the models is likewise too shallow. Font et al. (2008) demonstrated that an added cause of the excessively red colors of dwarf satellites is their being too easily quenched by strangulation, referring to the stripping of halo gas. While Font et al. (2008) increased the binding energy of this gas as a remedy, Gonzalez-Perez et al. (2014) have shown that further improvements are still needed. Studies of other models have revealed similar mismatches with observations (Romeo et al. 2008;Weinmann et al. 2011), indicating that the problem is widespread.
In this paper, we use multi-band photometry from the Next Generation Virgo Cluster Survey (NGVS; Ferrarese et al. 2012) to study galaxy colors in the core of a z = 0 cluster, an environment naturally weighted to the RS. The main novelty of this work is that NGVS photometry probes mass scales from brightest cluster galaxies to Milky Way satellites (Ferrarese et al. 2016b, hereafter F16), allowing us to characterize the RS over an unprecedented factor of >10^5 in luminosity [∼10^6 M⊙ in stellar mass] and thus reach a largely unexplored part of the color-magnitude distribution [CMD]. Given the unique nature of our sample, we also take the opportunity to compare our data to galaxy formation models, which have received scant attention in the context of cluster cores.
Our work complements other NGVS studies of the galaxy population within Virgo's core. Zhu et al. (2014) jointly modelled the dynamics of field stars and globular clusters [GCs] to measure the total mass distribution of M87 to a projected radius of 180 kpc. Grossauer et al. (2015) combined dark matter simulations and the stellar mass function to extend the stellar-to-halo mass relation down to M h ∼ 10^10 M⊙. Sánchez-Janssen et al. (2016) statistically inferred the intrinsic shapes of the faint dwarf population and compared the results to those for Local Group dwarfs and simulations of tidal stripping. Ferrarese et al. (2016a) present the deepest luminosity function to date for a rich, volume-limited sample of nearby galaxies. Lastly, Côté et al. (in preparation) and Sánchez-Janssen et al. (in preparation) study the galaxy and nuclear scaling relations, respectively, for the same sample.
In Section 2 we briefly discuss our dataset and preparations thereof. Our analysis of the RS is presented in Section 3, while Sections 4-6 focus on comparisons to previous work, compact stellar systems [CSS] and galaxy formation models. A discussion of our findings and conclusions are provided in Sections 7-8.
DATA
Our study of the RS in the core of Virgo is enabled by the NGVS (Ferrarese et al. 2012). Briefly, the NGVS is an optical imaging survey of the Virgo cluster performed with CFHT/MegaCam. Imaging was obtained in the u*g′i′z′ bands1 over a 104 deg² footprint centered on sub-clusters A and B, reaching out to their respective virial radii (1.55 and 0.96 Mpc, respectively, for an assumed distance of 16.5 Mpc; Mei et al. 2007; Blakeslee et al. 2009). The NGVS also obtained r′-band imaging for an area of 3.71 deg² [0.3 Mpc²], roughly centered on M87, the galaxy at the dynamical center of sub-cluster A; we refer to this as the core region. NGVS images have a uniform limiting surface brightness of ∼29 g′ mag arcsec⁻². Further details on the acquisition and reduction strategies for the NGVS are provided in Ferrarese et al. (2012).
This paper focuses on the core of the cluster, whose boundaries are defined as 12h26m20s ≤ RA (J2000) ≤ 12h34m10s, 11°30′22″ ≤ Dec (J2000) ≤ 13°26′45″, and encompass four MegaCam pointings [see Figure 13 of F16]. A catalog of 404 galaxies for this area, of which 154 are new detections, is published in F16, spanning the range 8.9 ≤ g′ ≤ 23.7 and ≥50% complete to g′ ∼ 22. As demonstrated there, the galaxy photometry has been thoroughly analysed and cluster membership extensively vetted for this region; below we provide a basic summary of these endeavors. A study of the CMD covering the entire survey volume will be presented in a future contribution.
Faint [g > 16] extended sources in the core were identified using a dedicated pipeline based on ring-filtering of the MegaCam stacks. Ring-filtering replaces pixels contaminated by bright, point-like sources with the median of pixels located just beyond the seeing disk. This algorithm helps overcome situations of low surface brightness sources being segmented into several parts due to contamination. The list of candidates is then culled and assigned membership probabilities by analysing SExtractor and GALFIT (Peng et al. 2002) parameters in the context of a size versus surface brightness diagram, colors and structural scaling relations, and photometric redshifts. A final visual inspection of the candidates and the stacks themselves is made to address issues of false-positives, pipeline failures, and missed detections. After this, the remaining candidates are assigned a membership flag indicating their status as either certain, likely, or possible members.
As part of their photometric analysis, F16 measured surface brightness profiles and non-parametric structural quantities in the u*g′i′z′ bands for the core galaxies with the IRAF task ELLIPSE. These data products are complemented with similar metrics from Sérsic fits to both the light profiles and image cutouts for each source [the latter achieved with GALFIT]. Our work is based on the growth curves deduced by applying their [non-parametric] g′-band isophotal solutions to all other bands while using a common master mask. This allows us to investigate changes in the RS as a function of galactocentric radius, rather than rely on a single aperture. Driver et al. (2006) adopted a similar approach for their CMD analysis, finding that bimodality was more pronounced using core versus global colors; our results support this point [see Fig. 4]. We extract from the growth curves all ten colors covered by the NGVS, integrated within elliptical apertures having semi-major axes of a × R e,g [R e,g = g′-band effective radius], where a = 0.5, 1.0, 2.0, 3.0; we also examine colors corresponding to the total light of these galaxies. Errors are estimated following Chen et al. (2010), using the magnitude differences between F16's growth curve and Sérsic analyses, and scaling values for each set of apertures by the fraction of light enclosed. These estimates should probably be regarded as lower limits, since they do not capture all sources of systematic uncertainty.
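To make the growth-curve measurements above concrete, the sketch below shows one way aperture colors and their errors could be computed; it is a minimal illustration under the stated assumptions, not the actual NGVS/F16 pipeline, and the function arguments and error model are hypothetical.

```python
# Sketch (not the NGVS pipeline): interpolate a growth curve to fixed multiples
# of the g'-band effective radius and form aperture colors.
import numpy as np

def aperture_magnitude(sma, cumflux, radius):
    """Magnitude enclosed within semi-major axis `radius`, from a growth curve
    sampled at (ascending) semi-major axes `sma`."""
    flux = np.interp(radius, sma, cumflux)   # monotonic growth curve assumed
    return -2.5 * np.log10(flux)

def aperture_color(curve_band1, curve_band2, sma, re_g, scale=1.0):
    """Color within scale * R_e,g', using the same (g'-band) isophotal apertures
    in both bands, e.g. aperture_color(curve_u, curve_g, sma, re_g, 1.0)."""
    m1 = aperture_magnitude(sma, curve_band1, scale * re_g)
    m2 = aperture_magnitude(sma, curve_band2, scale * re_g)
    return m1 - m2

def color_error(delta_m_tot_1, delta_m_tot_2, frac_enclosed):
    """Error estimate in the spirit of Chen et al. (2010): combine the total
    growth-curve minus Sersic magnitude differences in the two bands and scale
    by the fraction of light enclosed in the aperture."""
    return np.hypot(delta_m_tot_1, delta_m_tot_2) * frac_enclosed
```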
Absolute magnitudes are computed assuming a uniform distance of 16.5 Mpc (Mei et al. 2007; Blakeslee et al. 2009) for all galaxies and corrected for reddening using the York Extinction Solver2 (McCall 2004), adopting the Schlegel et al. (1998) dust maps, Fitzpatrick (1999) extinction law, and R V = 3.07. To help gauge the intrinsic scatter along the RS, we use recovered magnitudes for ∼40k artificial galaxies injected into the image stacks [F16] to establish statistical errors in our total light measurements. A more focused discussion of uncertainties in the NGVS galaxy photometry may be found in Ferrarese et al. (2016a) and F16.
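As a rough illustration of these corrections [not the York Extinction Solver itself], the following sketch converts apparent to absolute magnitudes at the adopted Virgo distance and applies a foreground-extinction term; the extinction coefficients shown are placeholders, not the Fitzpatrick (1999) values actually used.

```python
# Minimal sketch of the magnitude corrections described above.
import numpy as np

DIST_MPC = 16.5
DM = 5.0 * np.log10(DIST_MPC * 1e6) - 5.0     # distance modulus, ~31.09 mag

# Hypothetical A_lambda / E(B-V) coefficients for R_V = 3.07 (placeholders only).
EXT_COEFF = {"u": 4.8, "g": 3.7, "r": 2.7, "i": 2.0, "z": 1.5}

def absolute_mag(m_app, band, ebv):
    """Apparent -> absolute magnitude at the adopted Virgo distance,
    corrected for foreground reddening."""
    return m_app - DM - EXT_COEFF[band] * ebv
```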
We note that, although the NGVS is well-suited for their detection, ultra-compact dwarfs [UCDs] are excluded from our galaxy sample for two reasons. First, they have largely been omitted from previous analyses of the RS. Second, the nature of these objects is unsettled. While many are likely the remnants of tidally-stripped galaxies (e.g. Bekki et al. 2003; Drinkwater et al. 2003; Pfeffer & Baumgardt 2013; Seth et al. 2014), the contribution of large GCs to this population remains unclear. Readers interested in the photometric properties of the UCD population uncovered by the NGVS are referred to Liu et al. (2015) for those found in the core region; still, we include UCDs in our comparisons of the colors of RS galaxies and CSS in Section 5.
THE RED SEQUENCE IN THE CORE OF VIRGO
Figure 1a plots the (u * -) colors, integrated within 1.0 R e,g , of all 404 galaxies in the core of Virgo as a function of their total g -band magnitudes. One of the most striking features in this plot is the depth to which we probe galaxy colors: at its 50%-completeness limit [M g ∼ -9], the NGVS luminosity function reaches levels that have only been previously achieved in the Local Group [i.e. comparable to the Carina dSph, and only slightly brighter than Draco; Ferrarese et al. 2016a]. This is significant as integrated colors for dwarf galaxies at these scales have, until now, been highly biased to the local volume [D ≤ 4 Mpc], incomplete, and noisy (e.g. Johnson et al. 2013). The NGVS CMD therefore represents the most extensive one to date based on homogeneous photometry, spanning a factor of 2 × 10 5 in luminosity.
Also interesting about Fig. 1a is the dearth of blue galaxies in the core of Virgo. This is more apparent in Figure 1b, where we plot histograms of (u * -) in four bins of luminosity. Three of the four samples are well described as unimodal populations rather than the bimodal color distributions typically found in large galaxy surveys (e.g. Baldry et al. 2004). The absence of a strong color bimodality in Virgo's core is not surprising though (Balogh et al. 2004; Boselli et al. 2014) and suggests that most of these galaxies have been cluster members long enough to be quenched by the environment3. The minority of blue galaxies we find may be members that are currently making their first pericentric passage or are non-core members projected along the line-of-sight. Since our interest lies in the RS, we have inspected three-color images for every galaxy and exclude from further analysis 24 that are clearly star-forming [blue points in Fig. 1a]. Also excluded are the 56 galaxies that fall below our completeness limit [grey points], 16 whose imaging suffers from significant contamination [e.g. scattered light from bright stars; green points], and 4 that are candidate remnants of tidal stripping [red points]. While we cannot rule out a contribution by reddening to the colors of the remaining galaxies, their three-color images do not indicate a significant frequency of dust lanes.
3 Galaxy mass and possible delay times likely factor into this disagreement.
Figure 2 plots all ten CMDs for quiescent galaxy candidates in Virgo's core, where the colors again correspond to 1.0 R e,g. Having culled the star-forming galaxies, we can straightforwardly study the shape of the RS as a function of wavelength. In each panel of Fig. 2 we observe a clear trend, whereby for M g ≲ -14, colors become bluer towards fainter magnitudes. To help trace this, we have run the Locally Weighted Scatterplot Smoothing algorithm (LOWESS; Cleveland 1979) on each CMD; these fits are represented by the red lines in the figure. The observed trends are notable given that optical colors are only marginally sensitive to the metallicities of composite stellar populations with Z ≲ 0.1 Z⊙. Simple comparisons of our LOWESS curves to stellar population models suggest that, for M g ≲ -14, metallicity increases with luminosity along the RS [see Fig. 9]; age trends are harder to discern with the colors available to us. A metallicity-luminosity relation for RS galaxies agrees with previous work on the stellar populations of ETGs throughout Virgo (Roediger et al. 2011b) and the quiescent galaxy population at large (e.g. Choi et al. 2014). Our suggestion though is based on fairly restrictive assumptions about the star formation histories of these galaxies [i.e. exponentially-declining, starting ≥8 Gyr ago]; more robust results on age and metallicity variations along the RS in Virgo's core from a joint UV-optical-NIR analysis will be the subject of future work.
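For readers wishing to reproduce the non-parametric fits, the following sketch applies LOWESS to a single CMD using statsmodels; the smoothing fraction is an assumed value rather than the one adopted in our analysis.

```python
# Sketch of the non-parametric RS fits: LOWESS (Cleveland 1979) run on one CMD.
import numpy as np
import statsmodels.api as sm

def fit_red_sequence(abs_mag_g, color, frac=0.4):
    """Return the LOWESS-smoothed color as a function of M_g'.
    `frac` (the local smoothing fraction) is a guess, not the adopted value."""
    fitted = sm.nonparametric.lowess(color, abs_mag_g, frac=frac, it=3,
                                     return_sorted=True)
    return fitted[:, 0], fitted[:, 1]   # (sorted M_g', smoothed color)
```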
A flattening at the bright end of the RS for Virgo ETGs was first identified by Ferrarese et al. (2006) and later confirmed in several colors by Janz & Lisker (2009, hereafter JL09). This seems to be a ubiquitous feature of the quiescent galaxy population, based on CMD analyses for nearby galaxies (Baldry et al. 2004; Driver et al. 2006). This flattening may also be present in our data, beginning at M g ∼ -19, but the small number of bright galaxies in the core makes it difficult to tell. Also, this feature does not appear in colors involving the z-band, but this could be explained by a plausible error in this measurement for M87 [e.g. 0.1 mag], the brightest galaxy in our sample.
The flattening seen at bright magnitudes implies that the RS is non-linear. A key insight revealed by the LOWESS fits in Fig. 2 is that the linearity of the RS also breaks down at faint magnitudes, in all colors. The sense of this non-linearity is that, for M g ≳ -14, the local slope is shallower than at brighter magnitudes, even flat in some cases [e.g. u * -g ; see Appendix].
Figure 2. CMDs for quiescent galaxies in Virgo's core, in all ten colors measured by the NGVS. Fluxes have been measured consistently in all five bands within apertures corresponding to the 1.0 R e,g isophote of each galaxy. Black points represent individual galaxies while red lines show non-parametric fits to the data. The RS defines a color-magnitude relation in all colors that flattens at faint magnitudes, which could be explained by a constant mean age and metallicity for the lowest-mass galaxies in this region [albeit with significant scatter; but see Fig. 3]. Representative errors for the same magnitude bins as in Fig. 1 are shown in each panel.
For several colors [e.g. r -i], the LOWESS fits suggest that the behavior at the faint end of the RS may be even more complex, but the scale of such variations is well below the photometric errors [see Fig. 3]. JL09 found that the color-magnitude relation [CMR] of ETGs also changes slope in a similar manner, but at a brighter magnitude than us [M g ∼ -16.5]; we address this discrepancy in Section 4.
An implication of the faint-end flattening of the RS is that the low-mass dwarfs in Virgo's core tend to be more alike in color than galaxies of higher mass. This raises the question of whether the scatter at the faint-end of the RS reflects intrinsic color variations or just observational errors. We address this issue in Figure 3 by comparing the observed scatter in the total colors to error estimates based on the artificial galaxies mentioned in Section 2. Shown there are LOWESS fits to the data and the rms scatter about them [solid and dashed lines, respectively], and the scatter expected from photometric errors [dotted lines]. Both types of scatter have been averaged within three bins of magnitude: -15 < M g ≤ -13, -13 < M g ≤ -11, and -11 < M g ≤ -9; the comparison does not probe higher luminosities because our artificial galaxy catalog was limited to g > 16, by design. We generally find that the scatter and errors both increase towards faint magnitudes and that the two quantities match well, except in the brightest bin, where the scatter mildly exceeds the errors. For the other bins however, the intrinsic scatter must be small, strengthening the assertion that the faintest galaxies possess uniform colors [to within 0.05 mag] and, possibly, stellar populations. Deeper imaging will be needed to improve the constraints on genuine color variations at these luminosities.
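The scatter test of Fig. 3 can be summarized by a short calculation like the one below, which measures the rms of the color residuals about the RS in the same three magnitude bins and compares it with the mean photometric error expected from the artificial-galaxy experiments; the input arrays are hypothetical.

```python
# Sketch of the Fig. 3 comparison: observed rms about the LOWESS fit versus the
# error expected from the artificial-galaxy tests, per magnitude bin.
import numpy as np

BINS = [(-15.0, -13.0), (-13.0, -11.0), (-11.0, -9.0)]

def binned_scatter(mag, color, rs_mag, rs_color, phot_err):
    """Return (observed scatter, expected error) in each bin; `rs_mag` must be
    sorted ascending for the interpolation to be valid."""
    resid = color - np.interp(mag, rs_mag, rs_color)
    out = []
    for lo, hi in BINS:
        sel = (mag > lo) & (mag <= hi)
        out.append((np.std(resid[sel]), np.mean(phot_err[sel])))
    return out
```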
The last topic examined in this section is the effect of aperture size on galaxy color. Our most important result, the flattening of the RS at faint magnitudes, is based on galaxy colors integrated within their half-light radii. Aperture effects could be significant in the presence of radial color gradients, as suggested by Driver et al. (2006), and therefore bias our inferences on the shape of the RS. In Figure 4 we show LOWESS fits to the u * -g and g -z RSs for colors measured within 0.5 R e,g , 1.0 R e,g , 2.0 R e,g , and 3.0 R e,g . These particular colors are chosen because, in the absence of deep UV and NIR photometry 4 , they provide the only leverage on stellar populations for the full NGVS dataset. We also include measurements of the scatter about these fits for the 0.5 R e,g and 3.0 R e,g apertures, represented by the shaded envelopes.
The top panel of Fig. 4 shows that u * -g changes by at most 0.04-0.06 mag at M g ≤ -17 between consecutive aperture pairs. Two-sample t-tests of linear fits to the data indicate that these differences are significant at the P = 0.01 level. Conversely, hardly any variation is seen between apertures for galaxies with M g > -16. The bottom panel of Fig. 4 demonstrates that g -z changes little with radius in most of our galaxies. Slight exceptions are the 0.5 R e,g colors for galaxies with M g ≤ -16, which differ from the 2.0 and 3.0 R e,g colors by 0.04 mag. The 1.0 R e,g colors bridge this gap, following the 0.5 R e,g sequence at M g -17 and moving towards the other sequences for brighter magnitudes.
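The significance quoted above can be assessed with a two-sample t-test on the slopes of straight-line fits to the RS measured in two apertures, as sketched below; this is one plausible implementation of such a test, not necessarily the exact procedure we used.

```python
# Sketch: compare the slopes of linear fits to the RS in two different apertures.
import numpy as np
from scipy import stats

def slope_with_error(x, y):
    res = stats.linregress(x, y)
    return res.slope, res.stderr, len(x)

def compare_slopes(x1, y1, x2, y2):
    b1, se1, n1 = slope_with_error(x1, y1)
    b2, se2, n2 = slope_with_error(x2, y2)
    t = (b1 - b2) / np.hypot(se1, se2)
    dof = n1 + n2 - 4                      # two parameters per fit
    p = 2.0 * stats.t.sf(abs(t), dof)      # two-sided p-value
    return t, p
```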
The changes in the RS with galactocentric radius imply the existence of negative color gradients within specific regions of select galaxies. The strongest gradients are found for u * -g within bright galaxies, inside 2.0 R e,g , while galaxies with M g > -15 have little-to-none in either color. Mild negative gradients are seen in g -z between 0.5 and 1.0 R e,g for galaxies with M g < -17, consistent with previous work on the spatially-resolved colors of galaxies throughout Virgo (Roediger et al. 2011a). The most important insight though from Fig. 4 is that the flattening of the RS at faint magnitudes does not apply to a specific aperture. The implications of those gradients we do detect in our galaxies, in terms of stellar populations and comparisons with galaxy formation models, will be addressed in Section 7.
4 UV and deep NIR imaging of the Virgo cluster exist (Boselli et al. 2011; Muñoz et al. 2014) but can only aid us for brighter galaxies and select fields, respectively.
Figure 4. RS in (u * -g ) and (g -z ), for different sizes of aperture used to measure galaxy colors. All four curves consider the same sample of galaxies. The choice of aperture has an impact on the slope of the RS at M g ≲ -16 mag for (u * -g ), with smaller apertures yielding steeper slopes, while the RS is more stable in (g -z ). The shaded envelopes represent the scatter about the RS for the 0.5 R e,g and 3.0 R e,g apertures.
COMPARISON TO PREVIOUS WORK
Before discussing the implications of our results, over the next two sections we compare our RS to earlier/ongoing work on the colors of Virgo galaxies and CSS, starting with the former. Of the several studies of the galaxy CMD in Virgo (Bower et al. 1992; Ferrarese et al. 2006; Chen et al. 2010; Roediger et al. 2011a; Kim et al. 2010), that of JL09 is the most appropriate for our purposes. JL09 measured colors for 468 ETGs from the Virgo Cluster Catalog (Binggeli et al. 1985), based on ugriz imaging from SDSS DR5 (Adelman-McCarthy et al. 2007). Their sample is spread throughout the cluster and has B < 18.0. Most interestingly, they showed that these galaxies trace a non-linear relation in all optical CMDs, not unlike what we find for faint members inhabiting the centralmost regions.
In Figure 5 we overlay the u * -g CMD from JL09 against our own, measured within 1.0 R e,g ; the comparison is appropriate since JL09 measured colors within their galaxies' r-band half-light radii. We have transformed JL09's photometry to the CFHT/MegaCam system following Equation 4 in Ferrarese et al. (2012). The top panel shows all objects from both samples, along with respective LOWESS fits, while the bottom is restricted to the 62 galaxies in common to both. We focus on the u * -g color because of its importance to stellar population analyses; indeed, this is a reason why accurate u*-band photometry was a high priority for the NGVS.
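Schematically, the conversion of the JL09 photometry takes the linear, color-term form sketched below; the coefficients shown are illustrative placeholders and should not be mistaken for the actual Equation 4 of Ferrarese et al. (2012).

```python
# Sketch of an SDSS -> CFHT/MegaCam conversion with color terms.
# Coefficients are placeholders only, not the published transformation.
def sdss_to_megacam_u(u_sdss, g_sdss):
    return u_sdss - 0.24 * (u_sdss - g_sdss)   # placeholder coefficient

def sdss_to_megacam_g(g_sdss, r_sdss):
    return g_sdss - 0.15 * (g_sdss - r_sdss)   # placeholder coefficient
```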
The most notable feature in the top panel of Fig. 5 is the superior depth of the NGVS relative to the SDSS, an extension of ∼5 mag. There is a clear difference in scatter between the two samples, with that for JL09 increasing rapidly for M g > -18, whereas the increase for the NGVS occurs much more gradually5 [cf. Fig. 3; see Fig. 1 -16.5]. The shallower slopes found by JL09 at both ends of their CMR are seen for other colors and so cannot be explained by limitations/biases in the SDSS u-band imaging. The shallower slope at bright magnitudes substantiates what was hinted at in Fig. 2 and is more obvious in JL09 since their sample covers the full cluster6 ; the existence of this feature is also well-known from SDSS studies of the wider galaxy population (e.g. Baldry et al. 2004). The lower zeropoint of the JL09 CMR is seen in other colors too, hinting that calibration differences between SDSS DR5 and DR7 are responsible, where the NGVS is anchored to the latter (Ferrarese et al. 2012).
Lastly, the LOWESS fits in Fig. 5 indicate that, between -19 ≲ M g ≲ -16.5, the JL09 CMR has a steeper slope than the NGVS RS. This difference is significant [P = 0.01] and holds for other u * -band colors as well. This steeper slope forms part of JL09's claim that the ETG CMR flattens at M g ≳ -16.5, a feature not seen in our data. Since JL09 selected their sample based on morphology, recent star formation in dwarf galaxies could help create their steeper slope. For one, the colors of many galaxies in the JL09 sample overlap with those flagged in our sample as star-forming. Also, Kim et al. (2010) find that dS0s in Virgo follow a steeper UV CMR than dEs and have bluer UV-optical colors at a given magnitude. We are therefore not surprised that we do not observe the flattening detected by JL09.
Recent star formation cannot solely explain why JL09 find a steeper slope at intermediate magnitudes though. The bottom panel of Fig. 5 shows that, for the same galaxies, JL09 measure systematically bluer u * -g colors; moreover, this difference grows to fainter magnitudes, creating a steeper CMR. Comparisons of other colors [e.g. g -] and the agreement found therein proves that this issue only concerns JL09's u-band magnitudes. The stated trend in the color discrepancy appears inconsistent with possible errors in our SDSS-MegaCam transformations. Aperture effects can also be ruled out since the differences in size scatter about zero and never exceed 25% for any one object; besides, Fig. 4 demonstrates that color gradients in u * -g are minimal at faint magnitudes. A possible culprit may be under-subtracted backgrounds in JL09's u-band images since they performed their own sky subtraction. Therefore, we suggest that the differences between the JL09 CMR and NGVS RS for M g > -19 can be explained by: (i) a drop in the red fraction amongst Virgo ETGs between -19 M g -16.5, and (ii) JL09's measurement of systematically brighter u-band magnitudes. Despite this disagreement, these comparisons highlight two exciting aspects about the NGVS RS [and the photometry overall]: (i) it extends several magnitudes deeper than the SDSS, and (ii) the photometric errors are well-controlled up to the survey limits.
COMPARISON TO COMPACT STELLAR SYSTEMS
The NGVS is unique in that it provides photometry for complete samples of stellar systems within a single global environment, including galaxies, GCs, galactic nuclei, and UCDs. These systems are often compared to one another through their relative luminosities and sizes (e.g. Burstein et al. 1997; Misgeld & Hilker 2011; Brodie et al. 2011), whereas their relative stellar contents, based on homogeneous datasets, are poorly known. Given the depth of the NGVS RS, we have a unique opportunity to begin filling this gap by examining the colors of faint dwarfs and CSS at fixed luminosity.
Our samples of GCs, nuclei, and UCDs are drawn from the catalogs of Peng et al. (in preparation), F16, and Zhang et al. (2015), respectively; complete details on the selection functions for these samples may be found in those papers. Briefly though, GCs and UCDs were both identified via magnitude cuts and the u*iK diagram (Muñoz et al. 2014), and separated from each other through size constraints [r h ≥ 11 pc for UCDs]. The validity of candidate GCs is assessed probabilistically and we use only those having a probability > 50%. All UCDs in the Zhang et al. (2015) catalog are spectroscopically-confirmed cluster members. Lastly, galactic nuclei were identified by visual inspection of the image cutouts for each galaxy and modelled in the 1D surface brightness profiles with Sérsic functions. For our purposes, we only consider those objects classified as unambiguous or possible nuclei in the F16 catalog.
In Figure 6 we plot the CMDs of galaxies and CSS in Virgo's core [left-hand side] and the color distributions for objects with g > 18 [right-hand side]; u * -g colors are shown in the upper row and g -ı in the lower. Note that we have truncated the CSS samples to 18 < g < 22 so that our comparisons focus on a common luminosity range.
An obvious difference between the distributions for galaxies and CSS at faint luminosities is the latter's extension to very red colors, whereas the former is consistent with a single color [Fig. 3]. This is interesting given that CSS have a higher surface density than the faint dwarfs in Virgo's core, suggesting that, at fixed luminosity, diffuse systems are forced to be blue while concentrated systems can have a wide range of colors. The nature of red CSS is likely explained by a higher metal content, since metallicity more strongly affects the colors of quiescent systems than age [see Fig. 9]. Also, the Spearman rank test suggests that nuclei follow CMRs in both u * -g [ρ = -0.57; p = 4 × 10 -5 ] and g -ı [ρ ∼ -0.5; p = 6 × 10 -4 ], hinting at a possible mass-metallicity relation for this population. A contribution of density to the colors of CSS is not obvious though given that many [if not most] of them were produced in the vicinity of higher-mass galaxies, and so may owe their enrichment to their local environments. The as-yet uncertain nature of UCDs as either the massive tail of the GC population or the bare nuclei of stripped galaxies also raises ambiguity on what governs their stellar contents, be it due to internal or external factors [i.e. self-enrichment versus enriched natal gas clouds].
While it is possible for CSS to be quite red for their luminosities, the majority of them have bluer colors, in both u * -g and g -i, that agree better with those of faint RS galaxies. Closer inspection of the right half of Fig. 6 reveals some tensions between the populations though. KS tests indicate that the null hypothesis of a common parent distribution for galaxies and GCs is strongly disfavored for u * -g and g -i [p < 10^-10], whereas conclusions vary for UCDs and nuclei depending on the color under consideration [p(u * -g) ∼ 0.09 and p(g -i) < 10^-4 for UCDs; p(u * -g) ∼ 0.007 and p(g -i) ∼ 0.07 for nuclei]. The tails in the distributions for the CSS play an important role in these tests, but their removal only brings about consistency for the nuclei. For instance, clipping objects with u * -g ≥ 1.2 increases the associated p-values to 0.18, 0.17, and 0.04 for UCDs, nuclei, and GCs, respectively, while p changes to ∼10^-4, 0.65, and < 10^-4 by removing objects with g -i ≥ 0.85. We have also fit skewed normal distributions to each dataset, finding consistent mean values between galaxies and CSS [except GCs, which have a larger value in g -i], while the standard deviations for galaxies are typically larger than those for CSS. The evidence for common spectral shapes between the majority of CSS and faint galaxies in the core of Virgo is therefore conflicting. An initial assessment of the relative stellar contents within these systems, and potential trends with surface density and/or local environment, via a joint UV-optical-NIR analysis is desirable to pursue this subject further (e.g. Spengler et al., in preparation).
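The distribution comparisons above amount to repeated two-sample KS tests and skew-normal fits, which could be organized as in the following sketch; the input arrays are hypothetical and the code is only illustrative.

```python
# Sketch of the galaxy-vs-CSS color comparisons: KS tests with optional clipping
# of the red tail, plus skew-normal summaries of each distribution.
import numpy as np
from scipy import stats

def compare_populations(col_gal, col_css, clip=None):
    """Return the two-sample KS p-value, optionally removing CSS objects
    redder than `clip` (e.g. clip=1.2 for u*-g')."""
    css = col_css if clip is None else col_css[col_css < clip]
    return stats.ks_2samp(col_gal, css).pvalue

def skewnormal_summary(colors):
    """Fit a skew-normal and return its mean and standard deviation."""
    a, loc, scale = stats.skewnorm.fit(colors)
    dist = stats.skewnorm(a, loc=loc, scale=scale)
    return dist.mean(), dist.std()
```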
COMPARISON TO GALAXY FORMATION MODELS
As stated earlier, colors allow us to test our understanding of the star formation histories and chemical evolution of galaxies; scaling relations therein; and ultimately the physics governing these processes. Here we explore whether current galaxy formation models plausibly explain these subjects by reproducing the RS in the core of Virgo. The main novelty of this comparison lies in its focus on the oldest and densest part of a z ∼ 0 cluster, where members have been exposed to extreme external forces, on average, for several Gyr (Oman et al. 2013). The nature of our sample dictates that this comparison is best suited for galaxies of intermediate-to-low masses, although we still include high-mass systems for completeness. Unless otherwise stated, when discussing the slope of the RS, we are referring to the interval -19 ≲ M g ≲ -15, where its behavior is more or less linear.
We compare our results to three recent models of galaxy formation: one SAM (Henriques et al. 2015, hereafter H15) and two hydrodynamic (Illustris and EAGLE; Vogelsberger et al. 2014; Schaye et al. 2015). H15 significantly revised the L-Galaxies SAM, centered on: (i) increased efficiency of radio-mode AGN feedback; (ii) delayed reincorporation of galactic winds [scaling inversely with halo mass]; (iii) reduced density threshold for star formation; (iv) AGN heating within satellites; and (v) no ram pressure stripping of hot halo gas in low-mass groups. H15 built their model on the Millennium I and II cosmological N-body simulations (Springel et al. 2005; Boylan-Kolchin et al. 2009), enabling them to produce galaxies over a mass range of 10^7 < M* < 10^12 M⊙.
Figure 6 (caption, in part). Since our intent is to compare these stellar systems within a common magnitude range, only those CSS having 18 < g < 22 are plotted. Representative errors for each population at faint magnitudes are included at bottom-left. (bottom row) As above but for the g -i color. At faint magnitudes, comparatively red objects are only found amongst the CSS populations; their colors are likely caused by a higher metal content than those for galaxies of the same luminosity.
Their revisions helped temper the persistent issues of SAMs having too large a blue and red fraction at high and low galaxy masses, respectively (Guo et al. 2011; Henriques et al. 2013). Illustris consists of eight cosmological N-body hydro simulations, each spanning a volume of ∼100^3 Mpc^3, using the moving-mesh code AREPO. This model includes prescriptions for gas cooling; stochastic star formation; stellar evolution; gas recycling; chemical enrichment; [kinetic] SNe feedback; supermassive black hole [SMBH] seeding, accretion and mergers; and AGN feedback. The simulations differ in terms of the resolution and/or particle types/interactions considered; we use the one having the highest resolution and a full physics treatment. EAGLE comprises six simulations with a similar nature to Illustris but run with a modified version of the SPH code GADGET 3 instead. The simulations differ in terms of resolution, sub-grid physics, or AGN parameterization, where the latter variations produce a better match to the z ∼ 0 stellar mass function and high-mass galaxy observables, respectively. The fiducial model [which we adopt] includes radiative cooling; star formation; stellar mass loss; feedback from star formation and AGN; and accretion onto and mergers of SMBHs. Modifications were made to the implementations of stellar feedback [formerly kinetic, now thermal], gas accretion by SMBHs [angular momentum now included], and the star formation law [metallicity dependence now included]. The galaxy populations from Illustris and EAGLE both span a range of M* ≳ 10^8.5 M⊙.
We selected galaxies from the z = 0.0 snapshot of H15
Figure 7. Comparison of the NGVS RS to those from galaxy formation models, with gray circles marking the positions of the observed galaxies. The shaded region surrounding each model curve indicates the 1-σ scatter, measured in five bins of luminosity. Curves for Illustris do not appear in panels showing u * -band colors since their subhalo catalogs lack those magnitudes. In every color, models uniformly predict a shallower slope for the RS than is observed in cluster cores.
that inhabit massive halos [M h > 10^14 M⊙], have non-zero stellar masses, are quenched [sSFR < 10^-11 yr^-1] and bulge-dominated [B/T > 0.5, by mass]; the last constraint aims to weed out highly-reddened spirals. We query the catalogs for both the Millennium I and II simulations, where the latter fills in the low-mass end of the galaxy mass function, making this sample of model galaxies the best match to the luminosity/mass range of our dataset. Similar selection criteria were used to obtain our samples of Illustris and EAGLE galaxies, except that involving B/T since bulge parameters are not included with either simulation's catalogs. We also imposed a resolution cut on Illustris such that each galaxy is populated by ≥240 star particles [minimum particle mass = 1.5 × 10^4 M⊙]. A similar cut is implicit in our EAGLE selection as SEDs are only available for galaxies having M* ≳ 10^8.5 M⊙. Interestingly, most of the brightest cluster galaxies in EAGLE are not quenched, such that we make a second selection to incorporate them in our sample; no such issue is found with Illustris. Broadband magnitudes in the SDSS filters were obtained from all three models and transformed to the CFHT/MegaCam system [see Section 4]. We note that these magnitudes and the associated colors correspond to the total light of these galaxies.
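The selection cuts just described can be written compactly as boolean masks over a model catalog, as in the sketch below; the column names are hypothetical and the B/T cut is only applied where bulge masses exist [i.e. for the SAM].

```python
# Sketch of the model-galaxy selection, written for a generic structured catalog.
import numpy as np

def select_quenched_core(cat, need_bt=True):
    """Quenched, bulge-dominated galaxies in massive halos (H15-style cuts)."""
    sel = (cat["halo_mass"] > 1e14)                      # M_h > 1e14 Msun
    sel &= (cat["stellar_mass"] > 0.0)
    sel &= (cat["sfr"] / cat["stellar_mass"] < 1e-11)    # sSFR < 1e-11 / yr
    if need_bt:                                          # B/T only for the SAM
        sel &= (cat["bulge_mass"] / cat["stellar_mass"] > 0.5)
    return cat[sel]

def resolution_cut(cat, m_particle=1.5e4, n_min=240):
    """Illustris-style resolution cut: at least n_min star particles."""
    return cat[cat["stellar_mass"] >= n_min * m_particle]
```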
A final note about this comparison is that we stack clusters from each model before analysing its RS. The high densities of cluster cores make them difficult to resolve within cosmological volumes, particularly for hydro simulations, leading to small samples for individual clusters. Stacking is therefore needed to enable a meaningful analysis of the model CMD for quenched cluster-core galaxies. H15, Illustris, and EAGLE respectively yield ∼15k, 144, and 157 galaxies lying within 300 kpc of their host halos' centers, which is roughly equivalent to the projected size of Virgo's core [as we define it]. Note that the much larger size of the H15 sample is explained by the greater spatial volume it models and the fainter luminosities reached by SAMs [M g ≤ -12, compared to M g -15 for hydro models].
In Figure 7 we compare the RS from Fig. 2 [black] to those from H15 [red], Illustris [green], and EAGLE [blue], where the curves for the latter were obtained in identical fashion to those for the NGVS. The shaded regions about each model RS convey the 1σ scatter within five bins of luminosity. The Illustris RS does not appear in the panels showing u * -band colors since their catalogs lack SDSS u-band magnitudes.
The clear impression from Fig. 7 is that no model reproduces the RS in Virgo's core, with model slopes being uniformly shallower than observed. Two-sample t-tests of linear fits to the data and models show that these differences are significant at the P = 0.01 level, except for the case of the EAGLE models and g -color [P = 0.09]. Further, the H15 RS exhibits no sign of the flattening we observe at faint magnitudes; the hydro models unfortunately lack the dynamic range needed to evaluate them in this regard.
The model RSs also differ from one another to varying degrees. First, H15 favors a shallower slope than the hydro models. Second, the color of the H15 RS monotonically reddens towards bright magnitudes whereas the hydro RSs turn over sharply at M g ≲ -19. EAGLE and Illustris agree well except for the ubiquitous upturn at faint magnitudes in the latter's RS [marked with dashed lines]. These upturns are created by the resolution criterion we impose on the Illustris catalog and should be disregarded. Underlying this behavior is the fact that lines of constant M* trace out an approximate anti-correlation in color-magnitude space (Roediger & Courteau 2015), a pattern clearly seen when working with richer samples from this model [e.g. galaxies from all cluster-centric radii]. Third, the scatter in H15 is typically the smallest and approximately constant with magnitude, whereas those of the hydro models are larger and increase towards faint magnitudes, more so for Illustris. Given that we find little intrinsic scatter in the NGVS RS at M g > -15 [Fig. 3], H15 appears to outperform the hydro models in this regard, although we can only trace the latter's scatter to M g ∼ -15. Other differences between Illustris and EAGLE appear for the colors g -i, r -i, and i -z, in terms of turnovers, slopes and/or zeropoints, all of which are significant [P = 0.01]. It is worth noting that while Fig. 7 references colors measured within 1.0 R e,g for NGVS galaxies [to maximize their numbers], the agreement is not much improved if we use colors from larger apertures.
The conflicting shapes of the RS from data and models could be viewed in one of two ways: (i) the core of Virgo is somehow special, or (ii) models fail to reproduce the evolution of cluster-core galaxies. To help demonstrate that the latter is more probable, we compare the same models against a separate dataset for nearby clusters. WINGS (Fasano et al. 2002, 2006) is an all-sky survey of a complete, X-ray selected sample of 77 galaxy clusters spread over a narrow redshift range [z = 0.04-0.07]. Valentinuzzi et al. (2011) measured the slope of the RS for 72 WINGS clusters using BV photometry for galaxies in the range -21.5 ≤ M V ≤ -18. We have done likewise for each well-populated [N > 100] model cluster, using the Blanton & Roweis (2007) filter transformations to obtain BV photometry from SDSS gr-band magnitudes.
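For the comparison with WINGS, the slope measurement can be sketched as follows: convert model gr magnitudes to BV and fit a straight line to the RS over -21.5 ≤ M V ≤ -18. The gr-to-BV coefficients shown are a commonly quoted Lupton-style transformation used here purely as a stand-in for the Blanton & Roweis (2007) relations.

```python
# Sketch of the RS slope measurement for a single (model) cluster.
import numpy as np
from scipy import stats

def gr_to_BV(g, r):
    """Illustrative Lupton-style gr -> BV transformation (stand-in only)."""
    B = g + 0.3130 * (g - r) + 0.2271
    V = g - 0.5784 * (g - r) - 0.0038
    return B, V

def rs_slope(M_g, M_r, n_min=100):
    """Slope of the B-V vs M_V relation over -21.5 <= M_V <= -18."""
    B, V = gr_to_BV(M_g, M_r)
    sel = (V >= -21.5) & (V <= -18.0)
    if sel.sum() < n_min:
        return np.nan                      # skip poorly populated clusters
    return stats.linregress(V[sel], (B - V)[sel]).slope
```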
Figure 8 compares the distribution of RS slopes from WINGS and galaxy formation models, with the dashed line in the top panel indicating the value in Virgo's core, which fits comfortably within the former. Each model distribution is shown for the two closest snapshots to the redshift limits of the WINGS sample. In the case of H15 and Illustris, these snapshots bracket the WINGS range quite well, whereas the redshift sampling of EAGLE is notably coarser. The latter fact may be important to explaining the difference between the two distributions for this model, since z = 0.1 corresponds to a look-back time of ∼1.3 Gyr. On the other hand, H15 and Illustris suggest that the RS slope does not evolve between z = 0.07/0.08 and 0.03. We have not tried to link model clusters across redshifts as parsing merger trees lies beyond the scope of this work. Observations though support the idea of a static slope in clusters over the range z = 0 -1 (Gladders et al. 1998;Stanford et al. 1998;Blakeslee et al. 2003;Ascaso et al. 2008).
Fig. 8 demonstrates that the distributions for the WINGS and model clusters are clearly incompatible, with the models, on average, preferring a shallower slope for the RS. The sense of this discrepancy is the same as that seen in Fig. 7 between the core of Virgo and the models. A caveat with the comparisons to WINGS though is that the model slopes have all been measured in the respective rest-frames of the clusters. In other words, the model slopes could be biased by differential redshifting of galaxy colors as a function of magnitude [e.g. fainter galaxies reddened more than brighter ones]. To address this, we have simulated the effect of k-corrections using the median of the EAGLE distribution at z = 0.1, finding it would steepen this cluster's RS by -0.01. While significant, we recall that the redshift range for the WINGS sample is z = 0.04 -0.07, such that the mean k-correction to the model slopes is likely smaller than this value and would therefore not bring about better agreement.
Given the value of the above comparisons for testing galaxy formation models, we provide in the Appendix parametric fits to the NGVS RS in every color [measured at 1 R e,g ]. These fits reproduce our LOWESS curves well and enable the wider community to perform their own comparisons.
DISCUSSION
Figure 1 indicates that >90% of the galaxy population within the innermost ∼300 kpc of the Virgo cluster has likely been quenched of star formation. This makes the population ideal for studying the characteristics of the RS, such as its shape and intrinsic scatter. Our analysis demonstrates that, in all optical colors, the RS is (a) non-linear and (b) strongly flattens in the domain of faint dwarfs. The former behavior had already been uncovered in Virgo, albeit at the bright end (Ferrarese et al. 2006;JL09), while the latter, which is new, begins at -14 < M g < -13 [see Appendix], well above the completeness limit of the NGVS. No correlation is observed between color and surface brightness, in bins of luminosity, for M g > -15, implying that the faint-end flattening is not the result of bias or selection effect.
The RS follows the same general shape at M g < -14 in each color, which may have implications for trends in the stellar populations of these galaxies. Assuming that bluer [e.g. u * -g ] and redder [e.g. g -z ] colors preferentially trace mean age and metallicity (Roediger et al. 2011b), respectively, the decrease in color towards faint magnitudes over the range -19 ≲ M g ≤ -14 hints that the populations become younger and less enriched (consistent with downsizing; Nelan et al. 2005), with two exceptions. The flattening at bright magnitudes, seen better in samples that span the full cluster (JL09) and the global galaxy population (Baldry et al. 2004), signals either a recent burst of star formation within these galaxies or an upper limit to galactic chemical enrichment. The latter seems more likely given that the stellar mass-metallicity relation for galaxies plateaus at M* ≳ 10^11.5 M⊙ (Gallazzi et al. 2005). The other exception concerns the flattening at the faint-end of the RS.
7.1. What Causes the Faint-End Flattening of the RS?
If colors reasonably trace stellar population parameters [see next sub-section], then arguably the most exciting interpretation suggested by the data is that the faint dwarfs in Virgo's core have a near-uniform age and metallicity, over a range of ∼3-4 magnitudes. This would imply that the known stellar population scaling relations for quiescent galaxies of intermediate-to-high mass (e.g. Choi et al. 2014) break down at low masses [below ∼4 × 10^7 M⊙; see Appendix] and, more fundamentally, that the physics governing the star formation histories and chemical enrichment of galaxies decouples from mass at these scales.
Given the nature of our sample, the above scenario begs the questions of whether the faint-end flattening of the RS is caused by the environment, and if so, when and where the quenching occurs. While Geha et al. (2012) make the case that dwarfs with M* < 10^9 M⊙ must essentially be satellites in order to quench (also see Slater & Bell 2014; Phillips et al. 2015; Davies et al. 2016), we know little of the efficiency and timescale of quenching at low satellite masses and as a function of host halo mass. Using Illustris, Mistani et al. (2016) showed that, on average, the time to quench in low-mass clusters decreases towards low satellite masses, from ∼5.5 Gyr to ∼3 Gyr, over the range 8.5 ≲ log M* ≲ 10. Slater & Bell (2014) combine measurements of Local Group dwarfs with N-body simulations to suggest that, in such groups, galaxies of M* ≲ 10^7 M⊙ quench within 1-2 Gyr of their first pericenter passage. However, Weisz et al. (2015) compared HST/WFPC2 star formation histories to predicted infall times based on Via Lactea II (Diemand et al. 2008), finding that many dwarfs in the Local Group likely quenched prior to infall.
In addition to reionization, pre-processing within smaller host halos may play a key role in explaining why many Local Group dwarfs ceased forming stars before their accretion. Likewise, pre-processing must also be considered when trying to understand issues pertaining to quenching of cluster galaxies (e.g. McGee et al. 2009;De Lucia et al. 2012;Wetzel et al. 2013;Hou et al. 2014;Taranu et al. 2014), such as the cause of Virgo's flattened RS at faint magnitudes. Wetzel et al. (2013) deduced where satellites of z = 0 groups/clusters were when they quenched their star formation, by modelling SDSS observations of quiescent fractions with mock catalogs. They found that for host halo masses of 10 14-15 M the fraction of satellites that quenched via pre-processing increases towards lower satellite masses, down to their completeness limit of M * ∼ 7 × 10 9 M , largely at the expense of quenching in-situ. Extrapolating this trend to lower satellite masses suggests that the majority of the quiescent, low-mass dwarfs in Virgo were quenched elsewhere. This suggestion is consistent with abundance matching results for our sample (Grossauer et al. 2015), which indicate that only half of the core galaxies with M * = 10 6-7 M were accreted by z ∼ 1 (see also Oman et al. 2013).
Assuming that the flattening of the RS reflects an approximate homogeneity in stellar contents [i.e. constant mean age] and isolated low-mass dwarfs have constant star formation histories (e.g. Weisz et al. 2014), then the low-mass dwarfs in Virgo's core must have quenched their star formation coevally. Moreover, when coupled with a significant contribution by pre-processing, it is implied that these galaxies are highly susceptible to environmental forces, over a range of host masses. This seems plausible given the very high quiescent fractions [>80%] for satellites between 10^6 < M*/M⊙ < 10^8 within the Local Volume (Phillips et al. 2015), which has led to the idea of a threshold satellite mass for effective environmental quenching (Geha et al. 2012; Slater & Bell 2014).
If synchronized quenching of low-mass dwarfs in groups [at least to ∼10 12 M ] leads to a flattened faint-end slope of the core RS, we should expect to find the same feature for dwarfs drawn from larger cluster-centric radii. This follows from the fact that a satellite's cluster-centric radius correlates with its infall time (De Lucia et al. 2012) and that the fraction of satellites accreted via groups increases towards low redshift (McGee et al. 2009). Studying the properties of the RS as a function of cluster-centric position (e.g. see Sánchez-Janssen et al. 2008) will be the focus of a future paper in the NGVS series.
Caveats
A major caveat with the above interpretations is that optical colors are not unambiguous tracers of population parameters, especially at low metallicities (Conroy & Gunn 2010). To this point, Kirby et al. (2013) have shown that stellar metallicity increases monotonically for galaxies from [Fe/H] ∼ -2.3 at M * = 10 4 M to slightly super-solar at M * = 10 12 M . Assuming this trend holds in all environments, we can check for any conditions under which the RS would flatten at faint magnitudes. In the middle and bottom panels of Figure 9 we compare the u * -g and g -z color-mass relations in Virgo's core [black lines] to those predicted by the Flexible Stellar Population Synthesis [FSPS] model (Conroy et al. 2009), where the Kirby et al. relation [top panel] is used to assign masses to each model metallicity track and lines of constant age are colored from purple [∼2 Gyr] to red [∼15 Gyr]. Other models (e.g. Bruzual & Charlot 2003) prove inadequate for our needs due to their coarse sampling of metallicity space over the range Z ∼ 4 × 10 -4 to 4 × 10 -3 . Error bars on the NGVS relations reflect standard errors in the mean, measured within seven bins of luminosity [having sizes of 0.5-2.0 dex]. Although we assume single-burst star formation histories for this test, qualitatively similar trends are expected for more complex treatments (e.g. constant star formation with variable quenching epochs; Roediger et al. 2011b).
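The mapping from model metallicity tracks to stellar mass relies on inverting the Kirby et al. (2013) relation, as sketched below; the coefficients are quoted from memory and should be checked against that paper, so treat this purely as an illustration.

```python
# Sketch: assign stellar masses to model metallicity tracks via the (approximate)
# Kirby et al. (2013) mass-metallicity relation.
import numpy as np

def mass_from_feh(feh, a=-1.69, b=0.30):
    """Invert <[Fe/H]> = a + b*log10(M*/1e6 Msun); coefficients are indicative."""
    return 1e6 * 10.0 ** ((feh - a) / b)

# Example: masses for a grid of model metallicities (taking log Z/Zsun ~ [Fe/H]).
log_z_grid = np.array([-2.0, -1.5, -1.0, -0.5, 0.0])
masses = mass_from_feh(log_z_grid)
```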
Since the intent of Fig. 9 is to explore an alternative interpretation of the faint-end flattening of the RS, we limit our discussion to the range M* < 10^8 M⊙, but show the full relations for completeness. Within that range, we find that the data are indeed consistent with Kirby et al.'s mass-metallicity relation, provided that age does not vary greatly therein. Moreover, the color-mass relation for select ages transitions to a flatter slope at lower masses. This confirms our previous statement that it is difficult to meaningfully constrain metallicities below a certain level with optical colors [Z ≲ 10^-3 in the case of FSPS], even when ages are independently known. The inconsistent ages we would infer from the u * -g and g -z colors could likely be ameliorated by lowering the zeropoint of the Kirby et al. relation since the former color responds more strongly to metallicity for log(Z/Z⊙) ≳ -1. The comparisons shown in Fig. 9 therefore cast doubt on whether the flattening of the RS at faint magnitudes implies both a constant age and metallicity for cluster galaxies at low masses. Distinguishing between these scenarios will be more rigorously addressed in forthcoming work on the stellar populations of NGVS galaxies that incorporates UV and NIR photometry as well.
Shortcomings of Galaxy Formation Models
Regardless of the uncertainties inherent to the interpretation of optical colors, we should expect galaxy formation models to reproduce our observations if their physical recipes are correct. Our test of such models is special in that it focuses on the core of a z = 0 galaxy cluster, where the time-integrated effect of environment on galaxy evolution should be maximal. However, Fig. 7 shows that current models produce a shallower RS than observed, in all colors. This issue is not limited to Virgo's core, as Fig. 8 demonstrates that the distributions of RS slopes for entire model clusters populate shallower values than those measured for other nearby clusters. On a related note, Licitra et al. (2016) have shown that clusters at z < 1 in SAMs suffer from ETG populations with too low an abundance and too blue colors, while ∼10% of model clusters have positive RS slopes. On the other hand, Merson et al. (2016) found broad consistency between observations and SAMs in the zeropoint and slope of the RS in z > 1 clusters. This suggests that errors creep into the evolution of cluster galaxies in SAMs at z < 1.
The discrepancies indicated here follow upon similar issues highlighted by modellers themselves. H15 showed that their model produces a RS having bluer colors than observed in the SDSS for galaxies with M * ≥ 10 9.5 M . Vogelsberger et al. (2014) found the Illustris RS suffers the same problem, albeit at higher masses [M * > 10 10.5 M ], while also producing too low of a red fraction at M * < 10 11 M . Trayford et al. (2015) analyzed the colors of EAGLE galaxies, finding that its RS matches that from the GAMA survey (Taylor et al. 2015) for M r < -20.5, but is too red at fainter magnitudes. Our comparisons build on this work by drawing attention to model treatments of dense environments over cosmic time and [hopefully] incentivize modellers to employ our dataset in future work, especially as they extend their focus towards lower galaxy masses. To this end, the reader is reminded of the parametric fits to the NGVS RS provided in the Appendix.
Naturally, the root of the above discrepancies is tied to errors in the stellar populations of model galaxies. The supplementary material of H15 shows that the model exceeds the mean stellar metallicity of galaxies over the range 10^9.5 < M* ≲ 10^10 M⊙ by several tenths of a dex while undershooting measurements at 10^10.5 < M* ≲ 10^11 M⊙ by ∼0.1-0.2 dex. The issues with the H15 RS then seem to reflect shortcomings in both the star formation and chemical enrichment histories of their model galaxies. Part of the disagreement facing Illustris likely stems from the fact that their galaxies have older stellar populations than observed, by as much as 4 Gyr, for M* ≲ 10^10.5 M⊙ (Vogelsberger et al. 2014). Schaye et al. (2015) showed that EAGLE produces a flatter stellar mass-metallicity relation than measured from local galaxies due to too much enrichment at M* ≲ 10^10 M⊙. Our inspection of the stellar populations in H15 and EAGLE reveals that their cluster-core galaxies, on average, have roughly a constant mass-weighted age [∼10-11 Gyr] and follow a shallow mass-metallicity relation, with EAGLE metallicities exceeding H15 values by ∼0.3 dex7. The discrepant colors produced by models thus reflect errors in both the star formation histories and chemical enrichment of cluster galaxies; for instance, ram pressure stripping may be too effective in quenching cluster dwarfs of star formation (e.g. Steinhauser et al. 2016).
7 We omit Illustris from this discussion as their catalogs do not provide mean stellar ages of their galaxies.
Two critical aspects of the RS that modellers must aim to reproduce are the flattenings at both bright and faint magnitudes. The former is already a contentious point between models themselves, with hydro varieties producing a turnover while SAMs increase continuously [Fig. 7]. We remind the reader that our LOWESS curves are too steep for M g ≲ -19 since they essentially represent an extrapolation from intermediate magnitudes; the bright-end flattening is clearly visible in other datasets that span the full cluster and contain more such galaxies [Fig. 5]. Hydro models appear to supersede SAMs in this regard, although it may be argued that their turnovers are too sharp. In the case of EAGLE, however, it is unclear what causes this turnover as several of their brightest cluster galaxies are star-forming at z = 0 while their luminosity-metallicity relation inverts for M g ≤ -20.
At present, only SAMs have the requisite depth to check for the flattening seen at the faint end of the RS; the effective resolution of cosmological hydro models is too low to probe the luminosity function to M g ∼ -13. Fig. 7 shows that the H15 RS exhibits no obvious change in slope at faint magnitudes, let alone the pronounced flattening seen in Virgo. The faint-end flattening is a tantalizing feature of the RS that may hold new physical insights into the evolution of cluster galaxies of low mass. Addressing the absence of these features should be a focal point for future refinements of galaxy formation models.
CONCLUSIONS
We have used homogeneous isophotal photometry in the u*g′i′z′ bands for 404 galaxies belonging to the innermost ∼300 kpc of the Virgo cluster to study the CMD in a dense environment at z = 0, down to stellar masses of ∼10^6 M⊙. Our main results are:
• The majority of galaxies in Virgo's core populate the RS [red fraction ∼ 0.9];
• The RS has a non-zero slope at intermediate magnitudes [-19 < M g < -14] in all colors, suggesting that stellar age and metallicity both decrease towards lower galaxy masses, and has minimal intrinsic scatter at the faint end;
• The RS flattens at both the brightest and faintest magnitudes [M g < -19 and M g > -14, respectively], where the latter has not been seen before;
• Galaxy formation models produce a shallower RS than observed at intermediate magnitudes, for both Virgo and other nearby clusters. Also, the RS in hydrodynamic models flattens for bright galaxies while that in SAMs varies monotonically over the full range of our dataset.
The flattening of the RS at faint magnitudes raises intriguing possibilities regarding galaxy evolution and/or cluster formation. However, these hinge on whether the flattening genuinely reflects a homogeneity of stellar populations in low-mass galaxies or colors becoming a poor tracer of age/metallicity at low metallicities [e.g. log(Z/Z⊙) ≲ -1.3]. This issue will be addressed in a forthcoming paper on the stellar populations of NGVS galaxies.
APPENDIX
A topic worth exploring with our parametric fits is whether the flattening of the RS occurs at a common magnitude for all colors. This can be done with the parameter M g,0 and Table 1 shows that -14 ≤ M g,0 ≤ -13 in a large majority of cases. For g -r and i -z the transition magnitude is brighter than -14, which might be explained by the fact that these colors sample short wavelength baselines and that the RS spans small ranges therein [∼0.25 and 0.15 mag, respectively]. It is also likely that the posterior distributions for the parameters in our fit are correlated.
Another way to assess the magnitude at which the RS flattens involves measuring the local gradient along our LOWESS fits, the advantage being that this approach is non-parametric. Figure 11 shows our RSs [top panel], scaled to a common zeropoint [arbitrarily established at M g ∼ -14], and the variation of the local gradient as a function of magnitude [bottom panel]. We measure the local gradient using a running bin of 9 [thin line] or 51 [thick line] data points, with the smaller bin allowing us to extend our measurements to brighter magnitudes, where our sample is sparse.
The local gradient varies in a consistent way for all colors at M_g ≤ -12: the gradient is roughly constant and negative at bright magnitudes and becomes more positive towards faint magnitudes. The behaviors of the gradients at M_g > -12 are more irregular, as small fluctuations in the LOWESS curves are amplified when the gradients hover near zero. These behaviors are beyond the scope of this discussion, however; we are interested in the locations where the rate of change of the gradients is maximized [i.e. where the second derivatives of the RSs peak]. Disregarding the curves at M_g > -12 then, the bottom panel of Fig. 11 shows that the rate of change is maximized in the range -14 < M_g < -13, corresponding to an approximate stellar mass of ∼4 × 10^7 M_⊙ (Ferrarese et al. 2016a). The approximate synchronicity of this transition across colors adds further insight to our main result by suggesting a mass scale below which internal processes may cease to govern the stellar populations and evolution of dwarf satellites.
Figure 1. (a) (u*-g) color versus absolute g-band magnitude for the 404 galaxies in the core of Virgo. Colored points are purged from our sample of RS candidates due to obvious star formation activity [blue], our completeness limits [grey], significant image contamination [green], or suspected tidal stripping [red]. The vertical lines indicate bins of magnitude referenced in the right-hand panel, with representative errors plotted in each. (b) Color distributions within the four magnitude bins marked at left. The NGVS photometry enables a deep study of the galaxy CMD and we verify that the core of Virgo is highly deficient in star-forming galaxies.
Figure 2. CMDs for quiescent galaxies in Virgo's core, in all ten colors measured by the NGVS. Fluxes have been measured consistently in all five bands within apertures corresponding to the 1.0 R_e,g isophote of each galaxy. Black points represent individual galaxies while red lines show non-parametric fits to the data. The RS defines a color-magnitude relation in all colors that flattens at faint magnitudes, which could be explained by a constant mean age and metallicity for the lowest-mass galaxies in this region [albeit with significant scatter; but see Fig. 3]. Representative errors for the same magnitude bins as in Fig. 1 are shown in each panel.
Figure 3. Comparison of the observed scatter [dashed lines] about the RS [solid lines] to photometric errors [dotted lines] established from artificial galaxy tests. The comparison is limited to M_g ≳ -15 since our tests did not probe brighter magnitudes. The scatter and errors, averaged within three bins of luminosity, match quite well, especially at the faintest luminosities, suggesting minimal intrinsic scatter in the colors and stellar populations of these galaxies.
Figure 4. RS in (u*-g) and (g-z), for different sizes of aperture used to measure galaxy colors. All four curves consider the same sample of galaxies. The choice of aperture has an impact on the slope of the RS at M_g ≲ -16 mag for (u*-g), with smaller apertures yielding steeper slopes, while the RS is more stable in (g-z). The shaded envelopes represent the scatter about the RS for the 0.5 R_e,g and 3.0 R_e,g apertures.
Figure 5. (top) Comparison of the u*-g CMD from JL09 for Virgo ETGs [black circles] to that measured here [red dots]. The full sample is plotted for each dataset and LOWESS fits for both are overlaid [solid lines]. Representative errors for the NGVS are included along the bottom. (bottom) As above but restricted to the galaxies common to both samples; measurement pairs are joined with lines. The NGVS extends the CMD for this cluster faintward by ∼5 mag, with much improved photometric errors. We also find that JL09's CMR is steeper than our own at intermediate magnitudes, likely due to their inclusion of systems having recent star formation and possible errors in their sky subtraction.
Figure 6. (top row) u*-g CMD and color distributions for galaxies [circles], GCs [dots], UCDs [squares], and galactic nuclei [diamonds] within the core of Virgo. Since our intent is to compare these stellar systems within a common magnitude range, only those CSS having 18 < g < 22 are plotted. Representative errors for each population at faint magnitudes are included at bottom-left. (bottom row) As above but for the g-i color. At faint magnitudes, comparatively red objects are only found amongst the CSS populations; their colors are likely caused by a higher metal content than those for galaxies of the same luminosity.
Figure 8. Comparison of RS slopes in real (top panel) and model clusters (other panels). The model slopes are measured from those snapshots which most closely bracket the redshift range of the WINGS clusters [0.03 ≤ z ≤ 0.07]. In all cases the typical slope within model clusters is shallower than observed. The dashed line indicates the RS in Virgo's core.
Figure 9. u*-g and g-z color-mass relations [middle and bottom panels; black lines] versus those predicted by the FSPS stellar population model [colored lines], constrained by the Kirby et al. (2013) mass-metallicity relation [top panel]. Each model relation corresponds to a certain fixed age, ranging between ∼2 Gyr [purple] and ∼15 Gyr [red] in steps of 0.025 dex. Error bars on the NGVS relations represent standard errors in the mean within bins of luminosity.
Figure 10. Parametric fits [green lines] to the RS in Virgo's core, corresponding to the 1.0 R_e,g colors of NGVS galaxies. These fits are compared to the data themselves [black points] as well as to non-parametric [LOWESS] fits. Points clipped from each fit are shown in blue.
Figure 11. (top) LOWESS fits from Fig. 10, scaled to a common zeropoint at M_g ∼ -14. (bottom) Local gradient measured along each RS shown in the top panel using a rolling bin of either 9 [thin line] or 51 [thick line] data points; the former bin size allows us to extend our measurements up to bright galaxies. In all cases, the local gradient begins to flatten in the vicinity of M_g ∼ -15.
of Ferrarese et al. 2016a as well]. Furthermore, the JL09 CMR has a lower zeropoint [by ∼0.06] and a shallower slope than the NGVS RS for M g -19, which two-sample t-tests verify as significant [P = 0.01]. The JL09 data also exhibit a flattening of the CMR in the dwarf regime, but at a brighter magnitude than that seen in ours [M g ∼
Table 1. Parameters of the double power-law fits to the NGVS RS.

Color    M_g,0    C_0      β1      β2      α       rms
         (mag)    (mag)                            (mag)
(1)      (2)      (3)      (4)     (5)     (6)     (7)
u*-g    -13.52    1.040    2.624   0.000   15.98   0.078
u*-r    -13.62    1.552    3.871   0.000   11.51   0.091
u*-i    -13.45    1.787    4.577   0.000   11.81   0.116
u*-z    -13.95    1.927    5.494   0.773   20.73   0.157
g-r     -14.57    0.522    1.036   0.392   57.14   0.047
g-i     -13.81    0.751    1.685   0.578   1333.   0.058
g-z     -13.74    0.852    2.808   0.413   23.39   0.124
r-i     -13.07    0.230    0.735   0.130   96.97   0.050
r-z     -13.40    0.342    1.851   0.000   11.86   0.108
i-z     -14.15    0.107    1.133   0.000   15.92   0.102
Note that the filters used in the NGVS are not identical to those of the Sloan Digital Sky Survey (SDSS; York et al. 2000), with the u*-band being the most different. Unless otherwise stated, magnitudes are expressed in the MegaCam system throughout this paper.
http://www4.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/community/YorkExtinctionSolver/
The timescale associated with environmental quenching appears contentious, with some groups favoring shorter values (<2 Gyr; Boselli & Gavazzi 2014, and references therein; Haines et al. 2015) and others longer (several Gyr; e.g. Balogh et al. 2000; De Lucia et al. 2012; Taranu et al. 2014).
While the scatter in the JL09 data is likely dominated by the shallower depth of the SDSS imaging, a contribution by distance uncertainties cannot be ruled out, since the Virgo Cluster Catalog spans several sub-groups whose relative distances can exceed 10 Mpc (Mei et al. 2007).
Virgo comprises two large sub-clusters and several massive groups, such that its bright galaxies are spread throughout the cluster.
APPENDIX
Here we present parametric fits for the RS in Virgo's core based on the colors of our galaxies within 1.0 R_e,g. Our purpose is to enable the wider community, particularly modellers, to compare our results to their own through simple [continuous] fitting functions. Motivated by the non-parametric fits in Fig. 2, we choose a double power-law to describe the shape of the RS; we acknowledge that this choice is made strictly on a phenomenological basis and lacks physical motivation. This function is parameterized as,
where β 1 and β 2 represent the asymptotic slopes towards bright and faint magnitudes, respectively, while M g ,0 and C 0 correspond to the magnitude and color of the transition point between the two power-laws, and α reflects the sharpness of the transition.
We fit Equation 1 to our data through an iterative non-linear optimization of χ 2 following the L-BFGS-B algorithm (Byrd et al. 1995;Zhu et al. 1997), restricting α, β 1 , and β 2 to positive values, and M g ,0 and C 0 to lie in the respective ranges [-20, -8] and [0,20]. At each iteration, > 3σ outliers are clipped from each CMD; doing so allows the fits to better reproduce our LOWESS curves. We generally achieve convergence after 5-6 iterations while the fraction of clipped points is <10% in all cases.
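A sketch of this fitting procedure is given below. The helper `double_power_law` is a generic smoothly broken linear relation assumed here purely for illustration (it reproduces asymptotic slopes β1 and β2, a transition point (M_g,0, C_0) and a sharpness α) and is not necessarily identical to the paper's Equation 1; uniform photometric errors are also assumed.

```python
import numpy as np
from scipy.optimize import minimize

def double_power_law(M, Mg0, C0, b1, b2, alpha):
    # Assumed smoothly broken form: slope -> -b1 at the bright end and
    # -b2 at the faint end, with a transition of sharpness alpha at Mg0.
    x = M - Mg0
    return C0 - b1 * x - (b2 - b1) / alpha * np.logaddexp(0.0, alpha * x)

def fit_red_sequence(M, C, n_iter=6, clip=3.0):
    """Iterative chi^2 minimization (L-BFGS-B) with >3-sigma clipping."""
    theta = np.array([-13.5, np.median(C), 1.0, 0.1, 10.0])          # initial guess
    bounds = [(-20, -8), (0, 20), (0, None), (0, None), (1e-3, None)]  # Mg0, C0, b1, b2, alpha
    keep = np.ones(M.size, dtype=bool)
    for _ in range(n_iter):
        chi2 = lambda p, m=M[keep], c=C[keep]: np.sum((c - double_power_law(m, *p)) ** 2)
        theta = minimize(chi2, theta, method="L-BFGS-B", bounds=bounds).x
        resid = C - double_power_law(M, *theta)
        keep = np.abs(resid) < clip * np.std(resid[keep])            # clip outliers
    return theta, keep
```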
Our power-law fits [green curves] are compared to the data [black points] and LOWESS fits [red curves] in Figure 10, while clipped data are represented by the blue points. The best-fit parameters are summarized in Table 1, where the final column lists the rms of each fit. Inspection of the rms values and the curves themselves indicates that our parametric fits do well in tracing the shape of the RS.

| 74,451 | ["780540", "19074", "738628", "179615", "783183", "780590"] | ["303485", "303485", "303485", "303485", "303485", "267729", "444770", "421423", "2068", "303485", "1862", "1256", "165", "179944", "179944", "452453", "90612", "2068", "93746", "365163", "252664", "301081", "326535", "452466", "532765", "252664", "116280"] |
01567309 | en | ["phys"] | 2024/03/05 22:32:15 | 2017 | https://hal.science/hal-01567309/file/1707.06632.pdf
Michele Starnini
email: [email protected]
Bruno Lepri
email: [email protected]
Andrea Baronchelli
email: [email protected]
Alain Barrat
email: [email protected]
Ciro Cattuto
email: [email protected]
Romualdo Pastor-Satorras
email: [email protected]
Robust modeling of human contact networks across different scales and proximity-sensing techniques
Keywords: Social Computing, Computational Social Science, Social Network Analysis, Mobile Sensing, Mathematical Modeling, Wearable Sensors
The problem of mapping human close-range proximity networks has been tackled using a variety of technical approaches. Wearable electronic devices, in particular, have proven to be particularly successful in a variety of settings relevant for research in social science, complex networks and infectious disease dynamics. Each device and technology used for proximity sensing (e.g., RFIDs, Bluetooth, low-power radio or infrared communication, etc.) comes with specific biases on the close-range relations it records. Hence it is important to assess which statistical features of the empirical proximity networks are robust across different measurement techniques, and which modeling frameworks generalize well across empirical data. Here we compare time-resolved proximity networks recorded in different experimental settings and show that some important statistical features are robust across all settings considered. The observed universality calls for a simplified modeling approach. We show that one such simple model is indeed able to reproduce the main statistical distributions characterizing the empirical temporal networks.
Introduction
Being social animals by nature, most of our daily activities involve face-toface and proximity interactions with others. Although technological advances have enabled remote forms of communication such as calls, video-conferences, e-mails, etc., several studies [START_REF] Whittaker | Informal workplace communication: What is it like and how might we support it?[END_REF][START_REF] Nardi | The place of face to face communication in distributed work[END_REF] and the constant increase in business traveling, provide evidence that co-presence and face-to-face interactions still represent the richest communication channel for informal coordination [START_REF] Kraut | Informal communication in organizations: Form, function, and technology[END_REF], socialization and creation of social bonds [START_REF] Kendon | Organization of Behavior in Face-to-Face Interaction[END_REF][START_REF] Storper | Buzz: Face-to-face contact and the urban economy[END_REF], and the exchange of ideas and information [START_REF] Doherty-Sneddon | Face-to-face and video-mediated communication: A comparison of dialogue structure and task performance[END_REF][START_REF] Nohria | Face-to-face: Making network organizations work[END_REF][START_REF] Wright | The associations between young adults' face-to-face prosocial behaviorsand their online prosocial behaviors[END_REF]. At the same time, close-range physical proximity and face-to-face interactions are known determinants for the transmission of some pathogens such as airborne ones [START_REF] Liljeros | The web of human sexual contacts[END_REF][START_REF] Salathé | A high-resolution human contact network for infectious disease transmission[END_REF]. A quantitative understanding of human dynamics in social gatherings is therefore important not only to understand human behavior, creation of social bonds and flow of ideas, but also to design effective containment strategies and contrast epidemic spreading [START_REF] Starnini | Immunization strategies for epidemic processes in time-varying contact networks[END_REF][START_REF] Smieszek | A low-cost method to assess the epidemiological importance of individuals in controlling infectious disease outbreaks[END_REF][START_REF] Gemmetto | Mitigation of infectious disease at school: targeted class closure vs school closure[END_REF].
Hence, face-to-face and proximity interactions have long been the focus of major attention in social sciences and epidemiology [START_REF] Bales | Interaction process analysis: A method for the study of small groups[END_REF][START_REF] Arrow | Small groups as complex systems: Formation, coordination, development, and adaptation[END_REF][START_REF] Bion | Experiences in groups and other papers[END_REF][START_REF] Eames | Six challenges in measuring contact networks for use in modelling[END_REF] and recently various research groups have developed sensing devices and approaches to automatically measure these interaction networks [START_REF] Eagle | Reality mining: sensing complex social systems[END_REF][START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF][START_REF] Salathé | A high-resolution human contact network for infectious disease transmission[END_REF][START_REF] Madan | Social sensing for epidemiological behavior change[END_REF][START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF][START_REF] Lepri | The SocioMetric badges corpus: A multilevel behavioral dataset for social behavior in complex organizations[END_REF][START_REF] Stopczynski | Measuring large-scale social networks with high resolution[END_REF][START_REF] Toth | The role of heterogeneity in contact timing and duration in network models of influenza spread in schools[END_REF]. Reality Mining (RM) [START_REF] Eagle | Reality mining: sensing complex social systems[END_REF], a study conducted in 2004 by the MIT Media Lab, was the first one to collect data from mobile phones to track the dynamics of a community of 100 business school students over a nine-month period. Following this seminal project, the Social Evolution study [START_REF] Madan | Social sensing for epidemiological behavior change[END_REF][START_REF] Madan | Sensing the "health state" of a community[END_REF] tracked the everyday life of a whole undergraduate dormitory for almost 8 months using mobile phones (i.e. call logs, location data, and proximity interactions). This study was specifically designed to model the adoption of political opinions, the spreading of epidemics, the effect of social interactions on depression and stress, and the eating and physical exercise habits. More recently, in the Friends and Family study 130 graduate students and their partners, sharing the same dormitory, carried smartphones running a mobile sensing platform for 15 months [START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF]. Additional data were also collected from Facebook, credit card statements, surveys including questions about personality traits, group affiliations, daily mood states and sleep quality, etc.
Along similar lines, the SocioPatterns (SP) initiative [START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF][START_REF] Isella | Close encounters in a pediatric ward: Measuring face-to-face proximity and mixing patterns with wearable sensors[END_REF] and the Sociometric Badges projects [START_REF] Olguín Olguín | Sensible organizations: Technology and methodology for automatically measuring organizational behavior[END_REF][START_REF] Lepri | The SocioMetric badges corpus: A multilevel behavioral dataset for social behavior in complex organizations[END_REF][START_REF] Onnela | Using sociometers to quantify social interaction patterns[END_REF] have been studying since several years the proximity patterns of human gatherings, in different social contexts, such as scientific conferences [START_REF] Stehlé | Simulation of an SEIR infectious disease model on the dynamic contact network of conference attendees[END_REF], museums [START_REF] Van Den Broeck | The making of sixty-nine days of close encounters at the science gallery[END_REF], schools [START_REF] Stehlé | High-resolution measurements of face-to-face contact patterns in a primary school[END_REF][START_REF] Fournet | Contact patterns among high school students[END_REF], hospitals [START_REF] Isella | Close encounters in a pediatric ward: Measuring face-to-face proximity and mixing patterns with wearable sensors[END_REF] and research institutions [START_REF] Lepri | The SocioMetric badges corpus: A multilevel behavioral dataset for social behavior in complex organizations[END_REF] by endowing participants with active RFID badges (SocioPatterns initiative) or with devices equipped with accelerometers, microphones, Bluetooth and Infrared sensors (Sociometric Badges projects) which capture body movements, prosodic speech features, proximity, and face-to-face interactions respectively.
However, the different technologies (e.g., RFID, Bluetooth, Infrared sensors) employed in these studies might imply potentially relevant differences in measuring contact networks. The interaction range and the angular width for detecting contacts, for instance, vary in a significant way, from less than 1 meter using Infrared sensors to more than 10 meters using Bluetooth sensors, and from 15 degrees using Infrared sensors to 360 degrees using Bluetooth sensors. In many cases, data cleaning and post-processing is based on calibrated power thresholds, temporal smoothing, and other assumptions that introduce their own biases. Finally, the experiments themselves are diverse in terms of venue (from conferences to offices), size (from N ∼ 50 to N ∼ 500 individuals), duration (from a single day to several months) and temporal resolution. The full extent to which the measured proximity networks depend on experimental and data-processing techniques is challenging to assess, as no studies, to the best of our knowledge, have tackled a systematic comparison of different proximity-sensing techniques based on wearable devices.
Here we tackle this task, showing that empirical proximity networks measured in a variety of social gatherings by means of different measurement systems yield consistent statistical patterns of human dynamics, so we can assume that such regularities capture intrinsic properties of human contact networks. The presence of such apparently universal behavior, independent of the measurement framework and details, calls, within a statistical physics perspective, for an explanatory model, based on simple assumptions on human behavior. Indeed, we show that a simple multi-agent model [START_REF] Starnini | Modeling human dynamics of face-to-face interaction networks[END_REF][START_REF] Starnini | Model reproduces individual, group and collective dynamics of human contact networks[END_REF] accurately reproduces the statistical regularities observed across different social contexts.
Related Work
The present study takes inspiration from the emerging body of work investigating the possibilities of analyzing proximity and face-to-face interactions using different kinds of wearable sensors. At present, mobile phones allow the collection of data on specific structural and temporal aspects of social interactions, offering ways to approximate social interactions as spatial proximity or as the co-location of mobile devices, e.g., by means of Bluetooth hits [START_REF] Madan | Social sensing for epidemiological behavior change[END_REF][START_REF] Dong | Modeling the co-evolution of behaviors and social relationships using mobile phone data[END_REF][START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF][START_REF] Madan | Sensing the "health state" of a community[END_REF][START_REF] Stopczynski | Measuring large-scale social networks with high resolution[END_REF]. For example, Do and Gatica Perez have proposed several topic models for capturing group interaction patterns from Bluetooth proximity networks [START_REF] Do | Human interaction discovery in smartphone proximity networks[END_REF][START_REF] Do | Inferring social activities with mobile sensor networks[END_REF]. However, this approach does not always yield good proxies to the social interactions occurring between the individuals carrying the devices.
Mobile phone traces suffer a similar problem: They can be used to model human mobility [START_REF] Gonzaléz | Understanding individual human mobility patterns[END_REF][START_REF] Blondel | A survey of results on mobile phone datasets analysis[END_REF] with the great advantage of easily scaling up to millions of individuals; they too, however, offer only coarse localization and therefore provide only rough co-location information, yielding thus only very limited insights into the social interactions of individuals.
An alternative strategy for collecting data on social interactions is to resort to image and video processing based on cameras placed in the environment [START_REF] Cristani | Social interaction discovery by statistical analysis of F-formations[END_REF][START_REF] Staiano | Salsa: A novel dataset for multimodal group behavior analysis[END_REF]. This approach provides very rich data sets that are, in turn, computationally very complex: They require line-of-sight access to the monitored spaces and people, specific effort for equipping the relevant physical spaces, and can hardly cope with large scale data.
Since 2010, Cattuto et al. [START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF] have used a technique for monitoring social interactions that reconciles scalability and resolution by means of proximitysensing systems based on active RFID devices. These devices are capable of sensing spatial proximity over different length scales and even close face-to-face interactions of individuals (1 to 2m), with tunable temporal resolution. The So-cioPatterns initiative has collected and analyzed face-to-face interaction data in many different contexts. These analyses have shown strong heterogeneities in the contact duration of individuals, the robustness of these statistics across contexts, and have revealed highly non-trivial mixing patterns of individuals in schools, hospitals or offices as well as their robustness across various timescales [START_REF] Stehlé | High-resolution measurements of face-to-face contact patterns in a primary school[END_REF][START_REF] Isella | Close encounters in a pediatric ward: Measuring face-to-face proximity and mixing patterns with wearable sensors[END_REF][START_REF] Isella | What's in a crowd? Analysis of face-to-face behavioral networks[END_REF][START_REF] Fournet | Contact patterns among high school students[END_REF][START_REF] Gnois | Data on face-to-face contacts in an office building suggest a low-cost vaccination strategy based on community linkers[END_REF]. These data have been used in data-driven simulations of epidemic spreading phenomena, including the design and validation of containment measures [START_REF] Gemmetto | Mitigation of infectious disease at school: targeted class closure vs school closure[END_REF].
Along a similar line, Olguin Olguin et al. [START_REF] Olguín Olguín | Sensible organizations: Technology and methodology for automatically measuring organizational behavior[END_REF] have designed and employed Sociometric Badges, platforms equipped with accelerometers, microphones, Bluetooth and Infrared sensors which capture body movements, prosodic speech features, proximity and face-to-face interactions respectively. Some previous studies based on Sociometric Badges revealed important insights into human dynamics and organizational processes, such as the impact of electronic communications on the business performance of teams [START_REF] Olguín Olguín | Sensible organizations: Technology and methodology for automatically measuring organizational behavior[END_REF], the relationship between several behavioral features captured by Sociometric Badges, employee' self-perceptions (from surveys) and productivity [START_REF] Olguín Olguín | Sensible organizations: Technology and methodology for automatically measuring organizational behavior[END_REF], the spreading of personality and emotional states [START_REF] Alshamsi | Beyond contagion: Reality mining reveals complex patterns of social influence[END_REF].
Empirical data
In this section, we describe datasets gathered by five different studies: The "Lyon hospital" and "SFHH" conference datasets from the SocioPatterns (SP) initiative [START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF], the Trento Sociometric Badges (SB) dataset [START_REF] Lepri | The SocioMetric badges corpus: A multilevel behavioral dataset for social behavior in complex organizations[END_REF], the Social Evolution (SE) dataset [START_REF] Madan | Social sensing for epidemiological behavior change[END_REF][START_REF] Madan | Sensing the "health state" of a community[END_REF], the Friends and Family (FF) [START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF] dataset, and two datasets (Elem and Mid) collected using wireless ranging enabled nodes (WRENs) [START_REF] Toth | The role of heterogeneity in contact timing and duration in network models of influenza spread in schools[END_REF]. The main statistical properties of datasets under consideration are summarized in Table 1, while the settings of the studies are described in detail in the following subsections.
SocioPatterns (SP)
The measurement infrastructure set up by the SP initiative is based on wireless devices embedded in badges, worn by the participants on their chests. Devices exchange radio packets and use them to monitor for proximity of individuals (RFID). Information is sent to receivers installed in the environment, logging contact data. The devices are tuned so that the face-to-face proximity of two individuals wearing the badges is sensed only when they are facing each other at close range (about 1 to 1.5 m). The time resolution is set to 20 seconds, meaning that a contact between two individuals is considered established if their badges exchange at least one packet during such an interval, and lasts as long as there is at least one packet exchanged over subsequent 20-second time windows. More details on the experimental setup can be found in Ref. [START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF]. Here we consider the dataset "Hospital", gathered by the SP initiative at a Lyon hospital during 4 workdays, and the dataset "SFHH", gathered by the SP initiative at the congress of the Société Française d'Hygiène Hospitalière, where the experiment was conducted during the first day of a two-day conference. See Ref. [START_REF] Stehlé | Simulation of an SEIR infectious disease model on the dynamic contact network of conference attendees[END_REF] for a detailed description.
Sociometric Badges (SB)
The Sociometric Badges data [START_REF] Lepri | The SocioMetric badges corpus: A multilevel behavioral dataset for social behavior in complex organizations[END_REF] were collected in a research institute over a period of six consecutive weeks, involving a population of 54 subjects during their working hours. The Sociometric Badges employed for this study are equipped with accelerometers, microphones, Bluetooth and Infrared sensors, which capture body movements, prosodic speech features, co-location and face-to-face interactions, respectively [START_REF] Olguín Olguín | Sensible organizations: Technology and methodology for automatically measuring organizational behavior[END_REF]. For the purposes of our study we have exploited the data provided by the Bluetooth and Infrared sensors.
Infrared Data. Infrared (IR) transmissions are used to detect face-to-face interactions between people. In order for a badge to be detected by an IR sensor, two individuals must have a direct line of sight and the receiving badge's sensor must be within the transmitting badge's IR signal cone of height h ≤ 1 meter and radius r ≤ h tan θ, where θ = ±15°. The infrared transmission rate (TR_ir) was set to 1 Hz.
Bluetooth Data. Bluetooth (BT) detections can be used as a coarse indicator of proximity between devices. The radio signal strength indicator (RSSI) is a measure of the signal strength between transmitting and receiving devices. The range of RSSI values for the radio transceiver in the badge is (-128 dBm, 127 dBm). The Sociometric Badges broadcast their ID every five seconds using a 2.4 GHz transceiver (TR_radio = 12 transmissions per minute).
Social Evolution (SE)
The Social Evolution dataset was collected as part of a longitudinal study with 74 undergraduate students uniformly distributed among all four academic years (freshmen, sophomores, juniors, seniors). Participants in the study represent 80% of the residents of a dormitory on the campus of a major university in North America. The study participants were equipped with a smartphone (i.e. a Windows Mobile device) incorporating a sensing platform designed for collecting call logs, location and proximity data. Specifically, the software scanned for Bluetooth wireless devices in proximity every six minutes, a compromise between short-term social interactions and battery life [START_REF] Eagle | Inferring friendship network structure by using mobile phone data[END_REF]. With this approach, the BT log of a given smartphone would contain the list of devices in its proximity, sampled every six minutes.
Participants used the Windows Mobile smartphones as their primary phones, with their existing voice plans. Students also had online data access with these phones due to pervasive Wi-Fi on the university campus and in the metropolitan area. As compensation for their participation, students were allowed to keep the smartphones at the end of the experiment. Although relevant academic and extra-curricular activities might not have been covered, either because the mobile phones may not have been permanently on (e.g., during classes) or because of contacts with people not taking part in the study, the dormitory may still represent the preferential place where students live, cook, and sleep. Additional information on the SE study is available in Madan et al. [START_REF] Madan | Social sensing for epidemiological behavior change[END_REF][START_REF] Madan | Sensing the "health state" of a community[END_REF].
Friends and Family (FF)
The Friends and Family dataset was collected during a longitudinal study capturing the lives of 117 subjects living in a married graduate student residency of a major US university [START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF]. The sample of subjects is highly diverse in terms of provenance and cultural background. During the study period, each participant was equipped with an Android-based mobile phone incorporating sensing software explicitly designed for collecting mobile data. Such software runs in a passive manner and does not interfere with the everyday usage of the phone.
Proximity interactions were derived from Bluetooth data in a manner similar to previous studies such as [START_REF] Eagle | Reality mining: sensing complex social systems[END_REF][START_REF] Madan | Social sensing for epidemiological behavior change[END_REF]. Specifically, the Funf phone sensing platform was used to detect Bluetooth devices in the participant's proximity. The Bluetooth scan was performed periodically, every five minutes in order to keep from draining the battery while achieving a high enough resolution for social interactions. With this approach, the BT log of a given smartphone would contain the list of devices in its proximity, sampled every 5 minutes. See Ref. [START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF] for a detailed description of the study.
Toth et al. datasets (Toth et al.)
The datasets, which are publicly available, were collected by Toth et al. [START_REF] Toth | The role of heterogeneity in contact timing and duration in network models of influenza spread in schools[END_REF] by distributing wireless ranging enabled nodes (WRENs) [START_REF] Forys | Wrenmining: large-scale data collection for human contact network research[END_REF] to students in Utah schools. Each WREN was worn by a student and collected time-stamped data from other WRENs in proximity at intervals of approximately 20 seconds. Each recording included a measure of signal strength, which depends on the distance between and relative orientation of the pair of individuals wearing each WREN. More specifically, Toth et al. [START_REF] Toth | The role of heterogeneity in contact timing and duration in network models of influenza spread in schools[END_REF] applied signal strength criteria such that each retained data point was most likely to represent a pair of students with face-to-face orientation, located 1 meter from each other.
In the current paper, we resort to the data collected from two schools in Utah: one middle school (Mid), an urban public school with 679 students (age range 12-14); and one elementary school (Elem), a suburban public school with 476 students (age range 5-12). The contact data were captured during school hours of two consecutive school days, in autumn 2012 from 591 students (87% coverage) at Mid and in winter 2013 from 339 students (71% coverage) at Elem.

Table 1. Some average properties of the datasets under consideration. SP-hosp = "SocioPatterns Lyon hospital", SP-sfhh = "SocioPatterns SFHH conference", SB = "Sociometric Badges", SE = "Social Evolution", FF = "Friends and Family", Elem = "Toth's Elementary school", Mid = "Toth's Middle school".
Temporal network formalism
Proximity patterns can be naturally analyzed in terms of temporally evolving graphs [START_REF] Holme | Temporal networks[END_REF][START_REF] Holme | Modern temporal network theory: a colloquium[END_REF], whose nodes are defined by the individuals, and whose links represent interactions between pairs of individuals. Interactions need to be aggregated over an elementary time interval ∆t_0 in order to build a temporal network [START_REF] Ribeiro | Quantifying the effect of temporal resolution on time-varying networks[END_REF]. This elementary time step represents the temporal resolution of the data, and all interactions established within this interval are considered simultaneous. Taken together, these interactions constitute an "instantaneous" network, formed by isolated nodes and small groups of interacting individuals (not necessarily forming cliques). The sequence of such instantaneous networks forms a temporal, or time-varying, network. The elementary time step ∆t_0 is set to ∆t_0 = 20 seconds in the case of the SP data, ∆t_0 = 60 seconds for the SB data, ∆t_0 = 300 seconds for the SE and FF data, and ∆t_0 = 20 seconds for the Toth et al. datasets. Note that temporal networks are built by including only non-empty instantaneous graphs, i.e. graphs in which at least one pair of nodes is connected.
Each data set is thus represented by a temporal network with a number N of different interacting individuals, and a total duration of T elementary time steps. Temporal networks can be described in terms of a characteristic function χ(i, j, t) taking the value 1 when individuals i and j are connected at time t, and zero otherwise [START_REF] Starnini | Random walks on temporal networks[END_REF]. Integrating the information of the time-varying network over a given time window T produces an aggregated weighted network, where the weight w_ij between nodes i and j represents the total temporal duration of the contacts between agents i and j, w_ij = Σ_t χ(i, j, t), and the strength s_i of a node i, s_i = Σ_j w_ij, represents the cumulated time spent in interactions by individual i. In Table 1 we summarize a number of significant statistical properties, such as the size N, the total duration T in units of elementary time steps ∆t_0, and the average fraction p of individuals interacting at each time step. We also report the average degree ⟨k⟩, defined as the average number of interactions per individual, and the average strength ⟨s⟩ = N^{-1} Σ_i s_i of the aggregated networks, integrated over the whole sequence. One can note that the data sets under consideration are highly heterogeneous in terms of the reported statistical properties. Aggregated network representations preserve such heterogeneity, even though it is important to remark that aggregated properties are sensitive to the time-aggregating interval [START_REF] Ribeiro | Quantifying the effect of temporal resolution on time-varying networks[END_REF] and therefore to the specifics of data collection and preprocessing.
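For concreteness, the aggregation step can be sketched as follows. This is an illustrative snippet written for this comparison, not code from any of the original studies; it assumes the temporal network is stored as a list of (t, i, j) contact events on the elementary time grid, and all names are ours.

```python
from collections import defaultdict

def aggregate(events, t0, dT):
    """Weights w_ij (total time in contact) and strengths s_i over [t0, t0 + dT).

    `events` is an iterable of (t, i, j) tuples, one per elementary time step
    during which i and j are in contact (i.e. chi(i, j, t) = 1)."""
    w = defaultdict(int)
    s = defaultdict(int)
    for t, i, j in events:
        if t0 <= t < t0 + dT:
            key = (i, j) if i < j else (j, i)
            w[key] += 1
            s[i] += 1
            s[j] += 1
    return w, s

# The degree k_i of node i in the aggregated network is the number of
# distinct neighbors:  k_i = len({key for key in w if i in key})
```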
Comparison among the different datasets
In this section we perform a comparison of several statistical properties of the temporal networks, as defined above, representing the different datasets under consideration.
The temporal pattern of the agents' contacts is probably the most distinctive feature of proximity interaction networks. We therefore start by considering the distribution of the durations ∆t of the contacts between pairs of agents, P(∆t), and the distribution of gap times τ between two consecutive proximity events involving a given individual, P(τ). The bursty dynamics of human interactions [START_REF] Barabasi | The origin of bursts and heavy tails in human dynamics[END_REF] is revealed by the long-tailed form of these two distributions, which can be described in terms of a power-law function. Figures 1 and 2 show the distributions of the contact durations P(∆t) and gap times P(τ) for the various sets of empirical data. In both cases, all datasets show a broad-tailed behavior that can be loosely described by a power-law distribution. In Figures 1 and 2 we plot, as a guide for the eye, power-law forms P(∆t) ∼ ∆t^(-γ_∆t), with exponent γ_∆t ∼ 2.5, and P(τ) ∼ τ^(-γ_τ), with exponent γ_τ ∼ 2.1, respectively.

Fig. 2. Probability distribution of the gap times τ between consecutive contacts of pairs of agents, P(τ), for the different datasets under consideration, compared with numerical simulations of the attractiveness model. A power-law form, P(τ) ∼ τ^(-γ_τ), with γ_τ = 2.1, is plotted as a reference as a dashed line.
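Both statistics can be extracted directly from the raw event lists. The sketch below is a minimal illustration (function and variable names are ours, not from the original studies); gap times are computed here per pair of agents, as in the caption of Fig. 2, and one particular convention for the gap length is adopted.

```python
from collections import defaultdict

def durations_and_gaps(events, dt0=1):
    """Contact durations and inter-contact (gap) times, per pair of agents.

    `events` is an iterable of (t, i, j) contact tuples on a grid of step dt0."""
    times = defaultdict(list)
    for t, i, j in events:
        times[(min(i, j), max(i, j))].append(t)

    durations, gaps = [], []
    for ts in times.values():
        ts = sorted(set(ts))
        run = dt0
        for prev, cur in zip(ts, ts[1:]):
            if cur - prev == dt0:          # contact continues in the next step
                run += dt0
            else:                          # contact ends; a gap begins
                durations.append(run)
                gaps.append(cur - prev - dt0)
                run = dt0
        durations.append(run)              # close the last contact of the pair
    return durations, gaps
```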
The probability distributions of strength, P (s), and weight, P (w), are a signature of the topological structure of the corresponding aggregated, weighted networks. Since the duration T of the datasets under consideration is quite heterogeneous, see Table 1, we do not reconstruct the aggregated networks by integrating over the whole duration T , but we integrate each temporal network over a time window of fixed length, ∆T = 1000 elementary time steps. That is, we consider a random starting time T 0 (provided that T 0 < T -∆T ), and reconstruct an aggregated network by integrating the temporal network from T 0 to T 0 + ∆T . We average our results by sampling 100 different starting times. Note that, since the elementary time step ∆t 0 is different across different experiments, the real duration of the time window considered is different across different datasets.
Figs. 3 and 4 show the weight and strength distributions, P(w) and P(s), of the aggregated networks over ∆T, for the considered datasets. Again, all datasets display a similar heavy-tailed weight distribution, roughly compatible with a power-law form, meaning that the heterogeneity shown in the broad-tailed form of the contact duration distribution, P(∆t), persists also over longer time scales. Data sets SB-BT, SE and FF present deviations with respect to the other data sets. The strength distribution P(s) is also broad tailed and quite similar for all data sets considered, but in this case it is not compatible with a power law.
Finally, Fig. 5 shows the average strength as a function of the degree, s(k), in the aggregated networks integrated over an interval ∆T. One can see that if the strength is rescaled by the total strength of the network in the considered time window, ⟨s⟩ = N^{-1} Σ_{t=T_0}^{T_0+∆T} Σ_{ij} χ(i, j, t), the different data sets show a similar correlation between strength and degree. In particular, Fig. 5 shows that all data sets considered present a slightly superlinear correlation between strength and degree, s(k) ∼ k^γ with γ > 1, as highlighted by the linear correlation plotted as a dashed line.
Modeling human contact networks
In the previous Section, we have shown that the temporal networks representing different datasets, highly heterogeneous in terms of size, duration, proximitysensing techniques, and social contexts, are characterized by very similar statistical properties. Here we show that a simple model, in which individuals are endowed with different social attractiveness, is able to reproduce the empirical distributions.
Model definition
The social contexts in which the data were collected can be modeled by a set of N mobile agents free to move in a closed environment, who interact when they are close enough (within the exchange range of the devices) [START_REF] Starnini | Modeling human dynamics of face-to-face interaction networks[END_REF]. The simplifying assumption of the model proposed in [START_REF] Starnini | Modeling human dynamics of face-to-face interaction networks[END_REF] is that the agents perform a random walk in a box of linear size L with periodic boundary conditions (the average density is ρ = N/L²). Whenever two agents are within distance d (with d ≪ L), they start to interact. The key ingredient of the model is that each agent is characterized by an "attractiveness" a_i, a quenched random number extracted from a distribution η(a), representing her power to raise interest in the others, which can be thought of as a proxy for social status or the role played in the considered social gathering. Attractiveness rules the interactions between agents in a natural way: whenever an individual is involved in an interaction with other peers, she will continue to interact with them with a probability proportional to the attractiveness of her most interesting neighbor, or move away otherwise. Finally, the model incorporates the empirical evidence that not all agents are simultaneously present in the system: individuals can be either in an active state, in which they can move and establish interactions, or in an inactive one representing absence from the premises. Thus, at each time step, every active individual becomes inactive with a constant probability r, while inactive individuals go back to the active state with the complementary probability 1 - r. See Refs. [START_REF] Starnini | Modeling human dynamics of face-to-face interaction networks[END_REF][START_REF] Starnini | Model reproduces individual, group and collective dynamics of human contact networks[END_REF] for a detailed description of the model.
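A minimal implementation of these dynamics is sketched below for concreteness. This is our own illustrative code, not that of the original model papers: the uniform choice of η(a) on (0, 1], the unit step length, the initial fraction of active agents and all names are assumptions made for the sketch and should be adapted as needed.

```python
import numpy as np

def simulate_attractiveness_model(N=100, L=50.0, T=5000, d=1.0, v=1.0, r=0.1, seed=0):
    """Random walkers in an L x L periodic box; returns (t, i, j) contact events."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(0.0, 1.0, N)                      # attractiveness of each agent
    pos = rng.uniform(0.0, L, size=(N, 2))
    active = rng.random(N) < 0.5                      # initial activity state (assumed)
    events = []
    for t in range(T):
        # 1. pairs of active agents within distance d (minimum-image convention)
        delta = pos[:, None, :] - pos[None, :, :]
        delta -= L * np.round(delta / L)
        dist = np.hypot(delta[..., 0], delta[..., 1])
        contact = (dist < d) & active[:, None] & active[None, :]
        np.fill_diagonal(contact, False)
        for i, j in zip(*np.nonzero(np.triu(contact))):
            events.append((t, int(i), int(j)))
        # 2. an interacting agent keeps interacting (stays put) with probability
        #    given by the largest attractiveness among its neighbors
        move = np.ones(N, dtype=bool)
        for i in range(N):
            neigh = np.nonzero(contact[i])[0]
            if neigh.size and rng.random() < a[neigh].max():
                move[i] = False
        angle = rng.uniform(0.0, 2.0 * np.pi, N)
        step = v * np.column_stack((np.cos(angle), np.sin(angle)))
        mask = move & active
        pos[mask] += step[mask]
        pos %= L
        # 3. activity update: active -> inactive with prob. r, inactive -> active with prob. 1 - r
        flip = rng.random(N)
        active = np.where(active, flip >= r, flip < 1.0 - r)
    return events
```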
Model validation
Here we contrast the results obtained by the numerical simulation of the model against empirical data sets. We average our results over 100 runs with parameters N = 100, L = 50, T = 5000. The results of numerical experiments are reported in Figs. 1 to 5, for the corresponding quantities considered, represented by a continuous, blue line.
In the case of the contact duration distribution, P (∆t), Fig. 1, numerical and experimental data show a remarkable match, with some deviations for the SB-BT and FF datasets. Numerical data also show a close behavior to the mentioned power-law distribution with exponent γ ∆t = 2.5. Also in the case of the gap times distribution, P (τ ), Fig. 2, the distribution obtained by numerical simulations of the model is very close to the experimental ones, spanning the same orders of magnitude. The weight distribution P (w) of the model presents a very good fit to the empirical data, see Fig. 3, with the exception of data sets SB-BT, SE and FF, as mentioned above. The strength distribution P (s), Fig. 4, is, as we have commented above, quite noisy, especially for the datasets of smallest size. It follows however a similar trend across the different datasets that is well matched by numerical simulations of the model. Finally, in the case of the average strength of individuals of degree k, s(k), Fig. 5, the most striking feature, namely the superlinear behavior as a function of k, is correctly captured by the numerical simulations of the model.
Discussion
All datasets under consideration show similar statistical properties of the individuals' contacts. The distribution of contact durations, P(∆t), and the inter-event time distribution, P(τ), are heavy tailed and compatible with power-law forms, and the attractiveness model is able to quantitatively reproduce such behavior. The weight distribution of the aggregated networks, P(w), is also heavy tailed for all datasets and for the attractiveness model, even though some datasets show deviations. The strength distribution P(s) and the correlation between strength and degree, s(k), present a quite noisy behavior, especially for the smaller datasets. However, all datasets show a long-tailed form of P(s) and a superlinear correlation in s(k), both correctly reproduced by the attractiveness model.
Previous works [START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF][START_REF] Isella | What's in a crowd? Analysis of face-to-face behavioral networks[END_REF][START_REF] Fournet | Contact patterns among high school students[END_REF] have shown that the functional shapes of the contact and inter-contact duration distributions were very robust across contexts, for data collected by the SocioPatterns infrastructure as well as by similar RFID sensors. Our results show that this robustness extends in fact to proximity data collected through different types of sensors (e.g., Bluetooth, Infrared, WREN, RFID). This is of particular relevance in the context of modeling human behavior and building data-driven models depending on human interaction data, such as models for the spread of infectious diseases, from two points of view. On the one hand, the robust broadness of these distributions implies that different contacts might play very different roles in a transmission process: under the common assumption that the transmission probability between two individuals depends on their time in contact, the longest contacts, which are orders of magnitude longer than average, could play a crucial role in the disease dynamics. The heterogeneity of contact patterns is also relevant at the individual level, as revealed by broad distributions of strengths and the superlinear behavior of s(k), and is known to have a strong impact on spreading dynamics. In particular, it highlights the existence of "super-contactors", i.e. individuals who account for an important proportion of the overall contact durations and may therefore become super-spreaders in the case of an outbreak. On the other hand, the robustness of the distributions found in different contexts represents an important asset for modelers: it means that these distributions can be assumed to depend negligibly on the specifics of the situation being modeled and can thus be directly plugged into models, to create for instance synthetic populations of interacting agents. From another modeling point of view, they also represent a validation benchmark for microscopic models of interactions, which should correctly reproduce such robust features. In fact, as we have shown, a simple model based on mobile agents and on the concept of social appeal or attractiveness is able to reproduce most of the main statistical properties of human contact temporal networks. The good fit of this model hints towards the fact that the temporal patterns of human contacts at different time scales can be explained in terms of simple physical processes, without assuming any cognitive processes at work.
It would be of interest to measure and compare several other properties of the contact networks, such as the evolution of the integrated degree distribution P_T(k) and of the aggregated average degree ⟨k(T)⟩, or the rate at which the contact neighborhoods of individuals change. Unfortunately, these quantities are difficult to measure in some cases due to the small sizes of the datasets.
Fig. 1. Probability distribution of the duration ∆t of the contacts between pairs of agents, P(∆t), for the different datasets under consideration, compared with numerical simulations of the attractiveness model. A power-law form, P(∆t) ∼ ∆t^(-γ_∆t), with γ_∆t = 2.5, is plotted as a reference as a dashed line.
Fig. 3. Weight distribution P(w), for the different datasets under consideration, compared with numerical simulations of the attractiveness model.
Fig. 4. Strength distribution P(s), for the different datasets under consideration, compared with numerical simulations of the attractiveness model.
Fig. 5. Strength as a function of the degree, s(k), for the different datasets under consideration, compared with numerical simulations of the attractiveness model. A linear correlation s(k) ∼ k is plotted as a dashed line, to highlight the superlinear correlation observed in the data and in the model.
Acknowledgments
M.S. acknowledges financial support from the James S. McDonnell Foundation. R.P.-S. acknowledges financial support from the Spanish MINECO, under projects FIS2013-47282-C2-2 and FIS2016-76830-C2-1-P, and additional financial support from ICREA Academia, funded by the Generalitat de Catalunya. C.C. acknowledges support from the Lagrange Laboratory of the ISI Foundation funded by the CRT Foundation.

| 41,789 | ["1658"] | ["4956", "452129", "502090", "407864", "179898", "103758", "4956"] |
01698252 | en | ["phys"] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01698252/file/1710.05589.pdf
Antoine Moinet
Romualdo Pastor-Satorras
Alain Barrat
Effect of
I. INTRODUCTION
The propagation patterns of an infectious disease depend on many factors, including the number and properties of the different stages of the disease, the transmission and recovery mechanisms and rates, and the hosts' behavior (e.g., their contacts and mobility) [START_REF] Keeling | Modeling Infectious Diseases in Humans and Animals[END_REF][START_REF] Anderson | Infectious diseases of humans: dynamics and control[END_REF]. Given the inherent complexity of a microscopic description taking into account all details, simple models are typically used as basic mathematical frameworks aiming at capturing the main characteristics of the epidemic spreading process and in particular at understanding if and how strategies such as quarantine or immunization can help contain it. Such models have been developed with increasing levels of sophistication and detail in the description of both the disease evolution and the behaviour of the host population [START_REF] Keeling | Modeling Infectious Diseases in Humans and Animals[END_REF][START_REF] Anderson | Infectious diseases of humans: dynamics and control[END_REF].
The most widely used assumption concerning the disease evolution within each host consists in discretizing the possible health status of individuals [START_REF] Keeling | Modeling Infectious Diseases in Humans and Animals[END_REF][START_REF] Anderson | Infectious diseases of humans: dynamics and control[END_REF]. For instance, in the Susceptible-Infectious-Susceptible (SIS) model, each individual is considered either healthy and susceptible (S) or infectious (I). Susceptible individuals can become infectious through contact with an infectious individual, and recover spontaneously afterwards, becoming susceptible again. In the Susceptible-Infectious-Recovered (SIR) case, recovered individuals are considered as immunized and cannot become infectious again. The rate of infection during a contact is assumed to be the same for all individuals, as well as the rate of recovery.
Obviously, the diffusion of the disease in the host population depends crucially on the patterns of contacts between hosts. The simplest homogeneous mixing assump-tion, which makes many analytical results achievable, considers that individuals are identical and that each has a uniform probability of being in contact with any other individual [START_REF] Keeling | Modeling Infectious Diseases in Humans and Animals[END_REF][START_REF] Anderson | Infectious diseases of humans: dynamics and control[END_REF]. Even within this crude approximation, it is possible to highlight fundamental aspects of epidemic spreading, such as the epidemic threshold, signaling a non-equilibrium phase transition that separates an epidemic-free phase from a phase in which a finite fraction of the population is affected [START_REF] Keeling | Modeling Infectious Diseases in Humans and Animals[END_REF]. However, this approach neglects any non-trivial structure of the contacts effectively occurring within a population, while advances in network science [START_REF] Newman | Networks: An Introduction[END_REF] have shown that a large number of networks of interest have in common important features such as a strong heterogeneity in the number of connections, a large number of triads, a community structure, and a low average shortest path length between two individuals [START_REF] Newman | Networks: An Introduction[END_REF][START_REF] Caldarelli | Scale-Free Networks: Complex Webs in Nature and Technology[END_REF]. Spreading models have thus been adapted to complex networks, and studies have unveiled the important role of each of these properties [START_REF] Pastor-Satorras | [END_REF][START_REF] Barrat | Dynamical processes on complex networks[END_REF][START_REF] Pastor-Satorras | [END_REF]. More recently, a number of studies have also considered spreading processes on time-varying networks [8][9][10][11][12][13], to take into account the fact that contact networks evolve on various timescales and present non-trivial temporal properties such as broad distribution of contact durations [14,15] and burstiness [8,16] (i.e., the timeline of social interactions of a given individual exhibits periods of time with intense activity separated by long quiescent periods with no interactions).
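To make the notion of epidemic threshold concrete, the snippet below integrates the standard homogeneous-mixing (mean-field) SIS equation dρ/dt = βρ(1 − ρ) − µρ, whose stationary prevalence vanishes when β/µ drops below 1. This is textbook material added here purely for illustration, not an analysis carried out in the present work.

```python
def sis_mean_field(beta, mu, rho0=0.01, dt=0.01, steps=200_000):
    """Euler integration of d(rho)/dt = beta*rho*(1 - rho) - mu*rho."""
    rho = rho0
    for _ in range(steps):
        rho += dt * (beta * rho * (1.0 - rho) - mu * rho)
    return rho

# The stationary prevalence is ~0 below the threshold beta/mu = 1
# and approaches 1 - mu/beta above it:
for beta in (0.5, 0.9, 1.1, 2.0):
    print(beta, round(sis_mean_field(beta, mu=1.0), 3))
```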
All these modeling approaches consider that the propagation of the disease takes place on a substrate (the contacts between individuals) that does not depend on the disease itself. In this framework, standard containment measures consist in the immunization of individuals, in order to effectively remove them from the popu-lation and thus break propagation paths. Immunization can also (in models) be performed in a targeted way, trying to identify the most important (class of) spreaders and to suppress propagation in the most efficient possible way [17,18]. An important point to consider however is that the structure and properties of contacts themselves can in fact be affected by the presence of the disease in the population, as individuals aware of the disease can modify their behaviour in spontaneous reaction in order to adopt self-protecting measures such as vaccination or mask-wearing. A number of studies have considered this issue along several directions (see Ref. [19] for a review). For instance, some works consider an adaptive evolution of the network [20] with probabilistic redirection of links between susceptible and infectious individuals, to mimic the fact that a susceptible individual might be aware of the infectious state of some of his/her neighbors, and therefore try to avoid contact with them.
Other works introduce behavioral classes in the population, depending on the awareness to the disease [21], possibly consider that the awareness of the disease propagates on a different (static) network than the disease itself, and that being aware of the disease implies a certain level of immunity to it [22,23]. Finally, the fact that an individual takes self-protecting measures that decrease his/her probability to be infected (such as wearing a mask or washing hands more frequently) can depend on the fraction of infectious individuals present in the whole population or among the neighbors of an individual. These measures are then modeled by the fact that the probability of a susceptible catching the disease from an infectious neighbor depends on such fractions [24][25][26][27]. Yet these studies mostly consider contacts occurring on a static underlying contact network (see however [25,26] for the case of a temporal network in which awareness has the very strong effect of reducing the activity of individuals and their number of contacts, either because they are infectious or because of a global knowledge of the overall incidence of the disease).
Here, we consider instead the following scenario: First, individuals are connected by a time-varying network of contacts, which is more realistic than a static one; second, we use the scenario of a relatively mild disease, which does not disrupt the patterns of contacts but which leads susceptible individuals who witness the disease in other individuals to take precautionary measures. We do not assume any knowledge of the overall incidence, which is usually very difficult to know in a real epidemic, especially in real time. We consider SIS and SIR models and both empirical and synthetic temporal networks of contacts. We extend the concept of awareness with respect to the state of neighbors from static to temporal networks and perform extensive numerical simulations to uncover the change in the phase diagram (epidemic threshold and fraction of individuals affected by the disease) as the parameters describing the reaction of the individuals are varied.
II. TEMPORAL NETWORKS
We will consider as substrate for epidemic propagation both synthetic and empirical temporal networks of interactions. We describe them succinctly in the following Subsections.
A. Synthetic networks
Activity-driven network model
The activity driven (AD) temporal network model proposed in Ref. [28] considers a population of N individuals (agents), each agent i being characterized by an activity potential a_i, defined as the probability that he/she engages in a social act/connection with other agents per unit time. The activity of the agents is a (quenched) random variable, extracted from the activity potential distribution F(a), which can take a priori any form. The temporal network is built as follows: at each time step t, we start with N disconnected individuals. Each individual i becomes active with probability a_i. Each active agent generates m links (starts m social interactions) that are connected to m other agents selected uniformly at random (among all agents, not only active ones) [1]. The resulting set of N individuals and links defines the instantaneous network G_t. At the next time step, all links are deleted and the procedure is iterated. For simplicity, we will here consider m = 1.
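To make the generative rule concrete, the following minimal Python sketch builds one snapshot of the AD model. It is not code from Ref. [28]; the function names, the power-law form chosen for F(a) and the parameter values are our own illustration assumptions.

import numpy as np

def sample_activities(n, gamma=2.0, eps=1e-3, rng=None):
    # Draw activities from F(a) ~ a^(-gamma) on [eps, 1] by inverse-transform sampling.
    rng = rng or np.random.default_rng()
    u = rng.random(n)
    return (eps**(1.0 - gamma) + u * (1.0 - eps**(1.0 - gamma)))**(1.0 / (1.0 - gamma))

def ad_snapshot(activities, m=1, rng=None):
    # One time step of the activity-driven model: each active agent draws m partners
    # uniformly at random among all agents; links are undirected and unweighted.
    rng = rng or np.random.default_rng()
    n = len(activities)
    links = set()
    active = np.nonzero(rng.random(n) < activities)[0]
    for i in active:
        for j in rng.choice(n, size=m, replace=False):
            if j != i:                                  # discard the rare self-selection
                links.add((min(int(i), int(j)), max(int(i), int(j))))
    return links

# example: a single snapshot for N = 1000 agents
a = sample_activities(1000)
G_t = ad_snapshot(a)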
In Ref. [28] it was shown that several empirical networks display broad distributions of node activities, with functional shapes close to power-laws for F (a), with exponents between 2 and 3. The aggregation of the activity-driven temporal network over a time-window of length T yields moreover a static network with a long-tailed degree distribution of the form P T (k) ∼ F (k/T ) [28,29]. Indeed, the individuals with the highest activity potential tend to form a lot more connections than the others and behave as hubs, which are known to play a crucial role in spreading processes [START_REF] Pastor-Satorras | [END_REF].
Activity-driven network model with memory
A major shortcoming of the activity-driven model lies in the total absence of correlations between the connections built in successive time steps. It is therefore unable to reproduce a number of features observed in empirical data. An extension of the model tackles this issue by introducing a memory effect into the mechanism of link creation [30]. In the resulting activity-driven model with memory (ADM), each individual keeps track of the set of other individuals with whom there has been an interaction in the past. At each time step t we start as in the AD model with N disconnected individuals, and each individual i becomes active with probability a i . For each link created by an active individual i, the link goes with probability p = q i (t)/[q i (t) + 1] to one of the q i (t) individuals previously encountered by i, and with probability 1 -p towards a never encountered one. In this way, contacts with already encountered other individuals have a larger probability to be repeated and are reinforced. As a result, for a power-law distributed activity F (a), the degree distribution of the temporal network aggregated on a time window T becomes narrow, while the distribution of weights (defined as the number of interactions between two individuals) becomes broad [30].
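The memory-biased choice of the partner can be sketched as follows. This is a hypothetical helper, not code from Ref. [30]; the bookkeeping structure `known` and the fallback rule when every node has already been met are assumptions of the sketch.

import random
from collections import defaultdict

def adm_choose_partner(i, known, n, rng=random):
    # Memory-biased partner choice of the ADM model for an active node i.
    # known[i] is the set of nodes already contacted by i; with probability
    # q/(q+1) (q = number of distinct past partners) the link is redirected
    # towards one of them, otherwise towards a never-met node.
    q = len(known[i])
    if q > 0 and rng.random() < q / (q + 1.0):
        return rng.choice(tuple(known[i]))
    new_nodes = [j for j in range(n) if j != i and j not in known[i]]
    if not new_nodes:                                   # everybody already met
        return rng.choice([j for j in range(n) if j != i])
    return rng.choice(new_nodes)

# usage: after each created contact (i, j), update the memory of both nodes
known = defaultdict(set)
i, n = 0, 100
j = adm_choose_partner(i, known, n)
known[i].add(j); known[j].add(i)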
B. Empirical social networks
In addition to the simple models described above, which do not exhibit all the complexity of empirical data, we also consider two datasets gathered by the SocioPatterns collaboration [START_REF]Sociopatterns collaboration[END_REF], which describe close face-to-face contacts between individuals with a temporal resolution of 20 seconds in specific contexts (for further details, see Ref. [14]). We consider first a dataset describing the contacts between students of nine classes of a high school (Lycée Thiers, Marseilles, France), collected during 5 days in Dec. 2012 ("Thiers" dataset) [START_REF]Sociopatterns dataset: High school dynamic contact networks[END_REF][START_REF] Fournet | [END_REF]. We also use another dataset consisting of the temporal network of contacts between the participants of a conference (2009 Annual French Conference on Nosocomial Infections, Nice, France) during one day ("SFHH" dataset) [10]. The SFHH (conference) data correspond to a rather homogeneous contact network, while the Thiers (high school) population is structured in classes of similar sizes and presents contact patterns that are constrained by strict and repetitive school schedules. In Table I we provide a brief summary of the main properties of these two datasets.

III. MODELLING EPIDEMIC SPREAD IN TEMPORAL NETWORKS

A. Epidemic models and epidemic threshold

We consider the paradigmatic Susceptible-Infectious-Susceptible (SIS) and Susceptible-Infectious-Recovered (SIR) models to describe the spread of a disease in a fixed population of N individuals. In the SIS model, each individual belongs to one of the following compartments: healthy and susceptible (S) or diseased and infectious (I). A susceptible individual in contact with an infectious one becomes infectious at a given constant rate, while each infectious individual recovers at another constant rate. In the SIR case, infectious individuals enter the recovered (R) compartment and cannot become infectious anymore. We consider a discrete time modeling approach, in which the contacts between individuals are given by a temporal network encoded in a time-dependent adjacency matrix A_ij(t) taking value 1 if individuals i and j are in contact at time t, and 0 otherwise. At each time step, the probability that a susceptible individual i becomes infectious is thus given by
p_i = 1 − ∏_j [1 − λ A_ij(t) σ_j],
where λ is the infection probability, and σ j is the state of node j (σ j = 1 if node j is infectious and 0 otherwise). We define µ as the probability that an infectious individual recovers during a time step. The competition between the transmission and recovery mechanisms determines the epidemic threshold. Indeed, if λ is not large enough to compensate the recovery process (λ/µ smaller than a critical value), the epidemic outbreak will not affect a finite portion of the population, dying out rapidly. On the other hand, if λ/µ is large enough, the spread can lead in the SIS model to a non-equilibrium stationary state in which a finite fraction of the population is in the infectious state. For the SIR model, on the other hand, the epidemic threshold is determined by the fact that the fraction r ∞ = R ∞ /N of individuals in the recovered state at the end of the spread becomes finite for λ/µ larger than the threshold.
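A direct transcription of this discrete-time update rule reads as follows; this is a minimal sketch of ours, assuming the snapshots are supplied as dense 0/1 adjacency matrices (the variable names are not from the original study).

import numpy as np

def sis_step(adj_t, state, lam, mu, rng=None):
    # One synchronous SIS step on the snapshot adj_t (N x N, 0/1 entries).
    # state[j] = 1 if node j is infectious, 0 if susceptible.
    rng = rng or np.random.default_rng()
    n = len(state)
    # p_i = 1 - prod_j (1 - lam * A_ij(t) * sigma_j)
    p_inf = 1.0 - np.prod(1.0 - lam * adj_t * state[np.newaxis, :], axis=1)
    new_state = state.copy()
    susceptible = state == 0
    infectious = state == 1
    new_state[susceptible & (rng.random(n) < p_inf)] = 1   # S -> I
    new_state[infectious & (rng.random(n) < mu)] = 0        # I -> S
    return new_state

The SIR variant only differs in that recovering nodes are moved to an absorbing R compartment instead of back to S.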
In order to numerically determine the epidemic threshold of the SIS model, we adapt the method proposed in Refs. [34,35], which consists in measuring the lifetime and the coverage of realizations of spreading events, where the coverage is defined as the fraction of distinct nodes ever infected during the realization. Below the epidemic threshold, realizations have a finite lifetime and the coverage goes to 0 in the thermodynamic limit. Above threshold, the system in the thermodynamic limit has a finite probability to reach an endemic stationary state, with infinite lifetime and coverage going to 1, while realizations that do not reach the stationary state have a finite lifetime. The threshold is therefore found as the value of λ/µ where the average lifetime of non-endemic realizations diverges. For finite systems, one can operationally define an arbitrary maximum coverage C > 0 (for instance C = 0.5) above which a realization is considered endemic, and look for the peak in the average lifetime of non-endemic realizations as a function of λ/µ.
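The operational procedure can be sketched as follows, reusing the sis_step helper sketched above; the coverage cut-off and the stopping rule are the arbitrary illustration choices of this sketch.

import numpy as np

def sis_lifetime_and_coverage(snapshots, n, lam, mu, c_max=0.5, seed=0, rng=None):
    # Run one SIS realization; return (lifetime, coverage, endemic_flag).
    # A realization is declared endemic as soon as its coverage exceeds c_max.
    rng = rng or np.random.default_rng()
    state = np.zeros(n, dtype=int)
    state[seed] = 1
    ever_infected = {seed}
    t = 0
    for adj_t in snapshots:
        t += 1
        state = sis_step(adj_t, state, lam, mu, rng)
        ever_infected.update(np.nonzero(state)[0].tolist())
        if len(ever_infected) / n >= c_max:
            return t, len(ever_infected) / n, True     # endemic realization
        if state.sum() == 0:
            return t, len(ever_infected) / n, False    # extinction: finite lifetime
    return t, len(ever_infected) / n, False

# the effective threshold is then located at the value of lam/mu that maximizes
# the average lifetime of the non-endemic realizations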
In the SIR model the lifetime of any realization is finite. We thus evaluate the threshold as the location of the peak of the relative variance of the fraction r ∞ of recovered individuals at the end of the process [36], i.e.,
σ_r = √(⟨r_∞²⟩ − ⟨r_∞⟩²) / ⟨r_∞⟩. (1)
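In practice this estimator is computed from an ensemble of independent runs, for instance with the small helper below (our own, matching Eq. (1) as written above):

import numpy as np

def relative_variance(r_inf_samples):
    # Relative variance of the final epidemic size r_inf over many SIR runs;
    # its peak as a function of lambda/mu locates the effective threshold.
    r = np.asarray(r_inf_samples, dtype=float)
    return np.sqrt(np.mean(r**2) - np.mean(r)**2) / np.mean(r)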
B. Modeling risk perception
To model risk perception, we consider the approach proposed in Ref. [24] for static interaction networks. In this framework, each individual i is assumed to be aware of the fraction of his/her neighbors who are infectious at each time step. This awareness leads the individual to take precautionary measures that decrease his/her probability to become infectious upon contact. This decrease is modeled by a reduction of the transmission probability by an exponential factor: at each time step, the probability of a susceptible node i in contact with an infectious one to become infectious depends on the neighborhood of i and is given by λ_i(t) = λ_0 exp(−J n_i(t)/k_i), where k_i is the number of neighbors of i, n_i(t) the number of these neighbors that are in the infectious state at time t, and J is a parameter tuning the degree of awareness or amount of precautionary measures taken by individuals.
Static networks of interactions are however only a first approximation, and real networks of contacts between individuals evolve on multiple timescales [15]. We therefore consider in the present work, more realistically, that the set of neighbors of each individual i changes over time. We thus need to extend the previous concept of neighborhood awareness to take into account the history of the contacts of each individual and his/her previous encounters with infectious individuals. We consider that longer contacts with infectious individuals should have a stronger influence on a susceptible individual's awareness, and that the overall effect on any individual depends on the ratio of the time spent in contact with infectious individuals to the total time spent in contact with other individuals. Indeed, two individuals spending a given amount of time in contact with infectious individuals may react differently depending on whether these contacts represent a large fraction of their total number of contacts or not. We moreover argue that the awareness is influenced only by recent contacts, as having encountered ill individuals in a distant past is less likely to lead to a change of behaviour. To model this point in a simple way, we consider that each individual has a finite memory of length ∆T and that only contacts taking place in the time window [t − ∆T, t[, in which the present time t is excluded, are relevant.
We thus propose the following risk awareness change of behaviour: The probability for a susceptible individual i, in contact at time t with an infectious one, to become infectious, is given by
λ_i(t) = λ_0 exp(−α n_I^∆T(i)) (2)
where n_I^∆T(i) is the number of contacts with infectious individuals seen by the susceptible individual during the interval [t − ∆T, t[, divided by the total number of contacts counted by the individual during the same time window (repeated contacts between the same individuals are also counted). α is a parameter gauging the strength of the awareness, and the case α = 0 corresponds to the pure SIS process, in which λ_i(t) = λ_0 for all individuals and at all times.
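The bookkeeping required by Eq. (2) can be implemented with a sliding window per individual, for instance as in the hypothetical sketch below (the class name and interface are ours):

import numpy as np
from collections import deque

class AwarenessMemory:
    # Keeps, for one individual, the contacts of the last dT time steps and
    # returns the awareness-reduced infection probability of Eq. (2).
    def __init__(self, dT):
        self.window = deque(maxlen=dT)   # one (contacts, infectious_contacts) pair per step

    def record_step(self, n_contacts, n_infectious_contacts):
        # called at the end of each time step, so that the current step is
        # excluded when the probability is evaluated at time t
        self.window.append((n_contacts, n_infectious_contacts))

    def infection_probability(self, lam0, alpha):
        total = sum(c for c, _ in self.window)
        if total == 0:
            return lam0                  # no recorded contact: no reduction
        n_i = sum(k for _, k in self.window) / total
        return lam0 * np.exp(-alpha * n_i)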
IV. EPIDEMIC SPREADING ON SYNTHETIC NETWORKS
A. SIS dynamics
Analytical approach
On a synthetic temporal network, an infectious individual can propagate the disease only when he/she is in contact with a susceptible. As a result, the spreading results from an interplay between the recovery time scale 1/µ, the propagation probability λ conditioned on the existence of a contact and the multiple time scales of the network as emerging from the distribution of nodes' activity F (a). Analogously to what is done for heterogeneous static networks [START_REF] Barrat | Dynamical processes on complex networks[END_REF][START_REF] Pastor-Satorras | [END_REF], it is possible to describe the spread at a mean-field level by grouping nodes in activity classes: all nodes with the same activity a are in this approximation considered equivalent [28]. The resulting equation for the evolution of the number of infectious nodes in the class of nodes with activity a in the original AD model has been derived in Ref. [28] and reads
I_a^{t+1} = I_a^t − µ I_a^t + λ a S_a^t ∫ (I_{a′}^t / N) da′ + λ S_a^t ∫ (a′ I_{a′}^t / N) da′, (3)
where I_a and S_a are the number of infectious and susceptible nodes with activity a, verifying N_a = S_a + I_a.
From this equation one can show, by means of a linear stability analysis, that there is an endemic non-zero steady state if and only if (⟨a⟩ + √⟨a²⟩) λ/µ > 1 [28]. Noticing that ⟨a⟩ + √⟨a²⟩ may be regarded as the highest statistically significant activity rate, the interpretation of this equation becomes clear: the epidemic can propagate to the whole network when the smallest time scale of relevance for the infection process is smaller than the time scale of recovery.
Let us now consider the introduction of risk awareness in the SIS dynamics on AD networks. In general, we can write for a susceptible with activity a
n_I^∆T(a) = [ Σ_{i=1}^{∆T} ( a ∫ (I_{a′}^{t−i} / N) da′ + ∫ (a′ I_{a′}^{t−i} / N) da′ ) ] / [ (a + ⟨a⟩) ∆T ], (4)
where the denominator accounts for the average number of contacts of an individual with activity a in ∆T time steps. In the steady state, where the quantities I a become independent of t, the dependence on ∆T in Eq. ( 4) vanishes, since both the average time in contact with infectious individuals and the average total time in contact are proportional to the time window width. Introducing this expression into Eq. ( 2), we obtain
λ_a = λ_0 exp[ −α ( a ∫ (I_{a′} / N) da′ + ∫ (a′ I_{a′} / N) da′ ) / ( a + ⟨a⟩ ) ], (5)
which can be inserted into Eq. (3). Setting µ = 1 without loss of generality, we obtain the steady state solution
ρ_a = λ_a (a ρ + θ) / [ 1 + λ_a (a ρ + θ) ], (6)
where ρ_a = I_a / N_a and we have defined
ρ = ∫ da F(a) ρ_a, (7)
θ = ∫ da a F(a) ρ_a. (8)
Introducing Eqs. (5) and (6) into Eqs. (7) and (8), and expanding at second order in ρ and θ, we obtain after some computations the epidemic threshold
λ_c = 1 / ( ⟨a⟩ + √⟨a²⟩ ). (9)
Moreover, setting λ_0 = λ_c (1 + δ) and expanding at order 1 in δ we obtain
ρ = 2δ / ( A α + B ), (10)
where
A = λ_c ⟨ ( a³/√⟨a²⟩ + 3 a √⟨a²⟩ + ⟨a²⟩ + 3 a² ) / ( a + ⟨a⟩ ) ⟩, (11)
B = λ_c² ( ⟨a³⟩/√⟨a²⟩ + 3 ⟨a⟩ √⟨a²⟩ + 4 ⟨a²⟩ ).
This indicates that, at the mean-field level, the epidemic threshold is not affected by the awareness. Nevertheless, the density of infectious individuals in the vicinity of the threshold is reduced as the awareness strength α grows.
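The stationary mean-field equations (5)-(8) can also be solved numerically by a simple fixed-point iteration over activity classes. The sketch below is our own; the discretization of F(a) and the parameter values are illustration assumptions, and the update reduces to Eq. (6) when µ = 1.

import numpy as np

def mean_field_stationary(a, F, lam0, alpha, mu=1.0, tol=1e-10, max_iter=200000):
    # a: activity values of the classes; F: their probabilities (summing to 1).
    rho_a = np.full_like(a, 0.01)                 # small initial seed
    mean_a = np.sum(a * F)
    for _ in range(max_iter):
        rho = np.sum(F * rho_a)                   # Eq. (7)
        theta = np.sum(a * F * rho_a)             # Eq. (8)
        lam_a = lam0 * np.exp(-alpha * (a * rho + theta) / (a + mean_a))   # Eq. (5)
        x = lam_a * (a * rho + theta) / mu
        new_rho_a = x / (1.0 + x)                 # Eq. (6), generalized to mu != 1
        if np.max(np.abs(new_rho_a - rho_a)) < tol:
            rho_a = new_rho_a
            break
        rho_a = new_rho_a
    return rho_a, np.sum(F * rho_a)

# example: power-law activities, 50% above the threshold (Eq. (9) rescaled by mu)
a = np.logspace(-3, 0, 400)
F = a**-2.0
F /= F.sum()
mu = 0.015
lam_c = mu / (np.sum(a * F) + np.sqrt(np.sum(a**2 * F)))
rho_a, rho = mean_field_stationary(a, F, lam0=1.5 * lam_c, alpha=10.0, mu=mu)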
In the case of activity driven networks with memory (ADM), no analytical approach is available for the SIS dynamics, even in the absence of awareness. The numerical investigation carried out in Ref. [37] has shown that the memory mechanism, which leads to the repetition of some contacts, reinforcing some links and yielding a broad distribution of weights, has a strong effect in the SIS model. Indeed, the repeating links help the reinfection of nodes that have already spread the disease and make the system more vulnerable to epidemics. As a result, the epidemic threshold is reduced with respect to the memory-less (AD) case. For the SIS dynamics with awareness on ADM networks, we will now resort to numerical simulations.
Numerical simulations
In order to inspect in detail the effect of risk awareness on the SIS epidemic process, we perform extensive numerical simulations. Following Refs. [28,37], we consider a distribution of nodes' activities of the form F(a) ∝ a^(−γ) for a ∈ [ε, 1], where ε is a lower activity cut-off introduced to avoid divergences at small activity values. In all simulations we set ε = 10⁻³ and γ = 2. We consider networks up to a size N = 10⁵ and a SIS process starting with a fraction I_0/N = 0.01 of infectious nodes chosen at random in the population. In order to take into account the connectivity of the instantaneous networks, we use as a control parameter the quantity β/µ, where β = 2⟨a⟩λ_0 is the per capita rate of infection [28]. Notice that the average degree of an instantaneous network is ⟨k_t⟩ = 2⟨a⟩ [29]. With this definition, the critical endemic phase corresponds to
β/µ ≥ 2⟨a⟩ / ( ⟨a⟩ + √⟨a²⟩ ). (12)
In Fig. 1 we first explore the effect of the strength of risk awareness, as measured by the parameter α, in the case ∆T = ∞, i.e., when each agent is influenced by the whole history of his/her past contacts, a situation in which awareness effects should be maximal. We plot the steady state average fraction of infectious nodes ρ = ∫ da ρ_a F(a) as a function of β/µ for three different values of α, and evaluate the position of the effective epidemic threshold, as measured by the peak of the average lifetime of non-endemic realizations, see Sec. III A. Figures 1c) and d) indicate that the effect of awareness in the model (α > 0), with respect to the pure SIS model (α = 0), is to reduce the fraction ρ of infectious individuals for all values of β/µ, and Figures 1a) and b) seem to indicate in addition a shift of the effective epidemic threshold to larger values. This effect is more pronounced for the ADM than for the AD networks. As this shift of the epidemic threshold is in contradiction, at least for the AD case, with the mean-field analysis of the previous paragraphs, we investigate this issue in more detail in Fig. 2, where we show, both for the pure SIS model (α = 0) and for a positive value of α, the average lifetime of non-endemic realizations for various system sizes. Strong finite-size effects are observed, especially for the model with awareness (α > 0). Fitting the values of the effective threshold (the position of the lifetime peak) with a law of the form (β/µ)_N = (β/µ)_∞ + A N^(−ν), typical of finite-size scaling analysis [START_REF] Cardy | Finite Size Scaling[END_REF], leads to a threshold in the thermodynamic limit of (β/µ)_∞ = 0.37(3) for the pure SIS model on AD networks, (β/µ)_∞ = 0.34(2) for AD with α = 10 (SIS model with awareness), (β/µ)_∞ = 0.29(3) for ADM with α = 0 (pure SIS model) and (β/µ)_∞ = 0.29(2) for ADM with α = 10. We notice here that the extrapolations for α = 0 are less accurate and thus have larger associated errors. Nevertheless, with the evidence at hand, we can conclude that, within error bars, the risk perception has no effect on the epidemic threshold in the thermodynamic limit, in agreement with the result from Eq. (12), which gives a theoretical threshold (β/µ)_c = 0.366 for the AD case. It is however noteworthy that the effective epidemic threshold measured in finite systems can be quite strongly affected by the awareness mechanism, even for quite large systems, and in a particularly dramatic way for ADM networks.
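The extrapolation to the thermodynamic limit can be reproduced with a standard least-squares fit of the scaling law; in the sketch below the threshold values are placeholder numbers, not the measurements of this study.

import numpy as np
from scipy.optimize import curve_fit

sizes = np.array([1e3, 3e3, 1e4, 3e4, 1e5])            # system sizes N
thresholds = np.array([0.52, 0.47, 0.44, 0.42, 0.41])  # effective (beta/mu)_N, placeholders

def fss_law(N, thr_inf, A, nu):
    # finite-size scaling ansatz (beta/mu)_N = (beta/mu)_inf + A * N**(-nu)
    return thr_inf + A * N**(-nu)

popt, pcov = curve_fit(fss_law, sizes, thresholds, p0=(0.37, 1.0, 0.3))
thr_inf, amp, nu = popt
errs = np.sqrt(np.diag(pcov))
print(f"(beta/mu)_inf = {thr_inf:.3f} +/- {errs[0]:.3f}, nu = {nu:.2f}")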
We finally explore in Fig. 3 the effect of a varying memory length ∆T, at fixed risk awareness strength α. In both AD and ADM networks, an increasing awareness temporal window shifts the effective epidemic threshold towards larger values, up to a maximum given by ∆T = ∞, when the whole system history is available. For the ADM networks, this effect is less clear because of the changing height of the maximum of the lifespan when increasing ∆T. For AD networks, this result is apparently at odds with the mean-field analysis in which ∆T is irrelevant in the stationary state. We should notice, however, that for ∆T → ∞, the critical point is unchanged in the thermodynamic limit with respect to the pure SIS dynamics. Given that for ∆T → ∞ the effects of awareness are the strongest, we expect that a finite ∆T will not be able to change the threshold in the infinite network limit. We can thus attribute the shifts observed to pure finite size effects. Note that this effect is also seen in homogeneous AD networks with uniform activity a (data not shown), an observation that we can explain as follows: when ∆T is small, the ratio of contacts with infectious individuals n_I^∆T(i) recorded by an individual i can differ significantly from the overall ratio recorded in the whole network in the same time window, which is equal to ⟨n_I^∆T(i)⟩ = ρ (for a uniform activity). Mathematically, we have
⟨λ_i⟩ = λ_0 ⟨exp(−α n_I^∆T(i))⟩ ≥ λ_0 exp(−α ⟨n_I^∆T(i)⟩) = λ_0 exp(−α ρ) (13)
by convexity of the exponential function (Jensen's inequality). Thus, even if locally and temporarily some individuals perceive an overestimated prevalence of the epidemics and reduce their probability of being infected accordingly, on average the reduction in the transmission rate would be larger if the ensemble average were used instead of the temporal one, and thus the epidemic is better contained in the former case. As ∆T increases, the temporal average n_I^∆T(i) becomes closer to the ensemble average ρ and the effect of awareness increases. When ∆T is large enough compared to the time scale of variation of the network 1/a, the local time recording becomes equivalent to an ensemble average, and we recover the mean-field situation.
B. SIR dynamics
Analytical approach
Following an approach similar to the case of the SIS model, the SIR model has been studied at the heterogeneous mean field level in AD networks, in terms of a set of equations for the state of nodes with activity a, which takes the form [39]
I_a^{t+1} = I_a^t − µ I_a^t + λ a (N_a − I_a^t − R_a^t) ∫ (I_{a′}^t / N) da′ + λ (N_a − I_a^t − R_a^t) ∫ (a′ I_{a′}^t / N) da′, (14)
where N a is the total number of nodes with activity a, and I a and R a are the number of nodes with activity a in the infectious and recovered states, respectively. Again, a linear stability analysis shows the presence of a threshold, which takes the same form as in the SIS case:
β/µ ≥ 2⟨a⟩ / ( ⟨a⟩ + √⟨a²⟩ ). (15)
The same expression can be obtained by a different approach, based on the mapping of the SIR processes to bond percolation [40].
Since the SIR model lacks a steady state, we cannot apply in the general case the approach followed in the previous section. The effects of risk perception can however be treated theoretically for a homogeneous network (uniform activity) in the limit ∆T → ∞, which is defined by the effective infection probability
λ(t) = λ_0 exp( −(α/t) ∫_0^t ρ(τ) dτ ). (16)
Even this case is hard to tackle analytically, so that we consider instead a modified model defined by the infection probability
λ(t) = λ_0 exp( −α ∫_0^t ρ(τ) dτ ). (17)
In this definition the fraction of infectious individuals seen by an individual is no longer averaged over the memory length but rather accumulated over the memory timespan, so that we expect stronger effects of the risk perception with respect to Eq. (16), if any. The fraction of susceptibles s = S/N and the fraction of recovered r = R/N in the system obey the equations
ds/dt = −λ_0 ρ(t) s(t) e^(−α r(t)/µ), (18)
dr/dt = µ ρ(t), (19)
where in the first equation we have used the second equation to replace ∫_0^t ρ(τ) dτ in λ(t) by (r(t) − r(0))/µ (with the initial condition r(0) = 0).
Setting µ = 1 without loss of generality, the final average fraction of recovered individuals after the end of an outbreak is given by
r_∞ = 1 − s(0) exp[ −(λ_0/α) (1 − e^(−α r_∞)) ]. (20)
Close to the threshold, i.e., for r_∞ ≈ 0, performing an expansion up to second order and imposing the initial condition ρ(0) = 1 − s(0) = 0, we obtain the asymptotic solution
r_∞ ≃ 2 (λ_0 − 1) / [ λ_0 (α + λ_0) ], (21)
which leads to the critical infection rate λ_0 = 1. This means that, as for the SIS case, the risk perception does not affect the epidemic threshold at the mean-field level, at least for a homogeneous network. The only effect of awareness is a depression of the order parameter r_∞ with α, as observed also in the SIS case. The same conclusion is expected to hold for the original model of awareness, with an infection rate of the form of Eq. (16), as in this case the dynamics is affected to a lower extent. In analogy, for the general case of a heterogeneous AD network, with infection rate given by Eq. (2), we expect the effects of awareness on the epidemic threshold to be negligible at the mean-field level. On ADM networks, the numerical analysis of the SIR model carried out in Ref. [37] has revealed a picture opposite to the SIS case. In an SIR process, indeed, reinfection is not possible; as a result, repeating contacts are not useful for the diffusion of the infection. The spread is thus favoured by the more random patterns occurring in the memory-less (AD) case, which allows infectious nodes to contact a broader range of different individuals and find new susceptible ones. The epidemic threshold for SIR processes is hence higher in the ADM case than in the AD one [37].
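Coming back to the homogeneous mean-field result, Eq. (20) is transcendental but straightforward to solve numerically. The sketch below (our own, with arbitrary parameter values) compares the root with the near-threshold approximation of Eq. (21).

import numpy as np
from scipy.optimize import brentq

def final_size(lam0, alpha, s0=0.999):
    # Root of r = 1 - s0 * exp(-(lam0/alpha) * (1 - exp(-alpha r))), cf. Eq. (20), mu = 1.
    f = lambda r: r - 1.0 + s0 * np.exp(-(lam0 / alpha) * (1.0 - np.exp(-alpha * r)))
    return brentq(f, 1e-12, 1.0)

lam0, alpha = 1.2, 5.0          # above the critical value lam0 = 1
r_numerical = final_size(lam0, alpha)
r_threshold = 2.0 * (lam0 - 1.0) / (lam0 * (alpha + lam0))   # Eq. (21), valid only close to lam0 = 1
print(r_numerical, r_threshold)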
Numerical simulations
To study the effects of risk perception on the dynamics of a SIR spreading process in temporal networks we resort again to numerical simulations. In Fig. 4 we compare the effects of the risk perception mechanism given by Eq. (2) for AD and ADM networks. The spread starts with a fraction ρ_0 = I_0/N = 0.01 of infectious nodes chosen at random in the population and the activity distribution is the same as in the SIS case. In the present simulations the memory span ∆T is infinite and we compare the results obtained for two different values of the awareness strength α. We see that the effective epidemic threshold is increased for the ADM network, whereas it seems unchanged for the AD network, at a value of around β/µ = 0.35, in agreement with the theoretical prediction quoted in the previous section.
The SIR phase transition is rigorously defined for a vanishing initial density of infectious individuals, i.e., in the limit ρ(0) → 0 and s(0) → 1, as can be seen at the mean-field level in the derivation of Eq. (21). In Fig. 5 we explore the influence of the initial density ρ_0 = I_0/N of infectious individuals on the effect of awareness on AD networks. For large values of ρ_0, the awareness (α > 0) can significantly decrease the final epidemic size, as already observed in Fig. 4. This effect can be understood by the fact that, for large ρ_0, more individuals are aware already from the start of the spread and have therefore lower probabilities to be infected. At very small initial densities, on the other hand, r_∞ becomes independent of α. This is at odds with the result in Eq. (21), which however was obtained within an approximation that increases the effects of awareness. The milder form considered in Eq. (2) leads instead to an approximately unaltered threshold, and to a prevalence independent of α.
For ADM networks, Fig. 6 shows the variance of the order parameter for two different values of α. As in the SIS case, we see that an apparent shift of the effective epidemic threshold is obtained, but very strong finite size effects are present even at large size, especially for α > 0. The difference between the effective thresholds at α > 0 and α = 0 decreases as the system size increases, but remains quite large, making it difficult to reach a clear conclusion on the infinite size limit.
V. EPIDEMIC SPREADING ON EMPIRICAL SOCIAL NETWORKS
As neither AD nor ADM networks display all the complex multi-scale features of real contact networks, we now turn to numerical simulations of spreading processes with and without awareness on empirical temporal contact networks, using the datasets described in Sec. II B.
A. SIS dynamics
As we saw in Sec. IV A, the susceptibility defined to evaluate the epidemic threshold of the SIS process is subject to strong finite size effects. Since the empirical networks used in the present section are quite small, we choose to focus only on the main observable of physical interest, i.e., the average prevalence ρ in the steady state of the epidemics.
As we are interested in the influence of the structural properties of the network, we choose to skip the nights in the datasets describing the contacts between individuals, as obviously no social activity was recorded then, to avoid undesired extinction of the epidemic during those periods. In order to run simulations of the SIS spreading, we construct from the data arbitrarily long periodic networks, with the period being the recording duration (once the nights have been removed). For both networks we define the average instantaneous degree ⟨k⟩ = (1/T_data) Σ_t k_t, where the sum runs over all the time steps of the data and k_t is the average degree of the snapshot network at time t. We then define β/µ = λ⟨k⟩/µ as the control parameter of the epidemic. For each run, a random starting time step is chosen, and a single agent present in that time step, if there is any, is defined as the seed of the infection (otherwise a new starting time is chosen).
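Both operations are straightforward; assuming each snapshot is stored as a list of undirected links on n nodes (a representation we choose only for this sketch), a possible implementation is:

import itertools

def average_instantaneous_degree(snapshots, n):
    # <k> = (1 / T_data) * sum_t k_t, with k_t = 2 * (number of links at t) / n
    per_step = [2.0 * len(links) / n for links in snapshots]
    return sum(per_step) / len(per_step)

def periodic_snapshots(snapshots):
    # repeat the recorded sequence (nights removed upstream) to run arbitrarily
    # long SIS simulations on the empirical data
    return itertools.cycle(snapshots)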
In Fig. 7, we compare the curves of the prevalence ρ of the epidemic in the stationary state on both empirical networks, and for increasing values of the memory length ∆T. We can see that an important reduction of the prevalence occurs even for ∆T = 1. This is due to the presence of many contacts of duration longer than ∆T (contrary to the AD case): the awareness mechanism decreases the probability of contagion of all these contacts (and in particular of the contacts with very long duration, which have an important role in the propagation) as soon as ∆T ≥ 1, leading to a strong effect even in this case. At large values of the control parameter β/µ, the effect of the awareness is stronger for increasing values of the memory length ∆T, as was observed in Sec. IV A. At small values of β/µ, on the contrary, the awareness is optimal for a finite value of ∆T, and the knowledge of the whole contact history is not the best way to contain the epidemic. While a detailed investigation of this effect lies beyond the scope of our work, preliminary analysis (not shown) seems to indicate that it is linked to the periodicity introduced in the data through the repetition of the dataset.
B. SIR
In this section we study the impact of the awareness on the SIR spreading process running on the empirical networks. In particular, we study the effect of self-protection on the fraction of recovered individuals r_∞ in the final state, and on the effective threshold evaluated as the peak of the relative variance of r_∞ defined in Eq. (1). In Figs. 8 and 9 we plot σ_r and r_∞ for different memory lengths ∆T, for the SFHH conference and the Thiers high school data, respectively. We first notice that a notable effect appears already for ∆T = 1, similarly to the SIS process. However, we see that r_∞ is monotonically reduced as ∆T grows and that the effective threshold is shifted to higher values of β/µ, also monotonically. It is worth noticing that the timescale of the SIR process is much smaller than the one studied in the SIS case because the final state is an absorbing state free of infectious agents. The lifetime of the epidemic in this case is of the order of magnitude of the data duration, so that the periodicity introduced by the repetition of the dataset is not relevant anymore. Overall, we observe for both networks an important reduction of the outbreak size when people adopt a self-protecting behaviour, as well as a significant shift of the effective epidemic threshold.
VI. CONCLUSION
The implementation of immunization strategies to contain the propagation of epidemic outbreaks in social networks is a task of paramount importance. In this work, we have considered the effects of taking protective measures to avoid infection in the context of social temporal networks, a more faithful representation of the patterns of social contacts than the often considered static structures. In this context, we have implemented a model including awareness of the propagating disease in a temporal network, extending previous approaches defined for static frameworks. In our model, susceptible individuals have a local perception of the overall disease prevalence, measured as the fraction of their contacts, within a time window of width ∆T, that occurred with infectious individuals. An increased level of awareness induces a reduction in the probability that a susceptible individual contracts the disease via a contact with an infectious individual.
To explore the effects of disease awareness we have considered the paradigmatic SIS and SIR spreading models on both synthetic temporal networks, based on the activity-driven (AD) model paradigm, and empirical face-to-face contact networks collected by the SocioPatterns collaboration. In the case of network models, we consider the original AD model and a variation, the AD model with memory (ADM), in which a memory kernel mimics some of the non-Markovian effects observed in real social networks.
In the case of synthetic networks, analytical and numerical results hint that in AD networks without memory, the epidemic threshold in both the SIS and SIR models is not changed by the presence of awareness, while the epidemic prevalence is diminished for increasing values of the parameter α gauging the strength of awareness. In the case of the ADM model (temporal network with memory effects), on the other hand, awareness seems to be able to shift the threshold to an increased value, but very strong finite size effects are present: our results are compatible with an absence of change of the epidemic threshold in the infinite size limit, while, as for the AD case, the epidemic prevalence is decreased.
In the case of empirical contact networks, we observe in all cases a strong reduction of the prevalence for different values of α and ∆T, and an apparent shift of the effective epidemic threshold. These empirical networks differ from the network models in two crucial respects. On the one hand, they have a relatively small size. Given that important finite size effects are observed in the models, especially in the one with memory effects, one might also expect stronger effective shifts in such populations of limited size. On the other hand, AD and ADM networks lack numerous realistic features observed in real social systems. On AD and ADM networks, contacts are established with random nodes (even in the ADM case), so that the perception of the density of infectious individuals by any node is quite homogeneous, at least under the hypothesis of a sufficiently large number of contacts recorded (i.e., at large enough times, for a∆T ≫ 1). This is not the case for the empirical networks, which exhibit complex patterns such as community structures, as well as broad distributions of contact and inter-contact durations, specific time-scales (e.g., lunch breaks), correlated activity patterns, etc. [41]. This rich topological and temporal structure can lead to strong heterogeneities in the local perception of the disease. In this respect, it would be interesting to investigate the effect of awareness in more realistic temporal network models.
Notably, the awareness mechanism, even if only local and not assuming any global knowledge of the unfolding of the epidemics, leads to a strong decrease of the prevalence and to shifts in the effective epidemic threshold even at quite large size, in systems as diverse as simple models and empirical data. Moreover, some features of empirical contact networks, such as the broad distribution of contact durations, seem to enhance this effect even for short-term memory awareness. Overall, our results indicate that it would be important to take into account awareness effects as much as possible in data-driven simulations of epidemic spread, to study the relative role of the complex properties of contact networks on these effects, and we hope this will stimulate more research into this crucial topic.
FIG. 1. Effect of the strength of risk awareness on the SIS spreading on AD and ADM networks with ∆T = ∞. (a): average lifetime of non-endemic runs for AD networks, (b): average lifetime of non-endemic runs for ADM networks, (c): steady state fraction of infectious for AD, (d): steady state fraction of infectious for ADM. Vertical lines in subplots (a) and (b) indicate the position of the maximum of the average lifetime. Model parameters: µ = 0.015, γ = 2, ε = 10⁻³, ∆T = ∞ and network size N = 10⁵. Results are averaged over 1000 realizations.

FIG. 2. Analysis of finite-size effects. We plot the average lifetime of non-endemic realizations of the SIS process, for different system sizes and 2 different values of α. (a): ADM networks and α = 0. (b): ADM networks with α = 10. (c): AD networks. Vertical lines indicate the position of the maximum of the average lifetime. Model parameters: µ = 0.015, γ = 2, ε = 10⁻³ and ∆T = ∞. Results are averaged over 1000 realizations.

FIG. 3. Effect of the local risk perception with increasing memory span ∆T for the SIS spreading on AD and ADM networks. (top): AD network. (bottom): ADM network. Vertical lines indicate the position of the maximum of the average lifetime. Model parameters: α = 10, µ = 0.015, γ = 2, ε = 10⁻³ and network size N = 10⁴. Results are averaged over 1000 realizations.

FIG. 4. Effect of the local risk perception on the SIR spreading on AD networks and ADM networks. We plot r_∞ and σ_r/σ_r^max.

FIG. 5. Effect of the initial density of infectious on the SIR model on AD networks for different values of the awareness strength α and the initial density of infectious individuals ρ_0. Model parameters: ∆T = ∞, µ = 0.015, γ = 2, ε = 10⁻³ and network size N = 10⁵. Results are averaged over 1000 realizations.

FIG. 6.

FIG. 7. Steady state fraction of infectious for the SIS process on both empirical networks, for 2 values of α and different values of ∆T. Model parameters: µ = 0.001 for Thiers and µ = 0.005 for SFHH. Results are averaged over 1000 realizations.

FIG. 8. Effect of the risk perception for different values of ∆T on the SIR spreading on the SFHH network. (top): normalized standard deviation σ_r/σ_r^max. (bottom): order parameter r_∞. Model parameters: µ = 0.005, α = 200. Results are averaged over 10⁴ realizations.

FIG. 9. Effect of the risk perception for different values of ∆T on the SIR spreading on the Thiers network. (top): normalized standard deviation σ_r/σ_r^max. (bottom): order parameter r_∞. Model parameters: µ = 0.001, α = 200. Results are averaged over 10⁴ realizations.
TABLE I. Some properties of the SocioPatterns datasets under consideration: N, number of different individuals engaged in interactions; T, total duration of the contact sequence, in units of the elementary time interval t_0 = 20 seconds; p, average number of individuals interacting at each time step; ⟨∆t⟩, average duration of a contact; ⟨k⟩ and ⟨s⟩, average degree and average strength of the nodes in the network aggregated over the whole time sequence.

Dataset   N     T      p      ⟨∆t⟩   ⟨k⟩     ⟨s⟩
Thiers    180   14026  5.67   2.28   24.66   500.5
SFHH      403   3801   26.14  2.69   47.47   348.7
[1] Note that with such a definition, an agent may both receive and emit a link to the same other agent. However, we consider here an unweighted and undirected graph, so that in such a case a single link is considered. Moreover, in the limit of large N, the probability of such an event goes to 0.
ACKNOWLEDGMENTS

R.P.-S. acknowledges financial support from the Spanish Government's MINECO, under projects FIS2013-47282-C2-2 and FIS2016-76830-C2-1-P, and from ICREA Academia, funded by the Generalitat de Catalunya regional authorities.